WO2006035677A1 - Image processing method and image processing apparatus - Google Patents
Image processing method and image processing apparatus
- Publication number
- WO2006035677A1 (PCT/JP2005/017517)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- superimposed
- straight line
- pattern
- position information
- Prior art date
Links
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/32—Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
- H04N1/32101—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
- H04N1/32144—Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/387—Composing, repositioning or otherwise geometrically modifying originals
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/40—Picture signal circuits
Definitions
- The present invention relates to an image processing method and an image processing apparatus capable of detecting, on the side that receives a printed form, whether the form has been falsified.
- The present invention has been made in view of the above-described problems of the prior art, and an object of the present invention is to provide a new and improved image processing method and image processing apparatus capable of highly accurate alteration detection.
- a detection unit that detects a superimposed position of a superimposed pattern from a pattern superimposed image in which an identifiable pattern is superimposed on the original image;
- An image processing apparatus comprising: a corrected image creating unit that creates a corrected image of the pattern superimposed image based on the detected superposition position information.
- the printed image is corrected based on the position information of the signal embedded at the time of printing. Therefore, it is possible to associate the positions of these images with high accuracy, and to perform high-performance alteration detection.
- The pattern superimposed image may be the image itself that is output with the identifiable pattern superimposed, or an image obtained by superimposing the identifiable pattern, printing the result as a printed material (such as a form), and capturing it with an input device such as a scanner.
- the identifiable pattern may be superimposed on the entire original image at a known interval.
- the identifiable pattern can be superimposed on the entire original image at equal intervals in both the vertical and horizontal directions.
- the following method can be employed.
- a set of position information arranged in the horizontal direction is approximated by a straight line
- a set of position information arranged in the vertical direction is approximated by a straight line.
- An intersection point between the horizontal approximate line and the vertical approximate line may be calculated, and the intersection point may be detected as a superimposed position of the pattern superimposed on the original image.
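As an illustrative sketch of this step (the text does not prescribe a fitting method; total least squares via SVD is an assumption here, and the sample positions are hypothetical):

```python
import numpy as np

def fit_line(points):
    """Least-squares fit of a line a*x + b*y = c to 2-D points.

    Returns (a, b, c); handles near-vertical lines as easily as
    horizontal ones, unlike a y = m*x + k parameterization.
    """
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    # The direction of smallest variance is the line normal (total least squares).
    _, _, vt = np.linalg.svd(pts - centroid)
    a, b = vt[-1]                       # unit normal vector
    c = a * centroid[0] + b * centroid[1]
    return a, b, c

def intersect(l1, l2):
    """Intersection of two lines given as (a, b, c) with a*x + b*y = c."""
    A = np.array([l1[:2], l2[:2]], dtype=float)
    c = np.array([l1[2], l2[2]], dtype=float)
    return np.linalg.solve(A, c)

# Hypothetical detected positions: one horizontal row and one vertical
# column of pattern positions, slightly perturbed as after scanning.
row = [(0.0, 10.1), (12.0, 9.9), (24.0, 10.0), (36.0, 10.05)]
col = [(12.1, 0.0), (11.9, 10.0), (12.0, 20.0), (12.05, 30.0)]

corner = intersect(fit_line(row), fit_line(col))
print(corner)  # close to (12, 10): a superimposed position of the pattern
```

Fitting before intersecting averages out per-dot detection noise, which is what gives the claimed high-accuracy position association.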
- First, a set of position information arranged in the horizontal direction is approximated by a straight line, and a set of position information arranged in the vertical direction is approximated by a straight line.
- Next, the slope of each horizontal approximate straight line is replaced with the average of its slope and those of other nearby (for example, adjacent) horizontal lines, and the slope of each vertical approximate straight line is replaced with the average of its slope and those of other nearby (for example, adjacent) vertical lines. The intersections of the horizontal and vertical approximate lines are then calculated and detected as the superimposed positions of the pattern superimposed on the original image.
- Alternatively, a set of position information arranged in the horizontal direction is approximated by a straight line, and a set of position information arranged in the vertical direction is approximated by a straight line.
- The vertical position of each horizontal approximate straight line is replaced with the average of its vertical position and those of other nearby (for example, adjacent) horizontal lines, and the horizontal position of each vertical approximate straight line is replaced with the average of its horizontal position and those of other nearby (for example, adjacent) vertical lines. The intersections of the horizontal and vertical approximate lines are then calculated and detected as the superimposed positions of the pattern superimposed on the original image.
- The corrected image creation unit may create the corrected image by deforming the pattern superimposed image so that the superimposed positions detected by the detection unit are aligned at known intervals in both the vertical and horizontal directions.
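One way such a deformation could be sketched is a per-cell bilinear warp with nearest-neighbor sampling; the interpolation method, cell corners, and image below are assumptions made for illustration:

```python
import numpy as np

def correct_cell(img, corners, cell_w, cell_h):
    """Resample one lattice cell of a scanned image into an ideal
    cell_w x cell_h rectangle, given the four detected corner positions.

    corners: ((x00, y00), (x10, y10), (x01, y01), (x11, y11)) for the
    top-left, top-right, bottom-left, bottom-right detected intersections.
    A full implementation would repeat this for every cell of the lattice.
    """
    (x00, y00), (x10, y10), (x01, y01), (x11, y11) = corners
    out = np.zeros((cell_h, cell_w), dtype=img.dtype)
    for j in range(cell_h):
        for i in range(cell_w):
            u, v = i / (cell_w - 1), j / (cell_h - 1)
            # Bilinear interpolation of the source position inside the cell.
            x = (1-u)*(1-v)*x00 + u*(1-v)*x10 + (1-u)*v*x01 + u*v*x11
            y = (1-u)*(1-v)*y00 + u*(1-v)*y10 + (1-u)*v*y01 + u*v*y11
            out[j, i] = img[int(round(y)), int(round(x))]  # nearest neighbor
    return out

# Hypothetical 20x20 scanned image; resample one detected cell.
img = np.arange(400, dtype=np.uint8).reshape(20, 20)
cell = correct_cell(img, ((2, 2), (12, 2), (2, 12), (12, 12)), 11, 11)
print(cell.shape)  # (11, 11)
```

Because every cell is pinned to detected pattern positions, local printer and scanner distortions are corrected, not just a global rotation or scale.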
- A tampering determination unit that determines whether the image has been tampered with may be further provided.
- The image features of an arbitrary area of the original image and the position information of that area are recorded as visible or invisible information in the original image, and the detection unit extracts the image features and the position information from the pattern superimposed image.
- The tampering determination unit compares the extracted image feature with the image feature at the same position in the deformed pattern superimposed image, and if there is a difference between them, it may determine that tampering has occurred.
- Alternatively, the image feature of an arbitrary region of the original image and the position information of that region are stored separately from the original image, and the tampering determination unit compares the stored image feature with the image feature at the same position in the deformed pattern superimposed image; if there is a difference between them, it determines that tampering has occurred.
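A minimal sketch of this comparison, assuming a simple block-mean image feature and a fixed difference threshold (both are stand-ins; the text leaves the actual feature and criterion open):

```python
import numpy as np

def block_features(img, block=8):
    """Mean intensity of each block x block tile - a simple stand-in for
    the 'image features' recorded for each area of the original image."""
    h, w = img.shape
    return img[:h - h % block, :w - w % block] \
        .reshape(h // block, block, w // block, block).mean(axis=(1, 3))

def find_tampered_blocks(stored_feats, corrected_img, block=8, tol=10.0):
    """Compare stored features with the same positions in the corrected
    image; blocks differing by more than tol are flagged as tampered."""
    diff = np.abs(stored_feats - block_features(corrected_img, block))
    return np.argwhere(diff > tol)   # (row, col) indices of suspect blocks

original = np.full((32, 32), 255, dtype=float)   # hypothetical blank form
feats = block_features(original)

scanned = original.copy()
scanned[8:16, 8:16] = 0                          # simulated alteration
print(find_tampered_blocks(feats, scanned))      # flags block (1, 1)
```

The comparison only works because the corrected image has been deformed back onto the original lattice first; without that step, block positions would not line up.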
- According to another aspect of the present invention, there is provided an image processing method including a detection step of detecting a superimposed position of a superimposed pattern from a pattern superimposed image in which an identifiable pattern is superimposed on the original image, and a corrected image creation step of creating a corrected image of the pattern superimposed image based on the detected superimposed position information.
- The pattern superimposed image may be the image itself that is output with the identifiable pattern superimposed, or an image obtained by superimposing the identifiable pattern, printing the result as a printed material (such as a form), and capturing it with an input device such as a scanner.
- the image processing method of the present invention can be applied as follows.
- the identifiable pattern can be, for example, superimposed on the entire original image at a known interval.
- the identifiable pattern can be superimposed on the entire original image at equal intervals in both the vertical and horizontal directions.
- the following method can be used to detect the superimposed position in the detection step.
- a set of position information arranged in the horizontal direction is approximated by a straight line
- a set of position information arranged in the vertical direction is approximated by a straight line.
- the intersection point between the horizontal approximate line and the vertical approximate line may be calculated, and the intersection point may be detected as a superimposed position of the pattern superimposed on the original image.
- First, a set of position information arranged in the horizontal direction is approximated by a straight line, and a set of position information arranged in the vertical direction is approximated by a straight line.
- Next, the slope of each horizontal approximate straight line is replaced with the average of its slope and those of other nearby (for example, adjacent) horizontal lines, and the slope of each vertical approximate straight line is replaced with the average of its slope and those of other nearby (for example, adjacent) vertical lines. The intersections of the horizontal and vertical approximate lines are then calculated and detected as the superimposed positions of the pattern superimposed on the original image.
- Alternatively, a set of position information arranged in the horizontal direction is approximated by a straight line, and a set of position information arranged in the vertical direction is approximated by a straight line.
- The vertical position of each horizontal approximate straight line is replaced with the average of its vertical position and those of other nearby (for example, adjacent) horizontal lines, and the horizontal position of each vertical approximate straight line is replaced with the average of its horizontal position and those of other nearby (for example, adjacent) vertical lines. The intersections of the horizontal and vertical approximate lines are then calculated and detected as the superimposed positions of the pattern superimposed on the original image.
- In the corrected image creation step, the corrected image may be created by deforming the pattern superimposed image so that the superimposed positions detected in the detection step are aligned at known intervals in both the vertical and horizontal directions.
- A tampering determination step for determining whether the image has been tampered with may be further included.
- the image features of an arbitrary area of the original image and the position information of the area are recorded as visible or invisible information in the original image.
- In the detection step, the image features and the position information are extracted, and in the tampering determination step, the extracted image feature is compared with the image feature at the same position in the deformed pattern superimposed image; if there is a difference between them, it may be determined that tampering has occurred.
- Alternatively, the image features of an arbitrary area of the original image and the position information of that area are stored separately from the original image, and in the tampering determination step the stored image feature is compared with the image feature at the same position in the deformed pattern superimposed image; if there is a difference between them, it is determined that tampering has occurred.
- a program for causing a computer to function as the image processing apparatus and a computer-readable recording medium on which the program is recorded.
- the program may be written in any programming language.
- As the recording medium, for example, a CD-ROM, a DVD-ROM, a flexible disk, or any other medium generally used as a recording medium capable of recording a program, as well as recording media that will come into use in the future, can be adopted.
- According to the present invention, an image captured from a printed document is corrected based on the position information of the signal embedded at the time of printing. Therefore, it is possible to associate the positions of these images with high accuracy and to perform high-performance tamper detection.
- FIG. 1 is an explanatory diagram showing a configuration of a watermark information embedding device and a watermark information detection device.
- FIG. 2 is a flowchart showing a processing flow of a watermarked document image composition unit 13.
- FIG. 3 is an explanatory diagram showing an example of a watermark signal; (1) shows unit A and (2) shows unit B.
- FIG. 4 is a cross-sectional view of the change in pixel value in FIG. 3(1) as viewed from the direction of arctan(1/3).
- FIG. 5 is an explanatory diagram showing an example of a watermark signal; (3) shows unit C and (4) shows unit D.
- FIG. 6 is an explanatory diagram of the background image; (1) shows the case where unit E is defined as the background unit and is arranged without gaps as the background of the document image, (2) shows an example in which unit A is embedded in the background image of (1), and (3) shows an example in which unit B is embedded in the background image of (1).
- FIG. 7 is an explanatory diagram showing an example of a symbol embedding method in a document image.
- FIG. 8 is a flowchart showing a method for embedding confidential information in a document image.
- FIG. 9 is an explanatory diagram showing an example of a method for embedding secret information in a document image.
- FIG. 10 is an explanatory diagram showing an example of a watermarked document image.
- FIG. 11 is an explanatory view showing a part of FIG. 10 in an enlarged manner.
- FIG. 12 is a flowchart showing a process flow of the watermark detection unit 32 in the first embodiment.
- FIG. 13 is an explanatory diagram of a signal detection filtering step (step S310) in the first embodiment.
- FIG. 14 is an explanatory diagram of a signal position search step (step S320) in the first embodiment.
- FIG. 15 is an explanatory diagram of a signal boundary determining step (step S340) in the first embodiment.
- FIG. 16 is an explanatory diagram showing an example of an information restoration step (step S305).
- FIG. 17 is an explanatory diagram showing a flow of processing of a data code restoration method.
- FIG. 18 is an explanatory diagram showing an example of a data code restoration method.
- FIG. 19 is an explanatory diagram showing an example of a data code restoration method.
- FIG. 20 is an explanatory diagram showing a configuration of a watermark information embedding device and a watermark information detection device in a fifth embodiment.
- FIG. 21 is a flowchart showing a process flow of the falsification determination unit 33.
- FIG. 22 is an explanatory diagram of the feature comparison step (step S450).
- FIG. 23 is an explanatory diagram of the feature comparison step (step S450).
- FIG. 26 is an explanatory diagram showing a configuration of a transparent image output unit.
- FIG. 27 is an explanatory diagram showing a configuration of a transparent document input unit.
- FIG. 29 is a flowchart showing the operation of the input image transformation unit.
- FIG. 30 is an explanatory diagram showing an example of a detected signal unit position.
- FIG. 31 is an explanatory diagram showing an example of detecting an approximate line.
- FIG. 32 is an explanatory diagram showing an example of a result of linear approximation.
- FIG. 33 is an explanatory diagram showing tilt correction.
- FIG. 34 is an explanatory diagram showing position correction.
- FIG. 35 is an explanatory diagram showing an example of the intersection of straight lines.
- FIG. 36 is an explanatory diagram showing an example of the correspondence between the positions of the input image and the corrected image.
- FIG. 37 is an explanatory diagram showing an example of a method for associating an input image with a corrected image.
- FIG. 1 is an explanatory diagram showing the configuration of the watermark information embedding device and the watermark information detection device according to the present embodiment.
- the watermark information embedding device 10 is a device that forms a watermarked document image based on the document image and confidential information embedded in the document and prints it on a paper medium. As shown in FIG. 1, the watermark information embedding device 10 is composed of a watermarked document image composition unit 13 and an output device 14. Document image 15 is an image created by a document creation tool or the like. Secret information 16 is information (character strings, images, audio data) embedded in paper media in a format other than characters.
- The white pixel region in the document image 15 is a portion where nothing is printed, and the black pixel region is a portion where ink is applied.
- In the following, the description assumes that printing is performed on white paper with black ink (single color); however, the present invention is not limited to this and can be applied similarly when printing is performed in color (multicolor).
- the watermarked document image composition unit 13 creates a watermarked document image by superimposing the document image 15 and the secret information 16.
- The watermarked document image composition unit 13 converts the confidential information 16 into an N-ary code (N is 2 or more) and assigns a signal to each symbol of the codeword.
- the signal represents a wave with an arbitrary direction and wavelength by arranging dots in a rectangular area of arbitrary size, and a symbol is assigned to the direction and wavelength of the wave.
- a watermarked document image is one in which these signals are placed on the image according to certain rules.
- the output device 14 is an output device such as a printer, and prints a watermarked document image on a paper medium.
- the watermarked document image composition unit 13 may be realized as one function in the printer driver.
- the printed document 20 is printed with the confidential information 16 embedded in the original document image 15, and is physically stored and managed.
- The watermark information detection device 30 is a device that takes in a document printed on a paper medium as an image and restores the embedded secret information 16. As shown in FIG. 1, the watermark information detection device 30 includes an input device 31 and a watermark detection unit 32.
- the input device 31 is an input device such as a scanner, and takes in the document 20 printed on paper as a multi-value gray image into a computer.
- The watermark detection unit 32 performs a filtering process on the input image and detects the embedded signals. Symbols are restored from the detected signals, and the embedded secret information 16 is extracted.
- Document image 15 is data including font information and layout information, and is created by document creation software. Document image 15 can be created for each page as an image of the document printed on paper. This document image 15 is a black and white binary image; white pixels (pixels with a value of 1) on the image are the background, and black pixels (pixels with a value of 0) are character areas (areas where ink is applied).
- the secret information 16 is various data such as characters, sounds and images.
- the watermarked document image composition unit 13 superimposes this secret information 16 as the background of the document image 15.
- FIG. 2 is a flowchart showing a processing flow of the watermarked document image composition unit 13.
- First, the secret information 16 is converted into an N-ary code (step S101).
- the data may be encoded as it is, or the encrypted data may be encoded.
- Next, a watermark signal is assigned to each symbol of the codeword (step S102).
- The watermark signal represents a wave having an arbitrary wavelength and direction by the arrangement of dots (black pixels).
- the watermark signal will be further described later.
- a signal unit corresponding to the bit string of the encoded data is arranged on the document image 15 (step S103).
- FIG. 3 is an explanatory diagram showing an example of a watermark signal.
- Let the width and height of the watermark signal be Sw and Sh, respectively.
- the unit of length is the number of pixels.
- A rectangle having width Sw and height Sh is treated as one unit of the signal and is referred to as a "signal unit".
- In Fig. 3(1), the distance between dots is dense in the direction of arctan(3) (arctan is the inverse function of tan) with respect to the horizontal axis, and the wave propagation direction is arctan(-1/3). This signal unit is hereinafter referred to as unit A.
- In Fig. 3(2), the distance between the dots is dense in the arctan(-3) direction with respect to the horizontal axis, and the wave propagation direction is arctan(1/3). This signal unit is hereinafter referred to as unit B.
- FIG. 4 is a cross-sectional view of the change in the pixel value of Fig. 3(1) as viewed from the direction of arctan(1/3). The portions where dots are arranged become antinodes of the minimum value of the wave (points where the amplitude is maximum), and the portions where dots are not arranged become antinodes of the maximum value of the wave.
- Here, symbol 0 is assigned to the watermark signal expressed by unit A, and symbol 1 is assigned to the watermark signal expressed by unit B. These are also called symbol units.
- In Fig. 5(3), the distance between dots is dense in the direction of arctan(1/3) with respect to the horizontal axis, and the wave propagation direction is arctan(-3). This signal unit is referred to as unit C.
- In Fig. 5(4), the distance between the dots is dense in the arctan(-1/3) direction with respect to the horizontal axis, and the wave propagation direction is arctan(3).
- this signal unit is referred to as unit D.
- There is also a unit in which the distance between dots is dense in the direction of arctan(1) with respect to the horizontal axis and the propagation direction of the wave is arctan(-1); equivalently, the distance between dots can be considered dense in the direction of arctan(-1), with the wave propagation direction arctan(1).
- this signal unit is referred to as unit E.
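As an illustration of how such units might be generated, the sketch below places dots (black pixels, value 0) near the minima of a cosine wave with a chosen direction and wavelength. This thresholded-cosine placement, the unit size, the wavelength, and the duty value are all assumptions for illustration, not the patent's exact dot geometry (which is shown in Figs. 3 and 5):

```python
import numpy as np

def make_signal_unit(sw, sh, angle, wavelength, duty=0.3):
    """Sketch of a signal unit: an sw x sh dot pattern approximating a
    wave propagating at `angle` (radians) with the given wavelength.
    Pixels near the wave minima become dots (0); the rest stay white (1).
    """
    y, x = np.mgrid[0:sh, 0:sw]
    phase = (x * np.cos(angle) + y * np.sin(angle)) * 2 * np.pi / wavelength
    # Dot wherever the cosine dips below a duty-controlled threshold.
    return np.where(np.cos(phase) < -(1 - duty), 0, 1).astype(np.uint8)

# Units with the propagation directions described in the text.
unit_a = make_signal_unit(12, 12, np.arctan(-1/3), 6.0)
unit_b = make_signal_unit(12, 12, np.arctan(1/3), 6.0)
print(unit_a.shape)  # (12, 12) binary dot pattern
```

Because the two units differ only in wave direction, a direction-tuned filter (such as the Gabor filters used later for detection) can tell them apart even under print-and-scan noise.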
- In step S102 shown in FIG. 2, when the secret information is encoded with a quaternary code, the symbols of the codeword can be assigned as follows:
- symbol 0 to unit A
- symbol 1 to unit B
- symbol 2 to unit C
- symbol 3 to unit D.
- Unit E is defined as a background unit (a signal unit to which no symbol is assigned) and is arranged without gaps as the background of document image 15. When a symbol unit (unit A or unit B) is embedded in document image 15, the background unit (unit E) at the embedding position is replaced with that symbol unit.
- FIG. 6 (1) is an explanatory diagram showing a case where unit E is defined as a background unit, which is arranged without gaps and used as the background of the document image 15.
- Fig. 6(2) shows an example in which unit A is embedded in the background image of Fig. 6(1), and Fig. 6(3) shows an example in which unit B is embedded in the background image of Fig. 6(1).
- Here, a method of using the background units as the background of the document image 15 is described; however, only the symbol units may be arranged as the background of the document image 15.
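The replacement scheme described above can be sketched as follows; the 4 × 4 dot patterns here are hypothetical stand-ins for the real units A, B, and E:

```python
import numpy as np

# Hypothetical 4x4 stand-ins for the dot patterns of units A, B, and E.
UNIT_A = np.array([[1,0,1,1],[1,1,1,0],[0,1,1,1],[1,1,0,1]], dtype=np.uint8)
UNIT_B = UNIT_A.T.copy()
UNIT_E = np.array([[1,1,0,1],[1,0,1,1],[0,1,1,1],[1,1,1,0]], dtype=np.uint8)

def build_watermark(bits, pw, ph):
    """Tile unit E as the background, then replace cells row by row with
    unit A (bit 0) or unit B (bit 1) - a sketch of the replacement step."""
    sh, sw = UNIT_E.shape
    img = np.tile(UNIT_E, (ph, pw))
    for idx, bit in enumerate(bits):
        r, c = divmod(idx, pw)
        unit = UNIT_B if bit else UNIT_A
        img[r*sh:(r+1)*sh, c*sw:(c+1)*sw] = unit
    return img

# The "0101" example from the text, in a 2x2 unit grid.
wm = build_watermark([0, 1, 0, 1], pw=2, ph=2)
print(wm.shape)  # (8, 8): a 2x2 grid of 4x4 units
```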
- FIG. 7 is an explanatory diagram showing an example of a symbol embedding method in the document image 15.
- The case where the bit string "0101" is embedded will be described.
- The same symbol unit is repeatedly embedded. This is to prevent the signal from becoming undetectable at detection time when characters in the document overlap an embedded symbol unit.
- The number of symbol unit repetitions and their arrangement pattern (hereinafter referred to as the unit pattern) are arbitrary. For example, the number of repetitions can be 4 (four symbol units exist in one unit pattern) as shown in Fig. 7(1), 2 (two symbol units exist in one unit pattern) as shown in Fig. 7(2), or even 1 (only one symbol unit exists in one unit pattern).
- In Figs. 7(1) and (2), one symbol is given to one symbol unit; however, a symbol may instead be given to the arrangement pattern of symbol units, as shown in Fig. 7(3).
- How many bits of information can be embedded in one page depends on the size of the signal unit, the size of the unit pattern, and the size of the document image.
- The number of signals embedded in the horizontal and vertical directions of the document image may be treated as known on the detection side, or may be calculated back from the size of the image captured by the input device and the size of the signal unit.
- the number of bits that can be embedded in one page is called the “number of embedded bits”.
- Here, the number of embedded bits is Pw × Ph.
- FIG. 8 is a flowchart showing a method for embedding the secret information 16 in the document image 15.
- First, the secret information 16 is converted into an N-ary code (step S201). This is the same as step S101 in FIG. 2. Below, the encoded data is referred to as a data code, and the data code expressed by a combination of unit patterns is referred to as a data code unit Du.
- step S202 based on the code length of the data code (here, the number of bits) and the number of embedded bits, how many times the data code unit can be repeatedly embedded in one image is calculated (step S202).
- the code length data of the data code is inserted into the first row of the unit pattern matrix.
- the code length of the data code may be fixed and the code length data may not be embedded.
- The number Dn of embedded data code units is calculated by the following equation, where the data code length is Cn. In the unit pattern matrix, Dn data code units are embedded, together with a unit pattern sequence corresponding to the first Rn bits (the remainder) of the data code. However, the remaining Rn bits do not necessarily have to be embedded.
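The equation itself is not reproduced in this text. A plausible reconstruction, consistent with the Fig. 9 example (a 9 × 11 matrix with code length 12 gives 7 data code units and a 6-bit remainder), reserves the first row for the code length data and divides the remaining cells by Cn:

```python
def data_code_layout(pw, ph, cn):
    """Plausible reconstruction of the elided formula: the first row of
    the Pw x Ph unit pattern matrix holds the code length data, so the
    remaining (Ph - 1) * Pw cells hold Dn whole data code units plus Rn
    leftover bits."""
    cells = (ph - 1) * pw
    dn, rn = divmod(cells, cn)
    return dn, rn

# Matches the Fig. 9 example in the text: 9 x 11 matrix, code length 12.
dn, rn = data_code_layout(pw=9, ph=11, cn=12)
print(dn, rn)  # 7 6
```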
- In Fig. 9, the unit pattern matrix size is 9 × 11 (11 rows and 9 columns), and the data code length is 12 (in the figure, the numbers 0 to 11 represent the individual codewords of the data code).
- code length data is embedded in the first row of the unit pattern matrix (step S203).
- Here, the code length is represented by 9-bit data and embedded only once; however, if the unit pattern matrix width Pw is sufficiently large, the code length data can be embedded repeatedly in the same manner as the data code.
- the data code unit is repeatedly embedded in the second and subsequent rows of the unit pattern matrix (step S204).
- The data code is embedded repeatedly in the row direction, starting from its MSB (most significant bit) or LSB (least significant bit).
- The example in Fig. 9 shows the data code unit embedded 7 times, with the first 6 bits of the data code embedded as the remainder.
- the data embedding method may be embedded so as to be continuous in the row direction as shown in FIG. 9 or may be embedded so as to be continuous in the column direction.
- the watermarked document image composition unit 13 superimposes the document image 15 and the secret information 16.
- the value of each pixel in the watermarked document image is calculated by ANDing the corresponding pixel values of the document image 15 and the secret information 16. In other words, if either document image 15 or confidential information 16 is 0 (black), the pixel value of the watermarked document image is 0 (black). Otherwise, it is 1 (white).
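This AND superposition can be shown in a few lines:

```python
import numpy as np

# Binary images: 1 = white, 0 = black, as in the text.
document = np.array([[1, 1, 0, 1],
                     [1, 0, 1, 1]], dtype=np.uint8)
watermark = np.array([[1, 0, 1, 1],
                      [1, 1, 1, 0]], dtype=np.uint8)

# A pixel is black (0) if it is black in either image - a logical AND.
watermarked = document & watermark
print(watermarked)
```

Because black wins over white, the document's characters always remain legible on top of the watermark dot pattern.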
- FIG. 10 is an explanatory diagram showing an example of a watermarked document image.
- Fig. 11 is an explanatory diagram showing a part of Fig. 10 on an enlarged scale.
- the unit pattern shown in Fig. 7 (1) is used.
- the watermarked document image is output by the output device 14.
- FIG. 12 is a flowchart showing the process flow of the watermark detection unit 32.
- a watermarked document image is input to a memory of a computer or the like by an input device 31 such as a scanner (step S301).
- This image is referred to as an input image.
- the input image is a multi-valued image, and is described below as a gray image with 256 gradations.
- The resolution of the input image (the resolution at which it is read by the input device 31) may be different from that of the watermarked document image created by the watermark information embedding device 10 described above; in the following description, however, it is assumed that the resolution is the same as that of the image created by the watermark information embedding device 10. The case where one unit pattern consists of one symbol unit is also described.
- In step S310, the entire input image is filtered, and filter output values are calculated and compared.
- the filter output value is calculated as follows:
- gw and gh are filter sizes, which are the same size as the signal unit embedded in the information embedding device 10 above.
- the filter output value at an arbitrary position in the input image is calculated by convolution between the filter and the image.
- a Gabor filter there are a real filter and an imaginary filter (an imaginary filter is a filter whose phase is shifted by half a wavelength from the real filter), and the mean square value of these filters is used as the filter output value.
- the filter output value F (A, X, y) is calculated by the following formula.
- The filter output values calculated as described above are compared at each pixel, and the maximum value F(x, y) is stored as the filter output value matrix.
- At the same time, the number of the signal unit corresponding to the filter giving the maximum value is stored as the filter type matrix (Fig. 13). Here, the number of filters is two; however, even when the number of filters is larger, the maximum of the plural filter output values and the number of the signal unit corresponding to the filter giving it are stored in the same manner.
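A sketch of this filtering step at a single position; the Gabor kernel construction and the root-mean-square combination of the real and imaginary responses are assumptions made for illustration, since the text's formula for F(A, x, y) is not reproduced here:

```python
import numpy as np

def gabor_pair(size, angle, wavelength):
    """Real and imaginary (half-wavelength shifted) Gabor-like kernels."""
    y, x = np.mgrid[0:size, 0:size] - (size - 1) / 2.0
    phase = (x*np.cos(angle) + y*np.sin(angle)) * 2*np.pi / wavelength
    g = np.exp(-(x**2 + y**2) / (2 * (size/4.0)**2))   # Gaussian envelope
    return g*np.cos(phase), g*np.sin(phase)

def filter_output(patch, kernels):
    """Filter output for one signal unit's kernel pair at one position,
    taken here as the root mean square of the real and imaginary
    convolution responses."""
    responses = [(patch * k).sum() for k in kernels]
    return np.sqrt(sum(r*r for r in responses) / len(responses))

size, wavelength = 9, 4.0
ka = gabor_pair(size, np.arctan(-1/3), wavelength)   # tuned to "unit A"
kb = gabor_pair(size, np.arctan(1/3), wavelength)    # tuned to "unit B"

# A patch oscillating in unit A's direction responds to ka, not kb.
y, x = np.mgrid[0:size, 0:size] - (size - 1) / 2.0
patch = np.cos((x*np.cos(np.arctan(-1/3)) + y*np.sin(np.arctan(-1/3)))
               * 2*np.pi / wavelength)

fa, fb = filter_output(patch, ka), filter_output(patch, kb)
unit_number = 0 if fa > fb else 1   # entry for the filter type matrix
print(fa > fb, unit_number)         # unit A's filter responds more strongly
```

Repeating this at every pixel and keeping, per pixel, the largest value and the winning filter's unit number yields the filter output value matrix and the filter type matrix.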
- In step S320, the position of the signal unit is determined using the filter output value matrix obtained in step S310. Specifically, assuming that the size of the signal unit is Sh × Sw, a signal position search template is first created in which the vertical spacing of the grid points is Sh, the horizontal spacing is Sw, and the number of grid points is Nh × Nw (Fig. 14). The size of the template is thus Th (= Sh × Nh) × Tw (= Sw × Nw); Nh and Nw should be set to values suitable for searching for the signal unit positions.
- Next, the filter output value matrix is divided into regions of the template size. In each divided region, the template is moved pixel by pixel on the filter output value matrix within a range that does not overlap the signal units of adjacent regions (Sw/2 in the horizontal direction, Sh/2 in the vertical direction), the sum V of the filter output value matrix values F(x, y) at the template grid points is obtained using the following formula (Fig. 14), and the grid points of the template position with the largest sum are taken as the positions of the signal units in that region.
- The above example is the case where the filter output value is obtained for all pixels in step S310; however, it is also possible to perform filtering only on pixels at a certain interval. For example, when filtering is performed on every second pixel, the spacing of the grid points in the signal position search template may be set to 1/2 of the above.
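The template search can be sketched as follows, on a hypothetical filter output value matrix whose strong responses lie on a lattice:

```python
import numpy as np

def search_signal_positions(F, sh, sw, nh, nw, max_shift):
    """Slide an nh x nw grid of points spaced (sh, sw) over the filter
    output value matrix F and return the offset whose grid-point sum V
    is largest, together with that sum - a sketch of step S320."""
    best_v, best = -np.inf, None
    for dy in range(max_shift):
        for dx in range(max_shift):
            ys = dy + sh * np.arange(nh)
            xs = dx + sw * np.arange(nw)
            v = F[np.ix_(ys, xs)].sum()    # sum V over the template grid
            if v > best_v:
                best_v, best = v, (dy, dx)
    return best, best_v

# Hypothetical filter output matrix with strong responses on a lattice
# spaced 6 px, starting at offset (2, 3).
F = np.zeros((20, 20))
F[2::6, 3::6] = 1.0

offset, v = search_signal_positions(F, sh=6, sw=6, nh=3, nw=3, max_shift=6)
print(offset, v)  # (2, 3) with V = 9.0
```

Summing over many grid points at once makes the search robust: a single unit obscured by a character barely changes V, while the correct lattice offset still dominates.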
- step S330 the signal unit is determined to be A or B by referring to the value of the filter type matrix at the signal unit position determined in step S320 (signal unit number corresponding to the filter).
- the determination result of the determined signal unit is stored as a symbol matrix.
- In step S320, the entire image is filtered regardless of whether a signal unit is embedded, so it is necessary to determine where signal units were actually embedded. Therefore, in step S340, the signal boundary is found by searching the symbol matrix for the pattern that was determined in advance when embedding the signal units.
- as an example, the number of signal units A is counted in the horizontal direction of the symbol matrix determined in step S330, and, searching vertically from the center, the positions with the largest counts of signal unit A are taken as the upper and lower ends of the signal boundary.
- signal unit A in the symbol matrix is represented by "black" (value "0"). Therefore, by counting the number of black pixels in the symbol matrix, the number of signal units A can be counted, and the upper and lower ends of the signal boundary can be obtained from the frequency distribution. The left and right ends can be obtained in the same way, changing only the direction in which the units A are counted.
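The boundary search by black-pixel counting can be sketched as follows (illustrative Python; the function name is an assumption, and ties between equally good rows or columns are resolved here by taking the first and last maximum):

```python
def find_signal_bounds(symbol_matrix):
    """Sketch of the signal boundary search: unit A is stored as 0
    ("black"). Count unit-A cells per row and per column, then take
    the rows / columns with the largest counts as the boundary."""
    row_counts = [row.count(0) for row in symbol_matrix]
    col_counts = [col.count(0) for col in zip(*symbol_matrix)]
    top = row_counts.index(max(row_counts))
    bottom = len(row_counts) - 1 - row_counts[::-1].index(max(row_counts))
    left = col_counts.index(max(col_counts))
    right = len(col_counts) - 1 - col_counts[::-1].index(max(col_counts))
    return top, bottom, left, right
```

This realizes the frequency-distribution idea: the rows and columns that are densest in unit A mark the edges of the embedded pattern.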
- the signal boundary search is not limited to the above method; any pattern that can be searched for from the symbol matrix need only be agreed on between the embedding side and the detection side.
- in step S305 information is restored from the part of the symbol matrix corresponding to the inside of the signal boundary.
- the unit pattern matrix is equivalent to the symbol matrix.
- FIG. 16 is an explanatory diagram showing an example of information restoration. The steps of information restoration are as follows.
- FIG. 17 to 19 are explanatory diagrams showing an example of a data code restoration method.
- the restoration method is basically the reverse process of Fig. 8.
- the code length data portion is extracted from the first row of the unit pattern matrix to obtain the code length of the embedded data code (step S401).
- in step S402 the number Dn of data code units and the remainder Rn are calculated based on the size of the unit pattern matrix and the code length of the data code obtained in step S401.
- the data code units are extracted from the second and subsequent rows of the unit pattern matrix by reversing the method of step S203 (step S403).
- the embedded data code is reconstructed by performing the bit certainty calculation on the data code units extracted in step S403 (step S404).
- the bit confidence calculation is described below.
- Du(1, 1) to Du(12, 1) are extracted first, starting from the second row, first column of the unit pattern matrix, followed by Du(1, 2) to Du(12, 2), and so on. The remainder is Du(1, 8) to Du(6, 8).
- the bit certainty calculation determines the value of each symbol of the data code by taking a majority vote over the corresponding elements of the data code units. As a result, even if correct signal detection is not possible for some units in the data code (e.g., a bit inversion error due to overlap with a character area or contamination on the paper), the data code can still be restored.
- for example, the first bit of the data code is determined by majority vote over the signal detection results of Du(1, 1), Du(1, 2), ..., Du(1, 8): when 1s are in the majority it is determined as 1, and when 0s are in the majority it is determined as 0.
- similarly, the second bit of the data code is determined by majority vote over the signal detection results of Du(2, 1), Du(2, 2), ..., Du(2, 8), and the 12th bit over Du(12, 1), Du(12, 2), ..., Du(12, 7) (Du(12, 8) does not exist, so only units up to Du(12, 7) are used).
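The majority vote over the repeated data code units can be sketched as follows (illustrative Python; tie votes are resolved to 0 here, a choice the text does not specify):

```python
def majority_vote_bits(units):
    """Bit certainty calculation: `units` is a list of detected data
    code units, each a list of 0/1 symbols (possibly with detection
    errors, and the last unit possibly shorter). Each output bit is
    the majority value over the units that carry that bit position."""
    n_bits = max(len(u) for u in units)
    code = []
    for i in range(n_bits):
        votes = [u[i] for u in units if i < len(u)]  # skip short remainder units
        code.append(1 if sum(votes) * 2 > len(votes) else 0)
    return code
```

A single bit-inversion error in one unit is outvoted by the correct copies, which is why overlap with characters or paper contamination does not prevent restoration.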
- each signal unit position used here is the one obtained in the signal position search step (step S320).
- in the second embodiment, the feature values of the document image (the image data before watermark embedding) and the input image (an image obtained by scanning a printed document with the watermark embedded) are compared to judge whether the contents of the printed document have been altered.
- FIG. 20 is a processing configuration diagram in the second embodiment.
- a tampering determination unit 33 is provided.
- the falsification determination unit 33 determines whether the contents of the printed document have been falsified by comparing the feature values related to the document image embedded in advance with the feature values related to the input image.
- FIG. 21 shows a processing flow of the falsification determination unit 33.
- FIG. 22 shows a processing explanatory diagram of the falsification determination unit 33.
- in step S410, as in the first embodiment, the watermarked document image read by the input device 31 such as a scanner is input to the memory of a computer or the like (this image is referred to as the input image).
- in step S420 the feature amount related to the document image, embedded in advance, is extracted from the data decoded in the information decoding step (step S305) of the watermark detection unit 32.
- the document image feature amount is, as shown in Fig. 22, a reduced binary image of the watermarked document image, with the upper left coordinate of the area in which the signal units are embedded as the reference point (reference point P in Fig. 22). Since the document image on the embedding side is a binary image, the reduction can be performed using a known technique.
- at embedding time, the amount of data is compressed using a binary image compression method such as MR or MMR, and the result is embedded using the signal units assigned to each symbol.
- in step S430 the input image is binarized.
- information relating to the binarization threshold, embedded in advance, is extracted from the data decoded in the information decoding step (step S305) of the watermark detection unit 32.
- the binarization threshold is determined from the extracted information, and the input image is binarized with it.
- this binarization threshold information may be encoded using an arbitrary method, such as an error correction code, and embedded using the signal units assigned to each symbol.
- one example of such threshold information is the number of black pixels contained in the document image at embedding time.
- in this case, the binarization threshold is set so that the number of black pixels in the binary image obtained by binarizing the input image (normally the same size as the document image) matches the number of black pixels in the document image at embedding time.
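Matching the black-pixel count can be sketched as follows (illustrative Python over a flat list of grayscale values; an exhaustive scan of all 256 candidate thresholds is one simple way to realize the matching, not necessarily the embodiment's):

```python
def threshold_for_black_count(gray_pixels, target_black):
    """Choose the binarization threshold so that the number of black
    pixels (value < threshold) in the scanned image best matches the
    black-pixel count recorded at embedding time."""
    best_t, best_diff = 0, None
    for t in range(256):
        black = sum(1 for p in gray_pixels if p < t)
        diff = abs(black - target_black)
        if best_diff is None or diff < best_diff:
            best_t, best_diff = t, diff
    return best_t
```

Because the threshold is tied to a count rather than a fixed value, the detection side compensates for scanner brightness differences and reproduces nearly the same binary image as the embedding side.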
- alternatively, the binarization threshold may be determined using a known technique and the input image binarized with it.
- in this way, the watermark detection side can create data that is almost identical to the binary image of the document image at embedding time.
- in step S440 a feature quantity related to the input image is created from the input image, the signal unit positions obtained in the signal position search step (step S320) of the watermark detection unit 32, and the signal boundary obtained in the signal boundary determination step (step S340). Specifically, with the upper left coordinate of the signal boundary as a reference point (reference point Q in Fig. 22), the image is divided into units of several signal units each, and a reduced image of the input image corresponding to the coordinate positions in each unit is obtained. In Fig. 22, as an example of a region divided in this way, a rectangle with upper left coordinate (xs, ys) and lower right coordinate (xe, ye) is shown. The same reduction method as on the embedding side should be used.
- alternatively, with the upper left coordinate of the signal boundary as a reference point (reference point Q in Fig. 23), a plurality of signal units may be grouped into one unit, a corrected image of the input image corresponding to the coordinate positions in that unit created, and the corrected image then reduced.
- in step S450 the feature quantities obtained in the document image feature extraction step (step S420) and the input image feature creation step (step S440) are compared, and where they do not match, the printed document at the corresponding position is determined to have been falsified. Specifically, the reduced input image for each signal unit obtained in step S440 is compared with the corresponding region of the reduced binary image obtained in step S420 (the rectangle with reference point P in Fig. 22 as origin, upper left vertex (xs, ys), and lower right vertex (xe, ye)), and tampering is determined. For example, if the number of pixels whose luminance values differ between the two compared images is equal to or greater than a predetermined threshold, the printed document corresponding to that signal unit can be determined to have been altered.
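The pixel-difference test can be sketched as follows (illustrative Python over flat pixel lists; the function name and threshold convention are assumptions):

```python
def is_tampered(region_a, region_b, threshold):
    """Compare the reduced binary image restored from the embedded
    feature data with the corresponding reduced region of the scanned
    input image: if the number of differing pixels reaches `threshold`,
    the region is judged to have been tampered with."""
    diff = sum(1 for a, b in zip(region_a, region_b) if a != b)
    return diff >= threshold
```

The threshold absorbs small differences caused by printing and scanning noise, so only genuine content changes trigger a tampering verdict.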
- a reduced binary image is used as the feature quantity, but instead, coordinate information and text data written in a printed document may be used.
- in this case, the input image data corresponding to the coordinate information is referred to, the image is recognized using a known OCR technology, and tampering is judged by comparing the recognition result with the text data.
- as described above, in the second embodiment it is possible to detect whether the contents of a printed document have been tampered with by comparing the feature amount relating to the document image embedded in advance with the feature amount of the input image obtained by scanning the printed document in which the watermark is embedded, using as a reference the signal units determined with the signal position search template. As in the first embodiment, the position of each signal unit can be obtained accurately, so the feature quantities can easily be compared using those positions, and alteration of the printed document can be determined.
- FIG. 24 shows the signal unit positions detected in the first and second embodiments.
- in Fig. 24, the signal unit positions are detected arranged almost uniformly over the entire input image (watermarked document image).
- in practice, however, the detected signal unit positions can be locally unevenly distributed due to rotation of the input image and local distortion of the paper.
- the signal unit positions on the filter output value matrix are correlated with positions on the input image simply by multiplying each coordinate by the filtering interval (for example, by two when filtering is performed every two pixels).
- this embodiment includes a watermark image output unit 100 shown in FIG. 26 and a watermark image input unit 200 shown in FIG. 27. These are described in order below.
- FIG. 26 is a configuration diagram of the watermark image output unit 100.
- the watermark image output unit 100 is a functional unit that performs processing with the image 110 as input, and includes a feature image generation unit 120 and a watermark information synthesis unit 130. The watermark image output unit 100 outputs a watermarked output image 140.
- the image 110 is an image of document data created by document creation software or the like.
- the feature image generation unit 120 is a functional unit that generates image feature data to be embedded as a watermark.
- the image feature data can be generated in the same manner as the watermarked document image composition unit 13 of the first and second embodiments, for example.
- the watermark information synthesis unit 130 is a functional unit that embeds image feature data in the image 110 as information.
- the embedding of the watermark information can be performed, for example, in the same manner as by the watermarked document image composition unit 13 of the first and second embodiments.
- the output image 140 is a watermarked image.
- FIG. 27 is a configuration diagram of the watermark image input unit 200.
- the watermark image input unit 200 is a functional unit that extracts watermark information using the input image 210 as input and corrects the input image.
- a watermark information extraction unit 220, an input image deformation unit 230, and a tampering determination unit 240 are provided.
- the input image 210 is the output image 140 itself, or an image captured with an input device such as a scanner from the paper on which the output image 140 is printed.
- the watermark information extraction unit 220 is a functional unit that extracts the watermark information from the input image and restores the feature image 250. The extraction of watermark information can be performed, for example, in the same manner as by the watermark detection unit 32 of the first and second embodiments.
- the input image deformation unit 230 is a functional unit that corrects the distortion of the input image and generates the corrected image 260.
- the tampering determination unit 240 is a functional unit that detects the difference area as tampering by superimposing the feature image 250 and the correction image 260.
- the present embodiment is configured as described above.
- the description will focus on the parts that are different from the second embodiment. It is assumed that the output image 140 output from the watermark image output unit 100 is printed and then converted into an image by a scanner and passed to the watermark image input unit 200.
- FIG. 28 shows an example of a watermarked image.
- the upper left coordinate of the area in which the signal units of the watermarked document image are embedded is set as the reference coordinate (0, 0).
- a falsification detection area is set in the image 110 so that only falsification of an important area in the image 110 can be detected.
- the upper left coordinate of the falsification detection area, with the reference coordinate as the origin, is (Ax, Ay)
- the width of the falsification detection area is Aw
- the height is Ah.
- the reference coordinate is the upper left coordinate of the watermark area.
- the feature image is an image obtained by cutting out the falsification detection area from the image 110, or an image obtained by reducing it.
- the watermark information synthesis unit 130 synthesizes the falsification detection area information (for example, its upper left coordinates, width, and height) and the feature image with the image 110 as watermark information.
- the watermark information extraction unit 220 extracts this information from the input image 210 and restores the feature image 250 embedded by the watermark image output unit 100. This operation is the same as in the first and second embodiments.
- FIG. 29 is a flowchart of the input image deformation unit 230. The following explanation is based on this flowchart.
- FIG. 30 shows the signal unit position detected in the first and second embodiments on the input image (watermarked document image) 210.
- U(1, y) to U(Wu, y) are signal units in the same row (reference numeral 710 in FIG. 30), and U(x, 1) to U(x, Hu) are signal units in the same column.
- U(1, y) to U(Wu, y) and U(x, 1) to U(x, Hu) are in some cases not aligned on the same straight line but slightly shifted vertically and horizontally.
- the input image is filtered every N pixels vertically and horizontally (N is a natural number). This filtering is performed in the same manner as the signal position search step (step S320) in the first embodiment.
- the position of each signal unit on the input image is the value obtained by simply multiplying its coordinate value in the filter output value matrix by N vertically and horizontally.
- Figure 31 shows an example of line approximation in the row direction.
- the positions of signal units U (l, y) to U (Wu, y) in the same row are approximated by a straight line Lh (y).
- the approximate straight line is the one that minimizes the sum of the distances between the position of each signal unit and the line Lh(y).
- Such a straight line can be obtained by a general method such as a least square method or principal component analysis.
- the linear approximation in the row direction is performed for all rows, and the linear approximation in the column direction is similarly performed for all columns.
- FIG. 32 shows an example of the result of performing linear approximation in the row direction and the column direction.
- Lh(y) is a straight line approximating U(1, y) to U(Wu, y) (reference numeral 810 in Fig. 32), and Lv(x) is a straight line approximating U(x, 1) to U(x, Hu) (reference numeral 820 in Fig. 32).
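The least-squares fit mentioned above can be sketched as follows (illustrative Python; rows are assumed to be near-horizontal, so the fit minimizes vertical offsets, one of the "general methods" the text names):

```python
def fit_row_line(points):
    """Least-squares fit of y = a*x + b through the detected positions
    (x, y) of the signal units in one row, giving the approximate
    straight line Lh(y)."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)  # slope
    b = (sy - a * sx) / n                          # intercept
    return a, b
```

Column lines Lv(x) would be fitted the same way with x and y swapped, since columns are near-vertical.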
- the straight lines approximated in step S620 do not, when viewed individually, have exactly the same slope or regular positions, for example because the detected signal units are misaligned to some extent. Therefore, in step S630, the lines are equalized by correcting the slope and position of each straight line.
- FIG. 33 shows an example of correcting the slope of the approximate straight line Lh (y) in the row direction.
- Figure 33 (a) is before correction
- Fig. 33 (b) is after correction.
- let the slope of Lh(y) in Fig. 33(a) be Th(y).
- the slope of Lh(y) is corrected to the average value of the slopes of the straight lines near Lh(y): Th(y) = AVERAGE(Th(y − Nh) to Th(y + Nh)).
- AVERAGE(A to B) denotes the average value over A to B, and Nh is an arbitrary natural number.
- Fig. 33 shows an example where Nh is 1: in Fig. 33(b), the slope of Lh(y) is corrected to the average value of the slopes of the straight lines Lh(y − 1) to Lh(y + 1).
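The neighbor-averaging of slopes can be sketched as follows (illustrative Python; leaving the boundary rows, whose averaging window would fall outside the matrix, unchanged is a simplifying assumption here, mirroring the rule the text states for the position correction):

```python
def smooth_slopes(slopes, Nh=1):
    """Equalize the fitted lines by replacing each slope Th(y) with
    the average of Th(y - Nh) .. Th(y + Nh). Rows whose window would
    leave the matrix keep their original slope."""
    out = list(slopes)
    for y in range(Nh, len(slopes) - Nh):
        window = slopes[y - Nh:y + Nh + 1]
        out[y] = sum(window) / len(window)
    return out
```

Averaging over neighboring rows suppresses per-line detection noise while following the slow rotation and paper distortion across the page.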
- FIG. 34 shows an example of correcting the position of the approximate straight line Lh (y) in the row direction.
- Figure 34 (a) is before correction
- Fig. 34 (b) is after correction.
- in Fig. 34(a), an arbitrary reference line 1130 is set in the vertical direction, and the y coordinate of the intersection of this line and Lh(y) is denoted Q(y). Q(y) is corrected so that it becomes the average of the positions of the straight lines in the vicinity of Lh(y):
- Q(y) = AVERAGE(Q(y − Mh) to Q(y + Mh))
- Mh is an arbitrary natural number. If y − Mh < 1 or y + Mh > Hu, no change is made.
- Fig. 34 shows an example when Mh is 1.
- Fig. 34 shows an example where Mh is 1: in Fig. 34(b), the position of Lh(y) is corrected to the midpoint (average) of the positions of the straight lines Lh(y − 1) and Lh(y + 1). This process can be omitted.
- Figure 35 shows an example of calculating the intersections of the approximate straight lines Lh(1) to Lh(Hu) in the row direction and the approximate straight lines Lv(1) to Lv(Wu) in the column direction.
- the intersection point is calculated by a general mathematical method.
- the intersections calculated here are the corrected signal unit positions. That is, the intersection of the approximate straight line Lh(y) in the row direction and the approximate straight line Lv(x) in the column direction is defined as the corrected position (Rx(x, y), Ry(x, y)) of the signal unit U(x, y).
- for example, the corrected position of signal unit U(1, 1) is the intersection of Lh(1) and Lv(1).
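Computing such an intersection can be sketched as follows (illustrative Python; row lines are written as y = a·x + b and near-vertical column lines as x = a·y + b, a parameterization chosen here so that vertical lines cause no division by zero):

```python
def intersect(a1, b1, a2, b2):
    """Intersection of a row line y = a1*x + b1 with a column line
    x = a2*y + b2. Returns the corrected signal unit position (x, y)."""
    # substitute x = a2*y + b2 into y = a1*x + b1 and solve for y
    y = (a1 * b2 + b1) / (1.0 - a1 * a2)
    x = a2 * y + b2
    return x, y
```

Taking intersections of the smoothed row and column lines replaces each noisy detected position with a grid point consistent with its whole row and column.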
- a corrected image is created from the input image with reference to the signal unit position calculated in step S640.
- let Dout be the resolution at which the watermark image output by the watermark image output unit 100 is printed, and Din the resolution at which the image input to the watermark image input unit 200 is acquired.
- the size of the corrected image is assumed to be the same magnification as the input image.
- the signal unit in the watermark image output unit has width Sw and height Sh
- the corrected image is created so that the signal units are evenly arranged.
- the position of the upper left signal unit U(1, 1) is (0, 0), which is the origin of the corrected image.
- the pixel value Vm at an arbitrary position (Xm, Ym) on the corrected image is obtained from the pixel value Vi at the coordinates (Xi, Yi) on the input image.
- Fig. 36 shows an example of the correspondence between these coordinates.
- Fig. 36 (a) shows the input image 1310
- Fig. 36 (b) shows the corrected image 1320. The relationship between (Xm, Ym) and (Xi, Yi) is explained using this figure.
- the closest signal units in the upper left, upper right, and lower left areas as seen from (Xm, Ym) are U(x, y) (with coordinate values (Sx(x, y), Sy(x, y)), 1360), U(x + 1, y) (1370), and U(x, y + 1) (1380), and let E1, E2, and E3 be the respective distances from (Xm, Ym) to them (specifically, x is the largest integer not exceeding Xm/Tw + 1, and y is the largest integer not exceeding Ym/Th + 1).
- in the input image 1310 in Fig. 36, the distances between (Xi, Yi) and U(x, y) (with coordinate values (Rx(x, y), Ry(x, y)), 1330), U(x + 1, y) (1340), and U(x, y + 1) (1350) are D1, D2, and D3 respectively, and the ratio D1 : D2 : D3 is equal to E1 : E2 : E3.
- the pixel value Vm of (Xm, Ym) is obtained from the pixel value Vi of the coordinates (Xi, Yi) on the input image 1310.
- Fig. 37 shows a specific calculation method of such (Xi, Yi).
- reference numeral 1430 in Fig. 37(a) is the point where (Xm, Ym) is projected onto the straight line connecting U(x, y) and U(x + 1, y), and Fx = Xm − Sx(x, y).
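The coordinate correspondence can be sketched as follows (illustrative Python; bilinear interpolation between the four surrounding unit positions is used here as a simplification of the ratio-preserving mapping described above, and all names are assumptions):

```python
def map_to_input(Xm, Ym, R, Sw, Sh):
    """Map a corrected-image coordinate (Xm, Ym) to input-image
    coordinates (Xi, Yi). R[y][x] holds the corrected position
    (Rx, Ry) of a signal unit on the input image; on the corrected
    image the units form a regular Sw x Sh grid. The point is located
    by bilinear interpolation between the four surrounding units,
    which preserves the distance ratios within each grid cell."""
    x = int(Xm // Sw)            # index of the unit above-left of the point
    y = int(Ym // Sh)
    fx = (Xm - x * Sw) / Sw      # fractional offset inside the cell
    fy = (Ym - y * Sh) / Sh
    p00, p10 = R[y][x], R[y][x + 1]
    p01, p11 = R[y + 1][x], R[y + 1][x + 1]
    Xi = ((1 - fy) * ((1 - fx) * p00[0] + fx * p10[0])
          + fy * ((1 - fx) * p01[0] + fx * p11[0]))
    Yi = ((1 - fy) * ((1 - fx) * p00[1] + fx * p10[1])
          + fy * ((1 - fx) * p01[1] + fx * p11[1]))
    return Xi, Yi
```

The pixel value Vm of the corrected image at (Xm, Ym) is then read (or interpolated) from the input image at (Xi, Yi).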
- Figure 38 shows an example of tampering determination.
- Bx = Ax × Din / Dout
- By = Ay × Din / Dout
- Bw = Aw × Din / Dout
- Bh = Ah × Din / Dout
- the corrected image 1530 in Fig. 38(c) is binarized with an appropriate threshold, and the enlarged or reduced feature image (Fig. 38(a)) is superimposed on it so that its upper left corner fits at (Bx, By) in the corrected image. The differences between the two images are then regarded as tampering.
- since the image captured from the printed document is corrected based on the position information of the signals embedded at the time of printing, the image before printing can be restored from the captured image without distortion, expansion, or contraction. The positions of these images can therefore be correlated with high accuracy, and high-performance alteration detection can be performed.
- the present invention is applicable to an image processing method and an image processing apparatus capable of verifying falsification on the side receiving a form when the printed form has been falsified.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP05785784A EP1798950A4 (en) | 2004-09-29 | 2005-09-22 | IMAGE PROCESSING METHOD AND IMAGE PROCESSING DEVICE |
US11/663,922 US20080260200A1 (en) | 2004-09-29 | 2005-09-22 | Image Processing Method and Image Processing Device |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2004284389A JP3999778B2 (ja) | 2004-09-29 | 2004-09-29 | 画像処理方法および画像処理装置 |
JP2004-284389 | 2004-09-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2006035677A1 true WO2006035677A1 (ja) | 2006-04-06 |
Family
ID=36118824
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2005/017517 WO2006035677A1 (ja) | 2004-09-29 | 2005-09-22 | 画像処理方法および画像処理装置 |
Country Status (6)
Country | Link |
---|---|
US (1) | US20080260200A1 (ja) |
EP (1) | EP1798950A4 (ja) |
JP (1) | JP3999778B2 (ja) |
KR (1) | KR20070052332A (ja) |
CN (1) | CN100464564C (ja) |
WO (1) | WO2006035677A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008252239A (ja) * | 2007-03-29 | 2008-10-16 | Oki Electric Ind Co Ltd | 帳票処理装置 |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4437756B2 (ja) * | 2005-02-25 | 2010-03-24 | 株式会社リコー | 情報抽出方法および情報抽出装置および情報抽出プログラムおよび記憶媒体 |
KR100942248B1 (ko) * | 2008-06-23 | 2010-02-16 | 이화여자대학교 산학협력단 | 워터 마킹 패턴을 이용한 영상의 기하학적 왜곡 보정 방법 |
JP2011166402A (ja) * | 2010-02-09 | 2011-08-25 | Seiko Epson Corp | 画像処理装置、方法及びコンピュータプログラム |
US10628736B2 (en) * | 2015-09-24 | 2020-04-21 | Huron Technologies International Inc. | Systems and methods for barcode annotations for digital images |
US11042772B2 (en) | 2018-03-29 | 2021-06-22 | Huron Technologies International Inc. | Methods of generating an encoded representation of an image and systems of operating thereof |
CA3118014C (en) | 2018-11-05 | 2024-06-11 | Hamid Reza Tizhoosh | Systems and methods of managing medical images |
US11610395B2 (en) | 2020-11-24 | 2023-03-21 | Huron Technologies International Inc. | Systems and methods for generating encoded representations for multiple magnifications of image data |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH09205542A (ja) * | 1996-01-25 | 1997-08-05 | Ricoh Co Ltd | ディジタル複写機 |
JPH10155091A (ja) * | 1996-11-22 | 1998-06-09 | Fuji Photo Film Co Ltd | 画像記録装置 |
JPH10164549A (ja) * | 1996-11-28 | 1998-06-19 | Ibm Japan Ltd | 認証情報を画像に隠し込むシステム及び画像認証システム |
JPH11355547A (ja) * | 1998-05-22 | 1999-12-24 | Internatl Business Mach Corp <Ibm> | 幾何変換特定システム |
JP2003244427A (ja) * | 2001-12-10 | 2003-08-29 | Canon Inc | 画像処理装置及び画像処理方法 |
JP2003264685A (ja) * | 2002-03-08 | 2003-09-19 | Oki Electric Ind Co Ltd | 文書画像出力方法及び装置、改ざん判定方法及びシステム、並びに改ざん判定システムの制御用プログラム |
JP2004064516A (ja) * | 2002-07-30 | 2004-02-26 | Kyodo Printing Co Ltd | 電子透かし挿入方法及びその装置並びに電子透かし検出方法及びその装置 |
JP2004179744A (ja) * | 2002-11-25 | 2004-06-24 | Oki Electric Ind Co Ltd | 電子透かし埋め込み装置及び電子透かし検出装置 |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3809297B2 (ja) * | 1998-05-29 | 2006-08-16 | キヤノン株式会社 | 画像処理方法、装置及び媒体 |
- US6456727B1 (en) * | 1999-09-02 | 2002-09-24 | Hitachi, Ltd. | Method of extracting digital watermark information and method of judging bit value of digital watermark information |
AU2002214358A1 (en) * | 2000-11-02 | 2002-05-15 | Markany Inc. | Watermarking system and method for protecting a digital image from forgery or alteration |
JP4005780B2 (ja) * | 2001-07-12 | 2007-11-14 | 興和株式会社 | 電子透かしの埋め込みおよび検出 |
KR100456629B1 (ko) * | 2001-11-20 | 2004-11-10 | 한국전자통신연구원 | 웨이블릿 기반에서 디지털 워터마크 삽입/추출장치 및 방법 |
US7065237B2 (en) * | 2001-12-10 | 2006-06-20 | Canon Kabushiki Kaisha | Image processing apparatus and method |
-
2004
- 2004-09-29 JP JP2004284389A patent/JP3999778B2/ja not_active Expired - Fee Related
-
2005
- 2005-09-22 EP EP05785784A patent/EP1798950A4/en not_active Withdrawn
- 2005-09-22 KR KR1020077007218A patent/KR20070052332A/ko not_active Application Discontinuation
- 2005-09-22 WO PCT/JP2005/017517 patent/WO2006035677A1/ja active Application Filing
- 2005-09-22 US US11/663,922 patent/US20080260200A1/en not_active Abandoned
- 2005-09-22 CN CNB2005800330763A patent/CN100464564C/zh not_active Expired - Fee Related
Non-Patent Citations (1)
Title |
---|
See also references of EP1798950A4 * |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2008252239A (ja) * | 2007-03-29 | 2008-10-16 | Oki Electric Ind Co Ltd | 帳票処理装置 |
Also Published As
Publication number | Publication date |
---|---|
EP1798950A4 (en) | 2007-11-07 |
US20080260200A1 (en) | 2008-10-23 |
JP3999778B2 (ja) | 2007-10-31 |
EP1798950A1 (en) | 2007-06-20 |
CN100464564C (zh) | 2009-02-25 |
JP2006101161A (ja) | 2006-04-13 |
CN101032158A (zh) | 2007-09-05 |
KR20070052332A (ko) | 2007-05-21 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4277800B2 (ja) | 透かし情報検出方法 | |
JP3628312B2 (ja) | 透かし情報埋め込み装置,及び,透かし情報検出装置 | |
JP3136061B2 (ja) | ドキュメントコピー防止方法 | |
JP5015540B2 (ja) | 電子透かし埋め込み装置および検出装置 | |
JP3964684B2 (ja) | 電子透かし埋め込み装置,電子透かし検出装置,電子透かし埋め込み方法,及び,電子透かし検出方法 | |
WO2006035677A1 (ja) | 画像処理方法および画像処理装置 | |
US20110052094A1 (en) | Skew Correction for Scanned Japanese/English Document Images | |
US8275168B2 (en) | Orientation free watermarking message decoding from document scans | |
WO2005094058A1 (ja) | 印刷媒体の品質調整システム,検査用透かし媒体出力装置,透かし品質検査装置,調整済透かし媒体出力装置,印刷媒体の品質調整方法,および検査用透かし媒体 | |
CN101119429A (zh) | 一种数字水印嵌入与提取的方法及装置 | |
JP4400565B2 (ja) | 透かし情報埋め込み装置及び、透かし情報検出装置 | |
Tan et al. | Print-Scan Resilient Text Image Watermarking Based on Stroke Direction Modulation for Chinese Document Authentication. | |
JPWO2008035401A1 (ja) | 電子透かし埋め込み装置および検出装置 | |
JP2004128845A (ja) | 透かし情報埋め込み方法,透かし情報検出方法,透かし情報埋め込み装置,及び,透かし情報検出装置 | |
JP2007088693A (ja) | 画像処理システム,改ざん検証装置,改ざん検証方法およびコンピュータプログラム | |
AU2006252223A1 (en) | Tamper Detection of Documents using Encoded Dots | |
JP4192887B2 (ja) | 改ざん検出装置,透かし入り画像出力装置,透かし入り画像入力装置,透かし入り画像出力方法,および透かし入り画像入力方法 | |
JP2004147253A (ja) | 画像処理装置及び画像処理方法 | |
US20070074029A1 (en) | Data embedding apparatus | |
JP4232676B2 (ja) | 情報検出装置,画像処理システム,および情報検出方法 | |
WO2006059681A1 (ja) | 改ざん検出装置,透かし入り画像出力装置,透かし入り画像入力装置,透かし入り画像出力方法,および透かし入り画像入力方法 | |
JP4192906B2 (ja) | 透かし情報検出装置及び透かし情報検出方法 | |
JP4672513B2 (ja) | 情報処理システム,地紋重畳装置,回答抽出装置,地紋重畳方法,回答抽出方法 | |
JP4668086B2 (ja) | 画像処理装置,画像処理方法,およびコンピュータプログラム | |
JP2006186509A (ja) | 電子透かしシステム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states |
Kind code of ref document: A1 Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS KE KG KM KP KR KZ LC LK LR LS LT LU LV LY MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
|
AL | Designated countries for regional patents |
Kind code of ref document: A1 Designated state(s): GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | ||
WWE | Wipo information: entry into national phase |
Ref document number: 2005785784 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 1020077007218 Country of ref document: KR Ref document number: 200580033076.3 Country of ref document: CN |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
WWP | Wipo information: published in national office |
Ref document number: 2005785784 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 11663922 Country of ref document: US |