EP1471722A2 - Image processing system

Info

Publication number: EP1471722A2
Authority: EP (European Patent Office)
Prior art keywords: information, image information, unit, frequency component, sub
Legal status: Granted
Application number: EP20040009683
Other languages: German (de), French (fr)
Other versions: EP1471722A3 (en), EP1471722B1 (en)
Inventors: Shinya Tokuda, Takashi Yamaguchi (Intellectual Property Division)
Current Assignee: Toshiba Corp
Original Assignee: Toshiba Corp
Application filed by Toshiba Corp; application granted; current legal status: Expired - Lifetime


Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00: Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32: Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101: Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144: Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N1/32149: Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N1/32203: Spatial or amplitude domain methods
    • H04N1/32208: Spatial or amplitude domain methods involving changing the magnitude of selected pixels, e.g. overlay of information or super-imposition
    • H04N1/32251: Spatial or amplitude domain methods in multilevel data, e.g. greyscale or continuous tone data
    • H04N1/32288: Multiple embedding, e.g. cocktail embedding, or redundant embedding, e.g. repeating the additional information at a plurality of locations in the image
    • H04N1/32293: Repeating the additional information in a regular pattern
    • H04N1/32309: Methods relating to embedding, encoding, decoding, detection or retrieval operations in colour image data
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00: General purpose image data processing
    • G06T1/0021: Image watermarking
    • G06T1/0028: Adaptive watermarking, e.g. Human Visual System [HVS]-based watermarking
    • G06T2201/00: General purpose image data processing
    • G06T2201/005: Image watermarking
    • G06T2201/0051: Embedding of the watermark in the spatial domain
    • G06T2201/0083: Image watermarking whereby only watermarked image required at decoder, e.g. source-based, blind, oblivious

Definitions

  • the present invention relates to an image processing system which creates composite image information by embedding, in visible main image information (e.g., a human facial image), another additional sub-information (e.g., security information) in an invisible state, records the created composite image information on a recording medium, and restores the embedded sub-information from the recorded composite image information, and an image processing method and apparatus used in the image processing system.
  • the digital watermarking technique of embedding additional sub-information (sub-image information) in main image information in an invisible state has been provided as a countermeasure against unauthorized copying, counterfeiting, and tampering of a personal authentication medium such as an ID card or a photograph in which copyright information is embedded.
  • Jpn. Pat. Appln. KOKAI Publication No. 9-248935 discloses a digital watermark insertion method of embedding data in image data output onto printed matter by using the characteristics of high spatial frequency components and color difference components, which are difficult for humans to perceive.
  • Jpn. Pat. Appln. KOKAI Publication No. 2001-268346 discloses a printing apparatus for digital watermarks that can be recognized through optical filters.
  • Jpn. Pat. Appln. KOKAI Publication No. 9-248935 also describes a method of predicting the degree of deterioration and increasing the embedding strength in accordance with the predicted degree. This method, however, increases the risk of disclosing the sub-information.
  • a sublimation/thermal transfer recording scheme is generally used to record facial images for personal authentication on various kinds of ID cards such as driver's licenses and personal authentication media typified by membership cards.
  • In the sublimation/thermal transfer recording scheme, the materials that can be dyed with sublimable dyes are limited, so the scheme can be adapted to only a limited range of recording media. For this reason, the degree of freedom in selecting recording media for personal authentication media on which facial images are recorded is low. As a consequence, easily available media must be selected, which often decreases security.
  • sublimable dyes generally have poor image durability, e.g., poor light resistance and poor solvent resistance.
  • In the fusion thermal transfer recording scheme, by contrast, a material having good light resistance can generally be selected as a coloring material.
  • This scheme therefore allows a high degree of freedom of choice regarding recording media. In the scheme, therefore, a high-specialty recording medium can be used. This makes it possible to improve security.
  • the fusion thermal transfer recording scheme uses a dot area gradation method of performing gradation recording by changing the sizes of transferred dots. With this scheme, therefore, it is difficult to achieve gradation performance as high as that of the sublimation/thermal transfer recording scheme.
  • Jpn. Pat. Appln. KOKOKU Publication No. 6-59739 discloses a method of recording transferred dots in a so-called staggered array (this method will be referred to as an alternate driving/recording scheme hereinafter).
  • Jpn. Pat. Appln. KOKOKU Publication No. 6-59739 discloses a recording method of improving the gradation recording performance in the fusion thermal transfer recording scheme. If, however, facial image information in which watermark information is embedded by using a digital watermarking technique is recorded, data is thinned out in a staggered pattern, resulting in loss of corresponding information. This destroys the digital watermark information.
  • It is an object of the present invention to provide an image processing system and an image processing method and apparatus which can create composite image information constituted by main image information and additional sub-information embedded in the main image information in an invisible state, output it as analog data to a recording medium, and maintain the digital watermark information in the composite image information after recording.
  • an image processing system comprising a first image processing apparatus which records, on a recording medium, composite image information created by embedding invisible sub-information in visible main image information, and a second image processing apparatus which restores the sub-information from the composite image information recorded on the recording medium by the first image processing apparatus
  • the first image processing apparatus including a pre-processing unit which performs, for main image information, pre-processing corresponding to pixel formation processing for image recording in the first image processing apparatus, an embedding processing unit which creates composite image information by embedding sub-information in main image information in an invisible state using the main image information, the sub-information, and key information used to restore the sub-information, and a recording unit which records the composite image information created by the embedding processing unit on a recording medium
  • the second image processing apparatus including an image input unit which inputs the composite image information from the recording medium on which the composite image information is recorded by the recording unit of the first image processing apparatus, a frequency component extracting unit which extracts a spatial frequency component unique to the key information from the composite image information input by the image input unit, and a reconstructing unit which reconstructs the sub-information from the spatial frequency component extracted by the frequency component extracting unit
  • an image processing system comprising a first image processing apparatus which records, on a recording medium, composite image information created by embedding invisible sub-information in visible main image information, and a second image processing apparatus which restores the sub-information from the composite image information recorded on the recording medium by the first image processing apparatus, the first image processing apparatus including a first pre-processing unit which performs, for main image information, first pre-processing corresponding to pixel formation processing for image recording in the first image processing apparatus, a second pre-processing unit which performs geometric transformation with respect to the main image information having undergone the first pre-processing by the first pre-processing unit, an embedding processing unit which creates composite image information by embedding sub-information in main image information in an invisible state using the main image information, the sub-information, and key information used to restore the sub-information, an inverse transformation unit which performs transformation processing inverse to the transformation processing by the second pre-processing unit with respect to the composite image information created by the embedding processing unit, and a recording unit which records, on a recording medium, the composite image information having undergone the inverse transformation processing by the inverse transformation unit
  • an image processing apparatus comprising an image input unit which inputs composite image information from a recording medium on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information, a frequency component extracting unit which extracts a spatial frequency component unique to the key information from the composite image information input by the image input unit, and a reconstructing unit which reconstructs the sub-information from the spatial frequency component extracted by the frequency component extracting unit.
  • an image processing apparatus comprising an image input unit which inputs composite image information from a recording medium on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information, a color component information storage unit which stores color component information, a color component extracting unit which extracts a color component from the composite image information input by the image input unit on the basis of the color component information stored in the color component information storage unit, a frequency component extracting unit which extracts a spatial frequency component unique to the key information from the color component extracted by the color component extracting unit, and a reconstructing unit which reconstructs the sub-information from the spatial frequency component extracted by the frequency component extracting unit.
  • an image processing apparatus comprising an image input unit which inputs composite image information from a recording medium on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information, an area extracting unit which extracts a local area from the composite image information input by the image input unit, a color feature extracting unit which extracts a color feature in the local area extracted by the area extracting unit from the local area, a color combining unit which creates color component composite image information by combining color components on the basis of the color feature extracted by the color feature extracting unit, a frequency component extracting unit which extracts a spatial frequency component unique to the key information from the color component composite image information created by the color combining unit, and a reconstructing unit which reconstructs the sub-information from the spatial frequency component extracted by the frequency component extracting unit.
  • an image processing apparatus comprising an image input unit which inputs composite image information from a recording medium on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information, an area extracting unit which extracts a local area from the composite image information input by the image input unit, a color feature extracting unit which extracts a color feature in the local area extracted by the area extracting unit from the local area, a reconstruction parameter determining unit which determines a reconstruction parameter on the basis of the color feature extracted by the color feature extracting unit, a frequency component extracting unit which extracts a spatial frequency component unique to the key information from the composite image information input by the image input unit, and a reconstructing unit which reconstructs the sub-information from the spatial frequency component extracted by the frequency component extracting unit by using the reconstruction parameter determined by the reconstruction parameter determining unit.
  • an image processing method comprising inputting composite image information from a recording medium on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information, extracting a spatial frequency component unique to the key information from the composite image information input from the recording medium, and reconstructing the sub-information from the spatial frequency component extracted by extracting the frequency component.
  • an image processing method comprising inputting composite image information from a recording medium on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information, extracting a color component from the composite image information input from the recording medium on the basis of the color component information stored in a color component information storage unit, extracting a spatial frequency component unique to the key information from the extracted color component, and reconstructing the sub-information from the extracted spatial frequency component.
  • an image processing method comprising inputting composite image information from a recording medium on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information, extracting a local area from the composite image information input from the recording medium, extracting a color feature from the local area extracted from the composite image information, creating color component composite image information by combining color components on the basis of the color feature extracted from the local area, extracting a spatial frequency component unique to the key information from the created color component composite image information, and reconstructing the sub-information from the extracted spatial frequency component.
  • an image processing method comprising inputting composite image information from a recording medium on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information, extracting a local area from the composite image information input from the recording medium, extracting a color feature in the extracted local area from the local area, determining a reconstruction parameter on the basis of the color feature extracted from the local area, extracting a spatial frequency component unique to the key information from the composite image information input from the recording medium, and reconstructing the sub-information from the spatial frequency component extracted by extracting the frequency component by using the reconstruction parameter determined on the basis of the color feature.
  • the first embodiment will be described first.
  • FIG. 1 schematically shows the overall arrangement of an image processing system according to the first embodiment.
  • the image processing system shown in FIG. 1 is applied to, for example, processing of facial images for personal authentication on personal authentication media such as ID cards.
  • the image processing system shown in FIG. 1 is comprised of a first image processing apparatus 100 and second image processing apparatus 110.
  • the first image processing apparatus 100 records, on a recording medium M, composite image information created by embedding sub-information (sub-image information) in main image information, which is visible to the naked human eye, in an invisible state to the naked human eye.
  • the second image processing apparatus 110 restores the embedded sub-information from the composite image information recorded on the recording medium M by the first image processing apparatus 100.
  • the first image processing apparatus 100 includes a main image input unit 101, first pre-processing unit 102, second pre-processing unit 103, digital watermark embedding unit 104, post-processing unit 105, and recording unit 106.
  • the main image input unit 101 captures a facial image of the holder of a personal authentication medium and converts it into digital image information.
  • the main image input unit 101 inputs a facial image of the holder of the personal authentication medium or captures a facial portrait using an image input unit such as a scanner, thereby digitizing the personal facial image information.
  • the main image information (facial image information) has three planes, i.e., R (red), G (green), and B (blue) planes.
  • the first pre-processing unit 102 converts the facial image information (to be also referred to as main image information hereinafter) captured by the main image input unit 101 into a form suitable for pixel formation processing by the recording unit 106 (to be described later).
  • the second pre-processing unit 103 converts the image information converted by the first pre-processing unit 102 into a form suitable for digital watermark embedding processing.
  • the digital watermark embedding unit 104 embeds digital watermark information in the image information converted by the second pre-processing unit 103 by using key information.
  • the post-processing unit 105 performs processing inverse to that of the second pre-processing unit 103 for the image information in which the digital watermark is embedded, converting the information back into a form suitable for pixel formation processing by the recording unit 106 (to be described later).
  • the recording unit 106 prints/records, on the recording medium M, the composite image information in which the digital watermark is embedded, on the basis of the image information converted by the post-processing unit 105.
  • the units ranging from the first pre-processing unit 102 to the post-processing unit 105 embed a digital watermark in main image information and convert it into a form suitable for pixel formation processing by the recording unit 106.
  • the flow of these processes will be described with reference to the flowchart shown in FIG. 2.
  • the first pre-processing unit 102 performs the first pre-processing corresponding to pixel formation processing by the recording unit 106 for the main image information captured by the main image input unit 101 to create main image information having undergone the first pre-processing (step S201). In this case, the first pre-processing unit 102 performs thinning-out (invalidation) processing for the main image information.
  • the second pre-processing unit 103 performs geometric transformation processing for the main image information having undergone the first pre-processing which is created in step S201, thereby creating image information subjected to embedding (step S202). In this case, the second pre-processing unit 103 performs rotation processing for the main image information having undergone the first pre-processing, and removes pixel portions thinned out in the first pre-processing to compress the effective image size.
  • the digital watermark embedding unit 104 performs digital watermark embedding processing for the image information subjected to embedding (main image information having undergone the second pre-processing) created in step S202 (step S203).
  • the digital watermark embedding unit 104 creates composite image information by embedding, in image information subjected to embedding, sub-information in an invisible state in which the information cannot be perceived by the human eye.
  • the post-processing unit 105 then creates to-be-recorded image information by performing post-processing for the composite image information created in step S203 (step S204).
  • the post-processing unit 105 performs reverse rotation processing for the composite image information, and expands the effective image size by adding the pixel portions removed in the second pre-processing.
  • digital watermark embedding processing is not limited to R (red), G (green), and B (blue) data.
  • color conversion may be performed first, and then digital watermark embedding processing may be performed for data having three planes, i.e., C (cyan), M (magenta), and Y (yellow) planes.
  • the recording unit 106 prepares a personal authentication medium by printing/recording the to-be-recorded image information created by the post-processing unit 105 on the recording medium M serving as a personal authentication medium. More specifically, the recording unit 106 performs color conversion of R (red), G (green), and B (blue) of the respective pixels of the to-be-recorded image information into C (cyan), M (magenta), and Y (yellow). This color conversion uses, for example, a 3 × 3 or 3 × 9 color conversion matrix or a LUT (lookup table) in accordance with the characteristics of the recording device. The recording unit 106 then generates a driving signal for controlling the recording device from the C, M, and Y image information.
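  • As a rough sketch of this color conversion step (not the coefficients of any actual recording device), the following converts R, G, B values to C, M, Y through the subtractive complement refined by a device-dependent 3 × 3 matrix; the identity matrix below is only an illustrative placeholder.

```python
import numpy as np

# Hypothetical 3 x 3 correction matrix; a real device would use a matrix
# (or LUT) calibrated to the printer's ink characteristics.
RGB_TO_CMY_MATRIX = np.eye(3)

def rgb_to_cmy(rgb_image: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) uint8 RGB image to CMY: take the subtractive
    complement (C = 255 - R, etc.) and apply the correction matrix."""
    complement = 255.0 - rgb_image.astype(np.float64)
    cmy = complement @ RGB_TO_CMY_MATRIX.T
    return np.clip(cmy, 0, 255).astype(np.uint8)
```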
  • In the case of the fusion thermal transfer recording scheme, for example, the recording unit 106 generates a driving voltage control signal, a driving pulse signal, and the like for a thermal head, and also performs heat control for the thermal head. Finally, the recording unit 106 records the composite image information on the recording medium M by alternately forming even-numbered and odd-numbered pixels in the main scanning direction of the recording device, represented by a thermal head, on a recording line basis.
  • the dots formed on the recording medium M are arrayed as shown in FIG. 3.
  • the respective dots are arranged not at every other dot position but at a pitch d (1/√2 the pitch of the heating elements of the thermal head), arrayed in a line in a 45° direction.
  • FIG. 4 shows an example of a personal authentication medium 401 such as an ID card prepared by the recording unit 106.
  • a personal authentication facial image 402 of the holder is recorded on the personal authentication medium 401.
  • the facial image 402 is the image created and recorded by the processing described with reference to FIG. 2.
  • personal management information 403 such as an identification number (No.), name, date of birth, and expiration date is recorded on the personal authentication medium 401.
  • the personal management information 403 is used as the sub-information in the digital watermark embedding processing in step S203 in FIG. 2. Consequently, the personal authentication facial image 402 of the personal authentication medium 401 is associated with the personal management information 403. This makes it difficult to partly tamper with or counterfeit the personal authentication medium 401, resulting in an improvement in security.
  • the second image processing apparatus 110 is comprised of a to-be-recorded image input unit 111, restoring unit 115, and determining unit 114.
  • the to-be-recorded image input unit 111 reads/inputs, for example, the composite image information 403 recorded on the personal authentication medium 401 in FIG. 4, and converts it into digital image information.
  • the to-be-recorded image input unit 111 captures the composite image information 403 recorded on the personal authentication medium 401 by using an image input device such as a camera, and converts the information into digital composite image information.
  • the image information has three planes, i.e., R (red), G (green), and B (blue) planes, as in the case of the main image input unit 101.
  • the restoring unit 115 restores digital watermark information (sub-information) from the composite image information captured by the to-be-recorded image input unit 111.
  • the restoring unit 115 is comprised of a frequency component extracting unit 112 and reconstructing unit 113.
  • the frequency component extracting unit 112 extracts the spatial frequency component of key information from the composite image information captured by the to-be-recorded image input unit 111.
  • the reconstructing unit 113 reconstructs digital watermark information (sub-information) from the spatial frequency component extracted by the frequency component extracting unit 112.
  • the reconstructing unit 113 performs totalization processing and spatial filtering for the amplitude information and phase information of the spatial frequency component extracted by the frequency component extracting unit 112, thereby reconstructing a modulated component, i.e., sub-information, with respect to the key information.
  • the determining unit 114 determines the authenticity of the personal authentication medium on the basis of the digital watermark information restored by the restoring unit 115.
  • the determining unit 114 collates the sub-information (personal management information) restored by the restoring unit 115 with the personal management information 403 on the personal authentication medium 401 which is captured by the to-be-recorded image input unit 111, thereby determining the authenticity of the personal authentication medium 401.
  • In the fusion thermal transfer recording scheme, an image is formed based on the presence/absence of dots.
  • the apparent density is controlled by performing area modulation processing of changing the areas of dots.
  • the alternate driving/recording scheme is preferably used.
  • the above alternate driving/recording scheme is a scheme of alternately driving the odd-numbered heating elements of the odd-numbered lines and the even-numbered heating elements of the even-numbered lines of the recording head (line thermal head) on a recording line basis.
  • image information to be recorded is arranged in a lattice pattern, as shown in FIG. 5A.
  • the image information is recorded in a staggered pattern to form an image, as shown in FIG. 5B. Therefore, the even-numbered information of each odd-numbered line and the odd-numbered information of each even-numbered line of the image information to be recorded are omitted in actual recording operation.
  • the first pre-processing in step S201 and the second pre-processing in step S202 are performed.
  • the post-processing in step S204 is performed. This makes it possible to prevent the destruction of the digital watermark in alternate driving/recording operation.
  • In step S201, image information corresponding to pixels to which no energy is applied in the alternate driving/recording scheme is thinned out.
  • FIG. 6A shows an example of the pixel array of the overall image information to be recorded. Referring to FIG. 6A, black portions 601 correspond to pixels to be recorded (information not to be thinned out), and white portions 602 correspond to pixels not to be recorded (information to be thinned out).
  • FIG. 6B shows the image information obtained when the image information array shown in FIG. 6A is rotated through 45°.
  • In the second pre-processing, the white portions 602 (information to be thinned out) are removed, and the remaining black portions 601 (information not to be thinned out) are arrayed again.
  • an array of only image information free from the influence of the alternate driving/recording scheme can be created.
  • FIG. 7A is a view showing a specific example of image information to be recorded.
  • FIG. 7B is a view showing the image information obtained by performing thinning-out processing for the image information shown in FIG. 7A.
  • FIG. 7C is a view showing the image information obtained when 45° rotation processing is performed for the image information shown in FIG. 7B.
  • FIG. 7D is a view showing the image information obtained when rearrangement processing is performed for the image information shown in FIG. 7C.
  • the above thinning-out processing is performed for the image information shown in FIG. 7A.
  • That is, the even-numbered information of the odd-numbered lines (pixels a12, a14, a32, and a34) and the odd-numbered information of the even-numbered lines (pixels a21, a23, a41, and a43) are thinned out.
  • the image information having undergone the thinning-out processing in the first pre-processing is rotated through 45° (rotation processing).
  • When the image information shown in FIG. 7B is rotated through 45°, the image information shown in FIG. 7C is formed.
  • the effective pixels of the image information shown in FIG. 7C are re-arrayed.
  • the remaining information after the removal of the portions marked X (pixels a11, a13, a22, a24, a31, a33, a42, and a44) constitutes the effective pixels.
  • the effective pixels of the image information shown in FIG. 7C are re-arrayed in the second pre-processing, as shown in FIG. 7D. Note that information ("0" in this case) indicating that nothing is recorded is stored in each empty array element, as shown in FIG. 7D.
  • the thick frame portion is image information to be actually recorded.
  • When the image information shown in FIG. 7A is compared with that shown in FIG. 7D, the area in which the pixels of the image information that is actually recorded (i.e., free from the influence of alternate driving/recording) are arrayed is reduced. That is, when digital watermark embedding processing is performed so as to make the sub-information fall within the thick frame portion in FIG. 7D, the sub-information can be completely held.
  • post-processing is processing totally reverse to the first pre-processing and second pre-processing described above.
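  • The following is a minimal sketch of the first pre-processing, second pre-processing, and post-processing described above, assuming an even image width and 0-indexed coordinates (a pixel survives alternate driving when its row and column have the same parity). The literal 45° rotation is replaced by an equivalent packing of the effective pixels; all function names are illustrative.

```python
import numpy as np

def first_preprocessing(img: np.ndarray) -> np.ndarray:
    """Thin out (invalidate) the pixels that alternate driving never
    records: even columns of odd rows and odd columns of even rows."""
    out = img.copy()
    rows, cols = np.indices(img.shape[:2])
    out[(rows + cols) % 2 == 1] = 0
    return out

def second_preprocessing(img: np.ndarray) -> np.ndarray:
    """Equivalent of the 45-degree rotation + compression: pack each
    row's effective pixels into a half-width array so that embedding
    only touches pixels that will actually be recorded."""
    h, w = img.shape[:2]
    packed = np.zeros((h, w // 2) + img.shape[2:], dtype=img.dtype)
    for i in range(h):
        packed[i] = img[i, (i % 2)::2]
    return packed

def postprocessing(packed: np.ndarray, w: int) -> np.ndarray:
    """Inverse transformation: scatter the packed pixels back to their
    staggered positions, filling the removed positions with 0."""
    h = packed.shape[0]
    out = np.zeros((h, w) + packed.shape[2:], dtype=packed.dtype)
    for i in range(h):
        out[i, (i % 2)::2] = packed[i]
    return out
```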
  • the first embodiment has exemplified the fusion thermal transfer recording scheme
  • the image processing in the first embodiment can be applied to any recording scheme as long as it realizes gradation expression by dot area modulation of to-be-recorded pixels.
  • FIG. 8 schematically shows a processing sequence in this image processing system.
  • main image information 801 is, for example, facial image information for personal authentication.
  • sub-information 802 is, for example, information for improving the security of the main image information 801 (the numeral "174" in this case).
  • Key information 803 is information serving as a key for restoring the sub-information embedded in an invisible state by digital watermark embedding processing.
  • image information 804 subjected to embedding is created by performing the first pre-processing and second pre-processing for the main image information 801.
  • Digital watermark embedding processing 805 is then performed by using the image information 804 subjected to embedding, sub-information 802, and key information 803 to create digital watermarked image information 806.
  • Post-processing is performed as transformation processing inverse to the first pre-processing and second pre-processing to generate composite image information 807.
  • a personal authentication medium 809 is completed by executing recording (printing) processing 808 of recording the created composite image information 807.
  • a general digital watermark embedding technique can be applied to digital watermark embedding processing in this embodiment.
  • the digital watermark embedding processing in this embodiment is especially compatible with a technique of performing digital watermark embedding by superimposing sub-information on main image information.
  • the digital watermark embedding techniques that can be applied to this embodiment are disclosed in, for example, Jpn. Pat. Appln. KOKAI Publication Nos. 11-168616 and 2001-268346. These techniques are described on the premise that main image information is basically a color (full-color) image. By further applying, for example, the technique disclosed in Jpn. Pat. Appln. KOKAI Publication No. 11-355554 to these techniques, sub-information (sub-image information) can be embedded even in a monochrome image.
  • FIG. 9 is a view for explaining the flow of digital watermark embedding processing using the color difference modulation scheme described in Jpn. Pat. Appln. KOKAI Publication No. 11-168616.
  • In the digital watermark embedding processing, sub-information can be embedded in main image information in an invisible state, without causing any perceptible image deterioration, by using the following characteristics (1) to (3).
  • (1) as the spatial frequency of an image becomes higher, the human gradation identification ability decreases; (2) color difference information is more difficult for humans to discriminate than luminance information; (3) complementary colors (for example, red and cyan in additive color mixing) appear achromatic when superimposed at a high spatial frequency.
  • the composite image information (digital watermarked image) generated by this scheme does not depend on the image format for storage. Therefore, as an image format for composite image information, a new future image format can be used as well as a currently available image format such as BMP, TIFF, or JPEG.
  • Image information (main image information) 901 subjected to embedding is the image information in which the to-be-embedded information is to be embedded.
  • the main image information 901 corresponds to a facial portrait (facial image) of the holder of the medium.
  • the main image information 901 has, for example, 24-bit information per pixel (eight bits for each of R, G, and B).
  • To-be-embedded image information (sub-information) 902 is obtained by converting information to be embedded into a binary image.
  • the above sub-information 902 corresponds to the identification number or the like.
  • the sub-information 902 has, for example, 1-bit information per pixel.
  • Mask image information (key information) 903 is image information used in image combining processing and restoration (reproduction) of embedded image information.
  • the key information 903 has 1-bit information per pixel.
  • Smoothing processing 904 is performed with each black pixel of the to-be-embedded image information 902 being converted into "1"; and each white pixel, "0".
  • In the smoothing processing 904, a (3 × 1)-pixel area including the pixels on both sides of a target pixel in the x direction is extracted, and the weighted average of the extracted area is calculated.
  • In the phase modulation processing 905, phase modulation is performed for the mask image information 903 on the basis of the smoothing result obtained by the smoothing processing 904.
  • Color difference modulation processing 907 is then performed using a color difference amount ΔCd on the basis of the phase modulation result obtained by the phase modulation processing 905.
  • In the color difference modulation processing 907, for example, the three components R (red), G (green), and B (blue) are calculated separately.
  • Composite image information 909 is generated by performing superimposition processing 908 on the basis of the color difference modulation processing result obtained by the color difference modulation processing 907 and the image information 901 subjected to embedding.
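  • The sketch below walks through a simplified version of the FIG. 9 flow: smoothing of the binary sub-information, phase modulation of a tiled binary key pattern, color difference modulation with ΔCd, and superimposition. The smoothing weights, the phase modulation rule, and the ΔCd value are illustrative assumptions, not the exact operations of KOKAI 11-168616.

```python
import numpy as np

DELTA_CD = 16  # color difference amount (illustrative)

def smooth(sub: np.ndarray) -> np.ndarray:
    """Smoothing 904: weighted average over a (3 x 1) window in the
    x direction, sub-information given as 1 (black) / 0 (white)."""
    p = np.pad(sub.astype(np.float64), ((0, 0), (1, 1)), mode="edge")
    return (p[:, :-2] + 2.0 * p[:, 1:-1] + p[:, 2:]) / 4.0

def phase_modulate(key: np.ndarray, smoothed: np.ndarray) -> np.ndarray:
    """Phase modulation 905: tile the binary key pattern over the image
    and invert its phase wherever the smoothed sub-information is set."""
    reps = (-(-smoothed.shape[0] // key.shape[0]),
            -(-smoothed.shape[1] // key.shape[1]))
    tiled = np.tile(key, reps)[: smoothed.shape[0], : smoothed.shape[1]]
    return np.where(smoothed >= 0.5, 1 - tiled, tiled)

def color_diff_modulate(modulated_key: np.ndarray) -> np.ndarray:
    """Color difference modulation 907: red-rich (+dCd on R, -dCd on G
    and B) versus cyan-rich, chosen per pixel by the modulated key."""
    sign = np.where(modulated_key > 0, 1.0, -1.0)
    return np.stack([sign, -sign, -sign], axis=-1) * DELTA_CD

def embed(src: np.ndarray, sub: np.ndarray, key: np.ndarray) -> np.ndarray:
    """Superimposition 908: DES = SRC + STL, clipped to 0..255."""
    stl = color_diff_modulate(phase_modulate(key, smooth(sub)))
    return np.clip(src.astype(np.float64) + stl, 0, 255).astype(np.uint8)
```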
  • the image information 901 subjected to embedding, to-be-embedded image information 902, and mask image information 903 in FIG. 9 are identical to the main image information 801, sub-information 802, and key information 803 in this embodiment described with reference to FIG. 8.
  • the digital watermark embedding scheme shown in FIG. 9 can therefore be basically applied to this embodiment.
  • the array size of effective image information is smaller than the original size of the main image information as indicated by the thick frame in FIG. 7D.
  • When the composite image information 909 is to be generated by superimposing the image information 901' subjected to embedding and the superimposition image information 910 obtained by color difference modulation processing, as in the digital watermark embedding processing shown in FIG. 9, the effective portion ("174" in this case) of the superimposition image information 910 needs to completely fall within the hatched portion of the image information 901' subjected to embedding.
  • the image information 901' subjected to embedding, the superimposition image information 910, and the composite image information 909 are defined as SRC_C(x, y), STL_C(x, y), and DES_C(x, y), respectively, where each value is an integral value from 0 to 255.
  • the composite image information DES_C(x, y) is represented as follows by using the image information SRC_C(x, y) subjected to embedding and the superimposition image information STL_C(x, y):

    DES_C(x, y) = SRC_C(x, y) + STL_C(x, y)

    where C denotes each color plane: R (red), G (green), and B (blue) in additive color mixing, or C (cyan), M (magenta), and Y (yellow) in subtractive color mixing.
  • Sub-information restoration processing for the composite image information generated by the above digital watermark embedding processing will be described next. Sub-information is restored by extracting a specific spatial frequency component from the composite image information on the basis of the key information used in embedding processing, and reconstructing the sub-information from the spatial frequency component.
  • binary (monochrome) image information formed from a geometric pattern or the like can be used.
  • this pattern includes a monochrome checkered pattern formed by unit rectangles each having 1 × 2 pixels, and a pseudo-random pattern formed on the basis of a predetermined seed.
  • As a method of extracting a specific spatial frequency component on the basis of the key information, a method using a spatial frequency filter can be used.
  • a coefficient for the spatial frequency filter corresponding to the key information is calculated according to procedures (1) to (4) described below. Note that coefficients may be calculated and stored in advance, or a coefficient may be calculated before the execution of extraction processing or every time extraction processing is performed.
  • FIGS. 10A and 10B are schematic views showing an example of the frequency component of the key information.
  • a white circle 1001 represents white; and a black circle 1002, black.
  • Reference numeral 1003 denotes a fundamental frequency waveform in the main scanning direction; and 1004, a fundamental frequency waveform in the sub-scanning direction.
  • a white circle 1005 represents a main color rich dot; and a black circle 1006, a complementary color rich dot. In this case, if the main color is red (R), the complementary color is cyan (C).
  • Reference numeral 1007 denotes a fundamental frequency waveform in the main scanning direction; and 1008, a fundamental frequency waveform in the sub-scanning direction.
  • Assume that the resolution of the main image information is 200 dpi, and that the print resolution of the composite image information and the read resolution of the to-be-recorded image input unit 111 are 400 dpi.
  • In processing (1) described above, when embedding processing is performed by using the key information shown in FIG. 10A, composite image information like that shown in FIG. 10B is captured by the to-be-recorded image input unit 111.
  • the embedded key information is transformed into a shape 1009 shown in FIG. 10B.
  • the fundamental frequency of this information is equal to that of the key information whose size is expanded by an amount corresponding to the ratio between the read resolution and the print resolution. In calculating a filter coefficient, therefore, changes in resolution in recording and reading operations are taken into consideration in advance.
  • a frequency filter for extracting the spatial frequency component of the key information from the composite image information is designed.
  • In order to extract the spatial frequency component of the key information from the composite image information captured by the to-be-recorded image input unit 111 by using the frequency filter coefficient calculated in advance by the above method, convolution is performed according to equation (1):

    K(x, y) = Σ_u Σ_v g(u, v) · I(x − u, y − v)    (1)

    where I is the composite image information captured by the to-be-recorded image input unit 111, g is the frequency filter coefficient, and K is the extracted spatial frequency component of the key information.
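  • A sketch of this extraction step, assuming a single color plane and an already-designed filter kernel g; the 3 × 3 kernel below is only a high-frequency placeholder, not a coefficient set actually derived from any key pattern.

```python
import numpy as np
from scipy.signal import convolve2d

def extract_key_component(plane: np.ndarray, g: np.ndarray) -> np.ndarray:
    """Equation (1): K = g * I (2-D convolution), where I is one color
    plane of the captured composite image. The kernel g must already
    account for the read/print resolution ratio (e.g., 400/200 = 2)."""
    return convolve2d(plane.astype(np.float64), g, mode="same", boundary="symm")

# Placeholder Laplacian-like kernel for illustration.
g = np.array([[-1.0,  2.0, -1.0],
              [ 2.0, -4.0,  2.0],
              [-1.0,  2.0, -1.0]]) / 4.0
```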
  • A method of extracting a specific spatial frequency component is not limited to the method using the above spatial frequency filter. It suffices to use a method of mapping the information into another space and then inversely mapping it by using a known transform such as the Fourier transform or the wavelet transform.
  • FIGS. 11 to 13 are schematic views showing the respective processes in reconstruction processing.
  • FIG. 11 shows the amplitude of the extracted spatial frequency component of the key information.
  • a white portion 1101 indicates the + side, and a black portion 1102 indicates the - side.
  • FIG. 12 shows the result obtained by projecting the coordinates of the points (zero-crossing points) where the signs "+" and "-" are switched and totalizing the number of points. In this case, coordinates which exhibit a number equal to or more than a predetermined threshold TH are extracted as the zero-crossing points of the reference phase.
  • FIG. 13 shows the result obtained by calculating the deviation of the spatial frequency component at each coordinate from the reference phase and replacing the pixel value in accordance with the deviation.
  • sub-information can be restored as monochrome binary image information from composite image information.
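  • A compact sketch of this reconstruction, under the assumption that the key pattern repeats with a known period along x; the zero-crossing detection, projection against the threshold TH, and deviation-to-pixel mapping follow FIGS. 11 to 13, but the exact mapping rule below is illustrative.

```python
import numpy as np

def reconstruct_sub(K: np.ndarray, period: int, TH: int) -> np.ndarray:
    """Reconstruct binary sub-information from the extracted spatial
    frequency component K of the key information."""
    sign = K >= 0
    zero_cross = sign[:, 1:] != sign[:, :-1]   # FIG. 12: sign switches
    counts = zero_cross.sum(axis=0)            # project onto the x axis
    ref_cols = np.flatnonzero(counts >= TH)    # columns above threshold TH
    if ref_cols.size == 0:
        return np.zeros(K.shape, dtype=np.uint8)
    ref = ref_cols[0] % period                 # reference phase
    deviation = (np.arange(K.shape[1]) - ref) % period
    # FIG. 13: replace pixel values according to the phase deviation;
    # here, in-phase columns where K is positive are taken as "on".
    return (((deviation < period // 2) & (K > 0)) * 255).astype(np.uint8)
```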
  • Jpn. Pat. Appln. KOKAI Publication No. 2001-268346 is characterized in that a plurality of pieces of key information can be used to embed sub-information (sub-image information), and an arbitrary combination of pieces of key information necessary for the restoration of the sub-information (sub-image information) can be selected.
  • the sequence of extracting a spatial frequency component corresponding to key information and reconstructing the sub-information (sub-image information) by binarizing information fragments is executed a predetermined number of times, and the resultant information fragments are combined.
  • As the Nd pieces of key information Ki, all the pieces of key information (N pieces) used in the embedding operation may be set as necessary information, or several pieces (Nd (Nd < N) pieces) selected from all the pieces according to a predetermined sequence or randomly may be used.
  • Sub-information (sub-image information) fragments may be combined by a method of concatenating the respective fragments vertically and horizontally, a method of combining the fragments by addition, exclusive ORing, and the like, or a method of using a combination of arithmetic operations such as concatenation and combining.
  • predetermined weights may be assigned to the respective fragments before arithmetic operation.
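  • A small sketch of this combining step for binarized fragments restored with different pieces of key information; the weighted-addition threshold and the set of supported modes are illustrative choices.

```python
import numpy as np

def combine_fragments(fragments, mode="xor", weights=None):
    """Combine binary (0/1) sub-information fragments by concatenation,
    (weighted) addition with a majority threshold, or exclusive OR."""
    if mode == "concat":
        return np.hstack(fragments)           # side-by-side concatenation
    stack = np.stack([np.asarray(f, dtype=np.int32) for f in fragments])
    if mode == "add":
        w = np.ones(len(fragments)) if weights is None else np.asarray(weights, dtype=np.float64)
        score = np.tensordot(w, stack, axes=1)
        return (score >= w.sum() / 2).astype(np.uint8)
    result = stack[0]                          # mode == "xor"
    for f in stack[1:]:
        result = np.bitwise_xor(result, f)
    return result.astype(np.uint8)
```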
  • Using the image processing system described above, a personal authentication medium in which a digital watermark is embedded can be prepared and the watermark can be checked, realizing a system with higher security than the prior art.
  • composite image information constituted by main image information and another additional sub-information (sub-image information) embedded in the main image information in an invisible state can be generated for analog data output to a personal authentication medium, and the digital watermark information in the composite image information can be maintained after the composite image information is recorded.
  • the digital watermarking technique can be applied to a to-be-recorded image while high gradation performance is maintained.
  • the digital watermark information (sub-information) can be stored without being destroyed and restored after recording.
  • FIG. 14 shows the arrangement of a second image processing apparatus 110 according to the second embodiment.
  • the second image processing apparatus 110 is comprised of an image input unit 11, color component information storage unit 12, color component extracting unit 13, frequency component extracting unit 14, reconstructing unit 15, and display unit 16. The function of each unit will be described in detail below.
  • the image input unit 11 captures, for example, composite image information 403 recorded on a personal authentication medium 401 in FIG. 4 using an image input device such as a camera, and converts it into digital composite image information.
  • This image information has three planes, i.e., R, G, and B planes.
  • the color component information storage unit 12 stores the information of a color component used for the restoration of a digital watermark.
  • the color component extracting unit 13 extracts the color component used for the restoration of the digital watermark from the image information input from the image input unit 11 on the basis of the color component information read out from the color component information storage unit 12.
  • the reconstructing unit 15 performs totalization processing and spatial filtering for the amplitude information and phase information of the spatial frequency component extracted by the frequency component extracting unit 14 to reconstruct a modulated component, i.e., sub-information, with respect to the key information.
  • the display unit 16 displays the sub-information reconstructed by the reconstructing unit 15 (or may simultaneously display the input image information and sub-information).
  • the color component information storage unit 12 stores in advance information about a color component exhibiting high gradation characteristics, such as information that can specify a color component exhibiting the highest gradation characteristics among the colors of inks used for printing, e.g., C, M, and Y, or the colors of optical filters used for image capturing, e.g., R, G, and B.
  • This information may be a character string indicating a color name, a numerical value indicating the wavelength of an electromagnetic wave corresponding to the color, the number of an address assigned for management in the color component information storage unit 12, or the like.
  • a color component exhibiting high gradation characteristics will be described below by taking ink in a printer as an example.
  • inks of different colors have different compositions and hence different physical properties. Since gradation characteristics depend on physical properties, inks of different colors consequently differ in gradation characteristics.
  • a printer like a sublimation type printer designed to change the density of ink adhering to a print target by controlling the applied energy exhibits characteristics represented by the relationship between the applied energy and the density shown in FIG. 15.
  • the density does not change much in a region where the applied energy is high and a region where the applied energy is low, and the density changes in accordance with the applied energy in an intermediate region.
  • Gradation characteristics are determined by the controllability of the density of ink and the feasible density range. With regard to the former, better gradation characteristics appear as the gradient of a plot of applied energy versus density becomes more moderate. With regard to the latter, better gradation characteristics appear as the difference between the minimum density and the maximum density increases.
  • FIG. 15 shows the applied energy/density characteristics of three types of ink.
  • ink 1 exhibits a narrow feasible density range
  • ink 3 exhibits low controllability.
  • ink 2 exhibits the best gradation characteristics.
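  • As a worked illustration of these two criteria, the sketch below scores a sampled applied-energy/density curve by its feasible density range and its maximum gradient; the scalar combination is an assumption, since the text ranks inks qualitatively rather than by a formula.

```python
import numpy as np

def gradation_score(energy: np.ndarray, density: np.ndarray) -> float:
    """A larger feasible density range and a more moderate (smaller)
    maximum gradient both yield a higher score."""
    density_range = float(density.max() - density.min())
    max_gradient = float(np.abs(np.gradient(density, energy)).max())
    return density_range / max_gradient

# Example: with curves sampled for inks 1-3 as in FIG. 15, the ink with
# the highest score would correspond to ink 2.
```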
  • gradation characteristics described above are not unique to the scheme exemplified above.
  • differences in physical property between inks also produce differences in gradation characteristics in a scheme like a fusion type scheme designed to change the areas of dots instead of density.
  • the relationship between the restoration of a digital watermark and gradation characteristics will be described next.
  • the digital watermark embedded by color difference modulation processing can be restored through a process of color component extraction → spatial frequency component extraction → sub-information reconstruction. Since the details of the process have already been described, only its outline will be described.
  • a color difference is extracted from a captured image, and the color difference is converted into corresponding information, e.g., bit on/off information, in sub-information reconstruction processing, thereby restoring the digital watermark.
  • Information that can be expressed by a color difference depends on gradation characteristics. That is, information that can be expressed decreases in amount as the gradation characteristics decrease, and vice versa.
  • the gradation characteristics change for each color component when digital information is converted into analog information or conversion inverse thereto is performed. Using a color component exhibiting the highest gradation characteristics can suppress a decrease in the amount of information that can be expressed by a color difference, thereby improving the restoration characteristics of the digital watermark.
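A minimal sketch of this selection step (assumptions: the stored color component information reduces to a captured plane name such as "G"; the mapping from ink colors to the R/G/B planes is illustrative only):

```python
import numpy as np

PLANE_INDEX = {"R": 0, "G": 1, "B": 2}

def extract_color_component(image_rgb: np.ndarray, component: str) -> np.ndarray:
    """Return the plane named by the stored color component information."""
    return image_rgb[:, :, PLANE_INDEX[component]]

captured = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
best_plane = extract_color_component(captured, "G")  # plane with the best gradation
```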
  • FIG. 16 shows the arrangement of a second image processing apparatus 110 according to the third embodiment.
  • the second image processing apparatus 110 is comprised of an image input unit 21, area extracting unit 22, color feature extracting unit 23, color combining unit 24, frequency component extracting unit 25, reconstructing unit 26, and display unit 27. The function of each unit will be described in detail below.
  • the image input unit 21 captures, for example, composite image information 403 recorded on a personal authentication medium 401 in FIG. 4 using an image input device such as a camera, and converts it into digital composite image information.
  • This image information has three planes, i.e., R, G, and B planes.
  • the area extracting unit 22 sets an area having a predetermined size on the image information input from the image input unit 21.
  • As the size of the area to be set, the number of pixels corresponding to the fundamental wavelength of the spatial frequency component of the key information used in the embedding operation is used.
  • the color feature extracting unit 23 totalizes the values of pixels existing in the area set by the area extracting unit 22 with respect to each of the R, G, and B planes, and calculates the feature of each color.
  • the color combining unit 24 calculates weights used in combining the respective planes, i.e., the R, G, and B planes, on the basis of the totalization result obtained by the color feature extracting unit 23.
  • the reconstructing unit 26 performs totalization processing and spatial filtering for the amplitude information and phase information of the spatial frequency component extracted by the frequency component extracting unit 25 to reconstruct a modulated component, i.e., sub-information, with respect to the key information.
  • the display unit 27 displays the sub-information reconstructed by the reconstructing unit 26 (or may simultaneously display the input image information and sub-information).
  • FIG. 17 shows an outline of the flow of processing in the area extracting unit 22, color feature extracting unit 23, and color combining unit 24.
  • the area extracting unit 22 sets an extraction target area S for a color feature, centered on a pixel of interest, on the digital image information captured by the image input unit 21 (step S31).
  • The size of the area S is the number of pixels corresponding to the fundamental wavelength of the spatial frequency component of the key information used in the embedding operation, i.e., a value based on the fundamental wavelength which is set in consideration of changes in resolution between the recording and reading operations. If, for example, the fundamental wavelength is two pixels and the record and read resolutions for a main image are 200 dpi and 400 dpi, respectively, the size of the area S becomes four pixels. Assume that the pixel of interest is sequentially moved from left to right and from bottom to top of the image, and that the movement amount corresponds to the size of the area.
  • the position of the first pixel of interest is calculated in advance from the placement position of the personal authentication medium at the time of image input operation and the record position of a facial image in the personal authentication medium.
  • the color feature in each area S is obtained by using the values of pixels in the area for each plane (step S32).
  • A color feature is a numerical value representing a color in the area.
  • As the color feature, the average luminance value of the pixels in the area is used for each plane.
  • the color combining unit 24 determines a color combining parameter PC corresponding to the color feature in each plane (step S33), and performs color combining processing for the respective pixels in the area on the basis of the color combining parameter PC (step S34).
  • When the color combining parameter PC is determined from the color feature, a three-dimensional array in which the color combining parameter PC is stored using the color feature as an index is used.
  • Color combining computation is performed for the respective pixels in the area by using the color combining parameters determined in the above manner.
  • As the color combining computation, the linear sum obtained by multiplying the pixel values in the respective planes by the corresponding color combining parameters as weights and adding the resultant values is used.
  • the above processing is applied to the overall input image to obtain a color composite image T.
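The per-area flow of steps S31 to S34 can be sketched as follows (a hedged illustration: the weighting rule below, which normalizes the per-plane features so the weights sum to one, stands in for the patent's three-dimensional lookup array of color combining parameters):

```python
import numpy as np

def area_size(fundamental_wavelength_px, record_dpi, read_dpi):
    # e.g., a 2-pixel fundamental wavelength recorded at 200 dpi and read at
    # 400 dpi gives a 4-pixel area, matching the example in the text.
    return int(fundamental_wavelength_px * read_dpi / record_dpi)

def color_combine(image_rgb, size):
    """Steps S31-S34: per-area color features and weighted plane combination."""
    h, w, _ = image_rgb.shape
    out = np.zeros((h, w))
    for y in range(0, h - size + 1, size):      # pixel of interest moves by the area size
        for x in range(0, w - size + 1, size):
            area = image_rgb[y:y + size, x:x + size, :].astype(float)
            feature = area.reshape(-1, 3).mean(axis=0)  # average luminance per plane
            # Assumed rule standing in for the 3-D lookup array:
            # normalize the features so the weights sum to one.
            p = feature / max(feature.sum(), 1e-9)
            out[y:y + size, x:x + size] = area @ p      # linear sum of the planes
    return out  # color composite image T

image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
t = color_combine(image, area_size(2, 200, 400))
```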
  • the frequency component extracting unit 25 extracts a specific spatial frequency component from the color composite image T on the basis of the key information used in digital watermark embedding processing.
  • the reconstructing unit 26 reconstructs sub-information.
  • the position of the first pixel of interest is determined from the placement position of the personal authentication medium and the record position of the facial image.
  • the present invention is not limited to this method.
  • a specific pattern is recorded in or near a facial image in advance, and the position of the first pixel of interest may be determined by using the result obtained by detecting the position of the pattern.
  • a color feature may be calculated from a predetermined range centered on the pixel of interest.
  • As a color feature, the average luminance of the pixels in an area is used.
  • a color feature may be a median or mode, or a value indicating a distribution shape, e.g., a value obtained by correction using a standard deviation or variance as a weight.
  • a color feature may be a value obtained by applying a predetermined function or algorithm or the like.
  • a color combining parameter is determined from a three-dimensional array.
  • Alternatively, a color composite value Tij = AT[Rij, Gij, Bij, PR, PG, PB] may be set by using a six-dimensional array AT storing color composite values using the color features and color combining parameters as indexes.
  • the influence of deterioration is reduced by combining the color components of the input image information in this manner, thereby improving the restoration characteristics of the embedded sub-information.
  • FIG. 18 shows the arrangement of a second image processing apparatus 110 according to the fourth embodiment.
  • the second image processing apparatus 110 is comprised of an image input unit 41, area extracting unit 42, color feature extracting unit 43, amplification coefficient determining unit 44, frequency component extracting unit 45, extracted signal amplifying unit 46, reconstructing unit 47, and display unit 48. The function of each unit will be described in detail below.
  • the image input unit 41 captures, for example, composite image information 403 recorded on a personal authentication medium 401 in FIG. 4 using an image input device such as a camera, and converts it into digital composite image information.
  • This image information is constituted by three planes, i.e., R, G, and B planes.
  • the area extracting unit 42 sets an area having a predetermined size on the image information input from the image input unit 41.
  • As the size of the area to be set, the number of pixels corresponding to the fundamental wavelength of the spatial frequency component of the key information used in the embedding operation is used.
  • the color feature extracting unit 43 totalizes the values of pixels existing in the area set by the area extracting unit 42 with respect to each of the R, G, and B planes, and calculates the feature of each color.
  • the amplification coefficient determining unit 44 determines an amplification coefficient for the reconstruction of a restoration result on the basis of the totalization result obtained by the color feature extracting unit 43.
  • the extracted signal amplifying unit 46 performs amplification processing for the amplitude information or phase information of the spatial frequency component extracted by the frequency component extracting unit 45 by using the amplification coefficient determined by the amplification coefficient determining unit 44.
  • the reconstructing unit 47 performs totalization processing and spatial filtering for the amplitude information or phase information of the spatial frequency component amplified by the extracted signal amplifying unit 46 to reconstruct a modulated component, i.e., sub-information, with respect to the key information.
  • the display unit 48 displays the sub-information reconstructed by the reconstructing unit 47 (or may simultaneously display the input image information and sub-information).
  • FIG. 19 shows an outline of the flow of processing in the extracted signal amplifying unit 46.
  • the color feature extracting unit 43 calculates a color feature by performing the same processing as that performed by the color feature extracting unit 23 described in the third embodiment (step S51).
  • the amplification coefficient determining unit 44 determines an amplification coefficient (amplification factor) M in the area from the color feature in each plane (step S52).
  • Let RF, GF, and BF be the color features in the R, G, and B planes in an area S having m × n pixels.
  • the extracted signal amplifying unit 46 multiplies the spatial frequency component extracted by the frequency component extracting unit 45 by the amplification coefficient M obtained by the amplification coefficient determining unit 44 in each area (step S53). With this processing, the extracted frequency components are corrected such that variations in maximum value in the respective areas fall within a predetermined range.
  • Let fM be an amplification coefficient computation function for determining an amplification coefficient from a color feature; the amplification coefficient in the area is then M = fM(RF, GF, BF).
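A minimal sketch of steps S52 and S53 (the concrete form of fM below, which boosts areas whose mean color feature is low, is an assumption; the patent only requires that M be derived from the color features):

```python
import numpy as np

def amplification_coefficient(rf, gf, bf, target=128.0):
    """Assumed fM: areas whose average color features are low carry a weaker
    modulated signal, so they receive a proportionally larger gain."""
    mean_feature = (rf + gf + bf) / 3.0
    return target / max(mean_feature, 1.0)

def amplify_area(extracted_component, m):
    # Step S53: multiply the spatial frequency component extracted in the
    # area by M so that peak values across areas fall within a common range.
    return extracted_component * m

area_component = np.random.randn(4, 4)              # extracted component of one area
m = amplification_coefficient(200.0, 180.0, 190.0)  # color features of the area
corrected = amplify_area(area_component, m)
```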
  • the visibility of embedded sub-information can be improved by correcting reconstruction processing in accordance with the color components of input image information.
  • a multilevel image is created by multiplying an extracted spatial frequency component by an amplification coefficient.
  • a binary image may be created by performing binarization processing for the areas using a three-dimensional array A B storing binarization thresholds corresponding to color features in the respective areas. According to this method, the data amount of a restoration result can be reduced, and the contrast of the restoration result can be improved.
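A short sketch of this binarization variant (the simple mean-feature threshold stands in for the three-dimensional threshold array AB, whose contents the patent leaves unspecified):

```python
import numpy as np

def binarize_area(amplitude, rf, gf, bf):
    """Assumed stand-in for AB[rf, gf, bf]: threshold the amplitude of the
    extracted component at half the mean color feature of the area."""
    threshold = 0.5 * (rf + gf + bf) / 3.0
    return (amplitude >= threshold).astype(np.uint8)   # binary restoration result
```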
  • Thus, correcting reconstruction processing in accordance with the color components of input image information makes it possible to improve the visibility of sub-information restored from printed composite image information without increasing the risk of disclosing the sub-information.
  • As described above, the present invention can provide an image processing system and an image processing method and apparatus which can create composite image information constituted by main image information and additional sub-information embedded in the main image information in an invisible state for analog data output to a recording medium, and maintain the digital watermark information in the composite image information after the composite image information is recorded.
  • It can also provide an image processing system and an image processing method and apparatus which can apply a digital watermarking technique to a to-be-recorded image while maintaining high gradation performance in the fusion thermal transfer recording scheme, and allow the watermark information (sub-information) to be stored and restored without destruction after it is recorded.
  • It can further provide an image processing system and an image processing method and apparatus which can restore a digital watermark that is resistant to deterioration of the image information.
  • It can further provide an image processing method and apparatus which can improve the restoration characteristics and visibility of embedded sub-information without increasing the risk of disclosing the sub-information.


Abstract

Composite image information which is generated by embedding invisible sub-information in visible main image information and recorded on a recording medium (M) is read. A spatial frequency component unique to key information used to restore the sub-information is extracted from the composite image information read from the recording medium (M). The sub-information is reconstructed from the extracted spatial frequency component.

Description

  • The present invention relates to an image processing system which creates composite image information by embedding, in visible main image information (e.g., a human facial image), another additional sub-information (e.g., security information) in an invisible state, records the created composite image information on a recording medium, and restores the embedded sub-information from the recorded composite image information, and an image processing method and apparatus used in the image processing system.
  • Recently, with the trend toward computerization of information and the proliferation of the Internet, increasing importance has been attached to a digital watermarking technique, digital signature technique, and the like to prevent counterfeiting and alteration of images. The digital watermarking technique of embedding additional sub-information (sub-image information) in main image information in an invisible state, in particular, has been provided as a countermeasure against unauthorized copying, counterfeiting, and tampering of a personal authentication medium such as an ID card or a photograph in which copyright information is embedded.
  • For example, Jpn. Pat. Appln. KOKAI Publication No. 9-248935 discloses a digital watermark insertion method of embedding data in image data output onto printed matter by using the characteristics of high spatial frequency components and color difference components which are difficult for humans to perceive. Jpn. Pat. Appln. KOKAI Publication No. 2001-268346 discloses a printing apparatus for digital watermarks that can be recognized through optical filters.
  • In Jpn. Pat. Appln. KOKAI Publication No. 9-248935, there is a description about a method of predicting the degree of deterioration and increasing the strength of embedding in accordance with the predicted degree. This method, however, increases the risk of disclosing sub-information.
  • A sublimation/thermal transfer recording scheme is generally used to record facial images for personal authentication on various kinds of ID cards such as driver's licenses and personal authentication media typified by membership cards.
  • In general, in the sublimation/thermal transfer recording scheme, materials that can be dyed with sublimable materials are limited. This scheme can therefore be adapted to only limited recording media. For this reason, the degree of freedom in selecting recording media as personal authentication media on which facial images for personal authentication are recorded is low. As a consequence, easily available media must be selected. This often decreases the security. In addition, sublimable dyes generally have poor image durability, e.g., poor light resistance and poor solvent resistance.
  • In contrast to this, in a fusion thermal transfer recording scheme, a material having good light resistance can be generally selected as a coloring material. This scheme therefore allows a high degree of freedom of choice regarding recording media. In the scheme, therefore, a high-specialty recording medium can be used. This makes it possible to improve security. The fusion thermal transfer recording scheme, however, uses a dot area gradation method of performing gradation recording by changing the sizes of transferred dots. With this scheme, therefore, it is difficult to realize as high gradation performance as that with the sublimation/thermal transfer recording scheme.
  • In order to solve this problem, for example, Jpn. Pat. Appln. KOKOKU Publication No. 6-59739 discloses a method of recording transferred dots in a so-called staggered array (this method will be referred to as an alternate driving/recording scheme hereinafter).
  • Jpn. Pat. Appln. KOKOKU Publication No. 6-59739 discloses a recording method of improving the gradation recording performance in the fusion thermal transfer recording scheme. If, however, facial image information in which watermark information is embedded by using a digital watermarking technique is recorded, data is thinned out in a staggered pattern, resulting in loss of corresponding information. This destroys the digital watermark information.
  • It is, therefore, an object of the present invention to provide an image processing system and image processing method and apparatus which can create composite image information constituted by main image information and another additional sub-information embedded in the main image information in an invisible state for analog data output to a recording medium, and maintain the digital watermark information in the composite image information after recording.
  • It is another object of the present invention to provide an image processing system and image processing method and apparatus which can apply a digital watermarking technique to a to-be-recorded image while maintaining high gradation performance in the fusion thermal transfer recording scheme, and allow the watermark information (sub-information) to be stored and restored without destruction after it is recorded.
  • It is still another object of the present invention to provide an image processing system and image processing method and apparatus which can restore a digital watermark which is resistant to a deterioration in image information.
  • It is still another object of the present invention to provide an image processing method and apparatus which can improve restoration characteristics and visibility of embedded sub-information without increasing the risk of disclosing the sub-information.
  • According to the present invention, there is provided an image processing system comprising a first image processing apparatus which records, on a recording medium, composite image information created by embedding invisible sub-information in visible main image information, and a second image processing apparatus which restores the sub-information from the composite image information recorded on the recording medium by the first image processing apparatus, the first image processing apparatus including a pre-processing unit which performs, for main image information, pre-processing corresponding to pixel formation processing for image recording in the first image processing apparatus, an embedding processing unit which creates composite image information by embedding sub-information in main image information in an invisible state using the main image information, the sub-information, and key information used to restore the sub-information, and a recording unit which records the composite image information created by the embedding processing unit on a recording medium, and the second image processing apparatus including an image input unit which inputs the composite image information from the recording medium on which the composite image information is recorded by the recording unit of the first image processing apparatus, a frequency component extracting unit which extracts a spatial frequency component unique to the key information from the composite image information input by the image input unit, and a reconstructing unit which reconstructs the sub-information from the spatial frequency component extracted by the frequency component extracting unit.
  • According to the present invention, there is provided an image processing system comprising a first image processing apparatus which records, on a recording medium, composite image information created by embedding invisible sub-information in visible main image information, and a second image processing apparatus which restores the sub-information from the composite image information recorded on the recording medium by the first image processing apparatus, the first image processing apparatus including a first pre-processing unit which performs, for main image information, first pre-processing corresponding to pixel formation processing for image recording in the first image processing apparatus, a second pre-processing unit which performs geometric transformation with respect to the main image information having undergone the first pre-processing by the first pre-processing unit, an embedding processing unit which creates composite image information by embedding sub-information in main image information in an invisible state using the main image information, the sub-information, and key information used to restore the sub-information, an inverse transformation unit which performs transformation processing inverse to the transformation processing by the second pre-processing unit with respect to the composite image information created by the embedding processing unit, and a recording unit which records the composite image information having undergone the inverse transformation processing by the inverse transformation unit by an alternate driving/recording scheme of alternately forming even-numbered and odd-numbered pixels in a main scanning direction of a recording device on a recording line basis, and the second image processing apparatus including an image input unit which inputs the composite image information from the recording medium on which the composite image information is recorded by the recording unit of the first image processing apparatus, a frequency component extracting unit which extracts a spatial frequency component unique to the key information from the composite image information input by the image input unit, and a reconstructing unit which reconstructs the sub-information from the spatial frequency component extracted by the frequency component extracting unit.
  • According to the present invention, there is provided an image processing apparatus comprising an image input unit which inputs composite image information from a recording medium on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information, a frequency component extracting unit which extracts a spatial frequency component unique to the key information from the composite image information input by the image input unit, and a reconstructing unit which reconstructs the sub-information from the spatial frequency component extracted by the frequency component extracting unit.
  • According to the present invention, there is provided an image processing apparatus comprising an image input unit which inputs composite image information from a recording medium on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information, a color component information storage unit which stores color component information, a color component extracting unit which extracts a color component from the composite image information input by the image input unit on the basis of the color component information stored in the color component information storage unit, a frequency component extracting unit which extracts a spatial frequency component unique to the key information from the color component extracted by the color component extracting unit, and a reconstructing unit which reconstructs the sub-information from the spatial frequency component extracted by the frequency component extracting unit.
  • According to the present invention, there is provided an image processing apparatus comprising an image input unit which inputs composite image information from a recording medium on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information, an area extracting unit which extracts a local area from the composite image information input by the image input unit, a color feature extracting unit which extracts a color feature in the local area extracted by the area extracting unit from the local area, a color combining unit which creates color component composite image information by combining color components on the basis of the color feature extracted by the color feature extracting unit, a frequency component extracting unit which extracts a spatial frequency component unique to the key information from the color component composite image information created by the color combining unit, and a reconstructing unit which reconstructs the sub-information from the spatial frequency component extracted by the frequency component extracting unit.
  • According to the present invention, there is provided an image processing apparatus comprising an image input unit which inputs composite image information from a recording medium on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information, an area extracting unit which extracts a local area from the composite image information input by the image input unit, a color feature extracting unit which extracts a color feature in the local area extracted by the area extracting unit from the local area, a reconstruction parameter determining unit which determines a reconstruction parameter on the basis of the color feature extracted by the color feature extracting unit, a frequency component extracting unit which extracts a spatial frequency component unique to the key information from the composite image information input by the image input unit, and a reconstructing unit which reconstructs the sub-information from the spatial frequency component extracted by the frequency component extracting unit by using the reconstruction parameter determined by the reconstruction parameter determining unit.
  • According to the present invention, there is provided an image processing method comprising inputting composite image information from a recording medium on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information, extracting a spatial frequency component unique to the key information from the composite image information input from the recording medium, and reconstructing the sub-information from the spatial frequency component extracted by extracting the frequency component.
  • According to the present invention, there is provided an image processing method comprising inputting composite image information from a recording medium on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information, extracting a color component from the composite image information input from the recording medium on the basis of the color component information stored in a color component information storage unit, extracting a spatial frequency component unique to the key information from the extracted color component, and reconstructing the sub-information from the extracted spatial frequency component.
  • According to the present invention, there is provided an image processing method comprising inputting composite image information from a recording medium on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information, extracting a local area from the composite image information input from the recording medium, extracting, from the local area, a color feature of the local area extracted from the composite image information, creating color component composite image information by combining color components on the basis of the color feature extracted from the local area, extracting a spatial frequency component unique to the key information from the created color component composite image information, and reconstructing the sub-information from the extracted spatial frequency component.
  • According to the present invention, there is provided an image processing method comprising inputting composite image information from a recording medium on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information, extracting a local area from the composite image information input from the recording medium, extracting a color feature in the extracted local area from the local area, determining a reconstruction parameter on the basis of the color feature extracted from the local area, extracting a spatial frequency component unique to the key information from the composite image information input from the recording medium, and reconstructing the sub-information from the spatial frequency component extracted by extracting the frequency component by using the reconstruction parameter determined on the basis of the color feature.
  • This summary of the invention does not necessarily describe all necessary features so that the invention may also be a sub-combination of these described features.
  • The invention can be more fully understood from the following detailed description when taken in conjunction with the accompanying drawings, in which:
  • FIG. 1 is a block diagram schematically showing the overall arrangement of an image processing system according to the first embodiment of the present invention;
  • FIG. 2 is a flowchart for explaining the flow of processing from the first pre-processing unit to the post-processing unit;
  • FIG. 3 is a view showing how recorded dots are arrayed;
  • FIG. 4 is a plan view schematically showing an example of a generated personal authentication medium;
  • FIG. 5A is a view showing a specific example of image information to be recorded;
  • FIG. 5B is a view showing a specific example of dots to be recorded;
  • FIG. 6A is a view for explaining the concepts of the first pre-processing and second pre-processing;
  • FIG. 6B is a view for explaining the concepts of the first pre-processing and second pre-processing;
  • FIG. 7A is a view for explaining a specific example of the first pre-processing and second pre-processing;
  • FIG. 7B is a view for explaining a specific example of the first pre-processing and second pre-processing;
  • FIG. 7C is a view for explaining a specific example of the first pre-processing and second pre-processing;
  • FIG. 7D is a view for explaining a specific example of the first pre-processing and second pre-processing;
  • FIG. 8 is a view showing the flow of overall digital watermark embedding processing;
  • FIG. 9 is a flowchart schematically showing a sequence in digital watermark embedding processing;
  • FIG. 10A is a schematic view for explaining the frequency component of key information;
  • FIG. 10B is a schematic view for explaining the frequency component of the key information;
  • FIG. 11 is a schematic view showing a process in reconstruction processing for sub-information;
  • FIG. 12 is a schematic view showing a process in reconstruction processing for sub-information;
  • FIG. 13 is a schematic view showing a process in reconstruction processing for sub-information;
  • FIG. 14 is a block diagram schematically showing the arrangement of a second image processing apparatus according to the second embodiment of the present invention;
  • FIG. 15 is a graph for explaining the gradation characteristics of ink;
  • FIG. 16 is a block diagram schematically showing the arrangement of a second image processing apparatus according to the third embodiment of the present invention;
  • FIG. 17 is a view for explaining the flows of color feature extraction processing and color combining processing;
  • FIG. 18 is a block diagram schematically showing the arrangement of a second image processing apparatus according to the fourth embodiment of the present invention; and
  • FIG. 19 is a view for explaining the flow of extracted signal amplifying processing.
  • Embodiments of the present invention will be described below with reference to the views of the accompanying drawing.
  • The first embodiment will be described first.
  • FIG. 1 schematically shows the overall arrangement of an image processing system according to the first embodiment. The image processing system shown in FIG. 1 is applied to, for example, processing of facial images for personal authentication on personal authentication media such as ID cards. The image processing system shown in FIG. 1 is comprised of a first image processing apparatus 100 and second image processing apparatus 110. The first image processing apparatus 100 records, on a recording medium M, composite image information created by embedding sub-information (sub-image information), in a state invisible to the naked human eye, in main image information that is visible to the naked human eye. The second image processing apparatus 110 restores the embedded sub-information from the composite image information recorded on the recording medium M by the first image processing apparatus 100.
  • The first image processing apparatus 100 includes a main image input unit 101, first pre-processing unit 102, second pre-processing unit 103, digital watermark embedding unit 104, post-processing unit 105, and recording unit 106.
  • The function of each unit of the first image processing apparatus 100 will be described in detail below.
  • The main image input unit 101 captures a facial image of the holder of a personal authentication medium and converts it into digital image information. It either photographs the holder directly or captures a facial portrait using an image input device such as a scanner, thereby digitizing the personal facial image information. At this point, the main image information (facial image information) has three planes, i.e., R (red), G (green), and B (blue) planes.
  • The first pre-processing unit 102 converts the facial image information (to be also referred to as main image information hereinafter) captured by the main image input unit 101 into a form suitable for pixel formation processing by the recording unit 106 (to be described later).
  • The second pre-processing unit 103 converts the image information converted by the first pre-processing unit 102 into a form suitable for digital watermark embedding processing.
  • The digital watermark embedding unit 104 embeds digital watermark information in the image information converted by the second pre-processing unit 103 by using key information.
  • The post-processing unit 105 performs processing inverse to the processing by the second pre-processing unit 103 for the image information in which the digital watermark is embedded to convert the information back to a form suitable for pixel formation processing by the recording unit 106 (to be described later).
  • The recording unit 106 prints/records, on the recording medium M, the composite image information in which the digital watermark is embedded, on the basis of the image information converted by the post-processing unit 105.
  • The units ranging from the first pre-processing unit 102 to the post-processing unit 105 embed a digital watermark in main image information and convert it into a form suitable for pixel formation processing by the recording unit 106. The flow of these processes will be described with reference to the flowchart shown in FIG. 2.
  • First of all, the first pre-processing unit 102 performs the first pre-processing corresponding to pixel formation processing by the recording unit 106 for the main image information captured by the main image input unit 101 to create main image information having undergone the first pre-processing (step S201). In this case, the first pre-processing unit 102 performs thinning-out (invalidation) processing for the main image information.
  • The second pre-processing unit 103 performs geometric transformation processing for the main image information having undergone the first pre-processing which is created in step S201, thereby creating image information subjected to embedding (step S202). In this case, the second pre-processing unit 103 performs rotation processing for the main image information having undergone the first pre-processing, and removes pixel portions thinned out in the first pre-processing to compress the effective image size.
  • The digital watermark embedding unit 104 performs digital watermark embedding processing for the image information subjected to embedding (main image information having undergone the second pre-processing) created in step S202 (step S203). In this case, the digital watermark embedding unit 104 creates composite image information by embedding, in image information subjected to embedding, sub-information in an invisible state in which the information cannot be perceived by the human eye.
  • The post-processing unit 105 then creates to-be-recorded image information by performing post-processing for the composite image information created in step S203 (step S204). In this case, the post-processing unit 105 performs reverse rotation processing for the composite image information, and expands the effective image size by adding the pixel portions removed in the second pre-processing.
  • Note that digital watermark embedding processing is not limited to R (red), G (green), and B (blue) data. For example, color conversion (to be described later) may be performed first, and then digital watermark embedding processing may be performed for data having three planes, i.e., C (cyan), M (magenta), and Y (yellow) planes.
  • The recording unit 106 prepares a personal authentication medium by printing/recording the to-be-recorded image information created by the post-processing unit 105 on the recording medium M serving as a personal authentication medium. More specifically, the recording unit 106 performs color conversion of R (red), G (green), and B (blue) of the respective pixels of the to-be-recorded image information into C (cyan), M (magenta), and Y (yellow). For example, this color conversion method uses a 3 × 3 or 3 × 9 color conversion matrix or LUT (lookup table) in accordance with the characteristics of the recording device. The recording unit 106 then generates a driving signal for controlling the recording device from the pieces of C, M, and Y image information. In the case of the fusion thermal transfer recording scheme, for example, a driving voltage control signal, driving pulse signal, and the like for a thermal head are generated. The recording unit 106 also performs heat control for the thermal head. Finally, the recording unit 106 records the composite image information on the recording medium M by alternately forming even-numbered and odd-numbered pixels in the main scanning direction of the recording device, represented by a thermal head, on a recording line basis.
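As a minimal sketch of the color conversion step (the pure complement CMY = 1 − RGB with an identity placeholder matrix stands in for the device-tuned 3 × 3/3 × 9 matrix or LUT, which is device-specific and therefore assumed here):

```python
import numpy as np

def rgb_to_cmy(image_rgb: np.ndarray) -> np.ndarray:
    """Placeholder conversion: CMY as the complement of RGB. A real recording
    unit would use a device-tuned conversion matrix or LUT instead."""
    rgb = image_rgb.astype(float) / 255.0
    m = np.eye(3)                        # stand-in for the device color matrix
    cmy = 1.0 - np.clip(rgb @ m.T, 0.0, 1.0)
    return np.round(cmy * 255.0).astype(np.uint8)
```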
  • The dots formed on the recording medium M are arrayed as shown in FIG. 3. On the line A - A' in FIG. 3, the dots are not spaced at every other position; they are arranged at a pitch d (1/√2 the pitch of the heating elements of the thermal head) and form a line in the 45° direction.
  • FIG. 4 shows an example of a personal authentication medium 401 such as an ID card prepared by the recording unit 106. A personal authentication facial image 402 of the holder is recorded on the personal authentication medium 401. The facial image 402 is the image created and recorded by the processing described with reference to FIG. 2. In addition, personal management information 403 such as an identification number (No.), name, date of birth, and expiration date is recorded on the personal authentication medium 401. The personal management information 403 is used as the sub-information in the digital watermark embedding processing in step S203 in FIG. 2. Consequently, the personal authentication facial image 402 of the personal authentication medium 401 is associated with the personal management information 403. This makes it difficult to partially tamper with or counterfeit the personal authentication medium 401, resulting in improved security.
  • The second image processing apparatus 110 is comprised of a to-be-recorded image input unit 111, restoring unit 115, and determining unit 114.
  • The function of each unit of the second image processing apparatus 110 will be described in detail below.
  • The to-be-recorded image input unit 111 reads/inputs, for example, the composite image information 403 recorded on the personal authentication medium 401 in FIG. 4, and converts it into digital image information. The to-be-recorded image input unit 111 captures the composite image information 403 recorded on the personal authentication medium 401 by using an image input device such as a camera, and converts the information into digital composite image information. At this point of time, the image information has three planes, i.e., R (red), G (green), and B (blue) planes, as in the case of the main image input unit 101.
  • The restoring unit 115 restores digital watermark information (sub-information) from the composite image information captured by the to-be-recorded image input unit 111. The restoring unit 115 is comprised of a frequency component extracting unit 112 and reconstructing unit 113.
  • The frequency component extracting unit 112 extracts the spatial frequency component of key information from the composite image information captured by the to-be-recorded image input unit 111. The frequency component extracting unit 112 frequency-filters the composite image information captured by the to-be-recorded image input unit 111 to extract the amplitude (= strength) information of the frequency component of the embedded key information.
  • The reconstructing unit 113 reconstructs digital watermark information (sub-information) from the spatial frequency component extracted by the frequency component extracting unit 112. The reconstructing unit 113 performs totalization processing and spatial filtering for the amplitude information and phase information of the spatial frequency component extracted by the frequency component extracting unit 112, thereby reconstructing a modulated component, i.e., sub-information, with respect to the key information.
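The extraction and reconstruction steps can be sketched as follows (assumptions: the key information manifests as a single spatial frequency, here expressed in cycles/pixel; the ring band-pass, the filter width, and the moving-average kernel size are illustrative choices, not the patent's exact filters):

```python
import numpy as np

def box_filter(a, k):
    """Totalization / spatial filtering: separable k x k moving average."""
    kernel = np.ones(k) / k
    a = np.apply_along_axis(np.convolve, 0, a, kernel, mode="same")
    a = np.apply_along_axis(np.convolve, 1, a, kernel, mode="same")
    return a

def extract_key_component(plane, key_freq, bandwidth=0.02):
    """Ring band-pass around the spatial frequency unique to the key information."""
    spectrum = np.fft.fft2(plane.astype(float))
    fy = np.fft.fftfreq(plane.shape[0])[:, None]
    fx = np.fft.fftfreq(plane.shape[1])[None, :]
    mask = np.abs(np.hypot(fx, fy) - key_freq) < bandwidth
    return np.fft.ifft2(spectrum * mask)

def reconstruct_sub_information(component, k=8):
    amplitude = np.abs(component)            # amplitude (= strength) information
    smoothed = box_filter(amplitude, k)      # totalization over neighborhoods
    return smoothed > smoothed.mean()        # bit on/off decision

plane = np.random.rand(128, 128)             # captured composite image plane
bits = reconstruct_sub_information(extract_key_component(plane, key_freq=0.5))
```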
  • The determining unit 114 determines the authenticity of the personal authentication medium on the basis of the digital watermark information restored by the restoring unit 115. The determining unit 114 collates the sub-information (personal management information) restored by the restoring unit 115 with the personal management information 403 on the personal authentication medium 401 which is captured by the to-be-recorded image input unit 111, thereby determining the authenticity of the personal authentication medium 401.
  • An alternate driving/recording scheme and a fusion thermal transfer recording scheme of recording dots in a staggered array according to the first embodiment will be described next.
  • In the fusion thermal transfer recording scheme, an image is formed based on the presence/absence of dots. When a multi-tone image is to be expressed in the above fusion thermal transfer recording scheme, therefore, the apparent density is controlled by performing area modulation processing of changing the areas of dots. For this reason, in the fusion thermal transfer recording scheme, it is required to accurately modulate the sizes of dots. In order to meet this requirement, the alternate driving/recording scheme is preferably used.
  • The above alternate driving/recording scheme is a scheme of alternately driving the odd-numbered heating elements of the odd-numbered lines and the even-numbered heating elements of the even-numbered lines of the recording head (line thermal head) on a recording line basis. Assume that image information to be recorded is arranged in a lattice pattern, as shown in FIG. 5A. In actual recording operation, the image information is recorded in a staggered pattern to form an image, as shown in FIG. 5B. Therefore, the even-numbered information of each odd-numbered line and the odd-numbered information of each even-numbered line of the image information to be recorded are omitted in actual recording operation.
  • This indicates that even if sub-information is embedded in the image information to be recorded in an invisible state by digital watermark embedding processing, only an area half that of the original image information is effective. That is, half of the original image information becomes ineffective, and the information of the ineffective portion is omitted. This means that the digital watermark is destroyed or altered. In general, when a digital watermark is destroyed in this manner, it is very difficult to restore the sub-information. This makes it impossible to maintain security.
  • In the first embodiment, therefore, in performing the digital watermark embedding processing in step S203, the first pre-processing in step S201 and the second pre-processing in step S202 are performed. In addition, after the digital watermark embedding processing in step S203, the post-processing in step S204 is performed. This makes it possible to prevent the destruction of the digital watermark in alternate driving/recording operation.
  • In the first embodiment, the concepts of the first pre-processing and second pre-processing will be described below with reference to FIGS. 6A and 6B.
  • In the first pre-processing (step S201), image information corresponding to pixels to which no energy is applied in the alternate driving/recording scheme is thinned out. FIG. 6A shows an example of the pixel array of the overall image information to be recorded. Referring to FIG. 6A, black portions 601 correspond to pixels to be recorded (information not to be thinned out), and white portions 602 correspond to pixels not to be recorded (information to be thinned out).
  • In the second pre-processing (step S202), for example, 45° rotation processing and processing of removing thinned-out information are performed for the array of the image information having undergone the first pre-processing to compress the effective image information size. FIG. 6B shows the image information obtained when the image information array shown in FIG. 6A is rotated through 45°. As shown in FIG. 6B, when the pixel array shown in FIG. 6A is rotated through 45°, the black portions 601 (information not to be thinned out) and the white portions 602 (information to be thinned out) are aligned in the main scanning direction. In the second pre-processing, therefore, the white portions 602 (portions to be thinned out) are removed and the resultant information is arrayed again. As a consequence, the second pre-processing can create an array containing only image information free from the influence of the alternate driving/recording scheme.
  • The first pre-processing and second pre-processing will be further described in detail with reference to specific examples shown in FIGS. 7A, 7B, 7C, and 7D. FIG. 7A is a view showing a specific example of image information to be recorded. FIG. 7B is a view showing the image information obtained by performing thinning-out processing for the image information shown in FIG. 7A. FIG. 7C is a view showing the image information obtained when 45° rotation processing is performed for the image information shown in FIG. 7B. FIG. 7D is a view showing the image information obtained when rearrangement processing is performed for the image information shown in FIG. 7C.
  • FIG. 7A shows the array of the respective pixels (aij (i = 1 to 4, j = 1 to 4)) of image information to be recorded. In the first pre-processing, the above thinning-out processing is performed for the image information shown in FIG. 7A. With this processing, in the pixel array of the image information shown in FIG. 7A, the even-numbered information of the odd-numbered lines (pixels a12, a14, a32, and a34) and the odd-numbered information of the even-numbered lines (pixels a21, a23, a41, and a43) are thinned out. As a result, the image information shown in FIG. 7A becomes the image information from which the array elements marked X are deleted as shown in FIG. 7B.
  • In the second pre-processing, the image information having undergone the thinning-out processing in the first pre-processing is rotated through 45° (rotation processing). When the image information shown in FIG. 7B is rotated through 45°, the image information shown in FIG. 7C is formed. In the second pre-processing, the effective pixels of the image information shown in FIG. 7C are re-arrayed. In the case shown in FIG. 7C, the remaining information (pixels a11, a13, a22, a24, a31, a33, a42, and a44) after the removal of the portions marked X is the effective pixels. For this reason, the effective pixels of the image information shown in FIG. 7C are re-arrayed in the second pre-processing, as shown in FIG. 7D. Note that information ("0" in this case) which indicates that no information is recorded is stored in each empty array element, as shown in FIG. 7D.
  • In the case shown in FIG. 7D, the thick frame portion is image information to be actually recorded. When, therefore, the image information shown in FIG. 7A is compared with the image information shown in FIG. 7D, the area in which the pixels of the image information which is actually recorded or free from the influence of alternate driving/recording are arrayed is reduced. That is, when digital watermark embedding processing is so performed as to make sub-information fall within the thick frame portion in FIG. 7D, the sub-information can be completely held.
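The thinning, 45° rotation, and rearrangement of FIGS. 7A to 7D can be reproduced directly on the 4 × 4 example (a sketch; pixel labels aij are encoded as the numbers 11 to 44, and "0" marks removed pixels as in FIG. 7B):

```python
import numpy as np

# FIG. 7A: pixel aij is encoded as the number 10*i + j.
a = np.arange(1, 5)[:, None] * 10 + np.arange(1, 5)

# First pre-processing (FIG. 7B): thin out the even-numbered pixels of the
# odd lines and the odd-numbered pixels of the even lines, i.e. the pixels
# whose 1-based indices satisfy (i + j) odd.
keep = (np.add.outer(np.arange(4), np.arange(4)) % 2) == 0
thinned = np.where(keep, a, 0)

# Second pre-processing (FIGS. 7C, 7D): a 45° rotation lines each
# anti-diagonal up along the main scanning direction; pack the surviving
# pixels of each anti-diagonal without the removed ones.
lines = []
for d in range(2 * 4 - 1):                      # anti-diagonals i + j = d
    line = [int(a[i, d - i]) for i in range(4)
            if 0 <= d - i < 4 and keep[i, d - i]]
    if line:
        lines.append(line)

print(thinned)   # FIG. 7B
print(lines)     # effective pixels: [[11], [13, 22, 31], [24, 33, 42], [44]]
```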
  • Note that post-processing (step S204) is processing totally reverse to the first pre-processing and second pre-processing described above. Although the first embodiment has exemplified the fusion thermal transfer recording scheme, the image processing in the first embodiment can be applied to any recording scheme as long as it realizes gradation expression by dot area modulation of to-be-recorded pixels.
  • FIG. 8 schematically shows a processing sequence in this image processing system.
  • Referring to FIG. 8, assume that main image information 801 is, for example, facial image information for personal authentication. In addition, sub-information 802 is, for example, information for improving the security of the main image information 801 (the numeral "174" in this case). For example, an image obtained by coding a name, date of birth, and the like or a graphic pattern such as a company logo is used as the sub-information 802. Key information 803 is information serving as a key for restoring the sub-information embedded in an invisible state by digital watermark embedding processing.
  • First of all, image information 804 subjected to embedding is created by performing the first pre-processing and second pre-processing for the main image information 801. Digital watermark embedding processing 805 is then performed by using the image information 804 subjected to embedding, sub-information 802, and key information 803 to create digital watermarked image information 806. Post-processing is performed as transformation processing inverse to the first pre-processing and second pre-processing to generate composite image information 807. Finally, a personal authentication medium 809 is completed by executing recording (printing) processing 808 of recording the created composite image information 807.
  • Digital watermark embedding processing will be described in detail next.
  • A general digital watermark embedding technique can be applied to digital watermark embedding processing in this embodiment. The digital watermark embedding processing in this embodiment is especially compatible with a technique of performing digital watermark embedding by superimposing sub-information on main image information.
  • The digital watermark embedding techniques that can be applied to this embodiment are disclosed in, for example, Jpn. Pat. Appln. KOKAI Publication Nos. 11-168616 and 2001-268346. These techniques are described on the premise that main image information is basically a color (full-color) image. By further applying, for example, the technique disclosed in Jpn. Pat. Appln. KOKAI Publication No. 11-355554 to these techniques, sub-information (sub-image information) can be embedded even in a monochrome image.
  • When the authenticity of a facial image (composite image information) printed on a personal authentication medium is to be determined, it is required to restore sub-information from the composite image information and determine the authenticity of the sub-information. This restoration processing of the sub-information is also disclosed in, for example, Jpn. Pat. Appln. KOKAI Publication Nos. 11-168616, 2001-268346, and 11-355554. That is, by performing the restoration processing disclosed in the above references using key information, the sub-information recorded in an invisible state can be restored from the composite image information printed on the personal authentication medium by this image processing system.
  • A case wherein the digital watermark embedding processing using the color difference modulation scheme described in Jpn. Pat. Appln. KOKAI Publication No. 11-168616 will be described as an example of digital watermark embedding processing.
  • FIG. 9 is a view for explaining the flow of digital watermark embedding processing using the color difference modulation scheme described in Jpn. Pat. Appln. KOKAI Publication No. 11-168616. In the digital watermark embedding processing, sub-information can be embedded in main image information in an invisible state without causing any image deterioration by using the following characteristics (1) to (3).
  • (1) Human Visual Characteristics
  • According to the human visual characteristics, as the frequency of an image increases, the gradation identification ability decreases, and color difference information becomes more difficult to discriminate than luminance information.
  • (2) Human Visual Characteristics Based on Complementary Color Relationship
  • For example, the additive color mixture of red and cyan (= green + blue) produces a complementary color relationship, so that when red and cyan are located side by side, they look achromatic (white), which is difficult to discriminate by the human eye.
  • (3) Human Visual Characteristics with respect to Color Differences (Application of the Complementary Color Relationship and Color Difference Information (Color Difference Modulation Processing) to a High-Frequency Carrier Pattern Image)
•   According to the human visual characteristics, small color differences cannot be identified by the human eye. Therefore, when red rich pixels and cyan rich pixels are repeatedly arranged by using a high-frequency carrier pattern image, the small color differences between these pixels cannot be identified by the human eye, and the color difference amount is perceived as plus or minus "0". The composite image information (digital watermarked image) generated by this scheme does not depend on the image format used for storage. Therefore, as an image format for composite image information, a future image format can be used as well as a currently available image format such as BMP, TIFF, or JPEG.
  • The flow of digital watermark embedding processing shown in FIG. 9 will be briefly described below. For details, refer to the descriptive contents of Jpn. Pat. Appln. KOKAI Publication No. 11-168616.
  • Image information (main image information) 901 subjected to embedding is image information in which to-be-embedded information is embedded. In a personal authentication medium, the main image information 901 corresponds to a facial portrait (facial image) of the holder of the medium. The main image information 901 has, for example, 24-bit information per pixel (eight bits for each of R, G, and B). To-be-embedded image information (sub-information) 902 is obtained by converting information to be embedded into a binary image. In a personal authentication medium, the above sub-information 902 corresponds to the identification number or the like. The sub-information 902 has, for example, 1-bit information per pixel. Mask image information (key information) 903 is image information used in image combining processing and restoration (reproduction) of embedded image information. For example, the key information 903 has 1-bit information per pixel.
  • Smoothing processing 904 is performed with each black pixel of the to-be-embedded image information 902 being converted into "1"; and each white pixel, "0". For example, in the smoothing processing 904, a (3 × 1)-pixel area including pixels on both ends of a target pixel in the x direction is extracted, and the weighted average of the extracted area is calculated. In phase modulation processing 905, phase modulation is performed for the mask image information 903 on the basis of the smoothing processing result obtained by the smoothing processing 904.
  • Color difference modulation processing 907 is performed using a color difference amount ΔCd on the basis of the phase modulation result obtained by the phase modulation processing 905. In the color difference modulation processing 907, for example, three components, i.e., R (red), G (green), and B (blue), are separately calculated. Composite image information 909 is generated by performing superimposition processing 908 on the basis of the color difference modulation processing result obtained by the color difference modulation processing 907 and the image information 901 subjected to embedding.
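•   The following sketch (illustrative only, not the disclosed implementation) shows one way the color difference modulation step can be realized. The sign convention (red-rich where the phase-modulated carrier is 1, cyan-rich where it is 0) and the color difference amount ΔCd = 16 are assumptions made for illustration:

```python
import numpy as np

def color_difference_modulation(carrier, delta_cd=16):
    """Color difference modulation sketch: push R up and G/B down where the
    phase-modulated carrier is 1, and the reverse where it is 0, so that
    adjacent red-rich and cyan-rich pixels average out to achromatic."""
    sign = np.where(carrier > 0, 1, -1).astype(np.int16)
    stl_r = delta_cd * sign      # R (red) plane of the superimposition image
    stl_g = -delta_cd * sign     # G (green) plane moves the opposite way
    stl_b = -delta_cd * sign     # B (blue) plane moves the opposite way
    return np.stack([stl_r, stl_g, stl_b], axis=-1)
```

•   Superimposing alternating ±ΔCd patterns of this kind leaves the local average color unchanged, which is what keeps the embedded sub-information invisible per characteristic (3) above.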
  • As is also obvious from the above description, the image information 901 subjected to embedding, to-be-embedded image information 902, and mask image information 903 in FIG. 9 are identical to the main image information 801, sub-information 802, and key information 803 in this embodiment described with reference to FIG. 8. The digital watermark embedding scheme shown in FIG. 9 can therefore be basically applied to this embodiment.
•   In this embodiment, however, since the first pre-processing and second pre-processing are performed for the main image information in advance, the array size of effective image information is smaller than the original size of the main image information, as indicated by the thick frame in FIG. 7D. When, therefore, the composite image information 909 is to be generated by superimposing the image information 901' subjected to embedding and the superimposition image information 910 obtained by color difference modulation processing, as in the digital watermark embedding processing shown in FIG. 9, the effective portion ("174" in this case) of the superimposition image information 910 needs to fall completely within the hatched portion of the image information 901' subjected to embedding.
•   With regard to the superimposition processing, the image information 901' subjected to embedding, superimposition image information 910, and composite image information 909 are defined as follows:
•   image information subjected to embedding: SRC_C(x, y) ··· (A-1)
•   superimposition image information: STL_C(x, y) ··· (A-2)
•   composite image information: DES_C(x, y) ··· (A-3)
•   where x and y are the coordinate values of each image, and C ∈ {R (red), G (green), B (blue)} denotes the color plane. In 24-bit color computation, each value is an integer from 0 to 255.
•   In this case, the composite image information DES_C(x, y) is represented as follows by using the image information SRC_C(x, y) subjected to embedding and the superimposition image information STL_C(x, y):
•   DES_R(x, y) = SRC_R(x, y) + STL_R(x, y) ··· (B-1)
•   DES_G(x, y) = SRC_G(x, y) + STL_G(x, y) ··· (B-2)
•   DES_B(x, y) = SRC_B(x, y) + STL_B(x, y) ··· (B-3)
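•   Read as code, equations (B-1) to (B-3) amount to a per-plane addition. The sketch below assumes, beyond the equations themselves, that sums are clamped to the 0 to 255 range of 24-bit color:

```python
import numpy as np

def superimpose(src, stl):
    """DES_C(x, y) = SRC_C(x, y) + STL_C(x, y) for C = R, G, B,
    per equations (B-1) to (B-3)."""
    des = src.astype(np.int16) + stl.astype(np.int16)
    # Clamping to the 24-bit color range is an added assumption,
    # not part of (B-1) to (B-3) themselves.
    return np.clip(des, 0, 255).astype(np.uint8)
```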
  • In the first embodiment, R (red), G (green), and B (blue) are used as fundamental colors for computation based on the additive color mixture. However, using C (cyan), M (magenta), and Y (yellow) as fundamental colors for computation based on the subtractive color mixture makes no substantial difference.
  • Sub-information restoration processing for the composite image information generated by the above digital watermark embedding processing will be described next. Sub-information is restored by extracting a specific spatial frequency component from the composite image information on the basis of the key information used in embedding processing, and reconstructing the sub-information from the spatial frequency component.
•   As key information, binary (monochrome) image information formed from a geometric pattern or the like can be used. Examples include a monochrome checkered pattern formed by unit rectangles each having 1 × 2 pixels, and a pseudo-random pattern generated from a predetermined seed.
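•   A checkered key pattern of the kind mentioned above can be generated as follows; the 1 × 2 pixel unit rectangle follows the text, while the 0/1 encoding of white and black is an assumed convention:

```python
import numpy as np

def checkered_key(height, width, unit_h=1, unit_w=2):
    """Monochrome checkered key pattern built from 1 x 2 pixel unit
    rectangles; 0 encodes white and 1 encodes black (assumed convention)."""
    rows = np.arange(height)[:, None] // unit_h
    cols = np.arange(width)[None, :] // unit_w
    return ((rows + cols) % 2).astype(np.uint8)
```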
  • As a method of extracting a specific spatial frequency component on the basis of key information, a method using a spatial frequency filter can be used. A coefficient for the spatial frequency filter corresponding to the key information is calculated according to procedures (1) to (4) described below. Note that coefficients may be calculated and stored in advance, or a coefficient may be calculated before the execution of extraction processing or every time extraction processing is performed.
  • (1) The size of the key information is expanded/compressed on the basis of the resolution of the main image information, the resolution of the composite image information recorded on the recording medium, and the read resolution of the to-be-recorded image input unit 111.
•   (2) The expanded/compressed key information is Fourier-transformed into the frequency domain. Note that this transformation may be performed on integer values or extended to real or complex values.
  • (3) The passband of a filter is adjusted by referring to the transformed value.
•   (4) An inverse Fourier transform is performed on the adjusted value, and the resultant value is set as a frequency filter coefficient.
  • The above procedures (1) to (4) will be described with reference to the specific example shown in FIGS. 10A and 10B. FIGS. 10A and 10B are schematic views showing an example of the frequency component of the key information.
  • Referring to FIG. 10A, a white circle 1001 represents white; and a black circle 1002, black. Reference numeral 1003 denotes a fundamental frequency waveform in the main scanning direction; and 1004, a fundamental frequency waveform in the sub-scanning direction. Referring to FIG. 10B, a white circle 1005 represents a main color rich dot; and a black circle 1006, a complementary color rich dot. In this case, if the main color is red (R), the complementary color is cyan (C). Reference numeral 1007 denotes a fundamental frequency waveform in the main scanning direction; and 1008, a fundamental frequency waveform in the sub-scanning direction.
  • Assume that the resolution of the main image information is 200 dpi, and the print resolution of the composite image information and the read resolution of the to-be-recorded image input unit 111 are 400 dpi. In this case, according to processing (1) described above, when embedding processing is performed by using the key information shown in FIG. 10A, composite image information like that shown in FIG. 10B is captured by the to-be-recorded image input unit 111.
  • The embedded key information is transformed into a shape 1009 shown in FIG. 10B. The fundamental frequency of this information is equal to that of the key information whose size is expanded by an amount corresponding to the ratio between the read resolution and the print resolution. In calculating a filter coefficient, therefore, changes in resolution in recording and reading operations are taken into consideration in advance.
•   In (2) to (4), a frequency filter for extracting the spatial frequency component of the key information from the composite image information is designed. In this case, the key information is originally binary. This information is therefore characterized in that an edge (= boundary between a white pixel and a black pixel) has a steep gradient. As an edge in a spatial domain becomes steeper, more harmonic components are contained in the frequency domain. If, therefore, a frequency filter coefficient calculated from image information containing many steep edges is used without any change, noise on the harmonic side passes through the filter, the S/N ratio decreases, and the restoration of sub-information is hampered.
•   For this reason, the adjustment operation in (3) is required. The contents of this operation depend on the particular key information and the operating environment of the system. In general, harmonic components are filtered out to suppress noise, and only frequencies near the fundamental frequency are passed. Under an environment with little noise, security can instead be improved by also passing harmonic components and intentionally exploiting the complexity of the key information.
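•   A minimal sketch of procedures (1) to (4), assuming an integer expansion factor and a precomputed pass-band mask of the same shape as the expanded key; both are placeholders standing in for the resolution- and noise-dependent design just described:

```python
import numpy as np

def key_filter_coefficients(key, scale, passband_mask):
    """Procedures (1)-(4): expand the key by the print/read resolution
    ratio, transform to the frequency domain, adjust the passband, and
    inverse-transform into spatial frequency filter coefficients.
    `passband_mask` must match the shape of the expanded key."""
    expanded = np.kron(key.astype(float), np.ones((scale, scale)))  # (1)
    spectrum = np.fft.fft2(expanded)                                # (2)
    spectrum *= passband_mask                                       # (3)
    return np.real(np.fft.ifft2(spectrum))                          # (4)
```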
  • In order to extract the spatial frequency component of the key information from the composite image information captured by the to-be-recorded image input unit 111 by using the frequency filter coefficient calculated in advance by the above method, convolution integration is performed according to equation (1):
    K(x, y) = Σu Σv g(u, v) · I(x + u, y + v)   ··· (1)
    where I is the composite image information captured by the to-be-recorded image input unit 111, g is the frequency filter coefficient, and K is the extracted spatial frequency component of the key information.
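•   In code, equation (1) is a two-dimensional correlation of the captured image with the filter taps. The sketch below assumes a single color plane and uses SciPy's convolution with a flipped kernel to obtain the correlation form:

```python
import numpy as np
from scipy.signal import convolve2d

def extract_key_component(I, g):
    """Equation (1): K(x, y) = sum_u sum_v g(u, v) * I(x + u, y + v).
    convolve2d flips its kernel, so flipping g first yields the
    correlation form written in equation (1)."""
    return convolve2d(I, g[::-1, ::-1], mode="same")
```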
•   Note that the method of extracting a specific spatial frequency component is not limited to the method using the above spatial frequency filter. Any method that extracts a specific spatial frequency component by mapping the information into another space and then inversely mapping it, for example by using the known Fourier transform or wavelet transform, may be used.
  • Reconstruction processing of reconstructing sub-information from the spatial frequency component extracted in the above manner will be described next. This reconstruction processing is performed according to the following procedures (1) to (3).
  • (1) A zero-crossing point (a point of sign change) is extracted from the extracted spatial frequency component.
  • (2) A reference phase of the spatial frequency component is obtained by projecting the zero-crossing point.
•   (3) The deviation of each coordinate of the spatial frequency component from the reference phase is calculated. The pixel value at each coordinate which deviates from the reference phase by a predetermined threshold or more is replaced with black, and the other pixel values are replaced with white.
•   FIGS. 11 to 13 are schematic views showing the respective processes in reconstruction processing. FIG. 11 shows the amplitude of the extracted spatial frequency component of the key information. A white portion 1101 indicates the + side, and a black portion 1102 indicates the - side. FIG. 12 shows the result obtained by projecting the coordinates of points (zero-crossing points) where the signs "+" and "-" are switched and totalizing the number of points. In this case, coordinates which exhibit a count equal to or more than a predetermined threshold TH are extracted as zero-crossing points of the reference phase. FIG. 13 shows the result obtained by calculating the deviation of the spatial frequency component at each coordinate from the reference phase and replacing the pixel value in accordance with the deviation.
  • With the above processing, sub-information can be restored as monochrome binary image information from composite image information.
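•   The reconstruction procedures (1) to (3) might be sketched as follows; estimating the reference phase from the most frequent zero-crossing columns and folding deviations into half a period are simplifications of the projection and totalization shown in FIGS. 11 to 13:

```python
import numpy as np

def reconstruct_subinfo(K, threshold):
    """Procedures (1)-(3) as a simplified stand-in for FIGS. 11-13:
    extract zero-crossing points, project them to estimate the reference
    phase, then blacken coordinates whose phase deviation is at least
    `threshold`."""
    sign = np.sign(K)
    zero_cross = np.diff(sign, axis=1, prepend=sign[:, :1]) != 0  # (1)
    counts = zero_cross.sum(axis=0)                               # (2) totalize
    ref_cols = np.flatnonzero(counts >= counts.max() * 0.5)       # proxy for TH
    period = float(np.median(np.diff(ref_cols))) if ref_cols.size > 1 else 1.0
    deviation = (np.arange(K.shape[1]) - ref_cols[0]) % period    # (3)
    deviation = np.minimum(deviation, period - deviation)
    black = np.broadcast_to(deviation >= threshold, K.shape)
    return np.where(black, 0, 255).astype(np.uint8)   # black/white binary image
```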
  • Note that when composite image information is generated by the method disclosed in Jpn. Pat. Appln. KOKAI Publication No. 2001-268346 described above, sub-information (sub-image information) is restored according to the following procedures (1) to (3).
  • (1) A specific spatial frequency component Fi is extracted on the basis of predetermined key information Ki (1 ≦ i ≦ N), where N is the total number of key information Ki used to embed sub-information (sub-image information).
  • (2) Binarization processing is performed for the extraction result Fi with a predetermined threshold THi corresponding to the key information Ki.
  • (3) The above procedures (1) and (2) are executed with respect to Nd (Nd ≦ N) pieces of key information necessary for restoration, and the resultant data are combined according to a predetermined sequence.
  • The method disclosed in Jpn. Pat. Appln. KOKAI Publication No. 2001-268346 is characterized in that a plurality of pieces of key information can be used to embed sub-information (sub-image information), and an arbitrary combination of pieces of key information necessary for the restoration of the sub-information (sub-image information) can be selected.
  • When, therefore, the sub-information (sub-image information) is restored from the composite image information generated by this method, the sequence of extracting a spatial frequency component corresponding to key information and reconstructing the sub-information (sub-image information) by binarizing information fragments is executed a predetermined number of times, and the resultant information fragments are combined.
  • Key information, a threshold, and a combining method which are required for restoration are determined in advance when embedding is performed. As for the Nd pieces of key information Ki, all the pieces of information (N pieces of key information) used in embedding operation may be set as necessary information, or several pieces of information (Nd (Nd ≦ N) pieces of information) selected from all the pieces of information according to a predetermined sequence or randomly may be used.
  • As for the threshold THi, a value common to all the pieces of key information Ki (i = 1, ···, N) or different values for the respective pieces of key information Ki may be used. Sub-information (sub-image information) fragments may be combined by a method of concatenating the respective fragments vertically and horizontally, a method of combining the fragments by addition, exclusive ORing, and the like, or a method of using a combination of arithmetic operations such as concatenation and combining. In addition, predetermined weights may be assigned to the respective fragments before arithmetic operation.
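•   As one concrete reading of the combining step, the sketch below shows horizontal concatenation and exclusive-OR of the restored fragments; which operation applies, and any weighting, is, as stated above, fixed at embedding time:

```python
import numpy as np
from functools import reduce

def combine_fragments(fragments, mode="concat"):
    """Combine restored sub-information fragments. Only horizontal
    concatenation and exclusive-OR are shown here; addition or weighted
    mixtures are equally possible per the text."""
    if mode == "xor":
        return reduce(np.bitwise_xor, fragments)
    return np.concatenate(fragments, axis=1)
```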
•   The use of the image processing system described above makes it possible to prepare a personal authentication medium in which a digital watermark is embedded and to verify the watermark, thereby realizing a system with higher security than the prior art.
•   As described above, according to the first embodiment, composite image information constituted by main image information and additional sub-information (sub-image information) embedded in the main image information in an invisible state can be generated for analog data output to a personal authentication medium, and the digital watermark information in the composite image information can be maintained after the composite image information is recorded.
  • In the image recording apparatus using the fusion thermal transfer recording scheme, the digital watermarking technique can be applied to a to-be-recorded image while high gradation performance is maintained. The digital watermark information (sub-information) can be stored without being destroyed and restored after recording.
  • The second embodiment will be described next.
  • FIG. 14 shows the arrangement of a second image processing apparatus 110 according to the second embodiment. The second image processing apparatus 110 is comprised of an image input unit 11, color component information storage unit 12, color component extracting unit 13, frequency component extracting unit 14, reconstructing unit 15, and display unit 16. The function of each unit will be described in detail below.
  • The image input unit 11 captures, for example, composite image information 403 recorded on a personal authentication medium 401 in FIG. 4 using an image input device such as a camera, and converts it into digital composite image information. This image information has three planes, i.e., R, G, and B planes.
  • The color component information storage unit 12 stores the information of a color component used for the restoration of a digital watermark. The color component extracting unit 13 extracts the color component used for the restoration of the digital watermark from the image information input from the image input unit 11 on the basis of the color component information read out from the color component information storage unit 12.
  • The frequency component extracting unit 14 frequency-filters the color component extracted by the color component extracting unit 13 to extract the amplitude (= strength) information and phase information of the spatial frequency component of embedded key information.
  • The reconstructing unit 15 performs totalization processing and spatial filtering for the amplitude information and phase information of the spatial frequency component extracted by the frequency component extracting unit 14 to reconstruct a modulated component, i.e., sub-information, with respect to the key information.
  • The display unit 16 displays the sub-information reconstructed by the reconstructing unit 15 (or may simultaneously display the input image information and sub-information).
  • The color component information storage unit 12 stores in advance information about a color component exhibiting high gradation characteristics, such as information that can specify a color component exhibiting the highest gradation characteristics among the colors of inks used for printing, e.g., C, M, and Y, or the colors of optical filters used for image capturing, e.g., R, G, and B. This information may be a character string indicating a color name, a numerical value indicating the wavelength of an electromagnetic wave corresponding to the color, the number of an address assigned for management in the color component information storage unit 12, or the like.
  • A color component exhibiting high gradation characteristics will be described below by taking ink in a printer as an example. In general, in a printer, inks of different colors have different compositions. For this reason, inks of different colors exhibit different physical properties. Gradation characteristics depend on physical properties. As a consequence, inks of different colors differ in gradation characteristics.
  • A printer like a sublimation type printer designed to change the density of ink adhering to a print target by controlling the applied energy exhibits characteristics represented by the relationship between the applied energy and the density shown in FIG. 15. In general, the density does not change much in a region where the applied energy is high and a region where the applied energy is low, and the density changes in accordance with the applied energy in an intermediate region. Gradation characteristics are determined by the controllability of the density of ink and the feasible density range. With regard to the former, better gradation characteristics appear as the gradient of a plot of applied energy versus density becomes more moderate. With regard to the latter, better gradation characteristics appear as the difference between the minimum density and the maximum density increases.
  • For example, FIG. 15 shows the applied energy/density characteristics of three types of ink. Referring to FIG. 15, ink 1 (characteristic A) exhibits a narrow feasible density range, and ink 3 (characteristic C) exhibits low controllability. Among these types of ink, therefore, ink 2 (characteristic B) exhibits the best gradation characteristics.
  • The gradation characteristics described above are not unique to the scheme exemplified above. For example, differences in physical property between inks also produce differences in gradation characteristics in a scheme like a fusion type scheme designed to change the areas of dots instead of density.
  • The above description has exemplified ink in a printer. Optical filters used for image capturing also differ in physical properties such as passband width and attenuation factor. For this reason, such filters also differ in gradation characteristics.
  • The relationship between the restoration of a digital watermark and gradation characteristics will be described next. The digital watermark embedded by color difference modulation processing can be restored through a process of color component extraction → spatial frequency component extraction → sub-information reconstruction. Since the details of the process have already been described, only its outline will be described.
•   In color component extraction processing and spatial frequency component extraction processing, a color difference is extracted from the captured image; in sub-information reconstruction processing, the color difference is converted into corresponding information, e.g., bit on/off information, thereby restoring the digital watermark.
  • Information that can be expressed by a color difference depends on gradation characteristics. That is, information that can be expressed decreases in amount as the gradation characteristics decrease, and vice versa. The gradation characteristics change for each color component when digital information is converted into analog information or conversion inverse thereto is performed. Using a color component exhibiting the highest gradation characteristics can suppress a decrease in the amount of information that can be expressed by a color difference, thereby improving the restoration characteristics of the digital watermark.
  • Using a color component exhibiting little deterioration at the time of digital/analog conversion makes it possible to improve the restoration characteristics of embedded sub-information.
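•   A minimal sketch of this color component selection, with the stored color component information reduced to a plane name and G assumed, purely for illustration, to be the plane with the best gradation characteristics:

```python
import numpy as np

# Hypothetical contents of the color component information storage unit 12:
# the name of the plane whose ink or filter showed the best gradation.
STORED_COLOR_COMPONENT = "G"
PLANE_INDEX = {"R": 0, "G": 1, "B": 2}

def extract_color_component(image_rgb, name=STORED_COLOR_COMPONENT):
    """Color component extracting unit 13: select the stored plane from
    the captured R/G/B composite image before frequency filtering."""
    return image_rgb[..., PLANE_INDEX[name]]
```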
  • The third embodiment will be described next.
  • FIG. 16 shows the arrangement of a second image processing apparatus 110 according to the third embodiment. The second image processing apparatus 110 is comprised of an image input unit 21, area extracting unit 22, color feature extracting unit 23, color combining unit 24, frequency component extracting unit 25, reconstructing unit 26, and display unit 27. The function of each unit will be described in detail below.
  • The image input unit 21 captures, for example, composite image information 403 recorded on a personal authentication medium 401 in FIG. 4 using an image input device such as a camera, and converts it into digital composite image information. This image information has three planes, i.e., R, G, and B planes.
  • The area extracting unit 22 sets an area having a predetermined size on the image information input from the image input unit 21. As the size of an area to be set, the number of pixels corresponding to the fundamental wavelength of the spatial frequency component of the key information used in embedding operation is used.
  • The color feature extracting unit 23 totalizes the values of pixels existing in the area set by the area extracting unit 22 with respect to each of the R, G, and B planes, and calculates the feature of each color. The color combining unit 24 calculates weights used in combining the respective planes, i.e., the R, G, and B planes, on the basis of the totalization result obtained by the color feature extracting unit 23.
  • The frequency component extracting unit 25 frequency-filters the color component combined by the color combining unit 24 to extract the amplitude (= strength) information and phase information of the spatial frequency component of embedded key information.
  • The reconstructing unit 26 performs totalization processing and spatial filtering for the amplitude information and phase information of the spatial frequency component extracted by the frequency component extracting unit 25 to reconstruct a modulated component, i.e., sub-information, with respect to the key information. The display unit 27 displays the sub-information reconstructed by the reconstructing unit 26 (or may simultaneously display the input image information and sub-information).
  • FIG. 17 shows an outline of the flow of processing in the area extracting unit 22, color feature extracting unit 23, and color combining unit 24. First of all, the area extracting unit 22 sets an extraction target area S for a color feature, centered on a pixel of interest, on the digital image information captured by the image input unit 21 (step S31).
  • In this embodiment, as the size of the area S, the number of pixels corresponding to the fundamental wavelength of the spatial frequency component of the key information used in embedding operation, i.e., a value based on the fundamental wavelength which is set in consideration of changes in resolution in recording and reading operations, is used. If, for example, the fundamental wavelength is two pixels and the record and read resolutions for a main image are 200 dpi and 400 dpi, respectively, the size of the area S becomes four pixels. Assume that the pixel of interest is sequentially moved from the left to right and from the bottom to top of the image, and the movement amount corresponds to the size of the area.
  • Note that the position of the first pixel of interest is calculated in advance from the placement position of the personal authentication medium at the time of image input operation and the record position of a facial image in the personal authentication medium.
•   The color feature in each area S is obtained by using the values of the pixels in the area for each plane (step S32). A color feature is a numerical value representing a color in the area. In this embodiment, the average luminance value of the pixels in the area is used as the color feature for each plane. Letting Rij, Gij, and Bij (i = 1 ··· m, j = 1 ··· n) be the pixel values in the respective planes, i.e., the R, G, and B planes, in the area S of m × n pixels, the color features RF, GF, and BF are given by
    RF = (1/(m × n)) ΣiΣj Rij,   GF = (1/(m × n)) ΣiΣj Gij,   BF = (1/(m × n)) ΣiΣj Bij
  • The color combining unit 24 determines a color combining parameter PC corresponding to the color feature in each plane (step S33), and performs color combining processing for the respective pixels in the area on the basis of the color combining parameter PC (step S34). In this embodiment, when the color combining parameter PC is determined from the color feature, a three-dimensional array in which the color combining parameter PC is stored using the color feature as an index is used.
  • Letting Rij, Gij, and Bij (i = 1 ··· m, j = 1 ··· n) be pixel values in the respective planes, i.e., the R, G, and B planes, in the area S of m × n pixels, and RF, GF, and BF be color features, color combining parameters PR, PG, and PB are determined as follows: PC = AC[RF, GF, BF]   (C = R, G, B) where AC (C = R, G, B) is a three-dimensional array, in which color combining parameters corresponding to the respective planes are stored. For example, AR stores the color combining parameter PR for the R plane.
  • Assume that the value of the color combining parameter PC (C = R, G, B) is determined in advance by the following processing (1) to (4).
•   (1) A color patch for determining a parameter is generated. A color patch is printed matter on which an area of a predetermined size is painted in a single color. Assume that digital watermark embedding processing has been performed for the area.
  • (2) The color patch is captured by the image input unit 21 or an image input unit having equivalent performance to calculate color features in the R, G, and B planes.
  • (3) Digital watermarks are restored from the R, G, and B planes, respectively, and the average value of the amplitudes of the restored signals is calculated. In this case, no color combining is performed.
  • (4) Color combining parameters corresponding to the color features calculated in (2) are determined from the values calculated in (3). If, for example, the average amplitude value of a given plane is larger than a predetermined threshold ε, the color combining parameter of the plane is set to "1", and the remaining planes are set to "0". In contrast to this, if the average amplitude value of all the planes is smaller than the threshold ε, the color combining parameters of all the planes are set to 1/3.
•   Color combining computation is performed for the respective pixels in the area by using the color combining parameters determined in the above manner. In this embodiment, as color combining computation, the linear sum obtained by multiplying the pixel values in the respective planes by the corresponding color combining parameters as weights and adding the resultant values is used. Letting Rij, Gij, and Bij (i = 1 ··· m, j = 1 ··· n) be the pixel values in the respective planes, i.e., the R, G, and B planes, in the area S of m × n pixels, and PR, PG, and PB be the color combining parameters in the area S, a value Tij obtained as a result of color combining is calculated as follows: Tij = PRRij + PGGij + PBBij   (i = 1 ··· m, j = 1 ··· n)
  • The above processing is applied to the overall input image to obtain a color composite image T. Subsequently, the frequency component extracting unit 25 extracts a specific spatial frequency component from the color composite image T on the basis of the key information used in digital watermark embedding processing. The reconstructing unit 26 reconstructs sub-information.
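•   The per-area processing of steps S32 to S34 might be sketched as follows; quantizing the color features to index the three-dimensional array AC is an assumption about how that array is addressed:

```python
import numpy as np

def color_combine_area(area, AC, step=32):
    """Steps S32-S34 for one area S of m x n x 3 pixels: average each plane
    to get the color features (RF, GF, BF), look up the color combining
    parameters (PR, PG, PB) in the array AC, and form the weighted linear
    sum Tij = PR*Rij + PG*Gij + PB*Bij."""
    rf, gf, bf = area.reshape(-1, 3).mean(axis=0)                       # S32
    pr, pg, pb = AC[int(rf) // step, int(gf) // step, int(bf) // step]  # S33
    return pr * area[..., 0] + pg * area[..., 1] + pb * area[..., 2]    # S34
```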
•   According to the above description, the position of the first pixel of interest is determined from the placement position of the personal authentication medium and the record position of the facial image. However, the present invention is not limited to this method. For example, a specific pattern may be recorded in or near the facial image in advance, and the position of the first pixel of interest may be determined from the result of detecting the position of the pattern.
•   In addition, if the size of the spatial frequency component of the key information is sufficiently smaller than that of the facial image, area segmentation need not be performed, and a color feature may be calculated from a predetermined range centered on the pixel of interest.
•   In the above description, the average luminance of the pixels in an area is used as the color feature. However, the present invention is not limited to this. For example, a color feature may be a median or mode, or a value indicating a distribution shape, e.g., a value obtained by correction using a standard deviation or variance as a weight. Alternatively, a color feature may be a value obtained by applying a predetermined function or algorithm.
•   Furthermore, according to the above description, a color combining parameter is determined from a three-dimensional array. However, the present invention is not limited to this method. For example, a color combining parameter may be determined on the basis of the following function: PC = fC(RF, GF, BF)   (C = R, G, B) where fC (C = R, G, B) represents a color computation function for determining a color combining parameter from a color feature. As the color computation function fC (C = R, G, B), for example, the linear sum PC = WC1RF + WC2GF + WC3BF (C = R, G, B) obtained by multiplying RF, GF, and BF by predetermined weights, or a polynomial having a high-order term such as RF² or a cross term such as RFGF, may be used. Alternatively, such calculation may be done after logarithmic transformation like RF' = log(RF).
•   Moreover, according to the above description, a weighted linear sum is used for color combining computation. However, the present invention is not limited to this method. For example, color combining computation may be done by using a polynomial having a high-order term such as Rij² or a cross term such as RijGij, or such calculation may be done after logarithmic transformation like Rij' = log(Rij). Alternatively, Tij = AT[Rij, Gij, Bij, PR, PG, PB] may be set by using a six-dimensional array AT storing color composite values using the pixel values and color combining parameters as indexes.
  • The influence of a deterioration is reduced by combining the color components of input image information in this manner, thereby improving the restoration characteristics of the embedded sub-information.
  • The fourth embodiment will be described next.
  • FIG. 18 shows the arrangement of a second image processing apparatus 110 according to the fourth embodiment. The second image processing apparatus 110 is comprised of an image input unit 41, area extracting unit 42, color feature extracting unit 43, amplification coefficient determining unit 44, frequency component extracting unit 45, extracted signal amplifying unit 46, reconstructing unit 47, and display unit 48. The function of each unit will be described in detail below.
  • The image input unit 41 captures, for example, composite image information 403 recorded on a personal authentication medium 401 in FIG. 4 using an image input device such as a camera, and converts it into digital composite image information. This image information is constituted by three planes, i.e., R, G, and B planes.
  • The area extracting unit 42 sets an area having a predetermined size on the image information input from the image input unit 41. As the size of an area to be set, the number of pixels corresponding to the fundamental wavelength of the spatial frequency component of the key information used in embedding operation is used.
  • The color feature extracting unit 43 totalizes the values of pixels existing in the area set by the area extracting unit 42 with respect to each of the R, G, and B planes, and calculates the feature of each color. The amplification coefficient determining unit 44 determines an amplification coefficient for the reconstruction of a restoration result on the basis of the totalization result obtained by the color feature extracting unit 43.
  • The frequency component extracting unit 45 frequency-filters the image information input from the image input unit 41 to extract the amplitude (= strength) information and phase information of the spatial frequency component of embedded key information.
  • The extracted signal amplifying unit 46 performs amplification processing for the amplitude information or phase information of the spatial frequency component extracted by the frequency component extracting unit 45 by using the amplification coefficient determined by the amplification coefficient determining unit 44.
  • The reconstructing unit 47 performs totalization processing and spatial filtering for the amplitude information or phase information of the spatial frequency component amplified by the extracted signal amplifying unit 46 to reconstruct a modulated component, i.e., sub-information, with respect to the key information. The display unit 48 displays the sub-information reconstructed by the reconstructing unit 47 (or may simultaneously display the input image information and sub-information).
  • FIG. 19 shows an outline of the flow of processing in the extracted signal amplifying unit 46. First of all, the color feature extracting unit 43 calculates a color feature by performing the same processing as that performed by the color feature extracting unit 23 described in the third embodiment (step S51).
  • The amplification coefficient determining unit 44 then determines an amplification coefficient (amplification factor) M in the area from the color feature in each plane (step S52). Letting RF, GF, and BF be the color features in the R, G, and B planes in an area S having m × n pixels, the amplification coefficient M is determined as follows: M = AM[RF, GF, BF] where AM is a three-dimensional array, in which amplification coefficients corresponding to color features are stored.
  • The extracted signal amplifying unit 46 multiplies the spatial frequency component extracted by the frequency component extracting unit 45 by the amplification coefficient M obtained by the amplification coefficient determining unit 44 in each area (step S53). With this processing, the extracted frequency components are corrected such that variations in maximum value in the respective areas fall within a predetermined range.
•   According to the above description, a three-dimensional array is used to determine the amplification coefficient M. However, the present invention is not limited to this method. For example, the amplification coefficient M may be determined on the basis of the following function: M = fM(RF, GF, BF) where fM represents an amplification coefficient computation function for determining an amplification coefficient from a color feature. As the amplification coefficient computation function fM, for example, the linear sum M = WM1RF + WM2GF + WM3BF obtained by multiplying RF, GF, and BF by predetermined weights, or a polynomial having a high-order term such as RF² or a cross term such as RFGF, may be used. Alternatively, such calculation may be done after logarithmic transformation like RF' = log(RF).
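•   A sketch of the amplification of step S53, using the linear-sum form of the amplification coefficient computation function fM given above; the weights WM1 to WM3 are placeholders:

```python
import numpy as np

def amplify_extracted_signal(component, rf, gf, bf, w=(0.2, 0.5, 0.3)):
    """Step S53: scale the spatial frequency component extracted in an
    area by M = WM1*RF + WM2*GF + WM3*BF (illustrative weights only)."""
    M = w[0] * rf + w[1] * gf + w[2] * bf
    return M * component
```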
  • As described above, the visibility of embedded sub-information can be improved by correcting reconstruction processing in accordance with the color components of input image information.
  • In the fourth embodiment, a multilevel image is created by multiplying an extracted spatial frequency component by an amplification coefficient. However, a binary image may be created by performing binarization processing for the areas using a three-dimensional array AB storing binarization thresholds corresponding to color features in the respective areas. According to this method, the data amount of a restoration result can be reduced, and the contrast of the restoration result can be improved.
  • As has been described above, according to the second to fourth embodiments, since a color component exhibiting little deterioration at the time of digital/analog conversion is used, when composite image information constituted by main image information and another additional sub-information (sub-image information) embedded in the main image information in an invisible state is generated for analog data output to printed matter, the performance of restoring the embedded sub-information from the printed composite image information can be improved without increasing the risk of disclosing the sub-information from the composite image information.
  • In addition, reducing the influence of a deterioration by combining color components of input image information makes it possible to improve the performance of restoring embedded sub-information from printed composite image information without increasing a similar risk.
  • In addition, correcting reconstruction processing in accordance with color components of input image information makes it possible to improve the visibility of sub-information restored from printed composite image information without increasing a similar risk.
  • As described above, according to each embodiment of the present invention, there are provided an image processing system and image processing method and apparatus which can create composite image information constituted by main image information and another additional sub-information embedded in the main image information in an invisible state for analog data output to a recording medium, and maintain the digital watermark information in the composite image information after the composite image information is recorded.
  • According to each embodiment of the present invention, there are provided an image processing system and image processing method and apparatus which can apply a digital watermarking technique to a to-be-recorded image while maintaining high gradation performance in the fusion thermal transfer recording scheme, and allow the watermark information (sub-information) to be stored and restored without destruction after it is recorded.
  • In addition, according to each embodiment of the present invention, there are provided an image processing system and image processing method and apparatus which can restore a digital watermark which is resistant to a deterioration in image information.
  • Furthermore, according to each embodiment of the present invention, there are provided an image processing method and apparatus which can improve restoration characteristics and visibility of embedded sub-information without increasing the risk of disclosing the sub-information.
  • It is explicitly stated that all features disclosed in the description and/or the claims are intended to be disclosed separately and independently from each other for the purpose of original disclosure as well as for the purpose of restricting the claimed invention independent of the composition of the features in the embodiments and/or the claims. It is explicitly stated that all value ranges or indications of groups of entities disclose every possible intermediate value or intermediate entity for the purpose of original disclosure as well as for the purpose of restricting the claimed invention, in particular as limits of value ranges.

Claims (25)

  1. An image processing system characterized by comprising a first image processing apparatus (100) which records, on a recording medium (M), composite image information created by embedding invisible sub-information in visible main image information, and a second image processing apparatus (110) which restores the sub-information from the composite image information recorded on the recording medium (M) by the first image processing apparatus (100),
       the first image processing apparatus (100) including
       a pre-processing unit (102,103) which performs, for main image information, pre-processing corresponding to pixel formation processing for image recording in the first image processing apparatus (100),
       an embedding processing unit (104) which creates composite image information by embedding sub-information in main image information in an invisible state using the main image information, the sub-information, and key information used to restore the sub-information, and
       a recording unit (106) which records the composite image information created by the embedding processing unit (104) on a recording medium (M), and
       the second image processing apparatus (110) including
       an image input unit (111) which inputs the composite image information from the recording medium (M) on which the composite image information is recorded by the recording unit (106) of the first image processing apparatus (100),
       a frequency component extracting unit (112) which extracts a spatial frequency component unique to the key information from the composite image information input by the image input unit (111), and
       a reconstructing unit (113) which reconstructs the sub-information from the spatial frequency component extracted by the frequency component extracting unit (112).
  2. An image processing system characterized by comprising a first image processing apparatus (100) which records, on a recording medium (M), composite image information created by embedding invisible sub-information in visible main image information, and a second image processing apparatus (110) which restores the sub-information from the composite image information recorded on the recording medium (M) by the first image processing apparatus (100),
       the first image processing apparatus (100) including
       a first pre-processing unit (102) which performs, for main image information, first pre-processing corresponding to pixel formation processing for image recording in the first image processing apparatus (100),
       a second pre-processing unit (103) which performs geometric transformation with respect to the main image information having undergone the first pre-processing by the first pre-processing unit (102),
       an embedding processing unit (104) which creates composite image information by embedding sub-information in main image information in an invisible state using the main image information, the sub-information, and key information used to restore the sub-information,
       an inverse transformation unit (105) which performs transformation processing inverse to the transformation processing by the second pre-processing unit (103) with respect to the composite image information created by the embedding processing unit (104), and
       a recording unit (106) which records the composite image information having undergone the inverse transformation processing by the inverse transformation unit (105) by an alternate driving/recording scheme of alternately forming even-numbered and odd-numbered pixels in a main scanning direction of a recording device on a recording line basis, and
       the second image processing apparatus (110) including
       an image input unit (111) which inputs the composite image information from the recording medium (M) on which the composite image information is recorded by the recording unit (106) of the first image processing apparatus (100),
       a frequency component extracting unit (112) which extracts a spatial frequency component unique to the key information from the composite image information input by the image input unit (111), and
       a reconstructing unit (113) which reconstructs the sub-information from the spatial frequency component extracted by the frequency component extracting unit (112).
  3. A system according to claim 2, characterized in that the first pre-processing unit (102) thins out main image information in accordance with pixel formation processing for image recording in the first image processing apparatus (100).
4. A system according to claim 3, characterized in that the second pre-processing unit (103) rotates the main image information, which is thinned out by the first pre-processing unit (102) in advance, through a predetermined angle, and then performs geometric transformation to remove thinned-out portions from the main image information, compresses effective portions of the main image information, and performs reconstruction.
  5. A system according to claim 2, characterized in that the frequency component extracting unit (112) extracts a spatial frequency component of the key information from the composite image information input by the image input unit (111) by using a frequency filter coefficient.
6. A system according to claim 2, characterized in that the reconstructing unit (113) extracts a change point at which a sign changes from the spatial frequency component extracted by the frequency component extracting unit (112), obtains a reference phase of the spatial frequency component by projecting the extracted change point, calculates a deviation of each coordinate of the spatial frequency component extracted by the frequency component extracting unit (112) from the obtained reference phase, and replaces a pixel value of a coordinate which deviates by not less than a predetermined threshold with a first value, and each of other pixel values with a second value, thereby reconstructing sub-information.
  7. A system according to claim 2, characterized by further comprising a determining unit (114) which determines authenticity of the recording medium on the basis of the sub-information reconstructed by the reconstructing unit (113).
  8. An image processing apparatus characterized by comprising:
    an image input unit (111) which inputs composite image information from a recording medium (M) on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information;
    a frequency component extracting unit (112) which extracts a spatial frequency component unique to the key information from the composite image information input by the image input unit (111); and
    a reconstructing unit (113) which reconstructs the sub-information from the spatial frequency component extracted by the frequency component extracting unit (112).
  9. An apparatus according to claim 8, characterized in that the frequency component extracting unit (112) extracts a spatial frequency component of the key information from the composite image information input by the image input unit (111) by using a frequency filter coefficient.
10. An apparatus according to claim 8, characterized in that the reconstructing unit (113) extracts a change point at which a sign changes from the spatial frequency component extracted by the frequency component extracting unit (112), obtains a reference phase of the spatial frequency component by projecting the extracted change point, calculates a deviation of each coordinate of the spatial frequency component extracted by the frequency component extracting unit (112) from the obtained reference phase, and replaces a pixel value of a coordinate which deviates by not less than a predetermined threshold with a first value, and each of other pixel values with a second value, thereby reconstructing sub-information.
  11. An apparatus according to claim 8, characterized by further comprising a determining unit (114) which determines authenticity of the recording medium (M) on the basis of the sub-information reconstructed by the reconstructing unit (113).
  12. An image processing apparatus characterized by comprising:
    an image input unit (11) which inputs composite image information from a recording medium (401) on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information;
    a color component information storage unit (12) which stores color component information;
    a color component extracting unit (13) which extracts a color component from the composite image information input by the image input unit (11) on the basis of the color component information stored in the color component information storage unit (12);
    a frequency component extracting unit (14) which extracts a spatial frequency component unique to the key information from the color component extracted by the color component extracting unit (13); and
    a reconstructing unit (15) which reconstructs the sub-information from the spatial frequency component extracted by the frequency component extracting unit (14).
  13. An apparatus according to claim 12, characterized in that the color component extracted from the composite image information is a color component corresponding to a color of ink exhibiting a highest gradation characteristic when the composite image information is recorded, and information of the color component is stored in the color component information storage unit (12).
  14. An apparatus according to claim 12, characterized in that the color component extracted from the composite image information is a color component corresponding to a color of ink exhibiting a highest gradation characteristic when the composite image information is input, and information of the color component is stored in the color component information storage unit (12).
  15. An image processing apparatus characterized by comprising:
    an image input unit (21) which inputs composite image information from a recording medium (401) on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information;
    an area extracting unit (22) which extracts a local area from the composite image information input by the image input unit (21);
    a color feature extracting unit (23) which extracts a color feature in the local area extracted by the area extracting unit (22) from the local area;
    a color combining unit (24) which creates color component composite image information by combining color components on the basis of the color feature extracted by the color feature extracting unit (23);
    a frequency component extracting unit (25) which extracts a spatial frequency component unique to the key information from the color component composite image information created by the color combining unit (24); and
    a reconstructing unit (26) which reconstructs the sub-information from the spatial frequency component extracted by the frequency component extracting unit (25).
  16. An image processing apparatus characterized by comprising:
    an image input unit (41) which inputs composite image information from a recording medium (M) on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information;
    an area extracting unit (42) which extracts a local area from the composite image information input by the image input unit (41);
    a color feature extracting unit (43) which extracts a color feature in the local area extracted by the area extracting unit (42) from the local area;
    a reconstruction parameter determining unit (44) which determines a reconstruction parameter on the basis of the color feature extracted by the color feature extracting unit (43);
    a frequency component extracting unit (45) which extracts a spatial frequency component unique to the key information from the composite image information input by the image input unit (41); and
    a reconstructing unit (46,47) which reconstructs the sub-information from the spatial frequency component extracted by the frequency component extracting unit (45) by using the reconstruction parameter determined by the reconstruction parameter determining unit (44).
  17. An apparatus according to claim 16, characterized in that the reconstruction parameter comprises an amplification coefficient for amplifying the spatial frequency component, and the reconstructing unit (46,47) includes an amplifying unit (46) which amplifies the spatial frequency component by using the amplification coefficient.
  18. An apparatus according to claim 16, characterized in that the reconstruction parameter comprises a threshold for binarizing the spatial frequency component, and the reconstructing unit (46,47) includes a binarizing unit which binarizes the spatial frequency component by using the threshold.
  19. An image processing method characterized by comprising:
    inputting (111) composite image information from a recording medium on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information;
    extracting (112) a spatial frequency component unique to the key information from the composite image information input from the recording medium; and
    reconstructing (113) the sub-information from the extracted spatial frequency component.
  20. A method according to claim 19, characterized in that extracting (112) the frequency component includes extracting a spatial frequency component of the key information from the composite image information input from the recording medium (M) by using a frequency filter coefficient.
  21. A method according to claim 19, characterized in that reconstructing (113) includes extracting a change point at which a sign changes from the spatial frequency component, obtaining a reference phase of the spatial frequency component by projecting the extracted change point, calculating a deviation of each coordinate of the spatial frequency component from the reference phase, and replacing a pixel value of a coordinate which deviates by not less than a predetermined threshold with a first value, and each of other pixel values with a second value, thereby reconstructing sub-information.
  22. A method according to claim 19, characterized by further comprising determining (114) authenticity of the recording medium on the basis of the reconstructed sub-information.
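    Claim 21 is the most algorithmic of these steps: locate the points where the sign of the filtered component changes, project them to obtain a reference phase, compute each coordinate's deviation from that phase, and binarize on a deviation threshold. A minimal sketch under stated assumptions (a horizontal carrier of known period in pixels; all names hypothetical):

```python
import numpy as np

def reconstruct_sub_information(freq, period, dev_threshold):
    """Hypothetical rendering of claim 21 on a signed, band-pass filtered
    spatial frequency component `freq` (2-D float array)."""
    # 1. Change points: pixels where the sign of the component flips.
    sign = np.sign(freq)
    change = np.zeros(freq.shape, dtype=bool)
    change[:, 1:] = sign[:, 1:] != sign[:, :-1]
    # 2. Reference phase: project change-point x-coordinates onto the period.
    xs = np.nonzero(change)[1]
    ref_phase = int(np.bincount(xs % period).argmax()) if xs.size else 0
    # 3. Circular deviation of each x-coordinate from the reference phase.
    cols = np.arange(freq.shape[1])
    dev = np.abs(((cols - ref_phase + period // 2) % period) - period // 2)
    # 4. First value where the deviation meets the threshold, second elsewhere.
    return np.where(np.broadcast_to(dev, freq.shape) >= dev_threshold,
                    255, 0).astype(np.uint8)
```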
  23. An image processing method characterized by comprising:
    inputting (11) composite image information from a recording medium (401) on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information;
    extracting (13) a color component from the composite image information input from the recording medium (401) on the basis of the color component information stored in a color component information storage unit (12);
    extracting (14) a spatial frequency component unique to the key information from the extracted color component; and
    reconstructing (15) the sub-information from the extracted spatial frequency component.
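    In claim 23, the color component named in the stored color component information is isolated first, and the key-specific spatial frequency component is then filtered out of it. A minimal FFT band-pass sketch follows; the channel index and frequency band are placeholders, and the patent's actual frequency filter coefficients are not reproduced here:

```python
import numpy as np

def extract_key_frequency(composite, channel, band):
    """Hypothetical sketch: take one color plane and keep only the radial
    spatial-frequency band assumed to carry the key information."""
    plane = composite[..., channel].astype(np.float64)
    spectrum = np.fft.fft2(plane)
    fy = np.fft.fftfreq(plane.shape[0])[:, None]
    fx = np.fft.fftfreq(plane.shape[1])[None, :]
    radius = np.hypot(fy, fx)            # spatial frequency in cycles/pixel
    lo, hi = band
    return np.real(np.fft.ifft2(spectrum * ((radius >= lo) & (radius <= hi))))
```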
  24. An image processing method characterized by comprising:
    inputting (21) composite image information from a recording medium (401) on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information;
    extracting (22) a local area from the composite image information input from the recording medium (401);
    extracting (23) a color feature from the local area extracted from the composite image information;
    creating (24) color component composite image information by combining color components on the basis of the color feature extracted from the local area;
    extracting (25) a spatial frequency component unique to the key information from the created color component composite image information; and
    reconstructing (26) the sub-information from the extracted spatial frequency component.
  25. An image processing method characterized by comprising:
    inputting (41) composite image information from a recording medium (401) on which the composite image information is recorded, which is created by color difference modulation processing using visible main image information, sub-information embedded in the main image information in an invisible state, and key information used to restore the sub-information;
    extracting (42) a local area from the composite image information input from the recording medium (401);
    extracting (43) a color feature from the extracted local area;
    determining (44) a reconstruction parameter on the basis of the color feature extracted from the local area;
    extracting (45) a spatial frequency component unique to the key information from the composite image information input from the recording medium (401); and
    reconstructing (46,47) the sub-information from the extracted spatial frequency component by using the reconstruction parameter determined on the basis of the color feature.
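    Reusing the hypothetical helpers sketched above, the method of claim 25 could be strung together end to end as follows; the concrete channel, band, and area coordinates are placeholders:

```python
import numpy as np

# Stand-in for a scanned recording medium; real input would come from a scanner.
composite = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)

area = composite[16:32, 16:32]                        # extract a local area
gain, thr = parameters_from_color_feature(            # determine parameters
    area.reshape(-1, 3).mean(axis=0))                 # from the color feature
freq = extract_key_frequency(composite, channel=2, band=(0.2, 0.3))
sub_info = reconstruct_with_parameters(freq, gain, thr)
```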
EP04009683A 2003-04-25 2004-04-23 Image processing system Expired - Lifetime EP1471722B1 (en)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2003122346 2003-04-25
JP2003122346 2003-04-25
JP2004075020A JP4227048B2 (en) 2003-04-25 2004-03-16 Image processing system
JP2004075020 2004-03-16

Publications (3)

Publication Number Publication Date
EP1471722A2 true EP1471722A2 (en) 2004-10-27
EP1471722A3 EP1471722A3 (en) 2005-07-06
EP1471722B1 EP1471722B1 (en) 2009-04-08

Family

ID=32964985

Family Applications (1)

Application Number Title Priority Date Filing Date
EP04009683A Expired - Lifetime EP1471722B1 (en) 2003-04-25 2004-04-23 Image processing system

Country Status (8)

Country Link
US (1) US6883982B2 (en)
EP (1) EP1471722B1 (en)
JP (1) JP4227048B2 (en)
KR (1) KR100605432B1 (en)
AT (1) ATE428263T1 (en)
CA (1) CA2465088A1 (en)
DE (1) DE602004020416D1 (en)
TW (1) TWI280489B (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1763218A2 (en) 2005-09-07 2007-03-14 Kabushiki Kaisha Toshiba Image processing method, image processing apparatus and recording material
EP1833237A2 (en) 2006-03-07 2007-09-12 Kabushiki Kaisha Toshiba Retrieval of information embedded in an image
EP2009897A1 (en) 2007-06-27 2008-12-31 Kabushiki Kaisha Toshiba Image processing apparatus, image processing method, image forming apparatus, image forming method, and recorded material
EP2043041A1 (en) 2007-09-27 2009-04-01 Kabushiki Kaisha Toshiba Image processing method and image processing device

Families Citing this family (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7162035B1 (en) 2000-05-24 2007-01-09 Tracer Detection Technology Corp. Authentication method and system
KR100878518B1 (en) * 2001-12-03 2009-01-13 삼성전자주식회사 Apparatus and method for embedding watermark into original information and transmitting watermarked information thereby and recovering watermark therefrom
JP4167590B2 (en) * 2003-12-22 2008-10-15 株式会社東芝 Image processing method
JP4688797B2 (en) * 2004-06-09 2011-05-25 パナソニック株式会社 Copy control information decision device
US7394567B2 (en) * 2004-10-21 2008-07-01 Seiko Epson Corporation Data embedding scheme for duplex color laser printer
JP4630765B2 (en) * 2005-08-26 2011-02-09 株式会社東芝 Image processing method and image processing apparatus
US7742619B2 (en) * 2005-12-21 2010-06-22 Texas Instruments Incorporated Image watermarking based on sequency and wavelet transforms
JP2008085875A (en) * 2006-09-28 2008-04-10 Toshiba Corp Image processing method, and image processor
KR20080049360A (en) * 2006-11-30 2008-06-04 삼성전자주식회사 The method of transmitting color gamut and the image device thereof
US8126288B2 (en) * 2007-01-31 2012-02-28 A School Juridical Person Fujita Educational Institution Image processing apparatus
PT2115694E (en) * 2007-02-05 2014-02-24 Nds Ltd System for embedding data
JP5002392B2 (en) * 2007-06-27 2012-08-15 株式会社東芝 Image processing apparatus and image processing method
JP4922205B2 (en) * 2007-08-17 2012-04-25 株式会社東芝 Image processing method and image processing apparatus
US7995196B1 (en) 2008-04-23 2011-08-09 Tracer Detection Technology Corp. Authentication method and system
US8786666B2 (en) 2010-04-27 2014-07-22 Lifesize Communications, Inc. Providing separate video and presentation streams to a recording server
US8786667B2 (en) 2011-04-26 2014-07-22 Lifesize Communications, Inc. Distributed recording of a videoconference in multiple formats
US8780166B2 (en) 2011-04-26 2014-07-15 Lifesize Communications, Inc. Collaborative recording of a videoconference using a recording server
US8868902B1 (en) * 2013-07-01 2014-10-21 Cryptite LLC Characteristically shaped colorgram tokens in mobile transactions
US9332309B2 (en) * 2012-06-08 2016-05-03 Apple Inc. Sync frame recovery in real time video transmission system
US9477884B2 (en) * 2012-06-14 2016-10-25 Digimarc Corporation Methods and systems for signal processing
US10826900B1 (en) * 2014-12-31 2020-11-03 Morphotrust Usa, Llc Machine-readable verification of digital identifications
US10050796B2 (en) * 2016-11-09 2018-08-14 Arizona Board Of Regents On Behalf Of Northern Arizona University Encoding ternary data for PUF environments
JP7315949B2 (en) 2019-04-26 2023-07-27 学校法人 関西大学 HIGH STRENGTH GEL BODY AND METHOD FOR MAKING THE SAME AND HYDROGEL AND METHOD FOR MAKING THE SAME
CN111986127B (en) * 2019-05-22 2022-03-08 腾讯科技(深圳)有限公司 Image processing method and device, computer equipment and storage medium
JP2023042893A (en) * 2021-09-15 2023-03-28 株式会社リコー Image processing apparatus, reading system, image formation system and feature amount detection method


Family Cites Families (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0659739A (en) 1992-08-07 1994-03-04 Sumitomo Heavy Ind Ltd Double table revolving device
US5407893A (en) * 1993-08-19 1995-04-18 Konica Corporation Material for making identification cards
US6345104B1 (en) * 1994-03-17 2002-02-05 Digimarc Corporation Digital watermarks and methods for security documents
JP3224480B2 (en) * 1994-09-30 2001-10-29 キヤノン株式会社 Color image processing equipment
US5995638A (en) * 1995-08-28 1999-11-30 Ecole Polytechnique Federale De Lausanne Methods and apparatus for authentication of documents by using the intensity profile of moire patterns
JP3547892B2 (en) 1996-03-14 2004-07-28 株式会社東芝 Image recording apparatus and image recording method
US6095566A (en) * 1996-03-14 2000-08-01 Kabushiki Kaisha Toshiba Image recorded product, image recording system, image reproducing system, and recording medium for use to superimpose-record/reproduce additional information
US6788347B1 (en) * 1997-03-12 2004-09-07 Matsushita Electric Industrial Co., Ltd. HDTV downconversion system
US5974150A (en) * 1997-09-30 1999-10-26 Tracer Detection Technology Corp. System and method for authentication of goods
JPH11168616A (en) 1997-12-03 1999-06-22 Toshiba Corp Image information processing method and image information processor
JP4015753B2 (en) 1998-06-11 2007-11-28 株式会社東芝 Image information processing method
DE69835133T8 (en) * 1997-12-03 2007-05-16 Kabushiki Kaisha Toshiba, Kawasaki Image information processing method and method for preventing counterfeiting of certificates and the like
US6519340B1 (en) * 1998-03-17 2003-02-11 The University Of Connecticut Method and apparatus for encryption using partial information
US5946414A (en) * 1998-08-28 1999-08-31 Xerox Corporation Encoding data in color images using patterned color modulated image regions
JP2000182086A (en) 1998-12-18 2000-06-30 Toshiba Corp Ticket issuing method and ticket collation method
US6556688B1 (en) * 1999-03-15 2003-04-29 Seiko Epson Corporation Watermarking with random zero-mean patches for printer tracking
WO2001031583A1 (en) * 1999-10-26 2001-05-03 Koninklijke Philips Electronics N.V. Image processing method, system and apparatus for noise reduction in an image sequence representing a threadlike structure
JP4495824B2 (en) 2000-03-21 2010-07-07 株式会社東芝 Information processing method
JP4038956B2 (en) 2000-03-23 2008-01-30 凸版印刷株式会社 Image generation system and image generation method
JP4554771B2 (en) 2000-06-20 2010-09-29 パナソニック株式会社 Legitimacy authentication system, personal certificate issuance system and personal certificate
EP1340192B1 (en) * 2000-11-22 2005-04-27 Concealogram Ltd. Hiding images in halftone pictures
US6937772B2 (en) * 2000-12-20 2005-08-30 Eastman Kodak Company Multiresolution based method for removing noise from digital images
JP4828739B2 (en) * 2001-08-17 2011-11-30 株式会社東芝 Thermal transfer recording method, printer system, and thermal transfer recording apparatus
US6829393B2 (en) * 2001-09-20 2004-12-07 Peter Allan Jansson Method, program and apparatus for efficiently removing stray-flux effects by selected-ordinate image processing

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20030031341A1 (en) * 1993-11-18 2003-02-13 Rhoads Geoffrey B. Printable interfaces and digital linking with embedded codes
US20030053653A1 (en) * 1995-05-08 2003-03-20 Rhoads Geoffrey B. Watermark embedder and reader
US20020164048A1 (en) * 1998-05-12 2002-11-07 Lucent Technologies Inc. Transform domain image watermarking method and system
US20020002679A1 (en) * 2000-04-07 2002-01-03 Tomochika Murakami Image processor and image processing method
EP1215880A2 (en) * 2000-12-07 2002-06-19 Sony United Kingdom Limited Embedding data in material

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1763218A2 (en) 2005-09-07 2007-03-14 Kabushiki Kaisha Toshiba Image processing method, image processing apparatus and recording material
EP1763218A3 (en) * 2005-09-07 2008-04-02 Kabushiki Kaisha Toshiba Image processing method, image processing apparatus and recording material
US7764847B2 (en) 2005-09-07 2010-07-27 Kabushiki Kaisha Toshiba Image processing method, image processing apparatus and recording material
EP1833237A2 (en) 2006-03-07 2007-09-12 Kabushiki Kaisha Toshiba Retrieval of information embedded in an image
EP1833237A3 (en) * 2006-03-07 2009-02-25 Kabushiki Kaisha Toshiba Retrieval of information embedded in an image
US8045794B2 (en) 2006-03-07 2011-10-25 Kabushiki Kaisha Toshiba Image processing method and device for restoring sub-information from composite image information
EP2009897A1 (en) 2007-06-27 2008-12-31 Kabushiki Kaisha Toshiba Image processing apparatus, image processing method, image forming apparatus, image forming method, and recorded material
US8314970B2 (en) 2007-06-27 2012-11-20 Kabushiki Kaisha Toshiba Image processing apparatus, image processing method, image forming apparatus, image forming method, and recorded material
EP2043041A1 (en) 2007-09-27 2009-04-01 Kabushiki Kaisha Toshiba Image processing method and image processing device
US8320607B2 (en) 2007-09-27 2012-11-27 Kabushiki Kaisha Toshiba Image processing method and image processing device for embedding invisible sub information into main images

Also Published As

Publication number Publication date
US6883982B2 (en) 2005-04-26
KR20040092456A (en) 2004-11-03
JP2004343712A (en) 2004-12-02
TW200426606A (en) 2004-12-01
KR100605432B1 (en) 2006-07-31
CA2465088A1 (en) 2004-10-25
ATE428263T1 (en) 2009-04-15
JP4227048B2 (en) 2009-02-18
US20040215965A1 (en) 2004-10-28
EP1471722A3 (en) 2005-07-06
EP1471722B1 (en) 2009-04-08
TWI280489B (en) 2007-05-01
DE602004020416D1 (en) 2009-05-20

Similar Documents

Publication Publication Date Title
EP1471722B1 (en) Image processing system
US7489800B2 (en) Image processing method
JP4167590B2 (en) Image processing method
AU2010294295B2 (en) A method for generating a security bi-level image for a banknote
US20040091050A1 (en) Digital image watermarking apparatus and method
JP2005159438A (en) Image processing method
JP4064863B2 (en) Image processing method
JP4746663B2 (en) Image processing apparatus and image processing method
JP4088191B2 (en) Image processing method and image recording apparatus
JP4686578B2 (en) Image processing method
JP2001094755A (en) Image processing method
JP2009033443A (en) Image processor, and image processing method
Li Binary Image Information Hiding Based on research Gui Review
CA2545472A1 (en) Image processing method

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20040423

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL HR LT LV MK

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL HR LT LV MK

AKX Designation fees paid

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LI LU MC NL PL PT RO SE SI SK TR

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REG Reference to a national code

Ref country code: CH

Ref legal event code: EP

REG Reference to a national code

Ref country code: IE

Ref legal event code: FG4D

REF Corresponds to:

Ref document number: 602004020416

Country of ref document: DE

Date of ref document: 20090520

Kind code of ref document: P

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090408

NLV1 Nl: lapsed or annulled due to failure to fulfill the requirements of art. 29p and 29m of the patents act
PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: PT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090908

Ref country code: FI

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090408

Ref country code: AT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090408

Ref country code: ES

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090719

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: NL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090408

Ref country code: SE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090708

Ref country code: PL

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090408

REG Reference to a national code

Ref country code: CH

Ref legal event code: PL

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090408

Ref country code: RO

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090408

Ref country code: LI

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090430

Ref country code: CZ

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090408

Ref country code: EE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090408

Ref country code: CH

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090430

PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: SK

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090408

Ref country code: BE

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090408

26N No opposition filed

Effective date: 20100111

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20090708

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: BG

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090708

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: MC

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090430

Ref country code: IE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090423

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090708

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090709

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: IT

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090408

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: LU

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20090423

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: HU

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20091009

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: TR

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090408

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: CY

Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT

Effective date: 20090408

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 13

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 14

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 15

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20220308

Year of fee payment: 19

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: DE

Payment date: 20220302

Year of fee payment: 19

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 602004020416

Country of ref document: DE

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20230430

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20231103