WO2011105164A1 - Procédé de traitement d'image - Google Patents

Procédé de traitement d'image Download PDF

Info

Publication number
WO2011105164A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
superimposed
information
stego
superimposed image
Prior art date
Application number
PCT/JP2011/051715
Other languages
English (en)
Japanese (ja)
Inventor
山田 隆亮
由泰 田中
竜 海老澤
Original Assignee
株式会社日立製作所
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社日立製作所 (Hitachi, Ltd.)
Publication of WO2011105164A1 publication Critical patent/WO2011105164A1/fr

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00838Preventing unauthorised reproduction
    • H04N1/00856Preventive measures
    • H04N1/00864Modifying the reproduction, e.g. outputting a modified copy of a scanned original
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00General purpose image data processing
    • G06T1/0021Image watermarking
    • G06T1/005Robust watermarking, e.g. average attack or collusion attack resistant
    • G06T1/0064Geometric transform invariant watermarking, e.g. affine transform invariant
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/00838Preventing unauthorised reproduction
    • H04N1/00883Auto-copy-preventive originals, i.e. originals that are designed not to allow faithful reproduction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/32Circuits or arrangements for control or supervision between transmitter and receiver or between image input and image output device, e.g. between a still-image camera and its memory or between a still-image camera and a printer device
    • H04N1/32101Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title
    • H04N1/32144Display, printing, storage or transmission of additional information, e.g. ID code, date and time or title embedded in the image data, i.e. enclosed or integrated in the image, e.g. watermark, super-imposed logo or stamp
    • H04N1/32149Methods relating to embedding, encoding, decoding, detection or retrieval operations
    • H04N1/32203Spatial or amplitude domain methods
    • H04N1/32208Spatial or amplitude domain methods involving changing the magnitude of selected pixels, e.g. overlay of information or super-imposition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2201/00General purpose image data processing
    • G06T2201/005Image watermarking
    • G06T2201/0601Image watermarking whereby calibration information is embedded in the watermark, e.g. a grid, a scale, a list of transformations

Definitions

  • The present invention relates to a method of embedding information as a latent image in an image such as a still image or a moving image so that it is difficult for the human eye to see, and to a method of detecting and processing the embedded latent-image information when some processing is applied to the image.
  • the techniques for giving information to images are roughly classified into the following three types.
  • One such technique embeds identification information invisibly in an image by using a redundant portion of the content.
  • the embedded identification information is invisible to the person viewing the content, but can be extracted from the content using a special detection method.
  • JP-A-8-197828 (Abstract); JP 2003-118200 A (paragraph 0009); JP 2000-76418 A (paragraph 0011)
  • latent image technology is a technology developed mainly for printed materials, and is difficult to use for digital content.
  • The latent image technique is a technique for enhancing security and services by hiding a latent image.
  • The original image A is referred to as a cover image, and the image B in which information is hidden is referred to as a stego image.
  • An image hidden as a latent image is referred to as a superimposed image.
  • An image refers to a still image, a moving image, a 3D moving image, or the like.
  • Stego images include a stego image B1 in which the superimposed image remains latent and invisible, and a stego image B2 in which the superimposed image has been visualized and is visible.
  • the stego image B2 is generated by applying image processing to the stego image B1 or changing the viewing method.
  • the present invention is also applicable to a case where image processing is applied to a stego image when the stego image is digital content, in addition to conventional printing applications.
  • In the technique of Patent Document 1, the fine lines added to the stego image are unlikely to be changed greatly by digital image processing.
  • In the technique of Patent Document 2, on the other hand, the minute points are changed too greatly by such processing.
  • a noise reduction filter process may be performed in the compression encoding process, and a superimposed image expressed by minute points is likely to disappear.
  • When the cover image is a pattern composed of thin lines, the entire pattern is easily smoothed by angle-of-view reduction processing; once both the thin lines and the thick lines are smoothed to the grayscale average value over a large area, no difference in density can be visually recognized between them.
  • The technique of Patent Document 3 remains invisible on the stego image, and the superimposed image cannot be directly recognized by human eyes.
  • The present invention provides a stego image creation method and a detection method in which hidden information is visualized (passive visual detection is possible) when analog / digital image processing is performed on a stego image of digital content.
  • The present invention provides a latent image technique that focuses on the components emphasized after digital image processing such as reduction processing and recompression encoding processing.
  • the present invention also provides a method for detecting consciously hidden information.
  • an image may be referred to as data.
  • the signal component of the superimposed image is enhanced by the geometric deformation characteristics and the recompression coding characteristics.
  • the superimposed image is visualized and made visible to the human eye.
  • information such as a character hidden in the stego data as a latent image can be detected as necessary.
  • the detection result can be reflected in other devices and system operations.
  • An image superimposing apparatus can be realized by loading a program (or module) that implements the image superimposing method onto a computer equipped with arithmetic means, input means, output means, a memory (primary storage means), a disk (secondary storage means), and communication means, and running it there.
  • The image superposition method includes superposition position map alignment designation processing, scramble processing, time-space region division processing, a buffer, geometric deformation characteristic setting processing, recompression characteristic setting processing, superimposition data creation processing, superposition processing, compression coding parameter adjustment processing, stego data output processing, auxiliary information output processing, digital signature processing, encryption processing, image quality analysis processing, and conversion coefficient / quantization table extraction processing. Further, the cover data and the superimposed image are aligned using the superposition position map, and the intensity of the superimposed image is adjusted using the intensity map table.
  • The superimposed image is processed based on the following technical principle. Attention is paid to a partial area of the cover image that becomes the size of the superimposed image when the cover image is reduced at a certain ratio R. In the superimposition process, attention is paid to locations A that correspond to the pattern of the superimposed image and locations B that do not, within this partial region of the cover image. For locations A, the average pixel value of the corresponding partial area on the cover image is slightly changed; locations B are left unchanged. When such changes are given to the cover image and the cover image is then reduced, the places where the average value was intentionally changed connect with one another and are perceived by humans, so that the superimposed image appears to float up.
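The averaging principle above can be sketched in a few lines of Python. This is an illustrative assumption of how the embedding might look, not the patent's actual implementation; the function names and the fixed shift `delta` are invented for the sketch. Shifting the block mean by a small delta at locations A while leaving locations B untouched keeps the mark hard to see at full size, but block-average reduction reveals it.

```python
import numpy as np

def embed_by_block_average(cover, mark, delta=2.0):
    """Slightly shift the mean of each cover block that corresponds to a
    'location A' pixel of the small superimposed image `mark` (nonzero),
    and leave 'location B' blocks (zero) unchanged."""
    hs, ws = cover.shape
    hd, wd = mark.shape
    bh, bw = hs // hd, ws // wd            # cover block mapped to one mark pixel
    stego = cover.astype(np.float64).copy()
    for i in range(hd):
        for j in range(wd):
            if mark[i, j]:                 # location A: part of the pattern
                stego[i*bh:(i+1)*bh, j*bw:(j+1)*bw] += delta
            # location B is not changed
    return np.clip(stego, 0, 255)

def reduce_by_average(img, hd, wd):
    """Reduction at ratio R that replaces each block by its average value,
    as in the averaging reduction discussed above."""
    hs, ws = img.shape
    return img.reshape(hd, hs // hd, wd, ws // wd).mean(axis=(1, 3))
```

At full resolution the per-pixel change is only `delta`; after `reduce_by_average`, every marked output pixel differs from its neighbours by exactly `delta`, so connected marked blocks become perceptible.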
  • the superimposed image is processed based on the following technical principle.
  • A cut-off frequency is set according to the recompression coding conditions (bit rate, etc.); frequency components higher than the cut-off frequency are removed from the superimposed image, and the signal power of components whose frequency is lower than the cut-off frequency and within a predetermined range is increased.
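The cutoff-and-boost step can be modelled with a 2-D DCT, as in the following sketch. The radial frequency index, the band width, and the boost factor are assumptions made for illustration; the patent does not specify these details.

```python
import numpy as np
from scipy.fft import dctn, idctn

def shape_for_recompression(mark, cutoff, boost=1.5):
    """Remove DCT components at or above `cutoff` (chosen from the expected
    re-encoding conditions) and amplify the surviving band just below it,
    so re-quantisation leaves the mark's energy largely intact."""
    c = dctn(mark.astype(np.float64), norm='ortho')
    h, w = c.shape
    u, v = np.meshgrid(np.arange(h), np.arange(w), indexing='ij')
    radial = u + v                        # simple diagonal frequency index
    c[radial >= cutoff] = 0.0             # drop components above the cutoff
    band = (radial >= cutoff // 2) & (radial < cutoff)
    c[band] *= boost                      # raise power just below the cutoff
    return idctn(c, norm='ortho')
```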
  • As a result, mosquito noise corresponding to the shape features of the superimposed image is emphasized.
  • When the cover image contains many intermediate and high frequency components, many of the intermediate and high frequency components of the superimposed image remain after the stego data on which it is superimposed undergoes recompression encoding.
  • the recompression coding process is performed for each partial area (macroblock) of interest. For this reason, continuity between the pixel value of the attention area and the pixel value of another area adjacent to the attention area is not ensured. Therefore, in human vision, block noise is recognized in addition to the emphasized mosquito noise at the position of the pixels constituting the superimposed image. When these visually recognized noises appear to be connected across a plurality of attention areas, it becomes easier for a person to perceive a superimposed image.
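The block-noise mechanism described above can be illustrated with a toy block codec. This is a deliberate simplification of JPEG/MPEG-style coding (the block size and the uniform quantisation step are arbitrary choices for the sketch): because each 8×8 block is quantised independently, a smooth gradient collapses into flat tiles with visible jumps at block boundaries.

```python
import numpy as np
from scipy.fft import dctn, idctn

def quantize_blocks(img, block=8, q=40.0):
    """Quantise each block's DCT coefficients independently, as block-based
    codecs do; continuity of pixel values across block boundaries is not
    preserved, producing block noise."""
    out = np.empty_like(img, dtype=np.float64)
    h, w = img.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            c = dctn(img[y:y+block, x:x+block].astype(np.float64), norm='ortho')
            c = np.round(c / q) * q       # coarse uniform quantisation
            out[y:y+block, x:x+block] = idctn(c, norm='ortho')
    return out
```

Applied to a smooth horizontal ramp, the jump between adjacent pixels inside a block shrinks while the jump across a block boundary grows, which is exactly the discontinuity a viewer perceives as block noise.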
  • effects obtained by a typical system to which the latent image technology disclosed in this specification is applied are as follows.
  • Content authentication: By superimposing authentication information, authority contact information, etc. as a character image, a content authentication means that facilitates verification work can be provided.
  • Falsification detection: By superimposing authenticity information as a character image, whether the image is a forgery or an original can be detected visually. No special device is required to check authenticity, and verification is easy; for example, the check pattern is visualized when the viewing screen size is changed by a player operation.
  • Copyright protection: By superimposing the video viewer ID and distribution route information as a check pattern, unauthorized copies can be traced back to their source.
  • Information hiding: By giving the superimposed image a barcode-like pattern expression, machine reading accuracy can be improved instead of relying on visual confirmation.
  • Software protection: Image processing software that outputs a stego image in which a product serial number is superimposed on the cover image makes it easy to prove which software processed an image.
  • Distribution management: By displaying a route, an administrator, a server name, etc. as a superimposed image, it becomes easy to prove the route through which the image has passed.
  • Billing: By displaying a charged / settled (or uncompleted) mark as a superimposed image on the stego image, the user can easily check the status of the content, such as its charging status.
  • the latent image technology disclosed in this specification can be applied to software, devices, services, systems, contents, media, and application applications for hiding information in images and detecting and processing the hidden information.
  • it can be used for content authentication, falsification detection, copyright protection, information leakage suppression, copy generation management, record management, information hiding, software protection, distribution management, billing management, and the like.
  • Disclosed are a stego image creation method, a detection method, and an operation method in which hidden information is visualized when image processing is performed on a stego image containing hidden information.
  • (a) to (d) illustrate the creation of an intensity map in detail: (a) is a flowchart of the intensity map creation method, (b) shows a superimposed image, (c) shows an intensity map, and (d) shows a reduced intensity map.
  • FIG. 6 relates to the provision of recompression coding characteristics: (c) is a graph in which high frequency components have been removed by applying a cutoff frequency, (d) shows the frequency components below the cutoff frequency after removing, from the components shown in (b), those at or above the cutoff frequency, (e) is a graph showing the frequency components of the stego image to be output, obtained by processing the band (bandwidth fd), (f) shows a partial region of the cover image, (g) shows the data group in that partial region of the superimposed image to which the geometric deformation characteristics have been imparted, and (h) shows the quantized transform coefficient table.
  • FIG. 8 is an explanatory diagram of the basic principle by which a superposed image is visualized when a stego image that has been given recompression coding characteristics undergoes compression-encoded image processing: (e) is a graph showing the frequency components of the output stego image, (f) shows the spatial components (pixel values) obtained by inversely transforming those frequency components, i.e., the attention area of the cover image carrying the superimposed image given the geometric deformation and recompression coding characteristics, (g) is a graph showing the frequency components obtained by transforming a partial region of the stego image through recompression encoding processing, (h) shows that inversely transforming the frequency components of (g) yields a picture in which the superimposed image is superimposed on the attention area of the cover image, and (d′) is a graph of the frequency characteristics obtained by frequency-converting the attention area of the stego image again.
  • Another figure illustrates the detailed procedure for imparting the recompression encoding characteristic to a stego image. Yet another is an image of a superimposed picture placed according to a superposition position map, in which (a) divides a cover picture into a plurality of partial regions having a plurality of different visualization characteristics (geometric deformation characteristic, recompression coding characteristic, etc.).
  • Figure 1 shows a conceptual diagram of latent image technology.
  • The latent image technique in this embodiment hides the superimposed image 102 in the cover data 101 so as not to be seen by human eyes, creates stego data 104, and detects the superimposed image 102 from the stego data 106 that has undergone image processing 105.
  • The superimposing process 103 is performed by minutely changing the cover data. Since the superimposed image is invisible to the human eye, the stego data 104 can be sold and distributed normally. However, when the stego data 104 undergoes the image processing 105, the superimposed image is visualized and becomes visible to the human eye. For example, fraud deterrence or authenticity proof can be achieved by writing fraud-check characters in the superimposed image 102 or displaying the image generation time to be proved.
  • Fig. 2 shows a system implementation example of latent image technology.
  • the user inputs the cover data 101 to the input device of the image superimposing device 203. Furthermore, the user designates characters, voice, images, moving images, database (DB) search results, etc., and the user inputs them as the superimposed image 102. Alternatively, a position time generation source using a GPS (Global Positioning System) or the like, or an output result of another system such as a DB may be input to the image superimposing device 203 via a network.
  • the image superimposing device 203 superimposes the superimposed image 102 on the cover data 101 and outputs the stego data 104.
  • the ID of the stego data distribution destination user and a character string indicating that copying of the image is prohibited are designated as the superimposed image 102 or the check pattern.
  • When the stego data 104 is input to the decoder 205 (player), it is output to the monitor 206 and appears to the person 207 the same as the cover data 101.
  • the image superimposing device 203 and the stego data 104 may be built in the decoder 205.
  • a processed image 210 that has undergone the image processing is output.
  • The processed image 210 is input to the decoder 211 (player).
  • the processed image 210 is output to the monitor 212, and the person 213 sees the superimposed image superimposed on the cover data.
  • The superimposed image 102, or the information 215 such as characters, sounds, and images from which the superimposed image 102 was generated and which was designated by the user in advance, is output.
  • The output of the detection device 214 may be connected to another system or service via a communication device such as a network, and the system operation and service content may be controlled according to the detection information. If the detection information 215 is copy control information equivalent to 4 bits (copying prohibited, copying allowed, etc.), device control such as copy control, retransmission control, and management record control corresponding to the detection information 215 can be performed on the stego data in another system.
  • Fig. 3 shows an example of a latent image detection method.
  • the latent image can be detected through an image processing apparatus such as a transcoder, but there are other methods for easily detecting the latent image. Even if the reproduced image is reduced and displayed by adjusting the decoder 302, the latent image can be detected by the eyes of the person 304.
  • When the decoder 302 plays back normally and its output to the monitor 303 (invisible to humans at this stage) is captured by the video camera 305, the digital camera 306, or the camera-equipped mobile phone 307, the stego data 301 is subjected to DA-AD conversion (analog conversion), geometric deformation (reduction, etc.), and recompression coding processing.
  • Since such processing visualizes the latent image in the same manner as the transcoder, the processed image after recompression coding can be detected by the eyes of the person 308 through these methods.
  • a screen may be printed using the video printer 307.
  • FIG. 4 shows a configuration example of a processing unit that realizes the image superimposing apparatus.
  • The image superimposing device 203 can be realized by running, on the memory of a computer including an arithmetic device (CPU), an input device, an output device, a memory (primary storage device), a disk (secondary storage device), and a communication device, a program (or module) that implements the image superimposing method.
  • the program may be stored in advance in a memory or a disk in the computer, or may be introduced from another device via a medium that can be used by the computer when necessary.
  • the medium refers to, for example, a storage medium that can be attached to and detached from the computer, or a communication medium (that is, a wired, wireless, optical network, or a carrier wave or digital signal that propagates through the network).
  • Each processing unit shown in FIG. 4 is realized by the CPU executing a program (or module) that realizes each processing unit stored in a memory or a disk.
  • As input reception processes realized by the program, there are a signature key generation / input designation process 401, a work key generation / input designation process 402, a cover data input process 403, a superimposition information input process 404, a superimposition data input / editing / imaging process 405, and a scramble key generation / input designation process 406.
  • the input device includes a multimodal user interface. In each of the above processes, the input by the user and the input from another system are accepted.
  • the conversion coefficient / quantization table extraction process 408 extracts a conversion coefficient and a quantization table as image features from the cover data that the cover data input process 403 has received.
  • When the cover data is compression-encoded data, it has often been frequency-converted.
  • The conversion coefficient indicates a DCT (Discrete Cosine Transform) coefficient.
  • the information received by the superimposition information input process 404 is information that is the basis for alignment of the superimposition data with respect to the cover data.
  • the upper right, the center, the whole, etc. are designated, and a superposition position map is created based on these. See FIG. 13 for the concept of alignment.
  • If the user designation information received by the superimposition data input / editing / imaging process 405 is a character, it is converted into superimposition data by rendering it as a character image.
  • If it is voice, it is converted into superimposition data by converting it into characters and then into a character image through voice recognition. Text, sound, images, videos, and so on may also be edited.
  • Parameters such as the original size of the superimposition data, the size of the cover data, and the scaling factor at which the geometric deformation will make the superimposed data appear are designated.
  • A predetermined value, such as reacting at an area ratio of 1/2, may be used, or a plurality of parameters may be determined separately for each partial region of the image.
  • the superimposition information input process 404 accepts input of user designation information related to the superposition position map.
  • the superimposition position map alignment designation process 409 generates a superposition position map.
  • The superposition position map is data that determines, in the image coordinate system of the cover data, the size of the superimposed image (or its reduced / enlarged version), its position, the frame time, the number, the visualization characteristics, the type, the memory storage address of the original data, and so on (see also FIG. 13).
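One way to picture a superposition position map entry is as a plain record holding the fields just listed. The field names and types below are assumptions for illustration; the patent describes only which items the map determines, not a concrete data layout.

```python
from dataclasses import dataclass

@dataclass
class SuperpositionMapEntry:
    """Hypothetical record for one entry of the superposition position map;
    all names and types are illustrative assumptions."""
    size: tuple          # (width, height) of the (scaled) superimposed image
    position: tuple      # placement in the cover's image coordinate system
    frame_time: float    # frame time, for moving images
    number: int          # sequence number of the entry
    visualization: str   # e.g. 'geometric' or 'recompression' characteristic
    kind: str            # type of the superimposed data
    address: int         # memory storage address of the original data
```

A map would then simply be a list of such entries, one per superposition position candidate.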
  • information may be embedded in the superimposed image using a digital watermark technique.
  • A digital watermark technique that can be applied to binary images, such as check characters, is used.
  • Information specified by the input device may be used, or automatically generated information such as time may be used.
  • the scramble process 411 encrypts the superimposed data. This is performed when it is designated in the superimposition information input process 404 to encrypt the superimposition data.
  • The intensity map is scrambled by the scramble process 416 using the same scramble key as that used for the superimposed image, so that their positions remain in correspondence. Thereafter, the superimposition data creation process 417 creates superimposition data reflecting the intensity map.
  • Time space area division processing 412 divides time and space into partial areas.
  • The space-time region division process 412 follows the designation of the superposition position map that has been accepted / generated by the superimposition information input process 404 and the superposition position map alignment designation process 409.
  • a buffer 413 for processing may be used depending on the frame type.
  • the temporal and spatial domain division processing 412 performs geometric deformation characteristic setting processing 414 and recompression characteristic setting processing on the superimposed image (or scrambled superimposed image) output from the superimposed data input / edit / image processing 405.
  • superimposed data creation processing 417 creates superimposed data.
  • The intensity input / intensity map setting process 407 inputs / sets / generates an intensity map designated by the user; the intensity of the superimposed data is later adjusted with it, assisting the work of creating the superimposition data.
  • The intensity map may be determined in advance by the user to specify the amount of change in the pixel values of the image or the upper and lower limits of the amount of change in the frequency components, or it may be calculated from the relationship with the brightness values of the cover data.
  • the data of the created or input or predetermined intensity map is stored in an intensity map table on the memory or disk (see FIG. 8 for the concept of the intensity map table).
  • the superimposition process 418 superimposes the superimposition data and the cover data along the superimposition position map designated by the superposition position map alignment designation process 409.
  • the image may be superimposed by adding the frequency components of the image, or by adding the pixel values of the luminance / color difference signal components.
  • The image quality analysis process 424 may adjust, in advance, the strength of the superimposed data component with respect to the cover data, or adjust the superimposed data itself, so that the image quality of the cover data does not deteriorate significantly.
  • The compression encoding parameter adjustment process 419 compression-encodes the image that has undergone the superimposition process if it is uncompressed. If the image has already been compression-encoded and was generated by adding transform coefficients, parameters such as header information and file management information are adjusted to reflect the data changes accompanying the operation (for example, data size, upper limit values, and replacement of the changed quantization table).
  • Stego data output processing 420 outputs stego data including a latent image.
  • the encryption process 421 encrypts the stego data using the work key generated / input accepted by the work key generation / input designation process 402 and outputs it.
  • the electronic signature processing 422 may add the electronic signature data and certificate data using the signature key generated / input accepted by the signature key generation / input designation processing 401.
  • Auxiliary information output processing 423 outputs a scramble key, work key, electronic signature, certificate, signature key, time stamp, time information, management information, user information, and the like.
  • The output destination of the stego data output process 420 and that of the auxiliary information output process 423 may be a disk, a monitor, a portable medium, or another system connected to the network.
  • FIG. 5 is a conceptual diagram showing the principle of imparting geometric deformation characteristics to a superimposed image.
  • Consider the partial area yo 502 of the cover image 501 that becomes the size of the superimposed image when the cover image 501 is reduced by a certain ratio R.
  • The superimposed image ym 503 has width wd and height hd.
  • The partial area yo 502 in the cover image 501 has width ws and height hs.
  • the function for obtaining the least common multiple is LCM ().
  • the area having the width LCM (ws, wd) and the height LCM (hs, hd) is equivalent to an integral multiple of the width and height of the superimposed image 503.
  • the same rectangular area 505 is equivalent to a value obtained by multiplying the width and height of the attention area 502 of the cover image 501 by another integer. Paying attention to these geometrical relationships, one pixel 509 in the superimposed image 503 corresponds to a (ws / wd) ⁇ (hs / hd) region 508 in the cover image 501.
  • The small black block 509 in the superimposed image corresponds to one pixel of the superimposed image, whereas the small black block 508 of the cover image and the small black block 506 of the virtual rectangular area do not correspond directly to individual pixels; they merely show relative position and size within the whole image.
  • The (ws / wd) × (hs / hd) attention area 508 holds the pixel values that are reflected into one pixel of the superimposed-image-sized image at the time of reduction.
  • the average pixel value of the attention area 508 may become the pixel value of one pixel of the reduced cover image after the cover image is reduced.
  • the pixel value of one specific pixel in the attention area 508 may become the pixel value of one pixel of the reduced cover image after the cover image is reduced.
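The LCM-based geometry of FIG. 5 can be checked with a few lines of Python. This is a sketch of the relation described above (the function name is an invention for illustration): the virtual rectangle of width LCM(ws, wd) and height LCM(hs, hd) is an integer multiple of both the superimposed image and the cover's attention area, so one pixel of the superimposed image corresponds to a (ws / wd) × (hs / hd) region of the cover.

```python
import math

def block_geometry(ws, hs, wd, hd):
    """Return the size of the cover region that maps to one pixel of the
    superimposed image, after verifying the LCM rectangle relation."""
    W, H = math.lcm(ws, wd), math.lcm(hs, hd)
    assert W % wd == 0 and H % hd == 0   # integer multiple of the mark size
    assert W % ws == 0 and H % hs == 0   # integer multiple of the cover area
    return ws / wd, hs / hd              # cover pixels per mark pixel
```

For example, a 1920×1080 attention area with a 64×36 superimposed image gives a 30×30 cover region per mark pixel.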
  • FIG. 6 is a diagram showing a processing procedure for imparting a geometric deformation characteristic to the superimposed image in the geometric deformation characteristic setting process 414.
  • the superimposed image that has been accepted / generated is referred to as a superimposed image sA.
  • the input accepted / generated image is used as the cover image 501.
  • the superposition position map generated by the superposition position map alignment designation process 409 is the size of the superposition image (or its reduced / enlarged image), the position of the superposition image, the frame time, the number, and the visualization characteristics in the image coordinate system of the cover data. , Type, data for determining the memory storage address of the original data, etc. (see also FIG. 13).
  • the superimposed image ym503 has a width wd and a height hd (see FIG. 5).
  • the attention area 507 of the cover image 501 one is selected from a plurality of overlapping position candidates indicated by the overlapping position map. As pre-processing of step 601, these processes are repeated / sequentially processed.
  • management information is embedded in the superimposed image sA using a digital watermark to create a superimposed image sB.
  • The management information may be input by a user, may be determined in advance, or may be input externally from another system (such as a time generation source).
  • In step 602, the superimposed image sB is enlarged xs1 times horizontally and ys1 times vertically; when sB is a binary image (1 bit per pixel), it is converted into a multi-valued image (e.g., 8 bits per pixel), yielding a multi-valued superimposed image sC.
  • xs1 = LCM(ws, wd) / wd
  • ys1 = LCM(hs, hd) / hd
  • In step 603, a low-pass filter (LPF) is applied to the superimposed image sC to obtain a superimposed image sD.
  • In step 604, the superimposed image sD after the LPF is reduced to 1/xs2 horizontally and 1/ys2 vertically to obtain a superimposed image sE.
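The enlarge, filter, and reduce pipeline of steps 602 to 604 can be sketched as follows. This is a minimal illustration, not the patented implementation: a box filter stands in for the unspecified LPF, block averaging stands in for the unspecified reduction, and the toy sizes (4 ⨯ 4 superimposed image, 6 ⨯ 6 cover) are assumptions.

```python
import numpy as np
from math import lcm

def box_blur(img, k=3):
    """Simple separable box low-pass filter (LPF), edge-padded."""
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def impart_geometric_characteristic(sB, ws, hs):
    """Steps 602-604: enlarge sB to the LCM grid, low-pass filter,
    then reduce to the cover size so the pattern survives resampling."""
    hd, wd = sB.shape
    xs1, ys1 = lcm(ws, wd) // wd, lcm(hs, hd) // hd   # step 602 factors
    xs2, ys2 = lcm(ws, wd) // ws, lcm(hs, hd) // hs   # step 604 factors
    sC = np.kron(sB.astype(float) * 255, np.ones((ys1, xs1)))  # binary -> multi-valued
    sD = box_blur(sC)                                  # step 603: LPF
    H, W = sD.shape
    # step 604: reduce by block averaging (one possible resampling choice)
    sE = sD.reshape(H // ys2, ys2, W // xs2, xs2).mean(axis=(1, 3))
    return sE

sB = (np.arange(16).reshape(4, 4) % 2).astype(np.uint8)  # toy 4x4 binary pattern
sE = impart_geometric_characteristic(sB, ws=6, hs=6)
print(sE.shape)  # (6, 6): superimposed image brought to the cover size
```

With wd = 4 and ws = 6, the image is first enlarged 3⨯ to the 12-pixel LCM grid and then averaged down by 2, so the output matches the cover size while the blurred pattern carries the intended resampling behavior.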
  • the intensity input / intensity map setting processing 407 reads and sets the intensity map (see also FIGS. 7 and 8).
  • A superimposed image sF is created by superimposing pixels uniformly, or selectively so that the average pixel value of the block of interest changes during superposition according to the intensity map; alternatively, pixels are superimposed so as to maintain the image quality of the cover image. For example, selective superposition so that the average pixel value of the block of interest does not change is performed as follows.
  • An intensity map serving as the criterion for these various processes is prepared in advance, or dynamically generated, and selected for use by the user.
  • step 606 the cover image and the superimposed image sF are superimposed to obtain a stego image.
  • Superimposition processing 418 is performed.
  • the superimposition target of the cover image may be a luminance component, a color difference component, or an RGB component.
  • The superimposed image may also be interpreted as luminance shading, and may be separated so that, for example, the upper 4 bits of the 8-bit information are carried by a different color component such as red, together with the inverted lower 4 bits (evaluated so that a value closer to 0 is treated as larger).
  • FIG. 7 is a diagram showing in detail the method for creating the intensity map described in step 605.
  • step 701 the superimposed image sE709 is read.
  • step 702 management information designated by the user is set.
  • In step 703, the entire superimposed image is divided into small partial areas, and the following processing from step 704 to step 707 is repeated for each.
  • step 704 the target partial area of the superimposed image is set.
  • In step 705, an intensity map table storing intensity maps is referenced (see FIG. 8).
  • step 706 the superimposed image is analyzed to obtain a feature amount of the superimposed image.
  • the line segment direction (8 directions), the degree of unevenness of the object (black and white density), the area, the center of gravity, etc. are determined.
  • The line segment direction is determined by calculating the correlation between line segment images oriented in eight directions (up, down, left, right, and intermediate directions) and the pixel values of the superimposed image; the direction with the minimum correlation value is taken as the line segment direction.
  • The unevenness can be determined from the magnitude of the frequency components of the target partial region.
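The template-correlation idea of step 706 can be sketched as follows. This is a toy illustration under stated assumptions: only four of the eight directions are given as templates, mean-removed inner products stand in for the unspecified correlation measure, and, following the text literally, the direction with the minimum correlation is returned (intuitively one might instead take the maximum, so both scores are exposed).

```python
import numpy as np

def directional_templates(n=4):
    """Line templates; only horizontal/vertical/two diagonals are sketched,
    the 8-direction set in the text would add intermediate angles."""
    t = {}
    t['horizontal'] = np.zeros((n, n)); t['horizontal'][n // 2, :] = 1
    t['vertical'] = np.zeros((n, n)); t['vertical'][:, n // 2] = 1
    t['diag_down'] = np.eye(n)
    t['diag_up'] = np.fliplr(np.eye(n))
    return t

def line_direction(block):
    """Step 706 (sketch): correlate the block with each line template and,
    following the text, take the direction with the minimum correlation."""
    scores = {}
    b = block - block.mean()
    for name, tpl in directional_templates(block.shape[0]).items():
        scores[name] = float(np.sum(b * (tpl - tpl.mean())))
    return min(scores, key=scores.get), scores

block = np.zeros((4, 4)); block[:, 2] = 1   # a vertical line segment
direction, scores = line_direction(block)
print(direction, scores)
```

For the vertical-line block, the vertical template naturally yields the largest correlation, so the minimum rule picks a direction least resembling the drawn line; step 707 then selects an intensity map parallel or orthogonal to whichever convention is in use.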
  • step 707 intensity map data corresponding to the feature amount of the partial area is selected.
  • Various selection methods are possible: for example, select a map that distributes large intensity values parallel to the line segment direction, select a map that distributes them in the direction orthogonal to the line segment direction, or, when the degree of unevenness is large or small, select a homogeneous or a distributed intensity map accordingly. The user may also make the selection.
  • an intensity map 710 is created by adding or multiplying intensity information (intensity parameter) specified by the selected intensity map with a user-specified intensity setting value. If this intensity map is reduced at an appropriate ratio, as shown in the reduced intensity map 711, distant points appear to be connected, and the latent image can be visualized more clearly.
  • step 708 the superimposed image sF is output.
  • FIG. 8 is an explanatory diagram showing an intensity map table.
  • Each record of the intensity map table 801 includes a partial region pixel array 802, an ID 803, a line segment direction 804, an object unevenness degree (black and white density) 805, management information 806, and an intensity setting value 807.
  • The management information 806 is a flag set in steps 605 and 606 when user specifications, such as superimposing so as to maintain the image quality of the cover image, are not to be reflected; when this flag is off, priority is given to the user designation.
  • step 707 intensity map data corresponding to the feature amount of the partial area is selected.
  • the intensity setting value 807 is selected according to the arrangement of the pixel values of the partial area of interest (illustrated as a black and white binary 4 ⁇ 4 area in FIG. 8).
  • The intensity setting value 807 may instead be obtained using the line segment direction 804, the unevenness degree (black-and-white density) 805 of the object, and the like as table search keys.
  • FIG. 9 is an explanatory diagram showing the procedure for superimposing a cover image and a superimposed image, taking a moving image as an example.
  • step 901 a superimposed position map is created.
  • step 902 the superimposed image is frequency-converted and quantized.
  • step 903 cover image reading, decoding, and color space adjustment are performed.
  • step 904 the cover image is repeatedly controlled in GOP (Group Of Picture) units or frame units.
  • step 905 the frame data is stored in the frame buffer.
  • step 914 the luminance information and chrominance information included in a certain frame are repeatedly controlled in units of slices or macroblocks, and attention is paid to one macroblock sequentially.
  • the size of the macroblock is 16 ⁇ 16 or the like.
  • step 906 the superimposed position map is referred to.
  • step 907 it is determined whether or not the position of the macro block of interest is a position specified in the superimposition position map (a position where the superimposition image is superimposed in the cover image). If it is not the superimposition position, the process returns to step 914 to refer to the next macroblock. If it is determined in step 907 that the position of the macro block of interest is a superposition position, the processing from step 908 to step 913 is performed.
  • In step 908, it is determined whether the macroblock of interest is an intra macroblock. If it is not, the information of the macroblock of interest exists in the form of an address reference via a motion vector, with a specific macroblock in the frame information at another time as its entity.
  • In step 909, referring to the frame buffer, the macroblock information (corresponding to the picture) that exists in another frame in time and corresponds to the target macroblock is obtained.
  • step 910 the noticed macroblock information is replaced with the actual macroblock information existing in another frame in time as necessary. That is, the compressed expression by the motion vector is locally canceled and converted to the frequency component of the pixel value set.
  • the motion vector may not be locally canceled.
  • this is a case where macroblock information existing in another frame in time is referred to in a background portion having no motion in a compressed expression using a motion vector.
  • the rewriting result is reflected as it is in other frames referring to the macroblock.
  • step 911 the macro blocks of the superimposed image and the cover image are superimposed.
  • Owing to the linearity of the transform function, the pixel values obtained by adding in the frequency domain and then inverse-transforming into the spatial domain are technically equivalent to the pixel values obtained by adding the cover image and the superimposed image directly in the spatial domain (that is, in the picture).
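The linearity claim above can be checked numerically with an orthonormal 2-D DCT built from its basis matrix. The 8 ⨯ 8 block size and the random test data are assumptions for illustration only.

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II basis matrix (rows = frequencies)."""
    k = np.arange(n)
    C = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    C *= np.sqrt(2.0 / n)
    C[0] /= np.sqrt(2.0)
    return C

n = 8
C = dct_matrix(n)
dct2 = lambda x: C @ x @ C.T      # 2-D DCT
idct2 = lambda X: C.T @ X @ C     # 2-D inverse DCT

rng = np.random.default_rng(0)
cover = rng.uniform(0, 255, (n, n))    # stand-in for a cover macroblock
overlay = rng.uniform(0, 16, (n, n))   # stand-in for a superimposed-image block

# Adding in the frequency domain and inverse-transforming ...
via_freq = idct2(dct2(cover) + dct2(overlay))
# ... equals adding the pixel values directly (linearity of the DCT).
via_space = cover + overlay
print(np.allclose(via_freq, via_space))  # True
```

This is why step 911 can superimpose macroblocks in the transform domain without first decoding all the way back to pixels.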
  • step 912 compression coding parameter adjustment is performed along with rewriting of the GOP structure, frame information, slice information, and macroblock information in the above processing. For example, since the data amount changes, the data amount after the change is recorded and calculated.
  • step 913 output header information is set.
  • step 915 encoding processing such as transform coefficients is performed.
  • step 916 if processing in units of GOP has been completed, GOP information is read from the buffer and output. If it is in the middle of GOP or frame processing, the process returns to step 905, the intermediate information is stored in the buffer, and the process proceeds to the next process (step 914). In step 904, if all GOPs have been processed, the process ends.
  • FIG. 10 is an explanatory diagram showing the concept of a re-compression encoding characteristic imparting processing procedure for a superimposed image.
  • the addition of the recompression encoding characteristic is a process in the recompression characteristic setting process 415. Attention is paid to a partial area 1002 in the cover image 101.
  • the data 1003 in the partial area 1002 corresponds to a macroblock such as JPEG or MPEG, and is a rectangular area such as 16 ⁇ 16.
  • FIG. 10 shows a 4 ⨯ 4 example. Attention is paid to a partial region 1005 in the superimposed image 102, to which the geometric deformation characteristics shown in FIG. 6 are imparted.
  • The data group 1006, after the geometric deformation characteristics are imparted, has the same picture size as the data group 1003. It also reflects the settings of the superposition position map and the intensity map; the pattern is technically equivalent to the pattern 710 in FIG. 7, and is difficult for human eyes to perceive.
  • DCT conversion is used for frequency conversion.
  • quantization is performed so that many bits are allocated to low frequency components, and data compression is performed.
  • the numerical value of the frequency component becomes irreversible after quantization, and the pixel value obtained by inversely transforming the quantized frequency component is slightly different from the original pixel value.
  • Compression of a natural image exploits the characteristic that signal power concentrates in the low-frequency components.
  • The pattern of the superimposed image may not be a natural image (for example, when check characters are drawn), so not only the low-frequency components but also the high-frequency components are important.
  • By appropriately preserving and processing the high-frequency components, recompression coding characteristics can be imparted to the superimposed image.
  • When the frequency components are divided into low, intermediate, and high frequencies, the high-frequency components are easily lost in the compression encoding process.
  • the selection and processing level of the intermediate frequency is important.
  • the quantization transform coefficient table 1007 is a set of values obtained by frequency transforming and quantizing the data group 1003.
  • the effective value is biased to the upper left low frequency component, and the lower right high frequency component is often zero.
  • the lowest frequency among the frequencies at which the value of the quantized transform coefficient is zero is referred to as a cutoff frequency.
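Finding the cutoff frequency from the quantized coefficient table 1007 can be sketched as a scan of the anti-diagonals, which is a simplified stand-in for zigzag ordering: the diagonal index i + j serves as the frequency, and the first diagonal containing a zero coefficient gives the cutoff. The 4 ⨯ 4 toy block and its values are assumptions.

```python
import numpy as np

def cutoff_frequency(qcoef):
    """Lowest frequency index (i + j, a simplified zigzag ordering) at which
    a quantized transform coefficient is zero: the 'cutoff frequency'."""
    n = qcoef.shape[0]
    for f in range(2 * n - 1):
        for i in range(n):
            j = f - i
            if 0 <= j < n and qcoef[i, j] == 0:
                return f
    return 2 * n - 1   # no zeros: cutoff lies beyond the represented band

# Toy 4x4 quantized block: values biased to the upper-left low frequencies,
# with the lower-right high frequencies already quantized to zero.
q = np.array([[31, 12,  4,  0],
              [10,  5,  0,  0],
              [ 3,  0,  0,  0],
              [ 0,  0,  0,  0]])
print(cutoff_frequency(q))  # 3: the first zero appears on the i + j == 3 diagonal
```

Step 1202 performs essentially this scan over the macroblock quantization transform coefficient information.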
  • FIG. 10A is a graph showing frequency components obtained by frequency converting and quantizing the data 1003 of the partial area of the cover image 101.
  • the horizontal axis represents the frequency of the image
  • the vertical axis represents the spectral value in the conversion region.
  • FIG. 10B is a graph showing frequency components obtained by frequency conversion of the data group 1006 after the geometric deformation characteristics of the superimposed image 1004 are given.
  • a frequency portion exceeding the cutoff frequency is referred to as a cutoff region 1008.
  • FIG. 10 (c) is a graph obtained by applying a cutoff frequency to the frequency component shown in FIG. 10 (b) and removing the high frequency component.
  • The frequency obtained from the quantization transform coefficient table 1007 is used as this cutoff frequency.
  • a value obtained by slightly changing the cutoff frequency within a predetermined range may be used.
  • If the set of frequency components from which the high-frequency components have been removed is returned to the spatial domain by an inverse frequency transform (IDCT or the like), a pattern close to the original is reproduced. However, many of the removed high-frequency components lie near the edge portions of the original superimposed image 1005. When only the high-frequency components are selectively removed in the frequency domain, the edge portions of the superimposed image may be disturbed after returning to the spatial domain, and mosquito noise may appear. The parts that appear as mosquito noise produce a cognitive effect in the frequency band near the cutoff frequency, as if the waves above the cutoff frequency were reflected back by the cutoff acting as a wall, and so become easier to perceive.
  • In FIG. 10D, after the high-frequency components at or above the cutoff frequency are removed from the frequency components shown in FIG. 10B, processing is applied to a frequency band (bandwidth fd) below the cutoff frequency.
  • Constants may be determined in advance for the maximum and minimum values of the bandwidth fd.
  • The frequency component at (kc − 1)·Δ indicates the frequency one unit below the cutoff frequency on the horizontal axis of FIG. 10.
  • The above processing may be concentrated on the frequency component at (kc − 1)·Δ.
  • Alternatively, the frequency band from (kc − m)·Δ to (kc − n)·Δ (for suitable constants m > n) may be set as the bandwidth fd and processed.
  • A value equivalent to the power of the spectral values in the region above the cutoff frequency is added to the frequency band (bandwidth fd) below the cutoff frequency.
  • the power is obtained as a waveform area calculation on the graph and is the sum of the spectrum values in the frequency band within the calculation range.
  • a value obtained by slightly changing the power of the spectrum value within a predetermined range may be used.
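The shaping of FIG. 10(c) and (d) can be sketched on a one-dimensional spectrum: zero everything at or above the cutoff index kc, then spread that removed power uniformly into the band of width fd just below the cutoff. The uniform redistribution, the 1-D spectrum, and the example values are assumptions; the text leaves the exact spectral shaping open.

```python
import numpy as np

def impart_recompression_characteristic(spec, kc, fd):
    """FIG. 10(c)/(d) sketch on a 1-D spectrum: remove the cutoff region
    (frequencies >= kc) and add its power (sum of spectral values) back
    into the band of width fd just below the cutoff frequency."""
    out = spec.astype(float).copy()
    cut_power = out[kc:].sum()              # power of the cutoff region 1008
    out[kc:] = 0.0                          # FIG. 10(c): remove high frequencies
    lo = max(kc - fd, 0)
    if kc > lo:
        out[lo:kc] += cut_power / (kc - lo) # FIG. 10(d): redistribute into band fd
    return out

spec = np.array([40.0, 25.0, 12.0, 8.0, 6.0, 4.0, 2.0, 1.0])
shaped = impart_recompression_characteristic(spec, kc=5, fd=2)
print(shaped)                    # high band zeroed, its power moved just below kc
print(spec.sum(), shaped.sum())  # total power is preserved
```

Because the total power is conserved, the energy near the cutoff is boosted, which is exactly the component that re-emerges when the stego image is later smoothed or recompressed.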
  • FIG. 10E is a graph showing the frequency components of the output stego image.
  • Let B denote the frequency components obtained by applying the processing shown in FIG. 10D to the frequency components of the data group 1006 after the geometric deformation characteristics of the superimposed image are imparted.
  • the cut-off frequency varies depending on the frequency conversion setting.
  • In the case of high compression, the cutoff frequency is low; in the case of low compression, it is high.
  • If the cutoff frequency is intentionally set low, it is possible to superimpose a superimposed image that is easily visualized under high compression; conversely, setting it high yields a superimposed image that is easily visualized under low compression. In this way, a superimposed image can be created with recompression characteristics matching settings such as the bit rate and image quality used when the stego image is compressed.
  • FIG. 11 is an explanatory diagram showing the basic principle that a superposed image is visualized when a stego image undergoes compression-encoded image processing in giving re-compression encoding characteristics.
  • FIG. 11 (e) is a diagram equivalent to FIG. 10 (e), and is a graph showing frequency components of an output stego image.
  • the horizontal axis represents the frequency of the image
  • the vertical axis represents the spectral value in the conversion region.
  • the vertical axis may be considered as a DCT coefficient after quantization.
  • The spatial component (pixel values) 1101 of the stego image, obtained by inverse-transforming the frequency components of the stego image, is a picture in which the attention area 1002 of the cover image 101 and a superimposed image X, to which the geometric deformation characteristics and the recompression coding characteristics have been imparted, are superimposed. More specifically, the process of imparting the recompression encoding characteristics is applied to the data 1106 of the area to which the geometric deformation characteristics have been imparted from the data 1005 of the attention area of the superimposed image 102, and the resulting transform components constitute the superimposed image X.
  • FIG. 11G is a graph showing frequency components obtained by converting a partial area 1102 of the stego image 104 by recompression encoding processing.
  • the frequency component shown in FIG. 11G is further inversely transformed, it appears as a pattern close to the pattern in which the superimposed image 102 is superimposed on the attention area 1002 of the cover image, and the shape and characters of the superimposed image are easily visible.
  • Image processing such as a smoothing filter acts to cancel the mosquito-noise-like components emphasized by the process of FIG. 10D, and the cutoff frequency changes after quantization of the transform components.
  • the frequency characteristics (conversion band characteristics) of the superimposed image are approximately restored.
  • If the component of the superimposed image in the attention area 1104 of the stego image 104 is extracted, it is spatially spread, appearing blurred after the recompression encoding process, due to the influence of the superimposed cover image.
  • Since the cover image includes many intermediate- and high-frequency components, many intermediate- and high-frequency components of the superimposed image also remain after the recompression coding process.
  • FIG. 11D shows frequency characteristics obtained by frequency-converting the attention area 1104 of the stego image 104 again.
  • the transform component quantization value and the cut-off frequency change, and the frequency characteristic (transform band characteristic) of the superimposed image is approximately restored.
  • FIG. 12 is an explanatory diagram showing a detailed procedure for giving a recompression coding characteristic to a stego image.
  • step 1201 the transform coefficient of the encoded cover image is restored, and the macroblock and quantization table information are extracted.
  • step 1202 a zero value of the macroblock quantization transform coefficient information 1007 is scanned to set a cutoff frequency.
  • step 1203 the superimposed image data is read and divided into regions. According to the superimposed position map, the position corresponding to the cover image is determined, frequency conversion and quantization are performed for each partial region, and a conversion coefficient is calculated.
  • In step 1204, among the transform coefficients of the superimposed image, the frequency components above the cutoff frequency of the cover image are identified, and the processing of steps 1205 and 1206 is performed on them.
  • In step 1205, a cutoff frequency is set from the transform coefficients of the superimposed image as shown in FIG. 10.
  • A recompression condition (such as a bit rate) may be designated by a user input instruction, and the corresponding cutoff frequency set lower or higher accordingly.
  • step 1206 the cut-off area shown in FIG. 10C is removed by applying an LPF that passes the conversion coefficient lower than the cut-off frequency to the conversion coefficient of the superimposed image.
  • step 1210 the signal power (total conversion coefficient) of the superimposed image is calculated.
  • The appropriateness of the signal power is then determined; if the signal power is smaller than a predetermined constant Emax, the processing of steps 1207, 1208, and 1209 is performed.
  • Step 1207 the energy of the cut-off area of the superimposed image is calculated.
  • Three parameters are obtained: the signal power of the cutoff region 1008 (the sum of the transform coefficients, i.e., the graph area), the spectrum shape (rising from the left, from the right, homogeneous, etc.), and the cutoff frequency band (which frequency range has been removed).
  • step 1208 the energy of the cut-off area of the superimposed image is adjusted. If the signal power in the cutoff region is smaller than a predetermined threshold value, the signal power is raised. If it is larger, the signal power is reduced.
  • step 1209 energy adjustment of the non-cut-off region of the superimposed image is performed. Processing is performed on the energy of the non-cut-off region of the superimposed image in accordance with the width (frequency difference) of the cut-off frequency band. A value corresponding to the energy of the cut-off area of the superimposed image is added or deleted. Alternatively, a value obtained by multiplying a predetermined constant may be used. As in the graph of FIG. 10D, the signal power is adjusted for the width of the cut-off frequency band for the spectral values that are in the non-cut-off region and in the frequency component lower than the cut-off frequency.
  • In step 1211, the transform coefficients of the cover image and the transform coefficients of the superimposed image are combined; they are added, though subtraction or multiplication may be used instead. If they are simply added, the cover image becomes brighter overall, so a method may be used in which an image processing filter is applied to the cover image to lower (or raise) its signal power in advance.
  • This image processing filter may be a noise removal filter.
  • step 1212 the macro block information of the stego image is adjusted and encoded.
  • The quantization table of the cover image can be changed to another table; in that case, the macroblock information of the stego image is adjusted so that the table actually used is output.
  • FIG. 13 is an image diagram showing a superimposed image superimposed according to the superimposed position map.
  • The superimposed image is difficult to visually recognize unless geometric deformation or recompression coding is applied, but the figure would be hard to explain if nothing could be seen, so it is drawn visibly here.
  • FIG. 13A is a conceptual explanatory diagram when a cover image is divided into a plurality of partial areas and a plurality of superimposed images having different visible characteristics (geometric deformation characteristics, recompression encoding characteristics, etc.) are superimposed.
  • the partial area 1302, the partial area 1303, and the partial area 1304 have different geometric deformation characteristics. For example, when the stego image is reduced to an area ratio of 70%, the partial area 1302 is visualized, and when the area ratio is reduced to 50%, the partial area 1303 and the partial area 1304 are visualized.
  • the partial areas may overlap each other or may be in separate areas.
  • The secure area 1305 is obtained by encoding, in the form of a barcode, items such as "copy prohibited ID: 0011", an encoded signature, verification information, secret information, and 1-bit information indicating the presence of a check character. If the 1-bit information indicating the presence of a check character is detected but the check character itself appears to have vanished, it can be concluded that the check character was erased by some means, which contributes to applications such as tampering detection.
  • The partial area 1302, the partial area 1303, and the partial area 1304 may have recompression encoding characteristics with different compression rates.
  • FIG. 13B shows an example in which superimposed images having different visual characteristics are superimposed depending on the moving image frame time.
  • the moving image includes a plurality of frames 1306.
  • a plurality of superimposed images having different visibility characteristics are superimposed on a plurality of frames.
  • the superimposed image itself may be moving image data with movement. When the entire moving image undergoes image processing, the location of the superimposed image having visible characteristics (geometric deformation characteristics, recompression encoding characteristics, etc.) changes, and the superimposed image is easily recognized by human eyes.
  • FIG. 14 is an explanatory diagram showing a processing procedure of the image superimposing device 203 when the machine automatically determines the detection of the superimposed image regardless of human eye recognition.
  • step 1401 the cover data input process 403 receives input of a cover image.
  • step 1402 the superimposition position map alignment designation process 409 sets a secure area.
  • In step 1403, the superimposed data input / edited image processing 405 encodes the superimposition information.
  • the electronic signature processing 422 further adds an electronic signature and a public key certificate to basic information such as a user ID as superimposition information.
  • The superimposition data input / edited image processing 405 encodes these bits to form a bit string of length N, such as 0110001,….
  • an existing code system such as a JIS code may be used. Further, an error correction code is added to the bit string.
  • the superimposition information encoded as a bit string in this way is encoded into a superimposition image data format.
  • An area of n ⨯ n cells (with N equal to or less than n ⨯ n) is defined, with a square of a ⨯ a pixels as one cell.
  • Each bit is rendered as a cell, for example 0 as a white square and 1 as a black square, and the cells are arranged sequentially from the upper left toward the lower right.
  • After n cells have been placed in a row, the arrangement moves to the next row down.
  • In this way, encoded superimposed image data is created.
  • the superimposed image data becomes a two-dimensional barcode-like pattern 1305. It may be converted into a multi-valued image or may be arranged at random. Information indicating the arrangement arranged at random may be held outside as a key and used at the time of detection.
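The cell encoding of step 1403, together with the detection-side averaging of step 1502 onward, can be sketched as follows. The mapping of 1 to black (pixel value 0) and 0 to white (255), the cell size a = 4, and the helper names are assumptions for illustration; error correction and the random arrangement mentioned above are omitted.

```python
import numpy as np
from math import ceil, sqrt

def encode_cells(bits, a=4):
    """Step 1403 sketch: lay N bits out as n x n cells (N <= n*n), each cell
    an a x a pixel square; 1 -> black (0) and 0 -> white (255) is assumed."""
    N = len(bits)
    n = ceil(sqrt(N))
    grid = np.full(n * n, 255, dtype=np.uint8)
    for i, b in enumerate(bits):            # upper left toward lower right
        grid[i] = 0 if b == '1' else 255
    grid = grid.reshape(n, n)
    return np.kron(grid, np.ones((a, a), dtype=np.uint8)), n

def decode_cells(img, n, N, a=4, threshold=128):
    """Detection-side sketch: average each a x a cell and compare with a
    threshold to recover 0 or 1, as described for the secure area."""
    bits = []
    for i in range(N):
        r, c = divmod(i, n)
        cell = img[r * a:(r + 1) * a, c * a:(c + 1) * a]
        bits.append('1' if cell.mean() < threshold else '0')
    return ''.join(bits)

bits = '0110001'                 # the example bit string from the text
img, n = encode_cells(bits)
print(img.shape)                        # (12, 12): n = 3 cells of 4 x 4 pixels
print(decode_cells(img, n, len(bits)))  # '0110001'
```

The per-cell averaging is what makes the scheme robust to the mild blurring and requantization that the stego image undergoes before detection.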
  • step 1404 the superimposition process 418 performs superimposition of the superimposed image.
  • step 1405 the stego data output process 420 performs stego image output.
  • The detection device 214 can be realized by operating, in memory on a computer comprising an arithmetic device (CPU), an input device, an output device, a memory (primary storage device), a disk (secondary storage device), and a communication device, a program (or module) that implements the detection device 214.
  • the program may be stored in advance in a memory or a disk in the computer, or may be introduced from another device via a medium that can be used by the computer when necessary.
  • the medium refers to, for example, a storage medium that can be attached to and detached from the computer, or a communication medium (that is, a wired, wireless, optical network, or a carrier wave or digital signal that propagates through the network).
  • Each process shown in FIG. 15 is realized as a process by the CPU executing a program (or module) that realizes each process stored in the memory or the disk.
  • step 1501 an input of a stego image is accepted.
  • In step 1502, the superimposed image of the secure area 1305 is separated from the stego image. Since the secure area is in the form of a two-dimensional barcode, the stego image is frequency-transformed, the transform components are extracted using multiple frequency components with one cell corresponding to 1 bit as the base frequency, and the result is inverse-transformed to reconstruct the picture.
  • In step 1503, the superimposition information is decoded.
  • The reconstructed secure area is a rectangular image region with sides of length (a ⨯ n) and contains n ⨯ n cells.
  • the average value of the pixel values of one square set in step 1403 is calculated and compared with a threshold value to determine whether the square is 0 or 1.
  • a bit string of 1 ⁇ n bit information is extracted.
  • a user ID or the like is decoded as superimposition information from the obtained bit string.
  • In step 1504, the validity of the superimposition information is verified: the bit string is decoded with the error correction code, and the electronic signature and public key certificate are verified. Validity is confirmed by successful error-correction decoding or signature verification.
  • step 1505 superimposition information is output.
  • Basic information such as a user ID is output as superimposition information.
  • FIG. 16 is an explanatory diagram showing a processing procedure for detecting the digital watermark embedded in the superimposed image itself by the detection device 214.
  • Each process shown in FIG. 16 is realized as a process by the CPU executing a program (or module) that realizes each process stored in the memory or the disk of the detection device 214 described above.
  • the digital watermark is embedded in the superimposed image itself by the superimposed information input process 404.
  • step 1601 an input of a stego image is accepted.
  • step 1602 the stego image is analyzed.
  • a superimposed image constituent pixel is extracted according to the set intensity map.
  • the extracted pixel information is reduced according to the geometric deformation characteristics.
  • step 1603 the superimposed image is separated.
  • In step 1604, a digital watermark is detected from the superimposed image; it can be detected by calculating the correlation with the digital watermark pattern.
  • Japanese Patent Application Publication No. 2007-013357 discloses a digital watermark technique applicable to binary images such as check characters. The superimposition information is extracted.
  • step 1605 the validity of the embedded information is verified.
  • an error correction code or an electronic signature can be used.
  • step 1606 the embedded information is output.
  • A person may visually check the detection result output to a monitor, which is one of the output devices connected to the detection device.
  • step 1607 the corresponding device is controlled.
  • the output device of the detection device is equipped with a communication device by network or device connection, and is connected to other devices and systems.
  • Corresponding devices connected to the detection device include a still image viewing device, a still image recording device, a printing device, a digital camera, a moving image reproducing device, a moving image recording device, a video printer, and a video camera.
  • a digital signage, a photo frame, or the like may be connected.
  • For corresponding functions of these devices, such as rendering, encoding, decoding, saving, and printing, function information specifying the operation method is defined.
  • state information is defined for each device state such as during recording or playback.
  • the function information and status information may be a serial number system.
  • the control information may mean “double speed recording OK”.
  • Corresponding equipment includes an input device, an output device, a disk, a memory, a computing device, and a control device.
  • the control device of the corresponding device performs control to perform an operation corresponding to the function information and status information.
  • Function information, state information, and the like are expressed as digital watermark embedding information and embedded as a digital watermark in the superimposed image within a stego image.
  • Information such as the function information and state information detected as a digital watermark is sent to the corresponding device, and the corresponding device is controlled accordingly.
  • For example, an integrated video camera equipped with both a detection device and a video recording device can be configured.
  • The stego image captured through the camera lens is input to the detection device; when "recording OK" is detected as function information, the function information is sent to the moving image recording device, and the moving image recording device performs video recording.
  • When "recording NG" is detected, video recording is not performed or is stopped.
  • 101 Cover data
  • 102 Superimposed image
  • 104 Stego data
  • 203 Image superimposing device
  • 214 Detection device
  • 801 Intensity map table.
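The correlation-based detection mentioned in the steps above can be sketched in a few lines. This is a minimal, generic sketch of deciding one embedded bit by normalized correlation with a known watermark pattern; it is not the specific method of JP 2007-013357, and the pattern shape, amplitude, and zero threshold are assumptions for illustration only.

```python
import numpy as np

def detect_bit(block: np.ndarray, pattern: np.ndarray, threshold: float = 0.0) -> int:
    """Decide one embedded bit from the normalized correlation between an
    image block and a known watermark pattern of the same shape."""
    b = block.astype(float) - block.mean()      # remove the DC component
    p = pattern.astype(float) - pattern.mean()
    corr = float((b * p).sum()) / (np.linalg.norm(b) * np.linalg.norm(p) + 1e-12)
    return 1 if corr > threshold else 0         # positive correlation -> bit 1
```

In a real detector, one such decision would be made per block of the stego image, and the resulting bit string validated (step 1605) before use.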
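The function-information-driven control described above (step 1607) can be illustrated with a small dispatcher. The enum values and class name here are hypothetical, chosen only to mirror the "recording OK" / "recording NG" example from the text; a real implementation would use whatever code system the devices agree on.

```python
from enum import Enum, auto

class FunctionInfo(Enum):
    """Hypothetical codes for detected watermark function information."""
    RECORDING_OK = auto()
    RECORDING_NG = auto()

class MovingImageRecorder:
    """Corresponding device: starts or stops recording on detected codes."""
    def __init__(self) -> None:
        self.recording = False

    def handle(self, info: FunctionInfo) -> None:
        if info is FunctionInfo.RECORDING_OK:
            self.recording = True      # permitted: record the video
        elif info is FunctionInfo.RECORDING_NG:
            self.recording = False     # forbidden: do not record, or stop
```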

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Security & Cryptography (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Processing (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract

The invention relates to a method for creating, a method for detecting, and a method for using a stego image in which hidden information becomes visible when the stego image undergoes image processing, together with means for intentionally detecting that information. A stego image is formed by hiding a superimposed image in a cover image so that the superimposed image cannot be seen by the human eye, and the superimposed image is detected from the stego image after it has undergone image processing. A superimposing process gives the superimposed image a geometric-deformation property and a recompression-coding property, and superimposes the faint superimposed image on the cover image. Since the superimposed image cannot be seen by the human eye, the stego image can be sold and distributed normally. When the stego image undergoes image processing, the signal components having the geometric-deformation property and the recompression-coding property are emphasized, which makes the superimposed image visible. For example, by writing characters that warn against fraud, or by displaying the image generation time for use as evidence in the superimposed image, the invention can be used to prevent fraud and to verify authenticity.
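The abstract's core idea, a superimposed image faint enough to be invisible yet recoverable by image processing, can be sketched as follows. This is an illustrative assumption: the patent gives the overlay geometric-deformation and recompression-coding properties so that ordinary processing reveals it, whereas this sketch simply embeds a low-amplitude overlay and simulates revealing by amplifying the high-frequency residual with a crude high-pass filter.

```python
import numpy as np

def make_stego(cover: np.ndarray, overlay: np.ndarray, amplitude: float = 2.0) -> np.ndarray:
    """Add a binary overlay to the cover at an amplitude too faint to notice."""
    signed = np.where(overlay > 0, amplitude, -amplitude)
    return np.clip(cover.astype(float) + signed, 0, 255).astype(np.uint8)

def _box_blur3(img: np.ndarray) -> np.ndarray:
    """3x3 box blur (wrap-around borders) used as a crude low-pass filter."""
    acc = np.zeros(img.shape, dtype=float)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            acc += np.roll(np.roll(img.astype(float), dy, axis=0), dx, axis=1)
    return acc / 9.0

def reveal(stego: np.ndarray, gain: float = 40.0) -> np.ndarray:
    """Amplify the high-frequency residual so the faint overlay becomes visible."""
    residual = stego.astype(float) - _box_blur3(stego)
    return np.clip(128.0 + gain * residual, 0, 255).astype(np.uint8)
```

With a high-frequency overlay (e.g. a checkerboard of text strokes) on a smooth cover, the stego image differs from the cover by only a few gray levels, yet `reveal` maps overlay and non-overlay pixels to clearly separated bright and dark values.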
PCT/JP2011/051715 2010-02-23 2011-01-28 Image processing method WO2011105164A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2010036784A JP5508896B2 (ja) 2010-02-23 2010-02-23 Image processing method
JP2010-036784 2010-02-23

Publications (1)

Publication Number Publication Date
WO2011105164A1 true WO2011105164A1 (fr) 2011-09-01

Family

ID=44506585

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/051715 WO2011105164A1 (fr) 2010-02-23 2011-01-28 Image processing method

Country Status (2)

Country Link
JP (1) JP5508896B2 (fr)
WO (1) WO2011105164A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108337391A (zh) * 2017-01-18 2018-07-27 Seiko Epson Corporation Information processing device and storage medium

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5509061B2 (ja) * 2010-12-27 2014-06-04 Hitachi Solutions, Ltd. Latent image superimposing method, latent image superimposing device, and latent image superimposing system
JP5656082B2 (ja) 2011-05-25 2015-01-21 Hitachi Solutions, Ltd. Image processing device, image generation device, image processing method, and image generation method
JP5988596B2 (ja) * 2012-01-25 2016-09-07 Hitachi Solutions, Ltd. Method for maintaining image quality in latent image embedding processing

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005236864A (ja) * 2004-02-23 2005-09-02 Toppan Printing Co Ltd Digital watermark system and method
JP2008510384A (ja) * 2004-08-11 2008-04-03 Stevens Institute of Technology Method for extracting hidden data from a digital image, computer-readable medium, and computer data signal

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108337391A (zh) * 2017-01-18 2018-07-27 Seiko Epson Corporation Information processing device and storage medium
CN108337391B (zh) * 2017-01-18 2020-05-05 Seiko Epson Corporation Information processing device and storage medium

Also Published As

Publication number Publication date
JP2011176385A (ja) 2011-09-08
JP5508896B2 (ja) 2014-06-04

Similar Documents

Publication Publication Date Title
Asikuzzaman et al. An overview of digital video watermarking
US7853040B2 (en) Covert and robust mark for media identification
JP4764433B2 (ja) Secure, robust and high-fidelity watermarking
Asikuzzaman et al. Robust DT CWT-based DIBR 3D video watermarking using chrominance embedding
Lin et al. Issues and solutions for authenticating MPEG video
Hartung et al. Multimedia watermarking techniques
US6885757B2 (en) Method and apparatus for providing an asymmetric watermark carrier
Queluz Authentication of digital images and video: Generic models and a new contribution
EP1437897A2 (fr) Méthodes et appareil pour l'insertion et la détection de filigranes numériques
Lu et al. Lossless information hiding in images
Celik et al. Localized lossless authentication watermark (LAW)
JP5508896B2 (ja) Image processing method
Huang et al. Unseen visible watermarking: a novel methodology for auxiliary information delivery via visual contents
JP2013126189A (ja) Image processing device, tampering prevention method, and tampering detection method
Garcia Freitas et al. Secure self-recovery watermarking scheme for error concealment and tampering detection
Maiorana et al. Multi‐bit watermarking of high dynamic range images based on perceptual models
JP4945541B2 (ja) Digital watermark embedding and detection method using a degraded host signal
JP5086951B2 (ja) Image generation device, image generation method, computer-executable program, and computer-readable recording medium
KR101094809B1 (ko) Method for watermarking digital content using digital holograms
Obimbo et al. Using digital watermarking for copyright protection
Gupta Improving Security for Video Watermarking
Alaa ‘Watermarking images for fact-checking and fake news inquiry
Dubey et al. A State of Art Comparison of Robust Digital Watermarking Approaches for Multimedia Content (Image and Video) Against Multimedia Device Attacks
JP2024021961A (ja) Digital watermarking method for moving images
Gao et al. Carrier Robust Reversible Watermark Model Based on Image Block Chain Authentication

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 11747128

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 11747128

Country of ref document: EP

Kind code of ref document: A1