WO2011071081A1 - Invisible information embedding device, invisible information recognition device, invisible information embedding method, invisible information recognition method, and recording medium - Google Patents

Invisible information embedding device, invisible information recognition device, invisible information embedding method, invisible information recognition method, and recording medium

Info

Publication number
WO2011071081A1
Authority
WO
WIPO (PCT)
Prior art keywords
information
image
visualization
embedding
invisible
Prior art date
Application number
PCT/JP2010/072039
Other languages
English (en)
Japanese (ja)
Inventor
直人 羽生
寛 福井
健一 佐久間
道関 隆国
一真 北村
Original Assignee
株式会社資生堂
学校法人立命館
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社資生堂 and 学校法人立命館
Priority to CN2010800555503A, published as CN102648623A (zh)
Priority to KR1020127014595A, published as KR101285648B1 (ko)
Priority to US13/514,108, published as US8891815B2 (en)
Priority to EP10836001.7A, published as EP2512115B1 (fr)
Publication of WO2011071081A1 (fr)

Classifications

    • H04N1/387 Composing, repositioning or otherwise geometrically modifying originals
    • H04N1/32203 Spatial or amplitude domain methods
    • H04N1/32229 Spatial or amplitude domain methods with selective or adaptive application of the additional information, e.g. in selected regions of the image
    • G06T1/00 General purpose image data processing
    • G06T1/005 Robust watermarking, e.g. average attack or collusion attack resistant
    • G06T2201/0051 Embedding of the watermark in the spatial domain
    • G06T2201/0065 Extraction of an embedded watermark; Reliable detection

Definitions

  • The present invention relates to a non-visualization information embedding device, a non-visualization information recognition device, a non-visualization information embedding method, a non-visualization information recognition method, and a recording medium for providing highly accurate, value-added images by acquiring information efficiently.
  • A known technique photographs a card bearing a pattern or the like and, based on the card type and the three-dimensional position information detected from the camera image, outputs a MIDI (Musical Instrument Digital Interface) signal for controlling a musical instrument, or outputs a video (see, for example, Patent Document 1).
  • There is also a system tool that recognizes a graphic on a paper medium with a camera and displays a virtual graphic on the display of a personal computer (see, for example, Non-Patent Document 1).
  • An image processing technique is also known in which a target image carrying identification information corresponding to a predetermined image pattern is acquired, the identification information is recognized from the acquired target image, a process corresponding to the recognized identification information is selected from a plurality of pre-registered processes and executed, the target image is displayed in a predetermined display area, position information of two or more predetermined positions in the displayed target image is acquired, and an image corresponding to the recognized identification information is drawn at a position and in a direction based on the acquired position information (see, for example, Patent Document 2).
  • In the virtual graphic display system of Non-Patent Document 1, however, once a virtual graphic is displayed it is difficult to change it; that is, the figure cannot be changed in response to changes in the external environment. Furthermore, in the prior art, because the displayed image is positioned according to the card or other carrier of the identification information, the display method is constrained and, for example, 3D video cannot be displayed.
  • The present invention has been made in view of the above problems, and its object is to provide a non-visualized information embedding device, a non-visualized information recognition device, a non-visualized information embedding method, a non-visualized information recognition method, and a recording medium that provide highly accurate, value-added images by acquiring information efficiently.
  • To solve the above problems, a non-visualization information embedding device according to the invention embeds non-visualization information at a predetermined position of an acquired image, and includes: image analysis means for acquiring object information and position information of an object included in the image; embedding target image determination means for determining, from the object information obtained by the image analysis means, whether the image is an image in which information is to be embedded; and image synthesizing means for synthesizing the non-visualization information with the image based on the determination result obtained by the embedding target image determination means.
  • Likewise, an invisible information recognition device according to the invention recognizes invisible information included in an acquired image, and includes: invisible information extraction means for extracting the invisible information from the image; invisible information analysis means for analyzing, when the invisible information is extracted by the invisible information extraction means, additional information of an object included in the image obtained from that invisible information; and display information generation means for generating display information to be shown on a screen from the additional information obtained by the invisible information analysis means.
  • In the present embodiment, non-visualized information is embedded, by processing that cannot be recognized by the naked eye, in the whole or a part of an image or video displayed on a screen, or of various print media such as paper, postcards, posters, and cards. The image, video, or print medium is then photographed with an imaging device such as a digital camera or a camera built into a portable terminal, the captured image or video is loaded into a personal computer, mobile phone, or similar terminal, and the embedded marker is recognized by processing the image with a filter or the like.
  • The present invention thus combines a marker embedding method with image processing so that an embedded marker can be recognized even on devices with limited capacity and performance, such as portable terminals. Further, according to the present invention, value-added information corresponding to the recognized image or video is acquired from the marker information.
  • The “image” described in this embodiment includes both a single image, such as a photograph, and the continuous frames of a video.
  • FIG. 1 shows an example of a functional configuration of a non-visualized information embedding device according to the present embodiment.
  • The invisible information embedding device 10 shown in FIG. 1 comprises input means 11, output means 12, storage means 13, image acquisition means 14, image analysis means 15, embedding target image determination means 16, non-visualization information setting means 17, non-visualization information generation means 18, image composition means 19, transmission/reception means 20, and control means 21.
  • The input means 11 accepts the start and end of various instructions from a user, such as an image acquisition instruction, an image analysis instruction, an embedding target image determination instruction, an embedding information setting instruction, an embedding information generation instruction, an image composition instruction, and a transmission/reception instruction.
  • The input means 11 consists of a keyboard and a pointing device such as a mouse on a general-purpose computer such as a personal computer, and of a group of operation buttons on a portable terminal.
  • The input means 11 also has a function of inputting an image or video captured by imaging means such as a digital camera. The imaging means may be provided within the invisible information embedding device 10 or may be an external component.
  • The output means 12 outputs the content input by the input means 11 and the content executed based on that input.
  • Specifically, the output means 12 performs screen display, audio output, and the like of the acquired image, the image analysis result, the embedding target image determination result, the set non-visualization information, the generated non-visualization information, the composite image in which the non-visualization information has been synthesized, and the results of processing in each component.
  • The output means 12 includes a display, a speaker, and the like.
  • The output means 12 may also have a printing function such as a printer, so that the above output contents can be printed on various print media such as paper, postcards, and posters and provided to the user.
  • The storage means 13 stores various information required in this embodiment and various data generated during or after execution of the embedding process. Specifically, the storage means 13 stores one or more images or videos acquired by the image acquisition means 14 through input or imaging, or stored in advance. The storage means 13 also stores the result of analysis by the image analysis means 15, the determination result of the embedding target image determination means 16, the contents set by the non-visualization information setting means 17, the embedding information generated by the non-visualization information generation means 18, the images synthesized by the image composition means 19, and so on. The storage means 13 can read out the stored data as required.
  • The image acquisition means 14 acquires the image, video, or the like in which information is to be embedded.
  • The images and videos may be obtained by imaging means such as a camera, and may be target images used on posters, photographs, cards, stickers, and the like.
  • The image acquisition means 14 can also acquire, via the transmission/reception means 20, information and images stored in an external device connected to a communication network, or images and videos stored in a database; an image actually captured by the user with a camera may also be used.
  • The image analysis means 15 analyzes the image acquired by the image acquisition means 14 and analyzes its contents. Specifically, it obtains object position information consisting of object information, coordinates, and the like: what is shown in which part (position, region) of the image, how an object is moving in a video, and so on. For example, when the object is a person, the image analysis means 15 may detect the face from facial feature portions, or may quantify facial feature values and identify the person from the result.
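  • As one illustration of the image analysis means 15 described above, the following minimal sketch locates objects (here, faces) and returns their object position information. It uses OpenCV's bundled Haar cascade detector; this is an assumed stand-in, since the patent does not prescribe a particular detection method.

```python
import cv2

def analyze_image(image_path):
    """Minimal stand-in for the image analysis means 15: detect objects
    (here faces) and return object information with position coordinates."""
    image = cv2.imread(image_path)
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    # Each detection is (x, y, width, height) in image coordinates.
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [{"object": "face", "x": int(x), "y": int(y), "w": int(w), "h": int(h)}
            for (x, y, w, h) in faces]
```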
  • The embedding target image determination means 16 determines, based on the result of analysis by the image analysis means 15, whether an object shown in the image is a target in which preset invisible information is to be embedded. Whether an object is one in which invisible information is to be embedded may be determined from embedding determination information set in advance by the user or the like and stored in the storage means 13, or by searching the storage means 13 for additional information about the object and, if such additional information is stored, determining that the object is an embedding target.
  • For example, when a personal computer is shown in the image, the embedding target image determination means 16 searches the storage means 13 for additional information related to the personal computer, and if such additional information exists, the personal computer can be determined to be an embedding target image.
  • The contents to be embedded can be set, for example, by the non-visualization information setting means 17, and the set contents are stored in the storage means 13.
  • The embedding target image determination means 16 outputs each object determined to be an embedding target.
  • All objects may be set as embedding targets, or at least one object may be. In the latter case, objects can be selected arbitrarily according to a preset priority, the position of the object's display region relative to the entire screen, or, for video, the length of time the object is displayed.
  • The detailed information to be embedded is also stored in the storage means 13.
  • The non-visualization information setting means 17 sets the specific information content to be embedded as additional information based on the object information. For example, if the object is a wallet or clothing, its brand name, product name, price, homepage address, and the like are set; if the object is a person, the person's name, age, gender, height, hobbies, career, and the like are set; and if the object is a book, its title, author, publication date, price, information about the author, and the like are set. The additional information set by the non-visualization information setting means 17 may also include video, images, and the like.
  • The non-visualization information setting means 17 also sets the form in which the information is added.
  • For example, the additional information may be specific encrypted characters, or a pattern, symbol, or code information, and its display size may also be set.
  • When code information or the like is used, a correspondence database is preferably provided so that the information corresponding to the code can be obtained on the invisible information recognition device side. Because the form can thus be selected from several alternatives, appropriate embedding information can be chosen according to the content of the image in which it is embedded.
  • The non-visualization information generation means 18 generates the image to be embedded in the embedding target image.
  • The non-visualization information generation means 18 may generate it directly as character information or as code information.
  • As code information, a two-dimensional barcode such as a QR code can be used, for example.
  • The code information is not limited to the QR code; for example, JAN code, ITF code, NW-7, CODE39, CODE128, UPC, PDF417, CODE49, Data Matrix, Maxi Code, and the like can also be used.
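  • As a concrete illustration of generating such code information, the sketch below produces a QR code image with the third-party Python package `qrcode` (an assumed tool choice; the patent names QR only as one usable format). The embedded text and URL are hypothetical.

```python
import qrcode  # pip install qrcode pillow

# Black-and-white module pattern; in this embodiment it would later be
# re-rendered as low/high-frequency regions rather than composited visibly.
code_img = qrcode.make("item=wristwatch; url=http://example.com/watch-info")
code_img.save("code.png")
```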
  • To make the embedded information difficult to see in the image actually provided to the user, the non-visualization information generation means 18 generates an image that uses a low-frequency part and a high-frequency part based on the color information of the original image, or an image that uses only the low-frequency part or only the high-frequency part.
  • Here, the low-frequency part denotes a part or region whose brightness is made lower than that of the original image in which the invisible information is embedded, and the high-frequency part denotes a part or region whose brightness is made higher than that of the original image.
  • The embedding information corresponding to an object is embedded, for example, on or around the position of the object shown in the image.
  • The embedding image is synthesized with the target image based on the object position information obtained by the image analysis means 15. That is, in this embodiment, instead of assigning a single piece of embedding information to the entire image, a plurality of pieces of non-visualized information can be embedded at appropriate places.
  • The image composition means 19 synthesizes an image by embedding the non-visualization information generated by the non-visualization information generation means 18 at a predetermined position, based on the image obtained by the image analysis means 15 and on the object position information consisting of the object information and coordinate information of that image.
  • In a video being played, the image composition means 19 can follow the movement of an object and embed the invisible information on the object; that is, the image composition means 19 can perform the composition process on the captured image each time a composition target image is input, and display the composite images in sequence.
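  • The following sketch illustrates one way the image composition means 19 could embed a bit matrix at an object's position, using the convention described later in this embodiment (low-frequency region for "0", high-frequency stripes for "1"). The 8 px grid cell and the brightness step of 20 are assumptions for illustration; the image is taken to be a grayscale NumPy array.

```python
import numpy as np

def composite_invisible(image, code_bits, x, y, cell=8):
    """Embed a 0/1 matrix at (x, y): '0' cells are smoothed toward their
    mean (low frequency); '1' cells get alternating-row stripes (high
    frequency). `image` is a 2-D uint8 grayscale array."""
    out = image.astype(np.int16)
    for r, row in enumerate(code_bits):
        for c, bit in enumerate(row):
            ys, xs = y + r * cell, x + c * cell
            block = out[ys:ys + cell, xs:xs + cell]
            if bit:                      # high-frequency part -> "1"
                block[0::2] += 20        # brighten even rows
                block[1::2] -= 20        # darken odd rows
            else:                        # low-frequency part -> "0"
                block[:] = block.mean()
    return np.clip(out, 0, 255).astype(np.uint8)
```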
  • The transmission/reception means 20 is an interface for obtaining, from external devices connectable via a communication network or the like, desired external images (captured images, composite images, etc.) and an execution program for realizing the invisible information embedding process of the present invention.
  • The transmission/reception means 20 can also transmit various information generated in the non-visualization information embedding device 10 to external devices.
  • The control means 21 controls all the components of the invisible information embedding device 10. Specifically, based on instructions from the input means 11 given by the user or the like, the control means 21 controls each process, such as acquiring an image, analyzing the image, determining whether it is an embedding target image, setting the non-visualization information, and compositing the image.
  • The invisible information handled by the invisible information setting means 17 and the invisible information generation means 18 may be set and generated in advance and stored in the storage means 13.
  • FIG. 2 shows an example of a functional configuration of the non-visualized information recognition apparatus in the present embodiment.
  • The invisible information recognition device 30 shown in FIG. 2 comprises input means 31, output means 32, storage means 33, embedded image acquisition means 34, invisible information extraction means 35, invisible information analysis means 36, display information generation means 37, transmission/reception means 38, and control means 39.
  • The input means 31 accepts the start and end of various instructions from a user or the like, such as an embedded image acquisition instruction, an invisible information extraction instruction, an invisible information analysis instruction, a display information generation instruction, and a transmission/reception instruction.
  • The input means 31 consists of a keyboard and a pointing device such as a mouse on a general-purpose computer such as a personal computer, and of a group of operation buttons on a portable terminal or the like.
  • The input means 31 also has a function of inputting an image or video captured by imaging means such as a digital camera. The imaging means may be provided within the invisible information recognition device 30 or may be an external component.
  • The input means 31 can also acquire an embedded image from a print medium such as paper, a postcard, a poster, a photograph, or a card.
  • In that case, an imaging function such as a camera and a data reading function such as a scanner are provided.
  • The output means 32 outputs the contents input by the input means 31 and the contents executed based on that input. Specifically, the output means 32 outputs the additional information for an object shown in the image or video, as obtained by the display information generation means 37. The output means 32 includes a display, a speaker, and the like.
  • The output means 32 may also have a printing function such as a printer, and can print output contents such as the additional information about an object on various print media such as paper and provide them to the user.
  • The storage means 33 stores various information required in this embodiment and various data generated during or after execution of the invisible information recognition process. Specifically, the storage means 33 stores the embedded image acquired by the embedded image acquisition means 34, the non-visualized information (marker) obtained by the invisible information extraction means 35, the non-visualized information analyzed by the invisible information analysis means 36, the display contents generated by the display information generation means 37, and so on.
  • The storage means 33 can also store related information for the data analyzed by the invisible information analysis means 36. For example, when there is code information (including character codes, two-dimensional codes, and the like), various data corresponding to that code information (for example, detailed information about the corresponding object, such as characters, video, images, and audio, and the size, color, time, position, and operation content used when the data is displayed on the screen) are stored in the storage means 33. The storage means 33 can read out the stored data when a code or the like is acquired, or whenever necessary.
  • The embedded image acquisition means 34 acquires the embedded image from the storage means 33 or, via the transmission/reception means 38, from an external device connected to the communication network. The embedded image here includes video.
  • The invisible information extraction means 35 extracts the invisible information included in the acquired embedded image. Specifically, the invisible information extraction means 35 filters the input embedded image at a predetermined frequency and obtains the invisible information embedded in the image. If the image contains a plurality of pieces of invisible information, all of them are extracted.
  • The invisible information extraction means 35 also obtains extraction position information indicating the position from which each piece of invisible information was extracted.
  • The invisible information extraction means 35 stores the various information thus obtained in the storage means 33.
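  • A rough sketch of the frequency filtering performed by the invisible information extraction means 35 follows. Subtracting a Gaussian-blurred copy from the captured frame is used here as a stand-in high-pass filter; the kernel size and threshold are illustrative assumptions, not values from the patent.

```python
import cv2
import numpy as np

def extract_high_frequency(captured_bgr):
    """Keep only rapid brightness changes, where the marker stripes live."""
    gray = cv2.cvtColor(captured_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    low = cv2.GaussianBlur(gray, (15, 15), 0)       # low-frequency estimate
    high = cv2.absdiff(gray, low)                   # high-pass residue
    _, mask = cv2.threshold(high.astype(np.uint8), 10, 255, cv2.THRESH_BINARY)
    return mask                                     # stripe regions in white
```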
  • The invisible information analysis means 36 analyzes what value-added data is actually contained in the invisible information obtained by the invisible information extraction means 35.
  • The invisible information analysis means 36 in this embodiment has a reading function, such as a barcode reader, for reading barcodes. When the invisible information is a two-dimensional barcode, information is acquired from that barcode using the reading function or the like, and with the acquired content (for example, a code ID) as a key, external devices such as servers and databases connected in advance to the communication network are searched via the storage means 33 and the transmission/reception means 38; if additional information corresponding to the key is found, it is acquired.
  • The display information generation means 37 generates display information for showing the result obtained by the invisible information analysis means 36 on the screen.
  • The result may be displayed in a separate frame (another window) on the screen, displayed at the position where the corresponding object is shown, or output as audio.
  • The display information generation means 37 may visualize and display the acquired invisible information as it is, or may acquire additional information corresponding to the invisible information from the storage means 33 or an external device and display that additional information.
  • The display information generation means 37 generates the display information based on the display size, color, time, position, operation content, and the like set for each piece of additional information acquired from the storage means 33 using the code ID or the like.
  • When the target object is moving in a video, the display may follow the position of the object or may stay fixed at the position where it was first shown on the screen.
  • The transmission/reception means 38 is an interface for acquiring, from external devices connectable via a communication network or the like, desired external images (captured images, etc.) and an execution program for realizing the invisible information recognition process of the present invention. The transmission/reception means 38 can also transmit various information generated in the invisible information recognition device 30 to external devices.
  • The control means 39 controls all the components of the invisible information recognition device 30. Specifically, based on instructions from the input means 31 given by the user or the like, the control means 39 controls each process, such as acquiring the embedded image, extracting the invisible information, analyzing the invisible information, and generating the display information.
  • The invisible information analysis means 36 searches, via the storage means 33 or the transmission/reception means 38 and using the code ID or the like as a key, external devices such as preset servers and databases connected to the network, and acquires the corresponding additional information.
  • In a stand-alone configuration, the storage means 33 is searched using the code ID acquired from the invisible information as a key, and the result obtained by the search is acquired.
  • When the invisible information recognition device 30 is a "network type" connected to external devices via a communication network, the external device is accessed using the code ID acquired from the invisible information as a key, information corresponding to the code ID is searched for in the data group stored in the external device, and the corresponding additional information is acquired from the external device.
  • For the non-visualized information embedding device 10 and the non-visualized information recognition device 30, an execution program (a non-visualization information embedding program and a non-visualization information recognition program) that causes a computer to execute each function can be generated and recorded on a medium such as a CD-ROM.
  • By installing the execution program on a general-purpose personal computer, a server, or the like, the invisible information embedding process and the invisible information recognition process of the present invention can be realized.
  • FIG. 3 shows an example of a hardware configuration capable of realizing the invisible information embedding process and the invisible information recognition process in the present embodiment.
  • The input device 41 comprises a keyboard and a pointing device such as a mouse operated by the user, and inputs various operation signals from the user, such as an instruction to execute a program.
  • The input device 41 also includes an image input unit for inputting images captured by imaging means such as a camera.
  • The output device 42 has a display for showing the various windows and data needed to operate the computer main body that performs the processing of the present invention, and can display the progress and results of program execution under the control program of the CPU 46.
  • The execution program installed in the computer main body in the present invention is provided on a portable recording medium 48 such as a USB memory or a CD-ROM.
  • The recording medium 48 on which the program is recorded can be set in the drive device 43, and the execution program contained on the recording medium 48 is installed from the recording medium 48 into the auxiliary storage device 44 via the drive device 43.
  • the auxiliary storage device 44 is a storage means such as a hard disk, and can store an execution program in the present invention, a control program provided in a computer, and the like, and can perform input / output as necessary.
  • the memory device 45 stores an execution program read from the auxiliary storage device 44 by the CPU 46.
  • the memory device 45 includes a ROM (Read Only Memory), a RAM (Random Access Memory), and the like.
  • The CPU 46 controls the processing of the entire computer, such as various operations and data input/output with each hardware component, based on a control program such as an OS (Operating System) and the execution program stored in the memory device 45.
  • The network connection device 47 connects to a communication network or the like to obtain the execution program from other terminals connected to the network, or to provide the execution results obtained by running the program, or the execution program itself, to other terminals.
  • With the hardware configuration described above, the invisible information embedding process and the invisible information recognition process of the present invention can be executed. Further, by installing the program, these processes can easily be realized on a general-purpose personal computer or the like.
  • FIG. 4 is a flowchart showing an example of the non-visualized information embedding processing procedure in the present embodiment.
  • First, an image captured by imaging means such as a camera is acquired (S01), the image is analyzed (S02), and the object information, object position information, and the like included in the image are obtained.
  • Next, whether the image is an embedding target is determined based on the information obtained in S02 (S03), i.e., whether invisible information (a marker) is to be embedded for the object (S04).
  • When the invisible information is to be embedded (YES in S04), the invisible information is set (S05), and the invisible information to be composited with the image is generated based on the information set in S05 (S06).
  • Then, the invisible information generated in S06 is composited at the predetermined position of the image (S07), and the composite image is displayed or output as output data by output means such as a display (S08).
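  • Gathered into code, the flow of FIG. 4 might look like the driver below. Every helper named here (analyze_image_objects, is_embedding_target, set_additional_information, generate_marker) is hypothetical, standing in for the means described above; composite_invisible is the sketch given earlier.

```python
def embed_invisible_information(image):
    """Driver mirroring FIG. 4 (S01 assumed done: `image` is already captured)."""
    for obj in analyze_image_objects(image):       # S02: object + position info
        if not is_embedding_target(obj):           # S03/S04: check storage
            continue
        info = set_additional_information(obj)     # S05: e.g. name, price, URL
        marker = generate_marker(info)             # S06: e.g. 2-D code bit matrix
        image = composite_invisible(image, marker, # S07: embed at object position
                                    obj["x"], obj["y"])
    return image                                   # S08: display / output
```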
  • FIG. 5 is a flowchart showing an example of the non-visualized information recognition processing procedure in the present embodiment.
  • First, an embedded image is acquired (S11), and extraction of the invisible information from the acquired image is attempted (S12).
  • When the invisible information is extracted from the image (YES in S13), the invisible information is analyzed (S14).
  • Next, display information to be shown on the screen or the like is generated from the information obtained from the analysis result (S15), and the generated content is displayed (S16).
  • Finally, it is determined whether invisible information is to be recognized from another image (S17). If so (YES in S17), the process returns to S11 and the subsequent steps are repeated; if not (NO in S17), the invisible information recognition process ends.
  • The above processing makes it possible to provide highly accurate, value-added images by acquiring information efficiently. Further, by installing the program, the non-visualized information embedding process of the present invention can easily be realized on a general-purpose personal computer or the like.
  • Next, the non-visualization information in this embodiment will be described in detail.
  • As the invisible information, letters, numbers, symbols, marks, patterns, colors, one-dimensional codes, two-dimensional codes, and the like can be used.
  • Here, a two-dimensional code is used as an example.
  • FIG. 6 shows an example of an image.
  • In the image 50 of FIG. 6, a book 52, a wallet 53, a notebook computer 54, and a wristwatch 55 are placed on a desk 51.
  • The image analysis means 15 acquires the object information of these items included in the image and their position information.
  • Suppose the embedding target image determination means 16 finds related information for the wallet 53, the notebook computer 54, and the wristwatch 55, so invisible information is generated for them. The user or the like may arbitrarily decide whether to generate the invisible information and for what kind of object information to generate it.
  • Then, the pieces of non-visualized information 56-1 to 56-3, each consisting of a two-dimensional code, are superimposed on predetermined areas on or around the positions where the wallet 53, the notebook computer 54, and the wristwatch 55 are shown, generating a composite image.
  • FIG. 7 shows an example of the non-visualization information.
  • FIG. 7 shows two types of invisible information 60-1 and 60-2 using a two-dimensional code.
  • The invisible information 60-1 shown in FIG. 7 consists of a low-frequency layer 61-1 in the outermost frame, a high-frequency layer 62-1 inside it, and a code portion 63-1 at the center.
  • Even when the code is embedded in only part of the image, the code portion is surrounded by the low-frequency layer 61-1 and the high-frequency layer 62-1, so that when the image is filtered at a predetermined frequency (for example, with an HPF (High Pass Filter) or LPF (Low Pass Filter)), the two-dimensional code emerges, rendered in a single color such as black or in colors different from those of the image. By filtering, the code region can therefore be located, and the two-dimensional code can be read efficiently even if it is embedded in only a part of the image.
  • The invisible information 60-2 is configured conversely, with the high-frequency layer 62-2 in the outermost frame, the low-frequency layer 61-2 inside it, and the code portion 63-2 at the center; it provides the same effect as the invisible information 60-1.
  • The area in which the code is embedded is preferably square, but the present invention can be applied to any predetermined area, such as a rectangle, a diamond, or a circle.
  • A two-dimensional code ordinarily requires orientation information, but in this embodiment the code is composited aligned with the top, bottom, left, and right of the image. For example, when video on a television screen is captured with the camera of a mobile terminal, the terminal normally faces the screen, so the orientation is limited to a specific direction and no information indicating the direction needs to be attached to the two-dimensional code.
  • Since the input image thus indicates a fixed orientation without orientation information being acquired, no orientation information needs to be placed in the code itself. Accordingly, more of the code can carry other information, and because no orientation analysis is needed at recognition time, the code can be recognized efficiently. The invisible information can therefore be acquired with high accuracy in this embodiment.
  • FIG. 8 is a diagram for explaining a specific example of embedding additional information.
  • Here, an example of embedding a two-dimensional code as the additional information is described. Since the original image shown in FIG. 8 is the same as the image 50 shown in FIG. 6 described above, a detailed description of the image 50 is omitted here.
  • In the image 50 of FIG. 8, within the non-visualized area 56-2 for the notebook computer described above, low-frequency portions 64 and high-frequency portions 65 are arranged within a predetermined area (for example, a square) based on a predetermined condition.
  • The code is embedded with the low-frequency portions 64 representing "0" and the high-frequency portions 65 representing "1". In each high-frequency portion 65, dark and light colors alternate in predetermined pixel units, adjusted so that, viewed from a distance, they average out to the color of the original image itself.
  • Next, the low frequency and high frequency in this embodiment are described.
  • Unless otherwise specified, "frequency" here means spatial frequency, defined as "the reciprocal of the period of pixel values with respect to unit length".
  • The frequency in this embodiment is not particularly limited; for example, it may be set in the range of 0.2-2 [cycle/pixel] for the high-frequency part and 0-1 [cycle/pixel] for the low-frequency part, it being sufficient that the high-frequency part has a higher frequency than the low-frequency part.
  • In the grid formed in a high-frequency part and composed of predetermined pixel regions (for example, 4 x 4 px (pixels)), it suffices that bright parts and dark parts repeat periodically, as vertical stripes, horizontal stripes, or a lattice, for example in repeating sequences such as bright/dark/bright/dark or bright/dark/dark/bright/dark/dark.
  • The brightness difference between the bright part and the dark part should be 10 or more, preferably 50 or more, and more preferably 100 or more.
  • The brightness difference in this embodiment is basically obtained by generating a bright part and a dark part with respect to the brightness of the normally displayed image and taking the difference between them, but the invention is not limited to this; for example, the difference between the brightness of the normal image and the brightness of the low-frequency part or high-frequency part may be used.
  • In grayscale, a region can be treated as a high-frequency part when the brightness difference from the adjacent element is about 15 or more; a brightness difference of about 15 to 35 is the range mainly usable as a high-frequency part.
  • An element consists of pixels of 1 px or more in height and width; in this embodiment, one element can be, for example, 2 x 2 px.
  • The code of the invisible information is generated by varying the brightness difference as appropriate according to the brightness and color of the image (background) at the embedding position, the performance of the camera used for photographing, and the like.
  • The usable pixel size of the additional information is not particularly limited, since it depends on the distance between the image and the person viewing it; for example, at a viewing distance of about 1 m, about 0.05 to 2 mm is preferable, and at about 10 m, about 0.5 to 20 mm. Even at greater distances, a similar pixel-size-to-distance ratio is preferably maintained.
  • FIG. 9 is a diagram for explaining a pattern of a low frequency part or a high frequency part. Note that one square in FIGS. 9A to 9E represents an element.
  • As the pattern, preset patterns such as a checkered pattern (FIG. 9A), horizontal stripes (FIG. 9B), vertical stripes (FIG. 9C), right diagonal lines (FIG. 9D), and left diagonal lines (FIG. 9E) are generated.
  • The pattern is not limited to these; for example, several of the patterns of FIGS. 9A to 9E may be partially combined.
  • On paper, however, a checkered pattern like FIG. 9A is too fine for the camera to recognize as a high-frequency part, so the high-frequency part must be composed of horizontal or diagonal lines. On a monitor, as on paper, a high-frequency part composed of diagonal lines is the easiest to detect.
  • When the embedded information is acquired by photographing the low-frequency part or high-frequency part with a camera or the like, embedding non-visualization information appropriate to the original image (background) with reference to the patterns of FIGS. 9A to 9E allows the embedded information to be read easily and reliably.
  • These patterns are applicable whether only a low frequency, only a high frequency, or both are used.
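  • The FIG. 9 patterns can be produced mechanically, as in the sketch below. The element grid size, the brightness values (a difference of 60), and the diagonal period are illustrative assumptions.

```python
import numpy as np

def make_pattern(kind, n=8, dark=100, bright=160):
    """Element grid for a high-frequency part, in the spirit of FIG. 9."""
    r, c = np.indices((n, n))
    masks = {
        "checkered":  (r + c) % 2 == 0,  # FIG. 9A
        "horizontal": r % 2 == 0,        # FIG. 9B
        "vertical":   c % 2 == 0,        # FIG. 9C
        "diag_right": (r - c) % 4 < 2,   # FIG. 9D (period chosen arbitrarily)
        "diag_left":  (r + c) % 4 < 2,   # FIG. 9E
    }
    return np.where(masks[kind], bright, dark).astype(np.uint8)
```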
  • FIG. 10 shows an embodiment of code embedding.
  • The image 50 shown in FIG. 10 is the same as that shown in FIGS. 6 and 8, so a duplicate description is omitted. Here, a method of embedding the non-visualized information 56-3 for the wristwatch 55 is described.
  • First, the portion where the code is to be embedded is extracted from the image 50.
  • The portion to be extracted is set based on the size of one grid; for example, it can be a square whose side is a predetermined number of pixels (8 px in the example of FIG. 10). One element in FIG. 10 is 2 x 2 px.
  • The grid size may be different, but if it is too small or too large, the codes "0" and "1" become difficult to read; a square of 3-9 px per side is preferred. The shape is not limited to a square and may be a rectangle within the above range.
  • The number of grids in the code portion 63 is not particularly limited, but a square arrangement is preferable; in the example of FIG. 10, the grid is 10 x 10.
  • FIG. 11 shows an example of code embedding in the grid.
  • The non-visualization information generation means 18 quantifies the pixels included in each grid 71 of the code portion 63, which constitutes the non-visualization information 56-3, by their brightness values.
  • Each pixel consists of red, green, and blue elements, and the brightness (0 to 255) of each color is averaged to give the brightness of the pixel.
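  • In code, that brightness is simply the channel mean; a two-step sketch (assuming NumPy arrays in OpenCV's BGR channel order):

```python
import numpy as np

pixel_bgr = np.array([200, 130, 60])        # one pixel as OpenCV stores it (B, G, R)
brightness = pixel_bgr.mean()               # -> 130.0, the pixel's brightness

grid_patch = np.zeros((8, 8, 3), np.uint8)  # an 8 x 8 px grid region
grid_brightness = grid_patch.mean(axis=2)   # per-pixel brightness, shape (8, 8)
```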
  • FIG. 12 shows an example of code embedding in the low frequency part.
  • To embed a low-frequency part, a so-called blur filter is applied to the entire grid by filtering with a Gaussian filter, smoothing the brightness values within the grid. In the example of FIG. 12, the brightness of the grid 71 is smoothed to around 130.
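  • A sketch of this low-frequency ("0") embedding with OpenCV's Gaussian filter follows; the 7 x 7 kernel and sigma are assumed values, not taken from the patent.

```python
import cv2

def embed_low_frequency(image, x, y, size=8):
    """'0' bit: blur the grid so its brightness varies only slowly."""
    patch = image[y:y + size, x:x + size]
    image[y:y + size, x:x + size] = cv2.GaussianBlur(patch, (7, 7), 2)
    return image
```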
  • FIG. 13A and FIG. 13B show an example of code embedding in a high-frequency part.
  • To embed a high-frequency part, a stripe pattern is generated: the brightness of each element in the even-numbered rows of the grid 71 is increased, and the brightness of each element in the odd-numbered rows is decreased.
  • The brightness increment or decrement for each element is determined according to the brightness of the background, as in the correspondence table shown in FIG. 13B, whose values are stored in, for example, the storage means 13.
  • By performing the low-frequency and high-frequency code embedding processes of FIGS. 12, 13A, and 13B on the non-visualized region, a two-dimensional code can be generated.
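  • A sketch of the high-frequency ("1") embedding follows. The FIG. 13B table itself is not reproduced in the text, so DELTA_BY_BACKGROUND below is an invented illustration of its shape (larger brightness steps assumed over darker backgrounds); the element height of 2 px matches the 2 x 2 px elements mentioned above.

```python
import numpy as np

# Invented stand-in for the FIG. 13B correspondence table:
# (background brightness threshold, brightness step for the stripes).
DELTA_BY_BACKGROUND = [(192, 20), (128, 25), (64, 30), (0, 35)]

def embed_high_frequency(image, x, y, size=8, element=2):
    """'1' bit: brighten even element rows, darken odd ones (stripes)."""
    patch = image[y:y + size, x:x + size].astype(np.int16)
    delta = next(d for t, d in DELTA_BY_BACKGROUND if patch.mean() >= t)
    for row in range(0, size, element):
        sign = 1 if (row // element) % 2 == 0 else -1
        patch[row:row + element] += sign * delta
    image[y:y + size, x:x + size] = np.clip(patch, 0, 255).astype(np.uint8)
    return image
```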
  • FIG. 14 shows an example of a procedure for extracting the invisible information; the extraction method in the present invention is not limited to this one.
  • First, a captured image obtained by photographing the video or image in which the non-visualization information is embedded is acquired (S21).
  • Next, a Sobel-Laplacian filter is applied to the acquired image to extract the edges (portions where the brightness changes sharply); the edge portions are converted to white and everything else to black (S22). Applying the Sobel-Laplacian filter to the captured image extracts both the code portion (in the case of a high-frequency image, a collection of edges) and the edges of the background (image edges).
  • Edge extraction basically uses places where light and dark change suddenly; in a high-frequency image, the brightness reverses at every element when viewed along the horizontal or vertical direction, so every element boundary is an edge.
  • Next, the image is transformed into the frequency domain by DCT (Discrete Cosine Transform) (S23), the high-frequency components are removed in the frequency domain (S24), and the image is returned from the frequency domain to the spatial domain by IDCT (Inverse Discrete Cosine Transform) (S25). The processing of S23 to S25 thus amounts to LPF (Low Pass Filter) processing.
  • A Sobel-Laplacian filter is then applied to the image obtained in S25 to extract its edges (portions with sharp brightness changes), converting the edge portions to white and everything else to black (S26).
  • Next, an expansion process is performed to thicken the edges obtained by the Sobel-Laplacian filter (S27); specifically, for example, each edge is expanded outward by 3 pixels (px) vertically and horizontally.
  • Finally, the invisible information is extracted from the image (S30). Specifically, since only the non-visualization information remains once the median filtering of S29 is complete, the non-visualization information is extracted by performing (1) shape (rectangle) extraction, (2) projective transformation, and (3) determination of the code bits "1" and "0".
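  • The front half of this pipeline (S21-S27 plus the S29 cleanup) could be sketched as below. cv2.Laplacian stands in for the Sobel-Laplacian filter, the DCT cut-off of 32 coefficients is an arbitrary illustration of S24, and the step that combines the two edge images is approximated by a subtraction since the text elides S28.

```python
import cv2
import numpy as np

def isolate_code(gray):
    """Approximate FIG. 14: leave (ideally) only the embedded code."""
    g = gray[:gray.shape[0] // 2 * 2, :gray.shape[1] // 2 * 2]  # cv2.dct wants even sizes
    _, edges = cv2.threshold(cv2.Laplacian(g, cv2.CV_8U),
                             20, 255, cv2.THRESH_BINARY)        # S22: all edges -> white
    f = cv2.dct(np.float32(g))                                  # S23: to frequency domain
    f[32:, :] = 0
    f[:, 32:] = 0                                               # S24: drop high frequencies
    low = np.uint8(np.clip(cv2.idct(f), 0, 255))                # S25: back to spatial (LPF)
    _, bg = cv2.threshold(cv2.Laplacian(low, cv2.CV_8U),
                          20, 255, cv2.THRESH_BINARY)           # S26: background edges only
    bg = cv2.dilate(bg, np.ones((3, 3), np.uint8), iterations=3)  # S27: thicken edges
    code = cv2.subtract(edges, bg)                              # ~S28: remove background edges
    return cv2.medianBlur(code, 3)                              # S29: median-filter cleanup
```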
  • In the shape extraction, the coordinates of the four corners of the quadrangle (code) are calculated from the image.
  • The positions of the coordinates to be calculated are not limited to a quadrangle and can be set as appropriate according to the preset shape; for a triangle or a star, the coordinates of each vertex are calculated.
  • In the projective transformation, the distorted code is restored to a square using the coordinates of the four obtained points.
  • In the bit determination, each data bit in the code is judged to be "1" or "0": among the pixels of the block constituting one bit, if black pixels are in the majority the bit is judged "0", and if white pixels are in the majority it is judged "1".
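  • Steps (2) and (3) admit a direct sketch with OpenCV's perspective tools; the 10 x 10 grid and 8 px cell match the FIG. 10 example, and `corners` is assumed to come from step (1), ordered top-left, top-right, bottom-right, bottom-left.

```python
import cv2
import numpy as np

def decode_code(mask, corners, grid=10, cell=8):
    """(2) Projective transformation back to a square, then (3) bit decision."""
    side = grid * cell
    dst = np.float32([[0, 0], [side, 0], [side, side], [0, side]])
    m = cv2.getPerspectiveTransform(np.float32(corners), dst)
    square = cv2.warpPerspective(mask, m, (side, side))   # distorted -> square
    bits = []
    for r in range(grid):
        row = []
        for c in range(grid):
            block = square[r * cell:(r + 1) * cell, c * cell:(c + 1) * cell]
            # More white pixels than black -> "1"; otherwise -> "0".
            row.append(1 if np.count_nonzero(block) > block.size // 2 else 0)
        bits.append(row)
    return bits
```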
  • FIG. 15 shows an example of extracting actual invisible information.
  • In FIG. 15, the invisible information is extracted from a captured image in which the invisible information (Code) is embedded in a character image (Image) of "R".
  • Edge detection is performed on the captured image shown in FIG. 15A to obtain the image of code + image edges (Code + Image edges) shown in FIG. 15B. Code removal is also performed on the captured image of FIG. 15A to obtain, as shown in FIG. 15C, an image with the code portion removed (Image - Code).
  • Edge detection is then performed on the image (Image - Code) of FIG. 15C to obtain, as shown in FIG. 15D, the image edges without the code (Image edges - Code).
  • Finally, taking the difference yields the code (Code) as shown in FIG. 15E.
  • In the procedure of FIG. 14, the result obtained in S25 corresponds to FIG. 15C, the result obtained in S27 corresponds to FIG. 15D, and the finally extracted result corresponds to FIG. 15E.
  • By the above method, the invisible information can be extracted from the captured image with high accuracy.
  • The above extraction processing may be performed on the entire captured image, or, if the position of the region in which the invisible information is embedded is specified in advance, only on that specific region.
  • FIG. 16 shows an example of the recognition result of the invisible information.
  • When the display information generation means of the non-visualization information recognition device 30 applies HPF filtering to the composite image 81 in which the non-visualization information has been combined, the two-dimensional code shown in FIG. 16 can be made to appear on the original image, and the contents of the two-dimensional code can be displayed by reading this information.
  • FIGS. 17A to 17H each show an example of an image to which invisible information has been added; the invisible information is embedded in each region 91 shown in FIGS. 17A to 17H.
  • In this way, the non-visualization information is added only at appropriate places in the image, and detailed information about an object shown in a part of the image can be provided accurately.
  • In the code of the non-visualization information in the regions 91 added to the images of FIGS. 17A to 17C, for example, the product's name, materials, taste evaluation, price, and the stores that sell it, or address information (for example, a URL) of a server where such information is stored, can be held.
  • In the code of the invisible information in the region 91 added to the image of FIG. 17D, information such as the name of the flower, the shooting location, and the blooming season can be stored.
  • In the code of the invisible information in the region 91 added to the image of FIG. 17E, information such as the name of the sculpture, the shooting location, and the history of its installation can be stored.
  • In the code of the non-visualization information in the region 91 added to the image of FIG. 17F, information such as the name of the airplane, its flight speed, and the shooting location can be stored.
  • FIGS. 17G and 17H show the same image, but the regions 91 contain codes of invisible information generated by different techniques. Specifically, in the region 91 of FIG. 17G, a code generated using both a high-frequency part and a low-frequency part is embedded, while in the region 91 of FIG. 17H, a code generated using only the high-frequency part is embedded. With a code containing both low-frequency and high-frequency parts, as in FIG. 17G, the image may look blurred at the low-frequency part depending on the original image; therefore, as shown in FIG. 17H, using a code with only a high-frequency part makes it possible to embed non-visualization information that is even harder to see in the original image.
  • As described above, a plurality of pieces of invisible information can be added to parts of a single image.
  • The targets to which the invisible information of this embodiment can be added are not limited to images displayed on a television or personal computer screen; it can also be applied to video displayed on a screen and to various media such as paper, cards, postcards, and posters.
  • The size and number of the codes embedded in the original image can be adjusted as appropriate according to the amount of data to be embedded.
  • FIG. 18 shows another embodiment of the non-visualization information.
  • In FIG. 18, either the high-frequency part or the low-frequency part is rendered in a predetermined color such as black, so that the embedded characters ("RITS" in the example of FIG. 18) can be output as display information as they are.
  • With the method shown in FIG. 18, there is no need to search for and acquire the corresponding additional information using a code ID or the like as a key, and the additional information can be displayed on the screen quickly.
  • FIG. 19 shows an example of the comparison result.
  • The method of generating invisible information encoded using the low-frequency part and the high-frequency part in this embodiment is shown in FIG. 19 as the "frequency method".
  • The reading time is the time from recognition of the code portion to completion of decoding.
  • The number of executed instructions is a value calculated assuming, as an example, a MIPS (Million Instructions Per Second) rating of 22,058M for an "Intel Core 2 Duo".
  • In this embodiment, the reading time is 0.272 seconds (execution time); the execution environment was Mac OS X 10.6 (OS), a 2 GHz Core 2 Duo (CPU), and 2 GB of memory.
  • FIG. 20A shows an image in which the invisible information is not yet embedded, and FIG. 20B shows an image in which the two-dimensional code serving as the invisible information of the present embodiment is embedded.
  • The two-dimensional code added to the image of FIG. 20B is placed at the same positions as the invisible information 56-1 to 56-3 of the image 50 described earlier.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Editing Of Facsimile Originals (AREA)
  • Image Processing (AREA)
  • Accessory Devices And Overall Control Thereof (AREA)
  • Record Information Processing For Printing (AREA)

Abstract

The present invention relates to an invisible-information embedding device that embeds invisible information at a prescribed position in an acquired image. The device is characterized in that it has an image analysis means that acquires object information and position information contained in the above-mentioned image, an embedding-target-image determination means that determines, from the object information acquired by the image analysis means, whether the above-mentioned image is a target for embedding, and an image synthesis means that synthesizes the above-mentioned invisible information into the above-mentioned image on the basis of the determination results obtained from the above-mentioned embedding-target-image determination means.
PCT/JP2010/072039 2009-12-08 2010-12-08 Dispositif d'intégration d'informations invisibles, dispositif de reconnaissance d'informations invisibles, procédé d'intégration d'informations invisibles, procédé de reconnaissance d'informations invisibles et support d'enregistrement WO2011071081A1 (fr)

Priority Applications (4)

Application Number Priority Date Filing Date Title
CN2010800555503A CN102648623A (zh) 2009-12-08 2010-12-08 非可视化信息嵌入装置、非可视化信息识别装置、非可视化信息嵌入方法、非可视化信息识别方法、及存储介质
KR1020127014595A KR101285648B1 (ko) 2009-12-08 2010-12-08 비가시화정보 임베딩장치, 비가시화정보 인식장치, 비가시화정보 임베딩방법, 비가시화정보 인식방법 및 기록매체
US13/514,108 US8891815B2 (en) 2009-12-08 2010-12-08 Invisible information embedding apparatus, invisible information detecting apparatus, invisible information embedding method, invisible information detecting method, and storage medium
EP10836001.7A EP2512115B1 (fr) 2009-12-08 2010-12-08 Dispositif d'intégration d'informations invisibles, dispositif de reconnaissance d'informations invisibles, procédé d'intégration d'informations invisibles, procédé de reconnaissance d'informations invisibles et support d'enregistrement

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2009-278883 2009-12-08
JP2009278883 2009-12-08
JP2010191267A JP5021061B2 (ja) 2009-12-08 2010-08-27 非可視化情報埋込装置、非可視化情報認識装置、非可視化情報埋込方法、非可視化情報認識方法、非可視化情報埋込プログラム、及び非可視化情報認識プログラム
JP2010-191267 2010-08-27

Publications (1)

Publication Number Publication Date
WO2011071081A1 true WO2011071081A1 (fr) 2011-06-16

Family

ID=44145626

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2010/072039 WO2011071081A1 (fr) 2009-12-08 2010-12-08 Dispositif d'intégration d'informations invisibles, dispositif de reconnaissance d'informations invisibles, procédé d'intégration d'informations invisibles, procédé de reconnaissance d'informations invisibles et support d'enregistrement

Country Status (6)

Country Link
US (1) US8891815B2 (fr)
EP (1) EP2512115B1 (fr)
JP (1) JP5021061B2 (fr)
KR (1) KR101285648B1 (fr)
CN (1) CN102648623A (fr)
WO (1) WO2011071081A1 (fr)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6001275B2 (ja) * 2012-02-15 2016-10-05 学校法人立命館 非可視化情報埋込装置、非可視化情報埋込方法、及び非可視化情報埋込プログラム
JP5986422B2 (ja) * 2012-05-15 2016-09-06 学校法人立命館 オブジェクト抽出方法、オブジェクト抽出装置、及びオブジェクト抽出プログラム
CN103886548B (zh) * 2013-07-04 2017-09-15 百度在线网络技术(北京)有限公司 一种用于将二维码与图像融合的方法和装置
US20150026608A1 (en) * 2013-07-17 2015-01-22 Marvell World Trade Ltd. Systems and Methods for Application Management on Mobile Devices
JP6152787B2 (ja) 2013-11-29 2017-06-28 富士通株式会社 情報埋め込み装置、情報検出装置、情報埋め込み方法、及び情報検出方法
JP5536951B1 (ja) * 2013-12-26 2014-07-02 進 辻 表示コードが付された物品、表示コード読取装置および情報伝達方法
CN103886353B (zh) * 2014-03-10 2017-02-01 百度在线网络技术(北京)有限公司 二维码图像的生成方法和装置
CN103886628B (zh) * 2014-03-10 2017-02-01 百度在线网络技术(北京)有限公司 二维码图像生成方法和装置
JP2017168925A (ja) * 2016-03-14 2017-09-21 ソニー株式会社 信号処理装置、撮像装置および信号処理方法
JP6296319B1 (ja) * 2016-09-30 2018-03-20 国立大学法人 奈良先端科学技術大学院大学 情報処理装置、表示方法、読取方法、およびコンピュータ読み取り可能な非一時的記憶媒体
CN109792472B (zh) * 2016-10-12 2020-11-03 富士通株式会社 信号调整程序、信号调整装置以及信号调整方法
JP6934645B2 (ja) * 2017-01-25 2021-09-15 国立研究開発法人産業技術総合研究所 画像処理方法
JP7159911B2 (ja) * 2019-02-27 2022-10-25 京セラドキュメントソリューションズ株式会社 画像処理装置及び画像形成装置
CN112560530B (zh) * 2020-12-07 2024-02-23 北京三快在线科技有限公司 一种二维码处理方法、设备、介质及电子设备

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8379908B2 (en) * 1995-07-27 2013-02-19 Digimarc Corporation Embedding and reading codes on objects
US6411725B1 (en) * 1995-07-27 2002-06-25 Digimarc Corporation Watermark enabled video objects
WO2001080169A1 (fr) * 2000-04-17 2001-10-25 Digimarc Corporation Authentification d'objets supports electroniques et physiques a l'aide de filigranes numeriques
JP3431593B2 (ja) * 2000-10-31 2003-07-28 株式会社東芝 コンテンツ生成装置、電子透かし検出装置、コンテンツ生成方法、電子透かし検出方法及び記録媒体
EP1461760B1 (fr) 2001-11-30 2009-08-19 International Barcode Corporation Systeme et procede de validation d'une image numerique et de donnees correspondantes
JP2004178446A (ja) * 2002-11-28 2004-06-24 Ntt Docomo Inc 特定領域抽出装置及び特定領域抽出方法
WO2004090794A1 (fr) 2003-04-07 2004-10-21 Vodafone K.K. Procede de traitement d'informations
US7796776B2 (en) * 2004-03-29 2010-09-14 Panasonic Corporation Digital image pickup device, display device, rights information server, digital image management system and method using the same
WO2006008787A1 (fr) * 2004-07-15 2006-01-26 Mitsubishi Denki Kabushiki Kaisha Appareil de traitement d’informations et procede de traitement d’informations
US8259342B2 (en) * 2005-07-04 2012-09-04 International Business Machines Corporation System, method and program for generating data for printing invisible information, and method of manufacturing physical medium whereupon invisible information is printed
JP4676852B2 (ja) * 2005-09-22 2011-04-27 日本放送協会 コンテンツ送信装置
JP4645457B2 (ja) * 2006-01-24 2011-03-09 富士ゼロックス株式会社 透かし入り画像生成装置、透かし入り画像解析装置、透かし入り画像生成方法、媒体及びプログラム
US8090141B2 (en) * 2006-01-31 2012-01-03 Xerox Corporation System and method to automatically establish preferred area for image-wise watermark
US8369688B2 (en) * 2006-06-19 2013-02-05 Panasonic Corporation Information burying device and detecting device
JP2008172662A (ja) * 2007-01-15 2008-07-24 Seiko Epson Corp 画像データ変換装置および画像データ変換方法
JP4697189B2 (ja) * 2007-05-30 2011-06-08 村田機械株式会社 デジタル複合機
CN101072340B (zh) 2007-06-25 2012-07-18 孟智平 流媒体中加入广告信息的方法与系统
US20090050700A1 (en) * 2007-08-26 2009-02-26 Noboru Kamijoh Adding and detecting bar code printed with ink invisible to human eye onto printed medium
JP2009088614A (ja) 2007-09-27 2009-04-23 Toshiba Corp 画像処理方法および画像処理装置
US20100045701A1 (en) * 2008-08-22 2010-02-25 Cybernet Systems Corporation Automatic mapping of augmented reality fiducials
CN101504760A (zh) 2009-02-27 2009-08-12 上海师范大学 一种数字图像隐密信息检测与定位的方法

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH1141453A (ja) * 1997-07-24 1999-02-12 Nippon Telegr & Teleph Corp <Ntt> 電子透かし埋め込み読み出し処理方法,電子透かし埋め込み処理プログラム記憶媒体および電子透かし読み出し処理プログラム記憶媒体
JP2000082107A (ja) 1998-06-30 2000-03-21 Sony Corp 画像処理装置、画像処理方法、および媒体
JP2002032076A (ja) 2000-07-19 2002-01-31 Atr Media Integration & Communications Res Lab 楽器インタフェース
JP2002118736A (ja) * 2000-10-10 2002-04-19 Konica Corp 電子透かし挿入装置および電子透かし抽出装置ならびに電子透かしシステム
JP2005142836A (ja) * 2003-11-06 2005-06-02 Hitachi Ltd 電子透かし埋め込みプログラム及び情報処理装置
WO2005074248A1 (fr) 2004-02-02 2005-08-11 Nippon Telegraph And Telephone Corporation Dispositif d'incorporation de filigrane électronique, dispositif de détection de filigrane électronique, procédé et programme s'y référant
WO2007015452A1 (fr) 2005-08-04 2007-02-08 Nippon Telegraph And Telephone Corporation Méthode de remplissage de filigrane numérique, dispositif de remplissage de filigrane numérique, méthode de détection de filigrane numérique, dispositif de détection de filigrane numérique, et programme
JP2009278883A (ja) 2008-05-20 2009-12-03 Marusho:Kk ガゴメ昆布食品の製造方法およびガゴメ昆布食品
JP2010191267A (ja) 2009-02-19 2010-09-02 Fuji Xerox Co Ltd 画像表示媒体および画像表示装置

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2512115A4 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2791883B1 (fr) * 2011-12-14 2020-01-01 Sony Corporation Dispositif de traitement d'informations, procédé de traitement d'informations, et programme

Also Published As

Publication number Publication date
EP2512115A4 (fr) 2013-03-06
KR20120128600A (ko) 2012-11-27
KR101285648B1 (ko) 2013-07-12
US20120237079A1 (en) 2012-09-20
EP2512115B1 (fr) 2016-10-19
US8891815B2 (en) 2014-11-18
CN102648623A (zh) 2012-08-22
EP2512115A1 (fr) 2012-10-17
JP5021061B2 (ja) 2012-09-05
JP2011142607A (ja) 2011-07-21

Similar Documents

Publication Publication Date Title
JP5021061B2 (ja) 非可視化情報埋込装置、非可視化情報認識装置、非可視化情報埋込方法、非可視化情報認識方法、非可視化情報埋込プログラム、及び非可視化情報認識プログラム
CN103038781B (zh) 隐藏图像信号发送
JP2017108401A5 (ja) スマートフォンベースの方法、スマートフォン及びコンピュータ可読媒体
JP4972712B1 (ja) 非可視化情報を用いたコンテンツ提供システム、非可視化情報の埋込装置、認識装置、埋込方法、認識方法、埋込プログラム、及び認識プログラム
JP4848427B2 (ja) 動画イメージコード、動画イメージコードを生成または復号する装置及びその方法
US10469701B2 (en) Image processing method that obtains special data from an external apparatus based on information multiplexed in image data and apparatus therefor
KR20120019331A (ko) 인스턴트 마커를 이용한 증강 현실 장치 및 방법
US9626934B2 (en) Display format using display device for machine-readable dot patterns
CN111625100A (zh) 图画内容的呈现方法、装置、计算机设备及存储介质
US20130002699A1 (en) Image processing apparatus and an image processing method
Liu et al. Toward a two-dimensional barcode with visual information using perceptual shaping watermarking in mobile applications
WO2020261546A1 (fr) Dispositif de traitement d'informations, système de traitement d'informations, procédé de traitement d'informations, et programme
JP6001275B2 (ja) 非可視化情報埋込装置、非可視化情報埋込方法、及び非可視化情報埋込プログラム
CN111640190A (zh) Ar效果的呈现方法、装置、电子设备及存储介质
JP2014219822A (ja) コンテンツ表示装置、コンテンツ表示方法、プログラム、及び、コンテンツ表示システム
CN114549270A (zh) 结合深度鲁棒水印和模板同步的抗拍摄监控视频水印方法
JP4725473B2 (ja) 携帯端末装置、情報表示方法、及びプログラム
KR20170058517A (ko) 증강 현실을 이용한 포토존 촬영 장치
KR20160038193A (ko) 명함 또는 브로슈어를 통해 제공되는 증강 현실 컨텐츠를 이용한 기업 소개 방법 및 프로그램
JP6166767B2 (ja) 機械可読ドットパターン
WO2018135272A1 (fr) Dispositif de traitement d'informations, procédé d'affichage, programme, et support d'enregistrement lisible par ordinateur
JP2009196324A (ja) 印刷装置、情報処理装置、及び情報処理方法
Zhu et al. Systolic array implementations for Chebyshev nonuniform sampling
Bernstein et al. Subliminal: A System for Augmenting Images with Steganography
JP2015056041A (ja) 画像生成システム、画像生成方法、画像生成プログラム、視線予測システム、視線予測方法、および視線予測プログラム

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 201080055550.3

Country of ref document: CN

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10836001

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 20127014595

Country of ref document: KR

Kind code of ref document: A

WWE Wipo information: entry into national phase

Ref document number: 13514108

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE

REEP Request for entry into the european phase

Ref document number: 2010836001

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2010836001

Country of ref document: EP