JP3949679B2 - Steganography system - Google Patents


Info

Publication number
JP3949679B2
Authority
JP
Japan
Prior art keywords
signal
image
identification code
data
object
Prior art date
Legal status
Expired - Lifetime
Application number
JP2004224727A
Other languages
Japanese (ja)
Other versions
JP2005051793A (en)
Inventor
Geoffrey B. Rhoads
Original Assignee
Digimarc Corporation
Priority date
Filing date
Publication date
Priority to US08/436,102 (US5748783A)
Priority to US08/508,083 (US5841978A)
Priority to US08/512,993
Priority to US08/534,005 (US5832119)
Priority to US08/635,531
Application filed by Digimarc Corporation
Publication of JP2005051793A
Application granted
Publication of JP3949679B2


Description

Steganography background

There are numerous approaches to steganography and numerous applications of it. A brief survey follows.

British Patent Publication No. 2,196,167 to Thorn EMI discloses a system in which an audio recording is electronically mixed with a marking signal identifying the owner of the recording, the combination being perceptually identical to the original. U.S. Pat. Nos. 4,963,998 and 5,079,648 disclose variants of this system.

U.S. Pat. No. 5,319,735 to Bolt, Beranek and Newman builds on the same principles as the Thorn EMI patent above, but additionally addresses psychoacoustic masking.

U.S. Pat. Nos. 4,425,642, 4,425,661, 5,404,377, and 5,473,631 to Moses disclose various systems for subtly embedding data in an audio signal, the latter two patents focusing in particular on neural-network implementations and perceptual coding details.

U.S. Pat. No. 4,943,973 to AT&T discloses a system that uses spread-spectrum techniques to add a low-level noise signal to other data in order to transmit ancillary data along with it. The patent specifically describes transmitting network control signals along with digitized voice signals.

U.S. Pat. No. 5,161,210 to U.S. Philips discloses a system in which additional low-level quantization levels are defined in an audio signal and used to convey, for example, a copy-inhibit signal.

U.S. Pat. No. 4,972,471 to Gross discloses a system for assisting in the automated monitoring of copyrighted signals (e.g., radio broadcasts) by reference to identification signals subtly embedded in them.

U.S. Pat. No. 5,243,423 to DeJean discloses a video steganography system that encodes digital data (e.g., the identity of the program's producer, copyright marking, media research data, closed captioning, etc.) on randomly selected video lines. DeJean relies on television sync pulses to trigger a stored pseudo-random sequence, which is XORed with the digital data and combined with the video.

European Patent Publication EP 581,317 discloses a system for redundantly marking an image with a multi-bit identification code. Each "1" ("0") bit of the code is manifested as a slight increase (decrease) in pixel values around spaced-apart "signature points." Decoding proceeds by computing the difference between the suspect image and the original, unencoded image and examining the pixel variations around the signature points.

PCT Publication WO 95/14289 details the present applicant's earlier work in this field.

Komatsu et al., in their paper "A Proposal on Digital Watermark in Document Image Communication and Its Application to Realizing a Signature," Electronics and Communications in Japan, Part 1, Vol. 73, No. 5, 1990, pp. 22-33, describe an image-marking technique. The work is somewhat difficult to follow, but it apparently results in a simple yes/no determination of whether a watermark (e.g., a 1-bit encoded message) is present in a suspect image.

There is a large body of work on embedding digital information in video signals. Much of it embeds the signal in non-visible portions such as the vertical and horizontal blanking intervals, while other work embeds the information "in-band" (i.e., in the visible video signal itself). Examples include U.S. Pat. Nos. 4,528,588, 4,595,950, and 5,319,453, European Patent Application Publication No. 441,702, and Matsui et al., "Video Steganography: How to Secretly Embed a Signature in a Picture," IMA Intellectual Property Project Proceedings, January 1994, Vol. 1, Issue 1, pp. 187-205.

In Europe there are various consortium research efforts on copyright marking of video and multimedia. A technical overview can be found in "Access Control and Copyright Protection for Images (ACCOPI), WorkPackage 8: Watermarking," June 30, 1995, 46 pages. A newer project, named TALISMAN, appears to extend the ACCOPI work to some degree. Zhao and Koch, researchers active in these projects, provide a web-based electronic media marking service known as SysCoP.

Aura surveys a number of topics in steganography in his paper "Invisible Communication," Helsinki University of Technology, Digital Systems Laboratory, November 5, 1995.

Sandford II et al., "The Data Embedding Method," SPIE Vol. 2615, October 23, 1995, report on their May 1994 image steganography program (BMPEMBED).

The UK company Highwater FBI Limited has introduced a software product that subtly embeds identification information in photographs and other graphical data. The technology is described in its British patent applications 9400971.9 (filed Jan. 19, 1994), 9504221.2 (filed Mar. 2, 1995), and 9513790.7 (filed Jul. 3, 1995). The first of these has been published as PCT Publication WO 95/20291.

Walter Bender at MIT has done a variety of work in this area, as described in his paper "Techniques for Data Hiding," Massachusetts Institute of Technology, Media Laboratory, January 1995.

Dice Co. of Palo Alto is developing an audio marking technology marketed under the name Argent. It is understood that U.S. patent applications are pending but have not yet issued.

Tirkel et al. at Monash University have published various papers, including "Electronic Water Mark," DICTA-93, Macquarie University, Sydney, Australia, December 1993, and "A Digital Watermark," IEEE International Conference on Image Processing, November 16, 1994, pp. 86-90.

Cox et al. of the NEC Research Institute consider various data embedding techniques in their December 1995 paper "Secure Spread Spectrum Watermarking for Multimedia."

Möller et al., "Rechnergestützte Steganographie: Wie sie funktioniert und warum folglich jede Reglementierung von Verschlüsselung unsinnig ist," DuD, Datenschutz und Datensicherung, 18/6 (1994), pp. 318-326, consider an experimental system that embeds auxiliary data in ISDN signals. The system examines ISDN signal samples and substitutes auxiliary data transmission for samples falling below a threshold.

More generally, various software programs available on the Internet (e.g., "Stego" and "White Noise Storm") operate by substituting bits from a message stream to be hidden for the least significant bits of an image or audio signal.

DETAILED DESCRIPTION In the following discussion of the illustrative embodiments, the terms "signal" and "image" are used interchangeably to refer to digital signals of one, two, and even more than two dimensions. The examples routinely switch back and forth between a one-dimensional audio-type digital signal and a two-dimensional image-type digital signal.

In order to fully describe the details of the illustrative embodiments of the present invention, it is first necessary to describe the basic nature of digital signals. FIG. 1 shows a classic representation of a one-dimensional digital signal. The x-axis gives the index numbers of the digital "samples," and the y-axis gives the instantaneous value of the signal at each sample, constrained to exist at only a finite number of levels defined as the "binary depth" of a digital sample. The example shown in FIG. 1 has a binary depth of 2 to the fourth power, or "4 bits," giving 16 allowed states for the sample value.

With respect to audio information such as sound waves, it is generally recognized that the digitization process discretizes a continuous phenomenon in both the time domain and the signal-level domain. As such, the digitization process itself introduces a fundamental error source: it cannot record detail smaller than the discretization interval in either domain. The industry calls this "aliasing" in the time domain and "quantization noise" in the signal-level domain. Thus there is always a fundamental error floor in a digital signal. Pure quantization noise, measured in the rms sense, is theoretically known to have a value of one over the square root of 12, or about 0.29 DN, where DN stands for "Digital Number," the finest unit increment of the signal level. For example, a perfect 12-bit digitizer has 4096 allowed DNs with an inherent rms noise floor of ~0.29 DN.
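
The 1/sqrt(12) figure is easy to confirm numerically. The following sketch (a minimal check in Python/NumPy, not part of the patent text) quantizes a simulated continuous signal to whole digital numbers and measures the rms rounding error:

```python
# Numerical check: uniformly distributed rounding error has an rms value
# of 1/sqrt(12) DN, the quantization-noise floor quoted above.
import numpy as np

rng = np.random.default_rng(0)
analog = rng.uniform(0, 4096, size=1_000_000)  # continuous levels, 12-bit range
quantized = np.round(analog)                   # quantization to whole DNs
error = quantized - analog                     # uniform on [-0.5, +0.5] DN

print(np.sqrt(np.mean(error ** 2)))  # ~0.2887 DN measured
print(1 / np.sqrt(12))               # 0.2887 DN theoretical floor
```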

All known physical measurement processes add additional noise in the conversion of a continuous signal into digital form. Typically, quantization noise adds in quadrature (the square root of the sum of the squares) to the "analog noise" of the measurement process, as will be discussed later.

The decibel scale, as used by nearly all commercial and technical processes, is used here as the measure of signal and noise in a given recording medium, and the expression "signal-to-noise ratio" is used in its usual sense. As a convention, this specification refers to signal-to-noise ratios in terms of signal power and noise power; thus 20 dB represents a 10-fold increase in signal amplitude.

In summary, the presently preferred embodiment of the present invention embeds an N-bit value into an entire signal by adding a very low-amplitude encoded signal that has the appearance of pure noise. N is usually at least 8 and is capped on the high end by "bit error" considerations in recovering the N-bit value during decoding and by ultimate signal-to-noise considerations. As a practical matter, N is selected based on application-specific considerations, such as the number of unique, different "signatures" desired. For illustration, if N = 128, the number of unique digital signatures is over 10^38 (2^128). This number is believed more than sufficient both to authenticate a work with adequate statistical certainty and to index exact sale and distribution information.

The amplitude or power of this added signal is determined by aesthetic and informational considerations of each application using the methodology. For instance, non-professional video can tolerate a higher embedded signal level without it being noticeable to the average human eye, while high-fidelity audio can employ only a relatively small signal level lest the human ear perceive an objectionable increase in "hiss." These statements are generalities; each application has its own set of criteria for choosing the level of the embedded signal. The higher the embedded signal level, the more corrupted a copy can be and still be verified; on the other hand, a higher level may produce more objectionable perceived noise, possibly affecting the value of the distributed work.

To illustrate the range of different applications in which the principles of the present invention can be used, this specification details two different systems. The first (called a "batch encoding" system for lack of a better name) applies verification coding to an existing data signal. The second (called "real-time encoding" for lack of a better name) applies verification coding to a signal as it is generated. Those skilled in the art will recognize that the principles of the present invention can be applied in many other contexts beyond those specifically described.

The discussions of these two systems can be read in either order. Some readers will find the latter more intuitive than the former; for others, the opposite will hold true.

The following discussion of the first set of batch-encoding embodiments is best begun with a set of definitions of the relevant terms.

The original signal refers to either the original digital signal or a high-quality digitized copy of a non-digital original.

The N-bit verification word refers to a unique verification binary value, typically with N in the range of 8 to 128, which is the verification code ultimately placed into the original signal via the disclosed transformation process. In the illustrated embodiment, each N-bit verification word begins with the value array "0101," which is used to determine an optimization of the signal-to-noise ratio in the suspect signal (see definition below).

The m-th bit value of the N-bit verification word is either zero or one, corresponding to the value at the m-th position when the N-bit word is read left to right. For example, the first (m = 1) bit value of the N = 8 verification word 01110100 is the value "0," the second bit value of this verification word is "1," and so on.

The m-th independent embedded code signal refers to a signal with dimensions and extent exactly equal to the original signal (e.g., both are 512 by 512 digital images), which is (in the illustrated embodiment) an independent pseudo-random array of digital values. "Pseudo" pays homage to the philosophical difficulty of defining pure randomness and indicates that there are various acceptable ways of generating a "random" signal. There are exactly N independent embedded code signals associated with any given original signal.

The acceptable perceived noise level refers to an application-specific determination of how much "extra noise," i.e., the amplitude of the composite embedded code signal described below, can be added to the original signal while the signal remains acceptable for sale or other distribution. This specification uses a 1 dB increase in noise as a typical tolerable value, although this is quite arbitrary.

The composite embedded code signal refers to a signal with dimensions and extent exactly equal to the original signal (e.g., both are 512 by 512 digital images), containing the addition, with appropriate attenuation, of the N independent embedded code signals. The independent embedded codes are generated at an arbitrary scale, but since the amplitude of the composite signal must not exceed the pre-set acceptable perceived noise level, the sum of the N added independent code signals requires "attenuation."

The distributable signal refers to the nearly identical copy of the original signal consisting of the original signal plus the composite embedded code signal. This is the signal distributed to the outside community; it has slightly higher, but acceptable, "noise properties" than the original.

A suspect signal refers to a signal that has the general appearance of the original or distributable signal and whose potential match to the original is in question. Whether a suspect signal matches the N-bit verification word can be determined by analysis.

The detailed methodology of this first embodiment begins by embedding the N-bit verification word into the original signal: each of the m bit values is multiplied by its corresponding independent embedded code signal, and the results are accumulated in the composite signal. The fully summed composite signal is then attenuated to the acceptable perceived noise amplitude, and the resulting composite signal is added to the original to become the distributable signal.
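
As a concrete illustration, the following Python/NumPy sketch implements the embedding step just described. It is illustrative only: the function names and the Gaussian code signals are assumptions, not mandated by the patent, and a single target rms amplitude stands in for the acceptable perceived noise level.

```python
import numpy as np

rng = np.random.default_rng(42)

def embed(original, id_bits, accept_rms=1.0):
    """Embed an N-bit verification word into `original` (batch encoding).

    id_bits    : the N-bit verification word as a sequence of 0/1 values
                 (beginning with the 0101 calibration pattern)
    accept_rms : target rms amplitude of the added composite, in DN
                 (stands in for the acceptable perceived noise level)
    """
    n = len(id_bits)
    # N independent pseudo-random embedded code signals, one per bit
    codes = rng.standard_normal((n,) + original.shape)
    # sum the code signals whose corresponding bit value is 1
    composite = np.tensordot(np.asarray(id_bits, float), codes, axes=1)
    # attenuate the summed composite to the acceptable noise amplitude
    composite *= accept_rms / composite.std()
    return original + composite, codes

original = rng.integers(0, 4096, size=(64, 64)).astype(float)
distributable, codes = embed(original, [0, 1, 0, 1, 1, 0, 1, 1])
```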

The original signal, the N-bit verification word, and all N independent embedded code signals are then stored away in a secure place. A suspect signal is then found. This signal may have undergone multiple copies, compressions and decompressions, resampling onto differently spaced digital signals, transfer from digital to analog and back to digital media, or some combination of these. If the signal still appears similar to the original, i.e., if its innate nature has not been destroyed by all of these transformations and noise additions, then, depending on the signal-to-noise properties of the embedded signal, the verification process should function to some objective degree of statistical certainty. The extent of corruption of the suspect signal and the original acceptable perceived noise level are the two key parameters determining the confidence level required for verification.

The verification process on the suspect signal begins by resampling and registering the suspect signal to the digital format and extent of the original signal. Thus, if an image has been reduced by a factor of two, it needs to be digitally enlarged by the same factor. Likewise, if a piece of music has been "cut out" but still has the same sampling rate as the original, it is necessary to register this cut-out piece to the original: typically a local digital correlation of the two signals (a common digital operation) is performed to find the delay value, and this found delay value is used to register the cut piece against the original portion.

Once the suspect signal has been registered to the original at the matching sampling interval, the signal levels of the suspect signal should be matched to the original in an rms sense. This can be done via a search over three parameters, offset, amplification, and gamma, optimized by using the minimum mean-squared error between the two signals as a function of those parameters. The suspect signal thus normalized and registered can now be referred to as the normalized and registered suspect signal, or simply the normalized suspect signal for convenience.

The original signal is then subtracted from the normalized suspect signal of the newly matched pair to produce a difference signal. The difference signal is then correlated with each of the N independent embedded code signals and the peak correlation value recorded. The code of the first four bits ("0101") is used as a calibrator, both for the average values of a 0 and a 1 and, if a better signal-to-noise ratio is desired, for a further registration of the two signals (i.e., 0101 would indicate an optimal registration of the two signals and also the probable presence of the N-bit verification signal).

The resulting peak correlation values form a noisy series of floating-point numbers that can be converted into 0s and 1s according to their proximity to the mean 0 and 1 values found via the 0101 calibration array. If the suspect signal has indeed been derived from the original, the verification number resulting from the above process will match the original N-bit verification word, allowing for a predictable level of "bit error" statistics. Signal-to-noise considerations determine whether there will be some level of bit errors in the verification process, leading to an X% probability of verification, where X is desired to be 99.9% or more. If the suspect signal is not in fact a copy of the original, an essentially random sequence of 0s and 1s will be produced, along with a distinct lack of separation of the resulting values: when the resulting values are plotted in a histogram, the presence of the N-bit verification signal exhibits strong bi-level characteristics, whereas the absence of the code, or the presence of a different code from a different original, exhibits a random normal-distribution form. This histogram separation alone may suffice for verification, but objectively reproducing the exact binary sequence provides even stronger evidence.
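
In code, the decoding side of the same sketch (again illustrative, continuing the `embed` example above) subtracts the original, correlates the difference against each stored code signal, and thresholds around the midpoint given by the leading 0101 calibration bits:

```python
import numpy as np

def decode(suspect, original, codes):
    """Recover the N-bit word from a registered, normalized suspect signal."""
    diff = suspect - original
    corr = np.array([np.sum(diff * c) for c in codes])  # peak correlations
    zero_level = (corr[0] + corr[2]) / 2  # calibration bits 1 and 3 are "0"
    one_level = (corr[1] + corr[3]) / 2   # calibration bits 2 and 4 are "1"
    midpoint = (zero_level + one_level) / 2
    return [1 if c > midpoint else 0 for c in corr]

# decode(distributable, original, codes) -> [0, 1, 0, 1, 1, 0, 1, 1]
```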

Suppose we have obtained a valuable picture of two heads of state at an exemplary cocktail party, a picture that is sure to command some reasonable fee in the commercial market. We desire to sell this picture and to ensure that it is not used in an unauthorized or uncompensated manner. This and the following steps are summarized in FIG. 2.

Assume the picture is converted into a positive color print. We first scan this into digitized form with a conventional high-quality black-and-white scanner having a typical photometric spectral response curve. (A better ultimate signal-to-noise ratio could be obtained by scanning in each of the three primary colors of the color image, but this nuance is not central to describing the basic process.)

Now assume the scanned image is a 4000 by 4000 pixel monochrome digital image with a gray-scale accuracy defined by 12-bit gray values, i.e., 4096 allowed levels. We will call this the "original digital image," corresponding to the "original signal" in the definitions above.

During the scanning process we have arbitrarily set absolute black to correspond to the digital value "30." We estimate that, in addition to the basic 2 DN rms noise existing in the original digital image, there is a theoretical noise contribution equal to the square root of the brightness value of a given pixel (known in the industry as "shot noise"). In formula, we have:

<RMS Noise_n,m> = sqrt(4 + (V_n,m - 30)) (1)

Here, n and m are simple indexing values ranging from 0 to 3999 over the rows and columns of the image; sqrt is the square root; and V is the DN of a given indexed pixel in the original digital image. The <> brackets around RMS noise simply mean that this is an expected average value, it being understood that each and every pixel will individually exhibit a random error. Thus, for a pixel whose brightness value above the black offset is 1200 (i.e., V = 1230), we find that its expected rms noise value is sqrt(1204) = 34.70, which is quite close to the square root of 1200, 34.64.

We further realize that the square root of the innate brightness value of a pixel is not exactly what the eye perceives as a minimally objectionable noise level, so we propose the formula:

<RMS Addable Noise_n,m> = X * sqrt(4 + (V_n,m - 30)^Y) (2)

Here, X and Y are empirical parameters that we will adjust, and "addable" noise refers to our acceptable perceived noise level from the definitions above. We now intend to experiment with exactly what values of X and Y to select, but we will do so at the same time as we carry out the next steps in the process.
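
Stated as code (same constants as the text: a black offset of 30 DN and the 2 DN base noise floor, hence the 4 under the root; the X and Y defaults are the values settled on later in this example):

```python
import numpy as np

BLACK = 30.0  # digital value assigned to absolute black during scanning

def rms_noise(v):
    """Equation (1): expected rms noise of a pixel with digital number v."""
    return np.sqrt(4.0 + (v - BLACK))

def rms_addable_noise(v, x=0.025, y=0.6):
    """Equation (2): acceptable rms noise that may be added to that pixel."""
    return x * np.sqrt(4.0 + (v - BLACK) ** y)

print(rms_noise(1230.0))           # 34.70, the worked example above
print(rms_addable_noise(1230.0))   # ~0.22 DN, the far smaller addable level
```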

The next step in our process is to choose N for our N-bit verification word. We decide that a 16-bit main verification value, with its 65536 possible values, is large enough to verify that an image is ours, and that we will directly sell no more than 128 copies of the image we wish to track, needing 7 bits for a version number; we add an 8th bit for odd/even parity over the first 7 bits (i.e., an error check on the first 7 bits). The total bits required are thus: 4 bits for the 0101 calibration sequence, 16 bits for the main verification, 8 bits for the version, and another 4 bits used as a further error check on the first 28 bits, giving N = 32. The final 4 bits can be chosen using one of many industry-standard error-checking methods.

We now randomly determine the 16-bit main verification number, obtaining, as an example, 1101 0001 1001 1110. Our first sold version of the original has all 0s as its version identifier, and the error-checking bits fall out as they may. We now have our unique 32-bit verification word to embed in the original digital image.

To do this, we generate 32 independent random 4000 by 4000 coding images, one for each bit of our 32-bit verification word. There are numerous ways to generate these random images; apparently the simplest is to turn up the gain on the same scanner used to scan the original photograph, place pure black as the input, and scan 32 times. The only drawbacks of this technique are that it requires a large amount of memory and that "fixed pattern" noise becomes part of each independent "noise image." Fixed-pattern noise, however, can be removed by conventional "dark frame" subtraction techniques. Assume that, rather than the 2 DN rms noise found at the normal gain setting, we turn up the gain so that the average value of absolute black sits at the digital number "100," with an rms noise of 10 DN about each and every pixel's average value.

We then apply a mid-band spatial frequency bandpass filter (spatial convolution) to each independent random image, essentially removing the very high and the very low spatial frequencies from them. We remove the very low frequencies because most simple real-world error sources, such as geometric distortions, scanner smudges, and misregistrations, appear at the lower frequencies; to avoid being corrupted by these forms, we concentrate our verification signal at the higher spatial frequencies. Likewise, we remove the very high frequencies because multiple-generation copies and compression-decompression transformations of a given image tend to destroy the higher frequencies most; we avoid placing too much verification information where it would attenuate the most. Our new filtered independent noise images are thus dominated by mid spatial frequencies. On a practical note, since we are using 12-bit values in our scanner, have effectively removed the DC value, and our new rms noise is slightly less than 10 digital numbers, it is useful to compress the resulting random images to 6-bit values ranging from -32 through 0 to 31.
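
A minimal sketch of this mid-band filtering, assuming a simple FFT band-pass (the cutoff radii are arbitrary illustrative choices, in cycles per image):

```python
import numpy as np

def bandpass(img, lo=8, hi=120):
    """Keep only mid spatial frequencies of a random code image."""
    f = np.fft.fftshift(np.fft.fft2(img))
    ny, nx = img.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r = np.hypot(yy - ny / 2, xx - nx / 2)  # radial spatial frequency
    f[(r < lo) | (r > hi)] = 0              # drop the lows and the highs
    return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

rng = np.random.default_rng(1)
raw = rng.normal(0, 10, size=(256, 256))  # one raw 10 DN rms noise image
code_image = bandpass(raw)                # its mid-frequency version
```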

Next we add together all of the random images whose corresponding bit value in our 32-bit verification word is a "1," accumulating the result in a 16-bit signed integer image. This is the unattenuated, unscaled version of the composite embedded signal.

Next we visually experiment with adding the composite embedded signal to the original digital image, varying the X and Y parameters of Equation 2. In formula, we iterate on maximizing X while finding an appropriate Y in:

V_dist;n,m = V_orig;n,m + V_comp;n,m * X * sqrt(4 + V_orig;n,m ^ Y) (3)

where dist refers to the candidate distributable image, i.e., we visually iterate to find the X and Y that give us an acceptable image; orig refers to the pixel value of the original image; and comp refers to the pixel value of the composite image. The n's and m's still index the rows and columns of the image, indicating that this operation is performed on all 4000 by 4000 pixels. The symbol V is the DN of a given pixel of a given image.
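
Equation (3) translates directly into code (names follow the text; the function itself is merely an illustration):

```python
import numpy as np

def distributable_image(v_orig, v_comp, x, y):
    """Equation (3): V_dist = V_orig + V_comp * X * sqrt(4 + V_orig ** Y)."""
    return v_orig + v_comp * x * np.sqrt(4.0 + v_orig ** y)

# e.g., with the values adopted below:
# v_dist = distributable_image(v_orig, v_comp, x=0.025, y=0.6)
```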

As an arbitrary assumption, suppose that in our visual experimentation, comparing the original image with the candidate distributable image, we find the values X = 0.025 and Y = 0.6 to be acceptable; that is, the distributable image with its "extra noise" is aesthetically close enough to the original. Note that since our independent random images have a random rms noise value of about 10 DN, and about 16 of them are added together, the composite noise rises to about 40 DN; the X multiplier of 0.025 then brings the added rms noise back to around 1 DN, or half the amplitude of the innate noise in our original. This is roughly a 1 dB increase in noise at the dark pixel values, with correspondingly higher values in brighter pixels as modified by the Y value of 0.6.

Thus, with these two values of X and Y, we have now constructed our first versioned distributable copy of the original. Other versions are created by simply generating a new composite signal and, if deemed necessary, slightly changing X. We now lock away the original digital image, the 32-bit verification word for each version, and the 32 independent random 4-bit images, and await our first case of suspected piracy of our original. Storage-wise, this amounts to about 14 megabytes for the original image and about 32 × 0.5 bytes × 16,000,000 = ~256 megabytes for the random verification embedded images. This is quite acceptable for a single valuable image. Some storage economy can be gained by simple lossless compression.

Discovery of suspected piracy of our image

We sold our image, and several months later we locate something that appears to have been cut out of it, plagiarized, and placed into a different, stylized background scene. Assume this new "suspect" image has been printed in 100,000 copies of a given magazine issue. We now set about determining whether a portion of our original image was indeed used in an unauthorized manner. FIG. 3 summarizes the details.

The first step is to obtain an issue of the magazine, cut out the page with the image on it, and then, carefully but not too carefully, cut the two figures out of the background image using ordinary scissors. If possible, we cut out only one connected piece rather than cutting the two figures separately. We paste this onto a black backing, which makes visual inspection easy.

We now retrieve from our secure location the original digital image, the 32-bit verification word, and the 32 independent embedded images. We place the original digital image on our computer screen using standard image manipulation software and roughly cut out, along the same borders as the masked area of the suspect image, the corresponding region of the original, masking the original image at the same time. The word "roughly" is used because an exact cutting is not necessary; it simply helps the verification statistics come out reasonably well bounded.

Next, we rescale the masked suspect image to roughly match the size of our masked original digital image; that is, we digitally scale the suspect image and roughly overlay it on the original image. Having done this rough registration, we then hand the two images to an automated scaling and registration program. The program searches over three parameters, x position, y position, and spatial scale, with the figure of merit being the mean-squared error between the two images as a function of the scale variable and the x and y offsets. This is a completely standard image-processing methodology. Typically it is done using generally smooth interpolation techniques and to sub-pixel accuracy. The search method can be one of many; the simplex method is a typical one.

After the optimal scaling and x-y position variables are found, another search is performed, this time optimizing the black level, brightness gain, and gamma of the two images. The figure of merit used is again the mean-squared error, and again the simplex or other search methodologies can be used to optimize the three variables. After these three variables are optimized, we apply their corrections to the suspect image, matching it exactly to the original digital image's pixel spacing and masking. We can now call this the reference mask.
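
A rough sketch of this automated registration, assuming a coarse grid search over scale and x-y offset that minimizes the mean-squared error (a real implementation would use sub-pixel interpolation and a simplex-style optimizer, as the text notes; the gain/gamma search proceeds analogously):

```python
import numpy as np
from scipy.ndimage import zoom, shift

def register(suspect, original, scales, offsets):
    """Return the scaled/shifted suspect that best matches the original."""
    best, best_err = None, np.inf
    h, w = original.shape
    for s in scales:
        scaled = zoom(suspect, s, order=1)[:h, :w]  # rescale, crop to size
        if scaled.shape != original.shape:
            continue
        for dy in offsets:
            for dx in offsets:
                cand = shift(scaled, (dy, dx), order=1)
                err = np.mean((cand - original) ** 2)  # figure of merit
                if err < best_err:
                    best, best_err = cand, err
    return best
```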

The next step is to subtract the original digital image from the newly normalized suspect image, only within the reference mask region. This new image is called the difference image.

Next, local cross-correlations between the masked difference image and each of the 32 independent random embedded images are performed over all 32 images. "Local" refers to the idea that one need only begin correlating over an offset region of ±1 pixel about the nominal registration point between the two images found during the search procedures above. The correlation peak should be very close to the nominal registration point of 0,0 offset, and we can add the 3×3 correlation values together to give one grand correlation value for each of the 32 independent bits of our 32-bit verification word.

After doing this for all 32 bit positions and all of their corresponding random images, we have a quasi-floating-point sequence of 32 values. The first four values represent our calibration signal 0101. We take the mean of the first and third floating-point values and call this floating-point value "0," and we take the mean of the second and fourth values and call this floating-point value "1." We then proceed through all 28 remaining values and simply assign either a "0" or a "1" based on which mean value each is closer to. Stated simply, if the suspect image is indeed a copy of our original, the embedded 32-bit resulting code should match that of our records; if it is not a copy, we should obtain general randomness. There is also the third possibility, that it is a copy but the verification number does not match, and the fourth possibility, that it is not a copy yet a match occurs. The third can arise when the "suspect image" is an exceedingly poor copy of the original; for the fourth, since we are using a 32-bit verification number, there is only a vanishing probability of a coincidental match. If we are truly worried about the fourth possibility, we can simply have a second independent site perform the tests on a different issue of the same magazine. Finally, checking the error-check bits against what the recovered values imply is the ultimate over-check on the whole process. In situations where signal-to-noise is a problem, these error-check bits can be relinquished without too much harm.
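
The "local correlation" can be sketched as follows: correlate the masked difference image against a code image at each of the nine offsets within ±1 pixel and sum the 3×3 values into one grand figure per bit (arrays are assumed already registered, with zeros outside the mask):

```python
import numpy as np

def local_correlation(diff, code, radius=1):
    """Sum of correlations over all offsets within +/- radius pixels."""
    total = 0.0
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            total += np.sum(diff * np.roll(code, (dy, dx), axis=(0, 1)))
    return total

# grand_values = [local_correlation(diff, c) for c in codes]  # 32 values
```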

Benefits

Now that a complete description of the first embodiment has been given via a detailed example, it is appropriate to point out the rationale of some of the processing steps and their benefits.

The net benefit of the foregoing process is that obtaining a verification number is made completely independent of the means and methods of preparing the difference image. In other words, the methods of preparing the difference image, such as cutting, registering, scaling, and so forth, cannot increase the odds of finding a verification number when none exists; they only help the signal-to-noise ratio of the verification process when a true verification number is present. Methods of preparing images for verification can differ from one another, offering the possibility of multiple independent methodologies for establishing a match.

The ability to obtain a match even on sub-sets of the original signal or image is a key point in today's information-rich world. Cutting and pasting of both images and audio clips is becoming more common, and such an embodiment allows a copy to be detected even when the original work has been used in such an illicit way. Finally, the signal-to-noise ratio of the match degrades only when the copy of the work itself has been significantly altered, by either noise or significant distortion; both of these also affect the commercial value of the copy, so that attempts to thwart the system can be made only at a huge cost in commercial value.

An early conception of the present invention was the case in which only a single "snowy" image or random signal was added to the original image, i.e., the case N = 1. "Decoding" that signal would involve a subsequent mathematical analysis using (generally statistical) algorithms to make a judgment on the presence or absence of the signal. The reason this approach was abandoned in favor of the embodiment described above is that there is an inherent gray area in the certainty of detecting the presence or absence of a single signal. By moving to a multitude of bit planes, i.e., N > 1, combined with a simple, pre-defined algorithm prescribing how the choice between a "0" and a "1" is made, the present invention transforms the decision away from discretionary statistical analysis and into the realm of estimating random binary events such as coin flips. This is seen as a powerful feature bearing on the intuitive acceptance of the invention in both the courtroom and the marketplace. An analogy summarizing the inventor's thinking on this overall issue is the following: finding a single verification number is akin to calling one coin flip and requiring a hired expert to make the call, whereas the N > 1 embodiment of the invention rests on the clear intuitive principle of calling a coin flip correctly N times in a row. The single-signal problem is greatly exacerbated, i.e., the risk of "falsely" detecting the presence of one signal grows, as the image or audio clip under examination gets smaller.

Another reason the N > 1 embodiment is preferred over N = 1 is that in the N = 1 case, the manner in which a suspect image is prepared and manipulated has a profound effect on the likelihood of obtaining a positive verification. Thus, the manner in which an expert makes the verification decision becomes an integral part of that decision. The existence of numerous mathematical and statistical approaches for making this decision leaves open the possibility that some tests would yield positive verdicts while others yield negative ones, inviting further expert dispute about their relative merits. The preferred N > 1 embodiment of the present invention avoids this additional gray area by providing a method in which no amount of pre-processing of a suspect signal, other than pre-processing that illicitly uses knowledge of the private code signals, can increase the likelihood of "calling the coin flip N times in a row."

The fullest expression of the system will come when it becomes an industry standard and multiple independent groups set up their own means, or "in-house brands," of applying and deciphering the embedded verification numbers. Verification by multiple independent groups further enhances the ultimate objectivity of the method, thereby increasing its appeal as an industry standard.

Use of true polarity in the generation of composite embedded code signals

The discussion above used the 0 and 1 formalism of binary technology to accomplish its ends. In particular, the 0s and 1s of the N-bit verification word were directly multiplied by their corresponding independent embedded code signals to form the composite embedded code signal (step 8, FIG. 2). This approach certainly has conceptual simplicity, but the multiplication of an embedded code signal by 0, together with the storage of that embedded code, carries a kind of inefficiency.

While maintaining the 0-and-1 formalism of the N-bit verification word, it is preferable to have the 0s of the word correspond to subtraction of their corresponding embedded code signals. Thus, in step 8 of FIG. 2, rather than only "adding" the independent embedded code signals corresponding to "1s" in the N-bit verification word, we also "subtract" the independent embedded code signals corresponding to "0s."

At first glance, this appears to add more apparent noise to the final composite signal. However, it also increases the energy-wise separation between the 0s and the 1s, so the "gain" used in step 10 of FIG. 2 can be correspondingly lower.

We can call this improvement the use of true polarity. The main advantage of this improvement can largely be summarized as "informational efficiency."
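
In the vocabulary of the earlier sketch, true polarity amounts to mapping each bit to ±1 before summation (an illustrative fragment, not patent text):

```python
import numpy as np

def composite_true_polarity(id_bits, codes):
    """Add the code signals for 1-bits and subtract them for 0-bits."""
    signs = 2.0 * np.asarray(id_bits, float) - 1.0  # 0 -> -1, 1 -> +1
    return np.tensordot(signs, codes, axes=1)
```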

"Perceptual orthogonality" of independent embedded code signals
The discussion above contemplated the use of generally random noise-like signals as the independent embedded code signals. This is perhaps the simplest form of signal to generate. However, there is a form of informational optimization that can be applied to the set of independent embedded signals, which the applicant describes under the rubric "perceptual orthogonality." The term is loosely based on the mathematical concept of vector orthogonality, with the added requirement that the signal energy of the verification information be maximized while remaining below some perceptibility threshold. Put another way, the embedded code signals do not necessarily need to be truly random.

Use and improvement of the first embodiment in the area of emulsion-based photography

The discussion above outlined techniques applicable to photographic materials. The following section goes into more detail on this area and discloses certain improvements that lend themselves to a wide range of applications.

The first area to discuss involves the pre-application or pre-exposure of a serial number onto traditional photographic products such as negative film, print paper, transparencies, and the like. In general, this is a method of embedding a priori unique serial numbers (and, by implication, ownership and tracking information) into photographic material. The serial numbers themselves are a permanent, if subtle, part of the normally exposed picture, as opposed to being relegated to the margins or stamped on the back of a printed photograph. The "serial number" referred to here is generally synonymous with the N-bit verification word; we merely use the more common industrial terminology here.

In step 11 of FIG. 2, the disclosure called for the storage of the "original [image]" along with the code images. Then, in step 9 of FIG. 3, the original image is subtracted from the suspect image, thereby leaving the possible verification codes plus whatever noise and corruption have accumulated. Thus, the previous disclosure made the implicit assumption that an original exists without the composite embedded signal.

Now, in the case of selling print paper and other duplication products, this will still be the case: an "original" without the embedded codes will indeed exist, and the basic methodology of the first embodiment can be employed. The original film serves perfectly well as an "unencoded original."

However, when pre-exposed film is used, the composite embedded signal pre-exists on the original film, and thus an "original" separate from the pre-embedded signal never exists. It is this latter case, therefore, that we examine more closely (the former case adhering to the methodology outlined above), observing how best to employ the principles discussed above.

The clearest point of departure for the case of pre-numbered negative film, i.e., negative film pre-exposed with a very subtle and unique composite embedded signal on every frame, appears at step 9 of FIG. 3. There are certainly other differences as well, but they are primarily logistical in nature, such as how and when to embed the signals on the film, how to store the code numbers and serial numbers, and so forth. Clearly, pre-exposing film entails a significant change to the general mass-production process of film manufacture and packaging.

FIG. 4 is a schematic of one potential post-hoc mechanism for pre-exposing film. "Post-hoc" refers to processing after all of the common manufacturing steps have already taken place; ultimately, economies of scale would demand that this pre-exposure step be integrated directly into the film manufacturing chain. Depicted in FIG. 4 is what is commonly known as a film writing system. The computer 106 displays the composite signal, generated in step 8 of FIG. 2, on its phosphor screen. A given frame of film is then exposed by imaging this phosphor screen, where the exposure level is generally very faint, i.e., generally imperceptible. Clearly, the marketplace will set its own demands on how faint this should be, that is, the level of added "graininess" that practitioners will tolerate. Each frame of film is sequentially exposed, while in general the composite image displayed on the CRT 102 is changed for each and every frame, thereby giving each frame of film a different serial number. The transfer lens 104 places the film frame and the CRT face at conjugate focal planes.

Returning to the application of the principles of the foregoing embodiment to the case of pre-exposed negative film: at step 9 of FIG. 3, subtracting the "original" along with its embedded code would clearly "erase" the code as well, since the code is an integral part of the original. Fortunately, remedies exist and verification can still be performed. However, it becomes a challenge for the engineer refining this embodiment to bring the signal-to-noise ratio of the verification process in the pre-exposed negative case close to the signal-to-noise ratio of the case where an unencoded original exists.

A succinct definition of the problem is in order at this point: given a suspect picture (signal), find the embedded verification code if a code exists at all. The problem reduces to one of finding the amplitude of each and every independent embedded code signal within the suspect picture, not only amid the noise and corruption discussed above, but now also amid the coupling between the captured image and the codes. "Coupling" here refers to the idea that the captured image "randomly biases" the cross-correlation.

Thus, keeping this additional item of signal coupling in mind, the verification process now estimates the signal amplitude of each and every independent embedded code signal (as opposed to taking the correlation result of step 12, FIG. 3). If our verification code is present in the suspect picture, the amplitudes thus found will split into a polarity, positive amplitudes being assigned a "1" and negative amplitudes a "0": our unique verification code manifests itself. If, on the other hand, no such verification code exists, or some other code is present, then a random Gaussian-like distribution of amplitudes, a random hash of values, is found.

It remains to provide a few more details on how the amplitudes of the independent embedded codes are found. Again, fortunately, this exact problem has been addressed in other technological applications. Moreover, were one to toss this problem, along with a little food, into a room crowded with mathematicians and statisticians, surely a half-dozen optimized methodologies would emerge after some reasonable period of time. It is a well-defined problem.

One particular example solution comes from the field of astronomical imaging. There, the mature prior art subtracts a "thermal noise frame" from a given CCD image of an object. Often, however, it is not precisely known what scaling factor to use in subtracting the thermal frame, and a search for the correct scaling factor is performed. This is precisely the task of this step of the present embodiment.

General practice is simply to run a common search algorithm over the scaling factor, where the chosen scaling factor forms a new image according to:

New image = acquired image - scale factor × thermal image (4)

A fast Fourier transform routine is applied to the new image, and the scale factor that minimizes the integrated high-frequency content of the new image is ultimately found. This general type of search operation, minimizing some particular quantity, is exceedingly common. The scale factor thus found is the sought-for "amplitude." A refinement that has been contemplated but not yet implemented is to remove from the estimated scale factor the coupling between the higher derivatives of the acquired image and the embedded codes; that is, the coupling described above gives rise to a specific bias effect, which should eventually be identified and removed through both theoretical and empirical experimentation.
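
A sketch of that amplitude search, assuming a simple scan over candidate scale factors and an FFT-based measure of high-frequency energy per Equation (4):

```python
import numpy as np

def estimate_amplitude(acquired, code, scales):
    """Scale factor minimizing high-frequency energy of (acquired - s*code)."""
    ny, nx = acquired.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    r = np.hypot(yy - ny / 2, xx - nx / 2)
    high = r > min(ny, nx) / 4  # an arbitrary "high frequency" region
    best_s, best_e = 0.0, np.inf
    for s in scales:
        f = np.fft.fftshift(np.fft.fft2(acquired - s * code))
        energy = np.sum(np.abs(f[high]) ** 2)
        if energy < best_e:
            best_s, best_e = s, energy
    return best_s
```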

Use and improvement in the detection of signal or image alteration

Apart from the basic need to verify a signal or image as a whole, there is also the somewhat ubiquitous need to detect possible alterations to a signal or image. The following section describes how the foregoing embodiment, with certain modifications and improvements, can be used as a powerful tool in this area.

To summarize first, assume that we have a given signal or image that has been positively verified using the basic methods outlined above; that is, we know its N-bit verification word, its independent embedded code signals, and its composite embedded code. We can then, quite simply, create a spatial map of the composite code's amplitude within our given signal or image. Furthermore, we can divide this amplitude map by the known composite code's spatial amplitude, giving a normalized map, i.e., a map that fluctuates about some global mean value. By a simple examination of this map, we can detect, even visually, any regions whose normalized amplitude falls clearly below a statistically set threshold based on typical noise and corruption (error) alone.

The implementation details of forming the amplitude map present a variety of choices. One is to perform the same procedure used to determine the signal amplitude described above, only now stepped and repeated across the signal/image, multiplying each given area by a weight function centered about the region under investigation.
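
One illustrative reading of this, as code: estimate the composite's amplitude under a Gaussian weight centered on each region of interest, normalizing by the composite's own local energy so that intact regions map to roughly 1 and altered regions show as dips (the helper below is hypothetical, not from the patent):

```python
import numpy as np

def local_amplitude_map(diff, composite, centers, sigma=16.0):
    """Weighted least-squares amplitude of `composite` within `diff`."""
    ny, nx = diff.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    amps = {}
    for cy, cx in centers:
        w = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
        amps[(cy, cx)] = np.sum(w * diff * composite) / np.sum(w * composite ** 2)
    return amps  # ~1.0 where the code is intact; dips mark likely alteration
```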

Universal codes versus custom codes

This disclosure has so far outlined how each and every source signal has its own unique set of independent embedded code signals. This requires the storage of a significant amount of additional code information beyond the original itself, and many applications would favor a more economical form.

One such economizing approach is to have a given set of independent embedded code signals be common to a batch of source works. For example, all 1000 of our images could utilize the same basic set of independent embedded code signals. The storage required for these codes is then only a small fraction of the overall storage required for the source works.

Furthermore, some applications can utilize a universal set of independent embedded code signals, i.e., codes that remain the same for all instances of distributed material. This type of requirement arises in systems that wish to hide the N-bit verification word itself, yet have unified devices capable of reading that word; such a scheme can be used in systems that make a go/no-go decision at the point of reading. A potential drawback of this setup is that the universal codes are more prone to be sleuthed out or stolen; they are therefore not as secure as the apparatus and methodology disclosed above. Perhaps this is simply the difference between "high security" and "air-tight security," a distinction of minor importance for the vast majority of potential applications.

The term "signal" used in printing on paper, documents, plastic processing identification cards, and other materials that can carry global embed codes is often narrowly defined to indicate digital data information, audio signals, images, etc. Used in. A broader interpretation of “signal” and more generally intended includes any type of change in any material. Thus, the microtopology of a typical paper piece is a signal (eg, height as a function of xy coordinates). The refractive properties of a flat piece of plastic become a signal (as a function of space). The point is that photographic emulsions, audio signals, and digitized information are not the only forms of signals that can use the principles of the present invention.

As a case in point, a machine very much resembling a Braille printing machine can be designed to imprint the unique "noise-like" verification codes outlined above, applied with much lighter pressure than is used in forming Braille dots, to the point where the patterns are not noticed by an ordinary user of the document. But by following the steps of this specification and applying them via a finely tuned verification mechanism, the unique verification codes can be placed on any paper, whether for daily stationery purposes, important documents, legal tender, or other secured material.

The verification marks in such an embodiment would typically be read by optically scanning the document from various angles. This is an inexpensive method for deducing the micro-topology of the paper surface. Certainly other forms of reading the paper topology are possible.

In the case of plastic-encapsulated materials such as identification cards, e.g., driver's licenses, a similar Braille-like impression machine can be used to apply the unique verification codes. Alternatively, a thin layer of photosensitive material can be embedded inside the plastic and "exposed."

It is clear that wherever a material exists that can be modulated by a "noise-like" signal, that material is an appropriate carrier for a unique verification code and for utilizing the principles of the present invention. All that remains is the matter of economically applying the verification information and keeping the signal level below an acceptability threshold that each application defines for itself.

Real-time encoder

While the first set of embodiments most commonly uses a standard microprocessor or computer to perform the image or signal encoding, it is possible to use a custom encoding device that may be faster than a typical von Neumann processor. Such a system can be utilized with all manner of serial data streams.

Music and videotape recordings are examples of serial data streams, and data streams that are often pirated. It would assist enforcement efforts if authorized recordings were encoded with verification data, so that pirated copies could be traced back to the originals from which they were made.

Copyright infringement is just one motivation for the present invention. Another is authentication: it is often important to confirm that a given set of data is really what it purports to be (often years after its generation).

To address these and other needs, the system 200 of FIG. 5 can be used. System 200 can be thought of as a verification-encoding black box 202. System 200 receives an input signal (sometimes termed the "master" or "unencoded" signal) and a code word, and produces (generally in real time) a verification-encoded output signal. (Typically, the system also provides key data for use in later decoding.)

  The contents of the “black box” 202 can take various forms. A typical black box system is shown in FIG. 6, which includes a look-up table 204, a digital noise source 206, first and second scalers 208 and 210, an adder / subtractor 212, a memory 214, a register 216.

The input signal (which in the illustrated embodiment is an 8-to-20-bit data signal supplied at a rate of one million samples per second, but which in other embodiments may be an analog signal provided with appropriate A/D and D/A converters) is applied from the input terminal 218 to the address input 220 of the look-up table 204. For each input sample (i.e., each look-up table address), the table outputs a corresponding 8-bit digital word. This output word is used as a scaling factor applied to the first input of the first scaler 208.

The first scaler 208 has a second input, to which an 8-bit digital noise signal from the noise source 206 is applied. (In the illustrated embodiment, the noise source 206 comprises an analog noise source 222 and an analog-to-digital converter 224, although again other implementations can be used.) The noise source in the illustrated embodiment has a zero mean output value, with a full width at half maximum (FWHM) of 50 to 100 digital numbers (e.g., from -75 to +75).

The first scaler 208 multiplies the two 8-bit words at its inputs (the scale factor and the noise) to produce one 16-bit output word for each sample of the system input signal. Since the noise signal has a zero mean value, the output of the first scaler likewise has a zero mean value.

The output of the first scaler 208 is applied to the input of the second scaler 210. The second scaler serves a global scaling function, establishing the absolute magnitude of the verification signal that will ultimately be embedded in the input data signal. The scaling factor is set by a scale control 226 (which can take many forms, from a simple rheostat to a control graphically implemented in a graphical user interface), permitting this factor to be changed in accordance with the requirements of different applications. The second scaler 210 provides the scaled noise signal on its output line 228. Each sample of this scaled noise signal is stored sequentially in the memory 214.

(In the illustrated embodiment, the output from the first scaler 208 may range between -1500 and +1500 (decimal), while the output from the second scaler 210 is in the small single digits, such as between -2 and +2.)

Register 216 stores a multi-bit verification code word. In the illustrated embodiment this code word consists of 8 bits, although larger code words (up to several hundred bits) are commonly used. These bits are referenced, one at a time, to control how the input signal is modulated with the scaled noise signal.

In particular, a pointer 230 is cycled sequentially through the bit positions of the code word in register 216, providing a control bit of "0" or "1" to the control input 232 of the adder/subtractor 212. If, for a given input signal sample, the control bit is a "1," the scaled noise signal sample on line 228 is added to the input signal sample. If the control bit is a "0," the scaled noise signal sample is subtracted from the input signal sample. The output of the adder/subtractor 212 provides the black box's output signal.

  The addition or subtraction of the scaled noise signal in accordance with the bits of the codeword effects only a very slight modulation of the input signal. However, knowledge of the contents of memory 214 allows a user to later decode the encoding and determine the code number used in the original encoding process. (In fact, the use of memory 214 is optional, as will be described below.)

  It will be recognized that the encoded signal can be distributed in well-known ways, including converted to printed image form, stored on magnetic media (floppy disk, analog or DAT tape, etc.), on CD-ROM, etc.

Various decoding techniques can be used to determine the verification code with which a suspect signal has been encoded. Two are discussed below. The first is less preferred than the latter for most applications, but is discussed here so that the reader has a more complete context for understanding the present invention.

  More particularly, the first decoding method is a difference method: corresponding samples of the original signal are subtracted from the suspect signal to obtain difference samples, which are then examined for deterministic encoding indicia (i.e., the stored noise data). This approach can therefore be referred to as a “sample-based, deterministic” decoding technique.

  The second decoding technique uses neither the original signal nor an examination of individual samples for predetermined noise characteristics. Rather, the statistics of the suspect signal (or a portion thereof) are considered in the aggregate and analyzed to discern the presence of a verification signal that permeates the entire signal. The reference to permeation means that the entire verification code can be discerned from a small fragment of the suspect signal. This latter approach can therefore be referred to as a “holographic, statistical” decoding technique.

  Both of these methods begin by registering the suspect signal to match the original. This entails scaling (e.g., in amplitude, duration, color balance, etc.) and sampling (or resampling) to restore the original sampling rate. As in the embodiments described earlier, there are various well-understood techniques by which the operations associated with this registration function can be performed.

  As mentioned, the first decoding approach proceeds by subtracting the original signal from the registered suspect signal, leaving a difference signal. The polarity of successive difference signal samples can then be compared with the polarities of the corresponding stored noise signal samples to determine the verification code. That is, if the polarity of the first difference signal sample matches that of the first noise signal sample, the first bit of the verification code is set to “1”. (In such a case, the polarities of the 9th, 17th, 25th, etc. difference samples should likewise all match those of the corresponding noise samples.) If the polarity of the first difference signal sample is opposite that of the corresponding noise signal sample, the first bit of the verification code is set to “0”.

  By performing the above analysis on eight consecutive samples of the difference signal, the sequence of bits comprising the original codeword can be determined. If, as in the preferred embodiment, the pointer 230 stepped through the codeword one bit at a time, beginning with the first bit, during encoding, then the first eight samples of the difference signal can be analyzed to unambiguously determine the value of the 8-bit codeword.
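  A minimal sketch of this sample-based deterministic decode, assuming the suspect signal has already been registered and using hypothetical names, might look as follows; a majority vote across all samples sharing a bit position adds some tolerance to noise:

```python
def decode_deterministic(suspect, original, stored_noise, n_bits=8):
    """Compare difference-sample polarities against stored noise polarities."""
    agree = [0] * n_bits
    total = [0] * n_bits
    for i, (s, o, n) in enumerate(zip(suspect, original, stored_noise)):
        if n == 0:
            continue                       # sample carries no code energy
        diff = s - o
        pos = i % n_bits                   # bit position this sample votes on
        total[pos] += 1
        if (diff >= 0) == (n > 0):         # polarities match -> bit is "1"
            agree[pos] += 1
    return [1 if total[b] and 2 * agree[b] >= total[b] else 0
            for b in range(n_bits)]
```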

  In a noise-free world (speaking here of noise independent of that with which the verification encoding is effected), the above analysis always yields an accurate verification code. However, a process that is workable only in a noise-free world is of limited practical use.

  (In addition, the correct verification of a signal in a noise-free situation can be handled by various other, simpler methods, such as checksums, or a statistically improbable correspondence between the suspect signal and the original signal, etc.)

  Although anomalies caused by noise in decoding can be handled to some extent by analyzing a large portion of the signal, such anomalies still set a practical upper limit on the reliability of the process. Furthermore, the villains that must be faced are seldom so benign as random noise. Rather, they increasingly take the form of human-caused tampering, distortion, fraudulent manipulation, and the like. In such cases, the desired degree of verification reliability can be achieved only by other approaches.

  The currently preferred approach (the “holographic, statistical” decoding technique) relies on recombining the suspect signal with certain noise data (typically the data stored in memory 214) and analyzing the entropy of the resulting signal. “Entropy” need not be understood in its strict mathematical definition; it is simply the most convenient word to describe randomness (noise, flatness, snowiness, etc.).

  Most serial data signals are not random. That is, a sample usually correlates to some extent with adjacent samples. Noise, in contrast, is typically random. When a random signal (e.g., noise) is added to (or subtracted from) a non-random signal, the entropy of the resulting signal generally increases. That is, the resulting signal varies more randomly than the original. This is the case with the encoded output signal generated by the present encoding process; it has greater entropy than the original, unencoded signal.

  If, in contrast, the addition of a random signal to (or its subtraction from) a non-random signal reduces entropy, an exceptional situation is indicated. It is this exception that a suitable decoding process exploits to detect the embedded verification code.

  To fully understand this entropy-based decoding method, it is first helpful to highlight a characteristic of the original encoding process: the similar treatment of every eighth sample.

  In the encoding process discussed above, the pointer 230 increments through the codeword, one bit for each successive sample of the input signal. If the codeword is eight bits long, the pointer returns to the same bit position in the codeword every eighth sample. Thus, due to this periodic progression of the pointer 230, every eighth sample of the encoded signal shares a characteristic: depending on whether the bit of the codeword addressed by pointer 230 is “1” or “0”, these samples are all increased by the corresponding noise data, or all decreased.

  To take advantage of this characteristic, the entropy-based decoding process treats every eighth sample of the suspect signal in like manner. In particular, the process begins by adding to the 1st, 9th, 17th, 25th, etc. samples of the suspect signal the corresponding scaled noise values stored in memory 214 (i.e., those stored in the 1st, 9th, 17th, 25th, etc. memory locations, respectively). The entropy of the resulting signal (i.e., the suspect signal with every eighth sample changed) is then computed.

  (Computation of a signal's entropy or randomness is well known to those skilled in the art. One generally accepted technique is to take the derivative of the signal at each sample point, square these values, and then sum over the entire signal.)
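  Expressed as a minimal Python sketch, with a discrete first difference standing in for the derivative (the function name is hypothetical):

```python
def entropy_estimate(signal):
    """Sum of squared first differences: larger means noisier/more random."""
    return sum((b - a) ** 2 for a, b in zip(signal, signal[1:]))
```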

  The above steps are then repeated, this time subtracting the stored noise values from the 1st, 9th, 17th, 25th, etc. samples of the suspect signal.

  One of these two operations undoes the encoding process and reduces the entropy of the resulting signal; the other aggravates it. If adding the noise data in memory 214 to the suspect signal reduces its entropy, this data must have been previously subtracted from the original signal. This indicates that the pointer 230 was pointing to a “0” bit when these samples were encoded. (A “0” at the control input terminal of adder/subtractor 212 results in subtraction of the scaled noise from the input signal.)

  Conversely, if subtracting the noise data from every eighth sample of the suspect signal reduces its entropy, the encoding process must have previously added this noise. This indicates that the pointer 230 was pointing to a “1” bit when samples 1, 9, 17, 25, etc. were encoded. By noting whether the entropy reduction results from (a) addition or (b) subtraction of the stored noise data to/from the suspect signal, it can be determined whether the first bit of the codeword is (a) “0” or (b) “1”.

  The above operations are then performed on the group of regularly spaced samples of the suspect signal beginning with the second sample (i.e., 2, 10, 18, 26 ...). The entropy of the resulting signals indicates whether the second bit of the codeword is “0” or “1”. The process is likewise repeated for the following six groups of samples, until all eight bits of the codeword have been determined.
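  The whole procedure can be sketched as follows, using the entropy_estimate helper sketched above and again assuming a registered suspect signal (all names hypothetical):

```python
def decode_holographic(suspect, stored_noise, n_bits=8):
    """For each bit position b, test whether adding or subtracting the
    stored noise at samples b, b+N, b+2N, ... lowers the entropy."""
    bits = []
    for b in range(n_bits):
        added, subtracted = list(suspect), list(suspect)
        for i in range(b, len(suspect), n_bits):
            added[i] += stored_noise[i]
            subtracted[i] -= stored_noise[i]
        # addition undoing the encoding means noise was subtracted during
        # encoding (bit "0"); the converse indicates bit "1"
        bits.append(0 if entropy_estimate(added) < entropy_estimate(subtracted)
                    else 1)
    return bits
```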

  It will be appreciated that the approach described above is not defeated by tampering mechanisms that alter the values of individual samples; instead, the approach considers the entropy of the signal as a whole, yielding a high degree of confidence in the results. Moreover, even a small excerpt of the signal can be analyzed in this manner, permitting piracy of even small details of an original work to be detected. The result is thus statistically robust in the face of both natural and human tampering with the suspect signal.

  Furthermore, it will be appreciated that the use of an N-bit codeword in this real-time embodiment provides benefits similar to those described above in connection with the batch encoding system. (In practice, this embodiment can be conceptualized as using N different noise signals, just as in the batch encoding system. The first noise signal is a signal having the same extent as the input signal, comprising the scaled noise signal at the 1st, 9th, 17th, 25th, etc. samples (where N = 8), with zeroes at the intervening samples. The second noise signal is a similar signal comprising the scaled noise signal at the 2nd, 10th, 18th, 26th, etc. samples, with zeroes at the intervening samples; and so forth. All these signals are combined to produce a composite noise signal.) One important advantage is the high degree of statistical confidence (confidence that doubles with each successive bit of the verification code) that a match is truly a match. The system does not rely on subjective evaluation of a suspect signal against a single deterministic embedded code signal.

Illustrative Variations

  From the above description, it will be appreciated that many changes can be made to the illustrated system without changing its basic principles. Some of these variations are described below.

  The decoding process described above tries both adding and subtracting the stored noise data to/from the suspect signal in order to find which operation reduces entropy. In other embodiments, only one of these operations need be performed. For example, in one alternative decoding process, the stored noise data corresponding to every eighth sample of the suspect signal is only added to those samples. If the entropy of the resulting signal increases, the corresponding bit of the codeword is “1” (i.e., this noise was previously added during the encoding process, and adding it again only increases the randomness of the signal). If the entropy of the resulting signal decreases, the corresponding bit of the codeword is “0”. A further test of entropy, with the stored noise signal subtracted, is not necessary.

  The statistical reliability of the verification process (encoding and decoding) can be designed to substantially exceed any required confidence threshold (e.g., 99.9%, 99.99%, 99.999%, etc.) by appropriate selection of the global scaling factor, among other parameters. Additional confidence in any given application (unnecessary in most applications) can be achieved by rechecking the decoding process.

  One way to recheck the decoding process is to remove the stored noise data from the suspect signal in accordance with the bits of the identified codeword, generating a “restored” signal (e.g., if the first bit of the codeword is found to be “1”, the noise samples stored in the 1st, 9th, 17th, etc. locations of memory 214 are subtracted from the corresponding samples of the suspect signal). The entropy of the restored signal is measured and used as a baseline for further measurements. The process is then repeated, this time removing the stored noise data from the suspect signal in accordance with a modified codeword. The modified codeword is identical to the identified codeword, except that one bit (e.g., the first) is toggled. The entropy of the resulting signal is measured and compared with the baseline. If toggling a bit of the identified codeword results in increased entropy, the accuracy of that bit of the identified codeword is confirmed. The process is repeated, each time with a different bit of the identified codeword toggled, until all bits of the codeword have been so examined. Each such change should result in an increase in entropy relative to the baseline value.
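  A compact sketch of this recheck, under the same assumptions and hypothetical names as the earlier decoding sketches:

```python
def recheck_codeword(suspect, stored_noise, bits):
    """Verify an identified codeword: stripping the noise per the codeword
    gives a baseline entropy; toggling any single bit should raise it."""
    def entropy(sig):
        return sum((b - a) ** 2 for a, b in zip(sig, sig[1:]))

    def strip(code):
        restored = list(suspect)
        for i in range(len(restored)):
            n = stored_noise[i]
            restored[i] -= n if code[i % len(code)] == 1 else -n
        return restored

    baseline = entropy(strip(bits))
    for b in range(len(bits)):
        toggled = list(bits)
        toggled[b] ^= 1
        if entropy(strip(toggled)) <= baseline:
            return False          # a toggle failed to raise entropy
    return True                   # every toggle raised entropy: code confirmed
```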

  The data stored in memory 214 admits of various alternatives. In the discussion above, memory 214 contains scaled noise data. In other embodiments, unscaled noise data can be stored instead.

  In still other embodiments, it may be desirable to store at least a portion of the input signal itself in memory 214. For example, the memory can allocate 8 bits to each noise sample and 16 bits to store the most significant bits of an 18- or 20-bit audio signal sample. This has several benefits. One is that registration of a “suspect” signal is simplified. Another is that, where an input signal that has already been encoded is encoded again, the data in memory 214 can be used to identify which encoding was performed first. That is, from the input signal data in memory 214 (incomplete though it is), it can generally be determined which of the two codewords was encoded first.

  Yet another alternative for memory 214 is to omit it altogether.

  One way that this can be achieved is to use a deterministic noise source in the encoding process, such as an algorithmic noise source seeded by a known key number. The same deterministic noise source seeded with the same key number can be used in the decryption process. In such a device, instead of the large data set normally stored in memory 214, only the key number needs to be stored for later use in decryption.
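  A minimal sketch of such a keyed, deterministic noise source; Python's random module merely stands in for whatever algorithmic generator an implementation would actually adopt, and the names and values are illustrative assumptions:

```python
import random

def keyed_noise(key, length, sigma=25.0):
    """Reproducible zero-average noise: the same key always yields the
    same sequence, so only the key need be stored for later decoding."""
    rng = random.Random(key)
    return [rng.gauss(0.0, sigma) for _ in range(length)]

# Encoder and decoder each call keyed_noise(1234, len(signal)) and obtain
# identical noise; the key 1234 replaces the bulk contents of memory 214.
```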

  Alternatively, if the noise signal added during encoding does not have a zero average value, and the codeword length N is known to the decoder, a universal decoding process can be implemented. This process uses an entropy test similar to the procedures described above, but cycles through the possible codewords, adding/subtracting a small dummy noise value (e.g., less than the expected average noise value) to/from every Nth sample of the suspect signal in accordance with the bits of the codeword being tested, until a reduction in entropy is observed. However, such an approach is less secure than the other embodiments (e.g., it is susceptible to cracking by brute force) and is therefore unsuitable for most applications.

  Many applications can be handled by the embodiment shown in FIG. 7, which uses several different codewords, each with the same noise data, to generate several differently encoded versions of the input signal. More particularly, embodiment 240 of FIG. 7 includes a noise store 242 in which noise from noise source 206 is stored during verification encoding of the input signal with a first codeword. (The noise source of FIG. 7 is shown outside the real-time encoder 202 for convenience of illustration.) Additional verification encoded versions of the input signal can thereafter be generated by reading the stored noise data from the store and combining it with the input signal in accordance with subsequent codewords. (Binary-sequential codewords are shown in FIG. 7, but any arrangement of codewords can be used in other embodiments.) Such an arrangement permits a large number of differently encoded signals to be generated without requiring a proportionally sized long-term noise memory. Instead, a fixed amount of noise data is stored, whether the original is encoded once or 1000 times.

  (If desired, several differently encoded output signals can be generated simultaneously rather than sequentially. One such implementation includes a plurality of adder/subtractor circuits, each driven by the same input signal and the same scaled noise signal, but by a different codeword; each circuit produces a differently encoded output signal.)

  It will be appreciated that, in applications having many differently encoded versions of the same original, it is not always necessary to determine every bit of the codeword. Sometimes, for example, the application may require identifying only the group of codes to which the suspect signal belongs. (For example, higher-order bits of the codeword might indicate an organization to which several differently encoded versions of the same source work were provided, with lower-order bits identifying specific copies. To identify the organization with which a suspect signal is associated, it may not be necessary to examine the lower-order bits, since the organization can be identified from the higher-order bits alone.) If the verification requirements can be met by determining only a subset of the codeword bits in the suspect signal, the decoding process can be shortened.

  Some applications are best handled by restarting the encoding process several times within an integral work, sometimes with a different codeword. As an example, consider a videotaped work (e.g., a television program). Each frame of the videotaped work can be verification encoded with a unique code number, processed in real time by a device 248 similar to that shown in FIG. Each time vertical blanking is detected by the sync detector 250, the noise source 206 is reset (e.g., to repeat the sequence it has just generated) and the verification code is incremented to the next value. Each frame of the videotape is thereby uniquely verification encoded. Typically, the encoded signal is stored on videotape for long-term storage (although other storage media, including laser disks, can also be used).

  Returning to the encoder, the look-up table 204 in the illustrated embodiment exploits the fact that large-amplitude samples of the input data signal can tolerate a higher level of encoded verification signal than can small-amplitude input samples. Thus, for example, input data samples having decimal values of 0, 1, or 2 may correspond to scale factors of unity (or even zero), whereas input data samples having values in excess of 200 may correspond to considerably larger scale factors. Generally speaking, the scale factors and the input sample values are related by a square-root relationship. That is, a four-fold increase in the value of a sampled input signal corresponds roughly to a two-fold increase in the value of the scale factor associated with it.

  (As a passing reference to the zero scaling factor: this may be appropriate where, for example, the source signal is devoid of temporal or spatial information content. In an image, for example, a region characterized by several contiguous zero sample values may correspond to a jet black region of the frame; a zero scaling value can be used there, since there is essentially no image data to be pirated.)
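  A sketch of such a look-up table under the square-root relationship just described; the construction and the sample range are illustrative assumptions, not values prescribed by the specification:

```python
def build_lookup_table(max_sample=255):
    """Scale factor grows as the square root of the input sample value;
    zero-information (jet black) samples receive a zero scale factor."""
    return [0.0 if v == 0 else v ** 0.5 for v in range(max_sample + 1)]

lut = build_lookup_table()
# a four-fold increase in sample value doubles the scale factor:
assert abs(lut[200] / lut[50] - 2.0) < 1e-9
```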

  Continuing with the encoding process, those skilled in the art will recognize the potential for “rail errors” in the illustrated embodiment. For example, if the input signal consists of 8-bit samples spanning the entire range 0 to 255 (decimal), the addition or subtraction of scaled noise may produce output signal values that cannot be represented in 8 bits (e.g., -2 or 257). There are many well-understood techniques for dealing with this situation, some proactive and some reactive. (Among these known techniques are: specifying that the input signal shall not have samples in the ranges 0-4 or 251-255, so that it can safely be modulated by the noise signal; and providing means for detecting input signal samples that would otherwise cause rail errors and adaptively modifying them to fit.)
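  One proactive variant of the first measure can be sketched as follows; the margin of 4 mirrors the ranges mentioned above, and the function name is hypothetical:

```python
def clamp_for_rails(sample, margin=4, lo=0, hi=255):
    """Pull samples inside [lo+margin, hi-margin] so later addition or
    subtraction of scaled noise cannot leave the representable range."""
    return max(lo + margin, min(hi - margin, sample))

assert clamp_for_rails(254) == 251   # headroom for noise up to +4
assert clamp_for_rails(1) == 4       # headroom for noise down to -4
```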

  Although the illustrated embodiment describes stepping through the codeword sequentially, one bit at a time, it will be appreciated that the bits of the codeword can be used other than sequentially for this purpose. Indeed, the bits of the codeword can be selected in accordance with any predetermined algorithm.

  Dynamic scaling of the noise signal based on the instantaneous value of the input signal is an optimization that can be omitted in many embodiments. That is, the look-up table 204 and the first scaler 208 can be omitted altogether and the signal from the digital noise source 206 can be supplied directly to the adder / subtractor 212 (or through the second global scaler 210).

  Further, it will be appreciated that, although the use of a zero-average noise source simplifies the illustrated embodiment, it is not necessary to the present invention. A noise signal with another average value can readily be used, and D.C. compensation (if needed) can be effected elsewhere in the system.

  The use of a noise source 206 is likewise optional. A variety of other signal sources can be used, depending on the constraints of the application (e.g., the threshold at which the encoded verification signal becomes perceptible). In many cases, the level of the embedded verification signal is low enough that the verification signal need not have a random aspect; it is imperceptible regardless of its nature. A pseudo-random source 206 is usually desirable, however, because it provides the greatest verification-code signal S/N ratio (a somewhat awkward term in this context) for a given level of imperceptibility of the embedded verification signal.

  It will be appreciated that verification encoding need not be performed after a signal has been reduced to stored data form (i.e., “fixed in tangible form”, in the words of U.S. copyright law). For example, consider the case of a popular musician whose performances are often recorded illicitly. By verification encoding the audio before it drives the concert hall speakers, unauthorized recordings of the concert can be traced to a particular place and time. Likewise, live audio sources such as 911 emergency calls can be encoded prior to recording, so as to facilitate their later authentication.

  Although the black box embodiment has been described as a stand-alone unit, it will be appreciated that it can be integrated as a component into a number of tools/instruments. One is a scanner, which can embed verification codes in its scanned output data. (Such codes can simply serve to memorialize that the data was generated by a particular scanner.) Another is creative software, such as the popular drawing/graphics/animation/paint programs offered by Adobe, Macromedia, Corel, and the like.

  Finally, while the real-time encoder 202 has been described with reference to a particular hardware implementation, it will be appreciated that various other implementations can be used instead. Some utilize other hardware configurations. Others make use of software routines for some or all of the described functional blocks. (These software routines can be executed on any of a number of different general-purpose programmable computers, such as 80x86 PC-compatible computers, RISC-based workstations, etc.)

Noise, Pseudo Noise, and Optimized Noise Formats

  Heretofore, this specification has posited Gaussian noise, “white noise”, and noise generated directly by application instrumentation as a few of the many examples of the kind of carrier signal appropriate for carrying a single bit of information throughout an image or signal. It is possible to be more proactive in “designing” the characteristics of the noise in order to achieve certain goals. “Designs” using Gaussian or instrument noise were driven somewhat by a desire for “absolute” security. This section of the specification examines other considerations in the design of the noise signals that may be regarded as the ultimate carriers of the verification information.

  For some applications it may be advantageous to design the carrier signal (e.g., the Nth embedded code signal in the first embodiment, or the scaled noise data in the second embodiment) so as to provide more absolute signal strength to the verification signal relative to the perceptibility of that signal. An example is as follows. True Gaussian noise assumes the value “0” most frequently, then 1 and -1 with equal but lesser probability than “0”, then 2 and -2, and so on. Obviously, a value of 0 carries no information as used in the present invention. Thus, one simple adjustment, or design, is that whenever a zero occurs in the generation of the embedded code signal, a new process takes over and converts the value “randomly” to either a 1 or a -1. A histogram of such a process would appear as a Gaussian/Poissonian distribution, except that the 0 bin would be empty and the 1 and -1 bins would each be increased by half of the usual histogram value of the 0 bin.
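  A minimal sketch of this zero-free “designed noise”; the integer rounding and the sigma value are illustrative assumptions:

```python
import random

def zero_free_noise(length, sigma=2.0, seed=7):
    """Gaussian-like integer noise in which every zero is randomly
    converted to +1 or -1, so each sample carries code energy."""
    rng = random.Random(seed)
    samples = []
    for _ in range(length):
        v = round(rng.gauss(0, sigma))
        if v == 0:
            v = rng.choice((1, -1))   # coin flip keeps the choice random
        samples.append(v)
    return samples
```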

  With such a design, verification signal energy appears in every part of the signal. Among the trade-offs is a (generally negligible) loss of code security, in that a “deterministic component” becomes part of the generation of the noise signal. The reason this can generally be neglected is that we still end up with a coin-flip situation in which the 1 or -1 is chosen randomly. Another trade-off is that this form of designed noise has a higher threshold of perceptibility, and is therefore usable only in applications where the least significant bit of the data stream or image is already negligible relative to the commercial value of the material, i.e., where, if the least significant bit were stripped from the signal (or from all signal samples), no one could discern the difference and the value of the material would not suffer. This zero-value adjustment of the example described above is but one of many ways to “optimize” the noise characteristics of the signal carrier, as will be apparent to anyone skilled in the art. We also call this “pseudo-noise”, in the sense that natural noise can be converted, in a predetermined manner, into a signal that for all intents and purposes reads as noise. Encryption methods and algorithms can likewise easily, and often by definition, generate signals that are perceived as completely random. Thus, the term “noise” can carry different meanings: one subjective, as defined by an observer or listener, and the other mathematical. The mathematical difference bears on whether the noise has different security properties, and on how easy it is to track down, or to “automatically recognize”, the presence of this noise.

"Universal" embedding code Most of this specification should make the noise-like embedding code carrying bits of information in the verification signal unique for each embedded signal for absolute safety Or slightly less restrictive to teach that the embed code signal should be generated sparingly to use the same embed code for a set of 1000 pieces of film, for example . In any case, there are other approaches that can greatly develop new applications for this technology by using what we can call "universal" embedded code signals. The economics of using them can be analyzed by the actual low reliability of these universal codes (eg, they can be analyzed by time-dependent encryption / decryption methods, and therefore prevented or replaced as possible ) Is economically negligible compared to the economic benefits of defining the intended use. Copyright infringement and illegal use are simply predictable “cost” and uncollected revenue sources, ie, simple line items in the overall economic analysis. A good analogy to this is in the cable industry and changing the wavelength of the video signal. Anyone who can be a technically skilled individual who is generally a law-abiding citizen can climb a ladder and repel a few wires in a cable connection box to free all paid channels. I think you know. The cable industry knows this, stops it and chooses an effective way to prosecute these captured, but the “lost income” emanating from this habit is still prevalent, but the system The percentage of profits gained by scrambling the whole is almost negligible. The overall scrambling system is economically successful despite the lack of “perfect safety”.

  The same is true of the use of universal codes with the present technology: they offer great economic opportunity at the price of somewhat reduced security. This section first describes what universal codes provide, and then moves on to some interesting uses to which these codes can be put.

  Universal embedded codes generally embody the concept that knowledge of the exact codes can be distributed. Rather than the embedded codes being locked in a secret safe, never to be touched until litigation arises (as mentioned elsewhere in this specification), they are distributed to various locations where on-the-fly analysis can be performed. Generally, this distribution still takes place within a security-controlled environment, meaning that knowledge of the codes is limited to those parties that need to know them. Apparatus for automatically detecting copyrighted works is a non-human example of “something” that needs to know the codes.

  There are many ways to implement the universal code concept, each with its own advantages for any given application. For the purpose of teaching this technology, we classify these approaches into three categories: universal codes based on libraries, universal codes based on deterministic formulas, and universal codes based on predefined industry-standard patterns. Roughly speaking, the first is more secure than the latter two, while the latter two can be implemented more economically than the first.

Universal Code: 1) Universal Code Library

  The use of a universal code library simply means that the techniques of the present invention are employed as described, except that only a limited set of individual embedded code signals is generated, and any given encoded material uses some subset of this limited “universal” set. An example is in order here. A photographic paper manufacturer may wish to pre-expose every piece of 8x10 inch photographic paper that it sells with a unique verification code. It also wishes to sell verification code recognition software to its large customers, service shops, stock agencies, and individual photographers, so that all these parties can not only verify that their own material is accurately marked, but can also determine whether third-party material they are about to acquire has been marked by this technology as copyrighted. This latter information helps them track down the copyright owner and head off litigation, among many other benefits. In order to make this plan “economical”, generating a unique verification embedded code for each and every piece of photographic paper would require generating several terabytes of independent information, which would need to be stored and to be accessible to the recognition software. Instead, the manufacturer decides to embed in its photographic paper 16-bit verification codes derived from a set of only 50 independent “universal” embedded code signals. The details of how this is done are given in the next section, but the point here is that the recognition software need only contain a limited set of embedded codes in its code library, typically 50 x 16 individual embedded codes spread over an 8x10 photographic print, amounting to on the order of 1 to 10 megabytes of information (allowing for digital compression). The reason for choosing 50 rather than just 16 embedded codes is one of security: were the same 16 embedded codes used for all photographs, not only would the serial number capacity be limited to 2 to the 16th power, but less sophisticated pirates could crack the codes and remove them using software tools.

  There are many different ways to implement this plan, of which the following is one preferred approach. Based on engineering judgment, a criterion of 300 pixels per inch for the embedded code signals is determined to be sufficient resolution for most applications. This means that the composite embedded code image contains 3000 x 2400 pixels, to be exposed at a very low level on each 8x10 sheet: 7,200,000 pixels in all. Using our staggered encoding system, as described in the black box arrangement of FIGS. 5 and 6, each individual embedded code signal contains only 7,200,000/16, or about 450K, true information-carrying pixels, i.e., only every 16th pixel along any given raster line. These values are typically digital numbers in the range -2 to 2, and are well described by signed 3-bit numbers. The raw information content of an embedded code is then about 3/8ths of 450K, or about 170 kilobytes; digital compression can reduce this further. All of these decisions are subject to standard engineering optimization principles, as known in the art and as defined by the given application at hand. Thus, we find that the 50 independent embedded codes amount to a few megabytes of information. This is a quite reasonable level to distribute as a “library” of universal codes within the recognition software. Standard state-of-the-art encryption techniques can be used to mask the exact nature of these codes, should one be concerned that a would-be pirate might purchase the recognition software merely to reverse-engineer the universal embedded codes. The recognition software can simply decrypt the codes before applying the recognition techniques taught herein.
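  The arithmetic in this passage can be checked with a few lines (a sketch only; the figures are those assumed in the text):

```python
# Back-of-envelope check of the storage figures quoted above.
pixels = 3000 * 2400                # 8x10 print at 300 pixels per inch
per_code = pixels // 16             # staggered: every 16th pixel
raw_bytes = per_code * 3 // 8       # 3-bit values in the range -2..2
print(pixels)                       # 7,200,000
print(per_code)                     # 450,000
print(raw_bytes)                    # 168,750 -> about 170 kilobytes
print(50 * raw_bytes / 1e6)         # about 8.4 MB for all 50 codes,
                                    # before digital compression
```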

  The recognition software itself would certainly have a variety of features, but its central task is to determine whether a universal copyright code is present in a given image. The key questions are: which 16 of the 50 universal codes, if any, are contained in the image, and, if 16 are found, what are their bit values? The key variables in determining the answers to these questions are registration, rotation, magnification (scale), and extent. In the most general case, where no helpful hints are available, all variables must be varied independently across all mutual combinations, and each of the 50 universal codes must be checked, by addition and by subtraction, to determine whether an entropy reduction occurs. Strictly speaking, this is an enormous task, but many helpful hints will be found that make it much easier, such as having an original image to compare with the suspect copy, or knowing the orientation and extent of the image relative to the 8x10 photographic paper, whereupon simple registration techniques can determine all of the variables to within some acceptable tolerance. It is then simply necessary to cycle through the 50 universal codes looking for any entropy reduction; if one code is found, the remaining fifteen should be found as well. A protocol is needed to establish a predetermined ordering of the 50 universal codes, corresponding to the positions from the most significant bit to the least significant bit of the ID codeword. Thus, if we discover the presence of universal code number “4”, discover its bit value to be “0”, and discover that universal codes “1” through “3” are clearly absent, then the most significant bit of our N-bit ID code number is a “0”. Similarly, if we find that the next universal code present is number “7” and find its value to be “1”, then our next most significant bit is a “1”. Done properly, this system can unambiguously trace a photographic paper stock serial number to a copyright owner, so long as the stock was registered with some registry or with the manufacturer of the photographic paper itself. That is, universal embedded codes 4, 7, 11, 12, 15, 19, 21, 26, 27, 28, 34, 35, 37, 38, 40, and 48, with embedded code values 0110 0101 0111 0100, map to the registered photographic paper stock of Leonard de Boticelli, an unknown wildlife photographer and glacier cinematographer living in Canada. We know this because he dutifully registered his film and photographic paper stock, a few seconds' worth of work using the postage-prepaid registration envelope kindly provided by the manufacturer, a ridiculously simple process, when he purchased the stock. Anyone needing to pay Leonard a royalty can simply look up his registration, and the registry can automate the royalty payment process as part of its service.

  One final note: truly sophisticated pirates, and others with illicit purposes, can indeed use various cryptanalytic methods to crack these universal codes, and could then create and sell software and hardware tools to assist in removing or distorting the codes. We do not, however, teach those methods as part of this specification. In any event, this is one of the prices that must be paid for the ease of use of universal codes and the applications they open up.

Universal Code: 2) Universal Codes Based on Deterministic Formulas

  A library of universal codes requires the storage and transmittal of several megabytes of independent, generally random data as the keys with which to unlock the existence and identity of universally encoded signals and images. Alternatively, various deterministic formulas can be used to generate the random-looking data/image frames, thereby avoiding the need to store all of these codes in memory and to interrogate each of the “50” universal codes. Deterministic formulas can also help speed the process of determining the ID code once its presence in a given signal or image is known. On the other hand, deterministic formulas lend themselves to cracking by less sophisticated pirates, and once cracked they are more easily disseminated, for example by posting to 100 newsgroups on the Internet. For many applications, such cracking and publication will not matter, and deterministic formulas for generating the individual universal embedded codes may be just the ticket.

Universal Code: 3) “Simple” Universal Codes

  This category is in part a combination of the first two, and it is aimed at truly large-scale implementations of the principles of this technology. The applications employing this class are of the kind where steadfast security is less important than low cost, large-scale implementation, and the enormous economic benefits these enable. One example application places a verification recognition unit directly within modestly priced home audio and video equipment (such as a television). Such recognition units would typically monitor audio and/or video looking for these copyright verification codes, and simple decisions would be made based on the findings, such as enabling or disabling recording capability, or incrementing program-specific billing meters that are transmitted back to a central audio/video service provider and placed on monthly invoices. Likewise, “black boxes” in bars and other public places could monitor (listen with a microphone for) copyrighted material and generate the detailed reports used by ASCAP, BMI, and the like.

  The core principle of simple universal codes is that certain basic industry-standard “noise-like” and seamlessly repetitive patterns are injected into signals, images, and image sequences, so that inexpensive recognition units can either A) determine the mere presence of a copyright “flag”, or B) in addition to A, determine precise verification information that can facilitate more complex decisions and actions.

  To realize this embodiment of the present invention, the basic principles of generating the individual embedded noise signals need to be simplified so as to accommodate inexpensive recognition signal-processing units, while at the same time maintaining the properties of effective randomness and holographic permeation. With the adoption of such simple codes across entire industries, the codes themselves would border on public knowledge (much as cable scrambling boxes are, in fact, nearly public), leaving the door open to pirates developing a black market in countermeasures. That situation, however, would be quite analogous to cable video scrambling and the objective economic analysis of such illicit activity.

  One piece of prior art known to the applicant in this general area of proactive copyright detection is the serial copy management system adopted by many firms in the audio industry. To the best of Applicant's knowledge, this system employs an audio “flag” signal that is not part of the audio data stream itself, but that is nevertheless carried along with the audio stream and can indicate whether the associated audio data may be copied. One problem with this system is that it is limited to media and devices that can support this additional “flag” signal. Another deficiency is that the flag system carries no identity information that could be used to make more complex decisions. Yet another difficulty is that high-quality audio sampling of an analog signal can come arbitrarily close to producing a perfect digital copy of a digital master, and there appears to be no way for this system to prevent that possibility.

  The principles of the present invention can address these and other problems, in audio applications, in video, and in all of the other applications described above. An example of the use of simple universal codes is as follows. A single industry standard “1.00000 second of noise” is defined as the most basic indicator of the presence or absence of a copyright code in any given audio signal. FIG. 9 shows an example of how one such industry-standard noise second might look, in both the time domain 400 and the frequency domain 402. By definition it is a continuous function, and it adapts to any combination of sampling rate and bit quantization. It has a normalized amplitude and can be scaled arbitrarily to any digital signal amplitude. The signal level and the first M derivatives of the signal are continuous at the two boundaries 404 (FIG. 9C), so that, when the signal is repeated, no “discontinuity” is visible (as a waveform) or audible when played on a high-end audio system. The choice of 1 second is arbitrary in this example; the exact length of the interval would be derived from considerations such as audibility, quasi-white-noise status, seamless repeatability, ease of recognition processing, and the speed with which a copyright determination can be made. The insertion of this repetitive noise signal into a signal or image (again, at a level below human perception) indicates the presence of copyrighted material. This is essentially a 1-bit verification code; the embedding of further verification information is discussed later in this section. The use of this verification technique can extend far beyond the low-cost home appliances discussed here: it can be used in studios, and monitoring stations can be set up that literally monitor hundreds of channels of information simultaneously, searching for marked signal streams and, moreover, for the associated identity codes, which can be tied into billing networks and royalty tracking systems. This basic, standardized noise signature is seamlessly repeated over and over again and added to any audio signal that is to be marked with the basic copyright verification. Part of the reason for the word “simple” is as follows. Pirates will obviously learn of this industry-standard signal, but the illicit activity deriving from this knowledge, such as removal or corruption of the signal, is economically minuscule compared with the economic value of the overall technique to the mass market. For most high-end audio, this signal would be some 80 to 100 dB below full scale, or even lower; each situation can choose its own level, though recommended levels would exist. The amplitude of the signal can be modulated according to the level of the audio signal onto which the noise signature is placed; that is, the amplitude can increase somewhat during a drum beat, for example, but not so dramatically as to become audible or objectionable. Such measures simply assist the recognition circuitry to be described.

  Recognition of the presence of this noise signal by low-cost devices can be performed in a variety of ways. One class is based on fundamental variants of the simple principle of measuring audio signal power. Software recognition programs can also be written, and more sophisticated mathematical detection algorithms can be applied to make yet more reliable detection determinations. In such embodiments, detection of the copyright noise signature involves comparing the time-averaged power level of the audio signal with the time-averaged power level of the same audio signal with the noise signature subtracted. If the noise-signature-subtracted audio signal has a lower power level than the unmodified audio signal, the copyright signature is present, and some state flag should be set accordingly. The main engineering subtleties involved in performing this comparison are: handling non-standard audio recording and playback speeds (e.g., some devices may be “slow” by 0.5% relative to an exact one-second interval); and handling the unknown phase of the one-second noise signature within any given audio (basically, this “phase” can be anywhere from 0 to 1 second). Another subtlety, not so central as the above two but which nevertheless should be addressed, is that the recognition unit should not subtract a noise signature of larger amplitude than the noise signature originally embedded in the audio signal. Fortunately, this can be handled by subtracting only a small-amplitude noise signal; if the power level drops, this indicates that we are “heading toward a trough” in the power level. Yet another related subtlety is that the power level changes will be very small relative to the overall power level, so the calculations should generally be done with appropriate bit precision, e.g., 32-bit arithmetic and accumulation of the time-averaged power levels for 16-20 bit audio.
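  A toy sketch of this power-comparison test, assuming (unrealistically, as the text notes) that speed and phase registration are already solved; the gain value and names are illustrative assumptions:

```python
def mean_power(segment):
    """Time-averaged power of a signal segment."""
    return sum(s * s for s in segment) / len(segment)

def signature_present(audio, signature, trial_gain=0.25):
    """Subtract a deliberately small copy of the one-second noise signature
    and compare time-averaged power before and after; a net power drop
    across several repetitions suggests the signature is embedded."""
    n = len(signature)
    repeats = min(len(audio) // n, 8)     # average over a few seconds
    drop = 0.0
    for r in range(repeats):
        seg = audio[r * n:(r + 1) * n]
        stripped = [a - trial_gain * s for a, s in zip(seg, signature)]
        drop += mean_power(seg) - mean_power(stripped)
    return drop > 0                       # power fell: signature detected
```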

  Clearly, the design and assembly of this power-level comparison processing circuitry for low-cost applications is an engineering optimization task. One trade-off is accuracy of verification versus the “shortcuts” that can be made to the circuitry to lower its price and complexity. A preferred embodiment places this recognition circuitry within devices as a single programmable integrated circuit custom-made for the task. FIG. 10 shows one such integrated circuit 506. Here, the audio signal enters at 500, either as a digital signal or as an analog signal to be digitized within the IC, and the output is a flag 502 that is set to one level when the copyright noise signature is found and to another level when it is not found. Also depicted is the storage of the standardized noise signature waveform in read-only memory 504 within the IC 506. There is a slight time delay between the application of the audio signal to the IC 506 and the output of a valid flag 502, since some finite portion of the audio must be monitored before recognition can take place. In such a case, a “flag valid” output signal may be required, by which the IC informs the outside world whether it has had sufficient time to make an accurate determination of the presence or absence of the copyright noise signature.

There is a wide range of specific designs and design philosophies by which the basic functions of the IC 506 of FIG. 10 can be performed. Audio engineers and digital signal processing engineers can produce several fundamentally different designs. One such design is illustrated in FIG. 11 as a process 599, itself subject to further engineering optimization, as discussed later. FIG. 11 shows a flowchart that may equally describe an analog signal processing network, a digital signal processing network, or the programming steps of a software program. We see that the input signal 600 is fed along one path to a time-averaged power meter 602, and the resulting power signal is itself treated as the signal P_sig. To the upper right, we find the standard noise signature 504, which is read out at 604 at 125% of normal speed, thereby shifting its pitch and yielding the “pitch-changed noise signal” 606. Then, at step 608, the pitch-changed noise signal is subtracted from the input signal, and this new signal is fed to a time-averaged power meter of the same form as shown at 602, here designated 610. The output of this operation is likewise a time-based signal, denoted here as P_s-pcn at 610. Next, at step 612, the power signal 602 is subtracted from the power signal 610, producing the power difference signal P_out 613. If the universal standard noise signature is actually present in the input audio signal 600, case 2, 618, occurs, and a beat signal of approximately 4-second period appears in the output signal 613; this beat signal must then be detected, according to the steps shown in the figures. Case 1, 614, is a uniform noise signal in which no periodic beating is observed. The 125% speed-up at step 604 is arbitrarily chosen here; engineering considerations would determine the optimum value, leading to a different beat signal frequency 618. Whereas waiting roughly 4 seconds in this example is effectively quite a while, especially if at least 2 or 3 beats are to be detected, a further figure outlines how this basic process can be applied repeatedly to various delayed versions of the input signal, each delayed by about 1/20 second, by 20 parallel circuits operating simultaneously on the audio. In this method, a beat signal appears roughly every 1/5 second and looks like a traveling wave descending the row of beat-detection circuits. The presence or absence of this traveling beat wave triggers the detection flag 502. Meanwhile, an audio signal monitor ensures that, for example, at least 2 seconds of audio have been heard before the flag-valid signal 508 is set.

  Although an audio example has been described, it will be obvious to those skilled in the art that universal noise signals, or similarly defined counterparts for images, can be applied to the many other kinds of signals, images, photographs, and physical media already discussed.

The case described above dealt with only a 1-bit plane of information, i.e., determining whether the noise signature signal is present (1) or not (0). For many applications it is preferable to also detect serial number information, which can then be used for more complex decisions, for logging information in billing statements, and the like. Principles similar to those described above apply, but here there are N independent noise signatures, as depicted in FIG. 9, rather than one single such signature. Typically, one such signature is the master, by which the mere presence of a copyright marking is detected, and it generally has greater power than the others; the other, lower-power “verification” noise signatures are then embedded in the audio. Once the recognition circuitry finds the presence of the master noise signature, it steps through the other N noise signatures, applying steps similar to those described above. Where a beat signal is detected, this indicates a bit value of 1; where no beat signal is detected, a bit value of 0 is indicated. Typically N is 32, so that 2 to the 32nd power verification codes are available to any given industry employing the present invention.

Use of this technique when the length of the verification code is 1

  The principles of the present invention can obviously be applied in the case where only the presence or absence of a single verification signal--a fingerprint, if you will--is used to give confidence that a signal or image is copyrighted. The added confidence of the coin-flip analogy is gone, as is the capacity for tracking codes or basic serial numbers; but many applications will not require these attributes, and the added simplicity of a single fingerprint more than compensates for their loss in any event.

Similarity to “wallpaper”

  The term “holographic” has been used herein to describe how a verification code number is distributed, in a largely complete form, throughout an encoded signal or image. This refers to the concept that any given fragment of the signal or image contains the complete, unique verification code number. As with physical implementations of holography, there is a limit on how small the fragment can become before it begins to lose this property, the resolution limits of the holographic media being a major factor in the case of holography itself. In the case of an untampered distribution signal that was encoded using the encoder of FIG. 5, and that additionally used our “designed noise” described above, in which zeroes are randomly changed to 1 or -1, the limiting fragment size is simply N contiguous samples along a signal or image raster line, where N is the predefined length of our verification code number. This is an information-theoretic limit; practical situations, with their noise and tampering, will generally require fragments one, two, or more orders of magnitude larger than this simple number N. Those skilled in the art will recognize that many variables enter into any precise statistical definition of the smallest fragment size from which a verification can be made.

  For teaching purposes, Applicant also uses the analogy that the unique verification code number is “wallpapered” across the image (or signal); that is, it is repeated over and over again throughout the image. This repetition of the ID code number can be regular, as in the use of the encoder of FIG. 5, or can itself be random, wherein the bits of the ID code 216 of FIG. 6 are not stepped through in normal repetitive fashion but are instead randomly selected on each sample, this random selection being stored along with the value of the output signal 228. In any event, the information carrier of the ID code, the individual embedded code signal, varies across the image or signal. Thus, to summarize the wallpaper analogy: the ID code itself repeats over and over, but the pattern that each repetition imprints changes randomly in accordance with a generally untraceable key.

Lossy Data Compression

  As noted above, the verification encoding of the preferred embodiment withstands lossy data compression and subsequent decompression. Such compression will find increasing use, particularly in contexts such as the mass distribution of digitized entertainment programming (movies, etc.).

  While the data encoded according to the preferred embodiment of the present invention withstands all forms of lossy compression known to the applicant, those expected to be most commercially important are the CCITT G3, CCITT G4, JPEG, MPEG, and JBIG compression/decompression standards. The CCITT standards are widely used in black-and-white document compression (e.g., facsimile and document storage). JPEG is most widely used for still images. MPEG is most widely used for moving images. JBIG is a promising successor to the CCITT standards for use with black-and-white images. Such techniques are well known to those in the lossy data compression field; a good overview can be found in Pennebaker et al., JPEG, Still Image Data Compression Standard, Van Nostrand Reinhold, N.Y., 1993.

Steganography and the use of this technique in the transmission of more complex messages or information

  This disclosure has so far concentrated on what was referred to above as wallpapering a single verification code across an entire signal. This appears to be the desirable feature for many applications. However, there are other applications in which it is desirable to pass a message, or to embed a very long string of pertinent verification information, in a signal or image. One of many such possible applications is the case where a given signal or image is intended to be manipulated by several different groups, and certain regions of the image are reserved for the insertion of each group's pertinent manipulation information.

  In such cases, the codeword 216 in FIG. 6 can actually be changed, in some predetermined manner, as a function of position within the signal or image. For example, in an image, the code can change for each and every raster line of the digital image. The codeword 216 might be a 16-bit codeword, but with each scan line carrying a new codeword, a 480-scan-line image can pass a 960-byte (480 x 2 byte) message. The recipient of the message would need access to either the noise signal stored in memory 214, or the universal code structure of the noise codes if that method of encoding were used. To the best of Applicant's knowledge, this is a novel approach to the mature field of steganography.
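  A small sketch of the bookkeeping on the encoding side (a hypothetical helper only; the actual embedding of each per-line codeword would proceed exactly as in the FIG. 6 arrangement):

```python
def message_to_line_codewords(message: bytes):
    """Split a message into one 16-bit codeword (two bytes) per raster
    line; a 480-line image thus carries a 960-byte message."""
    assert len(message) % 2 == 0
    codewords = []
    for i in range(0, len(message), 2):
        word = (message[i] << 8) | message[i + 1]
        codewords.append([(word >> (15 - b)) & 1 for b in range(16)])
    return codewords

lines = message_to_line_codewords(b"Hi" * 480)   # a 960-byte message
assert len(lines) == 480 and len(lines[0]) == 16
```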

  In all three of the foregoing uses of universal codes, it is often desirable to append a short (perhaps 8- or 16-bit) private code in addition to the universal code. This gives the user a modest amount of further security against the possibility of universal codes being removed by sophisticated pirates.

Applicant's Prior Application

  The detailed description to this point has largely repeated the disclosure of Applicant's prior international application, published as PCT International Publication Pamphlet WO 95/14289. The foregoing repetition provides background for the disclosure that follows.

In some portions of this disclosure, such as the portion exemplified by the real-time encoder as contrasted with the N independent embedded code signals, an economizing step was taken whereby the N independent embedded code signals sharing a common extent with the source signal were designed such that the non-zero elements of any given embedded code signal were unique to that embedded code signal. Said more carefully, a given pixel/sample point of a given signal was “assigned” to one predetermined mth bit position of our N-bit identification word. Furthermore, and as another basic optimization of the implementation, the set of these assigned pixels/samples, taken across all N embedded code signals, was exactly the extent of the source signal; i.e., each and every pixel/sample position in the source signal was assigned to one and only one mth bit position of our N-bit identification word. (This is not to say, however, that each and every pixel must be changed.) As a simplification, we can therefore now speak of a single master code signal (or “snowy image”) rather than N independent signals, with the understanding that predefined positions in this master signal correspond to unique bit positions in our N-bit identification word. We have taken this detour in order to arrive at this somewhat simpler concept of the single master noise signal. Beyond mere economizing and simplification, there are also performance reasons for this approach, deriving from the idea that the individual bit positions of our N-bit identification word no longer contend for the information-carrying capacity of a single pixel/sample.

  With this clearer understanding of the single master code signal, we can now take a fresh look at other parts of this disclosure and explore further details within given application areas.

More on deterministic universal codes using the master code concept One suitable case for further exploration is the use of deterministic universal codes, referred to as item “2” in the section on universal codes. A given user of this technology might choose the following approach to applying its principles. The user in question happens to be a home video distributor, but clearly the principles extend to all other potential users of the technology. FIG. 13 pictorially shows the steps involved. In this example the user is assumed to be “Alien Productions.” They first form an image canvas that is coextensive with the video frame dimensions of their movie “Adventure of Bad.” On this canvas they print the name of the movie, place their logo, and print their company name. In addition, they place special information at the bottom, such as the distribution lot for the large run of copies they are currently making, and, as depicted, they can even include the number of the actual unique frame. Thus we find an example of a standard image 700 that forms the initial basis for the creation of a master snowy image (master code signal), which will be added to an original movie frame to form a distributable frame. The image 700 can be either black and white or color. The process of turning this image 700 into a pseudo-random master code signal is depicted as the encryption/scrambling routine 702, whereby the original image 700 is subjected to any number of well-known scrambling methods. The label “#28” alludes to the idea that there could in fact be a library of scrambling methods, and that the particular method used could change for this particular movie, or even for this particular frame. The result is our classic master code signal, or snowy image. Its brightness values are generally high, and the snowy image would look very much like what appears on a television receiver tuned to an empty channel, yet it is clearly derived from the useful image 700, transformed through the scrambling 702. (Note: the depiction of the snowy image in the figure is admittedly rather poor, owing to the limited drawing tools available to the inventor.)

  This master snowy image 704 then becomes the signal that is modulated by our N-bit identification word, as outlined elsewhere in this disclosure; the resulting modulated signal is scaled down in brightness to an acceptable perceived noise level and added to the original frame to generate the distributable frame.
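A minimal sketch of this modulate-and-add step follows, assuming the single-master-code-signal scheme described above, in which every pixel is pre-assigned to one bit position of the N-bit word. The gain, shapes, and all names are the author's illustrative assumptions.

    import numpy as np

    def modulate_master(master, id_word, bit_map, gain=3.0):
        # Modulate the master snowy image (704) by the N-bit identification
        # word: every pixel belongs to exactly one bit position m; a "1" bit
        # keeps the master's polarity at its pixels, a "0" bit inverts it.
        # `gain` scales the result down to an acceptable perceived noise level.
        signs = np.where(id_word[bit_map] == 1, 1.0, -1.0)
        return gain * signs * master

    rng = np.random.default_rng(7)
    frame   = rng.integers(0, 256, (480, 640)).astype(np.float64)  # stand-in original frame
    master  = rng.standard_normal((480, 640))                      # stand-in master snowy image 704
    bit_map = rng.integers(0, 8, (480, 640))                       # pixel -> bit position, N = 8
    id_word = rng.integers(0, 2, 8)                                # the N-bit identification word
    distributable = frame + modulate_master(master, id_word, bit_map)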

  There are various advantages and features provided by the method of FIG. 13, and variations on its themes. One clear advantage is that users can employ a more intuitive and personalized method of stamping and signing their work. Provided the encryption/scrambling routine 702 is suitably secure and is not disclosed or leaked, even a would-be pirate with knowledge of the logo image 700 cannot use that knowledge to reconstruct the master snowy image 704 and thereby, so to speak, crack the system; a simplistic encryption routine, on the other hand, may open the door to cracking the system. Another clear advantage of the method of FIG. 13 is the ability to carry additional information through the protection process. To be precise, the information contained in the logo image 700 is not carried directly in the final distributable frame; rather, if the encryption/scrambling routine 702 has a well-behaved and known decryption/descrambling counterpart that tolerates bit-truncation errors, then, in general, given a distributable frame, the N-bit identification codeword, the brightness scale-down factor used, and the decryption routine to be used, the image 700 can be recreated. The recreation of image 700 will only be approximate, owing to the scale-down operation itself and its associated bit truncation. For the present discussion, however, this whole issue is somewhat academic.

  A variation on the theme of FIG. 13 is to actually place the N-bit identification codeword directly onto the logo image 700. In a sense this would be self-referential. Thus, when we retrieve our stored logo image 700, our identification word is already contained on it; we apply encryption routine #28 to this image, scale it down, use this version to decode a suspect image using the techniques of this disclosure, and the N-bit word so found should match the word contained on our logo image 700.

  One desirable feature of the encryption/scrambling routine 702 may be that, given a small change to the input image 700, such as a change of a single digit in the frame number, there is a very large visual change in the output scrambled master snowy image 704. Likewise, the actual scrambling routine may change as a function of frame number, e.g., the “seed” number typically used in pseudo-randomization functions could change as a function of frame number. All manner of such alternatives helpful in maintaining a high level of security are possible. Ultimately, engineering optimization considerations will begin to study the relationship between some of these randomizing methods and the process of converting an uncompressed video stream into a compressed video stream, such as by the MPEG compression methodology, and how that relationship bears on maintaining signal strength levels through the compression.

  Another desirable feature of the encryption process 702 is that it be informationally efficient, i.e., that given any random input it be capable of outputting an essentially spatially uniform noise image with little or no residual spatial pattern beyond pure randomness. Any residual correlated patterning will contribute to inefficiency in encoding the N-bit identification word, as well as hand a would-be pirate another tool with which to attack the system.

  Another feature of the method of FIG. 13 is the more intuitive appeal of using recognizable symbols as part of a decoding system, something which ought to be advantageous in the essentially lay environment of a courtroom. It reinforces the simplicity of the coin-flip analogy mentioned elsewhere. A jury or judge will respond better to seeing the owner's logo emerge as one of the keys to recognizing a suspect copy as pirated.

  It should also be mentioned that, strictly speaking, the scrambling of the logo image 700 is not necessary: the steps could be applied directly to the logo image 700 itself, though it is not clear to the inventor what practical goal this would serve. A minor extension of this concept to the N = 1 case is the simple and easy adding of a logo image 700 to an original image at a very low brightness level. The inventor does not presume this trivial case to be novel. In many respects it is similar to the age-old issue of subliminal advertising, whereby a low-light-level pattern added to an image is supposedly recognizable to the human eye/brain system, perhaps operating at a subconscious level of the human brain. By pointing out these trivial extensions of the present technology, it is hoped that the distinctions of our new principles over such known prior art can be further clarified.

5-bit reduced alphanumeric code sets In some applications it is desirable for the N-bit identification word to actually represent names, companies, odd words, messages, and the like. Most of this disclosure concentrates on using the N-bit identification word for high statistical security, indexed tracking codes, and other index-based message carrying. The information-carrying capacity of “invisible signatures” inside imagery and audio is somewhat limited, however, and so when we wish to actually “write out” alphanumeric items in the N-bit identification word, it is wise to use our N bits efficiently.

  One way to do this is to define, or use an existing, reduced-bit (i.e., less than 8-bit ASCII) standardized code for passing alphanumeric messages. This can help meet the need in several classes of applications. For example, a simple alphanumeric code could be built on a 5-bit index table in which the letters V, X, Q and Z are omitted but the numerals 0 through 9 are included. In this way, a 100-bit identification word could carry 20 alphanumeric symbols. Another option is to use a variable-bit-length code, such as those used in text compression routines, wherein more frequently used symbols have shorter bit-length codes and less frequently used symbols have longer ones.
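A minimal sketch of such a 5-bit code set follows. The 22 remaining letters plus the 10 digits make exactly 32 symbols, so each symbol packs into exactly 5 bits; the function names and the specific symbol ordering are the author's assumptions, not part of the disclosure.

    # 22 letters (V, X, Q, Z omitted) + 10 digits = 32 symbols = exactly 5 bits each
    ALPHABET = "ABCDEFGHIJKLMNOPRSTUWY" + "0123456789"
    assert len(ALPHABET) == 32

    def encode_5bit(text):
        # Pack an alphanumeric message into a bit list, 5 bits per symbol, MSB first.
        bits = []
        for ch in text.upper():
            idx = ALPHABET.index(ch)          # raises ValueError for V, X, Q, Z, etc.
            bits.extend((idx >> s) & 1 for s in range(4, -1, -1))
        return bits

    def decode_5bit(bits):
        # Unpack 5-bit groups back into text, ignoring any trailing partial group.
        chars = []
        for i in range(0, len(bits) - len(bits) % 5, 5):
            idx = 0
            for b in bits[i:i + 5]:
                idx = (idx << 1) | b
            chars.append(ALPHABET[idx])
        return "".join(chars)

    assert decode_5bit(encode_5bit("JOESIMAGE")) == "JOESIMAGE"

Under this table, a 100-bit identification word carries 20 symbols, as stated above.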

Detecting and recognizing the N-bit identification word in suspect signals: an additional classic technique The detection of the N-bit identification signal fits nicely within the older art of detecting known signals in noise. Noise, in this sentence, can be interpreted very broadly, even to the point where an image or audio track itself can be considered noise relative to the need to detect the underlying signature signals. One of many references to this older art is Kassam, Saleem A., “Signal Detection in Non-Gaussian Noise,” Springer-Verlag, 1988 (generally available in well-stocked technical libraries, e.g., in the U.S. Library of Congress under catalog number TK5102.5.K357 1988). To the inventor's current understanding, none of the material in this book can be directly applied to the problem of finding the polarity of the applicant's embedded signals, but the broader principles do apply.

  In particular, section 1.2, “Basic Concepts of Hypothesis Testing,” of the Kassam book lays out the basic concept of a binary hypothesis, assigning the value “1” to one hypothesis and the value “0” to the other. The last paragraph of that section is also on point with respect to the previously described embodiment, i.e., the case where the “0” hypothesis corresponds to “noise only” and “1” corresponds to the presence of a signal in the observation. The applicant's use of true polarity is not the same, however: there, a “0” corresponds to the presence of an inverted signal rather than to “noise only.” In that embodiment the “noise only” case is effectively ignored, and the identification process yields either our N-bit identification word or “garbage.”

  The continued and inevitable engineering improvement of the detection of embedded code signals will certainly borrow heavily from this rich field of known-signal detection. A common and well-known technique in this field is the so-called “matched filter,” which is discussed, among other places, in chapter 2 of the Kassam book, and which is known in some fields as correlation detection. Many basic textbooks on signal processing include discussions of this method of signal detection. Moreover, when the phase or location of a known signal is known a priori, as is often the case in applications of this technology, the matched filter can often be reduced to a simple vector dot product between a suspect image and the embedded signal associated with the m-th bit plane of our N-bit identification word. This points to yet another simple “detection algorithm” that takes a suspect image and generates a sequence of 1s and 0s with the purpose of deciding whether that sequence corresponds to a pre-embedded N-bit identification word. Referring to FIG. 3, so to speak, we step through its process, including subtracting the original image from the suspect image; the next step is simply to step through all N random independent signals, compute the simple vector dot product between each of these signals and the difference signal, and assign a “0” where a dot product is negative and a “1” where it is positive. Careful analysis of this “one of many” algorithms will show its similarity to the traditional matched filter.
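The “one of many” decoding loop just described reduces to a few lines. The sketch below assumes access to the original image and to the N independent code signals; the mean removal is included per the later discussion around FIG. 14, and all names are illustrative assumptions.

    import numpy as np

    def decode_id_word(suspect, original, code_signals):
        # Classic "one of many" decoder: subtract the original from the
        # suspect image, then take a vector dot product between the
        # difference and each of the N code signals; positive -> "1",
        # negative -> "0".
        diff = suspect.astype(np.float64) - original.astype(np.float64)
        diff -= diff.mean()                  # mean removal (cf. FIG. 14 discussion)
        bits = []
        for code in code_signals:            # N arrays, same shape as the image
            c = code - code.mean()
            bits.append(1 if np.vdot(c, diff) > 0 else 0)
        return bits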

  There are also some direct improvements on basic matched filtering and correlation-style detection that can provide an increased ability to accurately detect very low level embedded code signals. Some of these improvements derive from principles described in the Kassam book; others were developed by the inventor, who has no particular knowledge of whether they appear in other papers or works and who has not performed an exhaustive survey of advanced signal detection techniques. One such technique is probably best captured by Kassam at page 79, FIG. 3.5, which contains plots of various locally optimal weighting coefficients that can be applied within a generic dot-product algorithmic approach to detection. That is, rather than computing the simple dot product, each elemental multiplication in the overall dot product can be weighted based upon known a priori statistical information about the difference signal itself, i.e., the signal within which the low-level known signal is being sought. The interested reader who is not already familiar with these topics is encouraged to read chapter 3 of Kassam for a fuller understanding.

  One principle that did not appear to be explicitly present in the Kassam book, and that was developed rudimentarily by the inventor, involves exploiting the magnitudes of the statistical properties of the known signal being sought relative to the statistical properties of the suspect signal as a whole. In particular, the problematic case arises when the embedded signal we are looking for is at a much lower level than the noise and corruption present in the difference signal. FIG. 14 attempts to set the stage for the reasoning behind this approach. The top plot 720 is a generalized histogram of a representative “problematic” difference signal, i.e., a difference signal that has a much higher overall energy than the embedded signal that may or may not be contained within it. The term “mean-removed” simply indicates that the means of both the difference signal and the embedded signal have been removed, a common operation prior to performing a normalized dot product. The bottom plot 722 is then a generalized histogram of the derivative of the two signals or, in the case of an image, the scalar gradient. From pure inspection it can be seen that a simple thresholding operation in the derivative transform domain, with a subsequent conversion back to the signal domain, will go a long way toward removing certain innate biases in the dot-product “identification algorithm” of the preceding paragraphs. Thresholding here refers to the idea of simply replacing the value of the difference-signal derivative with the threshold value whenever its absolute value exceeds that threshold. The threshold value can be chosen with reference to the histogram of the embedded signal.

  Another operation that can be of modest help in mitigating some of the bias effects in the dot-product algorithm is the removal of the lower frequencies in the difference signal, i.e., passing the difference signal through a high-pass filter, where the cutoff frequency of the high-pass filter is relatively near the origin (DC) frequency.
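The following sketch combines the two preconditioning steps just discussed: a thresholding step keyed to the local gradient magnitude, and a high-pass step implemented by subtracting a heavily smoothed copy of the signal. The exact form of the thresholding in the disclosure is open to interpretation; this clipping variant, the parameter values, and the function name precondition are the author's assumptions.

    import numpy as np
    from scipy import ndimage

    def precondition(diff, thresh=4.0, hp_sigma=8.0):
        # Step 1: clip the difference signal wherever the local gradient
        # magnitude exceeds a threshold (one reading of the derivative-domain
        # thresholding idea).
        gy, gx = np.gradient(diff)
        grad = np.hypot(gx, gy)
        clipped = np.where(grad > thresh, np.sign(diff) * thresh, diff)
        # Step 2: high-pass with a cutoff near DC, implemented by
        # subtracting a heavily smoothed copy of the signal.
        return clipped - ndimage.gaussian_filter(clipped, hp_sigma)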

Special considerations for recognizing embedded codes in compressed/decompressed signals, or in any signal that has passed through a known process creating non-uniform error sources Some signal processing operations, such as compressing and decompressing an image according to the JPEG/MPEG formats of image/video compression, create errors in certain transform domains that have a definite correlation and structure. Using JPEG as an example, if a given image is compressed and then decompressed at some fairly high compression ratio, and the resulting image is Fourier transformed and compared with the Fourier transform of the original uncompressed image, a definite patterning becomes clearly apparent. This patterning is indicative of correlated error, i.e., error that can to some degree be quantified and predicted. The prediction of the gross characteristics of this correlated error can then be used to advantage in the methods discussed for recognizing the embedded code signal in a suspect image that may have undergone either JPEG compression or any other operation that leaves such telltale correlated-error signatures. The basic idea is that regions with known higher levels of error should be given lower weight by the recognition methods than regions with known lower levels of correlated error. Often the expected levels of error can be quantified, and this quantification used to weight the transformed signal values appropriately. Using JPEG compression again as an example, a suspect signal can be Fourier transformed, and the Fourier-space representation may clearly show the characteristic box-grid pattern. The Fourier-space signal can then be “spatially filtered” near the grid points, this filtered representation transformed back into its normal time or space domain, and the recognition methods of this disclosure then applied. Likewise, for any signal processing method that creates non-uniform error sources, the suspect signal can be transformed into the domain in which these error sources are well separated, the values at the high points of the error sources attenuated, and the “filtered” signal transformed back into the time/space domain for standard recognition. Quite often this entire process will include the laborious step of “characterizing” the typical correlated-error behavior in order to “design” the appropriate filtering profiles.
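As a sketch of the JPEG case: the 8×8 block structure of JPEG concentrates correlated error at a grid of spatial frequencies, so one plausible preprocessing step attenuates the Fourier components near those grid points before recognition. The block size, notch width, attenuation factor, and the assumption that the image dimensions divide evenly by the block size are all the author's illustrative choices.

    import numpy as np

    def suppress_jpeg_grid(suspect, block=8, notch=2, atten=0.1):
        # Attenuate Fourier components near the box-grid points imprinted by
        # JPEG's 8x8 block structure, then return to the space domain for
        # standard recognition.  Assumes rows and cols divide evenly by `block`.
        F = np.fft.fft2(suspect.astype(np.float64))
        rows, cols = F.shape
        for u in range(0, rows, rows // block):
            for v in range(0, cols, cols // block):
                if u == 0 and v == 0:
                    continue                  # leave DC untouched
                F[max(0, u - notch):u + notch + 1,
                  max(0, v - notch):v + notch + 1] *= atten
        return np.real(np.fft.ifft2(F))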

“Signature codes” and “invisible signatures”
For simplicity and clarity, the terms “signature,” “invisible signature,” and “signature code” are, and will continue to be, used to refer to the general techniques of this technology, and often specifically to the composite embedded code signal as defined earlier in this disclosure.

More detail on embedding signature codes in video Just as there are differences between the JPEG standard for compressing still images and the MPEG standard for compressing video, so too are there differences between placing invisible signatures into still images and placing signatures into video. As with the JPEG/MPEG differences, it is not a matter of different underlying principles, but rather that the moving image's inclusion of time as a parameter opens up new dimensions of engineering optimization. Any textbook treatment of MPEG will invariably include a section on how MPEG is not (in general) simply JPEG applied frame by frame. Likewise, in applying the principles of this technology, the placement of invisible signatures into a video sequence is generally not simply the independent placement of a signature into each frame. A variety of time-based considerations come into play, some tied to the psychophysics of video perception, others to simple cost-engineering reasons.

  Some embodiments actually use the MPEG compression standard as part of the solution. Other video compression methods, whether already invented or yet to be invented, can serve equally well. This example also uses the scrambled-logo-image approach to generating the master snowy image, as depicted in FIG. 13 and discussed above in this disclosure.

  The “compressed master snowy image” is independently rendered, as shown in FIG. 15. “Rendering” refers to the technique, commonly used in video, movie and animation production, whereby an image or sequence of images is created by constructive means such as computer instructions or hand-drawn animation cels. Thus, to “render” the signature movie in this example is essentially to compute it as a digital file, or to design some custom digital electronic circuitry that creates it.

The overall goal of the procedure outlined in FIG. 15 is to apply the invisible signatures to the original movie 762 in such a way that the signatures, checked side by side at 768, remain optimally readable even after the movie has undergone MPEG compression and decompression. As noted above, the use of the MPEG process in particular is merely an example of a generic compression process. It should also be noted that the example given here admits of considerable engineering variation. In particular, it is a property of video compression techniques that if we begin with two video streams A and B, compress A and B separately and combine the results, the resulting video stream C is generally not the same as would be obtained by pre-combining streams A and B and compressing the combination. Thus, in general:
MPEG (A) + MPEG (B) ≠ MPEG (A + B)
This point, and the somewhat abstract notion it introduces at this stage of the disclosure, will become clearer in the discussion of FIG. 15. The general idea is that there is a kind of algebra to be exploited in optimizing the survival of the “invisible” signatures through the compression procedure. Clearly, the same principles depicted in FIG. 15 apply equally well to still images, with JPEG or any other still image compression standard.

  Returning now to the details of FIG. 15, we begin by simply stepping through all Z frames of a movie or video. For a two-hour movie played at 30 frames per second, Z turns out to be (30 × 2 × 60 × 60), or 216,000. The inner loop of 700, 702 and 704 merely mimics the steps of FIG. 13. The logo frame can optionally be changed as the frames step along. The two arrows emanating from box 704 represent both the continuation of loop 750 and the depositing of output frames into the rendered master snowy movie 752.

  A short but perhaps pertinent aside: the concept of the Markov process sheds some light on the discussion that follows concerning the optimization of the engineering implementation of FIG. 15. Briefly, a Markov process is one in which a sequence of events takes place and, in general, there is no memory between one step and the next in the sequence. In the context of FIG. 15 and a sequence of images, a Markovian sequence of images is one in which there is no apparent or appreciable correlation between a given frame and the next. Imagine taking the set of all movies ever produced, stepping one frame at a time, selecting a random frame from a random movie to be inserted into the output movie, and stepping through, say, one minute or 1800 of these frames. The resulting “movie” would be a fine example of a Markovian movie. One point of this discussion is that, depending on how the logo frames are rendered and how the encryption/scrambling step 702 is performed, the master snowy movie 752 will exhibit some generally quantifiable degree of Markovian character. The point of that point is that the compression procedure itself will be affected by this degree of Markovian character and that it therefore needs to be considered in designing the process of FIG. 15. Likewise, and in general, even if a perfectly Markovian movie were created in the high-brightness master snowy movie 752, the compression and decompression of that movie, represented by the MPEG box 754, would strip it of at least a minimal amount of its Markovian character, producing a non-Markovian compressed master snowy movie 756. This point is put to use when this disclosure discusses the idea of using multiple frames of a video stream to find a single N-bit identification word: where the same N-bit identification word is embedded in several frames of a movie, it is entirely reasonable to use the information from those multiple frames to find that single N-bit identification word. Thus, the non-Markovian nature of 756 lends some assistance to the reading and recognition of the invisible signature.

  The rendered high-brightness master snowy movie 752 is now sent through an MPEG compression and decompression procedure 754 for the purpose of pre-conditioning the final master snowy movie 756 that will actually be used. Because, as noted above, MPEG compression is generally not distributive, the idea of step 754 is to crudely separate the originally rendered snowy movie 752 into two components: the component that survives the compression process 754, namely 756, and the component that does not, obtained by the difference operation 758 to produce the “cheesy master snowy movie” 760. The deliberate use of the word “cheesy” reflects the fact that, for applications or situations that will never be subject to compression, this cast-off signature signal energy could be added back into a distributable movie in the same manner, even though it would not survive common compression (hence its depiction in FIG. 15). Returning to FIG. 15: having made this rough cut of the signature, which we know has a high probability of surviving the compression process unchanged, we take this “compressed master snowy movie” 756, scale it down 764, and run through the side-by-side comparison 768 with the original movie, guaranteeing that whatever commercial acceptability criteria have been set up (i.e., an acceptable perceived noise level) are also met. The arrow returning from the side-by-side step 768 to the scaling step 764 corresponds directly to the “visual experimentation” of FIG. 2 and the gain control 226 of FIG. 6. Those skilled in the art of image and audio information theory will recognize that the whole of FIG. 15 can be summarized as an attempt to pre-condition the invisible signature signals so that they better tolerate the compression they can fully expect to encounter. To repeat an earlier point, this idea applies equally to any pre-identifiable process to which an image, image sequence or audio track may be subjected. This obviously includes JPEG processing of still images.
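The split of the rendered signature into a surviving component and a “cheesy” cast-off component can be sketched as below. A real implementation would run an actual MPEG codec at box 754; here a crude low-pass round trip stands in for the lossy codec, purely as an assumption for illustration.

    import numpy as np

    def lossy_round_trip(frame, keep=0.1):
        # Stand-in for the MPEG compress/decompress of box 754: keep only
        # the lowest `keep` fraction of spatial frequencies.  A real system
        # would run an actual MPEG codec here.
        F = np.fft.fft2(frame)
        rows, cols = F.shape
        r, c = int(rows * keep), int(cols * keep)
        mask = np.zeros(F.shape, dtype=bool)
        mask[:r, :c] = True
        mask[:r, -c:] = True
        mask[-r:, :c] = True
        mask[-r:, -c:] = True
        return np.real(np.fft.ifft2(np.where(mask, F, 0)))

    rng = np.random.default_rng(1)
    rendered = rng.standard_normal((480, 640))   # rendered master snowy frame (752)
    survives = lossy_round_trip(rendered)        # compressed master snowy frame (756)
    cheesy   = rendered - survives               # the cast-off component (758 -> 760)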

Additional elements of real-time encoder circuitry Note that, in general, the method steps depicted in FIG. 15, from box 750 through the creation of the compressed master snowy movie 756, can with certain modifications be implemented in hardware. In particular, such a hardware circuit could entirely replace the analog noise source 206 of FIG. 6. Likewise, the steps and attendant procedures depicted in FIG. 13 can be implemented in hardware and used to replace the analog noise source 206.

Recognition based on more than one frame: non-Markovian signatures As indicated in the aside on Markovian and non-Markovian sequences of images, where the embedded invisible signature signals are non-Markovian in nature, i.e., where there is correlation between the master snowy image of one frame and that of the next, and, moreover, where a single N-bit identification word is used across a range of frames so that the sequence of N-bit identification words associated with the sequence of frames is itself non-Markovian, it is pointed out again that data from several frames of a movie or video can be used to recognize a single N-bit identification word. All of this is a fancy way of saying that the process of recognizing invisible signatures should use all the information available, in this case spread across multiple frames of a video sequence.

Header verification The concept of a “header” in a digital image or audio file is a well-established practice in the art. The top of FIG. 16 gives a simplified look at the header concept, wherein a data file generally begins with a comprehensive set of information about the file as a whole, often including information about the author or copyright holder of the data, if there is one. This header 800 is typically followed by the data itself 802, such as an audio stream, a digital image, a video stream, or compressed versions of these items. All of this is well known in the industry and quite standard.

  One way the principles of this technology can be employed in the service of information security is shown generally at the bottom of FIG. 16. In general, the N-bit identification word can be used to essentially “wallpaper” a given simple message repeatedly over an image (as depicted) or over an entire audio data stream. This is referred to as “header verification” in the title of this section. The idea is that less sophisticated would-be pirates and abusers can alter the information content of header information, and so the more secure techniques of this technology can be used as a check on the authenticity of the header information. Provided a code message, such as “joe's image,” in the header matches the message wallpapered through the image itself, the user who receives the image can have a correspondingly higher degree of confidence that no alteration of the header has taken place.

  Likewise, the header can verify the embedded code: the header can actually carry the N-bit identification word, stating that a given data set has been encoded via the methods of this technology and incorporating the exact identification code in the header itself. Of course, such a data file format has yet to be created, since the principles of this technology are not yet in use.

The “bodier”: potential for major variations on the header The following aspect of this technology is presented as a design variation that may become important at some point, even though it is not fully developed here. The title of this section coins the whimsical word “bodier” to illustrate the possibility.

  The previous section outlined how the N-bit identification word can “verify” information contained within the header of a digital file. There is also the prospect that these methods could replace the concept of the header altogether, with the information traditionally stored in a header placed directly within the digital signal, the empirical data, itself.

  This could be as simple as standardizing on, purely by way of example, a 96-bit (12-byte) leader string at the head of an otherwise entirely empirical data stream. This leader string would plainly and simply state the length, in elemental data units, of the entire data file not including the leader string, and the bit depth of an elemental data unit (e.g., its number of gray levels, or the number of discrete signal levels of an audio signal). From there, universal codes as described in this disclosure would be used to read an N-bit identification word written directly within the empirical data. The length of the empirical data would need to be sufficient to contain the full N bits, and the N-bit word would then efficiently convey whatever would otherwise have been contained in a traditional header.

  FIG. 17 depicts such a data format, here called the “universal empirical data format.” The leader string 820 comprises the 64-bit string length 822 and the 32-bit data word size 824. The data stream 826 immediately follows, and the information traditionally contained in a header, but now embedded directly within the data stream, is represented as the dotted line 828. Another term used for this added information is the “shadow channel,” as shown in FIG. 17.

  Another element that may need to be included in the leader string is some sort of composite checksum bits that can verify that the data file as a whole has not been altered.
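A sketch of packing and unpacking this leader string follows, using the field sizes of FIG. 17 (a 64-bit length and a 32-bit word size). The trailing CRC32 stands in for the composite checksum idea; it, and the function names, are the author's assumptions rather than part of the disclosed format.

    import struct
    import zlib

    def pack_leader(n_units, bits_per_unit, payload):
        # Build the 96-bit leader of FIG. 17: a 64-bit data-stream length (822)
        # and a 32-bit data word size (824), followed by the empirical data.
        # The trailing CRC32 is a stand-in for the composite checksum idea.
        leader = struct.pack(">QI", n_units, bits_per_unit)   # 8 + 4 bytes = 96 bits
        return leader + payload + struct.pack(">I", zlib.crc32(leader + payload))

    def unpack_leader(blob):
        n_units, bits_per_unit = struct.unpack(">QI", blob[:12])
        payload = blob[12:-4]
        crc = struct.unpack(">I", blob[-4:])[0]
        assert zlib.crc32(blob[:-4]) == crc, "data file has been altered"
        return n_units, bits_per_unit, payload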

More on distributed universal codes: dynamic codes One interesting variation on the universal code theme is the possibility of the N-bit identification word actually containing instructions that alter the behavior of the universal codes themselves. One of many examples: a data transmission begins, and a predetermined block of audio data is fully transmitted; an N-bit identification word is read, the first block being known to use universal code #145 out of a set of 500; and part of the N-bit identification word found there is an instruction that the next block of data is to be analyzed using universal code set #411 rather than #145. In general, this technology can thus serve as a way of changing the actual decoding instructions themselves on the fly. More generally still, such “dynamic codes” can greatly increase the sophistication of data verification procedures, making interference by less sophisticated hackers and pirates less likely and thereby increasing economic viability. The inventor does not believe that the concept of dynamically changing decryption/descrambling instructions is itself new, but the carrying of those instructions within the “shadow channel” of empirical data does appear to be new, to the best of the inventor's knowledge. (The “shadow channel” was defined as another whimsical coinage encapsulating the more steganographic aspects of this technology.)

  A variation on the dynamic-codes theme is the use of universal codes within systems that have pre-assigned knowledge of which codes to use and when. One way to sum up this possibility is the idea of the “daily password.” The password in this example represents knowledge of which universal code set is currently operative, and these change depending on the application-specific environment. Many applications would presumably update the universal codes continually, to sets never before used, as is often done with the traditional concept of the daily password. Part of a currently transmitted N-bit identification word could, for example, be the “password” for the following day. Even though time is the most common trigger for password changes, event-based triggers could be used as well.

Symmetric patterns and noise patterns: toward a robust universal coding system The placement of identification patterns into images is certainly not new in itself. Logos stamped into the corners of images, subtle patterns such as true signatures and the copyright circle-C symbol, and watermarks are all examples of placing patterns into images in order to signify ownership or to try to prevent unauthorized uses of creative material.

  What does appear to be new is the approach of placing independent “carrier” patterns, which can be directly modulated with certain information, into images and audio for the purposes of transmitting and identifying that information. The steganographic solutions currently known to the inventor place such information “directly” into the empirical data (usually encrypting it first, then placing it as directly as possible), whereas the methods of this disclosure posit the creation of (quite often) coextensive carrier signals, the modulation of those carrier signals with the information proper, and only then their direct application to the empirical data.

  In carrying these concepts one step further, to the arena of universal code systems in which a sending site embeds empirical data according to an agreed universal coding scheme and a receiving site analyzes the empirical data using that scheme, it is worth looking closely at the engineering requirements of such a system designed for the transmission of images or movies, as opposed to audio. More specifically, the same type of analysis should be performed for images (two-dimensional signals) as was carried out for the specific implementations of universal codes in the discussion surrounding FIG. 9 and its accompanying audio applications. This section is such an analysis and summary of a specific implementation of universal codes for imagery, and it attempts to anticipate the various hurdles that such a method must clear.

  The unifying theme of one implementation of a universal coding system for images and video is “symmetry.” The motivating idea is that simple rotation of an image cannot be permitted to serve as a means by which less sophisticated pirates bypass a given universal coding system; the guiding principle is that the universal coding system should remain readily readable regardless of the rotational orientation of the subject image. Such problems are well known in the fields of optical character recognition and object recognition, and those fields should be consulted for additional methods and means of assisting the engineering implementation of this technology. A direct example is probably in order.

  Imagine that a digital video and Internet company, “XYZ,” has developed a delivery system for its products in which the visual data of the individual video frames carries XYZ's own relatively secure internal signature codes using this technology, double-checked by an asymmetric universal code. This works well in many delivery situations, including their Internet gateway, where header information is matched and no material is passed unless an in-frame universal code is found. Other parts of their commercial network, however, perform global, routine monitoring of Internet channels to find unauthorized transmissions of their proprietary creative property. They control the encryption procedures used, so it is no problem for them to decrypt the creative property, headers included, and perform simple checks. A group of pirates wishing to traffic material across the XYZ network has, however, figured out how to defeat the security features of the XYZ header information system, and by simply rotating the imagery by some 10 or 20 degrees before sending it across the XYZ network, the network fails to recognize the codes and therefore does not flag the unauthorized use of the material; the recipients of the pirated material then merely rotate it back.

  To summarize this last example through a logical classification: an asymmetric universal code is acceptable for “enabling permitted actions based on the finding of the code,” but in the case of “random monitoring (policing) for the mere presence of a code” it runs the risk of being bypassed rather easily. [Asymmetric universal codes could quite well catch 90% of the fraud, i.e., the 90% of would-be pirates who will not bother with even the simple bypass of rotation.] Addressing this latter category requires the use of quasi-rotationally-symmetric universal codes. The “quasi” qualifier acknowledges the longstanding difficulty that a rotated object cannot be represented with full fidelity on a square grid of pixels in a single discrete transformation. In addition, basic consideration must be given to changes in the scale/size of the universal codes. It is understood that the monitoring process presumes (or is given) that the visual material being monitored is in the “perceptual” domain, i.e., that it is not encrypted or scrambled and is in a form that could be presented to a human viewer. A would-be pirate could of course resort to simple scrambling and descrambling techniques, and tools could in turn be developed to monitor for such telltale scrambled signals. In other words, a would-be pirate contemplating converting visual material out of the perceptual domain, passing it through the monitoring points, and converting the material back into the perceptual domain must weigh the costs of such a scenario. The monitoring discussed here therefore applies to applications in which the monitoring can be performed in the perceptual domain, i.e., where material actually intended for viewing is what is transmitted.

  A “ring” is the only perfectly rotationally symmetric two-dimensional object. A “disk” can be viewed as a simple finite series of concentric, perfectly abutting rings, each having some width along its radial axis. The “ring,” therefore, needs to be the starting point from which a more robust universal code standard for images can be built. The ring also lends itself well to the scale/magnification problem, in that the radius of a ring is a single parameter to hold onto and keep track of. Another property of the ring is that even when differential scale changes are applied to the different spatial axes of an image, turning the ring into an ellipse, many of the smooth and quasi-symmetric properties that any automated monitoring system will rely upon are generally maintained. Likewise, any perceptible geometric distortion of an image will clearly distort the rings, yet they can retain a goodly portion of their symmetry properties. Hopefully, more mundanely, simply “looking” at an image will reveal such distortions and, especially where distortion of that extent is what it takes to bypass the universal coding system, the attempted unauthorized piracy will be detectable on its face.

From rings to knots Having found the ring to be the only ideal symmetric pattern upon whose foundation a fully rotationally robust universal coding system can be built, we must now turn this basic pattern into something functional: something that can transport information, can be read by computers and other instrumentation, can survive simple transformations and corruptions, and can keep the economics of breaking the system a purely cost-increasing proposition (as explained in the section on universal codes, it probably cannot be raised to the level of security at which it simply cannot be broken).

  One example of a “ring-based” universal code is what the inventor refers to as “knot patterns,” or simply “knots,” after the woven Celtic knot patterns that were later refined and exalted in the work of Leonardo da Vinci (e.g., the Mona Lisa, or his knot engravings). Some rumor has it that these knot drawings were indeed steganographic in nature, i.e., conveying messages and signatures; all the more appropriate. FIGS. 18 and 19 explore some of the fundamental properties of these knot patterns.

  Two simple examples of knot patterns are depicted by the supra-radial knot 850 and the radial knot 852. These names derive from whether the constituent rings intersect the point of symmetry at the center of the arrangement, lie completely outside it, or, in the case of a sub-radial knot, have circles that contain the center point. The examples 850 and 852 clearly show a symmetric arrangement of eight rings or circles. “Ring” is the more precise term, as noted above, in that it makes explicit the width of the rings along their radial axes. The individual rings in knot patterns 850 and 852 will be the carrier signals for the signals associated with the bit planes of our N-bit identification word; each of knot patterns 850 and 852 is accordingly an 8-bit information carrier. Specifically, if knot patterns 850 and 852 are rendered as bright rings on a black background, then the “addition” of a bright ring to an independent source image can represent a “1” and the “subtraction” of a bright ring from an independent source image can represent a “0.” The application of this simple encoding scheme can then be replicated over and over, as in FIG. 19 and its mosaic of knot patterns; the encoded (modulated) knot mosaic is scaled down in brightness, added to the original image over its full extent, and the result is a distributable image encoded via this universal symmetric coding method. It remains to communicate to the decoding system which ring corresponds to the least significant bit of our N-bit identification word and which to the most significant bit. One such method is to make the radius of the rings increase slightly in going from the LSB to the MSB. Another is simply to give the MSB a radius 10% larger than the others and to pre-assign the order of the remaining bits, e.g., counterclockwise. Yet another is to place a simple hash mark inside one and only one circle. In short, there is a variety of ways in which the bit order of the rings can be encoded in these knot patterns.
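A toy rendering of this encoding follows: eight rings arranged symmetrically about a pattern center, each ring added bright for a “1” bit or dark for a “0” bit, and the pattern tiled into a mosaic. The geometry (radii, spacing), the omission of any MSB marking, and all names are illustrative assumptions only.

    import numpy as np

    def knot_pattern(id_bits, ring_radius=8.0, orbit=16.0, size=64):
        # Render one knot: len(id_bits) rings arranged symmetrically about
        # the pattern centre; ring m is added bright for a "1" bit and dark
        # (subtracted) for a "0" bit.  No MSB marking is included here.
        n = len(id_bits)
        yy, xx = np.mgrid[0:size, 0:size]
        pattern = np.zeros((size, size))
        for m, bit in enumerate(id_bits):
            ang = 2.0 * np.pi * m / n
            cy = size / 2.0 + orbit * np.sin(ang)
            cx = size / 2.0 + orbit * np.cos(ang)
            r = np.hypot(yy - cy, xx - cx)
            ring = np.abs(r - ring_radius) < 1.0   # a 2-pixel-wide ring
            pattern += (1.0 if bit else -1.0) * ring
        return pattern

    def knot_mosaic(id_bits, tiles=(8, 10)):
        # Tile the encoded knot over the image extent (cf. FIG. 19).
        return np.tile(knot_pattern(id_bits), tiles)

    # distributable = image + gain * knot_mosaic(id_word)   # gain of a few gray levels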

  The procedure for first verifying the mere presence of these knot patterns, and second for reading the N-bit identification word, is as follows. A suspect image is first Fourier transformed via the extremely common 2D FFT computer procedure. Assuming that we do not know the exact scale of the knot patterns, i.e., the diameter of an elemental ring of the knot pattern in pixels, and that we do not know the rotational state of the knot patterns, we simply inspect (via basic automated pattern recognition methods) the resulting magnitude of the Fourier transform of the original image for telltale ripple patterns, i.e., concentric low-amplitude sinusoidal rings on top of the spatial frequency profile of the source image. The periodicity of these rings, together with their spacing, will tell us whether or not the universal knot patterns are present and what their scale is in pixels. Classic small-signal detection methods can be applied to this problem, as can the other detection methods of this disclosure. Normal spatial filtering can then be applied to the Fourier-transformed suspect image, where the spatial filter used passes all spatial frequencies lying on the crests of the concentric circles and blocks all other spatial frequencies. The resulting filtered image is then inverse Fourier transformed from the spatial frequency domain back into the image space domain and, almost by visual inspection, the inversion or non-inversion of the bright rings, together with the identification of the MSB or LSB ring, yields the N-bit (here 8-bit) identification code word. Clearly, a pattern recognition procedure could perform this decoding step as well.
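A sketch of the first step, looking for the telltale concentric ripples, can be as simple as radially averaging the 2D FFT magnitude and inspecting the resulting profile for periodic bumps, whose spacing then gives the pattern scale. The function below is an illustrative assumption, not the disclosure's own detector.

    import numpy as np

    def ring_spectrum(suspect):
        # Radially average the 2D FFT magnitude; periodic bumps in this
        # profile are the telltale concentric ripples of a tiled ring/knot
        # pattern, and their spacing gives the pattern's scale in pixels.
        F = np.abs(np.fft.fftshift(np.fft.fft2(suspect.astype(np.float64))))
        rows, cols = F.shape
        yy, xx = np.mgrid[0:rows, 0:cols]
        radius = np.hypot(yy - rows / 2.0, xx - cols / 2.0).astype(int)
        sums = np.bincount(radius.ravel(), weights=F.ravel())
        counts = np.maximum(np.bincount(radius.ravel()), 1)
        return sums / counts   # inspect for periodic peaks (e.g., via a further FFT)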

  The foregoing discussion, and the methods it describes, carry certain practical disadvantages and shortcomings that will now be discussed and ameliorated. The basic methods were presented in a simple-minded form so as to convey the underlying principles involved.

  Let us enumerate a few of the practical difficulties of the above-described universal coding system using the knot patterns. First, (1) the ring patterns are somewhat inefficient in their “covering” of the full image space and in their use of the information-carrying capacity of the image extent. Second, (2) the ring patterns themselves will almost certainly be too visible if they are applied, for example, by simple addition to an 8-bit black-and-white image. Third, (3) the number of rings, eight, in knots 850 and 852 of FIG. 18 is rather small, and there is furthermore a rotational ambiguity of up to 22.5 degrees in the figures that the recognition methods will need to accommodate. Fourth, (4) regions of high concentration arise where complete overlapping of the rings makes the added and subtracted brightnesses conspicuously detectable. Fifth, (5) the 2D FFT routine used in decoding, along with some of the pattern recognition methods mentioned, are notoriously computationally intensive. Sixth and finally, (6) the forms of universal coding described so far make no claim to the ultra-high security of the classic best secure communication systems; nevertheless, they are inexpensive to implement in hardware and software, they raise the costs for would-be pirates attempting to crack the system, they increase the level of sophistication required of those pirates, and certain security features make it possible to hold would-be pirates accountable for outright criminal activity (such as the creation and distribution of tools for stripping these knot pattern codes from creative property), so that would-be pirates must also reckon with a system of criminal penalties, which adds its own advantage.

  All of these items can be addressed, and should continue to be refined, in any engineering implementation of the principles of this technology. The present disclosure addresses these items through the following embodiments.

  Starting with item (3), the presence of only eight rings as shown in FIG. 18 is simply remedied by increasing the number of rings. The number of rings a given application will use is clearly a function of that application. The trade-offs include the following. Arguing for fewer rings: with fewer rings there is ultimately more signal energy per ring (per unit of visibility); there are fewer rings for the automated recognition methods to identify; and, generally, because there are fewer of them, the entire knot pattern can be reduced to a smaller overall pixel extent, e.g., a 30-pixel-diameter region rather than a 100-pixel-diameter region. Arguing for more rings: the desire to convey more information, such as ASCII information, serial numbers, access codes, allowed-use codes, history information, and the like; and, as another key advantage of more rings, the rotation needed to bring the knot pattern back onto its initial alignment is reduced, so that the recognition methods need only contend with a smaller range of rotation angles (e.g., with 64 rings, the knot pattern returns to its initial alignment after a rotation of about 5.5 degrees, so a recognition method need only scan through slightly less than 3 degrees either way; the need to identify the MSB/LSB and the bit-plane order, already discussed, can also be better appreciated in this example). Most practical applications will likely choose between 16 and 128 rings, corresponding to N = 16 through N = 128 for the number of bits in the N-bit identification code word. This choice is somewhat correlated with the overall radius, in pixels, allotted to an elemental knot pattern such as 850 or 852.

  Taking up item (4) of the practical difficulties, the concentration of the ring patterns in some places and their absence in others (very similar to, but still distinct from, the inefficient covering of item (1)), the following improvement can be applied. FIG. 18 depicts the key feature of “knots” (as opposed to mere ring patterns): where the patterns would otherwise intersect, a hypothetical third dimension is posited, whereby one strand at a crossing takes precedence over the other in some predetermined way (see item 854). From the image's point of view, the brightness or darkness at a given crossing in the knot pattern is “assigned” to one and only one of the two or more rings that overlap at that location. The idea extends to making the rules of this assignment rotationally symmetric as well (864), for example a clockwise rule whereby the strand entering a loop passes under the strand leaving it. Clearly, numerous variants of such rules can be applied, many of them depending critically on the geometry of the chosen knot patterns. Related issues, such as the rules for assigning brightness to a given pixel beneath a strand of finite width, and the brightness profile of the strand along the axis perpendicular to the strand direction, will each likewise play a role.

  A major improvement to the nominal knot pattern system described above directly addresses practical difficulties (1), the inefficient covering, (2), the unwanted visibility of the rings, and (6), the need for a higher level of security. It also indirectly addresses the overlap issue of item (4) discussed in the preceding paragraph. The major improvement is this: prior to the step of adding the encoded knot pattern mosaic to an original image to produce a distributable image, the encoded knot pattern mosaic 866 is spatially filtered (using common 2D FFT techniques) by a standardized and (generally smoothly) random phase-only spatial filter. It is very important to note that this phase-only filter is itself completely rotationally symmetric within the spatial frequency domain, i.e., its filtering action is fully rotationally symmetric. The effect of this phase-only filter on an individual bright ring is to transform it into a smoothly varying pattern of concentric rings, not entirely unlike the ripples on a pond shortly after a stone is dropped in, except that in the case of this phase-only filter the ripple pattern is somewhat more random than the uniform periodicity of a stone's ripples. FIG. 20 attempts a coarse (i.e., non-grayscale) depiction of these phase-only filtered ring patterns. The top of FIG. 20 shows a cross section 874 of a representative brightness profile of one of these filtered ring patterns. The center 872 of each ring is the point about which this brightness profile is rotated in order to fully describe the two-dimensional brightness distribution of one of the filtered patterns. Another coarse attempt to convey the character of the filtered ring is presented as the crude grayscale image 876. Such a phase-only filtered ring 876 can be referred to as a random wave pattern.
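A sketch of such a filter follows: unit magnitude everywhere (hence “phase-only”), with a keyed random phase assigned per integer frequency radius so that the filter's action is rotationally symmetric. The per-radius phase here is random rather than smoothly varying, taking the real part makes the inverse filtering only approximate in this toy form, and all names and parameters are the author's assumptions.

    import numpy as np

    def phase_only_filter(mosaic, key=99):
        # Rotationally symmetric phase-only filter: |H| = 1 everywhere, with
        # one keyed random phase per integer frequency radius.  Decoding
        # applies conj(H); taking the real part makes that inverse only
        # approximate in this toy form.
        rows, cols = mosaic.shape
        yy, xx = np.mgrid[0:rows, 0:cols]
        radius = np.hypot(yy - rows / 2.0, xx - cols / 2.0).astype(int)
        rng = np.random.default_rng(key)
        phase_of_radius = rng.uniform(-np.pi, np.pi, radius.max() + 1)
        H = np.exp(1j * phase_of_radius[radius])
        F = np.fft.fftshift(np.fft.fft2(mosaic))
        return np.real(np.fft.ifft2(np.fft.ifftshift(F * H)))   # the knot tapestry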

  What FIG. 20 does not show is the composite effect of the phase-only filtering on the knot patterns of FIG. 18, or on a mosaic of knot patterns as in FIG. 19. Each ring in knot pattern 850 or 852 gives rise to a 2D brightness pattern of the form 876, and together they form a rather complicated brightness pattern. With the ring encoding performed by making a ring bright (“1”) or dark (“0”), the resulting phase-only filtered knot pattern is no longer perceptible as such to the human eye, yet takes on subtle characteristics that a computer can readily identify, in particular after the phase-only filtering has been inverse-filtered to reproduce the original ring patterns.

  Returning now to FIG. 19, imagine that an 8-bit identification word has been encoded into the knot pattern mosaic and the mosaic has then been phase-only filtered. The resulting brightness distribution would be a rich tapestry of overlapping wave patterns, possessing a certain beauty but by no means legible to the eye/brain. [An exception to this might be drawn from the knowledge of certain South Pacific island communities, whose navigators are said to have learned the subtle art of reading the complex superposition of ocean swells, generated far off and dispersed and reflected among the islands, as a primitive but effective means of navigation.] For want of better terminology, the resulting filtered knot pattern mosaic (obtained from 866) can be called an encoded knot tapestry, or simply a knot tapestry. Some basic properties of this knot tapestry are: the basic rotational symmetry of the mosaic from which it was generated is retained; it is generally unintelligible to the eye/brain, and thus considerably harder to reverse engineer (a point of greater importance in the next section); it makes fuller use of the available information content of the pixel grid, a step up from the basic knot concepts of 854 and 864; and its signal energy is spread out in a wave-like manner, so that no “hot spots” arise that are conspicuously visible to a viewer.

  The basic decoding process previously described must now add the step of inverse filtering the phase-only filter used in encoding. This inverse filtering is quite well known in the image processing industry. If the scale of the knot patterns is known a priori, the inverse filtering is straightforward; if the scale is not known, on the other hand, an additional step of discovering this scale is in order. One such method of discovering the scale is to iteratively apply the inverse phase-only filter to variously scaled versions of the image being decoded, looking for the scaled version at which distinct knot patterns begin to appear. A common search algorithm, such as the simplex method, can be used to find the scale of the patterns quite accurately. The field of object recognition should also be consulted, under the general topic of detecting objects of unknown scale.

  The knot tapestry affords an additional gain in the efficiency of covering the image's pixel grid, which is worth a word here. Most applications of the knot tapestry method of universal image coding will place the fully encoded tapestry (i.e., the tapestry carrying the embedded N-bit identification word) onto a source image at a relatively low brightness level. In practical terms, the brightness of the encoded tapestry might range, for example, from -5 gray-scale values to +5 gray-scale values in a typical 256-gray-scale image, with the predominance of values between -2 and +2. This raises a purely practical matter: the bit-truncation error that the knot tapestry will suffer. As an example, imagine a perfect 256-gray-level rendition of a knot tapestry, scaled down in brightness by a factor of 20 (thereby incurring a bit-truncation step), then rescaled back up in brightness by the same factor of 20, and the result inverse phase-only filtered: the resulting knot pattern mosaic would be a significantly degraded version of the original knot pattern mosaic. The point of bringing all this up is the following: the engineering task, simply stated but challenging in practice, is to select the various free parameters of the design of the knot tapestry method such that the maximum amount of information about the N-bit identification word is carried within a pre-defined visibility tolerance of the knot tapestry. The free parameters include the radius of the elemental ring in pixels; N, the number of rings; the distance in pixels from the center of a knot pattern to the centers of its elemental rings; the packing criteria and distances from one knot pattern to the next; the rules for the weaving of the strands; and the forms and characteristics of the phase-only filter applied to the knot mosaic. It would be desirable to feed these parameters into a computer optimization routine to assist in their selection; given the many non-linear free parameters involved, this begins as an art rather than a science.

  A side note on the use of phase-only filtering is that it can assist in the detection of the ring patterns. The inverse filtering of the decoding process tends to “blur” the underlying source image to which the knot tapestry has been added, while at the same time tending to “bring into focus” the ring patterns. Without this blurring of the source image, the ring patterns would have a more difficult time “competing against” the sharp features of typical images. The decoding procedure should also employ the gradient thresholding method described in another section. Briefly, this is the method wherein, if regions of the source signal are found to be of much larger brightness than our signature signal, those high-gradient regions are thresholded, effectively raising the level of the signature signal relative to the source signal.

  Returning to the other practical difficulties mentioned above, item (5), concerning the relative computational overhead of 2D FFT routines and of typical pattern recognition routines: a first, not altogether satisfying, remedy here is to find simpler methods of recognizing and decoding the polarities of the ring luminances more quickly than the 2D FFT allows. Failing this, if the pixel extent of each knot pattern (850 or 852) is, for example, 50 pixels in diameter, then a simple 64 by 64 pixel 2D FFT over a suitable portion of the image turns out to be sufficient to identify the N-bit identification word described above. The idea is to use the minimum required image area to identify the N-bit identification word, as opposed to using the entire image.
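
  As a sketch of this minimal-area approach (assumptions: the knot pattern's location is already known, the window size of 64 is taken from the text, and the mapping from ring radii to spectral bins is left open, being one of the free design parameters noted above):

```python
import numpy as np

def local_ring_spectrum(image, x, y, size=64):
    """64-by-64 2D FFT over one knot pattern; the polarity of each ring's
    luminance (hence each bit of the N-bit word) would be judged from
    the energy and phase at that ring's spatial frequencies."""
    patch = image[y:y + size, x:x + size].astype(float)
    window = np.outer(np.hanning(size), np.hanning(size))  # tame edge effects
    spectrum = np.fft.fftshift(np.fft.fft2(patch * window))
    return np.abs(spectrum), np.angle(spectrum)
```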

  Another note: those skilled in the science of image processing could, rather than beginning the discussion of the knot tapestry with the use of rings, jump straight to the use of arbitrary 2D brightness distribution patterns 876 serving in the same capacity. The use of the "ring" as a baseline technique is somewhat instructive, however, and is in any event relevant to this disclosure. More to the point, the use of true "rings" in the decoding process after inverse filtering is probably the simplest form to feed into typical pattern recognition routines.

Those skilled in the signal processing arts will recognize that computers employing neural network architectures are well suited to the pattern recognition and detection-of-small-signals-in-noise problems posed by this technology. Complete coverage of these subjects is beyond the scope of this document; interested readers are referred to, for example: Cherkassky, V., et al. (eds.), "From Statistics to Neural Networks: Theory and Pattern Recognition Applications", Springer-Verlag, 1994; Masters, T., "Signal and Image Processing with Neural Networks: A C++ Sourcebook", Wiley, 1994; "Advances in Pattern Recognition Systems Using Neural Network Technologies", World Scientific Publishers, 1994; Nigrin, A., "Neural Networks for Pattern Recognition", MIT Press, 1993; and Chen, C., "Neural Networks in Pattern Recognition and Their Applications", World Scientific Publishers, 1991.

2D Universal Code II: Simple Scan Line Realization of the One-Dimensional Case The above sections on rings, knots and tapestries certainly have their aesthetic appeal, but some of the steps involved may be so complex that they become too expensive for certain practical applications. A poor cousin of the ring and its well-appointed symmetry is simply to take the basic concepts presented in connection with FIG. 9 and the audio signal, and apply them to two-dimensional signals such as images, in such a way that each scan line in the image has a random starting point within, for example, a 100-pixel-long universal noise signal. The identification software and hardware would then be obliged to interrogate the image across the full range of rotation states and scale factors in order to find the "existence" of these universal codes.
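
  A minimal sketch of this scan-line scheme follows (illustrative choices throughout: a bipolar 100-sample code, fixed seeds, a gain of 2 gray levels). Decoding would cross-correlate each scan line against the code over all 100 cyclic shifts, and, as noted, over rotation and scale as well.

```python
import numpy as np

UNIVERSAL = np.random.default_rng(7).choice([-1.0, 1.0], size=100)

def encode_scanlines(image, gain=2.0, seed=11):
    """Add the 100-pixel universal noise code to every scan line, each
    line entering the code at its own random starting point."""
    out = image.astype(float).copy()
    starts = np.random.default_rng(seed).integers(0, 100, size=image.shape[0])
    for row, s in enumerate(starts):
        idx = (s + np.arange(image.shape[1])) % 100
        out[row] += gain * UNIVERSAL[idx]
    return np.clip(out, 0, 255), starts
```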

Universal Commercial Copyright (UCC) Image, Audio and Video File Formats As is well known, there is an unfortunate abundance of file format standards (and non-standards) for digital images, digital audio and digital video. These standards have generally grown up within specific industries and applications, and with the spread and interchange of creative digital material, the various file formats have battled it out in the marketplace for cross-discipline acceptance, with the result that today we see a de facto histogram of devotees and users spread across a range of favored formats. The JPEG and MPEG standards for formatting and compression are notable exceptions, examples of what can happen when concerted cross-industry cooperation sets to work.

  The desire for a simple and universal file format standard for audio/visual material is very old; the desire for the protection of such material is older still. Recognizing the inherent difficulties associated with the creation of any universal format, and offering only the most general outline of such a plan within a patent disclosure, the inventor nevertheless believes that the methods of this technology could be as useful as anything as the basis of a globally accepted "universal commercial copyright" format. It is fully recognized that such an animal is not created by proclamation, but through the efficient meeting of broad needs, along with persistence and luck. More germane to the purposes of this disclosure are the benefits that would accrue if the use of this technology were to become a core element of an industry-standard file format. In particular, the use of universal codes could be specified within such a standard. The fullest expression of the commercial application of this technology would come from the simple knowledge that material is invisibly signed, instilling confidence in copyright holders.

  The following is a list of reasons why the principles of this technology could serve as a catalyst for such a standard. (1) Few technical developments, if any, have so isolated and so directly addressed the problem of the inherently incomplete protection of empirical data and audio/visual material. (2) All of the file formats mentioned above treat information about the data, and the data itself, as two separate and physically distinct entities, whereas the methods of this technology can combine the two into a single physical entity. (3) Large-scale application of the principles of this technology will in any event require practical standardization work, including integration with future improvements in compression technologies; no standards substrate for this yet exists. (4) The growth of multimedia has created a generic class of data called "content", encompassing text, images, sound and graphics, and with it discussion of ever-higher-level "content standards". (5) The direct marriage of copyright protection technology and security features into file format standards is long overdue.

  Elements of such a universal standard would necessarily include the mirror image of the header verification methods, wherein the header information is verified directly by the signature codes within the data. The universal standard would also outline how hybrid uses of fully private codes and public codes should commingle, so that if the public codes were "stripped out" by a sophisticated pirate, the private codes would remain intact. The universal standard would further specify how invisible signatures evolve as a digital image or audio file evolves: thus, when a given image is created from several source images, the standard would define how and when the old signatures are removed and replaced by new ones, whether the header keeps a record of their evolution, and whether the signatures themselves keep some form of record.

Pixel vs. Protrusion Much of this disclosure focuses on the pixel as the basic carrier of the N-bit identification word. The section discussing the use of a single "master code signal" went so far as to essentially "assign" each and every pixel to a unique bit plane of the N-bit identification word.

  In many applications, however, as exemplified by ink-based printing at 300 dots per inch, a pixel in the original digital image file becomes, in actuality, a dithered ink blob on (for example) a piece of paper. The isolated information-carrying capacity of the original pixel is often compromised by the spilling of adjacent pixels into the geometrically defined space of that original pixel. Those skilled in the art will recognize this as simple spatial filtering, and as various forms of blurring.

  In such situations, it can be advantageous to assign a unique bit plane of the N-bit identification word not to a single pixel but to a very local group of pixels. The ultimate goal is simply to pre-concentrate more of the signature signal energy into the lower spatial frequencies, recognizing that most practical implementations quickly strip away or mitigate the higher frequencies.

  A naive approach would be to assign the same basic signature gray value to a 2 by 2 block of pixels, rather than modulating a single assigned pixel. A better approach is shown in FIG. 21, which depicts an array of pixel groups; this is one specific example of a large class of arrangements. The idea is that a certain small region of pixels is associated with a given unique bit plane of the N-bit identification word, and that such groupings may actually share pixels between bit planes (even though pixels need not be shared, as in the case of 2 by 2 blocks of pixels).

  Shown in FIG. 21 is a 3 by 3 array of pixels with exemplary normalized weights (normalized meaning the weights sum to 1). The methods of this technology then operate on this elemental "protrusion" as a unit, rather than on a single pixel. It can be seen that in this example there is a four-fold reduction, owing to the spreading out of the signature signal, in the number of master code values that need to be stored. Applications of this "protrusion approach" to the laying down of invisible signatures include any application that will experience a large amount of blurring, known a priori, and that requires accurate identification even after such severe blurring.
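
  The following sketch shows one way such a protrusion might be applied as a unit. The 3 by 3 weights here are merely plausible stand-ins for those of FIG. 21 (which this text does not enumerate), normalized to sum to 1:

```python
import numpy as np

PROTRUSION = np.array([[1., 2., 1.],
                       [2., 4., 2.],
                       [1., 2., 1.]])
PROTRUSION /= PROTRUSION.sum()   # normalized -> total weight of 1

def modulate_protrusion(image, cy, cx, delta):
    """Nudge a 3x3 pixel group, as a unit, by 'delta' (+gain or -gain
    according to the master code value and the assigned bit of the
    N-bit identification word)."""
    image[cy - 1:cy + 2, cx - 1:cx + 2] += delta * PROTRUSION
```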

More on the Steganographic Uses of This Technology As mentioned in the first section of this disclosure, the art and science of steganography is the general prior art of this technology. It will no doubt be obvious to readers who have ventured this far that, with the roles reversed, the methods of this technology can be used as a novel way of performing steganography. (Indeed, all of the discussion thus far can be viewed as an investigation of various forms and implementations of steganography.)

  In this section we consider steganography as the need to convey a message from point A to point B, where that message is effectively hidden within independent empirical data. As anyone in the telecommunications industry can attest, the range of purposes for conveying messages is vast. Presumably there is some special need, beyond pure hobby, not to send a given message through any number of conventional and simpler channels. Past literature and product promotions in steganography have often made this special need the requirement of hiding the very fact that a message is being sent at all. Another possible need is that conventional communication channels are not directly available, or are cost-prohibitive, i.e., that the sender of messages can only "transmit" them by encoding them into empirical data in some fashion. This disclosure incorporates, by reference, all previous discussions of the myriad uses to which steganography might be put, and adds the following uses not, to the inventor's knowledge, yet described.

  The first such use is very simple: transporting messages about given empirical data within that empirical data itself. The medium becomes the message, if a trivial pun may be forgiven, though it is unlikely that no previous practitioner of steganography has already worn out this joke. Certain considerations in placing information about empirical data directly within that empirical data have already been covered in the section on replacing headers and the concept of the "body".

  The advantage of placing a message about empirical data directly within that data is that there remains only one class of data objects, rather than the two classes of before. In any two-class system, there is the risk of the two classes becoming disassociated, or of one class being corrupted without the other class knowing about it. A concrete example is what the inventor refers to as "device-independent instructions".

  There is a myriad of machine data formats and data file formats. This excess of formats is notorious for hindering progress toward universal data exchange, and for keeping one machine from doing the same thing another machine can do. Instructions that an originator may have placed into the second class of data (i.e., the header) may be entirely incompatible with a machine that is meant to recognize those instructions; and where format conversions are performed, critical instructions may be stripped away along the way, or garbled. The improvement disclosed here is to use the methods of this technology to "seal" certain instructions directly into empirical data, so that all a reading machine need do to recognize instructions and messages is to perform a standardized "recognition algorithm" on the empirical data (provided, of course, that the machine can at least properly "read" the empirical data itself). All machines could implement this algorithm in any old way they choose, using any computer and whatever internal data formats they require.

  Implementations of this device-independent instruction method would generally not be motivated by concerns about piracy or the unauthorized removal of the sealed messages; rather, the embedded messages and instructions would ideally be a central and functional part of the basic value of the material.

  Another example of a steganographic use of this technology is the embedding of universal-use codes for the benefit of a user community. The "message" being conveyed can simply be a registered serial number identifying ownership to users who wish to legitimately use and pay for the empirical information. The serial number could index into a vast registry of creative property, containing the owner's name, pricing information, billing information, and the like. The "message" could equally be a grant of free and public use for some given material. Similar owner identification and usage indexing can of course be achieved with two-class data structure methods such as headers, but the one-class system of this technology offers certain advantages over two-class systems in areas such as file format conversion, header compatibility, internal data format issues, header/body archiving issues, and changes of media.

Fully Exact Steganography The prior-art steganographic methods currently known to the inventor generally involve fully deterministic or "exact" prescriptions for conveying a message. That is, it is a basic assumption that, for a given message to be transmitted with perfect accuracy, the receiver of the information needs to receive the exact digital data file sent by the sender, with no bit errors or data "loss" admitted. By definition, "lossy" compression and decompression of empirical data defeat such steganographic methods. (Prior art such as the work of Komatsu, described earlier, is an exception here.)

  The principles of this technology can likewise be employed as an exact form of steganography proper. Such exact forms of steganography, whether from the prior art or from this technology, should be combined with the relatively new arts of "digital signatures" and/or the DSS (digital signature standard), whereby the receiver of a given empirical data file can first verify that not a single bit of information has been altered in the received file, and can thereby verify that the exact steganographic message contained within it has not been altered.

  The simplest way to use the principles of this technology in an exact steganographic system is to employ the previously discussed "designed" master noise scheme, with the master snow code not allowed to contain zeros. Both the sender and the receiver of the information need access to both the master snow code signal and the original, unencoded source signal. The receiver of the encoded signal merely subtracts the original signal, giving the difference signal, and a simple polarity check between the difference signal and the master snow code signal, data sample by data sample, generates one message bit per sample. Presumably, data samples with values near the "rails" of the gray value range would be excluded (e.g., the values 0, 1, 254 and 255 in empirical data that is 8 bits deep).
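
  A minimal sketch of this exact decoding prescription follows; it assumes the encoder added the master snow code sample-wise, with polarity set by the message bits, and it skips samples whose original values lie near the rails:

```python
import numpy as np

def exact_stego_decode(encoded, original, master_snow, lo=2, hi=253):
    """One message bit per usable sample, by polarity check.

    master_snow contains no zeros; samples whose original values fall
    outside lo..hi (near the gray-value 'rails') are excluded."""
    diff = encoded.astype(int) - original.astype(int)
    usable = (original >= lo) & (original <= hi)
    # A difference with the same sign as the snow code reads as a 1 bit.
    bits = (np.sign(diff) == np.sign(master_snow))
    return bits[usable].astype(int), usable
```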

Statistical Steganography The need for the receiver of a steganographically embedded data file to have access to the original signal can be removed by turning to what the inventor refers to as "statistical steganography". In this approach, the methods of this technology are employed as a simple a priori set of rules governing the reading of an empirical data set in search of an embedded message. This method, too, can profitably be combined with prior-art methods of verifying the integrity of a data file, such as the DSS (see, e.g., Walton, "Image Authentication for a Slippery New Age", Dr. Dobb's Journal, April 1995, p. 18, on methods of verifying the sample-by-sample, bit-by-bit integrity of a digital image).

  Statistical steganography presupposes that both the sender and the receiver have access to the same master snow code signal. This signal can be entirely random and securely transmitted to both parties, or it can be generated from a shared, securely transmitted lower-order key that seeds the generation of the larger pseudo-random master snow code signal. Chunks of the message, say 16 bits at a time, are conveyed within contiguous 1024-sample blocks of empirical data, and the receiver applies, by prior agreement, the dot product decoding method as outlined in this disclosure. The sender of the information verifies beforehand that the dot product approach actually produces the accurate 16-bit values (i.e., the sender pre-checks that crosstalk between the carrier image and the message signal does not cause any of the 16 bits to be flipped by the dot product operation). Some fixed number of 1024-sample blocks is transmitted, the 16-bit message thus being repeated that same number of times. Where the transmitted data is known to remain strictly in digital form, DSS techniques can verify the integrity of the message; internal checksums and error-correcting codes, by contrast, can be employed in situations where the data may be altered or transformed in transmission. In this latter case, it is best to lengthen the blocks of samples for a given message content size (e.g., 10K samples per 16-bit message chunk, merely as an example).
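
  As a sketch of the receiver's side (the sample-to-bit assignment below, in which sample i of a 1024-sample block carries bit i mod 16 and the snow code is added or subtracted per bit, is one assumed prescription among many the parties could agree on):

```python
import numpy as np

def decode_chunk(block, snow_block, bits_per_chunk=16):
    """Dot-product decode of one 16-bit chunk from a 1024-sample block;
    no access to the original signal is required, the sender having
    pre-checked that carrier crosstalk flips no bits."""
    bits = []
    for b in range(bits_per_chunk):
        idx = np.arange(b, block.size, bits_per_chunk)
        dot = float(np.dot(block[idx], snow_block[idx]))
        bits.append(1 if dot > 0 else 0)
    return bits
```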

  Dwelling a moment longer on the topic of error correction in steganography: it will be recognized that many of the decoding techniques disclosed herein operate on the principle of discriminating pixels (or protrusions) that were increased by the encoded data from those that were decreased. Discriminating these positive and negative cases becomes increasingly difficult as the delta values (e.g., the differences between the encoded pixels and the corresponding original pixels) approach zero.

  A similar situation arises in certain modem transmission techniques, where an ambiguous middle ground separates the two desired signal states (e.g., +/-1). Errors resulting from misinterpretation of this middle ground are sometimes termed "soft errors". Principles from modem technology, and from other arts in which this problem arises, can be borrowed to mitigate such errors in the present situation as well.

  One approach is to weight the "reliability" of each delta measurement. If a pixel (protrusion) clearly indicates one state or the other (e.g., +/-1), its "reliability" is said to be high, and it is given proportionally greater weight. Conversely, if a pixel (protrusion) is relatively ambiguous in its indication, its reliability is correspondingly lower, and it is given proportionally less weight. By weighting the data from each pixel (protrusion) in accordance with its reliability value, the effect of soft errors can be greatly reduced.

  Such reliability weighting can also be employed as a useful adjunct to other error detection and correction schemes. For example, with known error-correcting polynomials, the above-described weighting parameters can be used to further refine the polynomial-based identification of error locations.
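
  A sketch of such reliability weighting for a single bit plane follows; the saturating |delta|-based reliability function is simply one plausible choice, not one prescribed above:

```python
import numpy as np

def weighted_bit_estimate(deltas, noise_sigma=1.0):
    """Combine many delta readings for one bit. Ambiguous readings near
    zero (soft-error territory) contribute almost nothing; clear
    readings saturate at full weight around 3 sigma."""
    reliability = np.clip(np.abs(deltas) / (3.0 * noise_sigma), 0.0, 1.0)
    score = float(np.sum(np.sign(deltas) * reliability))
    return 1 if score > 0 else 0
```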

“Noise” in vector graphics and extremely low-order indexed graphics
The methods of this disclosure generally presuppose the existence of "empirical data", which is another way of describing signals that, by most definitions, have noise contained within them. Broadly, there are two classes of two-dimensional graphics that are not generally considered inherently noisy: vector graphics and certain indexed bit-mapped graphics. Vector graphics and vector graphic files are generally files containing exact instructions for how a computer or printer is to draw lines, curves and shapes. A change of a single bit value in such a file might, as a very crude example, turn a circle into a square. That is to say, there is generally no "inherent noise" to exploit within these files. Indexed bit-mapped graphics typically refers to images having a small number of colors or gray values, such as the 16 colors of the early CGA displays on PC computers. Such "very low order" bit-mapped images usually depict graphics and cartoons, rather than attempting to display digital images taken with cameras viewing natural scenes. These forms of extremely low-order bit-mapped graphics are likewise generally not considered to contain "noise" in the classic sense of that term. The exception, in which the concept of "noise" is still valid and the principles of this technology still apply, is where an indexed graphic file attempts to depict a natural image, as in the GIF (CompuServe's graphic interchange format) format. These latter formats often employ dithering (akin to pointillism and color newspaper printing) to achieve near-lifelike imagery.

  This section concerns itself with these classes of two-dimensional graphics which traditionally do not contain "noise", and takes a brief look at how the principles of this technology might nevertheless be applied in some fashion to such creative material.

  The simplest way to apply the principles of this technology to these "non-noisy" graphics is to convert them into a form amenable to the application of those principles. Many terms are used in the industry for such conversions, including "ripping" a vector graphic, i.e., converting a vector graphic into a raster image based on grayscale pixels. Programs such as Adobe's Photoshop have internal tools of this kind for converting vector graphics into RGB or grayscale digital images. Once such files are converted into this form, the principles of this technology can be applied in a straightforward manner. Likewise, extremely low-order indexed bitmaps can be converted into RGB digital images or their equivalents. In the RGB domain, the signature can be applied to the three color channels in appropriate ratios, or the RGB image can simply be converted into a grayscale/chroma format such as "Lab" in Adobe's Photoshop software, with the signature applied to the lightness channel. Since most distribution media, such as videotape, CD-ROM, MPEG video, digital images and print, are in any event forms amenable to the application of the principles of this technology, this conversion from vector graphic form and extremely low-order graphic form will often be performed in the normal course of events.

  Another way to apply the principles of this technology to vector graphics and extremely low-order bit-mapped graphics is to recognize that certain properties inherent to these graphic formats do present themselves to the eye as noise-like. The primary example is the precise position of the boundaries and contours where a given line or shape is drawn or not drawn, or exactly where a bitmap changes from green to blue. In most cases a human viewer of such graphics will be keenly sensitive to any attempted "signature signal modulation" made through fine, systematic changes in the precise contours of a graphic object. Nevertheless, such signature encoding is indeed possible. The difference between this approach and that disclosed in the bulk of this disclosure is that here the signature must ultimately be derived from what is already present in a given graphic, rather than being created separately and added into the signal. This disclosure nevertheless points out the possibility. The basic idea is to modulate a contour, nudging it right or left, up or down, in such a way as to convey an N-bit identification word. The recorded changes in contour position would be akin to random spatial jitter in one direction or the other perpendicular to a given contour; these changing contour positions would be contained in an analogous master noise image, and the bit values of the N-bit identification word would be encoded, and read, using the same polarity comparison between the applied changes and the changes recorded in the master noise image.

Plastic Credit and Debit Card Systems Based on the Principles of This Technology The development of plastic credit cards, and more recently of debit and ATM cash cards, requires little preface, nor is it necessary to dwell here on the long history of fraud and misuse of these financial instruments. The development of the credit card hologram, and the subsequent development of its counterfeits, serves well as a historical example of the give and take of plastic card security measures and fraud countermeasures. This section concerns itself with how the principles of this technology can be realized in an exceptionally fraud-proof, yet cost-effective, plastic-card-based financial network.

  A basic list of desired features for a ubiquitous plastic economy might be as follows. 1) A given plastic financial card is completely impossible to counterfeit. 2) An attempted counterfeit card (a close lookalike) cannot even function within a transaction setting. 3) Transactions intercepted by would-be pirates are not effective or reusable in any way. 4) In the event of physical theft of an actual valid card, there remain strong deterrents to a thief's using the card. 5) The overall economic cost of implementing the financial card network is equal to or less than that of the current international credit card networks, i.e., the fully loaded cost per transaction is equal to or less than the current norm, allowing better profit margins to the implementors of the network. Apart from item 5, which would require a detailed analysis of the engineering and social issues surrounding any full implementation strategy, the use of the principles of this technology set forth below can generally meet the above list, arguably including item 5.

  FIGS. 22 through 26, together with the following discussion, summarize what is referred to in FIG. 26 as "the negligible-fraud cash card system". The reason that the fraud-prevention aspects of this system are highlighted in its title is that fraud, and the revenue lost to it, is a central issue in today's plastic-card-based economies. The differential advantages and disadvantages of this system relative to current systems will be discussed after an illustrative embodiment is presented.

  FIG. 22 illustrates the basic unforgeable plastic card, which is unique to each and every user. A digital image 940 is taken of the user of the card. A computer, which is connected into the central accounting network 980 shown in FIG. 26, receives the digital image 940 and, after processing it (as described in connection with FIG. 24), generates a final rendered image that is then printed onto the personal cash card 950. Also shown in FIG. 22 is a simple identification marking, in this case a bar code 952, along with optional position fiducials that can help simplify the scanning tolerances of the reader 958 shown in FIG. 23.

  The short story is that the personal cash card 950 actually contains a very large amount of information unique to that particular card. There is no magnetic strip involved, although the same principles can certainly be applied to magnetic strips, such as by an embedded magnetic noise signal (see the earlier discussion of the "fingerprinting" of magnetic strips in credit cards; here the fingerprinting would be prominent and proactive as opposed to passive). In any event, the unique information within the image of the personal cash card 950 is stored, along with the basic account information, in the central accounting network 980 of FIG. 26. The basis for the unbreakable security is that, during a transaction, the central network need only query a small fraction of the total information contained on the card, and never the same precise information in any two transactions. Hundreds, if not thousands or tens of thousands, of unique and secure "transaction tokens" are contained within a single personal cash card. A would-be pirate who intercepts the transmission of a transaction, whether encrypted or not, finds the information therein useless thereafter. This is in stark contrast to systems built around one complex and complete "key" (generally encrypted) that must be accessed, in its entirety, over and over again; the personal cash card instead contains thousands of distinct secure keys, each of which can be used once, within milliseconds of time, and then effectively thrown away. The central network 980 keeps track of the keys, knows which have already been used, and will not honor them twice.

  FIG. 23 depicts what a point-of-sale reader 958 might look like. Such a device would obviously need to be manufacturable at a cost equal to or less than that of current cash register systems, ATM systems and credit card swipe readers. Not depicted in FIG. 23 are the internal optical scanning, image processing and data communications components, which would simply follow normal engineering design methods in carrying out the functions described presently, and which are well within the capabilities of those skilled in those arts. The reader 958 has a numeric touch pad 962, which allows a conventional personal identification number (PIN) system to add another customary layer of security (one typically invoked after a physical card theft has occurred) to the overall design of the system. It should also be pointed out that the use of the user's photograph on the card is a powerful (and increasingly common) security feature against unauthorized use following theft. Functional elements such as the optical window 960, which mimics the shape of the card and doubles as a centering guide for the scan, are shown. Also shown is the data line cable 966, connected either to the proprietor's central commercial computer system or directly to the central network 980. Such a reader might likewise be attached directly to a cash register performing the normal tallying of purchased items. Configuring the reader 958 as a kind of Faraday cage, such that no electronic signals, for example the raw scan of the card, can leak from the unit, is probably overkill on the security front. The reader 958 should preferably contain a digital signal processing unit to assist in the rapid calculation of the dot product operations described below, as well as a local read-only memory storing the multitude of spatial patterns (the orthogonal patterns) used in the "recognition" steps outlined in FIG. 25 and its discussion. As shown in FIG. 23, a consumer using the plastic card simply places the card onto the window to pay for a transaction, choosing for themselves whether or not to make use of a PIN number. Provided the signal processing steps of FIG. 25 are implemented with effectively parallel digital processing hardware, authorization of the purchase would presumably occur within a few seconds.

  FIG. 24 takes a brief look at one way of processing the raw digital image 940 of a user into an image with higher information content and uniqueness. It should be clearly pointed out that the raw digital image itself could in fact be used in the methods that follow, but that placing additional orthogonal patterns onto the image can considerably increase overall system security. (Orthogonal here means that, if a given pattern is multiplied by another orthogonal pattern, the resulting number is zero, where "multiplication of patterns" is meant as the vector dot product; these are all common terms and concepts in the art of digital image processing.) FIG. 24 shows that the computer 942 can, after interrogating the raw image 970, generate a master snowy image 972 which can be added to the raw image 970, producing a yet more unique image: the image that is printed onto the actual personal cash card 950 of FIG. 22. The overall effect on the image is to "texturize" it. In the case of a cash card, invisibility of the master snow pattern is not as much of a requirement as it is with commercial imagery; the only criterion for keeping the master snowy image somewhat subdued is that it not obscure the image of the user. The central network 980 stores the final processed image in the record of the user's account, and this unique, securely kept image becomes the carrier of the highly secure "throw-away transaction keys". The image is thereby "on call" at every properly connected point-of-sale location throughout the network. As will be seen, the point-of-sale location itself needs no knowledge of this image; it merely answers queries from the central network.

  FIG. 25 steps through the sequence of a typical transaction. The figure is arranged, by indentation, into: steps performed first by the point-of-sale reader 958; an information transmission step communicated over the data line 966; and steps then performed by the central network 980, which holds the user's account and the secure information about the user's unique personal cash card 950. Although there is some room for overlap in the carrying out of these steps, as is usual in the engineering implementation of such systems, the steps are arranged here as a generally linear sequence of events.

  Step 1 of FIG. 25 is the standard "scanning" of the personal cash card 950 within the optical window. This can be performed using a linear optical sensor that scans across the window, or by a two-dimensional optical detector array such as a CCD. The resulting scan is digitized into a grayscale image and stored in an image frame memory buffer, such as a "frame grabber", as is now common in the design of optical imaging systems. Once the card is scanned, a first image processing step would probably be to locate the four fiducial center points 954 and to use these four points to guide all further image processing operations (i.e., the four fiducials "register" the corresponding patterns and the bar code on the personal cash card). Next, the bar code ID number is extracted using common bar code reading image processing methods. The user's account number is generally determined in this step.

  Step 2 of FIG. 25 is the optional keying in of the PIN number. Presumably most users would opt to have this feature, excepting those who have trouble remembering such numbers, or who are convinced that no one will ever steal their cash card.

  Step 3 of FIG. 25 entails connecting, through the data line, to the central accounting network, using the usual communication handshakes common in modern communication networks. More sophisticated embodiments of this system, such as ones employing fiber optic data links, would obviate the need for standard phone lines; here we can assume garden-variety touch-tone telephone lines, and assume that the reader 958 has not forgotten the central network's phone number.

  After basic communications are established, step 4 has the point-of-sale location send the ID number found in step 1, encrypted using the now-ubiquitous RSA encryption methods and accompanied (presumably) by the PIN number, appending basic information on the merchant operating the point-of-sale reader 958 and the amount of the requested transaction in monetary units.

  Step 5 has the central network reading the ID number, routing the information accordingly to the actual memory location of the given user's account, thereafter verifying the PIN number and checking that the account balance is sufficient to cover the transaction. Along the way, the central network also accesses the merchant's account, verifies that it is valid, and readies it for the anticipated credit.

  Step 6 begins with the assumption that step 5 passed on all counts (the exit step of sending a denial rather than an approval is not depicted). If all is verified, the central network generates twenty-four sets of sixteen numbers each, where all of the numbers are mutually exclusive and are drawn from a number range which is, in general, large but decidedly finite. FIG. 25 shows the range as 64K, or 65,536, numbers; in practice it can be any practical number. One set of sixteen numbers might thus be, for example: 23199, 54142, 11007, 2854, 61932, 32879, 38128, 48107, 65192, 522, 57723, 27833, 19284, 39970, 19307 and 41090. The next set is generated in the same random fashion, except that the numbers of the earlier sets are off limits, and so on through all twenty-four sets. The central network thus sends (16 x 24 x 2 bytes), or 768 bytes, of numbers; the actual quantity of numbers would be determined by engineering optimization of security versus transmission speed issues. These random numbers are, in fact, indexes into a set of 64K universally a-priori-defined orthogonal patterns, which are known to the central network and stored, immutable, in the memory of every one of the point-of-sale readers. As will be seen, a would-be thief's knowledge of these patterns is of no use to him.

  Step 7 then has the central network sending the basic "OK to proceed" message to the reader 958, along with the twenty-four sets of sixteen random index numbers.

  Step 8 has the reader 958 receiving and storing all of these numbers. The reader then, using its local microprocessor and custom-designed high-speed digital signal processing circuitry, steps through every one of the twenty-four sets of numbers, with the intention of sending back to the central network twenty-four distinct floating-point numbers, which serve as the "one-time keys" by which the central network will test the authenticity of the card's image. The reader first adds together the sixteen patterns indexed by a given set of sixteen random numbers, and then performs a common dot product operation between the resulting composite pattern and the scanned image of the card. The dot product generates a single number (which for simplicity we can call a floating-point number). The reader steps through all twenty-four sets in like fashion, generating a unique sequence of twenty-four floating-point numbers.

  Step 9 then has the reader transmitting these results back to the central network.

  Step 10 then has the central network performing a check on these returned twenty-four numbers, presumably comparing them against an identical set of twenty-four numbers that the central network computes, by exactly the same method, on the image of the card stored in its own memory. To sidestep issues of overall luminance scale, the reader can normalize the numbers by dividing each of the twenty-four gathered dot products by the largest absolute value among them (its unsigned magnitude). The match between the returned numbers and the central network's computed values will then be well within some predefined tolerance if the card is valid, and well outside it if the card is a fake or a crude copy.
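
  Steps 6 through 10 can be sketched as follows. Everything here is illustrative: the 64 by 64 pattern size, the tolerance, and the use of seeded Gaussian noise as a stand-in for the 64K a-priori-defined orthogonal patterns (a real reader would look the truly orthogonal patterns up in its read-only memory rather than generate them):

```python
import numpy as np

PATTERN_SHAPE = (64, 64)          # assumed size of the card scan and patterns

def pattern(index):
    """Stand-in for ROM lookup of orthogonal pattern 'index' (0..65535)."""
    return np.random.default_rng(index).standard_normal(PATTERN_SHAPE)

def reader_response(card_image, index_sets):
    """Steps 8-9: sum the sixteen indexed patterns of each set, dot the
    composite against the scanned card image, and normalize the 24
    results by the largest absolute value among them."""
    r = np.array([float(np.sum(sum(pattern(i) for i in s) * card_image))
                  for s in index_sets])
    return r / np.max(np.abs(r))

def network_check(stored_image, index_sets, returned, tol=0.02):
    """Step 10: the central network repeats the identical computation on
    its stored image and compares within a tolerance."""
    expected = reader_response(stored_image, index_sets)
    return bool(np.all(np.abs(expected - returned) < tol))
```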

  Step 11 then has the central network sending word on whether the transaction is approved, and telling the happy customer that they can take their purchase home.

  Step 12 then explicitly notes the crediting of the transaction amount to the merchant's account.

  As mentioned above, the primary advantage of this plastic card system is that it greatly reduces fraud, which is plainly a large cost of current systems. The design of this system reduces the possibility of fraud to the cases where the physical card is either stolen or very carefully copied. In both of those cases, the PIN security and the user photo security remain (the latter being a deterrent of known higher security than having low-wage clerks analyze signatures). Attempts to copy a card must be performed through "temporary theft" of the card, and require photograph-quality copying devices, not simple magnetic stripe card swipers. The system is predicated on the modern trend toward 24-hour, highly interconnected data networks; unauthorized monitoring of transactions gains a pirate nothing of use, whether or not the transactions are encrypted.

  It will be appreciated that the foregoing approach to increasing the security of transactions in credit and debit card systems is readily extended to any photograph-based identification system. Moreover, the principles of this technology can be employed to detect the alteration of photo ID documents, and to generally increase the confidence in, and security of, such systems. In this context, reference is made to FIG. 28, which shows a photo ID card or document 1000, which may be, for example, a passport, a visa, a permanent resident card ("green card"), a driver's license, a government-service identification card, or a private company identification badge. For convenience, such photograph-based identification documents are collectively referred to herein as photo ID documents.

  The photo ID document includes a photograph 1010 attached to the document 1000. Printed, human-readable information 1012 is incorporated into the document 1000 adjacent to the photograph 1010. Machine-readable information, such as that known as a "bar code", may also be included adjacent to the photograph.

  Generally speaking, such a photo ID document is constructed so that tampering with it (e.g., substituting another photograph for the original) will cause visible damage to the card. Nevertheless, skilled forgers are able to alter existing photo ID documents, or to manufacture fraudulent ones, in ways that are extremely difficult to detect.

  As noted above, this technology enhances the security associated with the use of photo ID documents by supplementing the photographic image with encoded information (which may or may not be visually perceptible), thereby facilitating the correlation of the photographic image with other information concerning the person, such as the printed information 1012 appearing on the document 1000.

  In one embodiment, the photograph 1010 may be produced from a raw digital image to which a master snowy image is added, as described above in connection with FIGS. 22 through 24. The above-described central network and point-of-sale reading device (which in this embodiment may be considered an entry-point or security-checkpoint photo ID reader) would essentially carry out the same processing as described in that embodiment, including the central network's generation of unique numbers serving as indexes into a set of defined orthogonal patterns, the associated dot product operations performed by the reading device, and the correlation with similar operations performed by the central network. In this embodiment, if the numbers generated from the dot product operations performed by the reading device and by the central network match, the network sends the reading device an approval indicating a valid, unaltered photo ID document.

  In other embodiments, the photograph portion 1010 of the identification document 1000 may be digitized and processed so that the photographic image carried by the photo ID document corresponds to a "distributable signal" as defined above. In this case, the photograph includes a composite, embedded code signal that is imperceptible to a viewer but that carries an N-bit identification code. It will be appreciated that this identification code can be extracted from the photograph using any of the decoding techniques described above, employing universal or custom codes depending upon the level of security required.

  It will be appreciated that the information embedded in the photograph may be correlated with, or redundant of, the readable information 1012 appearing elsewhere on the document. Such a document can accordingly be authenticated by placing the photo ID document onto a scanning system, such as would be available at a passport or visa control point. The local computer, which is provided with the universal code for extracting the identification information, displays the extracted information on the local computer screen so that the operator can confirm the correlation between the encoded information and the readable information 1012 carried on the document.

  It will likewise be appreciated that the information embedded in the photograph need not be related to other information on the identification document. For example, the scanning system may need only to confirm the existence of the identification code, giving the user a "go" or "no go" indication of whether the photograph has been tampered with. It will also be apparent that the local computer, using an encrypted digital communications line, could transmit the information to a central verification facility, which would then return an encrypted "go" or "no go" indication.

  In another embodiment, the identification code embedded in the photograph may be a robust digital image of biometric data, such as a fingerprint of the card bearer, which could then be scanned and displayed at very-high-security access points equipped with fingerprint recognition systems (or retinal scanners, etc.), for comparison with the actual fingerprint of the bearer.

  It will be appreciated that the information embedded in the photograph need not be visually hidden or steganographically embedded. For example, the photograph incorporated into the identification card may be a composite of an individual's image with a one- or two-dimensional bar code. The bar code would be readable by conventional optical scanning techniques (including internal cross-checks), so that the information carried by the code could be compared with, for example, the information printed on the identification document.

  It is also contemplated that the photographs of ID documents currently in use may be processed so that information correlated to the individual whose image appears in the photograph can be embedded. In this regard, the reader's attention is directed to the earlier portion of this description addressing printing, paper, documents, plastic-coated identification cards, and other materials in which embedded codes can be wholly carried, where a number of approaches are described for modulating physical media that can be treated as "signals" amenable to the application of the principles of this technology.

Network Linking Method Using Information Embedded in Data Objects with Inherent Noise FIG. 27 illustrates an aspect of the present technology that provides a network linking method using information embedded in data objects having inherent noise. In one sense, this aspect is a network navigation system, a broad indexing system in which addresses and indexes are embedded directly within the data objects themselves. As will be seen, this aspect is particularly well suited to establishing hot links with pages provided on the World Wide Web (WWW). A given data object effectively contains both its graphical representation and an embedded URL address.

  As in the previous embodiments, the embedding is performed so that the added address information does not affect the core value of the object as far as the creator and the audience are concerned. As a result of such embedding, there is only one class of data objects, rather than the two classes (data object plus separate header file) associated with conventional WWW links. The advantages of reducing hot-linked data objects to a single class have been described above and are taken up in further detail below. In one embodiment of the technology, the pre-existing hot-link-based network employed is the World Wide Web. The typical apparatus of this system consists of networked computers and computer monitors displaying the results of interactions over the web. This embodiment of the technology contemplates URL or other address-format information steganographically embedded directly within images, video, audio and other data objects served to website visitors, objects having "grayscale" or "continuous tone" or "dithered" qualities and hence inherent noise. As previously noted, there are various ways of realizing the basic steganographic implementation, any of which can be employed in accordance with the present technology.

  Referring particularly to FIG. 27, images, pseudo-continuous-tone graphics, and multimedia video and audio data now form some of the basic building blocks of many sites 1002, 1004 on the World Wide Web. Such data are hereinafter collectively referred to as creative data files or data objects. For purposes of illustration, a continuous-tone graphic data object 1006 (a diamond ring against a background) is depicted in FIG. 27.

  Website tools, both the website developer's tools 1008 and the viewer's software 1010, process these various file formats and package these data objects. It is already common, often out of a creator's wish to sell the products represented by these objects or to advertise creative services (a fine photograph displaying an 800 phone number, promoting a photographer's skills and services, being a good example), to distribute these data objects 1006 as widely as possible. By using the methods of this technology, individuals and organizations who create and disseminate such data objects can embed within them an address link that leads straight back to their own node on the network, their own site on the WWW.

  A user at a site 1004 need simply point and click on the displayed object 1006. The software 1010 identifies the object as a hot-link object, reads the URL address embedded within it, and routes the user to the linked website 1002, just as if the user had employed a conventional web link. The linked site 1002 might be the home page or network node of the creator of the object 1006, and the creator might be a manufacturer. The user at the first site 1004 might then be presented with an order form for purchasing, for example, the product represented by the object 1006.

  It will be apparent to creators of objects 1006 with embedded URL addresses or indexes (objects which may be referred to as "hot objects"), and to manufacturers wishing to advertise their goods and services, that their creative content can propagate like dandelion seeds in the wind across the WWW, and that embedded within each of those seeds is a link leading straight back to their own home page.

  It is also contemplated that the object 1006 may include a distinct icon 1012 (such as the exemplary "HO" abbreviation shown in FIG. 27) incorporated as part of the graphic. The icon, or some other subtle indicium, informs the user that the object is a hot object carrying an embedded URL address or other information accessible by the software 1010.

  Any humanly perceptible indicium (e.g., a short audible tone) can serve the purpose of alerting the user to a hot object. It is also possible, however, that no such indicium is necessary: a user's trial-and-error clicking on a data object having no embedded address would simply result in the software looking for, and failing to find, a URL address.

  The inherent automation in the use of this aspect of the technology is highly advantageous. Web software and website development tools merely need to recognize this new class of embedded hot links (hot objects) and operate on them in real time. Conventional hot links can thus be easily modified and added, without the website programmer needing to do anything, beyond perhaps monitoring traffic, other than "uploading" hot objects onto the website repository.

  A method of realizing the above-described functionality of this technology generally comprises: (1) creating a set of standards by which URLs are steganographically embedded into images, video, audio and other forms of data objects; and (2) designing website development tools and web software to recognize this new class of data objects (hot objects), wherein the tools are supplied to users, users can point and click on such objects, and the user's software knows how to read or decode the steganographic information and route the user to the decoded URL address.
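
  On the viewer side, step (2) amounts to something like the following sketch, where "decoder" stands for whichever steganographic reading routine the step (1) standard prescribes (a hypothetical injected helper, returning the embedded ASCII URL or None):

```python
import webbrowser

def on_object_click(object_pixels, decoder):
    """Treat a clicked object as a hot object if a URL decodes from it."""
    url = decoder(object_pixels)
    if url is None:
        return False            # ordinary object: the click does nothing
    webbrowser.open(url)        # route the user to the linked site
    return True
```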

  The earlier portions of this specification that describe steganographic implementations in detail (see generally FIG. 2 and the associated text) are readily adapted to this aspect of the technology. In this regard, an otherwise conventional site development tool 1008 is augmented to include, for example, the capability of encoding a bit-mapped image file with an identification code (e.g., a URL address) in accordance with the present technology. In this embodiment, the URL address (or other information) can be steganographically embedded in the hot object using any of the universal codes described above, as dictated by convention or commerce.

  The earlier portions of this specification that describe in detail the techniques for reading or decoding steganographically embedded information (see generally FIG. 3 and the associated text) are likewise readily adapted to this technology. In this regard, otherwise conventional user software 1010 is augmented to include, for example, the capability of analyzing an encoded bit-mapped file and extracting the identification information (e.g., the URL address).

  Although an illustrative embodiment has been described in which the information is steganographically embedded in the data object, it will be apparent to those skilled in the art that any of a number of available steganographic techniques can be used to carry out the functions of this embodiment.

  It will be appreciated that this embodiment provides a direct and general-purpose mechanism whereby some of the basic building blocks of the WWW, namely images and sound, can serve as hot links to other websites. Moreover, the programming of such hot objects can be fully automated, driven merely by the distribution and use of the images and audio themselves; no actual website programming is required. This embodiment enables a commercialization of the WWW in which non-programmers can easily spread their messages merely by creating and distributing creative content (here, hot objects). As indicated, web-based hot links themselves can thereby move from the more arcane text-based interfaces toward more intuitive image-based interfaces.

Encapsulated Hotlink File Format Once the steganographic method of hot link navigation described above is understood, more traditional methods of attaching "header-based" information, i.e., the development of new file formats and transmission protocols, can buttress the basic approach pioneered by the steganography-based system. One way to begin extending the steganography-based hot link method toward more traditional header-based methods is to define a new class of file formats that can envelop the standard classes used in the network navigation system. It will be appreciated that objects beyond images and audio, including text files, indexed graphic files, computer graphics and the like, can thereby become "hot objects".

  The encapsulated hot link (EHL) file format is simply a small shell placed around a large range of pre-existing file formats. The EHL header information occupies only the first N bytes of the file, which are followed by a complete and intact file in some industry-standard format. The EHL super-header merely attaches information specifying the exact file format, together with the URL address or other information relating the object to other nodes on the network or to other databases on the network.
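
  A toy rendering of such a shell is given below. The field layout (a 4-byte signature, an 8-byte format tag, a length-prefixed URL) is purely hypothetical, since the EHL format is proposed here rather than specified:

```python
import struct

MAGIC = b"EHL1"   # hypothetical signature; EHL is a proposal, not a standard

def wrap_ehl(payload, file_format, url):
    """Prefix an intact industry-standard file with an EHL super-header."""
    fmt = file_format.encode("ascii")[:8].ljust(8)
    url_b = url.encode("ascii")
    return MAGIC + fmt + struct.pack(">H", len(url_b)) + url_b + payload

def unwrap_ehl(blob):
    """Return (file_format, url, payload); the payload is untouched."""
    if blob[:4] != MAGIC:
        raise ValueError("not an EHL file")
    fmt = blob[4:12].decode("ascii").strip()
    (n,) = struct.unpack(">H", blob[12:14])
    return fmt, blob[14:14 + n].decode("ascii"), blob[14 + n:]
```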

  The EHL format could then slowly (though probably never completely) supplant the steganographic method. The slowness pays homage to the usual fate of file format standards, which often take a very long time (if they succeed at all) to be created, implemented, and actually adopted by everyone. Again, the idea is that EHL-like formats, and the systems built around them, would grow out of, and reinforce, the mechanisms of the steganography-based system.

Self-Extracting Web Objects Generally speaking, three classes of data can be steganographically embedded in an object: numbers (e.g., binary-coded serial or identification numbers), alphanumeric messages (e.g., ASCII or reduced-bit-code human-readable names or phone numbers), or computer instructions (e.g., JAVA or even extensive HTML instructions). The embedded URLs discussed above begin to tap this third class; a more detailed example can help illustrate the possibilities.

  Consider the representative web page shown in FIG. 27A. It may be viewed as comprising three basic parts: images (#1-#6), text, and layout.

  Applicants' technology can be used to integrate this information into a single self-extracting object, from which the web page can be regenerated.

  Pursuing this example, FIG. 27B shows the images of the web page of FIG. 27A tiled together into a single RGB mosaiced image. The user can perform this operation manually using an existing image processing program, such as Adobe's Photoshop software, or the operation can be automated by a suitable software program.

  Between some of the image tiles in the mosaic of FIG. 27B there are empty regions (indicated by the diagonal lines).

  This mosaiced image is then steganographically encoded, with the layout instructions (e.g., HTML) and the text of the web page embedded within it. In the empty regions there is no image data to be degraded, so the coding gain there can be maximized. The encoded, mosaiced image is then JPEG compressed, forming the self-extracting web page object.

  These objects can be exchanged like any other JPEG image. When the JPEG file is opened, a suitably programmed computer can detect the presence of the embedded information and extract the layout data and text. Among other information, the layout data identifies where the component images forming the mosaic are to be placed in the final web page. The computer can then follow the embedded HTML instructions to re-form the original web page, complete with all of its graphics, text, and links to other URLs.
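
  The self-extraction flow might be sketched as below; "decode_jpeg" and "stego_decode" are assumed helpers (JPEG decompression and the steganographic reader of FIG. 3, say), not functions named in this disclosure:

```python
def open_web_object(jpeg_bytes, decode_jpeg, stego_decode):
    """Rebuild a page from a self-extracting web object, or fall back to
    plain viewing when nothing is embedded."""
    mosaic = decode_jpeg(jpeg_bytes)        # pixels of the FIG. 27B mosaic
    embedded = stego_decode(mosaic)         # layout (HTML) plus page text
    if embedded is None:
        return None, mosaic                 # conventional JPEG viewer case
    layout_html, page_text = embedded
    # The layout identifies where each tile of the mosaic belongs in the
    # regenerated page; following the embedded HTML re-forms the page.
    return (layout_html, page_text), mosaic
```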

  When a self-extracting web page object is viewed with a conventional JPEG viewer, no self-extraction takes place. The user instead simply sees the logos and artwork associated with the web page (with noise-like "snow" in the empty regions between some of the images). Those skilled in the art will recognize how entirely different this is from viewing other composite data objects (e.g., PKZIP files and self-extracting text archives), which generally appear wholly opaque unless fully extracted.

(The above advantages could also be achieved by placing the web page text and layout instructions in a header file associated with the JPEG-compressed mosaiced image file. However, forming such a system would require an industry standard for a suitable header format, which seems practically difficult, if not impossible.)

Palettes of steganographically embedded images  Once web images with steganographically embedded URL information are prevalent, such images can be collected into a "palette" and provided to the user as a high-level navigation tool. Navigation is effected by clicking on such images (e.g., different web page logos) rather than on literal web page names. A suitably programmed computer can decode the embedded URL information from the selected image and establish the requested connection.

Possible use of this technology in the protection and control of software programs  Unauthorized use, duplication, and resale of software programs represents a huge loss of revenue for the entire software industry. Prior art methods that attempt to mitigate this problem are quite numerous and will not be described here. What is explained here is how the principles of this technology relate to this enormous problem. It is not at all clear whether the tools provided by this technology offer any economic advantage (everything considered) over the measures that already exist or are planned.

The state of technology over the last decade or more has created the need to pass a complete copy of a software program to the user in order for the program to function on the user's computer. In effect, $X is spent creating a software program, where X is large, and the entire fruit of that development must be handed to the user in order for the user to gain value from the software program. Fortunately this is generally compiled code, but the problem remains that, viewed abstractly, this is an insecure distribution situation. Unauthorized copying and use (harmless in the minds of most offenders) can be accomplished with relative ease by much of the world.

This disclosure suggests an approach, beginning in the abstract, that may or may not prove economical in the broadest sense (e.g., in the sense that the revenue recovered relative to the cost exceeds that of most competing methods). This approach extends the methods and approaches already presented in the section on plastic credit and debit cards. The abstract concept begins by assuming a "large set of unique patterns," specific to a given product and to a given purchaser of that product. Using cryptographic terminology, this set of patterns effectively contains thousands, even millions, of completely unique "secret keys." Significantly and distinctly, these keys are non-deterministic; that is, they do not derive from single sub-1000-bit or sub-2000-bit keys, as in systems based on RSA keys. This large set of patterns is measured in kilobytes or megabytes and, as stated, is non-deterministic. Furthermore, at the highest level of abstraction, the patterns can be encrypted by standard techniques and analyzed in the encrypted domain, where any single analysis touches only a small portion of the large set of patterns. Even in the worst-case scenario, in which a thief monitors the microcode instructions of the microprocessor step by step, the information so collected gives the thief nothing useful. This latter point bears on "implementation security," as opposed to "innate security," both briefly discussed below.

What, then, are the distinguishing characteristics of this type of key-based system, in contrast to, for example, the relatively straightforward RSA encryption methods already emphasized? As noted, this discussion does not attempt a commercial analysis; it focuses instead on different characteristics. The main distinguishing feature lies in the domain of implementation (implementation security). One example: the merely local use, or reuse, of a single low-bit-count private key is an inherently weak link in an encrypted commerce system. ["Encrypted commerce system" here refers to the fact that paid use of software is, in this discussion, effectively an encrypted communication between the software user and the "bank" that authorizes the user to use the program; in other words, it is encryption in the service of electronic financial commerce.] Self-proclaimed hackers who wish to defeat so-called secure systems rarely attack the basic hard-wired security (innate security) of the underlying method; they attack its implementation, which is surrounded by humans and human oversight. Here, still in the abstract, a key base that is inherently non-deterministic, much larger, and tuned toward disposable keys begins to "harden" the historically fragile implementations of a given security system. A vast set of keys need never be comprehended by its average holder; the keys can be selected at random for use (their "implementation") and then easily discarded. They can be "sniffed" by an eavesdropper without yielding useful information; in particular, by the time an eavesdropper could "decrypt" a key, its usefulness in the system would long since have lapsed.

Making the abstraction semi-specific, one possible new approach to safely delivering a software product only to the true purchaser of that product is as follows. In the large economic sense, this new method is based solely on low-rate, real-time digital connectivity (often without the need for standard encryption) between the user's computer network and the vendor's network. At first glance this may smell of trouble to anyone in a healthy market: in trying to recover lost income, one can smother the important with the unnecessary and lose more legitimate income along the way (all part of a cost-benefit analysis). Under this new method, a company selling a piece of software to someone who wants it supplies roughly 99.8% of its functional software (optimized for speed and transmission) to be stored on a device local to the user's network. This "free core program" is designed to be completely non-functional, such that even the most talented hacker cannot use it or, in any meaningful sense, "decompile" it. Legitimate activation and use of the program is based on instruction-cycle counting and on simple, very-low-overhead communication between the user's network and the company's network. A customer who wants to use the product sends payment to the company by any of the many suitable means of doing so. The customer is then sent their "enormous set of unique secret keys" by a common shipping method or via a conventionally secured encrypted data channel. Viewed as if it were an image, this large set looks like the snowy images discussed many times elsewhere in this disclosure. (Here, the "signature" is the image itself, rather than being subtly placed within another image.) The special property of this image is that it contains what might be called an "astronomically unique" selection of keys. ("Astronomical" follows from simple combinatorics: the number of combinations enabled by one megabyte of random bit values is roughly 10 to the power of 2,400,000, so one megabyte is more than enough to supply many users with disposable selection keys.) It is important to re-emphasize that what is purchased is, literally, productive use of the tool. This marketing approach requires that the pre-use payment plan be quite liberal in its allocation of productivity, since a user who feels shortchanged loses interest, obviously lowering overall revenue.

This large set of selection keys is itself encrypted using standard encryption techniques. The basis for the relatively high "implementation security" can now begin to show itself. Suppose a user wants to use the software product. They launch the free core, and the free core program finds that the user has installed their large set of unique encrypted keys. The core program calls the company network and performs a routine handshake. The company network, knowing that the large set of keys belongs to the real user, sends a query on a simple set of patterns, in almost exactly the manner described in the debit and credit card section above. The query touches only a small subset of the whole, and the inner workings of the core program never need to decrypt the full set of keys; thus no decryption of the keys occurs within the machine cycles of the local computer itself. As will be appreciated, this does not require the main disclosure's "signatures within images"; instead, the many unique keys are the image. The core program answers the query by performing certain dot products, and sends the resulting dot products back to the company network for verification. See FIG. 25 and its accompanying discussion of representative details of the verification process. The network then sends back an encrypted confirmation, and the core program enables itself for a metered amount of use, e.g., the next 100,000 characters typed into a word processing program, before another key exchange is needed (to enable the 100,000 after that). In this example, a purchaser might typically buy the number of instructions that a single user of the word processor program would consume within a one-year period. Such a purchaser has no incentive to copy this program and give it to friends and relatives.
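
A minimal sketch of this challenge-and-verify exchange follows, assuming the key set is simply a large array of +1/-1 values and ignoring the standard encryption layer; the challenge sizes and the verification-by-recomputation are illustrative assumptions, not a prescription.

    import numpy as np

    rng = np.random.default_rng(42)
    KEY_SET = rng.choice([-1, 1], size=(1 << 20,))  # the user's "enormous" key image

    def make_challenge(n_taps=50, n_patterns=8):
        # Company side: a query touching only a small part of the key set.
        idx = rng.integers(0, KEY_SET.size, size=(n_patterns, n_taps))
        weights = rng.choice([-1, 1], size=(n_patterns, n_taps))
        return idx, weights

    def respond(idx, weights):
        # User side: answer with dot products; no key leaves the machine whole.
        return (KEY_SET[idx] * weights).sum(axis=1)

    idx, w = make_challenge()
    # Company side: verify against its own copy of the key set.
    assert np.array_equal(respond(idx, w), (KEY_SET[idx] * w).sum(axis=1))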

All of the above is fine, except for two simple problems. The first can be called the "cloning problem," and the second the "big brother problem." The solutions to these two problems are closely linked. The latter ultimately becomes a purely social problem, with technology serving merely as a tool, not as the solution itself.

The cloning problem is as follows. It pertains to more sophisticated piracy than the currently popular form, in which a friend hands a distribution CD to a friend. If a cunning hacker "A" clones the entire system state of an "enabled" program and installs that clone on another machine, the second machine effectively doubles the value received for a single payment. By keeping the clone in digital storage, hacker "A" need only re-install it after each metered period runs out, and can thereby use the program indefinitely for a single payment; likewise, she can give the clone to her hacker friend "B" for a six-pack of beer. One good solution to this problem is, again, reasonably well developed, and uses low-cost, real-time digital connectivity between the user site and the company's authorization network. Such ubiquitous connectivity generally does not exist today, but it is growing rapidly through the Internet and through fundamental growth in digital bandwidth. One component of the "authorization" function is a random auditing function of negligible communication cost, whereby the enabled program routinely, and at irregular intervals, handshakes and checks in with the company network. On average, it does so within a period containing a relatively small fraction of the program's purchased productivity. The productivity obtained between audits is thus generally worth much less than the total raw cost of cloning the authorized program each time. Therefore, even if the authorized program is cloned, the usefulness of any one clone is severely limited, and paying the price asked by the vendor is significantly more cost-effective than repeating the cloning process at such short intervals. Hackers may break this system for fun, but they cannot reliably break it for profit. The flip side of this arrangement is that when a program "calls in" to the company's network for a random audit, the productivity count assigned to that user's program is reconciled, and if no real payment has been received, the company network simply withholds its confirmation and the program stops working. Unless a user intends an outright gift (in which case it would presumably be appropriate for the recipient to do something resembling paying), there is no motivation to give the program to a friend.

The second problem, "big brother," is the social and perceptual problem that attends the "authorization" connection between the user's network and the company's network described above, with all manner of real and imagined concerns. Even given an anti-big-brother solution that cannot be faulted, a core of conspiracy theorists will still insist otherwise. With this in mind, one possible solution is to set up a single program registry, chartered mainly as a public or non-profit institution, which operates and coordinates the real-time verification network. Such an entity would have companies as customers as well as users. An organization such as the Software Publishers Association, for example, might choose to lead such an effort.

To conclude this section, we re-emphasize that the method outlined here requires a highly connected distributed system, namely an Internet more ubiquitous and cheaper than the one existing in mid-1995. The growth rate in raw digital communication bandwidth argues that such a system may become practical sooner than it first appears. (The prospect of interactive TV holds out the promise of high-speed networks linking millions of sites around the world.)

Use of current encryption methods in connection with this technology  It should be briefly noted that certain implementations of the principles of this technology can make good use of current encryption methods. One example would be a system that allows graphic artists and digital photographers to register their photographs in real time with a copyright authority. The master code signal, or some representative portion thereof, may advantageously be sent directly to a third-party registry. In such a case the photographer wants to know that the code was sent securely and not intercepted in transit, and conventional encryption may be used. Likewise, a photographer or musician, or any other user of this technology, may wish to use the trusted time-stamping services that are becoming more common. Such services can advantageously be used in conjunction with the principles of this technology.

Details on the legal detection and removal of invisible signatures  Generally, if a given entity can recognize a signature hidden in a given set of empirical data, that same entity can proceed to remove the signature. Fortunately, in practice, the difficulty of the two tasks can be made to differ greatly. At one extreme, a software program that performs authorized functions on empirical data can be made very difficult to "decompile," and in general the same bits of that software cannot be turned around and used to "remove" a signature (short of extreme measures). On the other hand, if a hacker deliberately discovers and comes to understand the "public codes" used in a data exchange system, and learns how the signatures are recognized, it is no great step for that hacker to read a given set of signed data and produce a data set with the signature effectively removed. In this latter case, there are interesting statistics, not considered here, that can often reveal that a signature has been removed.

These and other such attempts to remove signatures can be classed as illicit. Current and past developments in copyright law have generally addressed activities of this criminal character, typically attaching penalty and injunction language to the established statutes. It is likely that counsel concerned with this signature technology will seek to ensure that the (a) creation, (b) distribution, and (c) use of tools that fraudulently remove these types of copyright protection mechanisms are offenses subject to the same penalties and injunctions. On the other hand, it is a purpose of this disclosure to point out that, through the steps outlined here, a software program that knows how to recognize these signatures can readily be turned to removing them, by subtracting the same signal energy it found during the recognition process, i.e., by inverting the signature. By pointing this out in this disclosure, it becomes clear that software or hardware performing this signature-removal operation is not only (possibly) criminal, but also (possibly) an infringement of the patented technology if not properly licensed by its owner.

Legal and routine removal of signatures is more straightforward. In certain instances, public signatures can be deliberately formed to be marginally visible (i.e., their strength deliberately increased), and in this way distribution "proofs" can be created. "Proof sets" and "proofs" have been used in the photographic industry for quite some time, serving to distribute deliberately degraded images to prospective customers who can evaluate them commercially but should not be able to use them where quality matters. In the case of this technology, increasing the strength of the public code serves as a way of consciously reducing commercial value; the public signature can then be removed (and possibly replaced by a new invisible tracking code or signature, public or private) by hardware or software activated upon payment of the purchase price for the subject matter.

Monitoring stations and monitoring services  Ubiquitous and cost-effective recognition of signatures is a central issue for the spread of the principles of this technology. Several sections of this disclosure address this topic in various ways. This section focuses on the idea that entities such as monitoring nodes, monitoring stations, and monitoring agencies can be formed as part of an organized implementation of the principles of the technology. Operating such an entity requires knowledge of the master codes, and may require access to the empirical data in its raw (unencrypted and untransformed) form. (Access to the original, unsigned empirical data assists the analysis, but is not necessary.)

Three basic forms of monitoring station arise directly from the previously defined classes of master codes: private, semi-public, and public. The distinction rests simply on the secrecy of the master codes. An example of a completely private monitoring station might be a large photographic stock agency that decides to place a certain basic pattern in its distributed material, knowing that a determined thief could detect and remove it, but judging that possibility to be economically negligible. The agency, concerned about the copyright status of its high-value advertising and other photographs, watches for uses in which the basic pattern is relatively easy to inspect and find, and hires a part-time employee to inspect photographs it suspects may be infringing. The part-timer can cycle through these potentially infringing cases in a few hours; where the basic pattern is found, a more thorough analysis follows: the original image is located, and the complete unique-identification process outlined in this disclosure is performed. Two central economic values accrue to an agency that does this, and by definition these values must outweigh the cost of the monitoring service and of the signature processing itself. The first value lies in informing its customers, and the world, that a monitoring service exists, backed by whatever statistics it can offer on its ability to catch infringers of its signed material. This is a deterrent value, and probably the greatest value in the end. A general prerequisite for this first value is the second: copyright usage fees actually recovered, derived from the monitoring effort, and the track record so built, which deters would-be infringers (reinforcing the first value).

Semi-public and public monitoring stations follow much the same pattern, and indeed third-party services could be started in these systems: given knowledge of the master codes, they search through the "creative property" of customers, who may number in the thousands or millions, looking for the codes and reporting the results to those customers. ASCAP and BMI take "lower-tech" approaches to this same basic service.

A large coordinated monitoring service using the principles of this technology would classify its creative-property suppliers into two basic categories, those using public-domain master codes and those using private master codes (and, of course, hybrids of the two). The monitoring service would perform daily samplings (inspections) of publicly available images, video, audio, and the like, with high-level pattern inspection performed by banks of supercomputers: scanning magazine advertisements and images for analysis, digitizing video from commercial channels, sampling audio, randomly downloading from public Internet sites, and so on. These basic data streams would then feed a constantly churning monitoring program that searches for pattern matches between its large banks of public and private codes and the material under examination. A small subset, perhaps a sizable list, would be flagged as potential matches and passed to a more precise inspection system, which determines whether the correct signatures are present and analyzes the flagged material in detail. The result would be a small set of confirmed matches, for which the owner of the subject matter is clearly identified and a monitoring report is sent to the customer, who can then check whether each use was a legitimate sale of their material. The same two values of the private monitoring service outlined above apply equally here. In the case of a discovered and proven infringement, the monitoring service can also handle the follow-up, sending the infringing party a letter documenting the infringement found and demanding the copyright royalties due, with the infringing party thereby avoiding the more costly option of going to court.

Method for Embedding Subliminal Registration Patterns in Images and Other Signals  The concept of reading an embedded signal includes the concept of registration. The underlying master noise signal must be known, and its relevant portions must be located (registered), before the reading process itself can begin (e.g., reading the 1s and 0s of the N-bit identification word). If one has access to the original, or to a thumbnail, of the unsigned signal, this registration process is quite simple. If one does not have access to this signal, as is often the case in the universal code applications of this technology, different methods must be used to perform the registration step. Pre-marked photographic film and paper, for which by definition no "unsigned" original ever exists, are perfect examples of this latter situation.

Many previous sections have considered this problem in various ways and offered some solutions. Consider in particular implementations in which a term of the "simple" universal codes, or of a given master code, is known in advance, but its exact location (and indeed its presence or absence) is unknown. This section gives a specific example of how a very-low-level designed signal can be embedded within a much larger signal, where the designed signal is standardized so that detection equipment or reading processes can search for it even though its exact disposition is not known. A short passage in the section on number codes pointed out that this basic concept can be extended to two dimensions, indeed to images and motion pictures. The sections on symmetric patterns and noise patterns outline yet another approach to the two-dimensional case, in which the nuances of two-dimensional scale and rotation are described more fully. In that regard, the idea is not merely to determine the exact orientation and scale of the underlying noise pattern, but also to have the pattern carry information of its own, for example N-bit identification words.

This section attempts to isolate the sub-problem of registering an embedded pattern purely for registration's sake. Once the embedded pattern is registered, it will be seen how the registration serves wider requirements. This section presents yet another technique for embedding patterns, one that can be called the "subliminal digital graticule." "Graticule," like other terms such as fiducial, reticle, or hash marks, conveys the idea of calibration marks used for positioning and/or measuring something. Here it is a low-level pattern that serves a kind of gridding function. The gridding function itself can carry one bit of information (its absence or presence, duplication or non-duplication), as with the one second of universal noise, or it can simply establish the orientation and scale of other information such as embedded signatures, or simply serve to register the image or audio object itself.

FIGS. 29 and 30 visually summarize two related methods of implementing Applicants' subliminal digital graticules. As will be discussed, the method of FIG. 29 has slight practical advantages over that outlined in FIG. 30, but both methods attack the registration problem through a series of steps converging on a solution. The overall problem can be stated simply: given an arbitrary image that may have been stamped with a subliminal digital graticule, find the scale, rotation, and origin (offset) of that graticule.

The starting point for subliminal graticules is to define what they are. Briefly, they are visual patterns that are added directly to other images or, where appropriate, exposed onto photographic film or paper. The classic double exposure is not a bad analogy, even if the concept stretches somewhat in digital imaging. These patterns are generally invisible (subliminal) when combined with "normal" images and exposures, and, as with embedded signatures, they are by definition added at very low brightness or exposure levels that do not interfere with a wide range of images.

FIGS. 29 and 30 each define a class of subliminal graticules, as represented in the spatial frequency domain, referred to here as the UV plane 1000. A general two-dimensional Fourier transform algorithm can convert any given image into its UV-plane conjugate. For clarity, the depictions in FIGS. 29 and 30 show only specific frequency amplitudes, since it is difficult to depict the phase and amplitude present at every point.

FIG. 29 shows an example 1002 having six points in each quadrant along the 45-degree lines. These points are exaggerated in the figure; at their actual levels they would be difficult to distinguish by visual inspection of the UV plane. Also shown is a coarse depiction 1004 of a "typical" power spectrum of an arbitrary image; a power spectrum is generally as unique as the image itself. The subliminal graticule essentially consists of these points: in this example, six spatial frequencies arrayed along each of the two 45-degree axes. The amplitudes of the six frequencies may be the same or may differ (a subtle distinction discussed later). Generally speaking, the phases differ from one another, including between one 45-degree axis and the other. FIG. 31 shows this graphically: the phases in this example are simply placed randomly between PI and -PI, at 1008 and 1010. Since the reflected quadrants are simply mirror images offset by PI/2, only two axes are plotted in FIG. 31 to cover the four quadrants. If the intensity of this subliminal graticule is increased and the result transformed to the image domain, a wavy cross-hatch pattern appears, as described in connection with FIG. 29. As stated, this wavy pattern is added to a given image at very low intensity. The exact frequencies and phases of the spectral components used are stored and standardized; they become the "spectral signature" that the registration equipment and reading processes seek out and measure.

Briefly, FIG. 30 presents a variation on this same general theme. FIG. 30 shows a different class of graticule in which the spectral energy lies along a simple set of concentric rings rather than at points along the 45-degree axes. FIG. 32 shows the pseudo-random phase profile as a function of angle along a half circle (the other half of the circle is offset in phase by PI/2 from the first half). These are simple examples; a wide variety of variations are possible in the design of the phase profiles and the radii of the concentric rings. This form of subliminal graticule has little of the "pattern" of the wavy graticule of FIG. 29, appearing instead more random, like a snowy image.

The idea behind both forms of graticule is the same: a unique pattern that remains visually imperceptible within the image to which it is added, but that has characteristics facilitating fast localization of the pattern, and that, once the pattern is roughly localized, allows its exact position, scale, and orientation to be found with a high degree of accuracy. The design goal, accordingly, is a pattern that, on average, interferes as little as possible with the representative images to which it is added, while carrying the greatest energy relative to its visibility.

Proceeding to a full summary of how the overall process works: registration with the graticule form of FIG. 29 begins by first locating the rotation axes of the subliminal graticule, then determining the graticule's scale, and then determining the origin or offset, thereby enabling the image processing studies to follow. A final step resolves which of the two 45-degree axes is which by examining the phases, so that an exact determination can be made even if the image has been substantially disturbed. The first and second steps can be performed using the power spectrum data alone, as opposed to the phases and amplitudes; the phase and amplitude signals can then be used to "fine tune" the exact rotation angle and scale. The graticule of FIG. 30 swaps the first two steps, first finding the scale and then the rotation, and then proceeds to determine the origin exactly. One skilled in the art will recognize that determining these parameters along two axes suffices to fully register the image. The "engineering optimization challenge" is to maximize the distinctiveness and energy of the pattern relative to its visibility, while minimizing the computational overhead needed to reach a given level of registration accuracy. Obviously, where photographic film and paper are to be exposed, an additional engineering challenge is the economics of getting the pattern exposed onto the film and paper in the first place, a challenge pointed out in a previous section.

The problem and solutions defined above address registration purely for registration's sake. Note that nothing has been said about forming any judgment as to whether a graticule has actually been found. Obviously, the above steps can be applied to images that have no graticule inside, in which case the measurements simply track noise. One must feel sympathy for the engineer assigned the task of setting "detection thresholds" for these kinds of patterns, given the tremendously wide range of images and imaging conditions across which the patterns must be sought and verified. [This is akin to the purely universal one-second noise code of a previous section, and to the question of why it is appropriate to go beyond simply detecting or not detecting that single signal, i.e., why add information planes.] Here a real beauty appears: combining the subliminal graticule with the registered embedded signatures described elsewhere in this disclosure. Paying homage to the difficulty of setting thresholds on noise, once a candidate registration is found, the next logical step is to attempt to read, for example, a 64-bit universal code signature. As one example, 44 bits of the 64-bit identification word can be assigned as a registered user's index, in effect a serial number. The remaining 20 bits are reserved for a hash, well known in encryption technology, of the 44-bit identification code thus obtained. At a stroke, those 20 bits serve as the answer to "yes, this is a registered image" or "no, it is not." More importantly, this is perhaps the most flexible system for precisely defining the "false positive" rate of an automated identification system, whereas threshold-based detection always remains at the mercy of outlier conditions and situations. It amounts to being given N coin flips whenever they are needed.
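
A sketch of this 44+20-bit arrangement follows. The choice of SHA-256 as the hash is an assumption made for illustration; the disclosure says only that a hash code well known in encryption technology is used.

    import hashlib

    def make_id_word(user_index: int) -> int:
        # 64-bit identification word: 44-bit registered-user index + 20-bit hash.
        assert 0 <= user_index < (1 << 44)
        digest = hashlib.sha256(user_index.to_bytes(6, "big")).digest()  # hash choice assumed
        check = int.from_bytes(digest[:3], "big") & 0xFFFFF              # keep 20 bits
        return (user_index << 20) | check

    def looks_registered(word: int) -> bool:
        # The 20 hash bits answer "yes, a registered image" or "no";
        # random data passes with probability of only about 2**-20.
        idx, check = word >> 20, word & 0xFFFFF
        digest = hashlib.sha256(idx.to_bytes(6, "big")).digest()
        return check == (int.from_bytes(digest[:3], "big") & 0xFFFFF)

    assert looks_registered(make_id_word(123456))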

Returning to the main thread: these graticule patterns must first be added to an image or exposed onto film. A straightforward example program reads a digital image of any size, adds the specified graticule to the digital image, and generates an output image. In the case of film, the graticule pattern is physically exposed onto the film during, or after, exposure of the original image. All of these methods admit wide variations in how they are performed.
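
The following sketch generates a FIG. 29-style graticule and adds it to an image at low intensity. The particular radii, image size, and strength are arbitrary assumptions; the essential points are the impulses along the two 45-degree axes, the random phases, and the Hermitian symmetry that keeps the spatial pattern real-valued.

    import numpy as np

    def make_graticule(size=256, radii=(12, 20, 28, 36, 44, 52), seed=1):
        # Wavy cross-hatch graticule: impulses along the UV-plane diagonals.
        rng = np.random.default_rng(seed)
        F = np.zeros((size, size), dtype=complex)
        for r in radii:
            for sx, sy in ((1, 1), (1, -1)):      # the two 45-degree axes
                u, v = (sx * r) % size, (sy * r) % size
                F[u, v] = np.exp(1j * rng.uniform(-np.pi, np.pi))
                F[-u % size, -v % size] = np.conj(F[u, v])  # Hermitian symmetry
        g = np.fft.ifft2(F).real
        return g / np.abs(g).max()

    image = np.random.rand(256, 256) * 255
    marked = image + 2.0 * make_graticule()        # added at very low intensity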

Searching for and registering subliminal graticules is the more interesting and demanding process. This section first describes the elements of this process and concludes with the generalized flowchart of FIG.

FIG. 33 shows the first major "search" step of the registration process for graticules of the type in FIG. 29. A suspect image (or a scan of a suspect photograph) is first converted to its Fourier representation using a known 2D FFT routine. The input image resembles that shown in the upper left of FIG. 33. FIG. 33 conceptually represents the case where the image and graticule are not rotated, although the subsequent processing fully addresses the rotation problem. After transforming the suspect image, the power spectrum of the transform is calculated, as the square root of the sum of the squares of the two (real and imaginary) coefficients. It is also useful to apply a mild low-pass filter, such as a 3x3 blur, to the resulting power spectrum data, so that subsequent search steps do not require impossibly fine increments. Next, candidate rotation angles from 0 to 90 degrees (0 to PI/2 in radians) are stepped through. At each angle, two vectors are computed: the first is a simple summation of the power spectrum values along the four lines emanating from the origin, one per quadrant, at the given angle; the second is a moving average of the first. A normalized power profile is then calculated, as depicted at 1022 and 1024, the difference between the two plots being that one lies along an angle not aligned with the subliminal graticule while the other is aligned. Normalization means the first vector forms the numerator, and the second (moving-average) vector the denominator, of the resulting vector. As can be seen at 1022 and 1024 in FIG. 33, a train of peaks (there should be "6" rather than the "5" shown) appears when the angle aligns with the graticule's original orientation. These peaks can be detected by applying a threshold to the normalized values and integrating the above-threshold values along the full radial line. A plot 1026 of this integral from 0 to 90 degrees, shown at the bottom of FIG. 33, indicates that the 45-degree angle contains the maximum energy. In practice, this signal is often much weaker than the bottom figure suggests, so rather than simply choosing the highest value as the "found rotation angle," a few best candidate angles can be selected and submitted to the next stage of the registration process. The foregoing is merely one known signal detection scheme; those skilled in the art will appreciate that there are many such schemes that could be devised or borrowed to the same end. The essential requirement of this first stage is to reduce the candidate rotation angles to a few, after which a more precise search can take over.
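
A sketch of this first search stage follows, under the same assumptions as the graticule sketch above: compute the power spectrum, form the normalized radial profile at each candidate angle, and keep the few best angles rather than committing to one. The threshold and step sizes are illustrative only.

    import numpy as np

    def rotation_candidates(img, n_angles=90, n_r=100, top_k=3):
        # Power spectrum of the suspect image (magnitude squared of the FFT).
        P = np.abs(np.fft.fftshift(np.fft.fft2(img))) ** 2
        cy, cx = np.array(P.shape) // 2
        rmax = min(cy, cx) - 1
        angles = np.linspace(0, np.pi / 2, n_angles, endpoint=False)
        r = np.linspace(2, rmax, n_r)
        scores = []
        for a in angles:
            profile = np.zeros(n_r)
            # Sum power along the four symmetric rays at this angle.
            for qa in (a, np.pi - a, np.pi + a, -a):
                ys = (cy + r * np.sin(qa)).astype(int)
                xs = (cx + r * np.cos(qa)).astype(int)
                profile += P[ys, xs]
            smooth = np.convolve(profile, np.ones(9) / 9, mode="same")
            norm = profile / (smooth + 1e-12)       # numerator over moving average
            scores.append(np.sum(norm[norm > 2.0])) # threshold, then integrate
        best = np.argsort(scores)[::-1][:top_k]
        return np.degrees(angles[best])             # a few candidates, not one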

FIG. 34 outlines an essentially similar search in the power spectral domain for graticules of the form of FIG. 30. Here, however, the search steps from small scale to large scale, over the overall scale of the concentric rings, rather than over rotation angle. The graph shown at 1032 is the same kind of normalized vector as 1022 and 1024, but here the values are plotted as a function of angle along a semicircle. The moving-average denominator must still be calculated in the radial direction rather than tangentially. As plot 1040 shows, a similar "peaking" of the normalized signal occurs when the scanned circle coincides with a graticule ring. The scale can then be found in the bottom plot by matching the known properties of the concentric rings (i.e., their radii) against the profile at 1040.

FIG. 35 shows the second main step in registering a subliminal graticule of the type in FIG. 29. Having found a few rotation candidates by the method of FIG. 33, we take plots of the candidate angles in the form of 1022 and 1024 and perform a filtering operation on these vectors using what the inventor calls a "scaled kernel," a form of matched filter. The scaled kernel encodes the known frequency relationships of the graticule; the kernel in this case is depicted as the row of x's at the top of 1042 and 1044, and the scale of these frequencies is swept over some predetermined range, such as 25% to 400% of the required 100% scale. The matched filter operation simply sums the plot values multiplied by the kernel at each trial scale. Those skilled in the art will recognize the similarity of this operation to the very well known matched filter. The plot resulting from the matched filter operation looks something like 1046 at the bottom of FIG. 35. Each candidate angle from the first step generates its own such plot, and the highest value among all these plots identifies our candidate scale (and, with it, the rotation angle). For graticules of the form of FIG. 30, a similar "scaled kernel" matched filter operation is performed on plot 1040 of FIG. 34, which generally yields a single candidate scale factor. Next, using the stored phase plots 1012, 1014, and 1016 of FIG. 32 as kernels, more conventional matched filter operations can be performed against the phase profile measured along the half circle at the previously found scale.
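
A sketch of the "scaled kernel" matched filter follows, assuming the nominal graticule radii are known and that a normalized radial profile (as in 1022/1024) has already been computed for a candidate angle; the sweep range and tap radii are illustrative.

    import numpy as np

    NOMINAL = np.array([12.0, 20, 28, 36, 44, 52])  # graticule radii at 100% scale

    def best_scale(profile, r_axis, scales=np.linspace(0.25, 4.0, 376)):
        # Sweep the kernel over trial scales; the response peaks when the
        # scaled tap positions line up with the profile's peaks (cf. 1046).
        responses = []
        for s in scales:
            taps = NOMINAL * s
            taps = taps[taps < r_axis.max()]
            if taps.size < 3:                 # too few taps left to judge
                responses.append(-np.inf)
                continue
            responses.append(np.interp(taps, r_axis, profile).mean())
        return scales[int(np.argmax(responses))]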

The last step in registering graticules of the form of FIG. 29 is to perform any of a variety of common matched filter operations between the known (spectral or spatial) profile of the graticule and the suspect image. This matched filtering is straightforward, since the rotation, scale, and orientation are known from the previous steps. If the precision of the previous steps falls short of the design specification for the process, a small search can be performed over a narrow range of the two parameters, scale and rotation, with the highest matched filter value found determining the "fine-tuned" scale and rotation. In this way, the scale and rotation can be found to within a tolerance set by the noise and cross-talk of the suspect image itself. Likewise, once the scale and rotation of a graticule of the form of FIG. 30 have been found, a simple matched filter operation, with the same application of "fine tuning," can complete the registration process.

Proceeding to variations on the use of graticules of the form of FIG. 29, FIG. 36 presents the possibility of finding subliminal graticules without performing a costly two-dimensional FFT (fast Fourier transform). In situations where computational overhead is a major concern, the search problem can be reduced to a series of one-dimensional steps, and FIG. 36 shows how. In the upper left of the figure is an arbitrary image in which a graticule of the format of FIG. 29 is embedded. Stepping, for example, by 5 degrees, starting at 0 degrees and ending at 180 degrees, the gray values along the illustrated columns are simply summed to form a resulting column-integrated scan 1058; 1052 in the upper right of the figure shows one of the many angles at which this is done. The column-integrated scan is then converted to its Fourier representation using a one-dimensional FFT, which is much cheaper to compute, and then to an amplitude or "power" plot (the two are different), from which a normalized vector similar to 1022 and 1024 of FIG. 33 is formed. As the trial angle approaches the correct graticule angle, the characteristic peaks slowly begin to appear in plots like 1024; when the scan is slightly off in rotation, the peaks generally appear at higher frequencies than the graticule's scale would dictate. It remains to find the angle that maximizes the peak signal, homing in on the correct angle. Once the correct rotation is found, scaled-kernel matched filtering can be performed, followed by the conventional matched filtering described above. Again, the point of the "shortcut" of FIG. 36 is to significantly reduce the computational overhead of using graticules of the form of FIG. 29. The inventor has not reduced the method of FIG. 36 to practice, and currently has no data on how much computation it can save; such experiments belong to the engineering development attending any given application of the method.
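
A sketch of this one-dimensional shortcut follows, using a column integration at each trial angle followed by a cheap 1-D FFT. SciPy's image rotation is used here purely for convenience of illustration; a production version would sum along tilted lines directly.

    import numpy as np
    from scipy.ndimage import rotate

    def projection_spectrum(img, angle_deg):
        # Column-integral scan at one trial angle, then a 1-D FFT; when the
        # angle matches the graticule, its frequency peaks emerge (cf. 1024).
        rot = rotate(img, angle_deg, reshape=False, order=1)
        scan = rot.sum(axis=0)                       # integrate down the columns
        power = np.abs(np.fft.rfft(scan - scan.mean())) ** 2
        smooth = np.convolve(power, np.ones(9) / 9, mode="same")
        return power / (smooth + 1e-12)              # normalized profile

    def find_angle(img, step=5):
        angles = np.arange(0, 180, step)
        peaks = [projection_spectrum(img, a).max() for a in angles]
        return angles[int(np.argmax(peaks))]         # then zoom in around it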

FIG. 37 briefly summarizes, in the order of the main process steps, the foregoing registration procedure built around graticules of the form of FIG. 29.

In another alternative embodiment, the graticule energy is not confined to the 45-degree axes in the spatial frequency domain. Instead, the energy is distributed more widely across spatial frequencies. FIG. 29A shows one such distribution. Frequencies near the axes and near the origin are generally avoided, since that is where image energy is most likely to be concentrated.

Detection of this energy in a suspect image again relies on techniques such as those described above. However, instead of first finding the axes, then the rotation, and then the scale, a more comprehensive alignment procedure determines everything in one brute-force attempt. Those skilled in the art will recognize that the Fourier-Mellin transform lends itself to use in such pattern matching.

The above principles find application, for example, in photographic reproduction kiosks. Such an apparatus typically includes a lens for imaging a customer-provided original (e.g., a photographic print or film) onto an opto-electronic detector, and a print-writing device that exposes a photosensitive emulsion substrate (again, photographic paper or film) in accordance with the image data obtained by the detector, and then develops it. Details of such devices are known to those skilled in the art and are not belabored here.

In such a system, a memory stores the data from the detector, and a processor (e.g., a Pentium microprocessor with associated support components) processes the memory data to detect the presence of steganographically embedded copyright data. When such data is detected, the print writing is interrupted.

To prevent the system from being defeated by manually rotating the original image off-axis, the processor desirably implements the techniques described above, automatically registering the original despite scale, rotation, and origin-offset factors. If desired, a digital signal processing board can be used to offload some of the FFT processing from the main (e.g., Pentium) processor. Once a rotated/scaled image has been registered, detection of any steganographically embedded copyright information is straightforward, ensuring that the machine is not used to infringe a photographer's copyright.

Although the disclosed technique has been described with reference to the applicant's preferred steganographic encoding method, the principle applies more widely and can be used in the many cases where automatic registration of images is to be performed.

Use of the video data stream as a high-speed one-way modem  Through the use of the universal coding system outlined in previous sections, with master snowy frames changing from frame to frame in a simple manner, a simple receiver having prior knowledge of the frame-to-frame changes of the master snowy frame can be designed to read N-bit message words that change from frame to frame (or with every I-frame, as may be the case in MPEG video). In this way, a video sequence can be used as a high-speed one-way information channel, rather like a one-way modem. For example, consider a frame of video data having N scan lines that is steganographically embedded so as to convey an N-bit message. If there are 484 scan lines per frame (N = 484) and frames change 30 times per second, an information channel with a capacity comparable to a 14.4 kilobaud modem is achieved.

In practice, substantially more than N bits per frame can usually be achieved, yielding transmission rates approaching those of an ISDN circuit.
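
A minimal sketch of the one-way video modem idea follows. For simplicity it assumes a static scene across two frames, so the receiver can cancel the image by frame differencing; the carrier frames, gain, and frame width are illustrative assumptions.

    import numpy as np

    LINES, WIDTH, GAIN = 484, 512, 8.0
    rng = np.random.default_rng(7)
    C = rng.choice([-1.0, 1.0], size=(2, LINES, WIDTH))  # pre-agreed master snowy frames

    scene = rng.uniform(0, 255, (LINES, WIDTH))          # static scene, for simplicity
    bits = rng.integers(0, 2, (2, LINES))                # one bit per scan line per frame
    frames = [scene + GAIN * (2.0 * bits[k][:, None] - 1) * C[k] for k in range(2)]

    # Receiver: the frame-to-frame difference cancels the (static) scene; the
    # carriers change every frame, so correlating the difference with C[1]
    # reads frame 1's bits. 484 lines x 30 frames/s is about 14.4 kbit/s.
    diff = frames[1] - frames[0]
    decoded = ((diff * C[1]).sum(axis=1) > 0).astype(int)
    assert np.array_equal(decoded, bits[1])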

Fraud prevention in wireless communications  In the cellular telephone industry, revenues of $100 million are lost each year to service theft. Some of this is due to physical theft of cellular telephones, but the more damaging threat is posed by cellular telephone hackers. Hackers use a variety of electronic devices to mimic the identification signals produced by an authorized cellular telephone. (These signals are sometimes called authorization signals, identification numbers, signature data, etc.) Hackers often learn these signals by eavesdropping on, and recording, the data exchanged between authorized cellular telephone subscribers and cell sites. With clever use of this data, hackers can impersonate authorized subscribers and trick the carrier into completing illicit calls.

In the prior art, identification signals are separated from the voice signal. Most commonly they are separated in time, e.g., transmitted in a burst at the beginning of a call. Voice data passes over the channel only after a verification operation has been performed on this identification data (identification data is also typically included in data packets sent during the transmission). Another approach separates the identification spectrally, e.g., placing it in spectral subbands outside the band assigned to the voice data.

Other fraud prevention schemes are also used. One class of techniques monitors the RF signal characteristics of the cellular telephone's transmission to identify the originating phone. Another class uses handshaking protocols, in which some of the data returned by the cellular telephone is computed, by an algorithm (e.g., hashing), from random data sent to it.

Combinations of the foregoing approaches are sometimes used.

U.S. Pat. Nos. 5,465,387, 5,454,027, 5,420,910, 5,448,760, 5,335,278, 5,345,595, 5,144,649, 5,204,902, 5,153,919, and 5,388,212 illustrate prior art cellular telephone systems and the anti-fraud technologies used in them. The disclosures of these patents are incorporated by reference.

As anti-fraud systems have become more sophisticated, so have cellular telephone hackers. Ultimately, hackers have come to recognize that all prior art systems share the same weakness: identification is based on some attribute of the cellular telephone transmission other than the voice data itself. Because this attribute is separate from the voice data, such systems are perennially susceptible to thieves who electronically "splice" their own voices onto a composite electronic signal bearing whatever attributes are necessary to defeat the anti-fraud system.

To overcome this failing, preferred embodiments of this aspect of the technology steganographically encode the voice signal with the identification data, producing "in-band" signaling (in-band both temporally and spectrally). This approach allows the carrier to monitor the user's voice signal and extract the identification data from it.

In some such forms of the technology, some or all of the identification data used in the prior art (e.g., the data transmitted at the start of a call) is additionally encoded, steganographically and repeatedly, into the subscriber's voice signal. The carrier then checks, periodically or aperiodically, the identification data accompanying the voice data against the identification data sent at the start of the call, to assure that they match. If they do not match, the call can be flagged as hacked and remedial steps can be taken, such as interrupting the call.

In another form of the technology, one of several possible messages, randomly selected, is repeatedly steganographically encoded into the subscriber's voice. An index sent to the cellular carrier at the start of the call identifies the expected message. If the message steganographically decoded by the cellular carrier from the subscriber's voice does not match the expected one, the call is flagged as fraudulent.

In a preferred form of this aspect of the technology, the steganographic encoding relies on a pseudo-random data signal to transform the message or identification data into a low-level noise-like signal superimposed on the subscriber's digitized voice signal. This pseudo-random data signal is known, or knowable, to both the subscriber's telephone (for encoding) and the cellular carrier (for decoding). Many such embodiments rely on deterministic pseudo-random number generators seeded with a seed known to both the telephone and the carrier. In simple embodiments, the seed can remain constant from one call to the next (e.g., a telephone ID number). In more complex embodiments, a quasi-one-time-pad system can be used, with a new seed for each session (i.e., call). In hybrid systems, the telephone and the cellular carrier each have a reference noise key (e.g., 10,000 bits), from which the telephone selects a field of bits, such as 50 bits beginning at a randomly chosen offset; each side uses this excerpt as the seed to generate the pseudo-random data used in the encoding. Data (e.g., the offset) sent from the phone to the carrier at the start of the call allows the carrier to reconstruct the same pseudo-random data for use in decoding. Still further improvements can be obtained by borrowing basic techniques from cryptographic communications and applying them to the steganographically embedded signals detailed in this disclosure.
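
A sketch of the hybrid seeding arrangement, with illustrative sizes: both ends hold the same 10,000-bit reference noise key, the phone picks a random offset, and each side expands the 50-bit excerpt into the session's PRN stream.

    import numpy as np

    REF_KEY_BITS = np.random.default_rng(99).integers(0, 2, 10_000)  # shared reference key

    def session_prn(offset: int, n: int, width=50):
        # Both ends derive the call's PRN stream from a 50-bit excerpt of the
        # shared reference key; only the offset crosses the air at call setup.
        excerpt = REF_KEY_BITS[offset:offset + width]
        seed = int("".join(map(str, excerpt)), 2)     # excerpt -> integer seed
        rng = np.random.default_rng(seed)
        return rng.choice([-1, 1], size=n)

    offset = 1234  # chosen at random by the phone, sent to the carrier
    assert np.array_equal(session_prn(offset, 4800), session_prn(offset, 4800))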

Details of our preferred techniques for steganographic encoding/decoding with pseudo-random data streams have been set out at length earlier in this specification; this aspect of the technology, however, is not limited to use with those techniques.

The reader is assumed to be familiar with cellular communications technology; details known from the prior art in this field are therefore not reviewed here.

Referring to FIG. 38, an illustrative cellular system includes a telephone 2010, a cell site 2012, and a central office 2014.

Conceptually, the telephone can be viewed as including a microphone 2016, an A/D converter 2018, a data formatter 2020, a modulator 2022, an RF section 2024, an antenna 2026, a demodulator 2028, a data unformatter 2030, a D/A converter 2032, and a speaker 2034.

In operation, the subscriber's voice is picked up by the microphone 2016 and converted to digital form by the A/D converter 2018. The data formatter 2020 puts the digitized voice into packet form, adding synchronization and control bits. The modulator 2022 converts this digital data stream into an analog signal whose phase and/or amplitude varies in accordance with the data. The RF section 2024 typically translates this time-varying signal to one or more intermediate frequencies, and finally to a UHF transmission frequency, then amplifies it and supplies the resulting signal to the antenna 2026 for broadcast to the cell site 2012.

This process works in reverse when receiving. A broadcast from the cell site is received by the antenna 2026. The RF section 2024 amplifies the received signal and translates it to a different frequency for demodulation. The demodulator 2028 processes the amplitude and/or phase variations of the signal supplied by the RF section and produces a corresponding digital data stream. The data unformatter 2030 separates the voice data from the associated synchronization/control data and passes it to the D/A converter 2032 for conversion to analog form. The output of the D/A converter drives the speaker 2034, through which the subscriber hears the other party's voice.

The cell site 2012 receives broadcasts from a plurality of telephones 2010 and relays the received data to the central office 2014. Likewise, the cell site 2012 receives data from the central office and broadcasts it to the telephones.

The central office 2014 performs a variety of operations, including authentication, switching, and cell handoff.

(In some systems, the division of functions between the cell site and the central office differs from that outlined above; in some systems, indeed, all of this functionality is provided at a single site.)

In an exemplary embodiment of this aspect of the technology, each telephone 2010 additionally includes a steganographic encoder 2036. Likewise, each cell site includes a steganographic decoder 2038. The encoder operates to hide an auxiliary data signal within the signal representing the subscriber's voice. The decoder performs the reverse function, discerning the auxiliary data signal in the encoded voice signal. The auxiliary signal serves to verify the legitimacy of the call.

  An exemplary steganographic encoder 2036 is shown in FIG.

The illustrated encoder 2036 operates on digitized voice data, auxiliary data, and pseudo-random noise (PRN) data. The digitized voice data is applied at port 2040 and is supplied, for example, from the A/D converter 2018; it may comprise 8-bit samples. The auxiliary data is applied at port 2042 and, in one form of the technology, may comprise a stream of binary data uniquely identifying the telephone 2010. (The auxiliary data may additionally include administrative data of the type conventionally exchanged with the cell site at the start of a call.) The pseudo-random data signal is applied at port 2044 and may be, for example, a signal that alternates randomly between the values "-1" and "1". (More and more cellular telephones incorporate spread-spectrum circuitry, and the pseudo-random noise signal, and other elements of this aspect of the technology, can often "piggy-back" on, or share, signals already used in the basic operation of such cellular units.)

For expository convenience, all three data signals applied to the encoder 2036 are clocked at a common rate, although this is not necessary in practice.

In operation, the auxiliary data and PRN data streams are applied to the two inputs of a logic circuit 2046. The output of circuit 2046 switches between -1 and +1 according to the following table:

  Auxiliary data    PRN data    Output
        0              -1         +1
        0              +1         -1
        1              -1         -1
        1              +1         +1

(If the auxiliary data signal is regarded as switching between -1 and 1, rather than between 0 and 1, it can be seen that circuit 2046 operates as a one-bit multiplier.)

The output signal from gate 2046 is thus a bipolar data stream whose instantaneous value varies randomly in accordance with the corresponding values of the auxiliary data and the PRN data. Nevertheless, the auxiliary data is encoded within it, and can be extracted if the corresponding PRN data is known.

The noise-like signal from gate 2046 is applied to the input of a scaler circuit 2048. The scaler circuit scales (e.g., multiplies) the input signal by a factor set by a gain control circuit 2050. In the illustrated embodiment, this factor can vary between 0 and 15. The output signal from the scaler circuit 2048 can therefore be represented as a five-bit data word (four bits plus a sign bit) that changes each clock cycle in accordance with the auxiliary data, the PRN data, and the scale factor. The output of the scaler circuit can be regarded as "scaled noise data" (noise, however, from which the auxiliary data can be recovered once the PRN data is again supplied).

This scaled noise data is added to the digitized voice signal by an adder 2051 (e.g., by binary addition, sample by sample) to produce the encoded output signal. This output signal is a composite representing both the digitized voice data and the auxiliary data.
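
The encoder's data path can be sketched as follows. A fixed scale factor stands in for the gain control circuit 2050 (whose adaptive behavior is described next), and, for illustration only, the decoder is handed a clean voice estimate; the redundancy described below is what permits decoding without one.

    import numpy as np

    def encode(voice: np.ndarray, aux_bits: np.ndarray, prn: np.ndarray, scale=4):
        # Steganographic encoder 2036: one auxiliary bit per voice sample.
        bipolar = 2 * aux_bits - 1             # gate 2046: 0/1 -> -1/+1, times PRN
        noise = scale * bipolar * prn          # scaler 2048 (fixed factor here)
        return np.clip(voice + noise, 0, 255)  # adder 2051, clipped against rails

    def decode(received: np.ndarray, voice_estimate: np.ndarray, prn: np.ndarray):
        # Carrier side: correlate the residual with the known PRN data.
        return (((received - voice_estimate) * prn) > 0).astype(int)

    rng = np.random.default_rng(0)
    voice = rng.integers(20, 235, 4800).astype(float)
    bits = rng.integers(0, 2, 4800)
    prn = rng.choice([-1, 1], 4800)
    assert np.array_equal(decode(encode(voice, bits, prn), voice, prn), bits)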

The gain control circuit 2050 controls the amplitude of the scaled noise data so that its addition does not significantly degrade the voice data when the composite is converted back to analog form and heard by a subscriber. The gain control circuit can operate in various ways.

  One approach is a logarithmic scaling function. Thus, for example, voice data samples having decimal values of 0, 1, or 2 may correspond to scale factors of 0 or 1, while voice data samples having values of 200 or more may correspond to a scale factor of 15. Generally speaking, scale factors and voice data values correspond through a square-root relationship: a four-fold increase in the value of the voice data corresponds to roughly a doubling of the associated scale factor. Other scaling functions, such as linear functions derived from the average power of the voice signal, can also be used.
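As a concrete illustration of such a square-root-law mapping, the sketch below converts a single voice sample to a scale factor. The anchor points (samples of 0 to 2 mapping to 0 or 1, samples of 200 or more mapping to 15) come from the text; the interpolating constant between them is our own assumption.

```python
import math

def scale_factor(voice_sample: int) -> int:
    """Square-root-law gain sketch: quadrupling the voice value roughly
    doubles the scale factor."""
    if voice_sample <= 2:
        return 1 if voice_sample else 0
    # Interpolate so that a sample of 200 yields the maximum factor of 15.
    return min(15, int(round(math.sqrt(voice_sample) * 15 / math.sqrt(200))))
```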

  (The occasional reference to a scale factor of zero alludes, for example, to cases in which the digitized voice signal samples have essentially no information content.)

  More satisfactory than basing the instantaneous scale factor on a single voice data sample is basing it on the dynamics of several samples: a rapidly changing stream of digitized voice data can hide relatively more auxiliary data than a slowly changing stream. Accordingly, the gain control circuit 2050 can be made responsive to the first-order, or preferably second- or higher-order, derivatives of the voice data in setting the scale factor.

  In still other embodiments, gain control block 2050 and scaler 2048 may be omitted entirely.

  (Those skilled in the art will recognize the possibility of "rail errors" in such a system. For example, if the digitized voice data consists of 8-bit samples spanning the full range from 0 to 255 (decimal), the addition of scaled noise to (or its subtraction from) the input signal may produce an output signal that cannot be represented in 8 bits (e.g., -2, or 257). A number of well-understood techniques correct this situation, some proactive and some reactive. Among these known techniques are: specifying that the digitized voice data may not take values in the ranges 0-4 or 241-255, thereby leaving headroom so the data can safely be combined with the scaled noise signal; and detecting digitized voice samples that would produce rail errors and adaptively correcting them.)

  Returning to the phone 2010, an encoder 2036 as detailed above is suitably placed between the A/D converter 2018 and the data formatter 2020, so that all voice transmissions are steganographically encoded with the auxiliary data. In addition, the circuitry or software controlling the operation of the phone is arranged so that the auxiliary data is encoded repeatedly: when all the bits of the auxiliary data have been encoded, a pointer wraps around and the auxiliary data is applied to the encoder 2036 anew. (The auxiliary data may be stored at known addresses in RAM memory for ease of reference.)

  It will be appreciated that the auxiliary data in the illustrated embodiment is transmitted at one-eighth the bit rate of the voice data: scaled noise data corresponding to a single bit of auxiliary data is sent with each 8-bit sample of voice data. Thus, if voice samples are sent at 4,800 samples/second, auxiliary data can be sent at 4,800 bits/second. If the auxiliary data is composed of 8-bit symbols, it can be conveyed at 600 symbols/second; and if the auxiliary data consists of a fixed 60-symbol string, each second of voice carries the auxiliary data ten times. (Much higher auxiliary data rates can be achieved by availing of more efficient coding techniques, such as limited symbol codes (e.g., 5- or 6-bit codes), Huffman coding, etc.) This highly redundant transmission of the auxiliary data allows a smaller amplitude of scaled noise data to be used while still providing enough signal-to-noise headroom to assure reliable decoding, even in the relatively noisy environment of wireless transmission.

  Returning now to FIG. 40, each cell site 2012 includes a steganographic decoder 2038 that analyzes the composite data signal broadcast by a telephone 2010 and separates the auxiliary data from the digitized voice data. (The decoder preferably operates on unformatted data, i.e., data with the packet overhead, control, and administrative bits removed; these are not shown for clarity of illustration.)

  Decoding an unknown embedded signal (i.e., the embedded auxiliary signal) from an unknown voice signal is best performed by some form of statistical analysis of the composite data signal, and the techniques described earlier can equally be used here. For example, an entropy-based approach can be used. In this case, the auxiliary data repeats every 480 bits (rather than every 8 bits), so the entropy-based decoding process examines every 480th sample of the composite signal. In particular, the process begins by adding PRN data to the first, 481st, 961st, etc. samples of the composite data signal (i.e., adding a sparse PRN data set: the original PRN set with all but every 480th value zeroed). The entropy of the resulting signal around these points (i.e., of the composite data signal with every 480th sample so changed) is then computed.

Next, the above step is repeated, this time subtracting the PRN data corresponding to the first, 481st, 961st, etc. composite data samples.

One of these two operations counteracts (e.g., undoes) the encoding process and reduces the entropy of the resulting signal; the other increases it. If adding the sparse PRN data to the composite data reduces its entropy, this data must earlier have been subtracted from the original voice signal. This indicates that the corresponding bit of the auxiliary data signal was a "0" when these samples were encoded. (A "0" at the auxiliary data input of logic circuit 2046 produces an inverted version of the corresponding PRN datum as its output, effecting a subtraction of the corresponding PRN datum from the voice signal.)

Conversely, if subtracting the sparse PRN data from the composite data reduces its entropy, the encoding process must earlier have added this signal. This indicates that the value of the auxiliary data bit was a "1" when samples 1, 481, 961, etc. were encoded.

  By noting whether entropy is lowered by (a) adding a sparse set of PRN data to the composite data, or (b) subtracting a sparse set of PRN data from it, it can be determined whether the first bit of the auxiliary data is (a) a "0" or (b) a "1". (In actual practice, in the presence of various distortion phenomena, the composite signal may be degraded enough that neither addition nor subtraction of the sparse PRN data actually reduces entropy; both operations will increase it. In that case, the "proper" operation can be identified by observing which operation increases the entropy less.)

  The above operations are then performed on the group of samples of the composite signal spaced 480 apart beginning with the second sample (i.e., samples 2, 482, 962, ...). The entropy of the resulting signals indicates whether the second bit of the auxiliary data signal is a "0" or a "1". This continues through the remaining 478 groups of spaced samples in the composite signal until all 480 bits of the code word have been identified.
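A compact sketch of this entropy-based bit decision follows. The composite signal, PRN sequence, and encoder noise amplitude are assumed given and aligned; the sum of absolute differences between adjacent samples serves as a stand-in entropy measure, which is our assumption rather than the text's prescription.

```python
import numpy as np

def entropy_proxy(signal):
    # Smoother signals score lower on this crude entropy stand-in.
    return float(np.sum(np.abs(np.diff(signal))))

def decode_bit(composite, prn, scale, bit_index, period=480):
    """Decide one auxiliary bit via the sparse add/subtract entropy test."""
    idx = np.arange(bit_index, len(composite), period)  # e.g. 0, 480, 960, ...
    added = composite.astype(float)
    added[idx] += scale * prn[idx]       # hypothesis: noise was subtracted ("0")
    subtracted = composite.astype(float)
    subtracted[idx] -= scale * prn[idx]  # hypothesis: noise was added ("1")
    # Whichever operation lowers the entropy (or raises it less) undoes
    # the encoding: adding reveals a "0", subtracting reveals a "1".
    return 0 if entropy_proxy(added) < entropy_proxy(subtracted) else 1
```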

  As noted earlier, correlation between the composite data signal and the PRN data can also serve as a statistical detection technique. Such operations are facilitated in the present context because the auxiliary data whose encoded representation is being sought is known a priori, at least in large part. (In one form of the technology, the auxiliary data is based on authentication data exchanged at call setup, which the cellular system has already received and logged; in another form, detailed below, the auxiliary data comprises a predetermined message.) The problem is thus reduced to determining whether an expected signal is present (rather than searching for an entirely unknown signal). Furthermore, the data formatter 2020 organizes the composite data into frames of known length. (In known GSM implementations, voice data is sent in time slots that each convey 114 data bits.) By padding the auxiliary data as necessary, each repetition of the auxiliary data can be made to start, for example, at the beginning of a new frame of data. This greatly simplifies the correlation determination, since 113 of every 114 possible bit alignments can be ignored (and it aids decoding even when the auxiliary data is not known a priori).

  Again, this wireless fraud-detection application presents the familiar problem of detecting a known signal in noise, and the approaches discussed previously can equally be used here.

  If the location of the auxiliary signal is known a priori (or, more precisely, as noted above, is known to be one of several discrete locations), the matched-filter approach can often be reduced to a simple vector dot product between a set of sparse PRN data and the mean-removed excerpt of the corresponding composite signal. (Note that the PRN data need not be sparse; a contiguous-burst arrangement, as in the earlier-mentioned GB 2196167, can also be used, in which a given bit of the message has contiguous PRN values associated with it.) Such a process steps through all 480 sparse sets of PRN data, performing the corresponding dot-product operation for each. Where the dot product is positive, the corresponding bit of the auxiliary data signal is a "1"; where it is negative, the corresponding bit is a "0". If several alignments of the auxiliary data signal within the composite signal are possible, the procedure is repeated at each candidate alignment, and the one yielding the highest correlation is taken as true. (Once the correct alignment has been determined for one bit of the auxiliary data signal, the alignments of all the other bits follow from it. Such "alignment", perhaps better known as "synchronization", can largely be achieved by the very mechanisms that lock onto and track the voice signal itself as part of the basic functioning of the cellular unit.)
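Under these assumptions, the dot-product decision for one bit is very short. The sketch below assumes the composite signal and PRN sequence are already correctly aligned; the names are ours.

```python
import numpy as np

def decode_bit_dot(composite, prn, bit_index, period=480):
    """Matched-filter sketch: dot the sparse PRN set for one bit against
    the mean-removed composite samples it modulated. Positive -> "1"."""
    idx = np.arange(bit_index, len(composite), period)
    excerpt = composite[idx].astype(float)
    excerpt -= excerpt.mean()  # remove the local (voice) mean
    return 1 if float(np.dot(excerpt, prn[idx])) > 0 else 0
```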

Security Considerations The security of the embodiments just described depends in large part on the security of the PRN data and/or the security of the auxiliary message data. The following discussion considers a few of the many techniques for securing these data.

  In a first embodiment, each phone 2010 is assigned a long noise key unique to that phone. This key may be, for example, a unique 10,000-bit string stored in ROM. (In most applications, a key substantially shorter than this may be used.)

  The central office 2014 has access to a security disk 2052 that stores such key data for all authorized phones. (This disk may be physically separate from the central office itself.)

  Each time the phone is used, 50 bits from this noise key are identified and used as the seed for a deterministic pseudo-random number generator. The data produced by this PRN generator serves as the PRN data for that call.

  This 50-bit seed can be determined, for example, by a random number generator in the phone that produces an offset address between 0 and 9,950 each time the phone is used to place a call. The 50 bits of the noise key beginning at this offset address are then used as the seed.

  During call setup, this offset address is transmitted by the phone, via the cell site 2012, to the central office 2014. There, a computer uses the offset address to index its copy of that phone's noise key, thereby identifying the same 50-bit seed identified at the phone. The central office 2014 then relays these 50 bits to the cell site 2012, where a deterministic noise generator like that in the phone produces the PRN sequence corresponding to this 50-bit seed and supplies it to the decoder 2038.
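The seed selection on both sides of the link might look like the following sketch. The phone and the central office run the same deterministic procedure; numpy's seeded generator stands in for the patent's unspecified deterministic PRN generator, which is an assumption on our part.

```python
import numpy as np

def session_prn(noise_key, offset, n, seed_len=50):
    """Derive the session PRN stream from a seed taken from the phone's
    10,000-bit noise key at `offset` (0..9950).

    noise_key -- array of 0/1 bits (the per-phone key stored in ROM)
    offset    -- offset address exchanged at call setup
    n         -- number of PRN values needed for the session
    """
    bits = noise_key[offset:offset + seed_len]          # the 50-bit seed
    seed = int("".join(str(int(b)) for b in bits), 2)
    rng = np.random.default_rng(seed)                   # deterministic
    return rng.choice((-1, 1), size=n)                  # PRN in {-1, +1}
```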

  By the foregoing process, the same sequence of PRN data is generated at both the phone and the cell site. Accordingly, the auxiliary signal encoded into the voice data by the phone can be accurately decoded at the cell site. If this auxiliary data does not match the expected auxiliary data (e.g., data transmitted at call setup), the call is flagged as fraudulent and appropriate remedial action is taken.

  It will be appreciated that a person eavesdropping on the wireless transmissions during call setup can intercept only the randomly generated offset address sent by the phone to the cell site. This data alone is useless for pirating calls. Even if a hacker gains access to the signal provided from the central office to the cell site, that data is essentially useless as well: all it provides is a 50-bit seed, and that seed is different for virtually every call (repeating, on average, only once per 9,950 calls).

  In a variant system, the entire 10,000-bit noise key is used as the seed. An offset address, randomly generated by the phone at call setup, indicates where in the resulting stream of PRN data the data to be used for the session begins. (At 4,800 voice samples per second, 4,800 PRN data are required per second, or roughly 17 million per hour. The offset address in this variant embodiment will accordingly be much larger than the offset address described above.)

  In this variant embodiment, the PRN data used for decoding is preferably generated from the 10,000-bit seed at the central office and relayed to the cell site. (For security reasons, the 10,000-bit noise key should never leave the security of the central office.)

  In yet another variation, the offset address can be generated by the central office, or at the cell site, and relayed to the phone during call setup, rather than the reverse.

  In other embodiments, the phone 2010 can be provided with a list of one-time seeds matching a seed list stored on the security disk 2052 at the central office. Each time the phone is used to place a new call, the next seed in the list is used. With this arrangement, no seed data need be exchanged: the phone and the carrier each independently know which seed to use to generate the pseudo-random data sequence for the current session.

  In such an embodiment, the carrier can determine when a phone has nearly exhausted its seed list and can transmit a replacement list (e.g., as part of the administrative data occasionally provided to the phone). For greater security, the carrier may instead require that the phone be brought in for manual reprogramming, avoiding wireless transmission of this sensitive information altogether. Alternatively, the replacement seed list can be encrypted for wireless transmission using any of various known techniques.

  In a second class of embodiments, security derives not just from the security of the PRN data but also from the security of the auxiliary message data encoded by it. One such system relies on transmission of a message randomly selected from among 256 possibilities.

  In this embodiment, a ROM in the phone stores 256 different messages (each message may be, e.g., 128 bits in length). When the phone is operated to initiate a call, it randomly generates a number from 1 to 256 that serves as an index to these stored messages. This index is sent to the cell site during call setup, allowing the central office to identify the expected message from a matching database on the security disk containing the same 256 messages. (Each phone has a different set of messages.) (Alternatively, the carrier can randomly select the index number during call setup and send it to the phone, thereby selecting the message to be used during that session.) Many of these additional layers of security may seem like overkill in a theoretically pure world in which the only attacks attempted on a security system are mathematical in nature. Adding such layers (making the message itself variable) simply acknowledges that designers of real-world, functioning security systems face practical attacks that the mathematical security of the core principle alone cannot address.

  Thereafter, all voice data transmitted by the phone during the call is steganographically encoded with the indexed message. The cell site examines the data received from the phone for the presence of the expected message. If that message is not present, or if a different message is decoded instead, the call is flagged as fraudulent and remedial action is taken.
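In outline, the message-selection handshake could look like the sketch below; the helper names and data layout are illustrative only.

```python
import secrets

def start_call(stored_messages):
    """Phone side: pick one of the 256 ROM messages at random. Only the
    index travels in the clear; the message itself is conveyed solely by
    steganographic encoding of the voice data."""
    index = secrets.randbelow(len(stored_messages))
    return index, stored_messages[index]

def verify_call(decoded_message, index, registry):
    """Carrier side: compare the steganographically decoded message with
    the expected entry from the security-disk database."""
    return decoded_message == registry[index]
```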

  In this second embodiment, the PRN data used for encoding and decoding can be as simple or as complex as desired. A simple system uses the same PRN data for each call. Such data may be generated, for example, by a deterministic PRN generator seeded with fixed data unique to the phone and known also to the central office (e.g., a phone identifier). Or a universal noise sequence can be used (i.e., the same noise sequence for all phones). Alternatively, the pseudo-random data can be produced by a deterministic PRN generator seeded with data that changes from call to call (e.g., data transmitted during call setup, such as data identifying the dialed phone number). Still other embodiments can seed the pseudo-random number generator with data from a previous call (such data is necessarily known to both the phone and the carrier, but is unlikely to be known to a pirate).

  Of course, elements of the two foregoing approaches can be combined in various ways, and other features can be added. The embodiments above are exemplary only and do not begin to catalog the myriad approaches that can be used. Generally speaking, any data that is, or can be, known to both the phone and the cell site/central office can serve as the basis for either the auxiliary message data or the PRN data that encodes it.

  Because the preferred embodiments of this aspect of the technology encode the auxiliary data redundantly throughout the duration of the telephone subscriber's digitized voice, the auxiliary data can be decoded from any brief sample of received audio. In a preferred form of this aspect of the technology, the carrier repeatedly examines the steganographically encoded auxiliary data (e.g., every 10 seconds, or at random intervals) to assure that it continues to have the expected attributes.

  Although the above discussion has focused on steganographically encoding transmissions from cellular telephones, it will be recognized that transmissions to cellular telephones can be steganographically encoded in the same manner. Such an arrangement is suitable, for example, for conveying administrative data (i.e., non-voice data) from a carrier to individual phones. This administrative data can be used, for example, to reprogram a targeted cellular phone (or all cellular phones) from the central office, to update seed lists (in systems using the one-time pad arrangement described above), to apprise a cellular phone of "spoofing" data peculiar to an unfamiliar locale in which it finds itself, etc.

  In some embodiments, the carrier can steganographically transmit to the cellular phone the seed that the phone is to use in its transmissions to the carrier for the remainder of the session.

  Although the above discussion has focused on steganographic encoding of baseband digitized voice data, those skilled in the art will recognize that intermediate-frequency signals (analog or digital) can likewise be steganographically encoded according to the principles of this technology. An advantage of such post-baseband signals is their relatively wide bandwidth compared with baseband signals: more auxiliary signal can be encoded into them, or a given quantity of auxiliary signal can be repeated more often during transmission. (When steganographically coding intermediate signals, care should be taken that the variations introduced by the coding are not so large as to jeopardize reliable transmission of the administrative data, taking into account the error-correction facilities supported by the packet format.)

  Those skilled in the art will recognize that the auxiliary data itself can be arranged in known ways to provide error-detection, or error-correction, capability at the decoder 2038. Interested readers may consult any of many readily available texts detailing such techniques, e.g., Rorabaugh, Error Coding Cookbook, McGraw-Hill, 1996.

  Although the preferred embodiment of this aspect of the technology has been described in the context of a cellular system that uses packetized data, most other wireless systems do not employ such conveniently structured data. In systems where such structure cannot be used as a synchronization aid, synchronization of the auxiliary data within the composite data signal can be achieved by techniques such as those detailed in our prior applications. In one class of such techniques, the auxiliary data itself is given characteristics that facilitate its synchronization. In another class, the auxiliary data modulates one or more embedded carrier patterns designed to facilitate alignment and detection.

  As previously indicated, the principles of this technology are not limited to use with the particular form of steganographic encoding detailed above. Any steganographic coding technique, whether presently known or hereafter invented, can be used to increase the security or functionality of cellular (or other wireless, e.g., PCS) communications systems in the manners detailed above. Nor are these principles limited to wireless telephones; any wireless communication can be provided with an "in-band" channel of this type.

  It will be recognized that systems embodying Applicant's technology can include dedicated hardware circuitry, but will more commonly be implemented with suitably programmed microprocessors (with associated RAM and ROM memory) at each of the phone 2010, the cell sites 2012, and the central office 2014.

Coding with bit cells The above discussion reflected coding of an auxiliary data signal, combined with a pseudo-random signal, by increasing or decreasing the values of individual pixels. The following discussion details a variant embodiment in which the auxiliary data is encoded, without pseudo-randomization, by patterned groups of pixels referred to herein as bit cells.

  Referring to FIGS. 41A and 41B, two illustrative 2×2 bit cells are shown. That of FIG. 41A is used to represent a "0" bit of the auxiliary data; that of FIG. 41B is used to represent a "1" bit. In operation, the underlying image pixels are pulled up or down in accordance with the +/- values of the bit cell to represent one of these two bit values. (As detailed elsewhere, the degree to which a given pixel or region of the image is changed can depend on many factors.) In decoding, the relative biases of the encoded pixels are examined (using the techniques described above) to identify which of the two patterns is represented in each corresponding region of the encoded image.

  It will be recognized that the auxiliary data in this embodiment is not explicitly randomized, although the bit-cell patterning may be regarded as a "designed" carrier signal, as discussed above.

  Exchanging the pseudo-random noise of the earlier embodiments for this "designed" information carrier offers the advantage that the bit-cell patterning reveals itself in Fourier space. The bit-cell patterning can thus function like the subliminal digital graticules described above, assisting in the registration of suspect images and removing scale and rotation errors. By changing the size of the bit cells and the patterns within them, the location in the spatial-transform domain of the energy they contribute can be shifted, optimizing its separation from typical image energy and facilitating detection.
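A minimal sketch of bit-cell embedding and readout follows. FIGS. 41A and 41B are not reproduced here, so the particular +/- layouts below are illustrative assumptions; the decoder shown compares against the original image for simplicity, whereas a blind decoder would estimate the biases statistically.

```python
import numpy as np

# Hypothetical 2x2 bit-cell patterns standing in for FIGS. 41A/41B.
CELL_0 = np.array([[+1, -1],
                   [-1, +1]])
CELL_1 = -CELL_0  # the opposite patterning represents a "1"

def embed_bit(region, bit, strength=2):
    """Pull the pixels of a 2x2 image region up or down per the bit cell."""
    cell = CELL_1 if bit else CELL_0
    return np.clip(region.astype(int) + strength * cell, 0, 255).astype(np.uint8)

def read_bit(region, original):
    """Identify which pattern the region's pixel biases represent."""
    bias = region.astype(int) - original.astype(int)
    return 1 if np.sum(bias * CELL_1) > 0 else 0
```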

  (The above discussion considered direct encoding of the auxiliary data, without randomization by a PRN signal; in other embodiments, randomization can of course be used.)

Conceptually better signatures In some of the previously described embodiments, the magnitude of the signature energy is adapted from region to region so as to remain invisible in an image (or less audible in audio). In the following discussion, Applicant considers more particularly the problem of hiding signature energy in images, the distinct sub-problems it poses, and solutions to each. The goal of the signing process here goes beyond mere concealment: it is to maximize the digital detectability of the embedded signature while conforming to some fixed "acceptable visibility" threshold set by a given user/creator.

  In designing a service for this purpose, consider the following three-axis parameter space, in which two of the axes are half-axes (positive only) and the third is a full axis (positive/negative). This set of axes defines two of the eight octants of an ordinary Euclidean three-space. As further parameters worth separating out appear on the scene (such as more elaborate local visibility metrics), they can in general define half-axes of their own, extending the following example beyond three dimensions.

  Note that the signature design objective then becomes to optimally assign "gain" to each local bump based on its coordinates in this space, while keeping the operation fast enough for practical applications. The three axes are as follows; let the two half-axes be x and y, and the full axis be z.

The x-axis represents the luminance of a single bump. The basic concept is that a bit more signature energy can be squeezed into brighter regions than into dimmer ones. It should be noted that if true "psycho-linear, device-independent" luminance values (rather than raw pixel DNs) were in play, this axis might be unnecessary, its role being folded into another operative axis (e.g., C*xy); its presence here compensates for the sub-optimality of present quasi-linear luminance encodings.

  The y-axis is the "local hiding potential" of the neighborhood in which a bump finds itself. The basic concept is that flat regions have a low hiding potential, since the eye can detect subtle changes in flat regions. Long lines and long edges likewise tend to have low hiding potentials, since "breaks and nicks" in otherwise smooth long lines are also somewhat visible, whereas short lines, edge detail, and mosaics thereof tend to have high hiding potentials. The notions of "long" and "short" here connect directly to the processing-time and tooling issues involved in carefully quantifying such parameters. Developing a working model of the y-axis inevitably mixes a bit of theory with the empirical lore of the graphic arts. As knowledge accumulates, components now lumped together in the y-axis can split off and become independent axes of their own, where worthwhile.

  The z-axis is the "with or against the gain" axis (explained below); it is a full axis, whereas the other two are half-axes. The basic concept is that a given input bump has a pre-existing bias toward encoding a "1" or a "0" at its position, which is partly a function of the readout algorithm used. The magnitude of this bias correlates somewhat with the "hiding potential" of the y-axis, and it can usefully serve as a variable in deciding how large a tweak value to assign to the bump. The attendant concept is that if a bump is already a "friend" (i.e., its bias relative to its neighborhood already tends toward the desired delta value), it need not be changed much: its naturally occurring state supplies the data energy needed for decoding without much change to the local image values. If, on the other hand, the bump starts out an "enemy" (i.e., its bias relative to its neighbors deviates from the delta value the encoding would impose), it is changed more substantially. This latter operation tends to make the point less visible (it is a highly local smoothing), since it reduces the point's deviation from its neighbors while supplying additional detectable energy for decoding. These two cases are referred to herein as "with the gain" and "against the gain".
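The with/against-the-gain decision can be sketched as below. The specific damping constant on the "with the gain" side is our own choice; only the asymmetry of treatment is taken from the text.

```python
def tweak(bump, neighborhood_mean, desired_delta,
          base_gain=1.0, asymmetry=2.0):
    """z-axis sketch: size the tweak according to whether the bump's
    pre-existing bias runs with or against the desired delta.
    `asymmetry` is the single scalar applied on the against-the-gain
    side (parameter 3 in the list below)."""
    bias = bump - neighborhood_mean
    if bias * desired_delta > 0:
        # "With the gain": the bump already leans the right way, so its
        # natural state supplies much of the needed data energy.
        return base_gain * desired_delta * 0.25  # damping is our assumption
    # "Against the gain": change it more; this also pulls the bump toward
    # its neighbors, making the change less visible while adding energy.
    return base_gain * desired_delta * asymmetry
```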

  The general statement of the problem already given should suffice for several years of refinement. Obviously, adding chrominance considerations would extend it somewhat, yielding more signature "bang" for a given visibility; and studies of human visual perception developed for compression applications are equally applicable to this area. The principles usable in a typical application are described here.

  For speed, the local hiding potential is calculated based only on the 3×3 neighborhood of a pixel. Apart from speed, there is neither data nor inherent theory arguing for anything larger. The design issues then reduce to the y-axis visibility measure, how to combine luminance with it, and the friend/enemy asymmetry. The guiding principle is simply to set flat areas to zero, classic pure maxima or minima to "1.0" (the maximum), and to spread "local lines", "smooth slopes", "saddle points" and the like somewhere in between.

  A typical application uses six basic parameters: 1) luminance; 2) difference from the local mean; 3) the asymmetry factor (with/against the gain); 4) the minimum-linear factor (flat vs. line vs. maximum); 5) the bit-plane bias factor; and 6) overall gain (the user's single top-level gain knob).

  The luminance parameter and the difference-from-local-mean parameter are linear, and their use is specified elsewhere in this specification.

  The asymmetry factor is a single scalar applied on the "against the gain" side of the difference axis, item 2) above.

  The minimum-linear factor is admittedly coarse, but it serves adequately in a 3×3 neighborhood setting. The concept is that a true two-dimensional minimum or maximum is strongly curved along each of the four lines traversing the central pixel of its 3×3 neighborhood, whereas a visual line or edge tends to extend along at least one of the four linear profiles. (The four linear profiles are each three pixels long: upper-left, center, lower-right; top, center, bottom; upper-right, center, lower-left; and right, center, left.) A curvature measure is computed along each linear profile, and the minimum of the four values is selected as the parameter used for the "y-axis".
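A sketch of this computation over a 3×3 neighborhood follows; the use of a second-difference magnitude as the per-profile curvature measure is our assumption.

```python
import numpy as np

def min_linear_factor(nbhd):
    """Curvature along each 3-pixel profile through the center of a 3x3
    neighborhood; the minimum over the four profiles is the 'y-axis'
    parameter. Lines and edges stay flat along at least one profile and
    so score low; true 2-D maxima/minima score high on all four."""
    assert nbhd.shape == (3, 3)
    c = float(nbhd[1, 1])
    profiles = [
        (nbhd[0, 0], nbhd[2, 2]),  # upper-left / center / lower-right
        (nbhd[0, 1], nbhd[2, 1]),  # top / center / bottom
        (nbhd[0, 2], nbhd[2, 0]),  # upper-right / center / lower-left
        (nbhd[1, 2], nbhd[1, 0]),  # right / center / left
    ]
    # Second-difference magnitude along each profile as the curvature.
    return min(abs(float(a) + float(b) - 2.0 * c) for a, b in profiles)
```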

  The bit-plane bias factor is interesting in that it comes in two flavors: a priori and a posteriori. In the a priori case, the unsigned image is simply "read" to see how the pre-existing biases fall with respect to each bit plane; the overall gain of bit planes that, as a whole, run against the desired message is raised slightly, while the others are lowered slightly below their nominal gain. In the a posteriori case, the entire signing process is first performed using the a priori bit-plane biases and the other five parameters listed here; the signed image is then run through a gestalt "distortion" model (e.g., heavy JPEG compression combined with a model of line-screen printing of the image followed by scanning), read out to discover which bit planes come through confused or in error, the bit-plane biases are reinforced accordingly, and the process is run again. Given good data on the distortion process, this step need only be performed once, although it can be iterated in a loosely Van Cittert-like fashion (repeating with some damping factor applied to the tweaks).

  Finally, there is the overall gain. Its purpose is to serve as the single top-level "intensity knob" (more typically a slider or similar control in a graphical user interface) that a casually interested user can adjust as desired. A deeply interested user can pull down an advanced menu and experiment with the other five variables.

Visible watermarking In some applications it is desirable to provide a visible indication on an image to signal that it contains steganographically encoded data. In one example, the indication can be a subtly visible logo (sometimes termed a "watermark") applied to one corner of the image, indicating that the image is a "smart" image carrying data beyond the imagery itself. A light bulb is one suitable logo.

Other Applications One application of the disclosed technology is as a marking/decoding "plug-in" for image-processing software such as Adobe's Photoshop. Once such image marking becomes widespread, users of such software can decode the embedded data from an image and consult a public registry to identify the image's owner. The registry can also serve as a conduit through which appropriate royalty payments are made to the owner for the user's use of the image. (In the illustrated example, the registry is a database coupled to an Internet server accessible via the WWW. The database contains detailed information about cataloged images (e.g., the owner's name, address, and telephone number, together with a price list for the various types of use that may be made of an image), indexed by the identification codes encoded in the images themselves. A person decoding an image uses the code thus gleaned to query the database, identify the owner, and, if desired, pay copyright royalties to the owner electronically.)

  Another application is the smart business card, in which a business card is provided with a photograph bearing inconspicuous, machine-readable contact data embedded within it. (The same function can be achieved by embedding the data in changes to the card's surface microtopology.)

  Yet another promising application is in content ratings. Television signals, imagery on the Internet, and other content sources (audio, images, video, etc.) can be steganographically encoded with rating data (i.e., data representing grades for sex, violence, suitability for children, etc.). Television receivers, Internet-surfing software, and the like can recognize such ratings (e.g., by decoding a universally used code) and act accordingly (e.g., preventing an image or video from being viewed, or preventing audio from being played).

  In the simple cases just described, the embedded data can comprise one or more "flag" bits. One flag might signify "unsuitable for children"; others could signify, e.g., "this image is copyrighted" or "this image is in the public domain." Such flag bits can reside in a field of control bits separate from the embedded message, or they can constitute the message itself. By checking the states of these flag bits, decoder software can quickly apprise a user of various characteristics of the image.
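Checking such flags in decoder software is straightforward; the bit assignments below are purely hypothetical.

```python
# Hypothetical flag-bit assignments within the embedded control field.
FLAG_UNSUITABLE_FOR_CHILDREN = 0x01
FLAG_COPYRIGHTED             = 0x02
FLAG_PUBLIC_DOMAIN           = 0x04

def image_properties(control_bits):
    """Report image characteristics from decoded flag bits."""
    labels = {
        FLAG_UNSUITABLE_FOR_CHILDREN: "unsuitable for children",
        FLAG_COPYRIGHTED: "copyrighted",
        FLAG_PUBLIC_DOMAIN: "public domain",
    }
    return [text for flag, text in labels.items() if control_bits & flag]
```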

  (Control bits can be encoded at known locations in the image (known relative to the subliminal graticule) and can indicate the format of the embedded data (e.g., its length, its type, etc.). These control bits are thus analogous to the data sometimes conveyed in conventional file headers, but here they are embedded within the image itself rather than attached to a file.)

  The field of product marking is generally well served by conventional bar codes and universal product codes. In certain applications, however, such bar codes are undesirable (e.g., where aesthetics or security are concerns). In such applications, Applicant's technology can mark a product via an innocuous carrier (e.g., a photograph on the product), or by encoding the microtopology of the product's surface or of a label on it.

  Encryption and/or digital-signature techniques can be combined with steganography to advantage in increasing security; the possibilities are too numerous to detail.

  Medical records are a field in which authentication is important. Protection against tampering can be achieved by applying steganographic principles to film-based records, or to document microtopology.

  Many industries, such as the automobile and passenger-aircraft industries, rely on tags to mark important parts. Such tags, however, are easily removed and sometimes counterfeited. In applications where greater security is desirable, parts can be steganographically marked to provide an unobtrusive identification/authentication tag.

  In various of the applications discussed herein, different messages can be conveyed by different regions of an image (e.g., different regions of an image can lead to different Internet URLs, and different areas of a photo collage can identify different photographers). The same applies to other media (e.g., audio).

  One software visionary foresees data traveling as chunks that ride data waveforms and interact with other chunks of data. In such a world, those chunks must be robustly and legitimately identifiable; here again, steganographic techniques can increase the reliability of such authentication.

  Finally, message-changing codes (recursive systems in which a steganographically encoded message actually changes the underlying steganographic code pattern) offer new levels of sophistication and security. Such message-changing codes are well suited to applications such as plastic cash cards, in which time-varying elements are important to increasing security.

  Moreover, while particular forms of steganographic encoding have been described above, the applications disclosed herein can also be implemented with other steganographic marking techniques, many of which are known in the art. Similarly, although this specification has emphasized applying the technology to imagery, its principles apply equally to embedding such information in audio, in physical media, or in any other carrier of information.

  Having described the principles of the technology with reference to numerous embodiments and variations thereof, it should be apparent that the technology can be modified in arrangement and detail without departing from those principles. Accordingly, all embodiments falling within the scope of the following claims, and equivalents thereto, are claimed as the invention.

A simple and classic depiction of a one-dimensional digital signal discretized in two axes.
A general overview, with a detailed description of its steps, of the process of embedding a "fine" identification signal onto another signal.
A step-by-step description of how a suspect copy is verified against an original.
A diagram of an apparatus for pre-exposing film with identification information, according to another embodiment of the present invention.
A diagram of a "black box" embodiment of the present invention.
A block diagram of the foregoing embodiment.
A variation of the embodiment of FIG. 6, adapted to encode successive sets of input data having different code words but the same noise data.
A variation of the embodiment of FIG. 6, adapted to encode each frame of a videotape production with a unique code number.
A representation of an industry-standard noise second that can be used in certain embodiments of the present invention.
A representation of an industry-standard noise second that can be used in certain embodiments of the present invention.
A representation of an industry-standard noise second that can be used in certain embodiments of the present invention.
An integrated circuit used in detecting a standard noise code.
A process flow for detecting a standard noise code that can be used in the foregoing embodiment.
An embodiment using a plurality of detectors, according to another embodiment of the present invention.
An embodiment for generating a frame of pseudo-random noise from an image.
How signal statistics can be used to aid decoding.
How a signature signal can be arranged to increase its robustness against anticipated distortions (e.g., MPEG).
An embodiment in which information about a file is detailed both in a header and in the file itself.
An embodiment in which information about a file is detailed both in a header and in the file itself.
Details of an embodiment using rotated patterns.
Details of an embodiment using rotated patterns.
Details of an embodiment using rotated patterns.
"Bump" encoding, as opposed to pixel encoding.
"Bump" encoding, as opposed to pixel encoding.
Aspects of a security card, shown in detail.
Aspects of a security card, shown in detail.
Aspects of a security card, shown in detail.
Aspects of a security card, shown in detail.
Aspects of a security card, shown in detail.
A figure explaining a network linking method that uses information embedded in data objects having intrinsic noise.
A representative web page, and steps in its encapsulation into a self-extracting object.
A representative web page, and steps in its encapsulation into a self-extracting object.
A figure of a photo-identification document or security card.
Two embodiments by which subliminal digital graticules can be implemented.
A variation on the embodiment of FIG. 29.
Two embodiments by which subliminal digital graticules can be implemented.
The phases of spatial frequencies along two inclined axes.
The phases of spatial frequencies along two inclined axes.
The phases of spatial frequencies along first, second, and third concentric rings.
The phases of spatial frequencies along first, second, and third concentric rings.
The phases of spatial frequencies along first, second, and third concentric rings.
Steps in the registration process for subliminal graticules using inclined axes.
Steps in the registration process for subliminal graticules using inclined axes.
Steps in the registration process for subliminal graticules using inclined axes.
Steps in the registration process for subliminal graticules using inclined axes.
Steps in the registration process for subliminal graticules using inclined axes.
Steps in the registration process for subliminal graticules using concentric rings.
Steps in the registration process for subliminal graticules using concentric rings.
Steps in the registration process for subliminal graticules using concentric rings.
Steps in the registration process for subliminal graticules using concentric rings.
Steps in the registration process for subliminal graticules using concentric rings.
Other steps for subliminal graticules using inclined axes.
Other steps for subliminal graticules using inclined axes.
Other steps for subliminal graticules using inclined axes.
Another registration process that does not require a 2D FFT.
Another registration process that does not require a 2D FFT.
Another registration process that does not require a 2D FFT.
Another registration process that does not require a 2D FFT.
A flowchart summarizing the registration process for subliminal graticules.
A block diagram showing the main components of an exemplary wireless telephone system.
A block diagram of an exemplary steganographic encoder that can be used in the telephone of the foregoing system.
A block diagram of an exemplary steganographic decoder that may be used at the cell site of the foregoing system.
An exemplary bit cell for use in one form of encoding.
An exemplary bit cell for use in one form of encoding.

Claims (40)

  1. A method of embedding information in an object so as to enable network navigation from the object to a network resource, the method comprising:
    Receiving a digital image including image pixels;
    Receiving an identification code to be embedded in the digital image and used to detect the network resource;
    Generating a two-dimensional code signal representative of the identification code, the two-dimensional code signal having components corresponding to a plurality of locations within said digital image, and being generated such that the identification code is randomized and repeatedly distributed over said plurality of locations;
    Steganographically embedding the identification code in said digital image by changing the digital image based on the two-dimensional code signal, thereby generating an object that is linked to the network resource, wherein the identification code is machine-readable from an image scanned from a printing of the object so as to enable navigation to the network resource.
  2.   The method of claim 1, wherein the identification code comprises a URL address.
  3. The method of claim 1, wherein the identification code comprises an index used to detect network resources.
  4.   The method of claim 1, wherein the identification code is repeated in the block of the two-dimensional code signal.
  5. The method of claim 1, wherein registration data is steganographically embedded in the object to correct for rotation and scaling caused by scanning an image from a printing of the object.
  6. The method of claim 5, wherein the registration data constitutes a pattern in the frequency domain.
  7.   The method of claim 5, wherein the registration data comprises a pattern formed by the two-dimensional code signal.
  8.   The method of claim 1, wherein the two-dimensional code signal varies in response to corresponding pixels of the digital image to reduce the perceptibility of the identification code in the object.
  9.   The method of claim 1, wherein the two-dimensional code signal is dependent on a key that is independent of the digital image.
  10.   The method of claim 1, wherein the key is used to randomize the identification code within the data object.
  11.   The method of claim 1, wherein the identification code includes index information associating an object with a database on a network.
  12.   The method of claim 1, wherein the digital image comprises a color image and the steganographic embedding is performed by changing the brightness of the color image.
  13.   The method of claim 1, wherein the identification code includes two or more bits, and the two-dimensional code signal changes the digital image such that each of a plurality of pixels of the object is altered as a function of two or more bits of information.
  14.   The method of claim 1, wherein an element of the two-dimensional code signal corresponds to a pixel block of the digital image, and the element modulates a feature of the pixel block to embed the identification code.
  15.   The method of claim 14, wherein the element modulates the feature of the pixel block such that the signal energy of the identification code is concentrated at low frequencies.
  16. A method of decoding steganographically embedded information from an object so as to enable network navigation from the object to a network resource, the method comprising:
    Scanning an image of the object to form a digital image including image pixels representing the object;
    Steganographically decoding an identification code from said digital image, wherein the identification code is carried in a two-dimensional code signal steganographically embedded in the digital image, said two-dimensional code signal having elements corresponding to a plurality of locations within the digital image and being a signal in which the identification code is randomized and repeatedly distributed over the plurality of locations;
    Using the identification code to detect a network resource on a network, the identification code enabling navigation from the object to the network resource.
  17.   The method of claim 16, comprising analyzing the characteristics of the digital image to extract bits of the identification code.
  18.   The method of claim 17, comprising analyzing statistical characteristics of the digital image to extract bits of the identification code.
  19.   The method of claim 17, comprising analyzing characteristics of a pixel block of the digital image to extract bits of the identification code.
  20.   The method of claim 16, wherein different polarities are used to convey different bit values of the bits of the identification code.
  21.   The method of claim 16, wherein each of the pixels of the digital image conveys two or more bits of the identification code.
  22.   The method of claim 16, wherein the identification code comprises a URL address.
  23. The method of claim 16, wherein the identification code comprises an index used to detect network resources.
  24.   The method of claim 16, wherein the identification code is repeated in the block of the two-dimensional code signal.
  25. The method of claim 16, comprising steganographically decoding registration data from the digital image to correct for rotation and scaling caused by scanning the image from a printing of the object.
  26. The method of claim 25, wherein the registration data constitutes a pattern in the frequency domain.
  27.   The method of claim 25, wherein the registration data comprises a pattern formed by the two-dimensional code signal.
  28.   The method of claim 16, wherein the two-dimensional code signal is dependent on a key that is independent of the digital image.
  29.   29. The method of claim 28, wherein the key is used to decrypt bits of the identification code from the digital image.
  30.   30. The method of claim 29, wherein the key has random characteristics.
  31.   The method of claim 16, wherein the identification code includes index information that associates an object with a database on a network.
  32.   The method of claim 16, wherein the digital image comprises a color image, and the steganographic decoding is performed by extracting bits of the identification code from the luminance of the color image.
  33. The method of claim 16, wherein the decoding comprises performing correlation detection to extract bits of the identification code from the digital image.
  34. The method of claim 16, wherein the decoding comprises extracting bits of the identification code from the digital image using error correction coding.
  35. The method of claim 34, wherein the decoding comprises reducing errors in extracting the bit values of the identification code using reliability weighting.
  36. A system for managing navigation from an object to information related to the object stored in a network, the system comprising:
    A storage device that stores information associated with an identification code steganographically embedded in said object; and
    A server that receives the identification code steganographically decoded from the object, the server using the decoded identification code to obtain the information related to the object;
    Wherein the identification code is randomly distributed, using key data, at a plurality of positions in the object, and
    The server decrypts the identification code using the key data.
  37. The system of claim 36, wherein the server and the storage device are accessible via the Internet and provide the object-related information in response to the identification code decoded from the object.
  38. A method of embedding information in an object so as to enable network navigation from the object to a network resource, the method comprising:
    Receiving a digital image including image pixels;
    Receiving an identification code to be embedded in the digital image and used to detect the network resource;
    Generating a two-dimensional code signal representative of the identification code, the two-dimensional code signal having elements corresponding to a plurality of locations within the digital image, and being generated such that the identification code is randomized and distributed over said plurality of locations;
    Steganographically embedding the identification code in said digital image by changing the digital image based on the two-dimensional code signal, thereby generating an object that is linked to the network resource, wherein the identification code is machine-readable from an image scanned from a printing of the object so as to enable navigation to the network resource.
  39. A method of decoding steganographically embedded information from an object so as to enable network navigation from the object to a network resource, the method comprising:
    Scanning an image of the object to form a digital image having image pixels representing the object;
    Steganographically decoding an identification code from said digital image, wherein the identification code is carried in a two-dimensional code signal steganographically embedded in the digital image, said two-dimensional code signal having elements corresponding to a plurality of locations within the digital image and being a signal in which the identification code is randomized over the plurality of locations;
    Using the identification code to detect a network resource on a network, the identification code enabling navigation from the object to the network resource.
  40. A method of decoding steganographically embedded information from an object so as to enable network navigation from the object to a network resource, the method comprising:
    Scanning an image of the object to form a digital image having image pixels representing the object;
    Steganographically decoding an identification code from said digital image, wherein the identification code is carried in a two-dimensional code signal steganographically embedded in the digital image, said two-dimensional code signal having elements corresponding to a plurality of positions within the digital image and being a signal in which the identification code is repeatedly distributed over the plurality of positions;
    Using the decoded identification code to detect a network resource on a network, the decoded identification code enabling navigation from the object to the network resource;
    Wherein the identification code is randomly distributed, using key data, at a plurality of positions in the two-dimensional code signal, and
    The decoding comprises extracting the identification code from the digital image using the key data.
JP2004224727A 1993-11-18 2004-07-30 Steganography system Expired - Lifetime JP3949679B2 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US08/436,102 US5748783A (en) 1995-05-08 1995-05-08 Method and apparatus for robust information coding
US08/508,083 US5841978A (en) 1993-11-18 1995-07-27 Network linking method using steganographically embedded data objects
US51299395A true 1995-08-09 1995-08-09
US08534005 US5832119C1 (en) 1993-11-18 1995-09-25 Methods for controlling systems using control signals embedded in empirical data
US63553196A true 1996-04-25 1996-04-25

Related Child Applications (1)

Application Number Title Priority Date Filing Date
JP08534258 Division

Publications (2)

Publication Number Publication Date
JP2005051793A JP2005051793A (en) 2005-02-24
JP3949679B2 true JP3949679B2 (en) 2007-07-25

Family

ID=34280159

Family Applications (3)

Application Number Title Priority Date Filing Date
JP2004224727A Expired - Lifetime JP3949679B2 (en) 1993-11-18 2004-07-30 Steganography system
JP2007124838A Expired - Lifetime JP4417979B2 (en) 1993-11-18 2007-05-09 Steganography system
JP2007124835A Expired - Lifetime JP5128174B2 (en) 1993-11-18 2007-05-09 Steganography system

Family Applications After (2)

Application Number Title Priority Date Filing Date
JP2007124838A Expired - Lifetime JP4417979B2 (en) 1993-11-18 2007-05-09 Steganography system
JP2007124835A Expired - Lifetime JP5128174B2 (en) 1993-11-18 2007-05-09 Steganography system

Country Status (1)

Country Link
JP (3) JP3949679B2 (en)

Families Citing this family (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101848795B1 (en) 2011-04-20 2018-04-16 에스케이플래닛 주식회사 System and method for service connection based on audio signal
US9088360B2 (en) 2012-12-27 2015-07-21 Panasonic Intellectual Property Corporation Of America Information communication method
US9560284B2 (en) 2012-12-27 2017-01-31 Panasonic Intellectual Property Corporation Of America Information communication method for obtaining information specified by striped pattern of bright lines
CN107196703B (en) 2012-05-24 2019-09-03 松下电器(美国)知识产权公司 Information communicating method
US8913144B2 (en) 2012-12-27 2014-12-16 Panasonic Intellectual Property Corporation Of America Information communication method
US8988574B2 (en) 2012-12-27 2015-03-24 Panasonic Intellectual Property Corporation Of America Information communication method for obtaining information using bright line image
US8922666B2 (en) 2012-12-27 2014-12-30 Panasonic Intellectual Property Corporation Of America Information communication method
MX359612B (en) 2012-12-27 2018-09-28 Panasonic Ip Corp America Information communication method.
US9087349B2 (en) 2012-12-27 2015-07-21 Panasonic Intellectual Property Corporation Of America Information communication method
US9085927B2 (en) 2012-12-27 2015-07-21 Panasonic Intellectual Property Corporation Of America Information communication method
JP5557972B1 (en) 2012-12-27 2014-07-23 パナソニック株式会社 Visible light communication signal display method and display device
WO2014103157A1 (en) 2012-12-27 2014-07-03 パナソニック株式会社 Video display method
US10303945B2 (en) 2012-12-27 2019-05-28 Panasonic Intellectual Property Corporation Of America Display method and display apparatus
US10523876B2 (en) 2012-12-27 2019-12-31 Panasonic Intellectual Property Corporation Of America Information communication method
US9608725B2 (en) 2012-12-27 2017-03-28 Panasonic Intellectual Property Corporation Of America Information processing program, reception program, and information processing apparatus
US10530486B2 (en) 2012-12-27 2020-01-07 Panasonic Intellectual Property Corporation Of America Transmitting method, transmitting apparatus, and program
JP6328060B2 (en) 2012-12-27 2018-05-23 パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカPanasonic Intellectual Property Corporation of America Display method
US9608727B2 (en) 2012-12-27 2017-03-28 Panasonic Intellectual Property Corporation Of America Switched pixel visible light transmitting method, apparatus and program
KR101797030B1 (en) * 2017-01-25 2017-11-13 국방과학연구소 Apparatus and method for processing image steganography using random permutation and difference of image

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05199345A (en) * 1991-05-21 1993-08-06 Hitachi Ltd Facsimile server
JP3599795B2 (en) * 1993-09-03 2004-12-08 株式会社東芝 Image processing device
EP0642060B1 (en) * 1993-09-03 1999-04-07 Kabushiki Kaisha Toshiba Apparatus for steganographic embedding of information into colour images
JP3444956B2 (en) * 1994-04-18 2003-09-08 キヤノン株式会社 Facsimile machine

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8073933B2 (en) 1993-11-18 2011-12-06 Digimarc Corporation Audio processing
US7693965B2 (en) 1993-11-18 2010-04-06 Digimarc Corporation Analyzing audio, including analyzing streaming audio signals
US8010632B2 (en) 1993-11-18 2011-08-30 Digimarc Corporation Steganographic encoding for video and images
US7936900B2 (en) 1995-05-08 2011-05-03 Digimarc Corporation Processing data representing video and audio and methods related thereto
US8078697B2 (en) 1995-05-08 2011-12-13 Digimarc Corporation Network linking methods and apparatus
US8429205B2 (en) 1995-07-27 2013-04-23 Digimarc Corporation Associating data with media signals in media signal systems through auxiliary data steganographically embedded in the media signals
US7461136B2 (en) 1995-07-27 2008-12-02 Digimarc Corporation Internet linking from audio and image content
US9740373B2 (en) 1998-10-01 2017-08-22 Digimarc Corporation Content sensitive connected content
US8332478B2 (en) 1998-10-01 2012-12-11 Digimarc Corporation Context sensitive connected content
US8108484B2 (en) 1999-05-19 2012-01-31 Digimarc Corporation Fingerprints and machine-readable codes combined with user characteristics to obtain content or information
US8200976B2 (en) 1999-05-19 2012-06-12 Digimarc Corporation Portable audio appliance
US7965864B2 (en) 1999-05-19 2011-06-21 Digimarc Corporation Data transmission by extracted or calculated identifying data
US8036420B2 (en) 1999-12-28 2011-10-11 Digimarc Corporation Substituting or replacing components in sound based on steganographic encoding
US8099403B2 (en) 2000-07-20 2012-01-17 Digimarc Corporation Content identification and management in content distribution networks

Also Published As

Publication number Publication date
JP2007329907A (en) 2007-12-20
JP4417979B2 (en) 2010-02-17
JP5128174B2 (en) 2013-01-23
JP2005051793A (en) 2005-02-24
JP2007312383A (en) 2007-11-29

Similar Documents

Publication Publication Date Title
Shih Digital watermarking and steganography: fundamentals and techniques
US8542871B2 (en) Brand protection and product authentication using portable devices
JP5475160B2 (en) System response to detection of watermarks embedded in digital host content
US8391545B2 (en) Signal processing of audio and video data, including assessment of embedded data
Yeung et al. Invisible watermarking for image verification
US6487301B1 (en) Digital authentication with digital and analog documents
US6614914B1 (en) Watermark embedder and reader
US7822225B2 (en) Authentication of physical and electronic media objects using digital watermarks
US7028902B2 (en) Barcode having enhanced visual quality and systems and methods thereof
US8027663B2 (en) Wireless methods and devices employing steganography
US7224819B2 (en) Integrating digital watermarks in multimedia content
US7305104B2 (en) Authentication of identification documents using digital watermarks
Cox et al. Digital watermarking and steganography
US8144924B2 (en) Content objects with computer instructions steganographically encoded therein, and associated methods
EP1591953B1 (en) System and method for decoding digital encoded images
US6738495B2 (en) Watermarking enhanced to withstand anticipated corruptions
US7424131B2 (en) Authentication of physical and electronic media objects using digital watermarks
US6359985B1 (en) Procedure for marking binary coded data sets
Barni et al. Data hiding for fighting piracy
CN101273367B (en) Covert and robust mark for media identification
US7570781B2 (en) Embedded data in gaming objects for authentication and association of behavior information
US8068679B2 (en) Audio and video signal processing
Podilchuk et al. Digital watermarking: algorithms and applications
US8411898B2 (en) Digital authentication with analog documents
US6266430B1 (en) Audio or video steganography

Legal Events

Date Code Title Description
A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20041221

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20051018

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20060328

A601 Written request for extension of time

Free format text: JAPANESE INTERMEDIATE CODE: A601

Effective date: 20060627

A602 Written permission of extension of time

Free format text: JAPANESE INTERMEDIATE CODE: A602

Effective date: 20060630

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20060921

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20070320

A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20070418

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20100427

Year of fee payment: 3

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20110427

Year of fee payment: 4

S111 Request for change of ownership or part of ownership

Free format text: JAPANESE INTERMEDIATE CODE: R313113

Free format text: JAPANESE INTERMEDIATE CODE: R313111

R360 Written notification for declining of transfer of rights

Free format text: JAPANESE INTERMEDIATE CODE: R360

R371 Transfer withdrawn

Free format text: JAPANESE INTERMEDIATE CODE: R371

S111 Request for change of ownership or part of ownership

Free format text: JAPANESE INTERMEDIATE CODE: R313111

Free format text: JAPANESE INTERMEDIATE CODE: R313113

R350 Written notification of registration of transfer

Free format text: JAPANESE INTERMEDIATE CODE: R350

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20120427

Year of fee payment: 5

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20130427

Year of fee payment: 6

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20140427

Year of fee payment: 7

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

R250 Receipt of annual fees

Free format text: JAPANESE INTERMEDIATE CODE: R250

EXPY Cancellation because of completion of term