US8243985B2 - Bit pattern design for visible watermarking - Google Patents
- Publication number: US8243985B2
- Authority
- US
- United States
- Prior art keywords
- image
- bit
- message
- pattern
- dots
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Expired - Fee Related, expires
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K19/00—Record carriers for use with machines and with at least a part designed to carry digital markings
- G06K19/06—Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code
- G06K19/06009—Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking
- G06K19/06037—Record carriers for use with machines and with at least a part designed to carry digital markings characterised by the kind of the digital marking, e.g. shape, nature, code with optically detectable marking multi-dimensional coding
Definitions
- the present invention is geared toward the field of watermarking scanned images, and in particular, is geared toward the definition of new printable symbols for encoding a watermark message.
- Visible watermarking is the science of overlaying messages comprised of bit-encoded, character strings onto a user-provided image (i.e. an input image) prior to printing the input image.
- the resultant, printed image is a watermarked image, and it is said to have been watermarked with a secret, or hidden, watermark message.
- the principle motivation behind such encoding is that when properly encoded, the watermark message can be extracted from a scan of the previously-printed, watermarked image.
- Visible watermarking schemes can be useful in many scenarios, such as having a user ID encoded onto documents while printing from a network printer, having the date and time of printing inconspicuously placed on a printed document, etc.
- watermark messages may be incorporated as textual messages overlaid on an output image
- it is generally preferred that the watermark message not be readily discernable by a casual viewer of the output image. That is, the watermark message preferably is not printed using text characters that can be freely read, since such a message is likely to become a distraction, drawing the casual viewer's attention away from the output image itself.
- the encoded message should be innocuous to the casual viewer and not divert attention from the main subject matter of the output image.
- ASCII code which is known in the art to provide a distinct eight-bit, binary code per text character.
- ASCII encoded message may be printed as a series of 0's and 1's overlaid on the output image, and is not easily discernable by a casual viewer.
- it is typically not an optimal solution to clutter an output image with a series of 0's and 1's detectable by the human eye.
- One approach to hiding the encoded message is to encode the watermark message into text characters that may already be a part of the output image.
- an existing text character within the output image is divided into an upper region and a lower region, and relative darkness levels of the two regions are modulated to inscribe the encoded watermark message.
- the upper region of a text character may be made darker than its lower region to represent a logic-0.
- the lower region of a text character may be made darker than its upper region to represent a logic-1.
- This approach succeeds in effectively hiding the watermark message from a casual reader of the output document. To some extent, however, successfully inscribing the watermark message, and successfully extracting it after multiple scan-and-print cycles, may depend upon the quality of the scanning and printing equipment used to process the watermarked output image.
- the present invention takes an alternate approach to solving the above described problem.
- This alternative approach is to define new printable pattern symbols to represent individual data bits (i.e. logic 0 bits and logic 1 bits) in a data bit-string that defines an encoded, character-string message. Since the newly defined printable pattern symbols would not be known to a casual observer of the output image, they would pose a lower level of distraction.
- the watermark message (preferably a bit-string comprising a bit-encoded, text message) may be overlaid as a series of printable pattern symbols onto the input image to create a watermarked image with pattern symbols that are visible, but not decipherable, by human eyes.
- One aspect of the present invention is a method of watermarking a text message onto an input image, having the following steps: (a) converting the text message into a binary-coded data bit sequence of logic 0 data bits and logic 1 data bits; (b) assigning a first bit-pattern symbol to logic 0 data bits within the binary-coded data bit sequence, the first bit-pattern symbol being a first predefined geometric arrangement of individual image dots; (c) assigning a second bit-pattern symbol to logic 1 data bits within the binary-coded data bit sequence, the second bit-pattern symbol being a second predefined geometric arrangement of individual image dots distinct from the first predefined geometric arrangement of individual image dots; (d) defining a non-data, marker bit, and assigning a third bit-pattern symbol to the marker bit, the third bit-pattern symbol being a third predefined geometric arrangement of individual image dots distinct from the first and second predefined geometric arrangements of individual image dots; (e) arranging the binary-coded data bit sequence into a predefined geometric configuration having a
- step (a) of this method includes padding the text message with a first fixed data-bit-pattern to create a full-length character string of fixed length if the text message is shorter than the fixed length.
- first predefined geometric arrangement has a boundary edge with a distinct gradient direction
- identification of the first, second, and third bit-pattern symbols is dependent upon determination of their respective distinct gradient direction
- each of the first, second, and third predefined geometric arrangements have a horizontal and vertical projection-profile pair distinct from each other, and identification of the first, second, and third bit-pattern symbols is dependent upon identification of its corresponding horizontal and vertical projection-profile pair.
- step (e) includes arranging multiple, consecutive copies of the binary-coded data bit sequence within the predefined geometric configuration. If the number of bit locations within the geometric configuration is not evenly divisible by the binary-coded data bit sequence, only complete copies of the binary-coded data bit sequence are consecutively arranged within the geometric configuration, and bit locations within the geometric configuration not filled by the binary-coded data bit sequence are assigned a second predefined padding data-bit sequence.
- step (g) the data bits are written only over background areas of the input image, and the marker bits are written over both foreground and background areas of the input image.
- step (a) the first predefined geometric arrangement of individual image dots forms a first right triangle comprised of 10 dots arranged in four adjacent columns, with 4 dots forming its base at the first column, 3 dots in the second column, 2 dots in the third column, and 1 dot at its fourth column.
- step (c) the second predefined geometric arrangement of individual image dots is defined by the first geometric arrangement being rotated 90 degrees such that the second predefined geometric arrangement forms a second right triangle comprised of 10 dots arranged in four adjacent rows, with 4 dots at its base row, 1 dot at its top row, 3 dots on the row one-up from the base row, 2 dots on the row one-down from the top row.
- the third predefined geometric arrangement of individual image dots forms a square comprised of 16 dots arranged in four adjacent columns with 4 dots in each column. It should also be noted that the spacing between adjacent columns and rows of image dots is selected such that subjecting the watermarked image to a print-and-scan operation results in a plurality of the image dots within each of the first, second and third geometric arrangements joining to form composite shapes matching their respective first, second, and third bit-pattern symbols.
- each image dot is comprised of a group of pixels having a geometric arrangement matching the third predefined geometric arrangement of individual image dots that defines the third bit-pattern symbol.
- step (g) the luminance intensity level of a written data bit or marker bit is modulated in accordance with the luminance intensity level of the image pixels in the input image over which the data or marker bit is being written, but only if the image pixels have an intensity level equal to or higher than a first threshold level, or an intensity level equal to or lower than a second threshold level. If the luminance intensity level of the image pixels is above the first threshold level, then the luminance intensity level of the data bit or marker bit to be written is lowered to be darker than the image pixels. Alternatively, if the luminance intensity level of the image pixels is below the second threshold level, then the luminance intensity level of the data bit or marker bit to be written is raised to be brighter than the image pixels. In this case, the first threshold level is 225 and the second threshold level is 35.
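The modulation rule above can be sketched as follows. The threshold values 225 and 35 come from the text; the modulation offset and the function name are illustrative assumptions, not part of the patent.

```python
# Thresholds from the text: bits written over pixels brighter than 225
# are darkened; bits written over pixels darker than 35 are brightened.
BRIGHT_THRESHOLD = 225   # first threshold level
DARK_THRESHOLD = 35      # second threshold level
OFFSET = 80              # hypothetical modulation amount (an assumption)

def bit_intensity(background_luma):
    """Return a luminance for a data/marker bit written over pixels of
    the given background luminance, or None when the rule does not
    apply (mid-tone background)."""
    if background_luma >= BRIGHT_THRESHOLD:
        return max(0, background_luma - OFFSET)    # darker than background
    if background_luma <= DARK_THRESHOLD:
        return min(255, background_luma + OFFSET)  # brighter than background
    return None
```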
- the color of a written data bit or marker bit is determined by the colors of their surrounding image pixels so as to blend in with the colors of the surrounding image pixels.
- the present invention further presents a method of identifying a watermark bit from within a watermarked image, comprising the following steps: (a) generating an input image from the watermarked image, the input image having at least one of three predefined geometric arrangements of individual image dots forming a corresponding one of three bit-pattern symbols, each bit-pattern symbol being uniquely representative of one of three types of watermark bits; (b) creating a horizontal gradient image and a vertical gradient image of each geometric arrangement of individual image dots; (c) creating a horizontal connected-component image for each horizontal gradient image by identifying connected components of a minimum size; (d) creating a vertical connected-component image for each vertical gradient image by identifying connected components of the minimum size; (e) combining the horizontal connected-component image and vertical connected-component image of each geometric arrangement of individual image dots to form a corresponding outline image having a skeleton pattern; (f) applying a fill-in operation to each skeleton pattern to convert each skeleton pattern into a pattern block; (g) vertically
- step (a) includes applying the watermarked image to a mask for locating bit images of each individual bit-pattern symbol.
- the horizontal gradient image is an absolute intensity image of a horizontal gradient, and detects the edges of bit-pattern symbols in the input image along its horizontal direction.
- the vertical gradient image is an absolute intensity image of a vertical gradient, and detects the edges of bit-pattern symbols in the input image along its vertical direction.
- the minimum size is 75% of the area covered by the largest of the three types of watermark bits.
- the horizontal projection profile is preferably generated by noting the number of pixels within each row of pixels that make up the right region RR
- the vertical projection profile V is generated by noting the number of pixels within each column of pixels that make up the right region RR.
- step (j) it is presently preferred that if a right region RR is not assigned one of the first and second data bit values, then its left region LR and right region RR are combined to determine if they substantially form a parallelogram, and if they do, then defining it as a marker bit, the marker bit not being a data bit; else randomly assigning it one of the first and second data bit values on a 50% probability basis.
- FIG. 1 shows three distinct bit-pattern symbols to represent a logic-1 data bit, a logic-0 data bit and a marker bit (MB), respectively, in accord with the present invention.
- FIG. 2 shows that each dot d in FIG. 1 preferably consists of sixteen pixels p in a 4×4 grid arrangement.
- FIG. 3 shows multiple processing phases in a process for discerning the general shape of each bit-pattern symbol in an input image.
- FIG. 4 a shows a binarized bit image of a recovered skeleton pattern (i.e. outline) of a bit-pattern symbol.
- FIG. 4 b shows the binarized bit image after application of a fill-in operation to fill in the interior and border areas defined by the skeleton pattern of FIG. 4 a.
- FIG. 4 c shows the filled-in image of FIG. 4 b divided down the middle in preparation for extracting bit pattern information.
- FIG. 5 shows the application of the presently preferred bit extraction technique applied to a logic 0 bit pattern and a logic 1 bit pattern as each bit pattern is rotated at 90° intervals.
- FIG. 6 shows a preferred method of creating a connected components mask.
- FIG. 7 a shows the application of the connected components mask method of FIG. 6 to an image to remove everything except the pattern symbols, which are shown as white image patterns on a black background.
- FIG. 7 b shows the result of applying the connected component mask process of FIG. 6 to an image that did have a watermarked message.
- FIG. 8 illustrates an example of rotation error, or skew error.
- FIG. 9 is a preferred method of correcting for skew error illustrated in FIG. 8 .
- FIG. 10 illustrates a preferred method of preparing a user-provided message string for encoding onto an input image.
- FIG. 11 shows an exemplary method of subdividing an input image into multiple image blocks for encoding a formatted message therein.
- FIG. 12 illustrates that if a connected components mask has not previously been subdivided into mask blocks and generally spans the entirety of the input image, then it is divided into mask blocks of shape, size and number corresponding to the image blocks, with each mask block having a one-to-one relationship to its corresponding image block according to its relative location within the input image.
- FIG. 13 illustrates a presently preferred watermark encoding sequence.
- FIG. 14 a is an example of an encoded image block.
- FIG. 14 b re-presents the image of FIG. 14 a.
- FIG. 14 c shows the image of FIG. 14 b after partial processing in preparation for extracting an encoded watermark message.
- FIG. 15 is a sample input image with a watermark message printed upon it.
- FIG. 16 illustrates the image of FIG. 15 after undergoing a transformation resulting from a print-and-scan cycle.
- FIG. 17 shows a page having another sample image with multiple message blocks outlined by marker bits.
- FIG. 18 shows a general process for decoding a watermark message.
- FIG. 19 is a simplified illustration highlighting elements of a scanned image.
- FIG. 20 illustrates image of FIG. 19 rotated by 90°.
- FIG. 21 illustrates the image of FIG. 20 rotated by 90°.
- FIG. 22 illustrates the image of FIG. 21 rotated by 90°.
- FIG. 23 illustrates that the search window used for searching for the best corner is increased by 50% in each of multiple search cycles.
- FIG. 24 provides a more detailed description of the preprocessing step 172 of FIG. 18 .
- FIG. 25 provides more detailed description of module Best_Corner_Detection of sub-step 215 in FIG. 24 .
- FIG. 26 illustrates the result of applying the pre-processing process of FIGS. 18-25 to page 151 of FIG. 17 .
- FIG. 27 shows a preferred method of watermark message extraction.
- FIG. 28 shows an image grid representation of extracted bit pattern information.
- FIG. 29 shows the image grid of FIG. 28 after removing all bit pattern information except for an identified message block.
- FIG. 30 highlights a property of the present invention wherein an improperly oriented image will produce mostly marker bits, while a properly oriented image will produce mostly data bits with marker bits along its perimeter.
- Before going into the details of the present invention, an exemplary bit-encoding scheme is presented. It is to be understood that multiple bit-encoding schemes are known in the art, and selection of a specific bit-encoding scheme is not critical to the present invention.
- a user-provided message string is first converted into a suitable form by bit-encoding.
- the user-provided message string may be any character string, which for illustration purposes is herein assumed to be character string: “Hello world”.
- This text character string is then re-rendered as (i.e. encoded into) a collection of logic low “0” bits and logic high “1” bits.
- every unique character in the message string is correlated to a corresponding unique numeric code, preferably in the range from 0 to 255.
- Each numeric code represents a unique group of data bits, such as a byte (an 8-bit binary number).
- ASCII code is the American Standard Code for Information Interchange, and it provides a character-encoding scheme for converting text characters into machine readable, binary codes of eight bits each.
- ASCII equivalent numeric codes for each text character are shown in Table 1 below.
- the “Hello world” message string can therefore be represented as a binary bit-vector formed by concatenating the 8-bit codes of its characters in sequence.
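A minimal sketch of this encoding step, assuming standard 8-bit ASCII codes (the function name is illustrative):

```python
def message_to_bits(message):
    """Concatenate the 8-bit ASCII code of each character, per Table 1."""
    return "".join(format(ord(c), "08b") for c in message)

bits = message_to_bits("Hello world")
# 11 characters yield an 88-bit vector; 'H' (ASCII 72) contributes the
# leading byte 01001000.
```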
- Since the message is now a collection of logic 0's and 1's, all that is needed is a way to represent a logic-0 and a logic-1 on a printed image.
- the representation of logic 0's and 1's is an important aspect of the present invention.
- a novel, printable bit-pattern symbol design for representing a 0-bit and a 1-bit for use in visible image watermarking is described below.
- before describing the bit-pattern symbol design, it is beneficial to first define a “full-length message string,” since it is currently preferred that user-provided message strings be formatted into a full-length message string prior to being watermarked (i.e. encoded) onto an input image. It is to be understood that the defined length of the “full-length message string” is a design choice.
- it is preferred that all message strings that are to be encoded onto an input image be of equal length, and preferably be confined to a fixed length of 64 bytes (or 64 one-byte characters) in total, which defines the “full-length message string”.
- Smaller message strings may be padded with known bit-data to make their final bit-length equal to 64 bytes.
- the present exemplary message string, “Hello world”, which consists of only 11 characters may be rewritten as “Hello world” with multiple blank spaces appended at its end. In other words, 53 blank-space characters may be appended to the end of the original, 11-character “Hello world” message string to make it a full-length message string of 64 characters.
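The padding step might be sketched as follows; the 64-character full length comes from the text, while the helper name and the overflow check are assumptions:

```python
FULL_LENGTH = 64  # bytes / one-byte characters, per the text

def pad_message(message):
    """Pad a short message string with blank spaces up to the
    fixed-length, 64-character full-length message string."""
    if len(message) > FULL_LENGTH:
        raise ValueError("message exceeds the full-length message string")
    return message.ljust(FULL_LENGTH)  # appends blank-space characters

padded = pad_message("Hello world")  # 11 characters + 53 appended spaces
```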
- FIG. 1 shows bit-pattern symbols 11, 13, and 15, which represent a logic-1 bit, a logic-0 bit and a marker bit (MB), respectively.
- Each of bit-pattern symbols 11 , 13 , and 15 consists of a specific arrangement of individual dots (or squares) d. More specifically, bit-pattern symbol 11 arranges ten dots d in a triangular arrangement with four dots d at its base. Bit-pattern symbol 13 likewise arranges ten dots d in a triangular arrangement with four dots d at its base; wherein bit-pattern symbol 13 resembles bit-pattern symbol 11 rotated ninety degrees clockwise.
- bit-pattern symbol 15 arranges sixteen dots d in a 4×4 square grid arrangement, with four dots d on each side.
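The three dot arrangements can be modeled as small binary grids, one cell per dot d. This is a sketch assuming NumPy, with illustrative names; the actual rendered geometry (dot spacing, rotation direction) follows the figures:

```python
import numpy as np

def triangle_symbol():
    """Right triangle of ten dots: columns of 4, 3, 2 and 1 dots."""
    grid = np.zeros((4, 4), dtype=int)
    for col, height in enumerate((4, 3, 2, 1)):
        grid[:height, col] = 1  # 'height' dots in this column
    return grid

SYMBOL_A = triangle_symbol()                # one data-bit symbol
SYMBOL_B = np.rot90(SYMBOL_A, k=-1)         # the other data bit: quarter turn clockwise
SYMBOL_MARKER = np.ones((4, 4), dtype=int)  # marker bit: 4x4 square, sixteen dots
```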
- each dot d preferably consists of sixteen pixels p in a 4×4 grid arrangement 19.
- each dot d preferably has a pixel arrangement matching the dot arrangement of each marker bit.
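Rendering a dot-level grid to pixels, with each dot d expanded to a 4×4 pixel block as in FIG. 2, might look like the following sketch (for simplicity it omits the inter-dot spacing the text prescribes):

```python
import numpy as np

def render_symbol(dot_grid, dot_size=4):
    """Expand a dot-level symbol grid into a pixel image: each dot d
    becomes a dot_size x dot_size block of pixels p (FIG. 2)."""
    return np.kron(dot_grid, np.ones((dot_size, dot_size), dtype=int))
```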
- Bit-pattern symbols 11 , 13 , and 15 have not been previously used in industry, and were developed to address a number of specific real-world problems.
- bit-pattern symbols 11 , 13 , and 15 permit the use of simple horizontal and vertical projection-profiles as part of bit recognition techniques, as is explained more fully below.
- marker bit 15 for use as a placeholder to demarcate boundaries of a message string (or message block, as defined below) facilitates a decoding procedure.
- bit-pattern symbols 11 , 13 , and 15 that represent an input message string may be overlaid on the entire input image, irrespective of the input image's foreground or background pixels.
- Color selection of each bit-pattern symbol 11 , 13 , and 15 is preferably adaptively changed so that bright regions of the input image have darker bit-pattern symbols overlaid on them, and dark regions of the input image have lighter bit-pattern symbols overlaid on them.
- An important characteristic of the present bit-pattern symbols 11, 13, and 15 is that after a print-and-scan cycle, they appear as continuous objects rather than as collections of dots d. That is, the print-and-scan cycle tends to blur bit-pattern symbols 11, 13, and 15 such that it becomes more difficult to differentiate between individual dots d; instead, what is discernable is the general contiguous shape of each symbol, as defined by the arrangement of its collection of dots d. It has been further found that, under the presently preferred use of projection profiles to detect bit-pattern symbols (described below), accurate detection of bit-pattern symbols 11, 13, and 15 is not affected by rotation of a watermarked image.
- the input image may undergo a print-and-scan cycle.
- One of the primary effects of this print-and-scan cycle is a lowered dynamic range in the scanned document: colors move closer to each other, and sharp edges often become blurred. Consequently, actual intensity values extracted from a scanned image may not be optimal indicators for use in identifying bit-pattern symbols.
- the present invention makes use of a gradient based technique to identify bit-pattern symbols in a scanned image.
- the presently preferred gradient based technique computes and combines horizontal and vertical gradients.
- FIG. 3 shows a representation of an input image 21 generated by scanning a printed document having nine bit-pattern symbols (i.e. nine representations of bit-pattern symbols 11 , 13 , and 15 ), with each bit-pattern symbol being in accord with the present invention. Also shown are multiple processing phases ( 23 , 25 , and 27 ) in a process for discerning the general shape of each of the nine bit-pattern symbols in input image 21 .
- Horizontal gradient image 23 is an absolute intensity image of a horizontal gradient, or derivative, of input image 21 , and detects the edges of the bit-pattern symbols in input image 21 along its horizontal direction. That is, the horizontal gradient determines an intensity difference between adjacent pixels in a horizontal direction in input image 21 , and thus is effective for identifying boundaries between light and dark regions, as encountered when traversing rows of pixels across input image 21 in the horizontal direction. For example, the difference in intensity value between adjacent pixels within the white areas separating bit-pattern symbols 11 , 13 , and 15 within scanned image 21 is zero, and thus these regions would typically have an intensity gradient value of zero (i.e. be dark) in the resultant horizontal gradient image 23 .
- for ease of illustration, regions of input image 21 having small gradient differences in intensity values are indicated as light areas in horizontal gradient image 23 rather than as dark areas.
- regions of input scanned image 21 having large gradient differences in intensity values are indicated as dark areas in horizontal gradient image 23 .
- the intensity difference between adjacent pixels when moving along a horizontal direction is highest when leaving a white region and entering the beginning of a bit-pattern symbol, or when leaving a bit-pattern symbol and entering the beginning of a white region. Therefore, these left-side and right-side boundary lines of the bit-pattern symbols of input image 21 manifest themselves as dark vertical or diagonal linear arrangements 22 in horizontal gradient image 23 .
- each bit-pattern symbol is comprised of a group of adjacent dots d separated by small blank spaces.
- the blank spaces between adjacent dots d may manifest themselves as light areas in a scanned image, such as input image 21 .
- desired vertical or diagonal linear arrangements 22 may be identified by looking for connected components of minimum size within a gradient image 23 (or 25 ).
- the minimum size is about 75% of the area of a marker bit.
- the end-result is that horizontal gradient image 23 identifies the horizontal left and right boundaries, or borders, of bit-pattern symbols within input image 21 .
- Vertical gradient image 25 shows the absolute intensity image of a vertical gradient (i.e. vertical derivative) of scanned image 21 .
- a change in light intensity when moving vertically along columns of pixels in input image 21 is encountered when moving into, or out of, a bit-pattern symbol.
- the horizontal and diagonal linear regions 24 in vertical gradient image 25 indicate the top and bottom boundaries of the bit-pattern symbols 11 , 13 , 15 of input image 21 .
- the horizontal gradient image 23 and the vertical gradient image 25 are combined to form outline image 27 .
- the result of this combination is a set of nine skeleton patterns (i.e. outlines) 26 of the nine scanned bit-pattern symbols of input image 21 .
- by combining the gradient images into outline image 27, the present approach facilitates reliable identification of bit-pattern symbols 11, 13, and 15 as scanned in input image 21.
- a fill-in operation is performed on skeleton patterns 26 , which converts the skeleton patterns into pattern blocks that represent the original bit-pattern symbols 11 , 13 , and 15 of input image 21 .
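The gradient-and-outline stage of FIG. 3 might be sketched as below. The edge threshold is an assumption, and the connected-component filtering and fill-in steps are only noted in comments:

```python
import numpy as np

def gradient_outline(image, edge_threshold=30):
    """Combine horizontal and vertical absolute gradients into a single
    outline (skeleton) map, mirroring images 23, 25 and 27 of FIG. 3."""
    img = image.astype(int)
    # Horizontal gradient: intensity difference between horizontally
    # adjacent pixels; responds at left/right symbol boundaries.
    gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
    # Vertical gradient: difference between vertically adjacent pixels;
    # responds at top/bottom symbol boundaries.
    gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
    # A full implementation would next keep only connected components of
    # a minimum size (about 75% of a marker bit's area, per the text)
    # and apply a fill-in operation to the resulting skeleton patterns.
    return ((gx >= edge_threshold) | (gy >= edge_threshold)).astype(int)
```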
- FIG. 4 a shows a binarized bit image of a recovered skeleton pattern (i.e. outline) of bit-pattern symbol 13 .
- FIG. 4 b shows the binarized bit image after application of the fill-in operation to fill in the interior and border areas defined by the skeleton pattern, which recovers bit-pattern symbol 13 .
- the next step is reading the recovered bit-pattern symbols, i.e. identifying the bit data information represented by bit-pattern symbols 11, 13, and 15.
- the presently preferred embodiment for reading bit-pattern symbols uses a projection profile technique to identify the logic bit values represented by each pattern symbol shape.
- a filled-in image of a pattern symbol is first divided vertically approximately down the middle to form a left region LR and a right region RR, whose widths are approximately the same.
- a horizontal projection profile H and a vertical projection profile V are then determined for the right region RR.
- the horizontal projection profile H is determined by moving down the rows of pixels within the right region RR, and counting the number of pixels that are part of right region RR within each row. If plotted, the pixel distribution across the rows would form a contour line for horizontal projection H.
- the vertical projection V is determined by moving along the columns of pixels within the right region RR, and counting the number of pixels within each column of RR.
- a plot of vertical projection V is shown for illustration purposes.
- the vertical V and horizontal H projections are in essence row and column histograms of the pixels that make up right region RR.
- if both the horizontal projection profile H and the vertical projection profile V change monotonically across at least their respective predefined spans, the recovered pattern symbol is determined to be a data bit.
- the first and second predefined spans are at least a third of the span of the horizontal projection H and vertical projection V, respectively.
- the pattern symbol of FIGS. 4 a - 4 c represents a logic-0 bit (i.e. pattern symbol 13 in FIG. 1 ). Therefore, as is indicated in FIG. 4 c , the vertical projection profile V of right region RR decreases from left to right along arrow A 1 . Stated differently, the vertical pixel-length (i.e. pixel count) of pixel columns within right region RR is reduced as one progresses from left to right within right region RR. Similarly, the horizontal projection profile H (i.e. the pixel-length of adjacent horizontal lines of pixels within right region RR) decreases as one progresses from top to bottom along arrow A 2 .
- the recovered pattern symbol of FIG. 4 c is correctly determined to be a data bit, rather than a marker bit MB. Additionally, since the vertical projection decreases from left to right, and the horizontal projection decreases from top to bottom, the pattern symbol is further correctly identified as a logic-0 bit.
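- the projection-profile reading described above can be sketched as follows. This is a minimal Python sketch assuming the symbol arrives as a filled-in binary bitmap (a list of rows of 0/1 pixels); the trend test, which compares the projection's total change against a third of its maximum, is a simplified stand-in for the patent's predefined-span thresholds.

```python
def classify_symbol(symbol):
    """Classify a filled-in pattern symbol given as a list of rows of
    0/1 pixels, using the projection-profile technique: split off the
    right region RR, compute its horizontal (per-row) and vertical
    (per-column) projections, and read marker vs. logic-0 vs. logic-1
    from the projections' trends.
    """
    w = len(symbol[0])
    right = [row[w // 2:] for row in symbol]      # right region RR
    H = [sum(row) for row in right]               # horizontal projection
    V = [sum(col) for col in zip(*right)]         # vertical projection

    # Marker bits (squares) yield flat projections; logic bits yield a
    # vertical projection that falls from left to right by more than
    # roughly a third of its span.
    if V[0] - V[-1] < max(max(V) / 3.0, 1):
        return "marker"
    # The horizontal projection's direction then separates logic-0
    # (falls top-to-bottom) from logic-1 (rises top-to-bottom).
    return "logic-0" if H[0] >= H[-1] else "logic-1"
```

As in the figures, rotating the bitmap by 90 degrees makes the right region rectangular, so a rotated logic symbol is read as a marker; the symbol is correctly identifiable only in its proper orientation.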
- the pattern symbol would have been identified as a logic-1 bit. This is better illustrated in FIG. 5 , below.
- FIG. 5 shows four rotations of pattern symbol 13 .
- the four rotations of pattern symbol 13 are arranged as four pairs, 31 a / 31 b through 37 a / 37 b.
- the left-side pattern symbol ( 31 a - 37 a ) shows the initial state of a recovered symbol after application of the fill-in process, as is described above in reference to FIG. 4 b .
- the right-side symbol 31 b - 37 b shows the identification of its respective right region RR (shown as a shaded area) in preparation for reading the symbol, as is explained above in reference to FIG. 4 c.
- Pattern symbol 31 a shows a correctly orientated pattern symbol representing a logic-0 bit.
- Pattern symbol 31 b shows the reading of this pattern symbol by identifying its right-region RR, and determining its respective vertical projection V and horizontal projection H.
- the vertical projection V of pattern symbol 31 b decreases from left-to-right along the direction arrow A 3 , which identifies it as a logic bit as opposed to a marker bit.
- the horizontal projection H of pattern symbol 31 b decreases from top-to-bottom along the direction of arrow A 4 , which identifies it as a logic-0 bit. If the horizontal projection H of pattern symbol 31 b increased from top-to-bottom, then it would have been identified as a logic-1 bit.
- pattern symbol 31 b is correctly identified as representing a logic-0 bit.
- if pattern symbol 31 a were rotated 90 degrees clockwise as indicated by pattern symbol pair 33 a / 33 b , then the right-region RR of pattern symbol 33 b would be rectangular.
- its vertical projection V would have a relatively flat bottom and its horizontal projection H would also have a relatively flat side. Consequently, both the vertical V and horizontal H projections would manifest changes (from column-to-column and from row-to-row) of less than the prescribed thresholds, and the pattern would be erroneously identified as a marker bit.
- the pattern symbol representing a logic-0 bit is correctly identifiable only when it is read in its correct orientation.
- Pattern symbol 41 a shows a correctly orientated image of a pattern symbol 11 representing a logic-1 bit
- pattern symbol 41 b shows the identification of its right-region RR in preparation for reading the pattern symbol by identifying its vertical projection V and its horizontal projection H.
- its vertical projection V decreases from left-to-right along arrow A 5
- its horizontal projection H decreases from bottom-to-top along arrow A 6 .
- Pattern symbol 41 b would therefore be correctly identified as representing a logic-1 bit.
- if pattern symbol 41 a were rotated 90 degrees clockwise, then its right-region RR would be rectangular.
- the vertical V and horizontal H projections would show changes of less than the prescribed thresholds and the pattern would be erroneously identified as a marker bit.
- if pattern symbol 45 a were rotated another 90 degrees clockwise as indicated by pattern symbol pair 47 a / 47 b , then its right-region RR would again be rectangular. Thus, pattern symbol 47 b would again be erroneously identified as a marker bit.
- a connected components mask (i.e. a binary mask)
- a first connected component mask of the input image upon which the watermark message is to be written is created.
- a second connected component mask of a scanned image is created.
- the basic difference between the first and second connected component masks is an area threshold parameter that determines the size of the connected components.
- image 75 shows a partial view of an input image upon which a watermark message is to be written.
- the objective in creating the first connected component mask is to identify those areas within input image 75 where the watermark message may be written.
- image 77 is the resultant first connected component mask, which shows in white those areas where the watermark message may be written, and shows in black those areas where the watermark message should not be written.
- Image 71 in FIG. 7 a shows an example of a watermark message written upon input image 75 according to first connected component mask 77 .
- a second connected component mask is created that identifies areas (i.e. bit images) of input image 71 that contain marker bit or data bit information within image 71 .
- second connected component 73 identifies these bit images that should be read as white areas in a black field. Message extraction is explained in more detail below.
- a submitted image 51 is first converted to a single channel image (step 53 ). It is to be understood that submitted image 51 may be the input image, in its entirety. Alternatively, the input image may be divided into multiple image blocks, and each image block may be processed individually as the submitted image in step 51 . In one embodiment of the present invention, an entire image is submitted for message encoding, and multiple image blocks are submitted for message extraction, but this is a design choice.
- if the determination at step 64 is YES, the output mask at step 65 would be a mask block of size equal to the image block.
- the resultant connected component area mask may be divided into mask blocks of size equal to an image block (step 67 ) prior to outputting the result at step 65 .
- the conversion to a single channel image at step 53 may be achieved by applying an RGB to YCbCr conversion to image 51 , and then performing all further processing only on the luminance (Y) channel.
- a light intensity gradient image of the luminance channel image (i.e. intensity I )
- ⁇ I ⁇ I x + ⁇ I y
- a binarized image of the light intensity gradient image is created ( 57 ) by comparing each intensity value to a single intensity threshold 59 . Connected components of the binarized image are then labeled ( 61 ).
- An area mask (i.e. binary mask)
- An area mask can then be created ( 65 ) by discarding all connected components whose sizes vary by more than ±25% from an area threshold 63 .
- the size of area threshold 63 depends on whether one is creating a first connected component mask for message insertion (i.e. writing) or a second connected component for message extraction (i.e. reading).
- area threshold 63 defines an area for inserting a watermark message, and in one embodiment, generally separates the foreground from the background of the image.
- area threshold 63 is much smaller, and preferably of similar size as the area of a pattern bit.
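- the mask construction of steps 57 - 65 above can be sketched as follows. This minimal Python sketch labels connected components of an already-binarized image with a 4-connected flood fill, and keeps only components whose pixel count lies within the ±25% tolerance of the area threshold described above; it is an illustration, not the patent's exact labeling routine.

```python
def connected_component_mask(binary, area_thr, tol=0.25):
    """Build an area mask from a binarized image (list of rows of
    0/1 values): label connected components (4-connectivity) and keep
    only those whose area is within +/- tol of area_thr.
    """
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    mask = [[0] * w for _ in range(h)]
    for sy in range(h):
        for sx in range(w):
            if binary[sy][sx] and not seen[sy][sx]:
                # flood-fill one component, collecting its pixels
                stack, comp = [(sy, sx)], []
                seen[sy][sx] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                           and binary[ny][nx] and not seen[ny][nx]:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                # discard components whose size strays beyond tolerance
                if abs(len(comp) - area_thr) <= tol * area_thr:
                    for y, x in comp:
                        mask[y][x] = 1
    return mask
```

For message insertion the threshold is large (separating foreground from background); for message extraction it is set close to the area of a pattern bit, as stated above.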
- the watermark message is overlaid (i.e., printed) using the above described pattern symbols 11 , 13 , and 15 .
- Identification of the background and foreground sections of the input image is advantageous for applying optional printing variations.
- data bit patterns are printed solely on background sections of the input image, but marker bit patterns are printed on both foreground and background sections of the input image.
- the data bit patterns and marker bit patterns are varied in intensity depending on the general intensity of their surrounding image pixels. That is, bit patterns are printed darker than their surrounding image bits in areas where the input image is light (i.e. above a predefined intensity threshold), and bit patterns are printed lighter than their surrounding image in areas where the input image is dark (i.e. not above the predefined intensity threshold).
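- the intensity rule above can be sketched as follows. The 128 threshold and the 80-level offset in this Python sketch are illustrative assumptions only; the patent specifies a predefined intensity threshold without giving these values.

```python
def pattern_intensity(surround_mean, light_thr=128, delta=80):
    """Choose an overlay intensity (0-255 grayscale) for a bit pattern
    from the mean intensity of its surrounding pixels: darker than the
    surround in light regions, lighter in dark regions. Threshold and
    offset values are illustrative placeholders.
    """
    if surround_mean > light_thr:              # light area: print darker
        return max(0, surround_mean - delta)
    return min(255, surround_mean + delta)     # dark area: print lighter
```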
- the area mask (i.e. mask block)
- the area mask will highlight (i.e. create pattern images of) the pattern symbols for ease of extraction and decoding.
- the potential problem that may arise during generation of the input image is most commonly associated with a scanning operation. It is to be understood that a printed image on paper may be scanned to create an electronic image onto which a watermark message may be encoded, or a previously encoded, watermarked image may be scanned in preparation for extracting the encoded watermark message.
- Skew error is basically a small rotation error in the electronic image caused when the original paper image was askew while being scanned.
- An example of this type of rotation error, or skew error, is illustrated in FIG. 8 , and a preferred method for correcting for skew error is illustrated in FIG. 9 .
- one begins by first dividing the input image 90 (i.e. a scanned image in the present example of FIG. 8 ) into a left-hand plane 83 and a right-hand plane 85 with a vertical divide (or cut) 87 separating the left-hand plane 83 from the right-hand plane 85 (step 91 ).
- the vertical divide 87 is along the center of the input image 81 such that the left-hand plane 83 spans a left half of the input image 81 and the right-hand plane 85 spans a right half of the input image 81 .
- the left-hand plane 83 is searched (i.e. scanned vertically downward from top-to-bottom) to identify the first non-white row (i.e. non-blank row) of pixels (step 92 ).
- This first non-white row is hereafter identified as the first non-white-left row and is indicated by dash line 88 in FIG. 8 .
- this search identifies the first such non-white row encountered when searching from the top of the left-hand plane towards the bottom of the left-hand plane, and designates the encountered row as the first non-white-left row.
- the row index number of this first non-white-left row may be identified as a first row index (step 93 ).
- the right-hand plane 85 is searched to identify its first non-white-right row of pixels when searching vertically from its top towards its bottom (step 94 ), and a second row index corresponding to the first non-white-right row 89 is recorded (step 95 ).
- a non-white row is characterized as a row whose luminance intensity histogram contains less than a pre-specified percentage of white pixels (preferably less than 98 percent).
- the percentage of white pixels (i.e. pixels having a luminance intensity greater than 250) in a non-white row is less than 98 percent of the total pixels in the same row.
- the vertical spatial difference between the estimated first non-white-left row and the first non-white-right row may be used as a metric to determine image rotation.
- the difference between the first row index and second row index is a measure of the vertical offset from the first non-white-left row to the first non-white-right row.
- the width of the left-hand plane may then be used in combination with the vertical offset to obtain a first estimate of the rotation angle θ 1 .
- the width of the left-hand plane may be estimated as half the width of the input image. Consequently, an estimate of a first rotation angle θ 1 may be obtained as shown in step 96 as θ 1 = tan −1 (vertical offset/width of left-hand plane).
- the input image 90 is then rotated ninety degrees, and this same procedure for estimating rotation angle is repeated on the rotated, input image (step 97 ) to obtain an estimate of a second rotation angle ⁇ 2 for the rotated input image.
- the two estimated rotation angles ⁇ 1 and ⁇ 2 may then be averaged together to obtain an estimate of the general rotation error angle ⁇ of input image 90 (i.e. the skew error of input image 90 ).
- the image is preferably converted to a single channel image (RGB to YCbCr), and all further work is performed on the Y channel only. The converted image is further smoothed and binarized to handle noise. Skew angle may be corrected by countering the estimated general rotation error angle θ of input image 90 (step 99 ). That is, input image 90 may be rotated by minus the rotation error angle (i.e. by −θ).
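- steps 91 - 96 above can be sketched as follows. This minimal Python sketch returns a single estimated angle; the full procedure of steps 97 - 98 repeats it on a 90°-rotated image and averages the two estimates. The 250 luminance cutoff and 98-percent white fraction follow the description above.

```python
import math

def estimate_skew(image, white=250, white_frac=0.98):
    """Estimate scan skew, in degrees, for a grayscale image (list of
    rows of 0-255 values): find the first non-white row of the left
    and right half-planes and derive an angle from their vertical
    offset over half the image width.
    """
    h, w = len(image), len(image[0])

    def first_nonwhite_row(x0, x1):
        for y in range(h):
            row = image[y][x0:x1]
            # non-white row: fewer than 98% of its pixels are white
            if sum(1 for p in row if p > white) < white_frac * len(row):
                return y
        return h                                # plane is entirely blank

    left = first_nonwhite_row(0, w // 2)        # first non-white-left row
    right = first_nonwhite_row(w // 2, w)       # first non-white-right row
    # vertical offset over the left-hand plane width gives the angle
    return math.degrees(math.atan2(right - left, w / 2.0))
```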
- the second point of interest to be discussed prior to presenting an encoding process is the issue of applying a preferred formatting scheme to a user-provided, message string prior to encoding as a watermark message.
- the presently preferred formatting scheme has been found to provide benefits in facilitating the recovery of encoded watermarked messages.
- Upon receiving a user-provided input message string (Step 101 ), a formalized message string “M” is created (Step 102 ) by checking the length (i.e. character or bit length) of the user-provided message string and, if necessary, padding the user-provided message string with blanks (i.e. blank spaces) to make it a predefined, fixed length of preferably 64 bytes, or 512 bits.
- ECC (error correction code)
- a predefined indicator marker string (i.e. a known bit-pattern)
- message string ME or to formalized message string “M” if ECC is not provided.
- i.e. appended to message string ME if ECC is used.
- the predefined indicator marker string may be inserted at the beginning of message ME, but preferably is appended to the end of the message ME.
- one of two predefined indicator marker strings is used, depending on whether a pre-designated bit (preferably the last bit) of ECC string E (or alternatively a pre-designated bit of formalized message string “M”) is a logic high (“1”) or a logic low (“0”) (Step 104 ).
- the resultant formatted message (either MEA 0 or MEA 1 ) is then arranged into a message block of 900 bits (Step 107 ), and preferably the message block contains enough bits to span 2.5 percent of the input image.
- multiple copies of the formatted message (MEA 1 or MEA 0 ) may be copied to fill the message block of 900 bits.
- the formatted message (MEA 1 or MEA 0 ) may be padded with a known bit pattern or with a series of identical bits, such as all zeros or all ones, to fill the complete message block (preferably a perfect square) of 900 bits.
- the message blocks have all zeros or all ones padded onto the end of the formatted message to complete the 900 bits, as described in Step 107 .
- the original user-provided message becomes a formatted message defined as MEA 1 or MEA 0 (i.e., formalized message string M+Error Correction Code string E+ indicator marker string A 0 or A 1 , depending on the last bit value of E) and arranged into a message block of 900 bits with zeros or ones padding the end of the formatted message on alternate message blocks.
- MEA 1 or MEA 0 i.e., formalized message string M+Error Correction Code string E+ indicator marker string A 0 or A 1 , depending on the last bit value of E
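- the formatting of FIG. 10 can be sketched as follows. In this Python sketch the indicator marker strings A0/A1, the pad bit, and the absence of a real ECC generator are placeholder assumptions, since the patent does not specify their actual values; all strings are '0'/'1' characters.

```python
def format_message(user_msg, ecc_bits="", a0="0110", a1="1001",
                   block_bits=900, pad_bit="0"):
    """Format a user message per FIG. 10: pad to a fixed 64 bytes
    (512 bits), append the ECC string E, append indicator marker A0 or
    A1 depending on the last bit of E (or of M when no ECC is given),
    then pad out the 900-bit message block.
    """
    # Step 102: formalize to a fixed 64-byte (512-bit) message M
    m = user_msg.ljust(64)[:64]
    m_bits = "".join(format(ord(c), "08b") for c in m)
    me = m_bits + ecc_bits                      # M + ECC string E
    # Step 104: choose the indicator from the pre-designated last bit
    last = (ecc_bits or m_bits)[-1]
    mea = me + (a1 if last == "1" else a0)
    # Step 107: pad the formatted message out to the full message block
    return mea + pad_bit * (block_bits - len(mea))
```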
- FIG. 11 shows an exemplary method of subdividing an input image 90 into multiple image blocks (step 108 ).
- Each image block may then be encoded (Step 109 ) with the appropriate formatted message (containing MEA 1 or MEA 0 ), as defined in FIG. 10 .
- a connected components mask is first created (as described in FIG. 6 ) to identify areas of the image block suitable for encoding with the formatted message.
- the connected component mask may be divided into multiple mask blocks, as described in FIG. 6 .
- if a connected components mask 130 has not previously been subdivided into mask blocks and generally spans the entirety of the input image, then it is divided (step 132 ) into mask blocks of shape, size and number corresponding to the image blocks, with each mask block having a one-to-one relationship to its corresponding image block according to its relative location within the input image.
- the mask blocks may have a logic-1 indicating regions of the image blocks where message information may be encoded, and have a logic-0 indicating image block regions where no message information may be encoded.
- a watermark encoding sequence may begin by providing a mask block 121 , a message block 122 , and an image block, and a one-to-one relation is established between each mask block and its corresponding image block (step 125 ).
- an input image may be divided into multiple image blocks, and the connected components mask may also be divided into multiple mask blocks of equal number and size as the image blocks.
- a formatted message block may be of fixed size (preferably 900 bits), and has alternating message content in alternate message blocks based on formatted message MEA 0 or MEA 1 and any predefined padding-bit-pattern. It is to be understood that the message blocks are not necessarily square in shape nor are they the same size as the image blocks. It is presently preferred that the message blocks be smaller than the image blocks.
- if all image blocks have been processed (step 127 ), then the encoding sequence ends at step 128 ; but if all image blocks have not yet been processed, then based on the message length and pattern dimensions, an image block 123 extracted from the input image is multiplied by its corresponding mask block 121 to create a mask-filtered block (step 129 ).
- the resultant mask-filtered block, which masks out areas of the image block where data bit patterns should not be encoded, is further subdivided into message-pattern-size sub-blocks.
- if the determination at step 133 is YES, step 127 determines if another image block remains to be processed.
- the bit patterns defined by the formatted message MEA 0 or MEA 1 that defines a current message block are overlaid on (i.e. encoded onto) their corresponding message-pattern-size sub-blocks (step 135 ). It is to be noted that data bit patterns are not overlaid on foreground regions of the image block.
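- steps 129 - 135 can be sketched as follows. This simplified Python sketch stamps the bit value itself into each pattern-size sub-block instead of overlaying a triangular or square dot pattern, and treats any fully unmasked sub-block as writable; it illustrates the masking and left-to-right, top-to-bottom walk only.

```python
def encode_block(image_block, mask_block, message_bits, sym):
    """Overlay message bits onto an image block (lists of pixel rows):
    walk the block in sym x sym pattern-size sub-blocks, left-to-right
    and top-to-bottom, and stamp the next bit wherever the mask block
    allows. `sym` is the side length of one pattern sub-block.
    """
    h, w = len(image_block), len(image_block[0])
    n = 0                                       # next bit to encode
    for y in range(0, h - sym + 1, sym):
        for x in range(0, w - sym + 1, sym):
            if n >= len(message_bits):
                return image_block              # whole message placed
            # encode only where the entire sub-block is unmasked
            if all(mask_block[y + dy][x + dx]
                   for dy in range(sym) for dx in range(sym)):
                for dy in range(sym):
                    for dx in range(sym):
                        image_block[y + dy][x + dx] = int(message_bits[n])
                n += 1
    return image_block
```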
- Another important aspect of the present method is an adjustment of color of the watermark patterns. For lighter regions of the image block, the watermark pattern is made dark (i.e. darker than surrounding image block pixels), and for darker regions of the image block, the watermark pattern is overlaid in lighter color (lighter than surrounding image block pixels).
- the total number of watermark pattern symbols in a message block is identified as “message-bit-count”, and the encoding of the watermark message begins by defining a counting variable n equal to 0 (step 135 ).
- the pattern symbol is encoded in a color darker than its surrounding background pixels.
- the current region is a background region of the input image and is a relatively dark region (as defined by an intensity value of 35 or lower)
- the pattern symbol is encoded in a color lighter than its surrounding background pixels.
- An example of an encoded image block is shown in FIG. 14 a .
- the “marker” bit patterns 141 (square arrangement of dots)
- the “logic” bit patterns 143 (triangular arrangement of dots)
- the luminosity level of the patterns is changed based on the average intensity of the image block. For white background regions, such as on the left side of FIG. 14 a , the marker bit patterns 141 and logic bit patterns 143 are black (or darker than their surrounding background), whereas for dark background regions, such as the right side of FIG. 14 a ,
- the marker bit patterns 141 and logic bit patterns 143 are white (or lighter than their surrounding background). It should be noted that for background regions that are not fully black or fully white, such as the region identified by reference character 149 , marker bit patterns and data bit patterns are printed in an intensity lighter or darker, respectively, than their surrounding pixels. However, the actual color of the bit and marker patterns may be determined by the color of their surrounding pixels so as to blend in with the colors of the surrounding pixels. In an alternate embodiment of the present invention, the color of the bit and marker patterns is selected to blend with their surrounding pixels.
- FIG. 14 b represents a scan of the image of FIG. 14 a
- FIG. 14 b shows the same image after partial processing in preparation for extracting the encoded watermark message.
- elements similar to those of FIG. 14 a have similar reference characters and are described above.
- the image of FIG. 14 b is preferably converted to a binary image in FIG. 14 c (with regions not having data or bit patterns blacked out) to facilitate identification of the marker bits and data bits.
- Another example of an input image with a watermark message printed upon it is shown in FIG. 15 .
- the watermark patterns are dark on the light areas of the input image 150 , and they are light in the darker areas of the input image 150 .
- Recovering (i.e. reading, deciphering, or extracting)
- a watermark message from an image that has undergone a print-and-scan cycle requires some pre-processing. This is because once an image undergoes a print-and-scan cycle, the scanned image may appear very different from the original image.
- the print-and-scan cycle introduces non-linear transformations in addition to color changes. Apart from color changes, one of the transformations addressed within the present invention is the small rotation introduced by misalignment between the scan bed and the paper edges, as is described above in reference to FIG. 9 . It is presently assumed that such rotations are less than 2°.
- FIG. 16 shows a resultant image after the image of FIG. 15 undergoes a print-and-scan cycle using an Epson® CX11 multifunction device, which has integrated fax/copier/scanner/printer functionality.
- the decoding process includes two key steps (i.e. a pre-processing step and message extraction step) that are described separately in more detail below.
- FIG. 17 shows a page 151 having a sample image 152 with multiple message blocks 153 (square in shape) outlined by marker bits 155 .
- Data bits 156 are contained within the boundaries of message blocks 153 .
- the watermark messages are extracted from within the message blocks 153 .
- because an image from which a watermark message is to be extracted may have been cropped (or otherwise distorted) so that the message blocks 153 do not necessarily begin at the upper left corner of page 151 , or of sample image 152 , a preprocessing step is necessary to identify a corner of a message block 153 prior to applying a bit extraction step.
- sample image 152 shows partial message blocks 154 along its top that have been partially cut off, such as from a prior cropping action.
- FIG. 18 shows a general process for decoding a watermark message, which includes two key steps: a pre-processing step 172 and message extraction step 179 .
- the supplied sample image 171 is applied to pre-processing step 172 , which includes several sub-steps described in more detail below.
- preprocessing step 172 corrects for any skew error in sample image 171 , removes any white border around sample image 171 , reviews sample image 171 to identify a good corner of a message block, and crops and rotates sample image 171 to place the identified good corner at the upper left corner of the rotated image.
- the identified good corner is placed at the upper-left corner because message extraction step 179 assumes this arrangement in order to read a message block from left-to-right and from top-to-bottom starting from the upper-left corner of the image. It should be noted, however, that since the corner identified by pre-processing step 172 is not necessarily the top left corner of a message block, message extraction step 179 will have to determine for itself the true top-left corner of a message block. This is because the supplied sample image 171 is not necessarily right-side-up, but may have any orientation, such as being upside-down in landscape or portrait view.
- After having identified the best corner of a message block and aligning it with the upper-left corner of the image, the resultant image is converted to a single channel (step 173 ) in a manner similar to that described above in reference to step 53 in FIG. 6 .
- a gradient image is then created in step 174 in a manner similar to that of step 55 in FIG. 6 .
- Step 175 makes use of two user-provided threshold options, a lower threshold, “lower_thr”, and an upper threshold, “upper_thr”.
- a variable thr (i.e. a first variable memory location or memory space)
- a character array “extracted_message” (i.e. a second variable memory location or second memory space)
- Step 177 increases thr by a value of two, and then checks if the increased value of thr now exceeds the upper threshold, upper_thr. Steps 177 - 179 are repeated until thr exceeds upper_thr. When thr exceeds upper_thr, the process stops (step 176 ) and “extracted_message” holds the deciphered watermark message.
- Binarize step 178 creates a binarized image using the threshold thr in a manner similar to step 57 in FIG. 6 .
- the binarized image is a pre-step in preparation for creating a mask that determines which pixels are included in a search for message bits, and which are avoided, as illustrated in FIG. 14 c .
- the number of pixels included for examination (i.e. the mask sensitivity)
- the best message extracted during all the cycles is outputted (step 176 ) after the last cycle, as determined by step 177 .
- Extract message step 179 receives the output from binarize step 178 . Extract message step 179 creates a mask, searches for marker bits and data bits, identifies any data bit characters, creates a message string from the identified data bit characters, compares the characters in the currently created message string with the characters identified in previous cycles to determine the most probable character string, and stores this most probable character string as the current “new message”. As is explained above in reference to FIG. 6 , the connected component size is selected to be close to a pattern symbol size in order to identify pattern images of the printed bit patterns. Details of this extract message step 179 are provided below. Finally, the “new message” is copied to memory space “extracted_message” (step 180 ).
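- the threshold sweep of steps 175 - 180 can be sketched as follows. In this Python sketch, `binarize` and `extract` are caller-supplied stand-ins for steps 178 and 179 , and scoring candidates by how often they agree across cycles is an illustrative simplification of the character-by-character comparison described above.

```python
def sweep_extract(image, lower_thr, upper_thr, binarize, extract):
    """Drive the binarization-threshold sweep: binarize the image at
    each threshold from lower_thr to upper_thr in steps of two,
    extract a candidate message at each, and keep the candidate seen
    most often across cycles as the best extracted message.
    """
    thr = lower_thr
    extracted_message = ""
    candidates = []
    while thr <= upper_thr:
        new_message = extract(binarize(image, thr))
        candidates.append(new_message)
        # keep the most frequently recurring candidate as "best"
        extracted_message = max(set(candidates), key=candidates.count)
        thr += 2                        # step 177: raise thr by two
    return extracted_message
```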
- Preprocessing step 172 includes several sub-steps. Before describing these preprocessing sub-steps, it may be beneficial to first pictorially illustrate some of these preprocessing sub-steps using a simplified sample image, as shown in FIG. 19 .
- paper flap 164 identifies the lower right corner of a sample page 181 , which holds a scanned image 182 .
- a printer or scanner may introduce a white border, or margin, 183 a - 183 d of undetermined thickness at any or all sides of scanned image 182 .
- the marker bits are represented as black squares 185 . It is to be understood that the data bits (not shown) that constitute the watermark message would be distributed within the boundaries of each message block 186 , as defined by the rows and columns of marker bits 185 .
- FIG. 19 illustrates that within a sample image submitted for deciphering, watermark blocks 186 might be shown on only a portion of the sample image 182 by design, or by cropping, or by some other editing manipulation. However, FIG. 19 emphasizes that within the areas where message blocks 186 are shown, the marker bits 185 are printed continuously across both foreground and background areas of the scanned image 182 .
- the marker bits 185 that are printed upon foreground snowman 187 are dark when printed on light areas of snowman 187 (such as when printed over empty areas of the snowman's torso), and are light when printed on dark areas of snowman 187 (such as when printed on the snowman's hat or bowtie).
- An early preprocessing sub-step crops off (i.e. removes) the top white border, or margin, 183 a until a dominant line of dark pixels 188 of scanned image 182 is encountered, as is explained below.
- the next preprocessing sub-step is to search for the best message block 186 available in scanned image 182 .
- search window 189 whose size is equal to the size of a message block 186 .
- since the dimensions of a scanned image might be distorted during a scanning operation, it is preferred that one begin with a search window 190 a whose side lengths are one and a half times that of the message block's corresponding side lengths.
- the message blocks 186 are square, so each side of search window 190 a is 1.5 times the length of a side of a message block.
- the preprocessing sub-steps then proceed to search within the current search window 190 a to identify the best possible corner marker bit of any (full or partial) message block 186 within the current search window 190 a . In the present example of FIG. 19 , that would be corner 191 . Criteria for identifying the best corner of a message block are defined more fully below.
- in FIGS. 20-23 , elements similar to those of FIG. 19 have similar reference characters and are described above.
- search window 190 a now identifies the new upper-left corner (after rotation), and within search window 190 a , the current best corner marker bit 191 is again identified.
- The image of FIG. 20 is then rotated 90°, as shown in FIG. 21 , and preprocessing sub-steps that were applied to FIG. 19 are applied to the current image 182 of FIG. 21 .
- search window 190 a is applied to the new upper-left corner, and the current best corner marker bit 191 is again identified.
- The image of FIG. 21 is then again rotated 90°, as shown in FIG. 22 , and preprocessing sub-steps that were applied to FIG. 19 are applied to the current image 182 of FIG. 22 .
- search window 190 a is applied to the new upper-left corner, and the current best corner marker bit 191 is again identified.
- The repeated application of search window 190 a to each of the four corners of image 182 , as is illustrated in reference to FIGS. 19-22 , is then preferably repeated two additional times. Each time, the size of search window 190 a is increased by 50% to create larger search windows 190 b and 190 c , as shown in FIG. 23 . At the end of these repeated cycles, the best corner will have been identified, and the sample image 182 is cropped and rotated to place the best identified watermark block corner at the upper-left corner to proceed with processing step 173 , as described in reference to FIG. 18 .
- preprocessing step 172 of FIG. 18 receives the sample image, which may be a scanned image, a cropped image or other user-provided image, and essentially rotates and crops the sample image so that the top-left corner of the sample image coincides with a corner of a message block. In this way, decoding (i.e. message extraction) can begin in a left-to-right, top-to-bottom fashion.
- Pre-processing step 172 performs the following sub-steps.
- Sub-step 201 receives the sample image, along with the dimensions of the message blocks and a preferred confidence level (threshold_confidence) for determining the corner of a message block.
- a preferred confidence level for determining the corner of a message block.
- since the message blocks are preferably square in shape, only one side dimension (msge_block_size) is necessary. It is to be understood that both of these parameters (msge_block_size and threshold_confidence) may be predefined so that they need not be specified in sub-step 201 .
- Sub-step 203 then provides rotation compensation and margin cropping.
- rotation compensation is achieved by applying the skew correction process described above in reference to FIG. 9 .
- margin cropping is based on the assumption that printers and/or scanners may introduce a white border (i.e. margin), to images (as described above in reference to FIG. 19 ).
- the margin cropping sub-step removes the white border from all four sides of the sample_image. This may be accomplished by starting from the top boundary of the sample_image, and proceeding downwards cropping off rows of pixels until encountering a row whose white-pixel-count is less than 90% of the total pixel-count for that row (alternatively, until the white pixels make up less than 90% of the image row length dimension). This process may then be repeated at each of the remaining three sides of the sample_image. For example, the sample_image may be rotated three additional times, and the same process for removing rows of white pixels may be repeated at each rotation to remove the white border from all four sides.
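- the top-margin removal just described can be sketched as follows. This Python sketch handles one side; repeating it after each of three 90° rotations clears all four sides, per the description above. The 250 luminance cutoff for a "white" pixel follows the earlier definition and the 90% fraction follows this sub-step.

```python
def crop_top_margin(image, white=250, white_frac=0.90):
    """Remove the top white margin from a grayscale image (list of
    rows of 0-255 values): drop rows from the top until fewer than
    90% of a row's pixels are white, then return the remainder.
    """
    for i, row in enumerate(image):
        n_white = sum(1 for p in row if p > white)
        if n_white < white_frac * len(row):
            return image[i:]        # first sufficiently dark row found
    return []                       # the whole image was white
```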
- Sub-step 205 initializes a dimension multiplier, i, which is used to increase the size of the search window during each cycle. Search window 190 a , which is initially 50% bigger than a message block size, is preferably increased by 50% in each of two subsequent cycles, as illustrated by search windows 190 b and 190 c in FIG. 23 .
- the current_confidence parameter is initialized to zero.
- Parameter “rotation” specifies the amount of rotation necessary for bringing the best identified corner of a watermark block to the upper-left corner of the image, and it is also initialized to a value of zero.
- the cycle begins with sub-step 207 , which increases dimension multiplier, i, by 0.5, and then checks if the increased value of i is greater than 2.5. If i is greater, then step 172 ends at sub-step 209 . Since i is initialized to a value of 1 in sub-step 205 , it takes three iterations for i to increase beyond 2.5, and thus the search window is increased only three times, as illustrated by search windows 190 a - 190 c in FIG. 23 .
- the search window is applied to each of the sample_image's four corners. This is achieved by rotating the sample_image in 90° increments, and searching for the best watermark block corner at each increment.
- parameter angle is increased by 90°. Since parameter angle was initialized to −90° in sub-step 211 , the value of angle after the first increment is 0°, as shown in FIG. 19 .
- the sample_image is rotated in turn by 0°, 90°, 180°, and 270°, as determined by sub-step 213 . After four increments of 90°, parameter angle will be greater than 270°, as determined in sub-step 213 , and the process returns to sub-step 207 in preparation for the next cycle.
- parameter angle is not greater than 270° after being incremented in sub-step 213 .
- the sample_image is rotated by the amount indicated by the value of parameter angle.
- Parameter block_side_length, which determines the size of the search window, is defined by the size of a message block (i.e. msge_block_size) multiplied by dimension multiplier, i.
- An image segment at the upper-left corner of the sample_image of size defined by the search window is hereinafter identified as “corner_image”.
- corner_image identifies an image segment of the sample_image that coincides with the search window when the search window is superimposed on the upper-left corner of the sample_image, as currently rotated.
- Module Best_Corner_Detection in sub-step 215 receives and searches the corner_image for the best watermark block corner.
- Module Best_Corner_Detection is one of the most important modules, since all subsequent processing blocks depend on its output.
- Module Best_Corner_Detection identifies all the marker bit patterns present in the image segment, and then, based on the number of continuous marker bit patterns in one direction, determines the row index and column index for each watermark block corner within the image segment. A confidence level is calculated for each identified watermark block corner. Parameter new_confidence holds the highest calculated confidence level, and the row index and column index of the corner with the highest calculated confidence are saved as parameters newRowID and newColumnID, respectively.
- Module Best_Corner_Detection is described in greater detail in reference to FIG. 25 below.
- in sub-step 217 , if the new_confidence parameter is greater than the current_confidence parameter, then control flows to sub-step 219 .
- the new_confidence value is copied to the current_confidence parameter, the newRowID is saved as row_ID, the newColumnID is saved as column_ID, and the current angle parameter that yielded the higher new_confidence is stored in parameter rotation.
- If, on the other hand, sub-step 217 determines that the current_confidence parameter is greater than the new_confidence parameter returned by the Best_Corner_Detection module, then processing returns to sub-step 213 to check if the current search window has been applied to all four corners of the sample image. If not, then the sample_image is rotated 90° and the search window is applied to the next upper-left corner. However, if the current search window has been applied to all four corners of the sample_image, then control returns to sub-step 207 to determine if the search window should be increased by 50% and re-applied to the sample_image.
- processing ends with sub-step 209 , which uses row_ID, column_ID, and rotation to select the sample_image corner that has the highest confidence level, and rotates and crops the image to align the best corner to the upper-left corner of the sample_image.
- Optionally, processing may return to sub-step 213 by way of sub-step 220 , as indicated by dashed line 218 .
- Optional sub-step 220 determines if the current_confidence level is greater than the threshold_confidence parameter. If so, then the currently identified watermark block corner is acceptable and processing is terminated early by jumping to sub-step 209 . If not, then processing continues with sub-step 213 .
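The corner-search cycle of sub-steps 205 - 220 can be sketched as a pair of nested loops. This is a minimal sketch under stated assumptions, not the patent's implementation: `detect_corner` stands in for the Best_Corner_Detection module, and `rotate90` and `crop_corner` are hypothetical helpers defined here for completeness.

```python
def rotate90(img, angle):
    """Rotate a list-of-lists image clockwise by a multiple of 90 degrees."""
    for _ in range(angle // 90):
        img = [list(row) for row in zip(*img[::-1])]
    return img

def crop_corner(img, size):
    """Return the size-by-size segment at the image's upper-left corner."""
    return [row[:size] for row in img[:size]]

def find_best_block_corner(sample_image, msge_block_size, detect_corner,
                           threshold_confidence=None):
    """Search all four corners with windows 1.5x, 2.0x and 2.5x the
    message-block size, keeping the corner with the highest confidence.
    detect_corner(corner_image) -> (confidence, row_id, col_id)."""
    best = {"confidence": 0.0, "row": None, "col": None, "rotation": 0}
    i = 1.0
    while True:
        i += 0.5                             # sub-step 207: grow the window
        if i > 2.5:                          # only three window sizes tried
            break
        window = int(msge_block_size * i)
        for angle in (0, 90, 180, 270):      # sub-steps 211-213: 4 corners
            corner_image = crop_corner(rotate90(sample_image, angle), window)
            conf, row, col = detect_corner(corner_image)
            if conf > best["confidence"]:    # sub-steps 217-219: keep best
                best = {"confidence": conf, "row": row,
                        "col": col, "rotation": angle}
            # optional sub-step 220: stop early once confidence is enough
            if threshold_confidence and best["confidence"] > threshold_confidence:
                return best
    return best
```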
- module Best_Corner_Detection of sub-step 215 from FIG. 24 includes several sub-steps of its own.
- a connected components mask of the corner_image is generated (sub-step 223 ) using a specified intensity threshold and area threshold ( 225 ) in a manner similar to FIGS. 6-7 , discussed above.
- Parameter new_confidence is initialized to zero in sub-step 227 , and in sub-step 229 areas of the corner_image identified by the connected component mask are searched for marker bit patterns, as described generally above, and in particular as described in reference to FIGS. 1-5 .
- Sub-step 231 stores the row ID of the row having the most marker bits in parameter newRowID.
- a confidence metric is then calculated for the row identified by newRowID.
- the confidence metric is calculated by determining what fraction of the total bits (both marker bits and data bits) in row newRowID are marker bits.
- the calculated metric is stored in parameter row_confidence.
- Sub-step 235 stores the column ID of the column having the most marker bits in parameter newColumnID.
- a confidence metric is then calculated for the column identified by newColumnID in sub-step 237 .
- the confidence metric is calculated by determining what fraction of the total bits (both marker bits and data bits) in column newColumnID are marker bits. The calculated metric is stored in parameter column_confidence.
- sub-step 239 stores the average of row_confidence and column_confidence in parameter new_confidence.
- sub-step 240 returns the values of: new_confidence, newRowID, and newColumnID as outputs of sub-step 215 .
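The scoring of sub-steps 231 - 239 can be sketched as follows, assuming the detected bits have already been arranged into a grid (cells 'M' for marker bits, '0'/'1' for data bits, None where no bit was found); the grid representation and function name are illustrative, not the patent's code.

```python
def corner_confidence(bit_grid):
    """Score a candidate corner: find the row and the column with the most
    marker bits ('M'), compute for each the fraction of detected bits that
    are markers, and return the average with the row/column indices."""
    rows, cols = range(len(bit_grid)), range(len(bit_grid[0]))
    row_id = max(rows, key=lambda r: bit_grid[r].count('M'))   # sub-step 231
    col_id = max(cols, key=lambda c: sum(row[c] == 'M'
                                         for row in bit_grid))  # sub-step 235

    # Fraction of all detected bits in that row/column that are markers.
    row_bits = [b for b in bit_grid[row_id] if b is not None]
    col_bits = [row[col_id] for row in bit_grid if row[col_id] is not None]
    row_conf = row_bits.count('M') / len(row_bits) if row_bits else 0.0
    col_conf = col_bits.count('M') / len(col_bits) if col_bits else 0.0
    return (row_conf + col_conf) / 2.0, row_id, col_id          # sub-step 239
```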
- FIG. 26 shows the result of applying the pre-processing process of FIGS. 18-25 to page 151 of FIG. 17 .
- the white border areas 253 will be removed.
- the above described process identifies corner 255 as the best corner of a watermark block, and outline 252 A extends the row and column at the intersection of corner 255 to identify the section ( 252 A) of page 151 that will be cropped and rotated.
- the right side of FIG. 26 shows the resultant cropped and rotated image 252 B. As is described above, the cropped image is rotated so as to place corner 255 at the upper left corner.
- Pre-processed image 252 B is now ready for extracting its watermarked message, as described in steps 173 - 180 of FIG. 18 .
- This message extraction phase incorporates string matching.
- the presently preferred process can handle message extraction irrespective of whether pre-processed image 252 B is in known correct orientation, or is in an unknown landscape or portrait mode. It is to be understood, however, that if it is known that pre-processed image 252 B is correctly oriented with its watermarked message written from left-to-right starting from its upper-left corner, then the sub-steps for determining correct landscape/portrait mode and orientation may be skipped. Additionally for ease of explanation, the following message extraction sub-steps are described with reference to the close-up views provided by the sample images of FIGS. 7 a and 14 a - 14 c , described above.
- message extraction preferably begins by dividing the pre-processed image (such as image 252 B of FIG. 26 ) into patches (sub-step 261 ) roughly 1.5 times (preferably within 1.1 to 2.0 times) the size of a message block, as shown, for example, in FIGS. 14 a and 14 b .
- It is preferred that the patch be bigger than the message block because nonlinear distortions introduced during a scan-and-print cycle may alter the shape and/or dimensions of an image, including the message block.
- With this patch size, a message block should lie near the center of the patch. The watermark message will be extracted from the message block that lies within the patch area.
- It is preferred that multiple patches (and thereby multiple message blocks) be examined for message extraction.
- Sub-step 262 addresses the question of whether a watermark message is extracted from more than one patch. If only one patch is used, then the message extraction process ends at the completion of the current patch. Otherwise, the process ends (sub-step 264 ) after the desired number of patches have been examined.
- If more patches remain, the process goes to the next patch (sub-step 263 ).
- the center of the current patch (i.e. the location of the center bit within the current patch) is then identified.
- the center bit within a patch would likely shift along with shifts in the image dimensions due to distortion.
- Patch_Rotation may be set to a value higher than 270° (i.e. set to 360° in the present example) to indicate that no additional rotation and bit extraction cycles are necessary.
- the patch image is turned into a gradient image, which is then thresholded to produce a binary image (such as shown in FIG. 14 c , for example) for further processing.
- the connected components based technique described in reference to FIGS. 6 and 7 is used to generate the binary mask.
- sub-step 268 is applied on a patch-by-patch basis to reduce time requirements. That is, if only one or a few patches are processed, then there is no need to convert the entire image to a binary image. It is to be understood, however, that if the process of sub-step 268 were applied to the entire image prior to defining a patch (in sub-step 261 , for example), then there is no need to re-apply this sub-step to each patch individually in sub-step 268 , and processing could proceed from sub-step 266 / 267 directly to sub-step 269 , which goes to the top-left corner of the patch to start reading the bit information.
- the resultant binary image is a series of white bit images on a black background. Each white bit image is then examined to determine whether it is a marker bit, a logic-0 data bit, or a logic-1 data bit. Reading of each bit image preferably follows the process described above in reference to FIGS. 1-5 .
- each bit image is examined to determine whether it can be identified as a marker bit or a data bit. As is explained above in reference to FIG. 4 , this may be accomplished by filling-in individual bit images and then subjecting the right half of each bit image to a projection computation. As is illustrated above using arrows in FIG. 5 (for example arrows A 3 -A 6 ), the directions of decreasing horizontal projection H and vertical projection V are determined. The horizontal and vertical projection values are then combined to determine whether the bit image is a logic-0 bit or a logic-1 bit.
- Sub-step 270 determines if the horizontal and vertical projections of the right-half of the next bit image successfully identify a logic-0 data bit or a logic-1 data bit, as is illustrated by the following table.
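The decision table of sub-step 270 can be written directly as a small function; the −0.2 cutoff and sign conventions are those given in the table, and a `None` return models the fall-through to the marker-bit check of sub-step 271. The function name is illustrative.

```python
def classify_bit(h_proj, v_proj):
    """Apply the sub-step 270 decision table to the horizontal and vertical
    projection values of a bit image's right half.  Returns 0 or 1 for an
    identified data bit, or None when the projections match neither row of
    the table (the bit then falls through to the marker-bit check)."""
    if v_proj < -0.2:
        # Vertical projection qualifies; the horizontal sign picks the bit.
        return 0 if h_proj < 0 else 1
    return None  # not identifiable as a data bit
```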
- If sub-step 270 answers YES (i.e. the bit image can be successfully identified as either a logic-0 data bit or a logic-1 data bit), then the identified bit information is stored (sub-step 273 ).
- If sub-step 270 answers NO, then the entire filled-out bit image (i.e. both the left and right halves) is examined for a marker bit pattern.
- Sub-step 271 checks specifically for marker bit patterns to reduce the chances of a data bit pattern being erroneously identified as a marker bit pattern.
- This is because the consequences of misidentifying a data bit pattern in sub-step 272 (i.e. mistakenly identifying a logic-0 data bit as a logic-1 data bit, or mistakenly identifying a logic-1 data bit as a logic-0 data bit) are reasonably tolerable compared to the effects of mistakenly identifying a data bit pattern as a marker bit pattern.
- Sub-step 275 determines if the current rotation of the current patch is greater than 270°. As explained above in reference to sub-steps 265 - 267 , if it is known that the image orientation is correct for left-to-right and top-to-bottom reading of the bit images, then there is no need to examine the current patch for correct orientation, and control can return to sub-step 262 to check if another patch needs to be examined.
- the patch is rotated 90° (sub-step 276 ) and sub-steps 269 - 275 are re-applied to the same patch with the new orientation.
- the bit images within the patch are re-read in the current rotated orientation.
- Since sub-step 267 assigned an initial orientation of 0°, the patch is read in each of four orientations: 0° → 90° → 180° → 270°.
- the specific shape of the data bits and marker bits means that when bit data is not read along its correct orientation, not only can its data bit information not be identified, it is most likely to be misidentified as a marker bit. Therefore, to determine the correct orientation of the patch, one checks which of the four orientations (0°, 90°, 180°, or 270°) yields the greatest number of data bits, and that orientation is categorized as the correct orientation.
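The orientation vote just described can be sketched as follows, where `read_patch(angle)` is an assumed callback that returns the bits decoded after rotating the patch by `angle` degrees ('0'/'1' for data bits, 'M' for marker bits):

```python
def best_orientation(read_patch):
    """Return whichever rotation (0, 90, 180 or 270 degrees) decodes the
    greatest number of data bits, per the observation that a misoriented
    patch reads almost entirely as marker bits."""
    def data_count(angle):
        # Count only successfully identified data bits at this rotation.
        return sum(1 for b in read_patch(angle) if b in ('0', '1'))
    return max((0, 90, 180, 270), key=data_count)
```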
- A pictorial example of determining a correct orientation by identifying the orientation that reveals the most data bits is illustrated in FIGS. 28 and 29 .
- the bits may be arranged as an image grid, as shown in FIG. 28 .
- marker bits are shown as white squares and data bits are shown as shaded squares, with logic 0's and logic 1's each having an assigned darkness level for ease of viewing in FIG. 28 .
- Black regions in FIG. 28 identify masked-out areas that are to be ignored during reading. Since the objective is to read a message block within the current patch, one first identifies the message block by identifying contiguous sequences of at least 3 or 4 marker bits. These contiguous marker bits define the perimeter of the message block. In the present example of FIG. 28 , the message block perimeter is identified by arrows 281 - 284 , each of which delineates a respective sequence of contiguous marker bits.
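Finding the contiguous marker-bit sequences that delineate the perimeter can be sketched as a run-length scan over one row or column of the bit grid; `marker_runs` is an illustrative helper, not the patent's code.

```python
def marker_runs(line, min_len=3):
    """Return (start, end) index pairs of runs of at least min_len
    contiguous marker bits ('M') in one row or column of the bit grid,
    the runs that delineate a message-block perimeter."""
    runs, start = [], None
    for i, bit in enumerate(line):
        if bit == 'M':
            if start is None:
                start = i                     # a run begins here
        else:
            if start is not None and i - start >= min_len:
                runs.append((start, i - 1))   # close a long-enough run
            start = None
    if start is not None and len(line) - start >= min_len:
        runs.append((start, len(line) - 1))   # run reaching the line's end
    return runs
```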
- First image 291 is a first patch prior to application of the process of FIG. 27 .
- Image 292 shows the result of arranging the extracted bit pattern information.
- Data bits 295 are indicated as two distinct lighter shades of gray (for indicating a logic 0 or a logic 1).
- the vast majority of the bit patterns in this orientation are identified as marker bits 293 .
- Image 297 shows the result of rotating image 291 by 270° and re-applying the data extraction process described above. In this case, arrangement of the extracted bit information indicates that marker bits 293 are located only along the perimeter of a message block, and the interior of the message block is comprised predominately of data bits 295 .
- the present invention preferably uses “centroid feedback”, by which the location of a first patch (or message block) is used to identify the location of a second patch (or message block) relative to the first patch.
- Preferably, the first and second patches are consecutive patches in a submitted image.
- any desired patch may be used as the reference patch, but it is preferred that the reference patch selection be updated periodically.
- the centroid of a correct message block is used to identify the correct centering for the next message block to be read.
- This step is important due to non-linear scaling introduced by a print-and-scan cycle. As a result of this non-linear scaling, the exact dimensions of pattern images (as well as the dimensions of the message blocks) are not the same as they were during their initial encoding.
- the step size for cropping subsequent message blocks in a submitted image is constantly updated based on the centroid of the best message block identified so far.
- the centroid identified for the message block is updated for the rotation (landscape/portrait correction) before being used by the subsequent steps.
- each message block is thresholded multiple times with a series of increasing threshold values, and image bit identification may be attempted at each threshold level. For the results discussed above, thresholds from 25 to 35 with a step size of 2 were used. Lastly, not only do all the messages collected from a single message block go through an error correction phase, but the extracted data bit information is also applied to a string matching routine to generate the most probable string.
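The threshold sweep can be sketched as a simple loop over the levels quoted above (25 to 35, step 2); `decode_at` is an assumed callback that binarizes the block at one threshold and returns a candidate bit string, and the function name is hypothetical.

```python
def decode_with_threshold_sweep(block_gray, decode_at):
    """Collect one candidate decoding of a message block per threshold
    level.  decode_at(block_gray, t) binarizes at threshold t and returns
    the decoded bit string, or None if decoding fails at that level."""
    candidates = []
    for t in range(25, 36, 2):       # thresholds 25, 27, 29, 31, 33, 35
        bits = decode_at(block_gray, t)
        if bits is not None:
            candidates.append(bits)  # keep every successful decode
    return candidates
```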
- a single message block may contain multiple copies of a single message string (or at least the repeated message blocks will contain a copy of the original message string); therefore, to identify the most probable message string, one may compare the bit data from the multiple, recovered copies of the message string and identify the message string that repeats itself most consistently.
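One simple way to realize the "most consistently repeated string" idea is a per-position majority vote over the recovered copies; the patent's actual string-matching routine may differ, so this is only an illustrative sketch.

```python
from collections import Counter

def most_probable_string(copies):
    """Vote per bit position across the recovered copies of the message
    string, keeping the most frequent symbol at each position."""
    length = min(len(c) for c in copies)  # compare only the common prefix
    return ''.join(Counter(c[i] for c in copies).most_common(1)[0][0]
                   for i in range(length))
```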
Abstract
Description
TABLE 1

| Text Character | ASCII Code | Binary Code |
| --- | --- | --- |
| H | 72 | 01001000 |
| e | 101 | 01100101 |
| l | 108 | 01101100 |
| l | 108 | 01101100 |
| o | 111 | 01101111 |
| (space) | 32 | 00100000 |
| w | 119 | 01110111 |
| o | 111 | 01101111 |
| r | 114 | 01110010 |
| l | 108 | 01101100 |
| d | 100 | 01100100 |

- The resulting binary message is:
- 0100100001100101011011000110110001101111001000000111011101101111011100100110110001100100
| Horizontal Projection | Vertical Projection | Inference |
| --- | --- | --- |
| <0 | <−0.2 | 0 |
| ≧0 | <−0.2 | 1 |

∇I = ∇I_x + ∇I_y
Claims (24)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US12/701,290 US8243985B2 (en) | 2010-02-05 | 2010-02-05 | Bit pattern design for visible watermarking |
Publications (2)
Publication Number | Publication Date |
---|---|
US20110194725A1 US20110194725A1 (en) | 2011-08-11 |
US8243985B2 true US8243985B2 (en) | 2012-08-14 |
Family
ID=44353754
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US12/701,290 Expired - Fee Related US8243985B2 (en) | 2010-02-05 | 2010-02-05 | Bit pattern design for visible watermarking |
Country Status (1)
Country | Link |
---|---|
US (1) | US8243985B2 (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8194918B2 (en) * | 2010-02-05 | 2012-06-05 | Seiko Epson Corporation | Embedded message extraction for visible watermarking |
TWI467131B (en) * | 2012-04-24 | 2015-01-01 | Pixart Imaging Inc | Method of determining object position and system thereof |
CN106464771B (en) | 2014-04-29 | 2019-09-20 | 惠普发展公司,有限责任合伙企业 | Scanner with background |
CN105809088B (en) * | 2014-12-30 | 2019-07-19 | 清华大学 | Vehicle identification method and system |
CN107239761B (en) * | 2017-06-05 | 2020-03-27 | 山东农业大学 | Fruit tree branch pulling effect evaluation method based on skeleton angular point detection |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4965744A (en) | 1987-03-13 | 1990-10-23 | Ricoh Company, Ltd. | Apparatus for erasing and extracting image data from particular region of original document |
US5530759A (en) * | 1995-02-01 | 1996-06-25 | International Business Machines Corporation | Color correct digital watermarking of images |
US6222932B1 (en) * | 1997-06-27 | 2001-04-24 | International Business Machines Corporation | Automatic adjustment of image watermark strength based on computed image texture |
US6263086B1 (en) * | 1998-04-15 | 2001-07-17 | Xerox Corporation | Automatic detection and retrieval of embedded invisible digital watermarks from halftone images |
US20020102007A1 (en) | 2001-01-31 | 2002-08-01 | Xerox Corporation | System and method for generating color digital watermarks using conjugate halftone screens |
US20020106103A1 (en) | 2000-12-13 | 2002-08-08 | Eastman Kodak Company | System and method for embedding a watermark signal that contains message data in a digital image |
US6526155B1 (en) | 1999-11-24 | 2003-02-25 | Xerox Corporation | Systems and methods for producing visible watermarks by halftoning |
US20030210803A1 (en) | 2002-03-29 | 2003-11-13 | Canon Kabushiki Kaisha | Image processing apparatus and method |
US20040008377A1 (en) | 2002-07-09 | 2004-01-15 | Stone Cheng | Method for adding a text to an image |
US20050286765A1 (en) | 1998-10-29 | 2005-12-29 | Mitsuo Nakayama | Image scanner and an optical character recognition system using said image scanner |
- 2010-02-05 US US12/701,290 patent/US8243985B2/en not_active Expired - Fee Related
Also Published As
Publication number | Publication date |
---|---|
US20110194725A1 (en) | 2011-08-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8194918B2 (en) | Embedded message extraction for visible watermarking | |
US6641053B1 (en) | Foreground/background document processing with dataglyphs | |
KR100653886B1 (en) | Mixed-code and mixed-code encondig method and apparatus | |
US7936901B2 (en) | System and method for encoding high density geometric symbol set | |
US20110052094A1 (en) | Skew Correction for Scanned Japanese/English Document Images | |
US7965894B2 (en) | Method for detecting alterations in printed document using image comparison analyses | |
WO2018095149A1 (en) | Method and system for generating two-dimensional code having embedded visual image, and reading system | |
US8275168B2 (en) | Orientation free watermarking message decoding from document scans | |
US7017816B2 (en) | Extracting graphical bar codes from template-based documents | |
US20210165860A1 (en) | Watermark embedding and extracting method for protecting documents | |
US9349237B2 (en) | Method of authenticating a printed document | |
Bhattacharjya et al. | Data embedding in text for a copier system | |
US8284987B2 (en) | Payload recovery systems and methods | |
US8508793B2 (en) | System and method for calibrating a document processing device from a composite document | |
US6731775B1 (en) | Data embedding and extraction techniques for documents | |
US8243985B2 (en) | Bit pattern design for visible watermarking | |
US20150347886A1 (en) | High capacity 2d color barcode and method for decoding the same | |
JP4173994B2 (en) | Detection of halftone modulation embedded in an image | |
MXPA06001533A (en) | Machine readable data. | |
US8144925B2 (en) | Mapping based message encoding for fast reliable visible watermarking | |
US8300882B2 (en) | Data adaptive message embedding for visible watermarking | |
JP2008092447A (en) | Image processing apparatus, image output device, and image processing method | |
Tkachenko et al. | Centrality bias measure for high density QR code module recognition | |
US8634110B2 (en) | Edge refinement system | |
Simske et al. | Variable Data Void Pantographs |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: EPSON RESEARCH & DEVELOPMENT, INC., CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:DAS GUPTA, MITHUN;XIAO, JING;REEL/FRAME:023907/0965 Effective date: 20100204 |
|
AS | Assignment |
Owner name: SEIKO EPSON CORPORATION, JAPAN Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:EPSON RESEARCH & DEVELOPMENT, INC.;REEL/FRAME:023992/0278 Effective date: 20100223 |
|
ZAAA | Notice of allowance and fees due |
Free format text: ORIGINAL CODE: NOA |
|
ZAAB | Notice of allowance mailed |
Free format text: ORIGINAL CODE: MN/=. |
|
STCF | Information on status: patent grant |
Free format text: PATENTED CASE |
|
FEPP | Fee payment procedure |
Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
FPAY | Fee payment |
Year of fee payment: 4 |
|
MAFP | Maintenance fee payment |
Free format text: PAYMENT OF MAINTENANCE FEE, 8TH YEAR, LARGE ENTITY (ORIGINAL EVENT CODE: M1552); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY Year of fee payment: 8 |
|
FEPP | Fee payment procedure |
Free format text: MAINTENANCE FEE REMINDER MAILED (ORIGINAL EVENT CODE: REM.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
LAPS | Lapse for failure to pay maintenance fees |
Free format text: PATENT EXPIRED FOR FAILURE TO PAY MAINTENANCE FEES (ORIGINAL EVENT CODE: EXP.); ENTITY STATUS OF PATENT OWNER: LARGE ENTITY |
|
STCH | Information on status: patent discontinuation |
Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362 |
|
FP | Lapsed due to failure to pay maintenance fee |
Effective date: 20240814 |