JP5125548B2 - Image processing apparatus and program - Google Patents

Image processing apparatus and program

Info

Publication number
JP5125548B2
JP5125548B2 (application number JP2008016819A)
Authority
JP
Japan
Prior art keywords
pattern
partial
code
plurality
sequence
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
JP2008016819A
Other languages
Japanese (ja)
Other versions
JP2009176250A (en)
Inventor
隆志 園田
健司 大西
Original Assignee
富士ゼロックス株式会社 (Fuji Xerox Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 富士ゼロックス株式会社 (Fuji Xerox Co., Ltd.)
Priority to JP2008016819A
Publication of JP2009176250A
Application granted
Publication of JP5125548B2
Application status: Expired - Fee Related
Anticipated expiration


Description

  The present invention relates to an image processing apparatus and a program.

  A method is known for providing a position code that encodes a plurality of positions on a surface (see, for example, Patent Document 1). In Patent Document 1, a cyclic number sequence is printed a plurality of times along the surface, using different rotations (circular shifts) of the sequence so that a predetermined offset arises between adjacent number strings. The surface is divided into code windows, each of which contains at least three cyclic number strings and has one number string that overlaps a number string of the adjacent code window; the position of a code window is encoded by the shifts between the cyclic number strings belonging to it.

  There is also known a technique that enables spatially asynchronous reading of machine-readable encodings of logically ordered digital data recorded on a recording medium (see, for example, Patent Document 2). In this document, a large number of essentially identical copies of an encoding are formed, each combined with a machine-recognizable spatial synchronization mark at its print position so as to allow spatial synchronization of the encoding, and these instances are written into the recording medium on a spatially periodic centering grid according to layout rules.

  Systems and methods for determining the position of a captured image within a larger image are also known (see, for example, Patent Document 3). In Patent Document 3, a non-repeating sequence is folded into a non-repeating array in which every sub-window of a predetermined size is unique, and the position of the captured image within the larger image is determined by locating the sub-window it contains within the non-repeating array.

  A technique that enables long-duration recording and repeated reproduction of optically readable multimedia information is also known (see, for example, Patent Document 4). In Patent Document 4, a recording apparatus uses a printer system or a printing plate-making system to record so-called multimedia information, including audio information, video information, and digital code data, on a medium such as paper as an optically readable dot code. A pen-type information reproducing device sequentially captures the dot code as it is manually scanned; the original audio information is output by an audio output device, the original video information is displayed by a display device, and the original digital code data are output by a page printer or the like.

  There is also known a technique that enables precise detection of coordinates on a medium without using a tablet (see, for example, Patent Document 5). In Patent Document 5, a pen-type coordinate input device optically reads a code symbol formed on the medium that indicates coordinates on the medium, decodes the read code symbol, and detects the position, orientation, and amount of distortion of the code symbol in the read image. The coordinates of the pen tip on the medium are then detected on the basis of the decoded information and the position, orientation, and distortion of the code symbol.

Patent Document 1: JP-T-2003-511762
Patent Document 2: JP-A-9-185669
Patent Document 3: JP 2004-152273 A
Patent Document 4: JP-A-6-231466
Patent Document 5: JP 2000-293303 A

In general, a position on a medium is specified by searching correspondence information, which associates the partial sequences constituting a specific bit sequence with positions on the medium, on the basis of a partial sequence acquired from a pattern image. However, since there are cases where a partial sequence cannot be acquired from the image, or an erroneous partial sequence is acquired, there is a limit to how far the likelihood of specifying the position on the medium can be increased.
An object of the present invention is to increase the possibility that a position on a medium can be specified by a partial sequence acquired from a pattern image.

The invention according to claim 1 is an image processing apparatus comprising: image acquisition means for acquiring a plurality of pattern images respectively corresponding to a plurality of partial sequences, the plurality of partial sequences being partial sequences constituting a specific bit sequence that represents positions on a medium and being composed of m (m ≥ 1) partial sequences representing a predetermined position on the medium and n (n ≥ 1) partial sequences adjacent to the m partial sequences; partial sequence acquisition means for acquiring p (1 ≤ p ≤ m) partial sequences detected from the m pattern images, among the plurality of pattern images, respectively corresponding to the m partial sequences, and q (1 ≤ q ≤ n) partial sequences detected from the n pattern images respectively corresponding to the n partial sequences; and specifying means for, when the predetermined position is not specified by the p partial sequences, specifying the predetermined position by searching correspondence information, which associates the plurality of partial sequences constituting the specific bit sequence with the positions on the medium that they represent, on the basis of a search partial sequence including the p partial sequences and the q partial sequences, with the position specified on the basis of the plurality of pattern images last acquired by the image acquisition means as the search start position.
According to the invention of claim 2, the specific bit sequence is a k-th order M-sequence, and the specifying means searches the correspondence information on the basis of the search partial sequence, with an arbitrary position as the search start position, if the sum of the lengths of the p partial sequences and of the q partial sequences is k or more, and does not execute the search of the correspondence information based on the search partial sequence if that sum is less than k. This is the image processing apparatus according to claim 1.
The invention according to claim 3 further includes detection means for detecting error pattern images, that is, pattern images among the plurality of pattern images from which the corresponding partial sequences cannot be acquired; the partial sequence acquisition means acquires the p partial sequences from p pattern images other than the error pattern images among the plurality of pattern images, and acquires the q partial sequences from q pattern images other than the error pattern images. This is the image processing apparatus according to claim 1.
According to the invention of claim 4, the specifying means acquires the m partial sequences and the n partial sequences by supplementing partial sequences that could not be acquired from the plurality of pattern images, on the basis of the plurality of partial sequences associated with the predetermined position in the correspondence information. This is the image processing apparatus according to claim 3.
According to the invention of claim 5, the plurality of pattern images consist of p pattern images and q pattern images, and the partial sequence acquisition means acquires the p partial sequences from the p pattern images and the q partial sequences from the q pattern images among the plurality of pattern images. This is the image processing apparatus according to claim 1.
According to the invention of claim 6, the specifying means acquires the m partial sequences and the n partial sequences by correcting partial sequences erroneously acquired from the plurality of pattern images, on the basis of the plurality of partial sequences associated with the predetermined position in the correspondence information. This is the image processing apparatus according to claim 5.
The invention according to claim 7 is a program that causes a computer to realize: a function of acquiring a plurality of pattern images respectively corresponding to a plurality of partial sequences, the plurality of partial sequences being partial sequences constituting a specific bit sequence that represents positions on a medium and being composed of m (m ≥ 1) partial sequences representing a predetermined position on the medium and n (n ≥ 1) partial sequences adjacent to the m partial sequences; a function of acquiring p (1 ≤ p ≤ m) partial sequences detected from the m pattern images, among the plurality of pattern images, respectively corresponding to the m partial sequences, and q (1 ≤ q ≤ n) partial sequences detected from the n pattern images respectively corresponding to the n partial sequences; and a function of, when the predetermined position is not specified by the p partial sequences, specifying the predetermined position by searching correspondence information, which associates the plurality of partial sequences constituting the specific bit sequence with the positions on the medium that they represent, on the basis of a search partial sequence including the p partial sequences and the q partial sequences, with the position specified on the basis of the plurality of pattern images last acquired by the pattern image acquiring function as the search start position.
According to the invention of claim 8, the specific bit sequence is a k-th order M-sequence, and the specifying function searches the correspondence information on the basis of the search partial sequence, with an arbitrary position as the search start position, if the sum of the lengths of the p partial sequences and of the q partial sequences is k or more, and does not execute the search of the correspondence information based on the search partial sequence if that sum is less than k. This is the program according to claim 7.

The invention of claim 1 has the effect that the efficiency of the process of specifying a position on the medium from partial sequences acquired from pattern images can be improved, compared with the case where this configuration is not provided.
The invention of claim 2 has the effect that the accuracy of the position on the medium specified from the partial sequences acquired from the pattern images is increased, compared with the case where this configuration is not provided.
The invention of claim 3 has the effect that even if partial sequences cannot be acquired from some of the pattern images, the position on the medium can still be specified from the partial sequences that are acquired.
The invention of claim 4 has the effect that even if partial sequences cannot be acquired from some pattern images, correct partial sequences can be obtained.
The invention of claim 5 has the effect that even if erroneous partial sequences are acquired from some of the pattern images, the position on the medium can still be specified from the partial sequences acquired from the pattern images.
The invention of claim 6 has the effect that even if erroneous partial sequences are acquired from some pattern images, correct partial sequences can be obtained.
The invention of claim 7 has the effect that the efficiency of the process of specifying a position on the medium from partial sequences acquired from pattern images can be improved, compared with the case where this configuration is not provided.
The invention of claim 8 has the effect that the accuracy of the position on the medium specified from the partial sequences acquired from the pattern images is increased, compared with the case where this configuration is not provided.

The best mode for carrying out the present invention (hereinafter referred to as “embodiment”) will be described in detail below with reference to the accompanying drawings.
First, the encoding method used in this embodiment will be described.
In the encoding method according to the present embodiment, information is expressed by pattern images (hereinafter "code patterns") in which unit images are arranged at n (1 ≤ n < m) locations selected from m (m ≥ 3) locations, giving mCn (= m! / {(m − n)! × n!}) distinct patterns. That is, information is associated not with a single unit image but with a combination of unit images. If one unit image were associated with information, erroneous information would be expressed whenever a unit image is lost or noise is added; if, for example, two unit images are associated with information, it is immediately apparent that an error has occurred when the number of unit images is one or three. Furthermore, in a method that expresses at most 1 or 2 bits per unit image, a synchronization pattern for controlling the reading of information patterns cannot be expressed by a pattern visually similar to the information patterns that express the information. For these reasons, the present embodiment employs the above encoding method, hereinafter referred to as the mCn method.
Here, a unit image having any shape may be used. In the present embodiment, a dot image (hereinafter simply referred to as “dot”) is used as an example of a unit image, but an image having another shape such as a hatched pattern may be used.
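The pattern counts and bit capacities quoted throughout this description follow directly from the binomial coefficient mCn. A minimal sketch of that arithmetic (not part of the patent, just a check of the figures in the text):

```python
from math import comb, log2

# Number of distinct code patterns when placing dots at n of m candidate
# locations (the mCn method): mCn = m! / ((m - n)! * n!).
def pattern_count(m: int, n: int) -> int:
    return comb(m, n)

print(pattern_count(9, 2))             # 9C2 system: 36 patterns
print(pattern_count(9, 3))             # 9C3 system: 84 patterns
# Information per pattern block, in whole bits:
print(int(log2(pattern_count(9, 2))))  # 5 bits (32 of 36 patterns used)
print(int(log2(pattern_count(9, 3))))  # 6 bits (64 of 84 patterns used)
```

The surplus patterns (36 − 32 = 4, or 84 − 64 = 20) are exactly those set aside as synchronization patterns later in the description.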

FIG. 1 shows an example of a code pattern in the mCn system.
In the figure, the black areas and hatched areas are areas where dots can be arranged, and the white areas between them are areas where dots cannot be arranged. Among the areas where dots can be arranged, dots are placed in the black areas and not placed in the hatched areas. That is, FIG. 1 shows examples in which dots can be arranged at a total of 3 × 3 = 9 locations: (a) shows a code pattern of the 9C2 system, in which 2 dots are placed among the 9 locations where dots can be arranged, and (b) shows a code pattern of the 9C3 system, in which 3 dots are placed among the 9 locations.

  However, the dots (black areas) shown in FIG. 1 are only for information representation and do not coincide with the minimum units constituting the image (the smallest squares in FIG. 1). In this embodiment, the former are called "dots" and the latter "pixels"; one dot has a size of 2 pixels × 2 pixels at 600 dpi. Since one side of a pixel at 600 dpi is 0.0423 mm, one side of a dot is 84.6 μm (= 0.0423 mm × 2). The larger a dot, the more noticeable it is, so dots should be as small as possible; if they are too small, however, they can no longer be printed by a printer. The above value, larger than 50 μm and smaller than 100 μm, was therefore adopted as the dot size. Note that 84.6 μm is only a nominal value; in an actually printed toner image the dot is about 100 μm.

Next, FIG. 2 shows all code patterns of the 9C2 system. Here the spaces between dots are omitted. As shown in the figure, 36 (= 9C2) code patterns are used in the 9C2 system. A pattern value, a number that uniquely identifies each code pattern, is attached to every code pattern, and the figure also shows an example assignment of pattern values to the code patterns. However, the correspondence shown in the figure is merely an example; any pattern value may be assigned to any code pattern.

FIG. 3 shows all code patterns of the 9C3 system. Here too the spaces between dots are omitted. As shown in the figure, 84 (= 9C3) code patterns are used in the 9C3 system. In this case as well, a pattern value that uniquely identifies each code pattern is attached to every code pattern, and the figure shows an example assignment of pattern values. Again, the correspondence shown is merely an example; any pattern value may be assigned to any code pattern.
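As the text notes, the assignment of pattern values to code patterns is arbitrary. A minimal sketch of one possible enumeration, using lexicographic order of the dot positions (an assumption; the actual assignments are those of FIG. 2 and FIG. 3):

```python
from itertools import combinations

# Enumerate all code patterns of the mCn method as combinations of dot
# positions in a 3x3 grid (positions numbered 0..8, row-major), assigning
# pattern values 0, 1, 2, ... in lexicographic order. The document states
# that any value-to-pattern assignment may be used; this is just one choice.
def enumerate_patterns(m: int = 9, n: int = 2):
    return {value: dots for value, dots in enumerate(combinations(range(m), n))}

patterns_9c2 = enumerate_patterns(9, 2)
patterns_9c3 = enumerate_patterns(9, 3)
print(len(patterns_9c2), len(patterns_9c3))  # 36 84
print(patterns_9c2[0])                       # (0, 1): dots in the first two cells
```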

Here, the size of the area in which one code pattern is arranged (hereinafter "pattern block") is set so that 3 dots × 3 dots can be arranged. However, the size of the pattern block is not limited to this; it may be 2 dots × 2 dots, 4 dots × 4 dots, and so on.
Further, the shape of the pattern block need not be a square; it may be a rectangle, as when 3 dots × 4 dots are arranged. In this specification, a rectangle means a rectangle whose two adjacent sides have unequal lengths, that is, a rectangle other than a square.
Furthermore, the number of dots to be placed in an arbitrarily determined number of dot arrangement locations may be determined appropriately in consideration of the amount of information to be expressed and the allowable image density.

  Thus, in the present embodiment, mCn kinds of code patterns are prepared by selecting n locations from m locations. Among these code patterns, specific patterns are used as information patterns and the rest as synchronization patterns. Here, an information pattern is a pattern representing the information to be embedded in the medium. A synchronization pattern is a pattern used to extract the information patterns embedded in the medium, for example to specify the positions of the information patterns or to detect rotation of the image. Any medium on which an image can be printed may be used; paper is a representative example, so the medium is described as paper below, but metal, plastic, fiber, or the like may also be used.

Here, the synchronization patterns among the code patterns shown in FIG. 2 or FIG. 3 will be described. When these code patterns are used, the shape of the pattern block is a square, so the rotation of the image must be recognized in units of 90 degrees. Therefore, one set of synchronization patterns is composed of four kinds of code patterns.
FIG. 4A shows an example of synchronization patterns in the 9C2 system. Here, of the 36 kinds of code patterns, 32 are information patterns representing 5-bit information, and the remaining 4 constitute one set of synchronization patterns: for example, the code pattern of pattern value "32" serves as the upright synchronization pattern, the code pattern of pattern value "33" as the synchronization pattern rotated 90 degrees to the right, the code pattern of pattern value "34" as the synchronization pattern rotated 180 degrees to the right, and the code pattern of pattern value "35" as the synchronization pattern rotated 270 degrees to the right. However, the way of dividing the 36 kinds of code patterns into information patterns and synchronization patterns is not limited to this; for example, 16 kinds may be used as information patterns expressing 4-bit information, with the remaining 20 kinds constituting five sets of synchronization patterns.

  FIG. 4B shows an example of synchronization patterns in the 9C3 system. Here, of the 84 kinds of code patterns, 64 are information patterns expressing 6-bit information, and the remaining 20 constitute five sets of synchronization patterns. The figure shows two of the five sets. For example, in the first set, the code pattern of pattern value "64" serves as the upright synchronization pattern, the code pattern of pattern value "65" as the synchronization pattern rotated 90 degrees to the right, the code pattern of pattern value "66" as the synchronization pattern rotated 180 degrees to the right, and the code pattern of pattern value "67" as the synchronization pattern rotated 270 degrees to the right. In the second set, the code patterns of pattern values "68" to "71" likewise serve as the upright, 90-degree, 180-degree, and 270-degree synchronization patterns, respectively.
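The mechanism above can be sketched as follows: the four synchronization patterns printed on the paper are the 0/90/180/270-degree variants of one block, so the variant found in the scan reveals the rotation of the captured image. The dot layout below is hypothetical (the real layouts are those of FIG. 4); only the rotation logic matters.

```python
# Sketch of rotation detection from a set of four synchronization patterns.

def rotate90(grid):
    """Rotate a 3x3 grid of 0/1 values 90 degrees clockwise."""
    return [list(row) for row in zip(*grid[::-1])]

BASE = [[1, 0, 0],
        [0, 0, 1],
        [0, 0, 0]]  # hypothetical 9C2 sync pattern (2 dots), asymmetric

# Map each rotated variant of the base pattern to the rotation it implies.
variants = {}
g = BASE
for angle in (0, 90, 180, 270):
    variants[tuple(map(tuple, g))] = angle
    g = rotate90(g)

def detect_rotation(captured):
    """Return how far the captured sync block is rotated, in degrees."""
    return variants[tuple(map(tuple, captured))]

print(detect_rotation(rotate90(rotate90(BASE))))  # 180
```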

  Although not shown, when the shape of the pattern block is a rectangle, two kinds of code patterns suffice as synchronization patterns for detecting image rotation. For example, if an area in which dots are arranged 4 vertically and 3 horizontally is detected where an area of 3 vertical dots and 4 horizontal dots should be detected, it follows that the image is rotated by 90 or 270 degrees.

Next, a minimum unit of information expression (hereinafter referred to as “code block”) in which a synchronization pattern and an information pattern are arranged will be described.
FIG. 5 shows an example of the layout of the code block.
In the drawing, the layout of the code block is shown on the right side of each of (a) and (b). Here, a layout of 5 × 5 = 25 pattern blocks is employed. Of the 25 pattern blocks, a synchronization pattern is placed in the one block at the upper left. Information patterns representing X coordinate information, which specifies the horizontal coordinate on the paper, are placed in the four blocks to the right of the synchronization pattern, and information patterns representing Y coordinate information, which specifies the vertical coordinate on the paper, are placed in the four blocks below it. Further, information patterns representing identification information of the paper, or of the document printed on it, are placed in the 16 blocks surrounded by the coordinate-information patterns.

  Further, on the left side of (a), it is shown that a code pattern in the 9C2 system is arranged in each pattern block. That is, 36 types of code patterns are divided into, for example, 4 types of synchronization patterns and 32 types of information patterns, and each pattern is arranged according to a layout. On the other hand, the left side of (b) shows that a code pattern in the 9C3 system is arranged in each pattern block. That is, 84 types of code patterns are divided into, for example, 20 types of synchronization patterns and 64 types of information patterns, and each pattern is arranged according to a layout.

  In the present embodiment, the coordinate information is expressed by M-sequences in the vertical and horizontal directions on the paper. An M-sequence is a sequence in which no partial sequence coincides with any other partial sequence. For example, an 11th-order M-sequence is a bit string of 2047 bits; a sequence identical to any partial sequence of 11 bits or more extracted from the 2047-bit string exists nowhere else in that string. In the present embodiment, one code pattern is associated with 4 bits: the 2047-bit string is read 4 bits at a time as a decimal value, the corresponding code pattern is determined according to the assignment in FIG. 2 or FIG. 3, and the code patterns are arranged on the paper. At decoding time, the position in the bit string is specified by identifying three consecutive code patterns and referring to a table that stores the correspondence between code patterns and coordinates.
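An M-sequence of order 11 can be generated with a linear feedback shift register. A minimal sketch under an assumed feedback polynomial (x¹¹ + x² + 1, one known primitive polynomial of degree 11; the patent does not specify which polynomial is used, so the bit string below will generally differ from the example in the text, but it has the same defining property — no two of its 2047 cyclic 11-bit windows are identical):

```python
# Generate an 11th-order M-sequence with a Fibonacci LFSR.
def m_sequence(degree=11, taps=(11, 2)):
    state = [1] * degree              # any nonzero initial state works
    bits = []
    for _ in range(2 ** degree - 1):  # period of a maximal sequence: 2^k - 1
        bits.append(state[-1])
        fb = 0
        for t in taps:                # feedback from the tapped stages
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return bits

seq = m_sequence()
print(len(seq))  # 2047
# Every cyclic window of 11 bits occurs exactly once:
windows = {tuple((seq + seq)[i:i + 11]) for i in range(len(seq))}
print(len(windows))  # 2047
```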

FIG. 6 shows an example of encoding coordinate information using an M sequence.
(a) shows the bit string "0111000101011010000110010..." as an example of an 11th-order M-sequence. In the present embodiment, this is divided into 4-bit units and arranged on the paper: the first partial sequence "0111" as the code pattern of pattern value "7", the second partial sequence "0001" as the code pattern of pattern value "1", the third partial sequence "0101" as the code pattern of pattern value "5", and the fourth partial sequence "1010" as the code pattern of pattern value "10".

  When the sequence is divided into 4-bit units and assigned to code patterns in this way, the entire pattern sequence is expressed over four cycles, as shown in (b). That is, since the 11th-order M-sequence is 2047 bits, cutting it every 4 bits leaves 3 bits over at the end. The first bit of the M-sequence is appended to these 3 bits to form 4 bits, which are represented by a code pattern; 4-bit units are then cut out starting from the second bit of the M-sequence. Likewise, the next cycle starts from the third bit of the M-sequence, and the one after that from the fourth bit. After the fourth cycle the cut returns to the first bit, so a fifth cycle would coincide with the first. Therefore, if four cycles of the M-sequence are cut every 4 bits, all 2047 code patterns can be used. Since the M-sequence is of order 11, three consecutive code patterns (12 bits) do not coincide with three consecutive code patterns at any other position, so decoding is possible by reading three code patterns. In the present embodiment, however, coordinate information is expressed by four code patterns in consideration of the occurrence of errors.
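The four-cycle cut above can be sketched directly: because 2047 is not divisible by 4 and gcd(4, 2047) = 1, cutting the cyclic 2047-bit sequence into consecutive 4-bit chunks returns to bit 0 only after four full passes, i.e. after exactly 2047 chunks. The demo bit string below is hypothetical apart from its first 16 bits, which follow the example of FIG. 6:

```python
# Cut a cyclic bit string into consecutive 4-bit chunks until the cut
# returns to bit 0; each chunk's value (0-15) selects one code pattern.
def four_bit_chunks(bits):
    n = len(bits)
    pos, chunks = 0, []
    while True:
        chunk = [bits[(pos + i) % n] for i in range(4)]
        chunks.append(8 * chunk[0] + 4 * chunk[1] + 2 * chunk[2] + chunk[3])
        pos = (pos + 4) % n
        if pos == 0:          # back at bit 0: four cycles are complete
            return chunks

demo = [0,1,1,1, 0,0,0,1, 0,1,0,1, 1,0,1,0] + [0] * (2047 - 16)
chunks = four_bit_chunks(demo)
print(len(chunks))    # 2047 code patterns over four cycles
print(chunks[:4])     # [7, 1, 5, 10] - the pattern values of FIG. 6
```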

Although several methods can be used for encoding the identification information, RS (Reed-Solomon) encoding is suitable in the present embodiment, because the RS code is a multi-level coding method and the pattern values of the code patterns arranged in the pattern blocks can correspond directly to the multi-level symbols of the RS code.
In this embodiment, the code pattern is assumed to be used as follows: an image of the identification information is printed on paper so as to overlap a document image, a partial image of the paper surface is read with a pen-shaped scanner, and the identification information of the document image is acquired from it. Errors occur in this process due to dirt on the paper surface or the performance of the scanner, but these errors are corrected by the RS code.

Here, the correction by the RS code and the amount of information that can be expressed when performing such correction will be specifically described.
In the present embodiment, as described above, code patterns in which the number of dots per pattern block is constant are employed. Consequently, if one dot disappears or one dot is added, the number of dots in the pattern block changes, and such errors can be recognized as errors. If, however, a disappearance and an addition occur at the same time, the pattern is misrecognized as another code pattern.
For example, of the 16 blocks in which information patterns representing identification information are arranged, 10 blocks may be used as information blocks representing the identification information itself and 6 blocks as correction blocks. In this case, up to 6 blocks known to be in error and up to 3 blocks not known to be in error can be corrected. If this is realized with the 32 information patterns of the 9C2 system, one block expresses 5 bits, so the identification information itself is expressed in 50 bits by the 10 blocks. If it is realized with the 64 information patterns of the 9C3 system, one block expresses 6 bits, so the identification information itself is expressed in 60 bits by the 10 blocks.
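The correction budget quoted above follows from standard Reed-Solomon arithmetic (stated here as a general property of RS codes, not taken from the patent): with r parity symbols the minimum distance is d = r + 1, and any combination of t unknown-position errors and e known-position erasures with 2t + e ≤ r is correctable.

```python
# Error-correction budget and payload for the layout described in the text.
info_blocks, parity_blocks = 10, 6

max_erasures = parity_blocks        # errors whose positions are known
max_errors = parity_blocks // 2     # errors whose positions are unknown
print(max_erasures, max_errors)     # 6 3

# Identification payload (bits per information pattern times 10 blocks):
print(info_blocks * 5)  # 50 bits with the 32 information patterns of 9C2
print(info_blocks * 6)  # 60 bits with the 64 information patterns of 9C3
```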

Next, a wide layout including the code block will be described.
FIG. 7 is a diagram showing an example of such a layout. In this layout, the code blocks in FIG. 5 are periodically arranged as basic units in the vertical and horizontal directions of the entire sheet.
Here, as the synchronization pattern, the same code pattern is arranged in the upper left pattern block in each code block. In the figure, the synchronization pattern is represented by “S”.
As the X coordinate information, the same sequence of code patterns is arranged in each pattern block in the same row as the synchronization pattern is arranged. As the Y coordinate information, the same sequence of code patterns is arranged in each pattern block in the same column as the synchronization pattern is arranged. In the figure, patterns representing X coordinate information are represented by “X01”, “X02”,..., And patterns representing Y coordinate information are represented by “Y01”, “Y02”,.
Furthermore, as the identification information, the same arrangement of code patterns is periodically arranged in the vertical direction and the horizontal direction. In the drawing, patterns representing identification information are represented by “I01”, “I02”,..., “I16”.
By adopting such a layout, even when the range that is read (for example, the range indicated by the circle in the figure) does not contain an entire code block of FIG. 5, identification information and coordinate information can still be obtained by the processing described later.

The code image printed on the paper in such a layout is formed with K toner (an infrared-absorbing toner containing carbon) or a special toner, using, for example, electrophotography.
As the special toner, an invisible toner having a maximum absorption rate of 7% or less in the visible region (400 nm to 700 nm) and an absorption rate of 30% or more in the near-infrared region (800 nm to 1000 nm) can be cited. Here, "visible" and "invisible" do not refer to whether the image can be recognized by the eye; they are distinguished by whether an image formed on a printed medium can be recognized through the color developed by absorption of specific wavelengths in the visible region. "Invisible" also includes toners that develop some color through absorption of specific wavelengths in the visible region but are difficult to recognize with the human eye.

  As described above, when coordinates are expressed using an M-sequence, the correspondence between the partial sequences of the M-sequence and the coordinate values must be held as a coordinate table, and a coordinate value is obtained by searching the coordinate table based on the 11-bit value obtained from three consecutive code patterns. However, two kinds of error can occur in the code patterns of the present embodiment: an error in which the number of dots in a block changes because of the disappearance or addition of dots (hereinafter "unidentifiable error"), and an error in which a disappearance and an addition have both occurred so that the number of dots in the block is unchanged and an incorrect pattern value is identified (hereinafter "misidentification error"). In preparation for such errors, one could also hold partial sequences containing erroneous patterns in the coordinate table, but this method is impractical in terms of storage capacity.

  Therefore, in the present embodiment, it is first assumed that no error has occurred, and the coordinate table is searched with the 11-bit value obtained from three consecutive code patterns. If this fails, a coordinate value is instead obtained by searching the M-sequence table using four consecutive code patterns. Here, the M-sequence table is a table that associates each coordinate with the pattern value of the block arranged at that coordinate. That is, even when unspecified or misidentification errors in some of the four consecutive code patterns prevent a coordinate value from being obtained from the coordinate table, a coordinate value is obtained by finding the location that matches the remaining code patterns.

One method of searching the M-sequence table at this point is to scan it sequentially from the top for a matching partial sequence, but this is inefficient.
Therefore, the present embodiment exploits the fact that the coordinates acquired in successive frames within a stroke do not jump greatly. That is, if history information from the previous frame exists, the M-sequence is searched using the coordinates indicated by that history information as the search start position.
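As an illustration, this history-guided search can be sketched as follows. This is a minimal Python sketch, not the patent's implementation: the function name, the three-of-four match threshold, and the use of -1 to mark an unspecifiable block are illustrative assumptions.

```python
def search_from_history(registered, acquired, start, min_match=3):
    """Search the registered pattern sequence for the acquired partial
    sequence, visiting candidate coordinates in order of distance from
    the coordinate obtained in the previous frame (hypothetical helper).

    registered : list of pattern values indexed by coordinate
    acquired   : pattern values read from the current frame
                 (-1 marks a block whose pattern could not be specified)
    start      : coordinate from the previous frame's history, or 0
    min_match  : number of blocks that must agree for a hit
    """
    n = len(registered) - len(acquired) + 1
    # Exploit the fact that pen coordinates do not jump between frames:
    # try candidates closest to `start` first.
    order = sorted(range(n), key=lambda c: abs(c - start))
    for c in order:
        hits = sum(1 for i, v in enumerate(acquired)
                   if v != -1 and v == registered[c + i])
        if hits >= min_match:
            return c
    return -1  # no coordinate matched
```

Because the candidates are visited nearest-first, a match close to the previous coordinate is found after examining only a few windows.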

FIG. 8 is a diagram showing a first example of an M-sequence search method according to the present embodiment.
Of these, (a) shows the search of the registered pattern sequence based on the acquired pattern sequence obtained from the read image. Here, an unspecified error is assumed to have occurred in the third of the four blocks of the acquired pattern sequence. The registered pattern sequence is then searched from the top, and since the registered pattern values “6”, “2”, “0”, “10” corresponding to the coordinate values “7”, “8”, “9”, “10” match the acquired pattern sequence except for the third block, the coordinate value “7” is acquired.
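A search from the top of the registered pattern sequence, as in FIG. 8(a), might be sketched like this. The helper is hypothetical; -1 marks an unspecified block and is treated as a wildcard, and the sample data merely reproduces the pattern values quoted above at illustrative positions.

```python
def scan_from_head(registered, acquired):
    """Linear scan of the registered pattern sequence from the head.
    A window matches when every specifiable block of the acquired
    sequence agrees with it; blocks marked -1 (unspecified errors)
    are skipped as wildcards."""
    w = len(acquired)
    for coord in range(len(registered) - w + 1):
        mismatches = sum(1 for i in range(w)
                         if acquired[i] != -1
                         and acquired[i] != registered[coord + i])
        if mismatches == 0:
            return coord
    return -1
```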

When coordinates are expressed using a k-th order M-sequence, if the bit string used for the search is k bits or longer, no position other than the correct one can match the pattern values.
This is shown in (b). Here, P is the coordinate position to be obtained among the 2047 coordinate positions, and Q is the coordinate position obtained in the previous frame. When an 11th order M-sequence is used, there is no arrangement other than P in which the pattern values of three of four consecutive code patterns match. Therefore, even a search from the top of the M-sequence, as in (a), does not yield an incorrect coordinate value. Since such a search is inefficient, however, it is better to search outward to the left and right with Q at the center.

FIG. 9 is a diagram showing a second example of the M-sequence search method in the present embodiment.
Of these, (a) shows the search of the registered pattern sequence based on the acquired pattern sequence obtained from the read image. Here, unspecified errors are assumed to have occurred in the first and third of the four blocks of the acquired pattern sequence. The registered pattern sequence is then searched from the position of the coordinate value “9”, and since the registered pattern values “8”, “8”, “20”, “1”, “2” corresponding to the coordinate values “17”, “18”, “19”, “20” match the acquired pattern sequence except for the first and third blocks, the coordinate value “17” is acquired.

When coordinates are expressed using a k-th order M-sequence, if the bit string used for the search is shorter than k bits, two or more positions matching the pattern values can be found.
This is shown in (b). Here, P is the coordinate position to be obtained among the 2047 coordinate positions, and Q is the coordinate position obtained in the previous frame. When an 11th order M-sequence is used, there are seven arrangements in which the pattern values of two of four consecutive code patterns match. In the figure, the six locations other than P are designated R1, R2, ..., R6. The positions of P and R1 through R6 are random, but there are always seven of them, at an average interval of 186 mm. Therefore, a search from the head of the M-sequence may yield an incorrect coordinate value, so it is preferable to search outward to the left and right with Q at the center. In (a), the search is performed from the coordinates obtained in the previous frame (coordinate value “9”).

Note that the correspondence between the coordinate values shown in FIGS. 8 and 9 and the registered pattern sequence corresponds to the M-sequence table described above.
Further, although not shown, the coordinate table is constructed as follows. If the registered pattern value corresponding to the coordinate value j in the M-sequence table is written PATTERN_SEQUENCE[j], the coordinate value j is associated with the decimal number obtained by PATTERN_SEQUENCE[j] + 16 × PATTERN_SEQUENCE[j+1] + ((16 × 16 × PATTERN_SEQUENCE[j+2]) >> 1). Note that “>>” denotes a right shift of the bit string. It is desirable that the coordinate-table entries be sorted in ascending order of these decimal numbers so that the search based on the acquired pattern sequence can be performed efficiently.
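The construction just described can be sketched in a few lines. This is an illustrative sketch: a short pattern sequence stands in for the full 2047-coordinate cycle, and the function names are assumptions.

```python
import bisect

def build_coordinate_table(pattern_sequence):
    """Pair each coordinate j with the decimal number
    PATTERN_SEQUENCE[j] + 16*PATTERN_SEQUENCE[j+1]
    + ((16*16*PATTERN_SEQUENCE[j+2]) >> 1),
    then sort by that number so a binary search is possible."""
    table = []
    for j in range(len(pattern_sequence) - 2):
        key = (pattern_sequence[j]
               + 16 * pattern_sequence[j + 1]
               + ((16 * 16 * pattern_sequence[j + 2]) >> 1))
        table.append((key, j))
    table.sort()  # ascending by decimal key, as the text recommends
    return table

def lookup(table, key):
    """Binary-search the sorted table; return the coordinate or -1."""
    i = bisect.bisect_left(table, (key, -1))
    if i < len(table) and table[i][0] == key:
        return table[i][1]
    return -1
```

Sorting once and binary-searching thereafter is what makes the ascending order mentioned above pay off.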
In the present embodiment, it is assumed that the M series table and the coordinate table are stored in a memory (not shown) that can be referred to by the X coordinate code decoding unit 42 and the Y coordinate code decoding unit 47.

  Hereinafter, the decoding of such coordinates will be described in more detail. In this embodiment any mCn scheme may be used, but for simplicity the following description assumes the 9C3 scheme. Hereinafter, a pattern block is also referred to simply as a “block”.

First, the image processing apparatus 20 that reads and processes a code image formed on a paper surface will be described.
FIG. 10 is a block diagram illustrating a configuration example of the image processing apparatus 20.
As illustrated, the image processing apparatus 20 includes an image reading unit 21, a dot array generation unit 22, a block detection unit 23, a synchronization code detection unit 24, an identification code detection unit 30, an identification code decoding unit 32, an X coordinate code detection unit 40, an X coordinate code decoding unit 42, a Y coordinate code detection unit 45, a Y coordinate code decoding unit 47, and an information output unit 50.

The image reading unit 21 reads a code image printed on a paper surface using an imaging element such as a charge coupled device (CCD) or a complementary metal oxide semiconductor (CMOS).
The dot array generation unit 22 detects dots from the read code image and generates a dot array with reference to the dot positions. As preprocessing for dot detection, it also removes noise contained in the read image; such noise includes, for example, variations in image sensor sensitivity and noise generated by electronic circuits. The noise removal processing should be chosen to match the characteristics of the imaging system; blur processing or sharpening processing such as unsharp masking may be applied. Dot detection then proceeds as follows. First, binarization separates the dot image portions from the background image portions, and dot positions are detected from the binarized image. Since the binarized image may contain many noise components, this must be combined with filter processing that judges dots by the area and shape of the binarized regions. Finally, the dot array is generated by replacing the detected dots with digital data on a two-dimensional array, for example “1” at a position where a dot exists and “0” at a position where none does. In the present embodiment, the dot array generation unit 22 is provided as an example of an image acquisition unit that acquires a plurality of pattern images.
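The binarization step can be illustrated with a minimal sketch. The threshold value and the function name are assumptions, and the area/shape filtering described above is omitted for brevity.

```python
def generate_dot_array(gray, threshold=128):
    """Sketch of the dot-array generation step: binarize the read image
    and put "1" where a dark (dot) pixel is found and "0" elsewhere.
    `gray` is a 2-D list of 0-255 pixel values; real dot detection
    would additionally filter candidates by area and shape."""
    return [[1 if px < threshold else 0 for px in row] for row in gray]
```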

  The block detection unit 23 detects the pattern blocks within the code block on the dot array. That is, a frame of the same size and shape as the code block, together with block frames of the same size and shape as the pattern blocks, is moved over the dot array as needed, and the position at which each block contains the prescribed number of dots is taken as the correct frame position. A code array storing the pattern value of each block is then generated. In the present embodiment, the block detection unit 23 is provided as an example of a detection means for detecting an error pattern image.

  The synchronization code detection unit 24 detects the synchronization code by referring to the type of each code pattern detected from the dot array. It also determines the rotation of the image from the detected synchronization code. For example, when a square code pattern is used, the image may be rotated in steps of 90 degrees, so the orientation is detected from which of four types of synchronization pattern the detected synchronization code corresponds to. When a rectangular code pattern is used, the image may be rotated by 180 degrees, so the orientation is detected from which of two types of synchronization pattern the detected synchronization code corresponds to. The synchronization code detection unit 24 then rotates the code array by the detected rotation angle to set it in the correct orientation.

The identification code detection unit 30 detects the identification code from the code array whose angle is corrected with reference to the position of the synchronization code.
The identification code decoding unit 32 decodes the identification code using the same parameters (such as the number of blocks) used in the RS code encoding process described above, and outputs identification information.

The X coordinate code detection unit 40 detects the X coordinate code from the code array whose angle is corrected with reference to the position of the synchronization code. In the present embodiment, an X coordinate code detection unit 40 is provided as an example of a partial sequence acquisition unit that acquires a partial sequence.
The X coordinate code decoding unit 42 extracts the M-sequence partial sequence from the detected X coordinate code, obtains the position of this partial sequence in the M-sequence used for image generation, corrects this position by the shift amount of the code block, and outputs the resulting value as X coordinate information. In the present embodiment, the X coordinate code decoding unit 42 is provided as an example of a specifying means for specifying a predetermined position.

The Y coordinate code detection unit 45 detects the Y coordinate code from the code array whose angle is corrected with reference to the position of the synchronization code. In the present embodiment, a Y coordinate code detection unit 45 is provided as an example of a partial sequence acquisition unit that acquires a partial sequence.
The Y coordinate code decoding unit 47 extracts the M-sequence partial sequence from the detected Y coordinate code, obtains the position of this partial sequence in the M-sequence used for image generation, corrects this position by the shift amount of the code block, and outputs the resulting value as Y coordinate information. In the present embodiment, the Y coordinate code decoding unit 47 is provided as an example of a specifying means for specifying a predetermined position.

  The information output unit 50 outputs the identification information, the X coordinate information, and the Y coordinate information acquired from the identification code decoding unit 32, the X coordinate code decoding unit 42, and the Y coordinate code decoding unit 47, respectively.

Next, the operation of the image processing apparatus 20 will be described. In the description of this operation, it is assumed that 9C3 code patterns are arranged in the layout of FIG.
First, the image reading unit 21 reads a code image of an area having a predetermined size from the medium on which the code image is printed.
Next, the dot array generation unit 22 generates a dot array in which “1” is set at a position where a dot is detected and “0” is set at a position where a dot is not detected.
Thereafter, the block detection unit 23 superimposes the block frame on this dot array and detects the boundaries of the pattern blocks by searching for the block-frame position at which the number of dots in every pattern block is “3”. In the present embodiment, the code block is composed of 5 × 5 pattern blocks, so a block frame of 5 blocks × 5 blocks is used.

Here, the operation of the block detector 23 will be described.
FIG. 11 is a flowchart illustrating an operation example of the block detection unit 23.
First, the block detection unit 23 acquires a dot array from the dot array generation unit 22 (step 201). The size of this dot array is (number of blocks necessary for decoding × number of dots on one side of a block + number of dots on one side of a block − 1)². In this embodiment, the number of blocks necessary for decoding is 5 × 5 and the number of dots on one side of a block is 3, so a 17 × 17 dot array is acquired.

  Next, a block frame is superimposed on the acquired dot array (step 202). Then, “0” is substituted into the counters I and J, and “0” is substituted into MaxBN (step 203). Here, I and J count the number of steps in which the block frame is moved from the initial position. The block frame is moved for each line of the image, and the number of lines moved at that time is counted by counters I and J. MaxBN records the maximum count value when counting the number of blocks in which the number of dots detected in the block is “3” while moving the block frame.

  Next, the block detection unit 23 moves the block frame by I in the X direction and by J in the Y direction (step 204). Since I and J are “0” in the initial state, the block frame does not move at first. The number of dots contained in each block of the block frame is then counted, and the number of blocks whose dot count is “3” is counted and stored in IB[I, J] (step 205). The indices I and J of IB[I, J] record the movement amounts of the block frame.

  Next, the block detection unit 23 compares IB [I, J] with MaxBN (step 206). Since MaxBN has an initial value “0”, in the first comparison, IB [I, J] is larger than MaxBN. In this case, the value of IB [I, J] is substituted for MaxBN, the value of I is substituted for MX, and the value of J is substituted for MY (step 207). When IB [I, J] is MaxBN or less, the values of MaxBN, MX, and MY are left as they are.

Thereafter, the block detection unit 23 determines whether I = 2 (step 208).
If I = 2 is not satisfied, “1” is added to I (step 209). Then, the processing in steps 204 and 205 is repeated to compare IB [I, J] with MaxBN (step 206).
If IB[I, J] is greater than MaxBN, the maximum value of IB[I, J] so far, the value of IB[I, J] is assigned to MaxBN, and the values of I and J at that time are assigned to MX and MY (step 207). If MaxBN is larger than IB[I, J], it is determined whether I = 2 (step 208). When I = 2, it is next determined whether J = 2 (step 210). If J = 2 is not satisfied, “0” is substituted for I and “1” is added to J (step 211). This procedure is repeated to find the offset (I, J), among (0, 0) through (2, 2), at which IB[I, J] is maximum.

When the processing up to I = 2 and J = 2 is completed, the block detection unit 23 compares the stored MaxBN with the threshold value TB (step 212). The threshold value TB is a threshold value for determining whether the number of blocks having a dot number of “3” is such that decoding is possible.
Here, when MaxBN is larger than the threshold value TB, the block frame is fixed at the positions of MX and MY, and the pattern value of each block is detected at that position. Then, the detected pattern value is recorded in the memory as a code array PA [X, Y] together with variables X and Y for identifying each block (step 213). At this time, if the pattern value cannot be converted, “−1” which is not used as a pattern value is recorded. Then, the block detection unit 23 outputs MX and MY and the code array PA [X, Y] to the synchronous code detection unit 24 (step 214).
On the other hand, if MaxBN is equal to or less than the threshold value TB, it is determined that the image contains too much noise and that decoding is impossible (step 215).
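Under the assumptions of this flowchart (5 × 5 blocks of 3 × 3 dots on a 17 × 17 dot array, frame offsets 0 to 2), the search of steps 203 to 215 might be sketched as follows. The function name and the value given to threshold_tb (standing in for TB) are illustrative assumptions.

```python
def detect_block_frame(dot_array, blocks=5, side=3, threshold_tb=20):
    """Slide a blocks*blocks frame of side*side cells over the dot
    array, count the cells containing exactly 3 dots at each offset,
    and keep the offset (MX, MY) that maximizes that count.
    Returns (MX, MY, MaxBN), or None when MaxBN <= threshold_tb
    (too much noise: decoding impossible)."""
    max_bn, mx, my = 0, 0, 0
    for j in range(side):          # J: frame offset in the Y direction
        for i in range(side):      # I: frame offset in the X direction
            bn = 0
            for by in range(blocks):
                for bx in range(blocks):
                    dots = sum(dot_array[j + by * side + dy]
                                        [i + bx * side + dx]
                               for dy in range(side)
                               for dx in range(side))
                    if dots == 3:
                        bn += 1
            if bn > max_bn:        # keep the best offset seen so far
                max_bn, mx, my = bn, i, j
    if max_bn <= threshold_tb:
        return None
    return mx, my, max_bn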

Next, the operation of the synchronization code detection unit 24 will be described.
FIG. 12 is a flowchart illustrating an operation example of the synchronization code detection unit 24.
First, the synchronization code detection unit 24 acquires MX and MY and the code array PA [X, Y] from the block detection unit 23 (step 251).
Next, the synchronous code detection unit 24 substitutes “1” for K and L (step 252). K is a counter indicating the number of blocks in the X direction, and L is a counter indicating the number of blocks in the Y direction.

Next, the synchronization code detection unit 24 determines whether the pattern value of PA [K, L] is “64” (step 253).
If the pattern value of PA[K, L] is “64”, it is determined that the code array PA[X, Y] does not need to be rotated; K is substituted for the X coordinate SX of the block having the synchronization code, and L for the Y coordinate SY. Further, MX is substituted for the movement amount ShiftX of the block frame in the X direction, and MY for the movement amount ShiftY in the Y direction (step 254).

Next, the synchronization code detection unit 24 determines whether or not the pattern value of PA [K, L] is “65” (step 255).
If the pattern value of PA[K, L] is “65”, the code array PA[X, Y] is rotated 90 degrees to the left (step 256). As shown in FIG. 4B, the code pattern with the pattern value “65” is the code pattern with the pattern value “64” rotated 90 degrees to the right, so rotating it 90 degrees to the left makes the image upright. At this time, all pattern values in the code array PA[X, Y] are converted into the pattern values they take when rotated 90 degrees to the left.
In accordance with this rotation, L is substituted for the X coordinate SX of the block having the synchronization code, and 6-K is substituted for the Y coordinate SY. Also, MY is substituted for the movement amount ShiftX in the X direction of the block frame, and 2-MX is substituted for the movement amount ShiftY in the Y direction (step 257).

Next, the synchronization code detector 24 determines whether the pattern value of PA [K, L] is “66” (step 258).
If the pattern value of PA[K, L] is “66”, the code array PA[X, Y] is rotated 180 degrees (step 259). As shown in FIG. 4B, the code pattern with the pattern value “66” is the code pattern with the pattern value “64” rotated 180 degrees, so rotating it a further 180 degrees makes the image upright. At this time, all pattern values in the code array PA[X, Y] are converted into the pattern values they take when rotated 180 degrees.
In accordance with this rotation, 6-K is substituted for the X coordinate SX of the block having the synchronization code, and 6-L is substituted for the Y coordinate SY. Further, 2-MX is substituted for the movement amount ShiftX in the X direction of the block frame, and 2-MY is substituted for the movement amount ShiftY in the Y direction (step 260).

Next, the synchronization code detection unit 24 determines whether the pattern value of PA [K, L] is “67” (step 261).
If the pattern value of PA[K, L] is “67”, the code array PA[X, Y] is rotated 270 degrees to the left (step 262). As shown in FIG. 4B, the code pattern with the pattern value “67” is the code pattern with the pattern value “64” rotated 270 degrees to the right, so rotating it 270 degrees in the reverse direction makes the image upright. At this time, all pattern values in the code array PA[X, Y] are converted into the pattern values they take when rotated 270 degrees to the left.
In accordance with this rotation, 6-L is substituted for the X coordinate SX of the block having the synchronization code, and K is substituted for the Y coordinate SY. Further, 2-MY is substituted for the movement amount ShiftX in the X direction of the block frame, and MX is substituted for the movement amount ShiftY in the Y direction (step 263).

When values have been assigned to SX, SY, ShiftX, and ShiftY in step 254, 257, 260, or 263, the synchronization code detection unit 24 outputs PA[X, Y] and these values to the identification code detection unit 30, the X coordinate code detection unit 40, and the Y coordinate code detection unit 45 (step 264).
If PA [K, L] is not any of the pattern values “64” to “67”, the synchronization code detector 24 determines whether K = 5 (step 265). If K = 5 is not satisfied, “1” is added to K (step 266), and the process returns to step 253. If K = 5, it is determined whether L = 5 (step 267). If L = 5 is not satisfied, “1” is substituted for K, “1” is added to L (step 268), and the process returns to step 253. That is, the processes in steps 253 to 264 are repeated while changing the values of K and L until the blocks having pattern values “64” to “67” are detected. In addition, even if K = 5 and L = 5, if a block having pattern values “64” to “67” cannot be detected, a determination signal indicating that decoding is impossible is output (step 269).
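The four rotation cases of steps 253 to 263 can be summarized in a small helper. This is a sketch; the function name and the layout of the returned tuple are illustrative, while the assignments themselves follow the steps quoted above.

```python
def resolve_synchronization(k, l, mx, my, pattern_value):
    """Derive the synchronization-code position (SX, SY) and the
    block-frame shift (ShiftX, ShiftY) from the detected
    synchronization pattern, which also gives the rotation of the
    read image.  Returns (rotation_left_degrees, SX, SY, ShiftX, ShiftY)."""
    if pattern_value == 64:   # upright: no rotation needed (step 254)
        return 0, k, l, mx, my
    if pattern_value == 65:   # rotated 90 deg right: rotate 90 left (step 257)
        return 90, l, 6 - k, my, 2 - mx
    if pattern_value == 66:   # rotated 180 deg (step 260)
        return 180, 6 - k, 6 - l, 2 - mx, 2 - my
    if pattern_value == 67:   # rotated 270 deg right: rotate 270 left (step 263)
        return 270, 6 - l, k, 2 - my, mx
    raise ValueError("not a synchronization pattern")
```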

Next, operations of the identification code detection unit 30 and the identification code decoding unit 32 will be described.
FIG. 13 is a flowchart illustrating an operation example of the identification code detection unit 30 and the identification code decoding unit 32.
First, the identification code detection unit 30 acquires the code arrays PA [X, Y], SX, and SY from the synchronization code detection unit 24 (step 301).
Next, the identification code detection unit 30 initializes all elements of the identification code array IA [X, Y] with “−1” (step 302). Note that “−1” is a number that is not used as a pattern value. Then, “1” is substituted into counters IX and IY for identifying each block in the code block (step 303). Here, IX is a counter indicating the number of blocks in the X direction, and IY is a counter indicating the number of blocks in the Y direction.

The identification code detection unit 30 determines whether IY-SY is divisible by “5” (step 304). That is, it is determined whether or not the synchronization code is arranged in the row specified by IY.
Here, when IY − SY is divisible by “5”, that is, when a synchronization code is arranged in this row, no identification code is extracted from it; “1” is added to IY (step 305) and the process returns to step 304.
On the other hand, if IY-SY is not divisible by “5”, that is, if no synchronization code is arranged in this row, it is determined whether IX-SX is divisible by “5” (step 306). That is, it is determined whether or not the synchronization code is arranged in the column specified by IX.

  Here, when IX − SX is divisible by “5”, that is, when a synchronization code is arranged in this column, no identification code is extracted from it; “1” is added to IX (step 307) and the process returns to step 306.
On the other hand, when IX − SX is not divisible by “5”, that is, when no synchronization code is arranged in this column, the identification code detection unit 30 substitutes PA[IX, IY] into IA[(IX − SX) mod 5, (IY − SY) mod 5] (step 308).

Then, it is determined whether or not IX = 5 (step 309).
If IX = 5 is not satisfied, “1” is added to IX (step 307), and the processing of steps 306 to 308 is repeated until IX = 5. When IX = 5, it is next determined whether IY = 5 (step 310). If IY = 5 is not satisfied, “1” is substituted for IX (step 311), “1” is added to IY (step 305), and the processing of steps 304 to 309 is repeated until IY = 5. When IY = 5, the process proceeds to the identification code decoding unit 32.

That is, the identification code decoding unit 32 determines whether or not IA [X, Y] can be decoded (step 312).
If it is determined that IA [X, Y] can be decoded, the identification code decoding unit 32 obtains identification information from IA [X, Y] (step 313). If it is determined that IA [X, Y] cannot be decoded, N / A is substituted for identification information (step 314).
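The extraction loop of steps 302 to 311 might look like this in outline. This is a sketch: pa is modeled as a dictionary keyed by 1-based block coordinates, and the array IA is indexed here as ia[y][x]; both modeling choices are assumptions for illustration.

```python
def extract_identification_code(pa, sx, sy, size=5):
    """Copy every block of the 5x5 code block that is not in the
    synchronization row or column into the identification-code array,
    indexed modulo 5 relative to the synchronization code.
    Cells never written keep the sentinel -1 (not a pattern value)."""
    ia = [[-1] * size for _ in range(size)]
    for iy in range(1, size + 1):
        if (iy - sy) % size == 0:
            continue  # synchronization row: no identification code here
        for ix in range(1, size + 1):
            if (ix - sx) % size == 0:
                continue  # synchronization column
            ia[(iy - sy) % size][(ix - sx) % size] = pa[(ix, iy)]
    return ia
```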

Next, the operation of the X coordinate code detection unit 40 will be described.
FIG. 14 is a flowchart illustrating an operation example of the X coordinate code detection unit 40.
First, the X coordinate code detection unit 40 acquires the code arrays PA [X, Y], SX, SY, ShiftX, and ShiftY from the synchronization code detection unit 24 (step 401).
Next, the X coordinate code detection unit 40 initializes all elements of the X coordinate code array XA [X] with “−1” (step 402). Note that “−1” is a number that is not used as a pattern value. Then, “1” is substituted into counters IX and IY for identifying each block in the code block. Here, IX is a counter indicating the number of blocks in the X direction, and IY is a counter indicating the number of blocks in the Y direction. Furthermore, the X coordinate code detection unit 40 substitutes “1” into a counter KX for identifying each element in the X coordinate code array (step 403).

Further, the X coordinate code detection unit 40 determines whether IY-SY is divisible by “5” (step 404). That is, it is determined whether or not the synchronization code is arranged in the row specified by IY.
Here, if IY − SY is not divisible by “5”, that is, if no synchronization code is arranged in this row, no X coordinate code is extracted from it; “1” is added to IY (step 405) and the process returns to step 404.
On the other hand, when IY-SY is divisible by “5”, that is, when a synchronous code is arranged in this row, the X coordinate code detection unit 40 determines whether IX-SX is divisible by “5”. (Step 406). That is, it is determined whether or not the synchronization code is arranged in the column specified by IX.

Here, when IX − SX is divisible by “5”, that is, when the synchronization code is arranged in this column, no X coordinate code is extracted from it; “1” is added to IX (step 407) and the process returns to step 406.
On the other hand, if IX-SX is not divisible by “5”, that is, if no synchronization code is arranged in this column, the X coordinate code detection unit 40 substitutes PA [IX, IY] for XA [KX]. (Step 408).

Then, it is determined whether or not IX = 5 (step 409).
If IX = 5 is not satisfied, “1” is added to KX (step 410) and to IX (step 407), and the processing of steps 406 to 408 is repeated until IX = 5. When IX = 5, the process proceeds to the X coordinate code decoding unit 42.
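The extraction of the X coordinate code in steps 402 to 410 can be sketched as follows. As before, this is illustrative: pa is modeled as a dictionary keyed by 1-based block coordinates, and the function name is an assumption.

```python
def extract_x_coordinate_code(pa, sx, sy, size=5):
    """The X coordinate code sits in the synchronization row, so
    collect the blocks of the rows where IY - SY is divisible by 5,
    skipping the synchronization column itself."""
    xa = []
    for iy in range(1, size + 1):
        if (iy - sy) % size != 0:
            continue  # not the synchronization row
        for ix in range(1, size + 1):
            if (ix - sx) % size == 0:
                continue  # the synchronization code itself
            xa.append(pa[(ix, iy)])
    return xa
```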

Next, the operation of the X coordinate code decoding unit 42 will be described.
FIG. 15 is a flowchart showing an operation example of the X coordinate code decoding unit 42.
First, the X coordinate code decoding unit 42 sets the number of unspecified errors in iErasure (step 421). The number of unspecified errors is obtained by counting the blocks for which XA[X] = −1.
Next, the X coordinate code decoding unit 42 determines whether iErasure exceeds “2” (step 422). Here, “2” is the maximum number of erasures for which error-correction decoding is possible. If iErasure exceeds “2”, that is, if it is “3” or “4”, N/A is substituted into the X coordinate information (step 439) and the process ends. If iErasure does not exceed “2”, the decoding process continues.

In that case, first, the X coordinate code decoding unit 42 determines whether or not iErasure is “0” (step 423). That is, it is determined whether there are any unspecified errors in the four blocks detected by the X coordinate code detection unit 40.
Here, if iErasure is “0”, that is, if there is no unspecified error, an 11-bit value is calculated from the pattern values of the first three blocks (step 424). Writing read_sequence[j] (j = 0 to 3) for the pattern value of the j-th of the four blocks detected by the X coordinate code detection unit 40, the bit value read_num is obtained as
read_num = read_sequence[0] + 16 × read_sequence[1] + ((16 × 16 × read_sequence[2]) >> 1)
Note that “>>” denotes a right shift of the bit string. An 11-bit value results because, for the third block, only the leading 3 bits enter the calculation.

The X coordinate code decoding unit 42 thereby searches the coordinate table for a coordinate value (step 425). Writing POSITION_TABLE[i] for the coordinate value associated with the 11-bit value i, the X coordinate value im is obtained as
im = POSITION_TABLE[read_num]
Here, POSITION_TABLE holds the correspondence between j and the value obtained, by the same method as in step 424, from the three consecutive pattern-sequence entries PATTERN_SEQUENCE[j], PATTERN_SEQUENCE[j+1], and PATTERN_SEQUENCE[j+2].

The bit value above is calculated as 11 bits, matching the 11th order M-sequence used in the present embodiment, but it may also be calculated with more than 11 bits. For example, it can be calculated as the 12 bits obtained from three code patterns, with the one extra bit used for error detection. In that case, POSITION_TABLE associates the coordinate value j with the decimal number obtained by PATTERN_SEQUENCE[j] + 16 × PATTERN_SEQUENCE[j+1] + 16 × 16 × PATTERN_SEQUENCE[j+2]. When POSITION_TABLE is created from 11 bits, every bit value corresponds to some coordinate value, but when it is created from 12 bits, a bit value may have no corresponding coordinate value; in that case “−1” is output.
Accordingly, when a 12-bit value is calculated in step 424 and the coordinate table is searched with it in step 425, a step 426 immediately follows that determines whether a coordinate value was obtained from the coordinate table, that is, whether the search succeeded. In this step, the search may be judged successful when im > −1.

  Thereafter, the X coordinate code decoding unit 42 determines whether the fourth block matches (step 427). This determination compares read_sequence[3] with PATTERN_SEQUENCE[im + 3], where PATTERN_SEQUENCE[j] denotes the pattern value for the coordinate value j. If the fourth block does not match, the fourth block of the X coordinate code detected by the X coordinate code detection unit 40 is corrected (step 428); that is, PATTERN_SEQUENCE[im + 3] is substituted for read_sequence[3]. If the fourth block matches, no correction is performed. In either case, im is stored in the X coordinate information (step 429). Note that step 428 need not be performed if the detected pattern sequence read_sequence[j] is not used after coordinate decoding.
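The error-free decoding path of steps 424 to 429 can be sketched as follows. For brevity, POSITION_TABLE is modeled here as a dictionary rather than the sorted table described above, and a failed lookup yields −1 as in the 12-bit case; these modeling choices are assumptions.

```python
def decode_x_coordinate(read_sequence, pattern_sequence, position_table):
    """Compute the 11-bit value from the first three pattern values,
    look the coordinate up in the coordinate table, then verify (and
    if necessary correct) the fourth block against the registered
    pattern sequence.  Returns the coordinate, or -1 on failure."""
    read_num = (read_sequence[0]
                + 16 * read_sequence[1]
                + ((16 * 16 * read_sequence[2]) >> 1))
    im = position_table.get(read_num, -1)
    if im < 0:
        return -1  # search failed: fall back to the matching process
    if read_sequence[3] != pattern_sequence[im + 3]:
        # step 428: correct the fourth block from the registered sequence
        read_sequence[3] = pattern_sequence[im + 3]
    return im
```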

On the other hand, if it is determined in step 423 that an unspecified error has occurred, or if it is determined that the search has not been successful in step 426 after searching with a 12-bit value, an unspecified error or an erroneously specified error has occurred. The process moves to the process of obtaining the X coordinate information using a non-block.
That is, the X coordinate code decoding unit 42 determines whether or not decoding succeeded for the image read by the image reading unit 21 in the previous frame (step 430). This determination is performed using the variable pastPosition: if decoding succeeded in the previous frame, pastPosition holds the coordinate value obtained as the decoding result; if it failed, pastPosition holds “−1”. Therefore, decoding may be determined to have succeeded in the previous frame when pastPosition ≥ 0. If decoding did not succeed in the previous frame, “0” is substituted into the variable startPosition, which indicates the search start position in the collation processing described later (step 431). If decoding did succeed in the previous frame, the value of pastPosition is substituted into startPosition (step 432).
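Steps 430 to 432 reduce to a small selection rule; as a sketch (Python, with the hypothetical function name `choose_start_position`):

```python
def choose_start_position(past_position: int) -> int:
    # steps 430-432: pastPosition holds the previous frame's decoded
    # coordinate value, or -1 when the previous decode failed
    return past_position if past_position >= 0 else 0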

Thereafter, the X coordinate code decoding unit 42 determines whether iErasure is less than “2” (step 433), that is, whether at most one of the four blocks detected by the X coordinate code detection unit 40 contains an unspecified error. If so, the three-block collation process described later is performed (step 434), and it is determined whether the collation succeeded (step 435).
Here, if the decoding of the X coordinate information is successful by the three-block matching process, the process ends.

On the other hand, if the decoding of the X coordinate code does not succeed by the three-block collation process, it is determined whether decoding succeeded in the previous frame (step 436). This determination may also be made by checking whether pastPosition ≥ 0, as described above. If decoding did not succeed in the previous frame, N/A is substituted into the X coordinate information (step 439), and the process ends. This is because, as described above, there are seven locations in the two-block collation process where two blocks match; if decoding did not succeed in the previous frame, the search start position cannot be set and an appropriate coordinate value cannot be obtained.
Thereafter, the X coordinate code decoding unit 42 performs a later-described two-block collation process (step 437), and determines whether the collation is successful (step 438).
Here, if the decoding of the X coordinate information is successful by the two-block matching process, the process is terminated.
On the other hand, if the decoding of the X coordinate code is not successful by the two-block matching process, N / A is substituted for the X coordinate information (step 439), and the process ends.

Next, the 3-block matching process in FIG. 15 will be described in detail.
FIG. 16 is a flowchart showing an operation example of the 3-block matching process by the X coordinate code decoding unit 42.
First, the X coordinate code decoding unit 42 starts an M-sequence table scanning loop (step 441). Specifically, the initial value of the variable i representing the scanning position of the M-sequence table is set to “0”, and the loop is executed while adding “1” to i as long as i is less than the length of the M-sequence SEQUENCE_LENGTH.
Next, within the M-sequence table scanning loop, the X coordinate code decoding unit 42 starts a left/right collation loop (step 442). Specifically, the variable ip, which indicates whether the collation position lies to the left or to the right of the search start position, is initialized to “−1”, and the loop is executed while adding “2” to ip as long as ip is less than “2”. That is, the loop body is executed once with ip = −1 and once more with ip = 1.

  In this left and right collation loop, the X coordinate code decoding unit 42 first determines whether or not the collation position is within the range of the M series (step 443). Since the collation position is expressed as startPosition + (i × ip), the determination here is performed by checking whether 0 ≦ startPosition + (i × ip) ≦ SEQUENCE_LENGTH-3 is satisfied.

As a result, if it is determined that the collation position is within the range of the M series, “0” is substituted into the collation counter matchBlock (step 444), and a four-block scanning loop is started (step 445). Specifically, the initial value of the variable j for counting the number of blocks is set to “0”, and the loop is executed while adding “1” to j as long as j is less than “4”.
Within this 4-block scanning loop, the X coordinate code decoding unit 42 determines whether the pattern value in the X coordinate code detected by the X coordinate code detection unit 40 matches the pattern value in the M-sequence table (step 446), that is, whether read_sequence[j] = PATTERN_SEQUENCE[startPosition + (i × ip) + j]. If the two match, “1” is added to the matching counter matchBlock (step 447); if they do not match, no such processing is performed. The loop is then executed for the next j, and terminates once j reaches “4” (step 448).

As a result, the X coordinate code decoding unit 42 determines whether or not there is a portion in the M sequence table where three of the four blocks match (step 449). That is, it is determined whether or not the value of matchBlock is “3” in the 4-block scanning loop.
As a result, if there is a portion where the three blocks match, that is, if the value of the matchBlock is “3”, the X coordinate code decoding unit 42 starts a one-block correction loop (step 450). Specifically, the initial value of the variable j for counting the number of blocks is set to “0”, and the loop is executed while adding “1” to j as long as j is less than “4”.
In this one-block correction loop, the X coordinate code decoding unit 42 corrects the j-th block (step 451). That is, PATTERN_SEQUENCE[startPosition + (i × ip) + j] is stored in read_sequence[j]. The loop is then executed for the next j, and terminates once j reaches “4” (step 452). Note that step 451 need not be performed if the detected pattern string read_sequence[j] is not used after coordinate decoding.
Thereafter, the X-coordinate code decoding unit 42 stores i in the X-coordinate information (step 453), and returns information indicating that the 3-block matching has been successful (step 454).

On the other hand, if there is no portion where three blocks match in step 449, that is, if the value of matchBlock is not “3”, the X coordinate code decoding unit 42 executes the loop for the next ip, and the loop terminates once ip reaches “2” (step 455). Further, the X coordinate code decoding unit 42 executes the loop for the next i, and the loop terminates once i reaches SEQUENCE_LENGTH (step 456).
When the M-sequence table scanning loop completes without the left/right collation loop having exited via the processing of steps 450 to 454, the X coordinate information has not been obtained by the three-block collation process. In this case, therefore, information indicating that the three-block collation failed is returned (step 457).

Next, the 2-block matching process of FIG. 15 will be described in detail.
FIG. 17 is a flowchart showing an operation example of the two-block matching process by the X coordinate code decoding unit 42.
First, the X coordinate code decoding unit 42 starts an M-sequence table scanning loop (step 461). Specifically, the initial value of the variable i representing the scanning position in the M-sequence table is set to “0”, and the loop is executed while adding “1” to i as long as i is less than a predetermined length (for example, SEQUENCE_LENGTH / 100).
Next, within the M-sequence table scanning loop, the X coordinate code decoding unit 42 starts a left/right collation loop (step 462). Specifically, the variable ip, which indicates whether the collation position lies to the left or to the right of the search start position, is initialized to “−1”, and the loop is executed while adding “2” to ip as long as ip is less than “2”. That is, the loop body is executed once with ip = −1 and once more with ip = 1.

  In this left-right collation loop, the X coordinate code decoding unit 42 first determines whether or not the collation position is within the range of the M series (step 463). Since the collation position is expressed as startPosition + (i × ip), the determination here is performed by checking whether 0 ≦ startPosition + (i × ip) ≦ SEQUENCE_LENGTH-3 is satisfied.

As a result, if it is determined that the collation position is within the range of the M series, “0” is substituted for the collation counter matchBlock (step 464), and a four-block scanning loop is started (step 465). Specifically, the initial value of the variable j for counting the number of blocks is set to “0”, and the loop is executed while adding “1” to j as long as j is less than “4”.
Within this 4-block scanning loop, the X coordinate code decoding unit 42 determines whether the pattern value in the X coordinate code detected by the X coordinate code detection unit 40 matches the pattern value in the M-sequence table (step 466), that is, whether read_sequence[j] = PATTERN_SEQUENCE[startPosition + (i × ip) + j]. If the two match, “1” is added to the matching counter matchBlock (step 467); if they do not match, no such processing is performed. The loop is then executed for the next j, and terminates once j reaches “4” (step 468).

Thereby, the X coordinate code decoding unit 42 determines whether or not there is a portion in the M sequence table where two of the four blocks match (step 469). That is, it is determined whether the value of matchBlock is “2” in the 4-block scanning loop.
As a result, if there is a portion where the two blocks match, that is, if the value of the matchBlock is “2”, the X coordinate code decoding unit 42 starts a two block correction loop (step 470). Specifically, the initial value of the variable j for counting the number of blocks is set to “0”, and the loop is executed while adding “1” to j as long as j is less than “4”.
In this two-block correction loop, the X coordinate code decoding unit 42 corrects the j-th block (step 471). That is, PATTERN_SEQUENCE[startPosition + (i × ip) + j] is stored in read_sequence[j]. The loop is then executed for the next j, and terminates once j reaches “4” (step 472). Note that step 471 need not be performed if the detected pattern string read_sequence[j] is not used after coordinate decoding.
Thereafter, the X coordinate code decoding unit 42 stores i in the X coordinate information (step 473), and returns information indicating that the two-block matching is successful (step 474).

On the other hand, if there is no portion where two blocks match in step 469, that is, if the value of matchBlock is not “2”, the X coordinate code decoding unit 42 executes the loop for the next ip, and the loop terminates once ip reaches “2” (step 475). Further, the X coordinate code decoding unit 42 executes the loop for the next i, and the loop terminates once i reaches the predetermined length (for example, SEQUENCE_LENGTH / 100) (step 476).
When the M-sequence table scanning loop completes without the left/right collation loop having exited via the processing of steps 470 to 474, the X coordinate information has not been obtained by the two-block collation process. In this case, therefore, information indicating that the two-block collation failed is returned (step 477).
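Since the three-block and two-block collation flowcharts above share one structure and differ only in the required match count and the scan range, a single hedged Python sketch can cover both. The function name `collate_blocks` is our own; as minor assumptions, the range check is written as seq_len − 4 so all four block accesses stay in bounds, and the matched collation position is returned as the coordinate information:

```python
def collate_blocks(read_sequence, pattern_sequence, start_position,
                   required_matches, scan_limit):
    # Scan the M-sequence table outward from start_position, alternating
    # left (ip = -1) and right (ip = +1), as in steps 441-457 / 461-477.
    seq_len = len(pattern_sequence)
    for i in range(scan_limit):            # M-sequence table scanning loop
        for ip in (-1, 1):                 # left/right collation loop
            pos = start_position + i * ip  # collation position
            # range check: keep all four block accesses within the table
            if not 0 <= pos <= seq_len - 4:
                continue
            match_block = sum(             # 4-block scanning loop
                read_sequence[j] == pattern_sequence[pos + j]
                for j in range(4))
            if match_block == required_matches:
                for j in range(4):         # block correction loop
                    read_sequence[j] = pattern_sequence[pos + j]
                # the matched position serves as the coordinate information
                return True, pos
    return False, -1                       # collation failed
```

Under this reading, the three-block process corresponds to collate_blocks(read_sequence, PATTERN_SEQUENCE, startPosition, 3, SEQUENCE_LENGTH) and the two-block process to a call with required_matches = 2 and the restricted scan limit (for example, SEQUENCE_LENGTH // 100).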

Although only the operations of the X coordinate code detection unit 40 and the X coordinate code decoding unit 42 have been described here, the Y coordinate code detection unit 45 and the Y coordinate code decoding unit 47 perform the same operation.
This is the end of the description of the operation of the image processing apparatus 20 in the present embodiment.

  Incidentally, in the above description, when the coordinates could be decoded in the previous frame, those coordinates are used as the search start position in the M-sequence table, but the present invention is not limited to this. The search start position may be determined using not only the previous frame but coordinates successfully decoded in any of a predetermined number of preceding frames. Furthermore, instead of using the decoded coordinates directly as the search start position, coordinates derived from them by some processing may be used.

  Further, in the above, when the coordinates could not be decoded in the previous frame, the M-sequence table was searched from the head of the M-sequence in the three-block collation process, and was not searched at all in the two-block collation process. However, considering that coordinates may be expressed by a k-th order M-sequence and that an arbitrary number of bits may be assigned to a code pattern, the processing need not always take this form. For example, suppose a (p + q)-block collation process is performed using a pattern sequence obtained by concatenating p (1 ≦ p ≦ m) code patterns out of the m (m ≧ 1) code patterns representing coordinates with q (1 ≦ q ≦ n) code patterns out of n (n ≧ 1) other code patterns. In this case, if the sum of the length of the bit string corresponding to the p code patterns and the length of the bit string corresponding to the q code patterns is k or more, it is sufficient to set an arbitrary position (for example, the head of the M-sequence) as the search start position. On the other hand, if that sum is less than k, it is better not to search the M-sequence. Note that the n code patterns are not limited to code patterns adjacent behind the m code patterns; they may be code patterns adjacent in front of them.
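The condition above is purely arithmetic; as a hedged sketch (Python, with the hypothetical function name `search_is_reliable`; the 4-bits-per-pattern and k = 11 figures are those of this embodiment):

```python
def search_is_reliable(p: int, q: int, bits_per_pattern: int, k: int) -> bool:
    # A window into a k-th order M-sequence identifies a unique position
    # only if it spans at least k bits, so a (p + q)-block collation can
    # safely search from an arbitrary start position only when this holds.
    return (p + q) * bits_per_pattern >= k
```

With the 4-bit code patterns and 11th-order M-sequence of this embodiment, three blocks (12 bits) clear the threshold while two blocks (8 bits) do not, which is why the two-block collation above needs a search start position near the previous result.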

Here, an example of a service using a code image in the present embodiment will be described.
FIG. 18A specifically shows an image of the first service.
In this service, first, an object ID corresponding to a certain area on the paper is embedded as a code image. Then, as shown on the left side of the arrow, the camera-equipped mobile terminal device 70 is placed over that area on the paper. The mobile terminal device 70 reads the object ID, and generates and displays an information input UI corresponding to the object ID, as shown on the right side of the arrow. As illustrated, the information input UI may include information such as the character type and the number of characters. Information entered through the information input UI is transmitted to the server.

FIG. 18B specifically shows the image of the second service.
Also in this service, an object ID corresponding to a certain area on the paper is first embedded as a code image. In this example, it is assumed that writing is performed in that area on the paper with the pen device 60, and that the pen device 60 holds the object ID and the handwriting data in association with each other, as shown to the right of the upper-right arrow. Then, as shown on the left side of the arrow, the camera-equipped mobile terminal device 70 is placed over that area on the paper. The mobile terminal device 70 reads the object ID, receives the handwriting data associated with the object ID from the pen device 60, and, as shown to the right of the lower-right arrow, enables information to be added and edited on the mobile terminal device 70. For example, suppose katakana characters are written on the paper with the pen device 60. In this case, the mobile terminal device 70 recognizes the katakana characters and converts them into kanji in response to a user operation. That is, when a kanji is not known, it can be written in katakana with the pen device 60, converted into kanji on the mobile terminal device 70, and sent to the server.

Next, a specific hardware configuration of the image processing apparatus 20 in the present embodiment will be described.
First, the pen device 60 that implements the image processing apparatus 20 will be described.
FIG. 19 is a view showing the mechanism of the pen device 60.
As illustrated, the pen device 60 includes a control circuit 61 that controls the operation of the entire pen. The control circuit 61 includes an image processing unit 61a that processes a code image detected from an input image, and a data processing unit 61b that extracts identification information and coordinate information from the processing result.
The control circuit 61 is connected to a pressure sensor 62 that detects a writing operation by the pen device 60 from the pressure applied to the pen tip 69. Also connected are an infrared LED 63 that irradiates the medium with infrared light and an infrared CMOS 64 that inputs an image. Further connected are an information memory 65 that stores identification information and coordinate information, a communication circuit 66 for communicating with an external device, a battery 67 for driving the pen, and a pen ID memory 68 that stores pen identification information (a pen ID).

  Note that the image reading unit 21 shown in FIG. 10 is realized by, for example, the infrared CMOS 64 of FIG. 19. The dot array generation unit 22 is realized by, for example, the image processing unit 61a of FIG. 19. Further, the block detection unit 23, the synchronization code detection unit 24, the identification code detection unit 30, the identification code decoding unit 32, the X coordinate code detection unit 40, the X coordinate code decoding unit 42, the Y coordinate code detection unit 45, the Y coordinate code decoding unit 47, and the information output unit 50 shown in FIG. 10 are realized by, for example, the data processing unit 61b of FIG. 19.

Further, the processing realized by the image processing unit 61a or the data processing unit 61b in FIG. 19 may be realized by a general-purpose computer, for example. Accordingly, assuming that such processing is realized by the computer 90, the hardware configuration of the computer 90 will be described.
FIG. 20 is a diagram illustrating a hardware configuration of the computer 90.
As shown in the figure, the computer 90 includes a CPU (Central Processing Unit) 91 as calculation means, and a main memory 92 and a magnetic disk device (HDD: Hard Disk Drive) 93 as storage means. Here, the CPU 91 executes various types of software such as an OS (Operating System) and applications to realize the functions described above. The main memory 92 is a storage area that stores the various software and the data used for its execution, and the magnetic disk device 93 is a storage area that stores input data for and output data from the various software.
Further, the computer 90 includes a communication I / F 94 for performing communication with the outside, a display mechanism 95 including a video memory and a display, and an input device 96 such as a keyboard and a mouse.

  The program for realizing the present embodiment can be provided not only by communication means but also by storing it in a recording medium such as a CD-ROM.

A diagram showing an example of code patterns in the 9C2 and 9C3 systems.
A diagram showing an example of all code patterns in the 9C2 system.
A diagram showing an example of all code patterns in the 9C3 system.
A diagram showing examples of synchronization patterns in the 9C2 and 9C3 systems.
A diagram showing an example of the basic layout of a code block.
A diagram for explaining the expression of coordinates by an M-sequence.
A diagram showing an example of the extended layout of a code block.
A diagram for explaining the outline of the three-block collation process.
A diagram for explaining the outline of the two-block collation process.
A block diagram showing a functional configuration example of the image processing apparatus in the present embodiment.
A flowchart showing an operation example of the block detection unit in the present embodiment.
A flowchart showing an operation example of the synchronization code detection unit in the present embodiment.
A flowchart showing an operation example of the identification code detection unit and related units in the present embodiment.
A flowchart showing an operation example of the X coordinate code detection unit in the present embodiment.
A flowchart showing an operation example of the X coordinate code decoding unit in the present embodiment.
A flowchart showing an operation example of the X coordinate code decoding unit in the present embodiment.
A flowchart showing an operation example of the X coordinate code decoding unit in the present embodiment.
A diagram showing examples of services using the code image in the present embodiment.
A diagram showing the mechanism of a pen device that can realize the image processing apparatus in the present embodiment.
A hardware configuration diagram of a computer to which the present embodiment can be applied.

Explanation of symbols

20 … image processing apparatus, 21 … image reading unit, 22 … dot array generation unit, 23 … block detection unit, 24 … synchronization code detection unit, 30 … identification code detection unit, 32 … identification code decoding unit, 40 … X coordinate code detection unit, 42 … X coordinate code decoding unit, 45 … Y coordinate code detection unit, 47 … Y coordinate code decoding unit, 50 … information output unit

Claims (8)

  1. An image processing apparatus comprising:
    image acquisition means for acquiring a plurality of pattern images respectively corresponding to a plurality of partial sequences which, among the partial sequences constituting a specific bit string representing positions on a medium, consist of m (m ≧ 1) partial sequences representing a predetermined position on the medium and n (n ≧ 1) partial sequences adjacent to the m partial sequences;
    partial sequence acquisition means for acquiring p (1 ≦ p ≦ m) partial sequences detected from the m pattern images, among the plurality of pattern images, respectively corresponding to the m partial sequences, and q (1 ≦ q ≦ n) partial sequences detected from the n pattern images, among the plurality of pattern images, respectively corresponding to the n partial sequences; and
    specifying means for, when the predetermined position is not specified by the p partial sequences, specifying the predetermined position by searching correspondence information, in which the partial sequences constituting the specific bit string are associated with the positions on the medium that they represent, on the basis of a search partial sequence including the p partial sequences and the q partial sequences, with a position specified on the basis of the plurality of pattern images previously acquired by the image acquisition means as the search start position.
  2. The image processing apparatus according to claim 1, wherein the specific bit string is a k-th order M-sequence, and
    the specifying means searches the correspondence information on the basis of the search partial sequence with an arbitrary position as the search start position if the sum of the lengths of the p partial sequences and the q partial sequences is equal to or greater than k, and does not execute the search of the correspondence information on the basis of the search partial sequence if the sum of the lengths of the p partial sequences and the q partial sequences is less than k.
  3. The image processing apparatus according to claim 1, further comprising detection means for detecting an error pattern image, that is, a pattern image from which the corresponding partial sequence cannot be obtained, among the plurality of pattern images,
    wherein the partial sequence acquisition means acquires the p partial sequences from p pattern images other than the error pattern image among the plurality of pattern images, and acquires the q partial sequences from q pattern images other than the error pattern image.
  4. The image processing apparatus according to claim 3, wherein the specifying means acquires the m partial sequences and the n partial sequences by complementing the partial sequences that could not be obtained from the plurality of pattern images on the basis of the partial sequences associated with the predetermined position in the correspondence information.
  5. The image processing apparatus according to claim 1, wherein the plurality of pattern images consist of p pattern images and q pattern images, and
    the partial sequence acquisition means acquires the p partial sequences from the p pattern images among the plurality of pattern images, and acquires the q partial sequences from the q pattern images.
  6. The image processing apparatus according to claim 5, wherein the specifying means acquires the m partial sequences and the n partial sequences by correcting the partial sequences erroneously obtained from the plurality of pattern images on the basis of the partial sequences associated with the predetermined position in the correspondence information.
  7. A program for causing a computer to realize:
    a function of acquiring a plurality of pattern images respectively corresponding to a plurality of partial sequences which, among the partial sequences constituting a specific bit string representing positions on a medium, consist of m (m ≧ 1) partial sequences representing a predetermined position on the medium and n (n ≧ 1) partial sequences adjacent to the m partial sequences;
    a function of acquiring p (1 ≦ p ≦ m) partial sequences detected from the m pattern images, among the plurality of pattern images, respectively corresponding to the m partial sequences, and q (1 ≦ q ≦ n) partial sequences detected from the n pattern images, among the plurality of pattern images, respectively corresponding to the n partial sequences; and
    a function of, when the predetermined position is not specified by the p partial sequences, specifying the predetermined position by searching correspondence information, in which the partial sequences constituting the specific bit string are associated with the positions on the medium that they represent, on the basis of a search partial sequence including the p partial sequences and the q partial sequences, with a position specified on the basis of the plurality of pattern images previously acquired by the pattern image acquiring function as the search start position.
  8. The program according to claim 7, wherein the specific bit string is a k-th order M-sequence, and
    the specifying function searches the correspondence information on the basis of the search partial sequence with an arbitrary position as the search start position if the sum of the lengths of the p partial sequences and the q partial sequences is equal to or greater than k, and does not execute the search of the correspondence information on the basis of the search partial sequence if the sum of the lengths of the p partial sequences and the q partial sequences is less than k.
JP2008016819A 2008-01-28 2008-01-28 Image processing apparatus and program Expired - Fee Related JP5125548B2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2008016819A JP5125548B2 (en) 2008-01-28 2008-01-28 Image processing apparatus and program


Publications (2)

Publication Number Publication Date
JP2009176250A JP2009176250A (en) 2009-08-06
JP5125548B2 true JP5125548B2 (en) 2013-01-23



Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005092436A (en) * 2003-09-16 2005-04-07 Casio Comput Co Ltd Code information read-out device, program therefor, and pen type data input unit using the same
JP2006079391A (en) * 2004-09-10 2006-03-23 Fuji Xerox Co Ltd Trajectory acquiring apparatus and trajectory acquiring program
JP4289350B2 (en) * 2005-12-26 2009-07-01 富士ゼロックス株式会社 Image processing apparatus and image processing method
JP5028843B2 (en) * 2006-04-12 2012-09-19 富士ゼロックス株式会社 Writing information processing device, writing information processing method, and program
JP2008009833A (en) * 2006-06-30 2008-01-17 Fuji Xerox Co Ltd Document management device and program



Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20101217

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20120127

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20120131

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20120327

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20121002


A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20121015

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20151109

Year of fee payment: 3

LAPS Cancellation because of no payment of annual fees