WO2003041014A1 - Apparatus and method for recognizing code - Google Patents


Publication number
WO2003041014A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
code
color
colors
code image
Prior art date
Application number
PCT/KR2002/000886
Other languages
French (fr)
Inventor
Cheol-Ho Cheong
Nam-Kyu Lee
Tack-Don Han
Original Assignee
Colorzip Media, Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Colorzip Media, Inc
Priority to EP02802741A (patent EP1456816B1)
Priority to DE60225329T (patent DE60225329T2)
Priority to JP2003542972A (patent JP4016342B2)
Priority to US10/492,305 (patent US6981644B2)
Publication of WO2003041014A1
Priority to HK05105479A (patent HK1072826A1)


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T1/00 - General purpose image data processing
    • G06T1/0021 - Image watermarking

Definitions

  • the present invention relates to an apparatus for recognizing a code image, which is physically or electronically expressed, and for extracting information represented on the code image, and a method therefor.
  • In order to read the original data from a code image in which data is represented as an image, a suitable decoder must be provided.
  • It is an object of the present invention to provide an apparatus for recognizing a code which is capable of reading the original data from a code image in which data is expressed as a color or shade, and a method therefor. Accordingly, to achieve the above object, according to one aspect of the present invention, there is provided an apparatus for recognizing a code.
  • the apparatus includes an image-acquiring portion for acquiring a raw image in which a code image is contained, a color-converting portion for correcting colors or shades recognized in the raw image, using environmental variables, converting the corrected colors or shades into a plurality of standard colors or standard shades used to generate the code image, and generating a standard image represented by the standard colors or standard shades, a binary-coding converting portion for dividing the colors or shades of the raw image into two colors according to a predetermined reference value and generating a binary-coded image, an image-processing portion for extracting an area excluding a code image area from the binary-coded image, setting a color of the area corresponding to the extracted area in the standard image as a background color, thereby discriminating the code image area from other areas, extracting a plurality of cells included in the code image area, and recognizing the standard color or standard shade represented in each of the cells, and a code-converting portion for extracting a corresponding character, number, or symbol from the color or shade recognized in each of the cells of the code
  • an apparatus for recognizing a code includes an image-acquiring portion for acquiring a raw image in which a code image is contained, a binary-coding converting portion for dividing the colors or shades of the raw image into two colors according to a predetermined reference value and generating a binary-coded image, an image-processing portion for extracting an area excluding a code image area from the binary-coded image, setting a color of the area corresponding to the extracted area in the standard image as a background color, thereby discriminating the code image area from other areas, extracting a plurality of cells included in the code image area, and recognizing the standard color or standard shade represented in each of the cells, a color-converting portion for correcting colors or shades recognized with respect to each of the cells included in the code image area, using environmental variables, and converting the corrected colors or shades into a plurality of standard colors or standard shades used to generate the code image, and a code-converting portion for extracting a corresponding
  • A method for recognizing a code includes the steps of acquiring a raw image in which a code image is contained, correcting colors or shades recognized from the raw image, using environmental variables, converting the corrected colors or shades into a plurality of standard colors or standard shades used to generate the code image, and generating a standard image represented by the standard colors or standard shades, dividing the colors or shades of the raw image into two colors according to a predetermined reference value and generating a binary-coded image, extracting an area excluding a code image area from the binary-coded image, and setting a color of the area corresponding to the extracted area in the standard image as a background color, thereby discriminating the code image area from other areas, extracting a plurality of cells included in the code image area, and recognizing the standard color or standard shade represented in each of the cells, and extracting a corresponding character, number, or symbol from the color or shade recognized in each of the cells of the code image according to a
  • a method for recognizing a code includes the steps of acquiring a raw image in which a code image is contained, dividing the colors or shades of the raw image into two colors according to a predetermined reference value and generating a binary-coded image, extracting an area excluding a code image area from the binary-coded image, and setting a color of the area corresponding to the extracted area in the standard image as a background color, thereby discriminating the code image area from other areas, extracting a plurality of cells included in the code image portion, and recognizing the standard color or standard shade represented in each of the cells, correcting colors or shades recognized from the raw image with respect to each of the cells included in the code image area, using environmental variables, converting the corrected colors or shades into a plurality of standard colors or standard shades used to generate the code image, and extracting a corresponding character, number, or symbol from the color or shade recognized in each of the cells of the code image according to a relationship between the
  • FIG. 1A is a block diagram of an apparatus for recognizing a code according to a first embodiment of the present invention
  • FIG. 1B is a flow chart illustrating the operation of the apparatus shown in FIG. 1A;
  • FIGS. 2A through 2E illustrate various examples of code images to be read by the apparatus
  • FIGS. 3A through 3C illustrate examples of code conversion tables used to convert predetermined data into an image
  • FIG. 4 illustrates an example in which a code image is incorporated into a name card
  • FIG. 5 is a block diagram of the apparatus according to a second embodiment of the present invention
  • FIG. 6 is a flow chart illustrating the operation of the apparatus shown in FIG. 5;
  • FIGS. 7A through 7F illustrate results in which black-and-white images are obtained from raw images
  • FIGS. 8A through 8F illustrate steps of removing a noise image from a black-and-white image
  • FIG. 9 is a flow chart illustrating the step of recognizing a standard color represented in each pixel of the code image
  • FIGS. 10 through 14 illustrate examples for explaining FIG. 9;
  • FIG. 15 is a block diagram of the apparatus according to a third embodiment of the present invention; and
  • FIG. 16 is a flow chart illustrating the operation of the apparatus shown in FIG. 15.
  • FIG. 1A is a block diagram of an apparatus for recognizing a code according to a first embodiment of the present invention
  • FIG. 1B is a flow chart illustrating the operation of the apparatus shown in FIG. 1A
  • FIGS. 2A through 2E illustrate various examples of code images to be read by the apparatus
  • FIGS. 3A through 3C illustrate examples of code conversion tables used to convert predetermined data into an image
  • FIG. 4 illustrates an example in which a code image is incorporated into a name card.
  • The apparatus for recognizing a code reads a code image (a quadrangular image shown at the lower right corner of FIG. 4).
  • An image-acquiring portion 11 acquires an image including a code image which is physically or electronically expressed.
  • The code data to be eventually extracted are expressed as an image in the code image.
  • The image-acquiring portion 11 reads the physically expressed image through an image input device, such as a scanner, a digital camera, a PC camera, a sensor, or a facsimile, and converts the read image into image data, which can be electronically processed.
  • The output of the image-acquiring portion 11 is referred to as a "raw image".
  • the raw image is formatted as an image file, which can be processed by a computer.
  • A noise image or background is usually contained in the raw image along with the code image.
  • An image-processing portion 13 extracts a code image from the raw image and recognizes the color or shade of individual cells contained in the code image.
  • a code image region is extracted from the raw image.
  • the image-processing portion 13 generates data related to the code image based on the raw image and discriminates the shape, position, or type of code image.
  • the image-processing portion 13 discriminates the number, shape, and position of cells contained in the code image.
  • The image-processing portion 13 detects the color or shade of each cell. The image-processing portion 13 sets up an environmental variable in consideration of the ambient environment at the time when the raw image is acquired, corrects the color or shade of individual cells using the environmental variable, and thereby detects the original color or shade of each cell.
  • The image-processing portion 13 extracts the code image region except for a background image portion from the raw image and discriminates the shape and type of code image, thereby discriminating the cells contained in the code image region on this basis.
  • the raw image is converted into a black-and-white image on the basis of a black-and-white environmental variable, which is set according to the degree of brightness in a state where the raw image is input.
  • A color of the background image portion of the black-and-white image is set to a particular background color, which is not used to represent code data.
  • A color of the raw image portions corresponding to the background image portion in the black-and-white image is set to the background color, and by discriminating code image portions from background portions, the code image region is extracted from the raw image.
  • The arithmetic operations required to extract the code image region may be reduced by using the black-and-white image.
  • the image-processing portion 13 receives an image in which the code image portions and the other portions are divided by the background color, divides the received image into a plurality of blocks, detects a region having a color or shade that is not the background color from each of the blocks, selects a block having the largest region among all blocks, detects a center point of the code image region contained in the selected block, and searches the entire image on the basis of the center point, thereby detecting the region having a color or shade that is not the background color as the code image region.
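The block-search step described above can be outlined in code. The sketch below is illustrative only: a plain 2D list stands in for the image, the background marker `BG` and the block count are assumptions, and a simple flood fill replaces the center-point search.

```python
BG = 0  # assumed background marker (any color not used by the code)

def locate_code_region(image, blocks_per_side=4):
    """Pick the block with the most non-background pixels, then collect
    the connected non-background region around that block's center."""
    h, w = len(image), len(image[0])
    bh, bw = h // blocks_per_side, w // blocks_per_side
    best, best_count = (0, 0), -1
    for by in range(blocks_per_side):
        for bx in range(blocks_per_side):
            count = sum(
                1
                for y in range(by * bh, (by + 1) * bh)
                for x in range(bx * bw, (bx + 1) * bw)
                if image[y][x] != BG
            )
            if count > best_count:
                best, best_count = (by, bx), count
    # center point of the winning block
    by, bx = best
    cy, cx = by * bh + bh // 2, bx * bw + bw // 2
    # search outward from the center, collecting connected non-BG pixels
    seen, stack, region = set(), [(cy, cx)], []
    while stack:
        y, x = stack.pop()
        if (y, x) in seen or not (0 <= y < h and 0 <= x < w):
            continue
        seen.add((y, x))
        if image[y][x] == BG:
            continue
        region.append((y, x))
        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return region
```

A real implementation would operate on the image in which all non-code portions have already been set to the background color.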
  • a code-setting portion 17 establishes a relationship (i.e., FIGS. 3A through 3C) between a character, number, or symbol used for representing data and a corresponding color or shade.
  • A code-converting portion 15 extracts a corresponding character, number, or symbol from the color or shade of each cell of the code image according to the relationship provided by the code-setting portion 17, thereby generating code data.
  • The environmental variable may be set to R, G, or B in the RGB mode, H, S, or V in the HSV mode, or a combination of R, G, and B and H, S, and V, so as to normalize the color value or shade value recognized in the raw image in consideration of the environment in which the raw image is read.
  • Values for a color environmental variable are added to or subtracted from the color or shade values of the pixels of the raw image.
  • The color or shade value may be a value for red, green, and blue in the RGB color mode; hue, saturation, and value (brightness) in the HSV color mode; cyan, magenta, yellow, and black in the CMYK color mode; or hue, intensity, and saturation in the HSI color mode.
  • the color or shade represented in the raw image is adjusted to correct for the environment where the raw image was read, and thus the original color/shade can be obtained.
  • An initial environmental variable is set on the presumption that a fluorescent light or a three-wavelength lamp is used for illumination. Alternatively, a white sheet of paper is used as a reference background before the raw image is input into a camera, and the environmental variable is set according to the ambient illumination. For example, the red light coming from a halogen lamp is relatively strong, and thus the environmental variable is set to remove the effect of the red light emitted from the halogen lamp. Then, if the actual color detected is normalized using the environmental variable, the effect of illumination can be reduced, and colors close to the original color can be obtained.
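As a rough sketch of how an environmental variable might normalize a recognized color, the following subtracts an assumed per-channel illumination bias; the function name and the halogen bias value are hypothetical:

```python
def correct_color(rgb, env):
    """Subtract the illumination bias estimated for the environment,
    clamping each channel to the valid 0-255 range."""
    return tuple(min(255, max(0, c - e)) for c, e in zip(rgb, env))

# Under a halogen lamp the red channel reads high, so the environmental
# variable removes part of the red component (bias value is assumed):
halogen_env = (60, 0, 0)
print(correct_color((255, 40, 30), halogen_env))  # → (195, 40, 30)
```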
  • Example 1 In the RGB color mode, the maximum value for each component of RGB is 255, and the minimum value is 0. Thus, (255, 0, 0) represents red, (0, 255, 0) represents green, (0, 0, 255) represents blue, (0, 0, 0) represents black, and (255, 255, 255) represents white. In a case where x is the value of R, G, or B and y is the value of the environmental variable for each of R, G, and B, each component of the final RGB value for each pixel is determined as below.
  • f(x) = 255, if x + y ≥ 128 (where 0 ≤ x, y ≤ 255); 0, otherwise
  • That is, if x + y ≥ 128, the component is converted to 255; otherwise, the component is converted to 0. As a result, a pixel whose final RGB value is (255, 0, 255) is discriminated as magenta.
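The per-component rule above can be written directly in code; this is a minimal sketch in which the environmental variable y defaults to zero (no correction):

```python
def standardize_component(x, y=0):
    """f(x) = 255 if x + y >= 128, else 0."""
    return 255 if x + y >= 128 else 0

def standardize_rgb(rgb, env=(0, 0, 0)):
    """Snap each RGB component to its standard value."""
    return tuple(standardize_component(x, y) for x, y in zip(rgb, env))

# A pixel near magenta snaps to the standard magenta (255, 0, 255):
print(standardize_rgb((240, 50, 180)))  # → (255, 0, 255)
```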
  • Y denotes brightness, and I and Q denote the degree of redness and blueness, respectively.
  • The CMYK color mode is a color mode usually used for printing, and each color component is indicated as a percentage or ratio in the CMYK color mode.
  • In relation to the RGB value, the C, M, and Y components are expressed as (1 - R/255, 1 - G/255, 1 - B/255).
  • x is a value for C, M, Y or K
  • y is a value of the environmental variable corresponding to each color component
  • the final CMYK value for each pixel is determined as below.
  • Example 4 In the case of the HSV and HSI color modes, the value for hue is expressed as an angle.
  • In a case where x is the value for hue and the color environmental values are set so that 0 < Trg < Tgb < Tbr < 360, a method for discriminating colors may be adopted as below.
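Since the discrimination formula itself is not reproduced above, the following is only a sketch of how hue-angle thresholds of this kind could be applied; the specific threshold values for Trg, Tgb, and Tbr are assumptions:

```python
# Hypothetical threshold angles, with 0 < T_RG < T_GB < T_BR < 360.
T_RG, T_GB, T_BR = 120, 240, 330

def discriminate_hue(h):
    """Map a hue angle in [0, 360) to one of three standard colors."""
    if h < T_RG or h >= T_BR:
        return "red"    # hues near 0/360 degrees
    if h < T_GB:
        return "green"  # hues between T_RG and T_GB
    return "blue"       # hues between T_GB and T_BR
```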
  • Example 5 For the Commission Internationale de l'Eclairage (CIE) color mode, the value of a color is represented on an x-y graph. In this case, x and y are used as references for discriminating colors, and thus a method for discriminating colors may be adopted as below.
  • f(x, y) = red, if x ≥ 0.4; green, if x < 0.25 and y ≥ 0.4; blue, if x < 0.25 and y < 0.25; gray, otherwise
  • Data describing the relationship between a given environment and the environmental variable to be set for that environment are stored in a database, and preset data corresponding to the actual working environment are read, so that the environmental variable may be used.
  • The environmental variable is determined experimentally, by analyzing the optical characteristics of the input optical device and the ambient illumination, such that colors read by the input optical device are corrected into the original colors, or the code images are discriminated from the background; thereby the colors may be recognized without errors, unaffected by the device or environment.
  • two or more environmental variable groups having different objectives may be formed.
  • one of the environmental variable groups may be an environmental variable for separating the code images from the background, and the other may be an environmental variable for discriminating the color or shade of the code images separated from the background.
  • For example, when colors are discriminated in the RGB mode in an environment illuminated with red light, the value for R is relatively high.
  • In this case, the value for R read by the optical device is reduced by a predetermined weight value, and the effect of the environment is excluded.
  • In an environment of bright illumination, the code image may be discriminated as black and white in the HSV mode.
  • In this case, the weight value of the V component is increased, and thereby colors are discriminated.
  • The weight value of the V component for discriminating black and white in the HSV mode from colors other than black or white is reduced, and the weight value of the S component is also reduced, and thereby colors are discriminated.
  • the distribution of the values for R, G, B, H, S, and/or V, which are obtained from each cell in the code image by the optical device, is determined, and the environmental variable and a weight value thereof may be reset with reference to the distribution.
  • FIGS. 2A through 2D illustrate various examples of code images, which can be recognized by the apparatus for recognizing a code according to the present invention.
  • Various cell shapes, for example, quadrangular, circular, elliptical, cross-shaped, or honeycomb-shaped, may be realized, and a combination of shapes is also possible when a code image is formed.
  • the shape and size of a code image or cell may be properly selected according to the content or amount of data to be represented in the code image.
  • Various shapes of the code image comprised of a plurality of cells such as quadrangular, circular, elliptical, cross-shaped, or honeycomb shaped, may be realized, and a code image having a similar shape to a barcode shown in FIG. 2D is also included.
  • FIG. 2E illustrates the regions of a code image based on roles of data represented in the code image.
  • The code image includes a data region 291, which is formed of at least one data cell in which colors, shades, shapes, patterns, or a combination thereof are differently encoded according to the content of data.
  • the data region 291 may be formed of one or more data cells in which characters are encoded as images, and each of the data cells may represent a character, or a set of a plurality of data cells may represent one or more characters.
  • the character "A" may be represented as a red cell, or may be represented as two cells such as one red cell and one green cell.
  • Code data contained in the data region 291 are comprised of characters, numbers, and symbols, and may comprise names, addresses, telephone numbers, fax numbers, host addresses of networks, domain names and IP addresses used on the Internet, uniform resource locators (URLs), protocols, or document names, depending on the user's demands.
  • the parity region 293 is comprised of parity cells for checking recognition errors of the cells represented in the data region 291.
  • the reference region 295 is comprised of at least one reference cell, which provides reference colors, reference shade, reference shapes, reference patterns, or a combination thereof for determining colors, shade, shapes, patterns or a combination thereof of the data cell formed in the data region 291.
  • the control region 297 is comprised of at least one control cell, in which control data for commands or services to be provided using data represented in the data region 291 are encoded.
  • All regions excluding the data region 291, i.e., the parity region 293, the reference region 295, and the control region 297, are referred to as "auxiliary regions", and cells included in the auxiliary regions are marked as auxiliary cells.
  • the parity region 293 is used to determine whether colors or shades (or possibly, shapes and/or patterns) are expressed to be suitable for data cells according to the content of code data.
  • parity data are obtained according to code values designated corresponding to the color or shade represented in each of the data cells, and parity cells are formed of colors or shade corresponding to the parity data.
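The exact parity rule is not specified above; as an illustration only, the following sketch treats each data cell as carrying a 3-bit code value and uses XOR parity, which is an assumption rather than the patented scheme:

```python
def parity_value(data_values):
    """XOR together the code values of all data cells (assumed rule)."""
    p = 0
    for v in data_values:
        p ^= v
    return p

def check_parity(data_values, parity_cell):
    """True if the parity cell's code value matches the data cells."""
    return parity_value(data_values) == parity_cell
```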
  • the reference region 295 is used to set reference colors (or reference shades, reference shapes, or reference patterns as occasion demands) for recognizing colors (or shades, shapes, or patterns) represented in the cells of the data region 291 and/or the auxiliary regions.
  • the colors of the cells in each region may be represented in a color mode such as a red green blue (RGB) color mode, a hue saturation value (HSV) color mode, a cyan magenta yellow black (CMYK) color mode, a hue saturation intensity (HSI) color mode, a CIE color mode, a YIQ or YUV color mode.
  • a boundary region for discriminating regions may be further implemented between regions included in the code images.
  • a boundary region for discriminating cells may be further included between cells in each region.
  • the boundary region may be comprised of a line or a cell formed of a specific color or pattern, and a boundary line or boundary cell may be formed as black or white.
  • Color can be printed differently depending on the kind of printer or the material used as the printing paper, and the same color may be recognized slightly differently according to the characteristics of a scanner or camera.
  • the reference cell included in the reference region 295 provides a reference for discriminating colors represented in the data region 291. That is, even if color is differently output depending on the output device used, or color is differently input according to the input device used such as a scanner, a color difference between the color of the reference region 295 and the color of the data region 291 is constant, and thus colors of the cells in the data region 291 can be precisely recognized.
  • colors of the cells of the data region 291 have a relative color difference compared with reference colors of the reference region 295, and thus the colors of the cells of the data region 291 are obtained by being compared with the reference colors of the reference region 295 on the basis of the RGB mode or HSV mode, and thereby data of the data cells can be precisely recognized even if an image input device or output device is changed.
  • the shapes or patterns may be inclined or warped.
  • the reference shapes or reference patterns are provided in the reference region 295, and thus a wrong input state can be sensed, and the shapes or patterns of the data cells can be precisely recognized.
  • Various services can be provided to a user using code data of the data region 291 depending on the type of application.
  • For example, in a case where a homepage address (that is, a URL) on the Internet is expressed on a name card as a code image, the code image is decoded by a computer, and the web browser of the computer, or a server connected to the computer, is executed and can thereby be programmed to contact the homepage.
  • an electronic mail address is expressed as a code image
  • the code image is decoded by a computer, and the mailing software of the computer is executed, thereby providing an environment where electronic mails can be sent to the electronic mail address.
  • a user can call a telephone number corresponding to the code image or receive services of data related to geography.
  • The automatic service function can be automatically implemented by a separate program or in accordance with the kind of objective data in a decoding program.
  • The control region 297, in which a command for executing the automatic service function is expressed as an image, is included in the code image, thereby enabling the decoding program to automatically implement services by using control data decoded from the control region 297.
  • commands or meta-data for controlling objective data of the data region 291 can be included in the control region 297.
  • data encoded in the control region 297 may include various meta-data such as the decoding order of the cells formed in the data region 291 , the position of the reference cell of the reference region 295, and the position or property of the parity region 293.
  • FIG. 3A illustrates an example in which two bits of data are expressed as four colors. If each cell has one of four colors, two bits of data can be expressed. Then, in a case where it is defined that one character is expressed as four consecutive cells, 8 bits, that is, 256 kinds of characters can be expressed. Meanwhile, in a case where there are four kinds of shapes for a cell of the same color (i.e., small quadrangle, large quadrangle, small circle, and large circle), two further bits of data can be expressed per cell, and 256 kinds (8 bits) of data can be expressed with two cells when each cell may also be filled with four different colors.
  • FIG. 3B illustrates an example of a code conversion table for converting various characters (alphabets or special characters), numbers, or shapes into images; in the example, one character is mapped to one or two color cells.
  • various characters are converted into code values, and then code images are generated as colors allocated to each of the code values.
  • the code images are generated using eight colors, and two consecutive cells are used so as to express one character or number.
  • Code values from "000" to "111" are allocated to the eight colors, and each character is encoded as two colors.
  • For example, the number "3" is allocated the code value "000011", is encoded as the color (black) allocated to the code value "000" and the color (cyan) allocated to the code value "011", and therefore is imaged as two consecutive cells: one black cell and one cyan cell.
  • Various characters or numbers included in code data are converted into code values according to the code conversion table shown in FIG. 3B, and then colors corresponding to the code values can be expressed as a quadrangular matrix shape comprised of a combination of quadrangular cells.
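The two-cell, eight-color scheme of FIG. 3B can be sketched as follows. Only the assignments "000" = black and "011" = cyan are taken from the text; the rest of the color order is an assumption:

```python
# Assumed color order for code values 000 through 111; only "000" =
# black and "011" = cyan are fixed by the example in the text.
COLORS = ["black", "red", "green", "cyan",
          "blue", "magenta", "yellow", "white"]

def encode_char(code_value):
    """Split a 6-bit code value into two 3-bit halves, one per cell."""
    hi, lo = (code_value >> 3) & 0b111, code_value & 0b111
    return [COLORS[hi], COLORS[lo]]

# The number "3" has the code value "000011", so it is imaged as one
# black cell followed by one cyan cell:
print(encode_char(0b000011))  # → ['black', 'cyan']
```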
  • FIG. 3C illustrates an embodiment in which a code image is generated using a greyscale code.
  • the greyscale code is formed according to the brightness of a greyscale tone instead of the mixing rate of red (R), green (G), and blue (B).
  • A reference shade is set to black, white, or grey
  • the cells in the data region 291 have values which are coded by a grey difference compared with the reference shade of the reference region 295.
  • the shade of each cell of the code image is calculated, and cells (cell set) having similar shades are collected.
  • the same code value is allocated to the cells belonging to the same cell set, and then errors in decoding can be determined using the parity region 293.
  • the greyscale code image may be applied to media which are printed mainly with black and white, such as newspapers.
  • FIG. 4 illustrates an example in which a code image is incorporated into a name card, using the above code system.
  • A user can generate an image file that includes the quadrangular code image represented at the lower right corner of the name card, using a camera or scanner, and can process the image file to recognize the code data represented by the code image.
  • FIG. 5 is a block diagram of the apparatus according to a second embodiment of the present invention
  • FIG. 6 is a flow chart illustrating the operation of the apparatus shown in FIG. 5.
  • An image-acquiring portion 51 acquires a raw image in which a code image is contained.
  • An image file acquired by an image input device, such as a camera or scanner, or an image file existing in the format of an electronic file is received, and the data format of the image file is converted if necessary.
  • For example, a compressed video image file is uncompressed and converted into an image file in bitmap form.
  • the image file is stored in a memory 52 as a raw image.
  • Along with the code image that the user wants to detect, an ambient noise image is also included in the raw image.
  • A control portion 58 receives the type of the code image, sets an environmental variable in consideration of the environment at the time when the raw image is acquired, or reads an environmental variable already stored, and transmits the environmental variable to a color filter 53.
  • the memory 52 temporarily stores image data required for image processing.
  • The color filter 53 corrects colors or shades recognized from the raw image using the environmental variable, converts the corrected colors or shades into standard colors or standard shades, which are used to generate the code image, and thereby generates a standard image represented by the standard colors or standard shades.
  • Standard colors or standard shades mean the colors or shades that are set to correspond to characters, numbers, or symbols when the code image is generated (see FIGS. 3A through 3C).
  • The code image displayed in physical media may not be represented in the originally set colors, or the original colors may not be recognized, due to the characteristics of an image device or the effect of the ambient environment when the code image is printed by an image output device.
  • RGB value of magenta is (255, 0, 255)
  • the RGB value recognized from the cell to be represented as magenta is not exactly (255, 0, 255) but will be some value near to (255, 0, 255).
  • code data can be extracted by applying the code conversion table only if the colors or shades actually recognized are converted into standard colors or standard shades.
  • a binary-coded filter 54 divides colors or shades of the raw image into two colors according to a predetermined reference value (black-and-white environmental variable) and generates a binary-coded image.
  • the binary-coded image may be implemented as black and white or two specific colors.
  • a reference value may be set to the averaged value of values for R, G, and B or a minimum value or maximum value among them, a value for V of the HSV color mode, a value for I of the HSI color mode, or a value for K of the CMYK color mode.
  • a pre-processing portion 55 receives the raw image and the binary-coded image, sets a color of a portion excluding the code image portion in the binary-coded image as a specific background color, sets a color of a portion of the standard image, which corresponds to the portion set as the background color in the binary-coded image, as the background color, thereby discriminating a code image portion from other portions. For example, a color of pixels positioned at an edge of the binary-coded image is set as the background color, and then a color of pixels connected to the pixels set as the background color is also set as the background color, thereby discriminating the code image portion from the other portions.
  • the background color is set to one of colors, which are not used to generate the code image.
  • a feature point-extracting portion 56 extracts a plurality of cells included in the code image portion and then recognizes the standard colors or standard shade represented in each cell.
  • a decoder 57 extracts a corresponding character, number, or symbol from the color or shade recognized in each cell of the code images according to a relation between characters, numbers, or symbols and the corresponding color or shade, and generates code data.
  • the color or shade that is most widely distributed among the colors or shades of the pixels belonging to each cell is recognized as the color of the corresponding cell.
  • step 64 the binary-coded image is generated by dividing the colors or shades of the raw image into two colors according to a predetermined reference value.
  • step 65 a color of the portion excluding the code image portion is set as the specific background color on the basis of the binary-coded image.
  • step 66 the portion of the standard image corresponding to the portion set as the background color in the binary-coded image is processed as the background color, and thus the code image portion can be discriminated from the other portions.
  • step 67 the plurality of cells included in the code image portion are extracted, and then the standard color or standard shade represented in each cell is recognized.
  • step 68 a corresponding character, number, or symbol is extracted from the color or shade recognized in each cell of the code image according to the relationship between the character, number, or symbol and the corresponding color or shade, thereby generating code data.
  • the binary-coded filter 54 converts the raw image into a black-and-white image according to the black-and-white environmental variable.
  • the black-and-white image is used to facilitate discrimination of an object included in the raw image and to improve the working speed.
  • the brightness value of each pixel of the raw image is compared with the environmental variable value and output as a pixel that is black or white, and thus the black-and-white image is generated.
  • the black-and-white environmental variable means a parameter or a set of parameters for converting colors represented in the raw image into black or white, such as the averaged value obtained by dividing the sum of the values for red, green, and blue by 3, or the value (brightness) of the HSV color mode.
  • in a case where the total brightness of the raw image is relatively low, the value of the black-and-white environmental variable is set low; in a case where the total brightness is relatively high, the value is set high.
  • black or white pixels may be generated by applying one reference value to all pixels, or by dividing the entire image into several portions, setting a reference value with respect to each portion, and applying that reference value to the pixels belonging to it.
  • the images are divided into blocks having predetermined sizes, the brightness value of pixels belonging to each of the blocks is averaged, and the averaged value is set as the black-and-white environmental variable.
  • the brightness value of each pixel is compared with the black-and-white variable value, and thus pixels belonging to the blocks can be binary-coded as black or white.
  • the method is most effective in a case where parts of the image are locally dark or bright.
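A minimal sketch of the block-local thresholding just described, assuming a greyscale image stored as a nested list and the per-block mean brightness as the local black-and-white environmental variable (the names are illustrative):

```python
def local_binarize(gray, block=4):
    """Binarize with a per-block threshold: each block's threshold is the
    mean brightness of its own pixels (a local black-and-white variable)."""
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ys = range(by, min(by + block, h))
            xs = range(bx, min(bx + block, w))
            vals = [gray[y][x] for y in ys for x in xs]
            t = sum(vals) / len(vals)       # local threshold for this block
            for y in ys:
                for x in xs:
                    out[y][x] = 1 if gray[y][x] >= t else 0
    return out
```

A pixel of brightness 60 can become white inside a dark block even though it would be black under a global threshold of 128, which is exactly what helps with locally dark or bright images.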
  • a multiple threshold value method may be adopted when a color image is converted into a black-and-white image.
  • a histogram of the brightness value of the pixels belonging to the image is obtained.
  • the brightness frequency can thus be obtained.
  • the histogram can be divided into brightness values having a high frequency and brightness values having a low frequency.
  • the brightness values of the pixels tend to converge toward specific brightness values.
  • brightness values having a lower frequency which are positioned among brightness values having a higher frequency, are set as a plurality of black-and-white environmental variables, and then the plurality of black-and-white environmental variables are applied to the color image in order.
  • the most appropriate variable is selected among the plurality of black-and-white environmental variables, so a proper black-and-white image can be generated.
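The multiple-threshold idea above — taking low-frequency brightness values that lie between high-frequency ones as candidate black-and-white environmental variables — might be sketched like this; the histogram binning and the simple valley test are simplifying assumptions for illustration:

```python
def valley_thresholds(gray, bins=16):
    """Pick candidate thresholds at low-frequency brightness values lying
    between high-frequency ones (valleys of the brightness histogram)."""
    hist = [0] * bins
    for row in gray:
        for v in row:
            hist[min(v * bins // 256, bins - 1)] += 1
    # a bin is a valley if it is strictly lower than both of its neighbours
    valleys = [i for i in range(1, bins - 1)
               if hist[i] < hist[i - 1] and hist[i] < hist[i + 1]]
    # map each valley bin back to a brightness value at the bin centre
    return [int((i + 0.5) * 256 / bins) for i in valleys]
```

Each returned value could then be tried in turn as the environmental variable, keeping whichever produces the most usable black-and-white image.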
  • FIG. 7 illustrates the result in which a black-and-white image is obtained from a raw image.
  • FIG. 7A illustrates the raw image. The raw image is actually a color image but is represented as a greyscale image due to the limitations of the drawings.
  • FIG. 7B illustrates the example in which the values for R, G, and B of pixels are averaged, and the averaged value is compared with the reference value of 128, to obtain the black-and-white image.
  • the reference value is set to 90.
  • FIG. 7D illustrates an example that adopts the local threshold value method.
  • FIG. 7E illustrates the example in which the minimum value among the values for R, G, and B of pixels is compared with the reference value of 128 to obtain a black-and-white image; for FIG. 7F, the reference value is set to 90.
  • the pre-processing portion 55 receives the color image and the black-and-white image, compares the color image with the black-and-white image, and thus removes an unnecessary noise image from the color image and the black-and-white image.
  • the step of removing the noise image on the basis of the black-and-white image is performed by steps of removing a background image and a small noise image and will be described in detail with reference to FIGS. 8A through 8F.
  • FIGS. 8A through 8F illustrate the step of removing the background image.
  • the noise image positioned at the edges of the black-and-white image, that is, the pixels having a black color among the pixels positioned at the edges of the black-and-white image, has its color set to a specific shade or color that is not represented in the cells of the code image, referred to as the "background color" (see FIG. 8B).
  • the entire region is checked, and pixels connected to the noise image also have their color set as the background color.
  • black pixels connected to the pixels represented by the background color are detected in order, and the color of those pixels is set as the background color (see FIGS. 8C through 8E). If there are no further black pixels connected to the pixels having the background color, the portion set as the background color is determined to be the region of the noise image, and the portion comprised of the black pixels among the portion excluding the region of the noise image is determined to be the region of the code image.
  • it is efficient to search for pixels belonging to the noise image simultaneously in all directions, such as from left to right, right to left, top to bottom, and bottom to top.
  • the connection state of each pixel is checked in the image from which the background noise image has been removed; if the length or area over which the pixels are connected is less than a reference value, the corresponding image is determined to be a small noise image and is removed. For example, in a case where the number of black pixels in a portion having a predetermined area is less than the reference value, the image corresponding to that portion may be determined to be a noise image. In this way, the code image portion is formed when the portion determined to be a noise image and the white pixel portion are excluded from the black-and-white image. The result of removing the noise image from the black-and-white image is applied to the color image, and thus the region of the code image is extracted from the color image.
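The background-removal procedure of FIGS. 8A through 8F amounts to a flood fill from the image edges. Below is a sketch under the assumption that 1 marks black pixels, 0 white pixels, and 2 the background color — an encoding chosen for illustration only:

```python
from collections import deque

def remove_background(bw):
    """Flood-fill black pixels (1) connected to the image border, marking
    them with the background value 2; pixels left as 1 are code candidates."""
    h, w = len(bw), len(bw[0])
    q = deque((y, x) for y in range(h) for x in range(w)
              if bw[y][x] == 1 and (y in (0, h - 1) or x in (0, w - 1)))
    for y, x in q:                 # mark the edge seeds first
        bw[y][x] = 2
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and bw[ny][nx] == 1:
                bw[ny][nx] = 2     # background color: never used by the code
                q.append((ny, nx))
    return bw
```

Black regions not connected to the border survive as candidate code regions; a further pass could drop those whose pixel count falls below the small-noise reference value.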
  • the feature point-extracting portion 56 receives the color image and/or the black-and-white image from which the noise image has been removed, searches for the feature points that define the image (code image) of a specific object and its region, searches for data related to the type and position of the code image from the feature points of the object, and determines errors.
  • the feature point-extracting portion 56 is implemented by performing steps of dividing a standard image into blocks, searching the region of the code image, extracting the feature point from the searched region of the code image and determining the type of a code.
  • FIG. 9 is a flow chart illustrating the step of recognizing standard colors represented in each cell of the code image.
  • the standard image in which the background color is represented or from which the noise image is removed
  • the standard image is divided into blocks so that the region of the code image can be searched in step 92.
  • FIG. 10A illustrates how the entire image is divided into blocks having predetermined sizes. A portion represented as a shaded quadrangle denotes a region with the code image.
  • the position of the code image may be estimated.
  • the significant image means the code image portion, excluding the noise image portion whose color is set to the background color.
  • step 93 the center point of the block where the region of the code image belonging to each block is the largest, is searched, and the region of the code image is searched on the basis of the center point.
  • step 94 the block with the greatest number of pixels belonging to the region of the code image, is detected, and the center point of the block (or the center point of the region of the code image belonging to the block) is detected.
  • the region of the entire image is divided into a plurality of blocks, and then the region of the code image is searched, and thereby the arithmetic operations required for processing can be reduced.
  • when the number of pixels belonging to the region of the code image in each block is calculated in FIG. 10A, the number is greatest in block 6 and decreases in the order of blocks 2, 3, and 5.
  • the number of arithmetic operations performed can be greatly reduced.
  • in a case where the code image is comprised of a plurality of images separated from one another in space, there are a plurality of code image regions.
  • the image region having the largest size among the plurality of code images, which is assumed to be the region of the code image, is first searched, and then other image regions are checked in the order of size.
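The block-wise search described above might be sketched as follows, assuming a binary mask in which 1 marks code pixels; the function and parameter names are illustrative:

```python
def densest_block(mask, blocks=3):
    """Divide the image into blocks x blocks tiles, count code pixels (1s)
    in each, and return the centre of the tile holding the most of them."""
    h, w = len(mask), len(mask[0])
    best, centre = -1, (0, 0)
    for by in range(blocks):
        for bx in range(blocks):
            y0, y1 = by * h // blocks, (by + 1) * h // blocks
            x0, x1 = bx * w // blocks, (bx + 1) * w // blocks
            n = sum(mask[y][x] for y in range(y0, y1) for x in range(x0, x1))
            if n > best:
                best, centre = n, ((y0 + y1) // 2, (x0 + x1) // 2)
    return centre
```

Only the winning tile's centre is then used as the starting point for the full region search, which is what keeps the arithmetic cost low.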
  • step 95 the entire standard image is searched from the center point, and a candidate region to be the region of the code image is detected.
  • step 96 the feature point is detected from the candidate region, and thereby the shape and type of the code image are determined.
  • the region of a figure formed by the circumscription points of the candidate region is searched for, which may result in a quadrangular region or a circular region, and the image comprised of the detected region becomes the code image.
  • the cell regions in the code image are discriminated from one another in step 97, and the standard colors represented in the corresponding cell are recognized on the basis of the colors of the pixels belonging to each cell region in step 98.
  • FIGS. 11A through 11C illustrate the step of detecting the candidate region; the regions 104 marked by slanted lines denote the code image region that is actually to be obtained, and candidate regions 103 including the regions 104 are detected.
  • the purpose of this step is to simplify processing by selecting only the necessary partial image region from the entire image and performing subsequent arithmetic operations on that partial image.
  • using extreme points, i.e., points having the minimum and maximum values on the x and y coordinates, an image region estimated to be the code image is obtained, and the figure comprised of the extreme points is determined as the candidate region 103.
  • a method for searching a candidate region includes a reduction search method and an extension search method.
  • in the extension search method, a region corresponding to the code image is searched for while extending outward from the center point. That is, the code image region is searched for by extending toward the portions determined to be the code image region rather than the background region.
  • in the reduction search method, the code image region is searched for while reducing toward the center point from the outside.
  • the candidate region is expressed as a left upper coordinate and a right lower coordinate.
  • the candidate region is expressed as the coordinate of the center point and the length of the radius.
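Detecting the candidate region from extreme points reduces to taking the minimum and maximum x and y over the code pixels; a sketch, where the (x, y) coordinate convention is an assumption:

```python
def candidate_region(mask):
    """Extreme points: minimum and maximum x and y over code pixels give
    the upper-left and lower-right corners of the candidate region."""
    ys = [y for y, row in enumerate(mask) for x, v in enumerate(row) if v]
    xs = [x for row in mask for x, v in enumerate(row) if v]
    return (min(xs), min(ys)), (max(xs), max(ys))
```

The returned pair matches the "left upper coordinate and right lower coordinate" expression of a quadrangular candidate region.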
  • a method for determining a feature point includes a diagonal search method or a boundary detection method.
  • the feature point is searched using a segment having a temporary slope in the candidate region. For example, in a case where the code image is formed of quadrangles, in order to search vertices of the quadrangular code image in the candidate region, a diagonal having an angle of 45 degree is drawn at vertices of the candidate region, and thus a quadrangular region formed of points externally contacting the diagonal is searched. As shown in FIG.
  • the diagonal with respect to each vertex of the candidate region has a predetermined direction such as counterclockwise or clockwise.
  • FIGS. 12B and 12C illustrate the diagonal search method in greater detail.
  • FIG. 12B: in a case where the feature point is detected from the diagonal in direction ①, if a plurality of pixels contact the diagonal as shown, the pixel detected last (pixel ① in the drawing) is determined as the feature point.
  • the step is performed in directions ②, ③, and ④, and thereby the desired feature points of the code image region are detected.
  • all of the desired feature points may not be detected. For example, in the case of a quadrangular code image, four feature points should be extracted, but only three feature points may be extractable.
  • FIG. 12C: in a case where the number of feature points is not sufficient, the step of searching in vertical and horizontal directions is further performed.
  • the pixels of the code image region nearest to the boundary surface of the candidate region are detected in directions ⑤, ⑥, ⑦, and ⑧.
  • the feature points determined through the diagonal search method may be different from the feature points determined through search in vertical/horizontal directions. In such a case, either the averaged value of two coordinates or one of the two coordinates is selected to determine the feature point.
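A simplified reading of the diagonal search, offered only for illustration: a 45-degree line swept in from a corner of the candidate region first touches the code pixel that is extreme along that diagonal, so the four vertices of a quadrangular code image can be found as the extremes of x+y and x−y:

```python
def quad_vertices(points):
    """Diagonal search, simplified: the pixel extreme along each 45-degree
    diagonal (min/max of x+y and of x-y) is taken as a vertex."""
    tl = min(points, key=lambda p: p[0] + p[1])   # swept from top-left
    br = max(points, key=lambda p: p[0] + p[1])   # swept from bottom-right
    tr = max(points, key=lambda p: p[0] - p[1])   # swept from top-right
    bl = min(points, key=lambda p: p[0] - p[1])   # swept from bottom-left
    return tl, tr, br, bl
```

When a vertex is missing (e.g. a clipped corner), the vertical/horizontal sweep described above would supply the remaining point.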
  • the external boundary of the code image region included in the candidate region is tracked, and thus the feature points, such as vertices of the code image, are searched.
  • coordinates composing a boundary are chosen as candidate coordinates.
  • a method for removing the coordinates from the candidate coordinates is used.
  • a distance-based method, besides the slope-based method, may be adopted.
  • the coordinates are removed from the candidate coordinates. That is, in a case where the distance √((x_i − x_{i−1})² + (y_i − y_{i−1})²) between two adjacent candidate coordinates is less than a reference value, one of the two coordinates is removed from the feature point candidate coordinates.
  • the type of the code image may be discriminated according to the ratio of width to length. That is, in a case where the width and length are similar, the code image may be determined to be a square and may be recognized as a 5 x 5 matrix code image. On the other hand, in a case where the difference between the width and length is more than a predetermined value, the code image may be determined to be an 8 x 5 two-dimensional code image.
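The width-to-length discrimination could be sketched as below; the tolerance value and the return labels are assumptions for the sketch:

```python
def code_type(width, height, tolerance=0.2):
    """Guess the code layout from the width-to-height ratio: near-square
    images are treated as a 5 x 5 matrix code, elongated ones as 8 x 5."""
    ratio = width / height
    if abs(ratio - 1.0) <= tolerance:
        return "5x5 matrix code"
    return "8x5 two-dimensional code"
```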
  • FIGS. 14A through 14C illustrate the step of dividing the cells belonging to the code image region and finding the center points of the cells.
  • the code image in the present invention is determined to be a 4 x 4 quadrangular image.
  • the width and the length of the code image are each divided by 4, and the coordinates of the center points of the cells divided as shown in FIG. 14B are obtained.
  • FIG. 14C illustrates an algorithm for searching for the center points of cells.
  • the positions X_ci and Y_ci used for searching for the center of the i-th cell contacting the side are expressed by Equation 6.
  • the coordinates of the points contacting the sides of the code are obtained by Equation 6; when each such point is connected to the point of the same order on the opposite side, the intersection point of the two segments is generated, which is determined as the center point of each cell.
  • Equation 6 is ideal only for a case where the close-up photographing angle between the code image and the camera is 90 degrees.
  • in a case where the close-up angle is small (i.e., in a case where the camera is held at a low angle), distortion occurs in the code image, and thus errors may be generated.
  • the code image of the original quadrangle is input in the form of a trapezoid.
  • an extra arithmetic operation may be required, but in most cases the above Equation is sufficient.
  • the center positions of the cells may be searched for using the above Equation or an auxiliary Equation.
  • the boundary line, or the boundary region between cells, which is inserted when the code image is generated, is detected in consideration of the distribution of colors of pixels in the code image region, and thus cells may be discriminated on this basis.
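Since Equation 6 itself is not reproduced in this excerpt, the following is only a hypothetical sketch of the idea described above: mark points at fractions (2i−1)/(2n) along opposite sides of the quadrangle, join same-order points, and take the segment intersections as the cell center points (valid, as noted, mainly near a 90-degree close-up angle):

```python
def cell_centers(tl, tr, br, bl, n=4):
    """Cell centres of an n x n code: intersect segments joining same-order
    points on opposite sides of the quadrangle (tl, tr, br, bl vertices)."""
    def lerp(a, b, t):
        # point at fraction t along the segment a -> b
        return (a[0] + (b[0] - a[0]) * t, a[1] + (b[1] - a[1]) * t)

    def intersect(p1, p2, p3, p4):
        # intersection of line p1-p2 with line p3-p4
        d1 = (p2[0] - p1[0], p2[1] - p1[1])
        d2 = (p4[0] - p3[0], p4[1] - p3[1])
        den = d1[0] * d2[1] - d1[1] * d2[0]
        t = ((p3[0] - p1[0]) * d2[1] - (p3[1] - p1[1]) * d2[0]) / den
        return (p1[0] + d1[0] * t, p1[1] + d1[1] * t)

    centres = []
    for i in range(n):
        ti = (2 * i + 1) / (2 * n)
        top, bottom = lerp(tl, tr, ti), lerp(bl, br, ti)
        row = []
        for j in range(n):
            tj = (2 * j + 1) / (2 * n)
            left, right = lerp(tl, bl, tj), lerp(tr, br, tj)
            row.append(intersect(top, bottom, left, right))
        centres.append(row)
    return centres
```

For a perspective-distorted (trapezoidal) input, the same intersection construction still yields usable centres, which is why the patent notes that the simple Equation is "enough" in most cases.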
  • the decoder 57 decodes the code image by using data searched from the feature point-extracting portion 56 and reproduces code data.
  • the steps of determining the color/shade of each cell and of checking parity data are performed using the color environmental variable and the analysis information (color mode and the type of code image) input by the controller 58.
  • the value of the color or shade detected for each cell is replaced with a corresponding character, number, or symbol.
  • whether there is an abnormality can be determined through a parity operation, and the corresponding character, number, or symbol is output in a case where there is no abnormality.
  • the above steps are performed on all of the cells, and desired code data are generated by connecting the character, number, or symbol obtained with respect to each cell.
  • the position of each cell is checked by the center coordinate of each cell input from the feature point-extracting portion 56, and a predetermined number of pixels are extracted on this basis, and thus colors can be determined.
  • Colors can be determined using the RGB mode after the averaged value of sampled pixels is obtained, or colors can be determined to represent a corresponding cell after the angle of colors is obtained using the HSV mode.
  • the value for each of the RGB components of the sampled pixels in the color image is 0 or 255 after undergoing the step of converting colors by the color environmental variable, and thus the colors are in a standard color state.
  • the color having the highest frequency among the sampled pixels is determined as the color of the corresponding cell.
  • alternatively, the color environmental variable is applied to the sampled pixels, the sampled pixels are converted into the standard colors, and the color having the highest frequency is determined as the color of the corresponding cell.
  • the values of RGB of the sampled pixels are determined after being converted through HSV conversion.
  • the averaged value of each cell is obtained; when the averaged values are sorted by size, the distribution of the sorted averaged values converges into three places of relatively high frequency, with intervals of relatively low frequency between them.
  • the center point of the longest interval and the center point of the second-longest interval are obtained; then, if the values corresponding to the two center points are compared with the averaged value of each cell, it can be determined to which level (black, grey, or white) each cell belongs.
  • the averaged value of the values for R, G, and B of the pixels sampled from a cell is obtained and may be used as the brightness value.
  • the distribution of the brightness values obtained for each cell is checked and divided into three groups: black, white, and grey. Then, the shade of a cell is determined as the shade nearest to the brightness value of the cell.
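The most-frequent-color rule for a cell can be sketched in a few lines; the pixel-tuple format and the function name are illustrative:

```python
from collections import Counter

def cell_color(sampled_standard_pixels):
    """After conversion to standard colors, the color with the highest
    frequency among the sampled pixels represents the cell."""
    return Counter(sampled_standard_pixels).most_common(1)[0][0]
```

Sampling only a handful of pixels around each cell's centre point, as described above, keeps this majority vote robust against pixels that straddle cell boundaries.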
  • the data region is discriminated from the auxiliary regions (the parity region, the reference region and/or the control region) in the code image.
  • the colors, shades, shapes, and patterns represented in each cell are determined using the reference region, and in a case where there is a parity region, errors of the data cells are determined.
  • the step of searching for shapes, colors, patterns, and characters included in the code image is required in the step of decoding; besides this, a step of correcting a distorted image may also be required.
  • color determination is possible using one or more methods among the red, green, and blue (RGB) mode, the hue, saturation, and value (HSV) mode, the cyan, magenta, and yellow (CMY) mode, and the hue, lightness, and saturation (HLS) mode.
  • the code values of each of cells positioned in the data region and/or the auxiliary region are extracted for decoding.
  • reference colors or shades, and reference shapes or reference patterns, which serve as the reference for interpreting data in all regions, are obtained by detecting the colors or shades of a reference cell.
  • the colors, shapes, and patterns of the cells positioned in the data region, the parity region, or the control region are detected; a difference between the detected ones and the reference colors, reference shapes, and/or reference patterns is then obtained, and the difference is converted into the code value for each cell.
  • the code values corresponding to each of the cells may be obtained according to the colors or shade, shapes and/or patterns, which are read by the image input device.
  • the step of checking parity errors with respect to each row and column of the code image using the code values (that is, parity data) obtained from the parity region is performed.
  • environmental variables optimized to commonly used illumination, and weight values thereof, may be preset and stored in a decoding program or database such that the user can select the environmental variable most suitable for his or her own environment.
  • the occurrence of parity errors can be taken to mean that there are errors in reading colors with the environmental variables presently set; in this case, another environmental variable is adopted to read the colors again. If necessary, the direction or position of the code image may be searched for based on the parity data.
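As a sketch of the row/column parity check: the patent does not fix the parity scheme in this excerpt, so even parity over the cell code values is assumed here purely for illustration:

```python
def parity_ok(code_values, row_parity, col_parity):
    """Compare even parity of each row and column of cell code values with
    the parity bits read from the parity region."""
    rows_ok = all(sum(row) % 2 == p
                  for row, p in zip(code_values, row_parity))
    cols_ok = all(sum(col) % 2 == p
                  for col, p in zip(zip(*code_values), col_parity))
    return rows_ok and cols_ok
```

A failed check would trigger the retry described above: re-read the colors under a different environmental variable, or use the parity data to re-orient the code image.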
  • FIG. 15 is a block diagram of the apparatus according to a third embodiment of the present invention
  • FIG. 16 is a flow chart illustrating the operation of the apparatus shown in FIG. 15. Compared with the apparatus shown in FIG. 5, the difference is that the apparatus of FIG. 15 extracts the desired code image region from the raw image and then converts the colors of each pixel (or a pixel sampled from among the pixels) belonging to the code image region into the standard colors using the environmental variables by means of the color filter.
  • the other functions and operations are basically similar, and the memory is not shown for convenience.
  • only the differences from the apparatus of FIG. 5 will be described; the rest applies in the same way as long as there is no problem with the processing order or operation.
  • the apparatus for recognizing a code shown in FIG. 15 acquires the raw image in which the code image is included in an image-acquiring portion 151, and divides the colors or shades of the raw image into two colors according to a predetermined reference value in a binary-coded conversion portion 153, thereby generating a binary-coded image.
  • a pre-processing portion 154 sets the color of the portion excluding the code image portion in the binary-coded image as a specific background color, and sets the color of the portion of the raw image corresponding to the portion set as the background color in the binary-coded image as the background color, thereby discriminating the code image portion from the other portions.
  • a feature point-extracting portion 155 extracts a plurality of cells included in the code image portion and recognizes colors or shade represented in each cell.
  • a color filter 156 sets environmental variables in consideration of an environment at a time when the raw image is acquired, corrects the colors or shade recognized in each cell included in the code image portion by the environmental variables, converts the corrected colors or shade into a plurality of standard colors or standard shade used to generate the code image, thereby generating a standard image represented by the standard colors or standard shade.
  • a decoder 157 extracts a corresponding character, number, or symbol from the colors or shades recognized in each cell of the code image according to a relationship between a character, number, or symbol and the corresponding colors or shades, and generates code data.
  • step 161 the raw image in which the code image is contained is acquired.
  • step 162 the colors or shade of the raw image is divided into two colors according to the predetermined reference value, and thus the binary-coded image is generated.
  • step 163 the portion excluding the code image portion is represented by the specific background color on the basis of the binary-coded image.
  • step 164 the portion of the raw image corresponding to the portion represented by the background color in the binary-coded image is processed as the background color, thereby discriminating the code image portion from the other portions.
  • step 165 the plurality of cells included in the code image portion are extracted, and then colors or shade represented in each cell are recognized.
  • step 166 the environmental variables are set in consideration of the environment at a time when the raw image is acquired, and the colors or shade recognized in each cell included in the code image is corrected by the environmental variables.
  • step 167 the corrected colors or shade are converted into a plurality of standard colors or standard shade, which are used to generate the code image, and thus the standard image represented by the standard colors or standard shade is generated.
  • predetermined pixels are sampled on the basis of the center point of each cell, and the environmental variables are applied only to the sampled pixels, and thereby the standard colors or standard shade of the cell may be discriminated.
  • data related to the discriminated standard colors or standard shade of each cell are stored in a memory and are used to generate code data. As a result, steps required to generate the standard image may be omitted.
  • step 168 a corresponding character, number, or symbol is extracted from the colors or shade recognized in each cell of the code image according to the relationship between a character, number, or symbol and corresponding colors or shade, and thus code data are generated.
  • the method for recognizing a code according to the present invention can be embodied in a computer program.
  • the program can be realized in media used in a computer and in a common digital computer for operating the program.
  • the program can be stored in computer readable media.
  • the media can include magnetic media such as a floppy disk or a hard disk and optical media such as a CD-ROM or a digital video disc (DVD).
  • the program can be transmitted via carrier waves, such as over the Internet.
  • the computer readable media can be distributed among computer systems connected by networks, and the program can be stored and executed as computer readable code in a distributed fashion.
  • the apparatus for recognizing a code and the method therefor according to the present invention can receive a code image in which predetermined data are encoded as colors or shades, and can precisely discriminate the original colors or shades regardless of the environment in which the code image is input, so that the desired code data can be obtained.


Abstract

An apparatus and method for recognizing a code from a code image that is expressed physically or electronically and extracting the data represented in the code image are provided. The method includes the steps of receiving a raw image in which a code image is contained, detecting a background image included in the raw image, extracting a code image region from which the background image is excluded, recognizing the shape and type of the code image and the color or shade represented in each of the cells, converting the color or shade recognized from each of the cells into a corresponding character, number, or symbol, and generating code data. The code image in which predetermined data are represented as colors or shades is received, and the original colors or shades can be precisely discriminated regardless of the environment in which the code image is recognized.

Description

APPARATUS AND METHOD FOR RECOGNIZING CODE
Technical Field
The present invention relates to an apparatus for recognizing a code image, which is physically or electronically expressed, and for extracting information represented on the code image, and a method therefor.
Background Art
In a method for representing recognizable data, such as characters, numbers, or symbols, there is a case where characters, numbers, or symbols are represented as images in consideration of security of data or a display space.
Likewise, in order to read an original data from a code image in which data is represented as an image, a suitable decoder must be provided.
Disclosure of the Invention
To solve the above problem, it is an object of the present invention to provide an apparatus for recognizing a code which is capable of reading an original data from a code image in which data is expressed as a color or shade, and a method therefor. Accordingly, to achieve the above object, according to one aspect of the present invention, there is provided an apparatus for recognizing a code. The apparatus includes an image-acquiring portion for acquiring a raw image in which a code image is contained, a color-converting portion for correcting colors or shades recognized in the raw image, using environmental variables, converting the corrected colors or shades into a plurality of standard colors or standard shades used to generate the code image, and generating a standard image represented by the standard colors or standard shades, a binary-coding converting portion for dividing the colors or shades of the raw image into two colors according to a predetermined reference value and generating a binary-coded image, an image-processing portion for extracting an area excluding a code image area from the binary-coded image, setting a color of the area corresponding to the extracted area in the standard image as a background color, thereby discriminating the code image area from other areas, extracting a plurality of cells included in the code image area, and recognizing the standard color or standard shade represented in each of the cells, and a code-converting portion for extracting a corresponding character, number, or symbol from the color or shade recognized in each of the cells of the code image according to a relationship between a character, number, or symbol and a corresponding color or shade and generating code data.
To achieve the above object, according to another aspect of the present invention, there is provided an apparatus for recognizing a code. The apparatus includes an image-acquiring portion for acquiring a raw image in which a code image is contained, a binary-coding converting portion for dividing the colors or shades of the raw image into two colors according to a predetermined reference value and generating a binary-coded image, an image-processing portion for extracting an area excluding a code image area from the binary-coded image, setting a color of the area corresponding to the extracted area in the standard image as a background color, thereby discriminating the code image area from other areas, extracting a plurality of cells included in the code image area, and recognizing the standard color or standard shade represented in each of the cells, a color-converting portion for correcting colors or shades recognized with respect to each of the cells included in the code image area, using environmental variables, and converting the corrected colors or shades into a plurality of standard colors or standard shades used to generate the code image, and a code-converting portion for extracting a corresponding character, number, or symbol from the color or shade recognized in each of the cells of the code image according to a relationship between the character, number, or symbol and the corresponding color or shade and generating code data.
To achieve the above object, according to another aspect of the present invention, there is provided a method for recognizing a code. The method includes the steps of acquiring a raw image in which a code image is contained, correcting colors or shades recognized from the raw image, using environmental variables, converting the corrected colors or shades into a plurality of standard colors or standard shades used to generate the code image, and generating a standard image represented by the standard colors or standard shades, dividing the colors or shades of the raw image into two colors according to a predetermined reference value and generating a binary-coded image, extracting an area excluding a code image area from the binary-coded image, and setting a color of the area corresponding to the extracted area in the standard image as a background color, thereby discriminating the code image area from other areas, extracting a plurality of cells included in the code image area, and recognizing the standard color or standard shade represented in each of the cells, and extracting a corresponding character, number, or symbol from the color or shade recognized in each of the cells of the code image according to a relationship between the character, number, or symbol and the corresponding color or shade and generating code data. To achieve the above object, according to another aspect of the present invention, there is provided a method for recognizing a code.
The method includes the steps of acquiring a raw image in which a code image is contained, dividing the colors or shades of the raw image into two colors according to a predetermined reference value and generating a binary-coded image, extracting an area excluding a code image area from the binary-coded image, and setting a color of the area corresponding to the extracted area in the standard image as a background color, thereby discriminating the code image area from other areas, extracting a plurality of cells included in the code image area, and recognizing the standard color or standard shade represented in each of the cells, correcting colors or shades recognized from the raw image with respect to each of the cells included in the code image area, using environmental variables, converting the corrected colors or shades into a plurality of standard colors or standard shades used to generate the code image, and extracting a corresponding character, number, or symbol from the color or shade recognized in each of the cells of the code image according to a relationship between the character, number, or symbol and the corresponding color or shade and generating code data.

Brief Description of the Drawings
The above objects and advantages of the present invention will become more apparent by describing in detail preferred embodiments thereof with reference to the attached drawings in which:
FIG. 1A is a block diagram of an apparatus for recognizing a code according to a first embodiment of the present invention, and FIG. 1B is a flow chart illustrating the operation of the apparatus shown in FIG. 1A;
FIGS. 2A through 2E illustrate various examples of code images to be read by the apparatus, FIGS. 3A through 3C illustrate examples of code conversion tables used to convert predetermined data into an image, and FIG. 4 illustrates an example in which a code image is incorporated into a name card;
FIG. 5 is a block diagram of the apparatus according to a second embodiment of the present invention, and FIG. 6 is a flow chart illustrating the operation of the apparatus shown in FIG. 5;
FIGS. 7A through 7F illustrate results in which black-and-white images are obtained from raw images;
FIGS. 8A through 8F illustrate steps of removing a noise image from a black-and-white image; FIG. 9 is a flow chart illustrating the step of recognizing a standard color represented in each pixel of the code image;
FIGS. 10 through 14 illustrate examples for explaining FIG. 9; and FIG. 15 is a block diagram of the apparatus according to a third embodiment of the present invention, and FIG. 16 is a flow chart illustrating the operation of the apparatus shown in FIG. 15.
Best mode for carrying out the Invention
Hereinafter, the present invention will be described in detail by describing preferred embodiments of the invention with reference to the accompanying drawings. FIG. 1A is a block diagram of an apparatus for recognizing a code according to a first embodiment of the present invention, and FIG. 1B is a flow chart illustrating the operation of the apparatus shown in FIG. 1A. FIGS. 2A through 2E illustrate various examples of code images to be read by the apparatus, FIGS. 3A through 3C illustrate examples of code conversion tables used to convert predetermined data into an image, and FIG. 4 illustrates an example in which a code image is incorporated into a name card. The apparatus for recognizing a code reads a code image (the quadrangular image shown at the lower right corner of FIG. 4) represented on a physical medium and extracts the original code data corresponding to the code image. A code image is an image into which numbers, characters, or symbols are converted using a code conversion table, and it may be expressed in various ways, as shown in FIGS. 2A through 2D. First, the function and operation of the apparatus for recognizing a code will be described with reference to FIGS. 1A and 1B. In step 21, an image-acquiring portion 11 acquires an image including a
"code image", which is physically or electronically expressed. The code data to be eventually extracted is expressed in image form in the code image. The image-acquiring portion 11 reads the image physically expressed through an image input device, such as a scanner, a digital camera, a PC camera, a sensor, or a facsimile, and converts the read image into image data, which can be electronically processed. Here, the output of the image-acquiring portion 11 is referred to as the "raw image", and the raw image is formatted as an image file, which can be processed by a computer. In general, a noise image or background is contained in the raw image along with the code image. An image-processing portion 13 extracts the code image from the raw image and recognizes the color or shade of the individual cells contained in the code image.
In step 22, based on parameters such as an environmental variable and/or color mode, a code image region is extracted from the raw image. In step 23, the image-processing portion 13 generates data related to the code image based on the raw image and discriminates the shape, position, or type of the code image. In step 24, the image-processing portion 13 discriminates the number, shape, and position of the cells contained in the code image. In step 25, the image-processing portion 13 detects the color or shade of each cell. The image-processing portion 13 sets up an environmental variable in consideration of the ambient environment at the time when the raw image was acquired, corrects the color or shade of the individual cells using the environmental variable, and thereby detects the original color or shade of each cell.
The image-processing portion 13 extracts the code image region, excluding the background image portion, from the raw image and discriminates the shape and type of the code image, thereby discriminating the cells contained in the code image region on this basis. Preferably, the raw image is converted into a black-and-white image on the basis of a black-and-white environmental variable, which is set according to the degree of brightness in the state where the raw image is input. Preferably, the color of the background image portion of the black-and-white image is set to a unique background color, which is not used to represent code data. Preferably, the color of the raw image portions corresponding to the background image portion in the black-and-white image is then also set to the background color, and by discriminating code image portions from background portions, the code image region is extracted from the raw image. The number of arithmetic operations required to extract the code image region may be reduced by using the black-and-white image.
Preferably, the image-processing portion 13 receives an image in which the code image portions and the other portions are divided by the background color, divides the received image into a plurality of blocks, detects a region having a color or shade that is not the background color from each of the blocks, selects a block having the largest region among all blocks, detects a center point of the code image region contained in the selected block, and searches the entire image on the basis of the center point, thereby detecting the region having a color or shade that is not the background color as the code image region.
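The block-search procedure above can be sketched as follows. This is a simplified illustration rather than the patent's implementation: the input is assumed to be a 2-D array in which background pixels are 0 and candidate code pixels are 1, the image is assumed to contain at least one code pixel, and only the centroid of the non-background pixels in the densest block is computed as the starting point for the full search.

```python
def densest_block_center(image, block_size):
    """image: 2-D list of 0 (background) / 1 (code) values.
    Returns the centroid of the code pixels in the densest block."""
    rows, cols = len(image), len(image[0])
    best_count, best_origin = -1, (0, 0)
    # divide the image into blocks and count non-background pixels per block
    for br in range(0, rows, block_size):
        for bc in range(0, cols, block_size):
            count = sum(image[r][c]
                        for r in range(br, min(br + block_size, rows))
                        for c in range(bc, min(bc + block_size, cols)))
            if count > best_count:
                best_count, best_origin = count, (br, bc)
    # centroid of the code pixels inside the selected block
    br, bc = best_origin
    pts = [(r, c)
           for r in range(br, min(br + block_size, rows))
           for c in range(bc, min(bc + block_size, cols))
           if image[r][c]]
    cr = sum(p[0] for p in pts) // len(pts)
    cc = sum(p[1] for p in pts) // len(pts)
    return cr, cc
```

The returned point would then seed a search over the entire image for the connected non-background region, as the text describes.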
A code-setting portion 17 establishes a relationship (i.e., FIGS. 3A through 3C) between a character, number, or symbol used for representing data and a corresponding color or shade. In step 26, a code-converting portion 15 extracts the corresponding character, number, or symbol from the color or shade of each cell of the code image according to the relationship provided by the code-setting portion 17, thereby generating code data.
An example of setting an environmental variable used to read the color or shade of each pixel in an image will be described below. The environmental variable may be set to R, G, or B in an RGB mode, H, S, or V in an HSV mode, or a combination of R, G, and B and H, S, and V, so as to normalize the color value or shade value recognized in the raw image in consideration of the environment in which the raw image is read. In other words, values of a color environmental variable are added to or subtracted from the color or shade values of the pixels of the raw image. For example, the color or shade value may be a value for red, green, and blue in an RGB color mode; hue, saturation, and value (brightness) in an HSV color mode; cyan, magenta, yellow, and black in a CMYK color mode; or hue, intensity, and saturation in an HSI color mode. The color or shade represented in the raw image is adjusted to correct for the environment where the raw image was read, and thus the original color or shade can be obtained.
In general, an initial environmental variable is set on the presumption that a fluorescent light or a three-wavelength lamp was used for illumination. Otherwise, a white sheet of paper is used as a reference background before the raw image is input into a camera, and the environmental variable is set according to the ambient illumination. For example, the red light coming from a halogen lamp is relatively strong, and thus the environmental variable is set to remove the effect of the red light emitted from the halogen lamp. Next, if the actual color detected is normalized using the environmental variable, the effect of illumination can be reduced, and colors close to the original color can be obtained.
An example of correcting colors or shades using the environmental variable will be described below. Assume that the code image is comprised of eight colors and coded using the code conversion table shown in FIG. 3B. In a case where the RGB value represented in a pixel of a raw image is recognized as (100, 100, 100), and the environmental variable is set to (+100, -50, +50), the corrected RGB value for the pixel, obtained by performing an arithmetic operation on the RGB value and the environmental variable, is (200, 50, 150).
For each component of the corrected RGB value, if the component is 128 or greater, the component is converted to 255; otherwise, the component is converted to 0. As a result, the final RGB value is (255, 0, 255), and thus the color is discriminated as magenta.
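The correction-and-thresholding step just described can be written as a short sketch (the function name is illustrative):

```python
def correct_and_standardize(rgb, env):
    """Apply an RGB environmental variable, then snap each component
    to 0 or 255 at the 128 threshold described in the text."""
    # add the environmental variable to each component, clamping to [0, 255]
    corrected = tuple(max(0, min(255, c + e)) for c, e in zip(rgb, env))
    # snap each corrected component to a standard value
    standard = tuple(255 if c >= 128 else 0 for c in corrected)
    return corrected, standard
```

Using the values from the example, `correct_and_standardize((100, 100, 100), (100, -50, 50))` yields the corrected value `(200, 50, 150)` and the standard value `(255, 0, 255)`, i.e., magenta.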
Example 1) In the RGB color mode, the maximum value for each component of RGB is 255, and the minimum value is 0. Thus, (255, 0, 0) represents red, (0, 255, 0) represents green, (0, 0, 255) represents blue, (0, 0, 0) represents black, and (255, 255, 255) represents white. In a case where x is the value of R, G, or B, and y is the value of the environmental variable for each of R, G, and B, the components of the final RGB value for each pixel are determined as below.

f(x) = 255, if x + y ≥ 128 (where 0 ≤ x, y ≤ 255)
f(x) = 0, otherwise
Example 2) The YIQ and YUV color modes are obtained by assigning predetermined weights to the RGB values and are similar to the RGB mode. That is, the YIQ color mode may be obtained using Y = 0.299R + 0.587G + 0.114B, I = 0.596R - 0.274G - 0.322B, and Q = 0.211R - 0.523G + 0.312B. Y denotes brightness, and I and Q denote the degree of redness and blueness, respectively.
f(y) = white, if y ≥ 0.5
f(y) = black, otherwise
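A minimal sketch of this Y-based black/white decision follows; the division by 255 is an assumption made here so that the 0.5 threshold applies to a value normalized to [0, 1]:

```python
def luma(r, g, b):
    # Y component of the YIQ model, using the weights given in the text
    return 0.299 * r + 0.587 * g + 0.114 * b

def black_or_white(r, g, b):
    # normalize Y to [0, 1] and apply the f(y) threshold of 0.5
    return "white" if luma(r, g, b) / 255 >= 0.5 else "black"
```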
Example 3) The CMYK color mode is a color mode usually used for printing, and each color component is indicated as a percentage or ratio. The CMYK color mode is expressed in relation to the RGB value as (R/255, G/255, B/255). In a case where x is the value for C, M, Y, or K, and y is the value of the environmental variable corresponding to each color component, the final CMYK value for each pixel is determined as below.

f(x) = 1, if x + y ≥ 0.5 (where 0 ≤ x, y ≤ 1)
f(x) = 0, otherwise
Example 4) In the case of the HSV and HSI color modes, the value for hue is expressed as an angle. Here, in a case where x is the value for hue and the color environmental values are set so that 0 ≤ Trg < Tgb < Tbr ≤ 360, a method for discriminating colors may be adopted as below.

f(x) = red, if x < Trg or Tbr ≤ x (the hue angle wraps around at 360°)
f(x) = green, if Trg ≤ x < Tgb
f(x) = blue, if Tgb ≤ x < Tbr
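The hue rule can be sketched directly; the default thresholds Trg = 60, Tgb = 180, and Tbr = 300 below are illustrative values, not values taken from the text:

```python
def classify_hue(h, trg=60, tgb=180, tbr=300):
    """Discriminate red/green/blue from a hue angle h in [0, 360)
    using thresholds with 0 <= trg < tgb < tbr <= 360."""
    if trg <= h < tgb:
        return "green"
    if tgb <= h < tbr:
        return "blue"
    # hue wraps around 360°: h < trg or h >= tbr both mean red
    return "red"
```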
Example 5) For the Commission Internationale de l'Eclairage (CIE) color mode, the value of a color is represented on an x-y graph. In this case, x and y are used as references for discriminating colors, and thus a method for discriminating colors may be adopted as below.

f(x, y) = red, if x ≥ 0.4
f(x, y) = green, if x < 0.25 and y ≥ 0.4
f(x, y) = blue, if x < 0.25 and y < 0.25
f(x, y) = gray, otherwise
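As a sketch, the x-y chromaticity rule translates directly into code (the thresholds are the ones stated in the piecewise rule above):

```python
def classify_cie(x, y):
    """Discriminate a color from its CIE chromaticity coordinates."""
    if x >= 0.4:
        return "red"
    if x < 0.25 and y >= 0.4:
        return "green"
    if x < 0.25 and y < 0.25:
        return "blue"
    return "gray"
```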
Data describing the relationship between typical environments and the environmental variables to be set for them are stored in a database, and preset data for the actual working environment are read, so that the appropriate environmental variable may be used. The environmental variable is determined experimentally, by analyzing the optical characteristics and the ambient illumination of the input optical device, such that colors read by the device are corrected into the original colors, or the code image is discriminated from the background, and thereby the colors may be recognized without errors regardless of the device or environment. In addition, two or more environmental variable groups having different objectives may be formed. In other words, one of the environmental variable groups may be an environmental variable for separating the code image from the background, and the other may be an environmental variable for discriminating the color or shade of the code image separated from the background. For example, the value for R is relatively high when colors are discriminated in the RGB mode under red illumination. Thus, the value for R read by the optical device is reduced by a predetermined weight value, and the effect of the environment is excluded. When the code image is discriminated as black and white in the HSV mode under bright illumination, the weight value of the V component is increased, and thereby colors are discriminated. Under dim illumination, the weight value of the V component for discriminating black and white from the other colors in the HSV mode is reduced, and the weight value of the S component is also reduced, and thereby colors are discriminated.
When the environmental variable is reset, the distribution of the values for R, G, B, H, S, and/or V, which are obtained from each cell in the code image by the optical device, is determined, and the environmental variable and a weight value thereof may be reset with reference to the distribution.
Examples of code images to be processed by the apparatus for recognizing a code shown in FIG. 1A, and the code data corresponding to them, will be described below. FIGS. 2A through 2D illustrate various examples of code images which can be recognized by the apparatus for recognizing a code according to the present invention. As shown in the drawings, cells of various shapes, for example, quadrangular, circular, elliptical, cross-shaped, or honeycomb-shaped, may be realized, and a combination of shapes is possible when a code image is formed. The shape and size of a code image or cell may be selected according to the content or amount of data to be represented in the code image. A code image comprised of a plurality of such cells may likewise take various overall shapes, and a code image having a shape similar to the barcode shown in FIG. 2D is also included.
FIG. 2E illustrates the regions of a code image based on the roles of the data represented in the code image. The code image includes a data region 291, which is formed of at least one data cell in which colors, shades, shapes, patterns, or a combination thereof are differently encoded according to the content of the data. The data region 291 may be formed of one or more data cells in which characters are encoded as images; each of the data cells may represent one character, or a set of a plurality of data cells may represent one or more characters. For example, the character "A" may be represented as a single red cell, or may be represented as two cells, such as one red cell and one green cell. Code data contained in the data region 291 are comprised of characters, numbers, and symbols, and may comprise names, addresses, telephone numbers, fax numbers, host addresses of networks, domain names and IP addresses used on the Internet, uniform resource locators (URLs), protocols, or document names, depending on the user's demands.
At least one of a parity region 293, a reference region 295, and a control region 297 may be further included in the code image. The parity region 293 is comprised of parity cells for checking recognition errors of the cells represented in the data region 291. The reference region 295 is comprised of at least one reference cell, which provides reference colors, reference shades, reference shapes, reference patterns, or a combination thereof for determining the colors, shades, shapes, patterns, or a combination thereof of the data cells formed in the data region 291. The control region 297 is comprised of at least one control cell, in which control data for commands or services to be provided using the data represented in the data region 291 are encoded. Hereinafter, all regions excluding the data region 291, i.e., the parity region 293, the reference region 295, and the control region 297, are referred to as "auxiliary regions", and the cells included in the auxiliary regions are referred to as auxiliary cells. The parity region 293 is used to determine whether colors or shades (or possibly, shapes and/or patterns) are expressed suitably for the data cells according to the content of the code data. In the parity region 293, parity data are obtained according to the code values designated for the color or shade represented in each of the data cells, and the parity cells are formed of colors or shades corresponding to the parity data. The reference region 295 is used to set the reference colors (or reference shades, reference shapes, or reference patterns, as occasion demands) for recognizing the colors (or shades, shapes, or patterns) represented in the cells of the data region 291 and/or the auxiliary regions.
The colors of the cells in each region may be represented in a color mode such as a red-green-blue (RGB) color mode, a hue-saturation-value (HSV) color mode, a cyan-magenta-yellow-black (CMYK) color mode, a hue-saturation-intensity (HSI) color mode, a CIE color mode, or a YIQ or YUV color mode. Even in a case where a code is formed by black-and-white shades (greyscale), the data of each cell can be precisely recognized on the basis of the shades of white and/or black represented in the reference region 295. In addition, a boundary region for discriminating regions may be further implemented between the regions included in the code image, and a boundary region for discriminating cells may be further included between the cells in each region. The boundary region may be comprised of a line or a cell formed of a specific color or pattern, and a boundary line or boundary cell may be formed as black or white.
Colors can be printed differently depending on the kind of printer or the material used as the printing paper, and the same color may be recognized slightly differently according to the characteristics of a scanner or camera. In consideration of this, the reference cell included in the reference region 295 provides a reference for discriminating the colors represented in the data region 291. That is, even if a color is output differently depending on the output device used, or input differently according to the input device used, such as a scanner, the color difference between the color of the reference region 295 and the color of the data region 291 is constant. The colors of the cells of the data region 291 thus have a relative color difference with respect to the reference colors of the reference region 295, and they are obtained by comparison with the reference colors on the basis of the RGB mode or HSV mode; thereby, the data of the data cells can be precisely recognized even if the image input device or output device is changed. In a case where shapes or patterns are input into an input device such as a camera, the shapes or patterns may be inclined or warped. Reference shapes or reference patterns are provided in the reference region 295, and thus a wrong input state can be sensed, and the shapes or patterns of the data cells can be precisely recognized. Various services can be provided to a user using the code data of the data region 291, depending on the type of application. For example, in a case where a homepage address (that is, a URL) on the Internet is expressed on a name card as a code image, the code image is decoded by a computer, and the web browser of the computer, or a server connected to the computer, is executed and can thereby be programmed to access the homepage.
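One way to exploit the constant color difference against the reference region can be sketched as follows: the measured color of a reference cell that should have printed as white supplies a per-channel offset, and each data cell is shifted by that offset before being matched to the nearest standard color. The offset-plus-nearest-match scheme and the squared-distance metric are illustrative assumptions, not the patent's exact method.

```python
def nearest_standard_color(cell_rgb, reference_rgb, standard_colors):
    """Classify a measured data-cell color using a measured white
    reference cell. standard_colors maps color names to RGB tuples."""
    # per-channel offset observed on the white reference cell
    offset = tuple(255 - r for r in reference_rgb)
    # shift the measured cell color by the same offset, clamped to [0, 255]
    shifted = tuple(max(0, min(255, c + o)) for c, o in zip(cell_rgb, offset))

    def dist2(a, b):  # squared Euclidean distance in RGB space
        return sum((p - q) ** 2 for p, q in zip(a, b))

    return min(standard_colors, key=lambda name: dist2(shifted, standard_colors[name]))
```

For instance, with standards {"red": (255, 0, 0), "magenta": (255, 0, 255), "blue": (0, 0, 255)}, a cell read as (235, 0, 235) next to a white reference read as (235, 235, 235) is classified as "magenta".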
In addition, in a case where an electronic mail address is expressed as a code image, the code image is decoded by a computer, and the mailing software of the computer is executed, thereby providing an environment in which electronic mail can be sent to that address. As another example, in a case where the code image is input into a portable terminal, a user can call a telephone number corresponding to the code image or receive services for data related to geography. In such cases, the automatic service function can be implemented by a separate program or in accordance with the kind of objective data in a decoding program. In addition, the control region 297, in which a command for executing the automatic service function is expressed as an image, may be included in the code image, and the decoding program thereby automatically implements services using the control data decoded from the control region 297.
In addition, commands or meta-data for controlling the objective data of the data region 291 can be included in the control region 297. For example, data encoded in the control region 297 may include various meta-data, such as the decoding order of the cells formed in the data region 291, the position of the reference cell of the reference region 295, and the position or property of the parity region 293.
FIG. 3A illustrates an example in which two bits of data are expressed as four colors. If each cell has one of four colors, two bits of data can be expressed per cell. Then, in a case where it is defined that one character is expressed as four consecutive cells, 8 bits, that is, 256 kinds of characters, can be expressed. Meanwhile, in a case where there are four kinds of shapes for a cell of the same color (i.e., small quadrangle, large quadrangle, small circle, and large circle), two bits of data can be expressed by shape alone, and 256 kinds (8 bits) of data can be expressed in a case where each such cell may also be filled with one of four different colors.
FIG. 3B illustrates an example of a code conversion table for converting various characters (alphabetic or special characters), numbers, or shapes into images; in this example, one character is mapped to one or two color cells.
In view of an encoding method using the code conversion table of FIG. 3B, various characters are converted into code values, and then code images are generated using the colors allocated to each of the code values. In the present embodiment, the code images are generated using eight colors, and two consecutive cells are used to express one character or number. Code values from "000" to "111" are allocated to the eight colors, and each character is encoded as two colors. For example, the number "3" is allocated the code value "000 011", is encoded as the color (black) allocated to the code value "000" and the color (cyan) allocated to the code value "011", and is therefore imaged as two consecutive cells: one black cell and one cyan cell. Various characters or numbers included in code data are converted into code values according to the code conversion table shown in FIG. 3B, and then the colors corresponding to the code values can be expressed in a quadrangular matrix shape comprised of a combination of quadrangular cells.
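The two-cell encoding of the number "3" can be sketched as below. The eight-color ordering assumes the 3-bit code value is read as R, G, B bits (so that "000" is black and "011" is cyan, matching the example), and the single-entry table is only the fragment of FIG. 3B quoted in the text — both are assumptions for illustration.

```python
# 3-bit code values interpreted as (R, G, B) bits: 000=black ... 111=white
COLORS = ["black", "blue", "green", "cyan", "red", "magenta", "yellow", "white"]
TABLE = {"3": "000011"}  # hypothetical fragment of the FIG. 3B table

def char_to_cells(ch):
    """Encode one character as two consecutive color cells."""
    bits = TABLE[ch]
    return [COLORS[int(bits[i:i + 3], 2)] for i in (0, 3)]

def cells_to_char(cells):
    """Decode two recognized cell colors back into a character."""
    bits = "".join(format(COLORS.index(c), "03b") for c in cells)
    return next(ch for ch, b in TABLE.items() if b == bits)
```

Here `char_to_cells("3")` produces one black cell followed by one cyan cell, and decoding those two cells recovers "3".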
FIG. 3C illustrates an embodiment in which a code image is generated using a greyscale code. The greyscale code is formed according to the brightness of a greyscale tone instead of the mixing ratio of red (R), green (G), and blue (B). Thus, in the reference region 295, a reference shade is set to black, white, or grey, and the cells in the data region 291 have values which are coded by a grey difference compared with the reference shade of the reference region 295. In a case where there is no reference region 295 in the code image, the shade of each cell of the code image is calculated, and cells having similar shades are collected into cell sets. The same code value is allocated to the cells belonging to the same cell set, and errors in decoding can then be determined using the parity region 293. In a case where determination errors occur, it is determined again whether errors occur after the shade of each cell is recalculated or the reference for forming a cell set is set differently. The greyscale code image may be applied to media which are printed mainly in black and white, such as newspapers.
FIG. 4 illustrates an example in which a code image generated using the above code system is incorporated into a name card. In the case of using the apparatus for recognizing a code shown in FIG. 1A, a user can generate an image file that includes the quadrangular code image represented at the lower right corner of the name card, using a camera or scanner, and the user can process the image file and recognize the code data represented by the code image. FIG. 5 is a block diagram of the apparatus according to a second embodiment of the present invention, and FIG. 6 is a flow chart illustrating the operation of the apparatus shown in FIG. 5.
An image-acquiring portion 51 acquires a raw image in which a code image is contained. An image file acquired by an image input device, such as a camera or scanner, or an image file existing in the format of an electronic file, is received, and the data format of the image file is converted if necessary. For example, a compressed video image file is uncompressed and converted into an image file having a bit-map shape. In this way, the image file is stored in a memory 52 as a raw image. An ambient noise image is also included in the raw image, along with the code image which the user wants to detect. A control portion 58 receives the type of the code image, sets an environmental variable in consideration of the environment at the time when the raw image was acquired, or reads an environmental variable already stored, and transmits the environmental variable to a color filter 53. The memory 52 temporarily stores image data required for image processing.
The color filter 53 corrects the colors or shades recognized from the raw image by the environmental variable, converts the corrected colors or shades into standard colors or standard shades, which are used to generate the code image, and thereby generates a standard image represented by the standard colors or standard shades. Here, standard colors or standard shades mean the colors or shades which are set to correspond to characters, numbers, or symbols when the code image is generated (see FIGS. 3A through 3C). The code image displayed on a physical medium may not be represented in the originally set colors, or the original colors may not be recognized, due to the characteristics of an imaging device or the effect of the ambient environment when the code image is printed by an image output device or processed by an image input device. For example, although the RGB value of magenta is (255, 0, 255), the RGB value recognized from a cell intended to be magenta is not exactly (255, 0, 255) but some value near (255, 0, 255). Thus, code data can be extracted by applying the code conversion table only if the colors or shades actually recognized are converted into standard colors or standard shades. A binary-coded filter 54 divides the colors or shades of the raw image into two colors according to a predetermined reference value (a black-and-white environmental variable) and generates a binary-coded image. The binary-coded image may be implemented as black and white or as two specific colors. Here, the reference value may be set to the average of the values for R, G, and B, a minimum or maximum value among them, the value for V of the HSV color mode, the value for I of the HSI color mode, or the value for K of the CMYK color mode.
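A sketch of the binary-coding step, using the mean of R, G, and B as the per-pixel reference value (one of the choices listed above); the 128 threshold is an illustrative default:

```python
def binarize(pixels, threshold=128):
    """Divide an RGB image into two classes: 1 marks pixels whose mean
    channel value reaches the reference value, 0 marks the rest."""
    return [[1 if sum(px) / 3 >= threshold else 0 for px in row]
            for row in pixels]
```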
A pre-processing portion 55 receives the raw image and the binary-coded image, sets the color of the portion excluding the code image portion in the binary-coded image to a specific background color, and sets the color of the portion of the standard image which corresponds to the portion set as the background color in the binary-coded image to the background color, thereby discriminating the code image portion from the other portions. For example, the color of pixels positioned at the edge of the binary-coded image is set to the background color, and then the color of pixels connected to the pixels set as the background color is also set to the background color, thereby discriminating the code image portion from the other portions. Here, the background color is set to one of the colors that are not used to generate the code image. A feature point-extracting portion 56 extracts the plurality of cells included in the code image portion and then recognizes the standard color or standard shade represented in each cell.
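The edge-seeded background marking can be sketched as a flood fill: border pixels whose binary value is 0 are marked with a background code, and the mark propagates to connected 0-valued pixels, so that 0-valued pixels enclosed by the code image are left untouched. The 0/1/2 encoding (non-code, code, background) is an assumption made for this sketch.

```python
from collections import deque

def mark_background(binary, bg=2):
    """binary: 2-D list of 0 (non-code) / 1 (code) values, modified in
    place so that 0-pixels connected to the border become bg."""
    rows, cols = len(binary), len(binary[0])
    # seed the fill with all 0-valued border pixels
    q = deque((r, c) for r in range(rows) for c in range(cols)
              if (r in (0, rows - 1) or c in (0, cols - 1)) and binary[r][c] == 0)
    for r, c in q:
        binary[r][c] = bg
    # propagate the background mark through 4-connected 0-valued pixels
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols and binary[nr][nc] == 0:
                binary[nr][nc] = bg
                q.append((nr, nc))
    return binary
```

After this pass, anything still marked 0 or 1 belongs to the code image portion (including holes enclosed by it), which matches the connectivity rule described above.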
A decoder 57 extracts a corresponding character, number, or symbol from the color or shade recognized in each cell of the code image according to a relationship between characters, numbers, or symbols and the corresponding colors or shades, and generates code data. Preferably, the color or shade most widely distributed among the colors or shades of the pixels belonging to each cell is recognized as the color of the corresponding cell. The operation of the apparatus for recognizing a code shown in FIG. 5 will be described below with reference to the flow chart of FIG. 6. In step 61, the raw image, in which the code image is included, is acquired. In step 62, the environmental variables are set in consideration of the environment at the time when the raw image was acquired, and the colors or shades recognized from the raw image are corrected by the environmental variables. In step 63, the corrected colors or shades are converted into a plurality of standard colors or standard shades, which are used to generate the code image, and thus the standard image represented by the standard colors or standard shades is generated.
In step 64, the binary-coded image is generated by dividing the colors or shades of the raw image into two colors according to a predetermined reference value. In step 65, a color of the portion excluding the code image portion is set as the specific background color on the basis of the binary-coded image. In step 66, the portion of the standard image corresponding to the portion set as the background color in the binary-coded image is processed as the background color, and thus the code image portion can be discriminated from the other portions. In step 67, the plurality of cells included in the code image portion are extracted, and then the standard color or standard shade represented in each cell is recognized. In step 68, a corresponding character, number, or symbol is extracted from the color or shade recognized in each cell of the code image according to the relationship between the character, number, or symbol and the corresponding color or shade, thereby generating code data.
The function and operation of the apparatus for recognizing a code shown in FIG. 5 will be described in greater detail.
The binary-coded filter 54 converts the raw image into a black-and-white image according to the black-and-white environmental variable. The black-and-white image is used to facilitate discrimination of an object included in the raw image and to improve the processing speed. The brightness value of each pixel of the raw image is compared with the environmental variable value and output as a black or white pixel, and thus the black-and-white image is generated. The black-and-white environmental variable means a parameter or a set of parameters for converting colors represented in the raw image into black or white, such as the averaged value in which the sum of the values for red, green, and blue is divided by 3, or the value V (brightness) of the HSV color mode. In a case where the total brightness of the raw image is relatively dark (this may be the case when the ambient environment is slightly dark when the raw image is scanned, or the brightness is low due to the characteristics of the imaging device), the value of the black-and-white environmental variable is set low. In a case where the total brightness of the raw image is relatively high, the value is set high. For example, in a case where the RGB value in a pixel of the raw image is
(100, 100, 100), the average brightness of the pixel is 100. In such a case, assuming that the value of the environmental variable is 80, a pixel having a value brighter than the environmental variable value is recognized as white. Otherwise, the pixel is recognized as black. When a color image is binary-coded into a black-and-white image, black or white pixels may be generated by applying a single reference value to all pixels, or by dividing the entire image into several portions, setting a reference value for each portion, and applying that reference value to the pixels belonging to it. That is, in the local threshold value method, the image is divided into blocks having predetermined sizes, the brightness values of the pixels belonging to each block are averaged, and the averaged value is set as the black-and-white environmental variable for that block. The brightness value of each pixel is compared with that variable value, and thus the pixels belonging to each block can be binary-coded as black or white. This method is most effective in a case where the whole image is locally dark or bright.
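The local threshold value method described above can be sketched as follows. This is an illustrative sketch; the block size and names are hypothetical, and for brevity the input is taken as an already-computed 2-D array of brightness values.

```python
# Hedged sketch of the local threshold idea: divide the image into blocks
# and use each block's mean brightness as its own black-and-white
# environmental variable.

def local_binarize(gray, block=2):
    """gray: 2-D list of brightness values -> 2-D list of 0/1."""
    h, w = len(gray), len(gray[0])
    out = [[0] * w for _ in range(h)]
    for by in range(0, h, block):
        for bx in range(0, w, block):
            ys = range(by, min(by + block, h))
            xs = range(bx, min(bx + block, w))
            vals = [gray[y][x] for y in ys for x in xs]
            ref = sum(vals) / len(vals)         # per-block threshold
            for y in ys:
                for x in xs:
                    out[y][x] = 1 if gray[y][x] > ref else 0
    return out

# A locally dark right half still separates cleanly into black and white:
g = [[200, 220, 40, 60],
     [210, 230, 50, 70]]
print(local_binarize(g, 2))  # [[0, 1, 0, 1], [0, 1, 0, 1]]
```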
A multiple threshold value method may also be adopted when a color image is converted into a black-and-white image. A histogram of the brightness values of the pixels belonging to the image is obtained, from which the frequency of each brightness value can be read. The histogram can be divided into brightness values having a high frequency and brightness values having a low frequency. In general, the brightness values of the pixels converge toward a few specific brightness values. Thus, brightness values having a lower frequency, which are positioned between brightness values having a higher frequency, are set as a plurality of black-and-white environmental variables, and then the plurality of black-and-white environmental variables are applied to the color image in order. As a result, in a case where an image has a greater variation in brightness, the most appropriate variable is selected from among the plurality of black-and-white environmental variables, so a proper black-and-white image can be generated.
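The multiple threshold idea can be sketched as follows. This is a simplified illustration, not the patent's algorithm: it treats any brightness value occurring at least `peak_min` times as a peak and takes the midpoint of each gap between adjacent peaks as a candidate threshold; a real histogram would need smoothing and proper valley detection.

```python
# Hedged sketch of the multiple-threshold idea: histogram the brightness
# values, and take low-frequency values lying between high-frequency peaks
# as candidate black-and-white environmental variables to try in order.

from collections import Counter

def candidate_thresholds(gray_values, peak_min=3):
    hist = Counter(gray_values)
    peaks = sorted(v for v, n in hist.items() if n >= peak_min)
    # midpoint of each gap between adjacent peaks is a candidate threshold
    return [(a + b) // 2 for a, b in zip(peaks, peaks[1:])]

vals = [10] * 5 + [12] + [120] * 4 + [240] * 6
print(candidate_thresholds(vals))  # [65, 180]
```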
FIG. 7 illustrates the results in which a black-and-white image is obtained from the raw image. FIG. 7A illustrates the raw image. The raw image is actually a color image but is represented as a greyscale image due to the limitations of the drawings. FIG. 7B illustrates the example in which the values for R, G, and B of the pixels are averaged, and the averaged value is compared with a reference value of 128 to obtain the black-and-white image. For FIG. 7C, the reference value is set to 90. FIG. 7D illustrates an example that adopts the local threshold value method. FIG. 7E illustrates the example in which the minimum value among the values for R, G, and B of the pixels is compared with a reference value of 128 to obtain a black-and-white image, and for FIG. 7F, the reference value is set to 90.
The pre-processing portion 55 receives the color image and the black-and-white image, compares the color image with the black-and-white image, and thus removes an unnecessary noise image from the color image and the black-and-white image. The step of removing the noise image on the basis of the black-and-white image is performed by the steps of removing a background image and removing a small noise image, which will be described in detail with reference to FIGS. 8A through 8F.
In general, white or a redundant blank having high luminance exists around the code image, and the region of the code image is separated from adjacent regions. In the step of removing the background image, the connection state of the noise image is checked, and then the background image can be removed. FIGS. 8A through 8F illustrate the step of removing the background image. First, the noise image positioned at the edges of the black-and-white image (see FIG. 8A), that is, the black pixels among the pixels positioned at the edges of the black-and-white image, have their color set to a specific shade or color, which is a color or shade that is not represented in the cells of the code image and is referred to as a "background color" (see FIG. 8B). The entire region is then checked, and pixels connected to the noise image also have their color set to the background color. In other words, black pixels connected to the pixels represented by the background color are detected in order, and the color of those pixels is set to the background color (see FIGS. 8C through 8E). If there are no further black pixels connected to the pixels having the background color, the portion set to the background color is determined to be the region of the noise image, and the portion comprised of the black pixels outside the region of the noise image is determined to be the region of the code image. In such a case, in order to check the connection state of the noise image, it is efficient to search the pixels belonging to the noise image simultaneously in all directions, such as from left to right, right to left, top to bottom, and bottom to top.
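The background-removal step described above amounts to a flood fill from the image edges. The sketch below is illustrative; the marker value 2 stands in for the background color (any value not used for code-image cells), and a breadth-first search replaces the simultaneous multi-direction scan the text suggests.

```python
# Hedged sketch of background removal: starting from black pixels on the
# image edges, flood-fill connected black pixels with a background marker.

from collections import deque

BG = 2  # background marker: a value not used for code-image cells

def remove_background(binary):
    """binary: 2-D list, 1 = black pixel, 0 = white. Marks edge-connected
    black pixels (the surrounding noise image) with BG, in place."""
    h, w = len(binary), len(binary[0])
    queue = deque((y, x) for y in range(h) for x in range(w)
                  if binary[y][x] == 1 and (y in (0, h - 1) or x in (0, w - 1)))
    for y, x in queue:
        binary[y][x] = BG
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and binary[ny][nx] == 1:
                binary[ny][nx] = BG
                queue.append((ny, nx))
    return binary

img = [[1, 1, 0, 0],
       [1, 0, 0, 0],
       [0, 0, 1, 0],     # the interior black pixel is the code image
       [0, 0, 0, 0]]
print(remove_background(img))
```

After the call, the edge-connected noise is marked with the background value while the interior code-image pixel keeps its original value.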
In the step of removing the small noise image, the connection state of each pixel is checked in the image from which the background noise image has been removed; if the length or area over which the pixels are connected is less than a reference value, the image is determined to be a small noise image and is removed. For example, in a case where the number of black pixels in a portion having a predetermined area is less than the reference value, the image corresponding to that portion may be determined to be a noise image. In this way, the code image portion is what remains when the portion determined to be the noise image and the white pixel portion are excluded from the black-and-white image. The result of removing the noise image from the black-and-white image is applied to the color image, so the region of the code image is extracted from the color image. For example, a pixel existing in a position (coordinate) determined to belong to the noise image in the black-and-white image is also determined to belong to the noise image in the color image. The feature point-extracting portion 56 receives the color image and/or black-and-white image from which the noise image has been removed, searches for the feature points comprising the image (code image) of a specific object and its region, searches for data related to the type and position of the code image from the feature points of the object, and determines errors. The feature point-extracting portion 56 operates by performing the steps of dividing the standard image into blocks, searching for the region of the code image, extracting the feature points from the searched region of the code image, and determining the type of the code. FIG. 9 is a flow chart illustrating the step of recognizing the standard colors represented in each cell of the code image.
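Small-noise removal by connected-component area can be sketched as follows. An illustrative sketch: names and the area threshold are hypothetical, and 4-connectivity is assumed.

```python
# Hedged sketch of small-noise removal: any connected group of black
# pixels whose area falls below a reference value is treated as noise.

def remove_small_components(binary, min_area=3):
    h, w = len(binary), len(binary[0])
    seen = [[False] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if binary[y][x] == 1 and not seen[y][x]:
                stack, comp = [(y, x)], []
                seen[y][x] = True
                while stack:            # collect one connected component
                    cy, cx = stack.pop()
                    comp.append((cy, cx))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = cy + dy, cx + dx
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny][nx] == 1 and not seen[ny][nx]):
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                if len(comp) < min_area:        # too small: noise
                    for cy, cx in comp:
                        binary[cy][cx] = 0
    return binary

img = [[1, 0, 0],
       [0, 1, 1],
       [0, 1, 1]]
print(remove_small_components(img))  # [[0, 0, 0], [0, 1, 1], [0, 1, 1]]
```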
In a case where the standard image in which the background color is represented (or from which the noise image has been removed) is input in step 91, the standard image is divided into blocks so that the region of the code image can be searched for in step 92. FIG. 10A illustrates how the entire image is divided into blocks having predetermined sizes. The portion represented as a shaded quadrangle denotes a region with the code image. After the image is divided into blocks, if the size of the significant image belonging to each block is checked, the position of the code image may be estimated. Here, the significant image means the code image portion excluding the noise image portion whose color has been set to the background color. Thus, the center point of the block in which the region of the code image is the largest is searched for, and the region of the code image is searched for on the basis of that center point. In other words, in step 93, the number of pixels belonging to the region of the code image, rather than the background region, is detected for each block. In step 94, the block with the greatest number of pixels belonging to the region of the code image is detected, and the center point of the block (or the center point of the region of the code image belonging to the block) is detected.
The region of the entire image is divided into a plurality of blocks before the region of the code image is searched for, whereby the arithmetic operations required for processing can be reduced. In a case where the shape of the code image is a quadrangle and the position of the code image is roughly known, the arithmetic operations can be reduced. When the number of pixels belonging to the region of the code image in each block is calculated in FIG. 10A, the number is greatest in block 6 and decreases in order through blocks 2, 3, and 5. Thus, in a case where the image arithmetic operations are performed only in the regions of blocks 2, 3, 5, and 6, and not in blocks 1, 4, 7, 8, or 9, where there are no or few pixels belonging to the region of the code image, the number of arithmetic operations performed can be greatly reduced. As shown in FIG. 10B, in a case where the code image is comprised of a plurality of images separated from one another in space, there are a plurality of regions of the code image. Preferably, in such a case, the largest image region among the plurality of code images, which is assumed to be the region of the code image, is searched for first, and then the other image regions are checked in order of size.
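The block search of steps 93 and 94 can be sketched as follows. An illustrative sketch: a fixed 3x3 grid and even block sizes are assumed for simplicity, and the center of the densest block stands in for the center point of steps 93-94.

```python
# Hedged sketch of the block-search step: divide the image into a grid of
# blocks, count code-image pixels (value 1) per block, and take the center
# of the densest block as the starting point for the region search.

def densest_block_center(binary, grid=3):
    h, w = len(binary), len(binary[0])
    bh, bw = h // grid, w // grid
    best, center = -1, None
    for by in range(grid):
        for bx in range(grid):
            count = sum(binary[y][x]
                        for y in range(by * bh, (by + 1) * bh)
                        for x in range(bx * bw, (bx + 1) * bw))
            if count > best:
                best = count
                center = (by * bh + bh // 2, bx * bw + bw // 2)
    return center, best

img = [[0] * 6 for _ in range(6)]
for y in range(3, 6):
    for x in range(3, 6):
        img[y][x] = 1          # code image concentrated bottom-right
print(densest_block_center(img))  # ((5, 5), 4)
```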
Subsequently, the step of searching for the region of the code, in which the code image is formed, on the basis of the center point found in the step of dividing the standard image into blocks is performed. In step 95, the entire standard image is searched from the center point, and a candidate region to be the region of the code image is detected. In step 96, the feature points are detected from the candidate region, and thereby the shape and type of the code image are determined. In this step, the region of a figure formed by the circumscription points of the candidate region is searched for, which may result in a quadrangular region or a circular region, and the image comprised of the detected region becomes the code image. Then, the cell regions in the code image are discriminated from each other in step 97, and the standard colors represented in each cell are recognized on the basis of the colors of the pixels belonging to each cell region in step 98.
FIGS. 11A through 11C illustrate the step of detecting the candidate region; the regions 104 marked by slanted lines denote the code image region that is actually to be obtained, and candidate regions 103 including the regions 104 are detected. This step simplifies the process by selecting the necessary partial image region from the entire image and performing the subsequent arithmetic processing only on that partial image. The extreme points (i.e., the points having minimum and maximum values on the x and y coordinates) of the image region estimated to be the code image are obtained, and the figure comprised of the extreme points is determined as the candidate region 103. For example, the coordinates having the minimum and maximum values on the x-axis and the minimum and maximum values on the y-axis among the pixels belonging to the image, excluding the background region, within the regions 102 of blocks 2, 3, 5, and 6 selected in FIG. 10A are obtained, and the figure (quadrangle) formed by these coordinates is determined as the candidate region.
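The extreme-point computation can be sketched as a bounding box. An illustrative sketch; it scans the whole image rather than only the selected blocks, and the names are hypothetical.

```python
# Hedged sketch of candidate-region detection: the bounding quadrangle of
# the extreme points (min/max x and y) of pixels presumed to belong to the
# code image.

def candidate_region(binary):
    """Returns ((min x, min y), (max x, max y)) over non-background
    pixels, i.e. the upper-left and lower-right candidate-region corners."""
    pts = [(x, y) for y, row in enumerate(binary)
           for x, v in enumerate(row) if v == 1]
    xs = [p[0] for p in pts]
    ys = [p[1] for p in pts]
    return (min(xs), min(ys)), (max(xs), max(ys))

img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 0, 1, 0]]
print(candidate_region(img))  # ((1, 1), (2, 2))
```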
Methods for searching for a candidate region include a reduction search method and an extension search method. In the extension search method, the region corresponding to the code image is searched for while extending outward from the center point. That is, the code image region is searched for by extending into portions determined to be the code image region rather than the background region. In the reduction search method, the code image region is searched for by shrinking toward the center point from the outside. In a case where the code image is a quadrangle, the candidate region is expressed as an upper-left coordinate and a lower-right coordinate. In a case where the code image is a circle, the candidate region is expressed as the coordinate of the center point and the length of the radius.
Methods for determining the feature points include a diagonal search method and a boundary detection method. In the diagonal search method (see FIG. 12A), the feature points are searched for using segments having a temporary slope in the candidate region. For example, in a case where the code image is formed of quadrangles, in order to find the vertices of the quadrangular code image in the candidate region, a diagonal having an angle of 45 degrees is drawn at each vertex of the candidate region, and thus the quadrangular region formed by the points externally contacting the diagonals is found. As shown in FIG. 12A, the point where a pixel belonging to the code image region, rather than the background region, contacts the diagonal having the angle of 45 degrees beginning at each vertex of the candidate region is determined as a feature point. The diagonal with respect to each vertex of the candidate region proceeds in a predetermined direction, such as counterclockwise or clockwise.
FIGS. 12B and 12C illustrate the diagonal search method in greater detail.
In FIG. 12B, in a case where the feature point is detected along the diagonal in direction (1), if a plurality of pixels contact the diagonal as shown, the pixel (pixel (1) of the drawing) detected last is determined as the feature point. Next, the step is performed in directions (2), (3), and (4), and thereby the desired feature points of the code image region are detected. Meanwhile, in a case where the feature points are detected by the diagonal search method, not all of the desired feature points may be detected. For example, in the case of a quadrangular code image, four feature points should be extracted, but only three feature points may be extracted. In FIG. 12C, in a case where the number of feature points is not sufficient, the step of searching in the vertical and horizontal directions is further performed. In other words, the pixels of the code image region nearest to the boundary of the candidate region are detected in directions (5), (6), (7), and (8). The feature points determined through the diagonal search method may differ from the feature points determined through the search in the vertical/horizontal directions. In such a case, either the averaged value of the two coordinates or one of the two coordinates is selected to determine the feature point.
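The diagonal search can be sketched as follows. This is a simplified illustration, not the patent's exact procedure: it sweeps successive 45-degree anti-diagonals inward from each candidate-region corner and returns the first code-image pixel touched, whereas the text describes taking the last pixel contacting a diagonal.

```python
# Hedged sketch of the diagonal search: from each corner of the candidate
# region, sweep a 45-degree diagonal inward and take the first code-image
# pixel it touches as a feature point (a vertex of the quadrangle).

def diagonal_feature_point(binary, corner):
    h, w = len(binary), len(binary[0])
    cy, cx = corner
    dy = 1 if cy == 0 else -1
    dx = 1 if cx == 0 else -1
    # sweep successive anti-diagonals moving away from the corner
    for d in range(h + w - 1):
        for k in range(d + 1):
            y, x = cy + dy * k, cx + dx * (d - k)
            if 0 <= y < h and 0 <= x < w and binary[y][x] == 1:
                return (y, x)
    return None

img = [[0, 0, 0, 0],
       [0, 1, 1, 0],
       [0, 1, 1, 0],
       [0, 0, 0, 0]]
corners = [(0, 0), (0, 3), (3, 0), (3, 3)]
print([diagonal_feature_point(img, c) for c in corners])
# [(1, 1), (1, 2), (2, 1), (2, 2)]
```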
In the boundary detection method (see FIG. 13), the external boundary of the code image region included in the candidate region is tracked, and thus the feature points, such as the vertices of the code image, are found. First, the coordinates composing the boundary are chosen as candidate coordinates. Next, the slope between adjacent candidate coordinates is checked, and where there is no variation in the slope, the coordinate is removed from the candidate coordinates (the slope-based method). For example, in a case where adjacent candidate coordinates are (x_{i-1}, y_{i-1}), (x_i, y_i), and (x_{i+1}, y_{i+1}), if a_1 = (y_i - y_{i-1})/(x_i - x_{i-1}) and a_2 = (y_{i+1} - y_i)/(x_{i+1} - x_i), and a_1 is the same as a_2, or the difference between a_1 and a_2 is smaller than a predetermined value, the coordinate (x_i, y_i) is not determined as a feature point.
In addition, a distance-based method may be adopted besides the slope-based method. In a case where the distance between adjacent feature point candidate coordinates is smaller than a predetermined distance, the coordinate is removed from the feature point candidate coordinates. That is, in a case where the distance sqrt((x_i - x_{i-1})^2 + (y_i - y_{i-1})^2) between the two adjacent candidate coordinates (x_{i-1}, y_{i-1}) and (x_i, y_i) is smaller than the predetermined value, the coordinate (x_i, y_i) is removed from the candidate coordinates.
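The slope-based and distance-based filters can be sketched together as follows. An illustrative sketch under stated assumptions: the thresholds are hypothetical, and the test boundary avoids vertical segments so that the slope quotient is always defined (a real implementation would need to guard against division by zero).

```python
# Hedged sketch of boundary-vertex filtering: drop a candidate coordinate
# when the slope does not change there (slope-based) or when it lies too
# close to the last kept point (distance-based).

import math

def filter_candidates(coords, slope_eps=1e-6, min_dist=2.0):
    kept = [coords[0]]
    for prev, cur, nxt in zip(coords, coords[1:], coords[2:]):
        a1 = (cur[1] - prev[1]) / (cur[0] - prev[0])
        a2 = (nxt[1] - cur[1]) / (nxt[0] - cur[0])
        if abs(a1 - a2) <= slope_eps:
            continue                      # no bend: not a vertex
        if math.dist(cur, kept[-1]) < min_dist:
            continue                      # too close to the last kept point
        kept.append(cur)
    kept.append(coords[-1])
    return kept

boundary = [(0, 0), (2, 0), (4, 0), (6, 2), (8, 4)]
print(filter_candidates(boundary))  # [(0, 0), (4, 0), (8, 4)]
```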
Closing of the boundary and the angle and length of the boundary are checked using the feature points, and thus the shape and type of the code image can be determined. As an example, in a case where one code image may be formed of 5 x 5 or 8 x 5 cells, the types of the code image may be discriminated according to the ratio of width to length. That is, in a case where the width and length are similar, the code image may be determined to be a square and may be recognized as a 5 x 5 matrix code image. On the other hand, in a case where the difference between the width and length is more than a predetermined value, the code image may be determined to be an 8 x 5 two-dimensional code image. In addition, the center points of the cells composing the code image may be found using the length ratio and slope. It is also determined whether the code image has been properly extracted, in consideration of the size or area and the length ratio of the code image. FIGS. 14A through 14C illustrate the step of dividing the cells belonging to the code image region and finding the center points of the cells. Referring to FIG. 14A, the code image in the present invention is determined to be a 4 x 4 quadrangular image. The width and the length of the code image are each divided by 4, and the coordinates of the center points of the cells divided as shown in FIG. 14B are obtained.
FIG. 14C illustrates an algorithm for searching for the center points of the cells. In a case where the length of one side of the code image is L, and the projected length along the X axis is W and along the Y axis is H when perpendicular lines are drawn toward the X and Y axes on the basis of the side of length L, the X and Y positions used for finding the center of the i-th cell contacting the side are expressed by Equation 6. Here, C is the number of cells positioned in the rows or columns of a color code.

W = L x cos θ,  X_i = ((2i - 1)/(2C)) x W,  i = 1, 2, ..., C
H = L x sin θ,  Y_i = ((2i - 1)/(2C)) x H,  i = 1, 2, ..., C     (6)
The coordinates of the points contacting the sides of the code are obtained by Equation 6, and when each such point is connected to the point of the same order on the opposite side, the point of intersection of the two segments is generated, which is determined as the center point of each cell.
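The contact-point computation of Equation 6 can be sketched as follows. An illustrative sketch: it assumes the i-th cell center falls at (2i - 1)/(2C) of the projected side length, with the side of length L inclined at angle θ to the axes; names and values are hypothetical.

```python
# Hedged sketch of cell-center contact points for a C x C code: points on
# a side of projected lengths W = L*cos(theta) and H = L*sin(theta) are
# taken at (2i - 1)/(2C) of the side for the i-th cell.

import math

def side_contact_points(L, theta_deg, C):
    W = L * math.cos(math.radians(theta_deg))
    H = L * math.sin(math.radians(theta_deg))
    xs = [(2 * i - 1) / (2 * C) * W for i in range(1, C + 1)]
    ys = [(2 * i - 1) / (2 * C) * H for i in range(1, C + 1)]
    return xs, ys

# Side of length 8 at 90 degrees: 4 cell centers along the Y projection.
xs, ys = side_contact_points(L=8, theta_deg=90, C=4)
print([round(v, 3) for v in ys])  # [1.0, 3.0, 5.0, 7.0]
```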
Equation 6 is ideal only for the case where the close-up photographing angle between the code image and the camera is 90 degrees. Thus, in a case where the close-up angle is small (i.e., in a case where the camera is lying down), distortion occurs in the code image, and errors may be generated. For example, in a case where the close-up angle is excessively small, the code image of the original quadrangle is input in the form of a trapezoid. In order to correct this, an extra arithmetic operation may be required, but in most cases the above equation is sufficient. In particular, even when distortion is severe, if the size of the image is large, the center positions of the cells may be found using the above equation or an auxiliary equation. Besides the above method, the boundary line or boundary region between cells, which is inserted when the code image is generated, may be detected in consideration of the distribution of the colors of the pixels in the code image region, and the cells may be discriminated on this basis. In step 68, the decoder 57 decodes the code image using the data found by the feature point-extracting portion 56 and reproduces the code data. The steps of determining the color/shade of each cell and of checking the parity data are performed using the color environmental variables and the analysis information (color mode and type of code image) input by the controller 58. The value of the color or shade detected for each cell is replaced with a corresponding character, number, or symbol. Any abnormality can then be determined through a parity operation, and the corresponding character, number, or symbol is output in a case where there is no abnormality. The above steps are performed on all of the cells, and the desired code data are generated by connecting the characters, numbers, or symbols obtained for each cell.
The position of each cell is checked by the center coordinate of each cell input from the feature point-extracting portion 56, and a predetermined number of pixels are sampled on this basis, and thus the colors can be determined. Colors can be determined using the RGB mode after the averaged value of the sampled pixels is obtained, or colors representing a corresponding cell can be determined after the angle of the colors is obtained using the HSV mode. In the case of color determination using the RGB mode, the value of each of the RGB components of the sampled pixels in the color image is 0 or 255 after undergoing the step of converting colors by the color environmental variables, and thus the colors are in a standard color state. Thus, the color having the highest frequency among the sampled pixels is determined as the color of the corresponding cell. In a case where a color image which has not been converted into the standard colors is input, the color environmental variables are applied to the sampled pixels, the sampled pixels are converted into the standard colors, and the color having the highest frequency is determined as the color of the corresponding cell. In the case of color determination by the HSV mode, the RGB values of the sampled pixels are determined after being converted through HSV conversion.
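The highest-frequency rule for RGB-mode determination can be sketched as follows. An illustrative sketch; the pixels are assumed to already be in the standard color state (each component 0 or 255), and the names are hypothetical.

```python
# Hedged sketch of per-cell color determination: sample pixels near a
# cell's center point and take the most frequent standard color as the
# color of the cell.

from collections import Counter

def cell_color(sampled_pixels):
    """sampled_pixels: list of standard-color RGB tuples (components 0 or
    255 after color-filter conversion)."""
    return Counter(sampled_pixels).most_common(1)[0][0]

samples = [(255, 0, 0)] * 5 + [(0, 0, 255)] * 2 + [(255, 255, 0)]
print(cell_color(samples))  # (255, 0, 0)
```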
In the case of a code image represented with three grey levels, the averaged value of each cell is obtained, and when the averaged values are arranged in order of size, the portions having a relatively high frequency in the distribution of the arranged averaged values converge into three places, with intervals of relatively low frequency among them. The center point of the longest interval and the center point of the second-longest interval are obtained, and then, if the values corresponding to these two center points are compared with the averaged value of each cell, it can be determined to which level (black, grey, or white) each cell belongs. For example, the averaged value of the values for R, G, and B of the pixels sampled from a cell is obtained and may be used as the brightness value. The distribution of the brightness values obtained for each cell is checked and divided into three groups: black, white, and grey. Then, the shade of a cell is determined as the shade nearest to the brightness value of the cell.
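The nearest-shade assignment can be sketched as follows. An illustrative sketch: the three level values are hypothetical fixed anchors, whereas the patent derives them from the brightness distribution of the cells.

```python
# Hedged sketch of three-grey-level classification: each cell's averaged
# brightness is assigned to the nearest of three levels (black, grey,
# white) derived from the distribution of cell brightness values.

def shade_of(cell_brightness,
             levels=(("black", 20), ("grey", 128), ("white", 235))):
    # pick the level whose anchor brightness is closest to the cell's
    return min(levels, key=lambda lv: abs(lv[1] - cell_brightness))[0]

print([shade_of(b) for b in (10, 140, 250)])  # ['black', 'grey', 'white']
```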
In the case of the code image shown in FIG. 2E, the data region is discriminated from the auxiliary regions (the parity region, the reference region, and/or the control region) in the code image. In a case where there is a reference region, the colors, shades, shapes, and patterns represented in each cell are determined using the reference region, and in a case where there is a parity region, the errors of the data cells are determined.
The steps of searching for the shapes, colors, patterns, and characters included in the code image are required in the decoding step, and besides these, a step of correcting a distorted image may be required. Here, color determination is possible using one or more of the red, green, and blue (RGB) mode, the hue, saturation, and value (HSV) mode, the cyan, magenta, and yellow (CMY) mode, and the hue, lightness, and saturation (HLS) mode.
The code values of the cells positioned in the data region and/or the auxiliary regions are extracted for decoding. In a case where there is a reference region, the reference colors (or shades), reference shapes, or reference patterns serving as the reference for interpreting the data in all regions are obtained by detecting the colors or shades of the reference cells. The colors, shapes, and patterns of the cells positioned in the data region, the parity region, or the control region are detected, the difference between the detected values and the reference colors, reference shapes, and/or reference patterns is obtained, and this difference is converted into the code value for each cell. In a case where there is no reference region, the code values corresponding to each cell may be obtained directly from the colors or shades, shapes, and/or patterns read by the image input device. In a case where there is a parity region, the step of checking for parity errors with respect to each row and column of the code image using the code values (that is, the parity data) obtained from the parity region is performed. Environmental variables optimized for commonly used illumination, and their weight values, may be preset and stored in a decoding program or database so that the user can select the environmental variable most suitable for his or her own environment. The occurrence of parity errors can be taken to mean that there are errors in reading the colors with the environmental variables presently set, in which case another environmental variable is adopted to read the colors again. If necessary, the direction or position of the code image may be determined based on the parity data. The code values of each cell obtained by the above steps are converted into code data comprised of recognizable characters, including numbers and symbols, by the code conversion table (see FIG. 3B).
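The parity check can be sketched as follows. An illustrative sketch under an assumed convention: the patent does not specify the parity rule, so an even-parity sum per row, with the last value of each row acting as the parity cell, is used here purely for illustration.

```python
# Hedged sketch of the parity check: with an assumed even-parity rule over
# the code values of each row, a row whose sum is odd signals a color-read
# error, prompting re-reading with another environmental variable.

def parity_ok(rows):
    """rows: lists of per-cell code values, last value = parity cell."""
    return all(sum(row) % 2 == 0 for row in rows)

good = [[1, 0, 1, 0], [1, 1, 1, 1]]
bad = [[1, 0, 1, 1], [1, 1, 1, 1]]
print(parity_ok(good), parity_ok(bad))  # True False
```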
In a case where there is a control region for setting matters related to the commands or services available using the code data in the code image, the commands or services are provided according to the data set in the control region; if not, the basic services designated in a program may be provided. FIG. 15 is a block diagram of the apparatus according to a third embodiment of the present invention, and FIG. 16 is a flow chart illustrating the operation of the apparatus shown in FIG. 15. Compared with the apparatus shown in FIG. 5, the difference is that the apparatus of FIG. 5 first converts the colors of each pixel of the raw image into standard colors by means of a color filter and then performs image processing, whereas the apparatus of FIG. 15 extracts the desired code image region from the raw image and then converts the colors of each pixel (or a pixel sampled among the pixels) belonging to the code image region into the standard colors using the environmental variables by means of the color filter. The other functions and operations are basically similar, and a memory is not shown for convenience. Hereinafter, the differences from the apparatus of FIG. 5 will be described; the rest applies equally as long as there is no problem with the processing order or operation.
The apparatus for recognizing a code shown in FIG. 15 acquires the raw image in which the code image is included in an image-acquiring portion 151, and divides the colors or shades of the raw image into two colors according to a predetermined reference value in a binary-coded conversion portion 153, thereby generating a binary-coded image.
A pre-processing portion 154 sets the color of the portion excluding the code image portion in the binary-coded image to a specific background color, and sets the color of the corresponding portion of the raw image to the background color as well, thereby discriminating the code image portion from the other portions. A feature point-extracting portion 155 extracts a plurality of cells included in the code image portion and recognizes the colors or shades represented in each cell.
A color filter 156 sets the environmental variables in consideration of the environment at the time when the raw image was acquired, corrects the colors or shades recognized in each cell included in the code image portion by the environmental variables, and converts the corrected colors or shades into a plurality of standard colors or standard shades used to generate the code image, thereby generating a standard image represented by the standard colors or standard shades.
A decoder 157 extracts a corresponding character, number, or symbol from the colors or shades recognized in each cell of the code image according to the relationship between characters, numbers, or symbols and the corresponding colors or shades, and generates code data.
The operation of the apparatus shown in FIG. 15 will be described with reference to FIG. 16. In step 161, the raw image in which the code image is contained is acquired. In step 162, the colors or shades of the raw image are divided into two colors according to the predetermined reference value, and thus the binary-coded image is generated. In step 163, the portion excluding the code image portion is represented by the specific background color on the basis of the binary-coded image. In step 164, the portion of the raw image corresponding to the portion represented by the background color in the binary-coded image is processed as the background color, thereby discriminating the code image portion from the other portions. In step 165, the plurality of cells included in the code image portion are extracted, and then the colors or shades represented in each cell are recognized. In step 166, the environmental variables are set in consideration of the environment at the time when the raw image was acquired, and the colors or shades recognized in each cell of the code image are corrected by the environmental variables. In step 167, the corrected colors or shades are converted into a plurality of standard colors or standard shades, which are used to generate the code image, and thus the standard image represented by the standard colors or standard shades is generated. In the present embodiment, since the position of the center point of each cell and the type of the code are already known, predetermined pixels are sampled on the basis of the center point of each cell, and the environmental variables are applied only to the sampled pixels, whereby the standard colors or standard shades of the cell may be discriminated. Preferably, data related to the discriminated standard colors or standard shades of each cell are stored in a memory and are used to generate the code data. As a result, the steps required to generate the standard image may be omitted.
In step 168, a corresponding character, number, or symbol is extracted from the colors or shades recognized in each cell of the code image according to the relationship between characters, numbers, or symbols and corresponding colors or shades, and thus the code data are generated.
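The conversion of step 168 can be sketched as a table lookup after snapping each corrected cell color to its nearest standard color. The color-to-character table below is purely illustrative; the actual relationship between symbols and colors is whatever table the encoder used to generate the code image.

```python
# Hypothetical table relating standard colors to characters; the real
# code table is defined by the encoder that produced the code image.
STANDARD_COLORS = {
    (255, 0, 0): "A",   # red
    (0, 255, 0): "B",   # green
    (0, 0, 255): "C",   # blue
    (0, 0, 0):   "D",   # black
}


def nearest_standard(color):
    """Snap a corrected cell color to the closest standard color
    (squared Euclidean distance in RGB space)."""
    return min(STANDARD_COLORS,
               key=lambda s: sum((a - b) ** 2 for a, b in zip(s, color)))


def decode_cells(cell_colors):
    """Convert the corrected color of each cell into its character
    and concatenate the characters into code data (step 168)."""
    return "".join(STANDARD_COLORS[nearest_standard(c)] for c in cell_colors)
```

Snapping to the nearest standard color is what makes the decoder tolerant of the residual error left after the environmental-variable correction.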
The method for recognizing a code according to the present invention can be embodied as a computer program. The program can be run on a common digital computer and stored in computer readable media. Such media include magnetic media such as a floppy disk or a hard disk, and optical media such as a CD-ROM or a digital video disc (DVD). The program can also be transmitted by carrier waves, for example, over the Internet. In addition, the computer readable media can be distributed over computer systems connected by a network, with the computer readable code stored and executed in a distributed fashion.
Industrial Applicability
As described above, the apparatus for recognizing a code and the method therefor according to the present invention can receive a code image in which predetermined data are encoded as colors or shades, precisely discriminate the original colors or shades regardless of the environment in which the code image was input, and thus obtain the desired code data.
While this invention has been particularly shown and described with reference to preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims

What is claimed is:
1. An apparatus for recognizing a code, the apparatus comprising: an image-acquiring portion for acquiring a raw image in which a code image is contained; a color-converting portion for correcting colors or shades recognized in the raw image, using environmental variables, converting the corrected colors or shades into a plurality of standard colors or standard shades used to generate the code image, and generating a standard image represented by the standard colors or standard shades; a binary-coding converting portion for dividing the colors or shades of the raw image into two colors according to a predetermined reference value and generating a binary-coded image; an image-processing portion for extracting an area excluding a code image area from the binary-coded image, setting a color of the area corresponding to the extracted area in the standard image as a background color, thereby discriminating the code image area from other areas, extracting a plurality of cells included in the code image area, and recognizing the standard color or standard shade represented in each of the cells; and a code-converting portion for extracting a corresponding character, number, or symbol from the color or shade recognized in each of the cells of the code image according to a relationship between a character, number, or symbol and a corresponding color or shade and generating code data.
2. The apparatus of claim 1, wherein the environmental variables of the color-converting portion are parameters set to discriminate colors or shades of the raw image in consideration of an environment in which the raw image is input, and the color-converting portion corrects the value of each of the pixels recognized in the raw image, using the environmental variables.
3. The apparatus of claim 1, wherein the predetermined reference value of the binary-coding converting portion is set to the average of the R, G, and B values of a red green blue (RGB) color mode, the minimum or maximum value among them, the V value of a hue saturation value (HSV) color mode, the I value of a hue saturation intensity (HSI) color mode, or the K value of a cyan magenta yellow black (CMYK) color mode.
4. The apparatus of claim 1, wherein the image-processing portion sets colors of pixels positioned at an edge of the binary-coded image as a background color, and sets colors of pixels connected to the pixels set to the background color as the background color, thereby discriminating the code image area from other areas.
5. The apparatus of claim 1, wherein the image-processing portion receives an image in which the code image area and the other areas are divided from one another by the background color, divides the image into a plurality of blocks, detects a region having a color or shade that is not the background color from each of the blocks, selects a block having the largest such region among the plurality of blocks, detects a center point of the code image area contained in the block, and searches the entire image on the basis of the center point, thereby detecting the region having a color or shade that is not the background color as the code image area.
6. The apparatus of claim 1, wherein the code-converting portion recognizes a color or shade having the greatest distribution among the colors or shades of pixels belonging to each cell, as the color or shade of a corresponding cell.
7. An apparatus for recognizing a code, the apparatus comprising: an image-acquiring portion for acquiring a raw image in which a code image is contained; a binary-coding converting portion for dividing the colors or shades of the raw image into two colors according to a predetermined reference value and generating a binary-coded image; an image-processing portion for extracting an area excluding a code image area from the binary-coded image, setting a color of the area corresponding to the extracted area in the standard image as a background color, thereby discriminating the code image area from other areas, extracting a plurality of cells included in the code image area, and recognizing the standard color or standard shade represented in each of the cells; a color-converting portion for correcting colors or shades recognized with respect to each of the cells included in the code image area, using environmental variables, and converting the corrected colors or shades into a plurality of standard colors or standard shades used to generate the code image; and a code-converting portion for extracting a corresponding character, number, or symbol from the color or shade recognized in each of the cells of the code image according to a relationship between the character, number, or symbol and the corresponding color or shade and generating code data.
8. The apparatus of claim 7, wherein the environmental variables of the color-converting portion are parameters set to discriminate colors or shades of the raw image in consideration of an environment in which the raw image is input, and the color-converting portion corrects the value of each of the pixels recognized in the raw image by the environmental variables.
9. The apparatus of claim 7, wherein the image-processing portion sets colors of pixels positioned at an edge of the binary-coded image as a background color, and sets colors of pixels connected to the pixels set to the background color as the background color, thereby discriminating the code image area from other areas.
10. The apparatus of claim 7, wherein the image-processing portion receives an image in which the code image area and the other areas are divided into one another by the background color, divides the image into a plurality of blocks, detects a region having a color or shade that is not the background color, from each of the blocks, selects a block having the largest region among the blocks, detects a center point of the code image area contained in the block, and searches the entire image on the basis of the center point, thereby detecting the region having a color or shade that is not the background color as the code image area.
11. A method for recognizing a code, the method comprising the steps of: acquiring a raw image in which a code image is contained; correcting colors or shades recognized from the raw image, using environmental variables; converting the corrected colors or shades into a plurality of standard colors or standard shades used to generate the code image, and generating a standard image represented by the standard colors or standard shades; dividing the colors or shades of the raw image into two colors according to a predetermined reference value and generating a binary-coded image; extracting an area excluding a code image area from the binary-coded image, and setting a color of the area corresponding to the extracted area in the standard image as a background color, thereby discriminating the code image area from other areas; extracting a plurality of cells included in the code image area, and recognizing the standard color or standard shade represented in each of the cells; and extracting a corresponding character, number, or symbol from the color or shade recognized in each of the cells of the code image according to a relationship between the character, number, or symbol and the corresponding color or shade and generating code data.
12. A method for recognizing a code, the method comprising the steps of: acquiring a raw image in which a code image is contained; dividing the colors or shades of the raw image into two colors according to a predetermined reference value and generating a binary-coded image; extracting an area excluding a code image area from the binary-coded image, and setting a color of the area corresponding to the extracted area in the standard image as a background color, thereby discriminating the code image area from other areas; extracting a plurality of cells included in the code image portion, and recognizing the standard color or standard shade represented in each of the cells; correcting colors or shades recognized from the raw image with respect to each of the cells included in the code image area, using environmental variables; converting the corrected colors or shades into a plurality of standard colors or standard shades used to generate the code image; and extracting a corresponding character, number, or symbol from the color or shade recognized in each of the cells of the code image according to a relationship between the character, number, or symbol and the corresponding color or shade and generating code data.
13. A computer readable medium for implementing the method of claim 11.
14. A computer readable medium for implementing the method of claim 12.
15. An apparatus for recognizing a code, the apparatus comprising: an image-acquiring portion for acquiring a raw image in which a code image is contained; an image-processing portion for extracting the code image from the raw image and recognizing colors or shades represented in a plurality of cells included in the code image; a code-setting portion in which a relationship between a character, number, or symbol and a corresponding color or shade is set; and a code-converting portion for extracting the corresponding character, number, or symbol from the color or shade represented in each of the cells of the code image according to the relationship set in the code-setting portion and generating code data.
16. The apparatus of claim 15, wherein the image-processing portion sets environmental variables in consideration of an environment at the time when the raw image is acquired, and corrects the colors or shades recognized in the raw image, using the environmental variables.
17. The apparatus of claim 15, wherein the image-processing portion extracts a code image region, in which a background image portion is excluded, from the raw image, discriminates the shape and type of the code image, and discriminates cells included in the code image region on this basis.
18. The apparatus of claim 15, wherein the image-processing portion converts the raw image into a binary-coded image on the basis of the environmental variables, which are set according to the degree of brightness in a state where the raw image is input, extracts the background image area from the binary-coded image, sets a color of the area corresponding to the extracted area in the raw image as a background color, thereby discriminating a code image area from other areas and extracts the code image region from the raw image.
19. A method for recognizing a code, the method comprising the steps of: receiving a raw image in which a code image is contained; detecting a background image included in the raw image and extracting a code image region in which a background image is excluded; discriminating the shape and kind of the code image from the code image region; discriminating cells included in the code image region; recognizing the color or shade represented in each of the cells; and converting the color or shade recognized in each of the cells into a corresponding character, number, or symbol and generating code data.
20. A computer readable medium for implementing the method of claim 19.
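Claim 6 describes recognizing a cell's color as the one with the greatest distribution among the colors of the pixels belonging to that cell. A minimal majority-vote sketch of that idea follows; the function name and the tuple representation of colors are hypothetical, not part of the claimed apparatus.

```python
from collections import Counter


def cell_color(pixels):
    """Recognize a cell's color as the color having the greatest
    distribution (i.e. the most frequent value) among the pixels
    sampled from that cell."""
    return Counter(pixels).most_common(1)[0][0]
```

Taking the most frequent color rather than, say, an average makes the per-cell decision robust to a minority of noisy or edge-straddling pixels.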
PCT/KR2002/000886 2001-11-03 2002-05-13 Apparatus and method for recognizing code WO2003041014A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
EP02802741A EP1456816B1 (en) 2001-11-03 2002-05-13 Apparatus and method for recognizing code
DE60225329T DE60225329T2 (en) 2001-11-03 2002-05-13 DEVICE AND METHOD FOR DETECTING CODE
JP2003542972A JP4016342B2 (en) 2001-11-03 2002-05-13 Apparatus and method for code recognition
US10/492,305 US6981644B2 (en) 2001-11-03 2002-05-13 Apparatus and method for recognizing code
HK05105479A HK1072826A1 (en) 2001-11-03 2005-06-29 Apparatus and method for recognizing code

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR1020010068378A KR100339691B1 (en) 2001-11-03 2001-11-03 Apparatus for recognizing code and method therefor
KR2001-0068378 2001-11-03

Publications (1)

Publication Number Publication Date
WO2003041014A1 true WO2003041014A1 (en) 2003-05-15

Family

ID=19715674

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2002/000886 WO2003041014A1 (en) 2001-11-03 2002-05-13 Apparatus and method for recognizing code

Country Status (10)

Country Link
US (1) US6981644B2 (en)
EP (1) EP1456816B1 (en)
JP (1) JP4016342B2 (en)
KR (1) KR100339691B1 (en)
CN (1) CN1310187C (en)
AT (1) ATE387676T1 (en)
DE (1) DE60225329T2 (en)
ES (1) ES2303566T3 (en)
HK (1) HK1072826A1 (en)
WO (1) WO2003041014A1 (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006134337A (en) * 2004-11-05 2006-05-25 Colorzip Media Inc Method and apparatus for decoding mixed code, and recording medium
EP1679638A1 (en) * 2005-01-07 2006-07-12 Samsung Electronics Co.,Ltd. Improved apparatus and method for recognizing pattern data
US7321688B2 (en) * 2000-06-09 2008-01-22 Minolta Co., Ltd. Image processor for character recognition
GB2416239B (en) * 2004-07-12 2009-02-25 Sunplus Technology Co Ltd Document with indexes and associated document reader system
US7612918B2 (en) 2005-03-29 2009-11-03 Kabushiki Kaisha Toshiba Image processing apparatus
NO20110508A1 (en) * 2011-04-01 2012-09-03 Envac Optibag Ab Procedure and system for identifying waste containers on the basis of a pattern
US8534560B2 (en) 2010-07-30 2013-09-17 Shift Co. Ltd. Two-dimensional code reader and program
US9405992B2 (en) 2011-04-01 2016-08-02 Envac Optibag Ab Method and system for identifying waste containers based on pattern
WO2018021901A1 (en) * 2016-07-27 2018-02-01 Université Abdelmalek Essaâdi Method of remote identification of the qr code by means of a camera.

Families Citing this family (68)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8682077B1 (en) 2000-11-28 2014-03-25 Hand Held Products, Inc. Method for omnidirectional processing of 2D images including recognizable characters
JP2004165863A (en) * 2002-11-12 2004-06-10 Murata Mach Ltd Color image transmission apparatus
JP3996520B2 (en) * 2003-01-30 2007-10-24 株式会社デンソーウェーブ Two-dimensional information code and generation method thereof
JP3880553B2 (en) * 2003-07-31 2007-02-14 キヤノン株式会社 Image processing method and apparatus
JP4205554B2 (en) * 2003-11-11 2009-01-07 富士フイルム株式会社 Form processing device
US20060081711A1 (en) * 2004-09-30 2006-04-20 Junxiang Zhao Color-identifying system for colored barcode and a method thereof
US20060180672A1 (en) * 2005-02-11 2006-08-17 Chu Lonny L Method and system for multi-dimensional symbol coding system
KR100702292B1 (en) * 2005-05-03 2007-03-30 (주)아이미디어아이앤씨 Image code and method and apparatus for recognizing thereof
KR100703527B1 (en) * 2005-07-12 2007-04-03 삼성전자주식회사 Image pre-processing method for an efficient hotcode patternrecognition
DE102005034374B3 (en) * 2005-07-22 2007-03-01 Siemens Ag A method for automatically creating a background mask in images with noisy background areas, applications therefor and an MRI apparatus for performing the method and computer software product
US8789756B2 (en) * 2006-02-25 2014-07-29 Roche Diagnostics Operations, Inc. Test element coding apparatuses, systems and methods
EP1826705A1 (en) 2006-02-25 2007-08-29 F.Hoffmann-La Roche Ag Analytical consumables and arrangement for reading information
KR100914515B1 (en) 2006-06-23 2009-09-02 주식회사 칼라짚미디어 Color classification method for color based image code recognition
SG138575A1 (en) * 2006-06-23 2008-01-28 Colorzip Media Inc Method of classifying colors of color based image code
BRPI0714333A2 (en) * 2006-07-19 2013-05-07 B Core Inc attic symbol, article to which the attic symbol is attached, method for attaching the attic symbol to the article, attic symbol decoding method, relative device, and relative program
WO2009060942A1 (en) * 2007-11-09 2009-05-14 B-Core Inc. Optical recognition code, its marking method, its reading-out method, article marked with optical recognition code, color recognition method, program, automatic recognition code by color arrangement and its attached article
US8172145B2 (en) * 2007-11-20 2012-05-08 Datalogic ADC, Inc. Enhanced virtual scan line processing
KR101428054B1 (en) 2007-12-13 2014-08-07 주식회사 엘지씨엔에스 Apparatus and method for media image detection, and system with the same
JP4910011B2 (en) * 2009-03-24 2012-04-04 ビーコア株式会社 Optical symbol and article with it
GB0812266D0 (en) * 2008-07-04 2008-08-13 Tyco Electronics Raychem Nv Improvements in or relating to an optical fibre organiser tray
KR100963240B1 (en) 2008-08-07 2010-06-10 광주과학기술원 Method and apparatus for recognizing color marker
US8554579B2 (en) 2008-10-13 2013-10-08 Fht, Inc. Management, reporting and benchmarking of medication preparation
US8459556B2 (en) * 2009-01-09 2013-06-11 Datalogic ADC, Inc. Prioritized virtual scan line processing
KR100920663B1 (en) * 2009-01-22 2009-10-14 (주) 이토프 Method For Recognizing Of 2-dimensional Code
KR101701170B1 (en) 2009-09-30 2017-02-01 시프트 코. 엘티디. Two-dimensional code, two-dimensional code reader, and program
JP4499825B1 (en) * 2009-09-30 2010-07-07 広行 遠藤 2D code, 2D code reader and program
US20130026240A1 (en) * 2010-01-18 2013-01-31 Manabu Hagiwara Two-dimensional code, code generation system, program, and printed medium
KR101169140B1 (en) 2010-02-17 2012-07-30 고려대학교 산학협력단 Apparatus and method for generating image for text region extraction
US8505823B2 (en) * 2010-06-30 2013-08-13 International Business Machine Corporation Noise removal from color barcode images
CN102646187B (en) * 2011-02-20 2016-08-03 深圳市心艺来文化有限公司 Color graphics coding and recognition methods
CN106267472B (en) 2011-07-15 2019-08-30 赛诺菲-安万特德国有限公司 Drug delivery device with electromechanical driving mechanism
US9897980B2 (en) * 2011-07-15 2018-02-20 Sanofi-Aventis Deutschland Gmbh Drug delivery device
KR101200378B1 (en) 2011-08-30 2012-11-12 인하대학교 산학협력단 A robust texture feature extraction using the localized angular phase
MX2014004073A (en) 2011-10-10 2014-09-11 Yewon Comm Co Ltd Device and method for automatically identifying a qr code.
US9111186B2 (en) 2011-10-12 2015-08-18 University Of Rochester Color barcodes for mobile applications: a per channel framework
US8915440B2 (en) 2011-12-23 2014-12-23 Konica Minolta Laboratory U.S.A., Inc. Four dimensional (4D) color barcode for high capacity data encoding and decoding
US8931700B2 (en) 2011-12-23 2015-01-13 Konica Minolta Laboratory U.S.A., Inc. Four dimensional (4D) color barcode for high capacity data encoding and decoding
JP5907823B2 (en) * 2012-06-29 2016-04-26 シャープ株式会社 Cooker
JP5904889B2 (en) * 2012-06-29 2016-04-20 シャープ株式会社 Information processing device
EP3779876A1 (en) 2012-10-26 2021-02-17 Baxter Corporation Englewood Improved image acquisition for medical dose preparation system
KR101623326B1 (en) 2012-10-26 2016-05-20 백스터 코포레이션 잉글우드 Improved work station for medical dose preparation system
JP5822411B2 (en) * 2013-08-12 2015-11-24 株式会社アポロジャパン Image information code conversion apparatus, image information code conversion method, image related information providing system using image code, image information code conversion program, and recording medium recording the program
GB2518443A (en) * 2013-09-24 2015-03-25 Ibm Method for detecting phishing of a matrix barcode
US10235596B2 (en) * 2013-11-06 2019-03-19 Research & Business Foundation Sungkyunkwan University System and method for transferring data using image code, outputting image code on display device, and decoding image code
AU2015284368A1 (en) 2014-06-30 2017-01-12 Baxter Corporation Englewood Managed medical information exchange
US11107574B2 (en) 2014-09-30 2021-08-31 Baxter Corporation Englewood Management of medication preparation with formulary management
US11575673B2 (en) 2014-09-30 2023-02-07 Baxter Corporation Englewood Central user management in a distributed healthcare information management system
CN104361381A (en) * 2014-11-17 2015-02-18 深圳市华鑫精工机械技术有限公司 Double-interface card and method for packaging double-interface card
SG11201704359VA (en) 2014-12-05 2017-06-29 Baxter Corp Englewood Dose preparation data analytics
TWI533227B (en) * 2015-02-26 2016-05-11 楊東華 Method and system for color encoding and decoding
EP3800610A1 (en) 2015-03-03 2021-04-07 Baxter Corporation Englewood Pharmacy workflow management with integrated alerts
US9805662B2 (en) * 2015-03-23 2017-10-31 Intel Corporation Content adaptive backlight power saving technology
USD790727S1 (en) 2015-04-24 2017-06-27 Baxter Corporation Englewood Platform for medical dose preparation
US10628736B2 (en) * 2015-09-24 2020-04-21 Huron Technologies International Inc. Systems and methods for barcode annotations for digital images
WO2017082607A1 (en) 2015-11-09 2017-05-18 Samsung Electronics Co., Ltd. Electronic device and operating method of the same
KR20170054900A (en) * 2015-11-10 2017-05-18 삼성전자주식회사 Display apparatus and control method thereof
KR20170121915A (en) * 2016-04-26 2017-11-03 예스튜디오 주식회사 Method and System for Painting using Smart Pen
US9959586B1 (en) * 2016-12-13 2018-05-01 GoAnimate, Inc. System, method, and computer program for encoding and decoding a unique signature in a video file as a set of watermarks
JP2018136903A (en) * 2017-02-21 2018-08-30 エンゼルプレイングカード株式会社 System for counting number of gaming-purpose substitute currency
WO2020093152A1 (en) 2018-11-05 2020-05-14 Hamid Reza Tizhoosh Systems and methods of managing medical images
US10896306B2 (en) * 2019-04-29 2021-01-19 Ncr Corporation Barcode scanner optimization
CN111046996B (en) * 2019-11-27 2023-08-04 湖南省科腾信安智能科技有限公司 Color QR code generation and identification method
KR102214421B1 (en) * 2020-04-20 2021-02-09 조영근 System for editing video using absolute time and method thereof
EP4252190A4 (en) 2020-11-24 2024-09-11 Huron Tech International Inc Systems and methods for generating encoded representations for multiple magnifications of image data
US12118120B2 (en) 2021-05-17 2024-10-15 Bank Of America Corporation Prevention of unauthorized access to information
US11934554B2 (en) 2021-05-17 2024-03-19 Bank Of America Corporation Information security by preventing unauthorized data access
KR102490756B1 (en) * 2022-08-08 2023-01-27 미러 주식회사 A server providing encrypted document content, a viewer device of the document contents
CN116778195B (en) * 2023-08-16 2023-11-24 北京华源技术有限公司 Equipment identification method and system based on color codes

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH11168616A (en) * 1997-12-03 1999-06-22 Toshiba Corp Image information processing method and image information processor
JPH11355554A (en) * 1998-06-11 1999-12-24 Toshiba Corp Picture information processing method and forgery preventing method of certificate and the like

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3894217A (en) * 1973-12-27 1975-07-08 Nippon Electric Co Device for discriminating color coded articles
US5343028A (en) 1992-08-10 1994-08-30 United Parcel Service Of America, Inc. Method and apparatus for detecting and decoding bar code symbols using two-dimensional digital pixel images
US5352878A (en) 1993-01-29 1994-10-04 United Parcel Service Of America, Inc. Method and apparatus for decoding bar code symbols using independent bar and space analysis
CN1179752A (en) * 1995-03-31 1998-04-22 基维软件程序有限公司 Machine-readable label
JP3676887B2 (en) * 1996-08-08 2005-07-27 理想科学工業株式会社 Color two-dimensional code and color two-dimensional code creation device
GB9806767D0 (en) * 1998-03-31 1998-05-27 Philips Electronics Nv Pixel colour valve encoding and decoding
JP3459770B2 (en) * 1998-04-10 2003-10-27 キヤノン株式会社 Image reading apparatus and method, recording medium
US6081739A (en) * 1998-05-21 2000-06-27 Lemchen; Marc S. Scanning device or methodology to produce an image incorporating correlated superficial, three dimensional surface and x-ray images and measurements of an object
US7023587B2 (en) * 1999-11-12 2006-04-04 Nikon Corporation Image scanning apparatus, recording medium which stores image scanning programs, and data structure
WO2001082593A1 (en) * 2000-04-24 2001-11-01 The Government Of The United States Of America, As Represented By The Secretary Of The Navy Apparatus and method for color image fusion


Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7454060B2 (en) 2000-06-09 2008-11-18 Minolta Co., Ltd. Image processor for character recognition
US7321688B2 (en) * 2000-06-09 2008-01-22 Minolta Co., Ltd. Image processor for character recognition
GB2416239B (en) * 2004-07-12 2009-02-25 Sunplus Technology Co Ltd Document with indexes and associated document reader system
USRE44139E1 (en) 2004-11-05 2013-04-09 Colorzip Media, Inc. Method and apparatus for decoding mixed code
JP2006134337A (en) * 2004-11-05 2006-05-25 Colorzip Media Inc Method and apparatus for decoding mixed code, and recording medium
US7751629B2 (en) 2004-11-05 2010-07-06 Colorzip Media, Inc. Method and apparatus for decoding mixed code
JP4515999B2 (en) * 2004-11-05 2010-08-04 株式会社カラージップメディア Mixed code decoding method and apparatus, and recording medium
EP2290581A1 (en) * 2005-01-07 2011-03-02 Samsung Electronics Co., Ltd. Improved apparatus and method for recognizing pattern data
US8098935B2 (en) 2005-01-07 2012-01-17 Samsung Electronics Co., Ltd Apparatus and method for recognizing pattern data in a mobile terminal
EP1679638A1 (en) * 2005-01-07 2006-07-12 Samsung Electronics Co.,Ltd. Improved apparatus and method for recognizing pattern data
US7612918B2 (en) 2005-03-29 2009-11-03 Kabushiki Kaisha Toshiba Image processing apparatus
US8534560B2 (en) 2010-07-30 2013-09-17 Shift Co. Ltd. Two-dimensional code reader and program
NO332365B1 (en) * 2011-04-01 2012-09-03 Envac Optibag Ab Process and system for identifying waste containers on the basis of samples
NO20110508A1 (en) * 2011-04-01 2012-09-03 Envac Optibag Ab Procedure and system for identifying waste containers on the basis of a pattern
US9405992B2 (en) 2011-04-01 2016-08-02 Envac Optibag Ab Method and system for identifying waste containers based on pattern
EP2694223B1 (en) * 2011-04-01 2021-01-27 Envac Optibag AB Method and system for identifying waste containers based on pattern
WO2018021901A1 (en) * 2016-07-27 2018-02-01 Université Abdelmalek Essaâdi Method of remote identification of the qr code by means of a camera.

Also Published As

Publication number Publication date
CN1578969A (en) 2005-02-09
EP1456816A1 (en) 2004-09-15
DE60225329T2 (en) 2009-02-19
EP1456816B1 (en) 2008-02-27
ES2303566T3 (en) 2008-08-16
CN1310187C (en) 2007-04-11
KR100339691B1 (en) 2002-06-07
US20050001033A1 (en) 2005-01-06
JP4016342B2 (en) 2007-12-05
EP1456816A4 (en) 2005-02-02
KR20010113578A (en) 2001-12-28
JP2005509223A (en) 2005-04-07
DE60225329D1 (en) 2008-04-10
HK1072826A1 (en) 2005-09-09
ATE387676T1 (en) 2008-03-15
US6981644B2 (en) 2006-01-03

Similar Documents

Publication Publication Date Title
US6981644B2 (en) Apparatus and method for recognizing code
JP5875637B2 (en) Image processing apparatus and image processing method
US7221790B2 (en) Processing for accurate reproduction of symbols and other high-frequency areas in a color image
US20020037100A1 (en) Image processing apparatus and method
JP5337563B2 (en) Form recognition method and apparatus
JP2005150855A (en) Color image compression method and color image compression apparatus
JP4366003B2 (en) Image processing apparatus and image processing method
JP2011154698A (en) Method of discriminating color of color based image code
JP2008004101A (en) Method of discriminating colors of color based image code
JP2008099149A (en) Image processor, image processing method and image processing program
US20010036317A1 (en) Apparatus and method for detecting a pattern
JP2018139457A (en) Image processing apparatus, control method for image processing and program
JP2003008909A (en) Image processor, image processing method, program and storage medium
US8472716B2 (en) Block-based noise detection and reduction method with pixel level classification granularity
KR100353872B1 (en) Machine readable code image and method for encoding and decoding thereof
JP4250316B2 (en) Image compression apparatus, image expansion apparatus, method thereof, and storage medium
JP4228905B2 (en) Image processing apparatus and program
JP4383187B2 (en) Image processing apparatus, image processing program, and storage medium
AU2007249099B2 (en) Block-based noise detection and reduction method with pixel level classification granularity
JP3759349B2 (en) Image analysis apparatus and image analysis method
JPH08212296A (en) Optical character reader
JP2001134773A (en) Device and method for processing image
JP2001312726A (en) Image processor and image processing method
JP2008005177A (en) Image processor, image processing method, and image processing program
JP2000099627A (en) Device for reading character

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US UZ VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
WWE Wipo information: entry into national phase

Ref document number: 10492305

Country of ref document: US

WWE Wipo information: entry into national phase

Ref document number: 20028215982

Country of ref document: CN

WWE Wipo information: entry into national phase

Ref document number: 2003542972

Country of ref document: JP

WWE Wipo information: entry into national phase

Ref document number: 2002802741

Country of ref document: EP

WWP Wipo information: published in national office

Ref document number: 2002802741

Country of ref document: EP

WWG Wipo information: grant in national office

Ref document number: 2002802741

Country of ref document: EP