WO2010059449A2 - High-resolution interpolation for color-imager-based optical code readers - Google Patents

High-resolution interpolation for color-imager-based optical code readers

Info

Publication number
WO2010059449A2
WO2010059449A2 PCT/US2009/063713
Authority
WO
WIPO (PCT)
Prior art keywords
pixels
axes
optical code
green
interpolated
Prior art date
Application number
PCT/US2009/063713
Other languages
English (en)
Other versions
WO2010059449A3 (fr)
Inventor
Craig D. Cherry
Original Assignee
Datalogic Scanning, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Datalogic Scanning, Inc.
Publication of WO2010059449A2
Publication of WO2010059449A3

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/12 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using a selected wavelength, e.g. to sense red marks and ignore blue marks
    • G06K7/14 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 Methods for optical code recognition
    • G06K7/1408 Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K7/1417 2D bar codes
    • G06K7/146 Methods for optical code recognition the method including quality enhancement steps
    • G06K7/1469 Methods for optical code recognition the method including quality enhancement steps using sub-pixel interpolation

Definitions

  • The field of this disclosure relates generally to imaging and, more particularly but not exclusively, to systems and methods for reading optical codes.
  • Optical codes encode useful, optically-readable information typically about the items to which they are attached or otherwise associated.
  • One common example of an optical code is the bar code.
  • Bar codes are ubiquitously found on or associated with objects of various types, such as the packaging of retail, wholesale, and inventory goods; retail product presentation fixtures (e.g., shelves); goods undergoing manufacturing; personal or company assets; and documents.
  • A bar code typically serves as an identifier of an object, whether the identification is of a class of objects (e.g., containers of milk) or a unique item (e.g., U.S. Pat. No. 7,201,322).
  • A typical linear or one-dimensional bar code, such as a UPC code, consists of alternating bars (i.e., relatively dark areas) and spaces (i.e., relatively light areas).
  • In a UPC code, for example, the pattern of alternating bars and spaces, and the widths of those bars and spaces, represent a string of binary ones and zeros, wherein the width of any particular bar or space is an integer multiple of a specified minimum width, which is called a "module" or "unit."
  • To decode the information, a bar code reader must be able to reliably discern the pattern of bars and spaces, such as by determining the locations of the edges demarking adjacent bars and spaces from one another, across the entire length of the bar code.
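The module-width encoding described above can be illustrated with a short sketch (Python, purely for illustration; the function name and the bar/space-to-bit mapping below are this sketch's own conventions, not the actual UPC symbology tables):

```python
def runs_to_bits(run_widths_modules):
    """Expand alternating bar/space run widths (in modules) into a bit string.

    Even-indexed runs are treated as bars (1s) and odd-indexed runs as
    spaces (0s); this illustrates the "module" concept only.
    """
    bits = []
    for i, width in enumerate(run_widths_modules):
        bit = "1" if i % 2 == 0 else "0"
        bits.append(bit * width)
    return "".join(bits)

# A bar of 2 modules, a space of 1, a bar of 1, and a space of 3:
print(runs_to_bits([2, 1, 1, 3]))  # -> 1101000
```

A reader effectively performs the inverse operation: it measures edge-to-edge distances, quantizes them to module counts, and recovers the bit pattern.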
  • Linear bar codes are just one example of the many types of optical codes in use today.
  • Higher-dimensional optical codes, such as two-dimensional matrix codes (e.g., MaxiCode) or stacked codes (e.g., PDF417), which are also sometimes referred to as "bar codes," are also used for various purposes.
  • Different types of optical code readers are available for capturing an optical code and for decoding the information represented by the optical code.
  • For example, image-based optical code readers are available that include imagers, such as charge-coupled devices (CCDs) or complementary metal oxide semiconductor (CMOS) imagers, that generate electronic image data representing an image of a captured optical code.
  • Image-based optical code readers are used for reading one-dimensional optical codes and higher-dimensional optical codes. Because optical codes most often include dark and light patterns (e.g., black and white) that represent binary data, the imagers of image-based optical code readers are typically monochrome so that uniform sensitivity for each pixel of the imager is achieved.
  • Common imagers made for image capturing devices, however, are primarily color imagers, not monochrome. Because imagers made for many image capturing devices are color, color imagers are generally produced in higher volume, have become more widely available, and may be less expensive than monochrome imagers.
  • Common color imagers include a color filter array placed over pixel sensors arranged in a grid of rows and columns. A typical color filter pattern is the Bayer pattern 100 shown in Figure 1 and described in U.S. Patent No. 3,971 ,065 to Bayer.
  • the Bayer pattern 100 includes red filters (corresponding to red (R) pixels 102), green filters (corresponding to green (G) pixels 104), and blue filters (corresponding to blue (B) pixels 106) that pass, respectively, red, green, and blue wavelengths of light.
  • Each pixel of the color imager outputs data representing only a red, green, or blue intensity value, and, thus, each pixel contains only one-third of the total color data for the pixel location.
  • Still cameras and video cameras use demosaicing algorithms to determine red, green, and blue luminance information for each pixel location.
  • For example, a bilinear interpolation method is known in which a green intensity value is generated for the location of red pixel 102a by calculating the mean intensity value of the four adjacent green pixels 104a, 104b, 104c, and 104d. Because this bilinear interpolation method results in visible artifacts, other more complicated interpolation methods, such as bicubic interpolation and adaptive algorithms, have been used for still cameras and video cameras.
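The four-neighbor bilinear method just described can be sketched as follows (a minimal illustration; it assumes the raw Bayer data is held in a 2-D array and that the addressed location is an interior red or blue site):

```python
import numpy as np

def bilinear_green(mosaic, row, col):
    """Estimate the green value at a non-green Bayer site as the mean of the
    four axis-adjacent green neighbors, per the bilinear method described
    above. `mosaic` is a 2-D array of raw sensor values; (row, col) must be
    an interior red or blue pixel location."""
    return (mosaic[row - 1, col] + mosaic[row + 1, col]
            + mosaic[row, col - 1] + mosaic[row, col + 1]) / 4.0
```

Because the four neighbors straddle the site both horizontally and vertically, this averaging blurs detail in both directions, which is the resolution loss the disclosure aims to avoid.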
  • Some image-based optical code readers have included color imagers, but the present inventor has recognized that the interpolation methods used with color-imager-based optical code readers degrade the spatial resolution of optical code images, making it difficult to accurately detect the edge locations of optical codes.
  • As a result, a typical color-imager-based optical code reader that has the same total number of pixels as a monochrome-imager-based optical code reader has lower spatial resolution than the monochrome-imager-based reader due to the color interpolation process.
  • Alternatively, the typical color-imager-based optical code reader must have more total pixels than the monochrome-imager-based optical code reader to compensate for the degrading effects of the color interpolation process.
  • One embodiment discloses a method of processing image data that represent light intensity values sensed by different pixels of a color image sensor array used in an optical code reader to improve edge location detection accuracy of optical code elements.
  • The color image sensor array includes first and second sets of pixels.
  • The pixels of the first and second sets are arranged along multiple parallel axes of a first axes group and along multiple parallel axes of a second axes group transverse to the first axes group, in a checkerboard pattern.
  • The first set of pixels and the second set of pixels are sensitive to visible wavelength bands of light that are different from one another.
  • The pixels of the first set sense light reflected from an optical code that is positioned within a field of view of the optical code reader.
  • The reflected light forms an image of the optical code on the color image sensor array.
  • The image includes a pattern of dark and light elements whose edges, which are to be detected, are oriented appreciably parallel to the axes of the first axes group.
  • A first set of image data representing intensity values sensed by the first set of pixels is produced, and a second set of image data is produced from the first set of image data.
  • The second set of image data represents interpolated intensity values that correspond to locations of selected pixels of the second set.
  • For each selected pixel, a corresponding interpolated intensity value is produced by using only intensity values sensed by pixels of the first set that share an axis of the first axes group with the selected pixel, to thereby preserve spatial resolution along the second axes group and improve the location detection accuracy of the edges of the dark and light elements of the optical code.
  • Figure 1 is a diagram showing a Bayer pattern of a color filter array.
  • Figure 2 is a block diagram of an optical code reader according to an embodiment.
  • Figure 3 is a diagram showing a color image sensor array used in the optical code reader of Figure 2, together with multiple axes corresponding to columns and rows of pixels of the color image sensor array.
  • Figures 4, 5, and 6 are examples of different optical codes that may be read by the optical code reader of Figure 2.
  • Figure 7 shows the pixels of the color image sensor array, together with an image of a portion of an optical code, to demonstrate an interpolation method according to an embodiment.
  • Figure 8 shows an image of a portion of an optical code superimposed on, and rotated with respect to, the multiple axes of Figure 3.
  • Figure 9 is a flow chart of the steps of an interpolation method according to an embodiment.
  • FIG. 2 is a block diagram of an optical code reader 200 according to one embodiment.
  • The optical code reader 200 may be any type of reader, such as, but not limited to, a hand-held reader, a stationary reader, or a personal digital assistant (PDA) reader.
  • The optical code reader 200 includes a color image sensor array 202 of red pixels 304, green pixels 306, and blue pixels 308, shown in Figure 3.
  • The color image sensor array 202 may be a charge-coupled device (CCD), such as a full-frame, frame-transfer, or interline-transfer CCD.
  • Alternatively, the color image sensor array 202 may be a complementary metal oxide semiconductor (CMOS) imager, such as a global-shutter or rolling-reset CMOS imager.
  • The red, green, and blue pixels 304, 306, and 308 of the color image sensor array 202 are arranged along multiple parallel axes of a first axes group 310 and along multiple parallel axes of a second axes group 312 transverse to the first axes group 310, in a Bayer pattern.
  • The green pixels 306 are positioned at every other pixel location, forming a checkerboard pattern with the combination of the red pixels 304 and blue pixels 308.
  • Figure 3 illustrates the first and second axes groups 310 and 312 and the pixels 304, 306, and 308 of the color image sensor array 202.
  • Each axis of the first axes group 310 corresponds to a column of red and green pixels 304 and 306 or blue and green pixels 308 and 306, and each axis of the second axes group 312 corresponds to a row of red and green pixels 304 and 306 or blue and green pixels 308 and 306.
  • Each row of pixels corresponds to a scan line of the color image sensor array 202.
  • Alternatively, the first axes group 310 may correspond to the rows of pixels (i.e., the scan lines), and the second axes group 312 may correspond to the columns of pixels.
  • Figure 3 shows ten columns (i.e., ten parallel axes of the first axes group 310) and eight rows (i.e., eight parallel axes of the second axes group 312).
  • In practice, the color image sensor array 202 may contain many more columns and rows than shown in Figure 3.
  • For example, the color image sensor array 202 may contain one or more megapixels.
  • The red pixels 304 of the color image sensor array 202 are sensitive to visible light having wavelengths that correspond to the color red (wavelengths ranging between about 600 nanometers (nm) and about 750 nm).
  • The green pixels 306 are sensitive to visible light having wavelengths that correspond to the color green (wavelengths ranging between about 500 nm and about 600 nm).
  • The blue pixels 308 are sensitive to visible light having wavelengths that correspond to the color blue (wavelengths ranging between about 400 nm and about 500 nm).
  • The red, green, and blue pixels 304, 306, and 308 produce image data representing the light intensities they sense.
  • The optical code reader 200 includes an optical system 204 positioned to focus light on the color image sensor array 202.
  • The optical system 204 may include conventional optical components, such as one or more lenses, an aperture, and, in some cases, a mechanical shutter.
  • Alternatively, the color image sensor array 202 may include electronic shuttering means.
  • The optical code reader 200 may include one or more artificial illumination sources 206 positioned to illuminate its field of view 208 (two artificial illumination sources are shown in Figure 2).
  • Alternatively, the optical code reader 200 may rely on ambient light to illuminate the field of view 208 instead of the artificial illumination sources 206.
  • The optical code reader 200 includes a data processing system 210.
  • The data processing system 210 may include conventional hardware, such as camera interface hardware, and one or more programmable central processing units (CPUs).
  • The data processing system 210 may also include a field programmable gate array (FPGA) for performing various operations described below (e.g., pixel selection, interpolation, edge detection) to reduce the loading of other processor-based stages and, thus, allow for higher pixel rate processing or slower processor speeds.
  • An FPGA may also implement spatial filtering of images using, for example, one- dimensional or two-dimensional convolution kernels for improving a signal-to-noise ratio (by reducing noise using a low-pass or band-pass filter), or for sharpening an image using a high-pass filter.
  • Filters implemented by an FPGA may also be used to compute signal values between pixel sites to produce higher resolution interpolated images.
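As a rough illustration of the kind of 1-D kernels such filtering might use (the coefficients below are generic textbook choices, not values from this disclosure):

```python
import numpy as np

# Illustrative 1-D kernels of the kind an FPGA might apply; the coefficients
# are generic choices for demonstration only.
low_pass = np.array([1.0, 2.0, 1.0]) / 4.0   # noise-reducing smoothing kernel
high_pass = np.array([-1.0, 3.0, -1.0])      # simple sharpening kernel

def filter_scanline(scanline, kernel):
    """Convolve one row (scan line) of pixel data with a 1-D kernel,
    keeping the output the same length as the input."""
    return np.convolve(scanline, kernel, mode="same")
```

An FPGA would implement the same multiply-accumulate structure in fixed-point logic, processing pixels as they stream off the sensor.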
  • Alternatively, the data processing system may include various types of programmable or non-programmable logic hardware, such as a complex programmable logic device (CPLD), a masked logic array, a standard cell, or a full custom application specific integrated circuit (ASIC), to perform the various operations.
  • The data processing system 210 may be contained within a housing 205 of the optical code reader 200.
  • Alternatively, the data processing system 210 may be external to the housing 205; in that case, the data processing system 210 and the optical code reader 200 may communicate through a wired (e.g., EIA232, USB) or wireless (e.g., WLAN, Bluetooth®) communication link, and the data processing system 210 may communicate simultaneously with multiple optical code readers 200.
  • Figure 2 shows an optical code 212 in the field of view 208 of the optical code reader 200.
  • The optical code reader 200 may read any type of optical code, such as one-dimensional optical codes and higher-dimensional optical codes.
  • Figures 4, 5, and 6 show three different examples of the types of optical codes that may be read by the optical code reader 200.
  • Figure 4 shows an example of a linear bar code 212a;
  • Figure 5 shows an example of a stacked code 212b;
  • Figure 6 shows an example of a matrix code 212c.
  • Each code 212a, 212b, and 212c includes a pattern of dark elements 400 (e.g., bars) and light elements 402 (e.g., spaces).
  • The dark and light elements 400 and 402 include demarcation edges 404 to be detected by the optical code reader 200.
  • In operation, the optical code 212 is positioned in, or passed through, the field of view 208 of the optical code reader 200.
  • The field of view 208 may be illuminated by artificial light generated by the artificial illumination sources 206 or by ambient light.
  • In either case, the light that illuminates the field of view 208 includes light having wavelengths that correspond to the color green.
  • The light reflects off the optical code 212 toward the optical system 204.
  • The optical system 204 focuses the reflected light on the pixels of the color image sensor array 202, the focused light forming an image of the optical code 212 on the pixels.
  • A first set of image data representing the intensity values of light sensed by the green pixels 306 is transmitted to the data processing system 210.
  • The data processing system 210 uses the first set of image data to produce a second set of image data representing interpolated intensity values for red and blue pixel locations.
  • The first and second sets of image data are used to decode the optical code 212 (e.g., detect the edges 404 of the dark elements 400 and light elements 402, classify cells as dark or light).
  • The red pixels 304 and the blue pixels 308 may also produce image data representing the intensity values of light they sense.
  • The image data produced by the red and blue pixels 304 and 308, however, may be replaced by the second set of image data or ignored.
  • Alternatively, the image data produced by the red and blue pixels 304 and 308 may be retained for other purposes, such as providing a color image on a display for human viewing.
  • Figure 7 shows a portion of the pixels 304, 306, and 308 of the color image sensor array 202 and an image 212' of a portion of the optical code 212 that is formed on the color image sensor array 202.
  • For clarity, the image 212' is shown to the side of, rather than over, the pixels 304, 306, and 308.
  • The optical code 212 is positioned in the field of view 208 so that the edges 404' of the image 212' are appreciably parallel to the axes of the first axes group 310 (i.e., the columns of the pixels).
  • The edges 404' being "appreciably parallel" to the axes of the first axes group 310 means that the smallest angle θ between the edges 404' and the axes of the first axes group 310 is less than the smallest angle between the edges 404' and the axes of the second axes group 312, as shown in Figure 8.
  • Thus, "appreciably parallel" includes any orientation in which the smallest angle θ between the edges 404' and the axes of the first axes group 310 ranges from about 0° to about 45°.
  • The optical code 212 may be positioned in the field of view 208 to achieve the above-described alignment by moving (e.g., rotating) the optical code 212 and/or the optical code reader 200.
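The "appreciably parallel" condition can be expressed as a small test (a sketch; measuring the edge angle from the axes of the first axes group is a convention assumed here, not stated in the disclosure):

```python
def appreciably_parallel_to_first_group(edge_angle_deg):
    """Return True when an edge is 'appreciably parallel' to the first axes
    group, i.e. its smallest angle to that group is less than its smallest
    angle to the transverse (second) group. The edge angle is measured in
    degrees from the axes of the first axes group."""
    a = edge_angle_deg % 180.0
    angle_to_first = min(a, 180.0 - a)   # smallest angle to the first group
    angle_to_second = abs(90.0 - a)      # smallest angle to the second group
    return angle_to_first < angle_to_second
```

For example, an edge at 30° is appreciably parallel to the first group, while an edge at 60° is closer to the transverse group and is not.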
  • The interpolated intensity values correspond to selected locations of the red pixels 304 and the blue pixels 308.
  • For a selected location, an interpolated intensity value is produced using only the intensity values of the green pixels 306 located along the same axis of the first axes group 310 (i.e., the intensity values of the green pixels 306 that share an axis of the first axes group 310 with the selected location).
  • In other words, the data processing system 210 performs single-axis interpolation, the axis of interpolation corresponding to the orientation of the edges 404' of the image 212' of the optical code 212.
  • Figure 7 shows arrows 500 representing the intensity values of the green pixels 306 that are used to produce the interpolated intensity values for different locations of the red and blue pixels 304 and 308.
  • For example, for the red pixel 304 located in the second column, third row, the intensity values of the green pixels 306 located above it (i.e., second column, second row) and below it (i.e., second column, fourth row) are used to produce the interpolated intensity value for that red pixel location.
  • The same interpolation method is used to produce interpolated intensity values for red and blue pixel locations along consecutive rows (i.e., along consecutive axes of the second axes group 312).
  • The above-described method preserves the spatial resolution of the color image sensor array 202 along the parallel axes of the second axes group 312 so that the locations of the edges of optical codes may be accurately identified.
  • The above-described method thus allows the color image sensor array 202 to achieve a spatial resolution across optical code edges that is comparable to that achieved by a monochrome imager having the same total number of pixels as the sum of the red, green, and blue pixels 304, 306, and 308.
  • Moreover, because only green pixel values are used, problems associated with, and the need to compensate for, differences in sensitivity between the red pixels 304, green pixels 306, and blue pixels 308 may be avoided.
  • FIG. 9 is a flow chart of steps of a particular method 700 that may be implemented by the optical code reader 200.
  • After receiving the image data from the green pixels 306, together with the image data from the red and blue pixels 304 and 308, the data processing system 210 selects a pixel by identifying the portion of the image data that corresponds to the selected pixel (step 702). The data processing system 210 then identifies whether the selected pixel is a green pixel 306 (step 704). If the selected pixel is a green pixel 306, its intensity value is preserved for use in decoding the optical code 212 (step 706).
  • If the selected pixel is a red pixel 304 or a blue pixel 308, its intensity value is ignored or discarded, and the data processing system 210 identifies the neighboring green pixels 306 that share an axis of the first axes group 310 with the selected pixel (step 708).
  • For example, for the red pixel 304 in the first column, second row, the green pixels 306 above and below it (i.e., the green pixel 306 in the first column, first row and the green pixel 306 in the first column, third row) are identified, and the mean of their intensity values is calculated (step 710). The mean intensity value is used as the interpolated intensity value for the selected pixel (step 712).
  • The above-described steps are repeated for each pixel (represented by dashed lines extending from steps 706 and 712 back to step 702), and the intensity values of the green pixels 306 and the interpolated intensity values of the red and blue pixel locations are collectively used to decode the optical code 212.
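The flow of method 700 can be sketched roughly as follows (Python for illustration; the green-site phase `(row + col) % 2 == 0` is an assumed convention for the checkerboard, and the top and bottom edge rows are simply left as sensed, since no interpolated values need be produced for them):

```python
import numpy as np

def is_green(row, col):
    """Green-site test for the Bayer checkerboard (step 704). This sketch
    assumes green falls where (row + col) is even; real sensors may use a
    different phase."""
    return (row + col) % 2 == 0

def method_700(mosaic):
    """Rough sketch of method 700: green values are kept (step 706); at each
    interior red/blue site, the mean of the green pixels directly above and
    below in the same column replaces the sensed value (steps 708-712)."""
    rows, cols = mosaic.shape
    out = mosaic.astype(float).copy()
    for r in range(1, rows - 1):        # skip top and bottom edge rows
        for c in range(cols):
            if not is_green(r, c):      # red or blue site: interpolate
                out[r, c] = (mosaic[r - 1, c] + mosaic[r + 1, c]) / 2.0
    return out
```

Because each interpolated value draws only on its own column, full resolution is retained from column to column, i.e. across picket-fence bar edges.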
  • The data processing system 210 need not perform all of the above-described steps, and it may perform alternative steps.
  • For example, instead of calculating a simple mean, the data processing system 210 may perform other known interpolation methods using one or more green pixels 306 located along the parallel axes of the first axes group 310 (i.e., more computationally complex interpolation methods, such as polynomial or spline interpolation, may be used to achieve higher levels of accuracy). Additional information about various interpolation methods can be found in WILLIAM K.
  • For some red and blue pixel locations, interpolated intensity values may not be produced (i.e., some locations are not selected locations).
  • For example, interpolated intensity values need not be produced for red or blue pixels located along the first axis of the second axes group 312 (e.g., the top edge row of the color image sensor array 202) or the nth axis of the second axes group (e.g., the bottom edge row of the color image sensor array 202).
  • In that case, the intensity values of the green pixels 306 are the only intensity values used from the top edge and bottom edge rows to read the optical code 212.
  • Alternatively, the intensity values of the top edge and bottom edge rows may not be used to decode the optical code 212.
  • The image 212' of the optical code 212 may also be oriented in a ladder orientation.
  • In a ladder orientation, the parallel axes of the first axes group 310 correspond to the rows of pixels, and interpolation is performed using the green pixels 306 of the rows instead of the green pixels of the columns, so that spatial resolution in a direction across the edges 404' of the image 212' is preserved.
  • In another embodiment, the single-axis interpolation described above is performed along the axes of the first axes group 310 and then along the axes of the second axes group 312, so that two sets of interpolated intensity values are produced.
  • The two sets of interpolated intensity values may be produced from the same set of green intensity values (e.g., a single frame) or from different sets of green intensity values (e.g., different frames).
  • The sets of interpolated intensity values are used independently, each in combination with the corresponding set of green intensity values, to identify the edges of the optical code 212 and, thereby, decode it.
  • In other words, two interpolated images, each including green intensity values and interpolated intensity values, are formed, and each interpolated image may be decoded independently.
  • In a first processing round, the data processing system 210 performs the method 700 for each pixel of the color image sensor array 202 (the pixels of the first and nth rows need not be processed). Then, in a second processing round, the data processing system 210 performs the method 700 again for each pixel (again, the pixels of the first and nth rows need not be processed), except that in steps 708 and 710 the neighboring green pixels 306 that share an axis of the second axes group 312 with the selected pixel are identified and their intensity values are used to produce the mean intensity value (e.g., for the red pixel 304 in the second column, third row of Figure 7, the green pixels 306 to its right and left are used).
  • Alternatively, a single processing round may be performed in which both a vertical interpolation value and a horizontal interpolation value are created for each blue or red pixel location before the next pixel is processed.
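The single round producing both interpolated images might be sketched as follows (the green-site phase `(row + col) % 2 == 0` is again an assumed convention, and edge rows/columns are left unmodified):

```python
import numpy as np

def interpolate_both_axes(mosaic):
    """Produce the two interpolated images described above: one interpolated
    vertically (along columns) and one horizontally (along rows), each using
    only green-site values at the non-green Bayer sites. Both values for a
    site are computed before moving to the next pixel."""
    vert = mosaic.astype(float).copy()
    horiz = mosaic.astype(float).copy()
    rows, cols = mosaic.shape
    for r in range(rows):
        for c in range(cols):
            if (r + c) % 2 == 0:
                continue  # green site: keep the sensed value in both images
            if 0 < r < rows - 1:
                vert[r, c] = (mosaic[r - 1, c] + mosaic[r + 1, c]) / 2.0
            if 0 < c < cols - 1:
                horiz[r, c] = (mosaic[r, c - 1] + mosaic[r, c + 1]) / 2.0
    return vert, horiz
```

Whichever image was interpolated parallel to the code's edges retains full resolution across those edges; the decoder can then try one or both.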
  • The data processing system 210 may then attempt to decode the optical code 212 using one of the two interpolated images or both interpolated images.
  • Thus, the optical code reader 200 need not attempt to determine the orientation of the optical code 212 prior to interpolating and decoding.
  • Labels on packages and/or other optical codes within the field of view 208 may make it difficult for the optical code reader 200 to determine, prior to decoding, the orientation of the optical code 212 (e.g., the edges of symbols on packaging labels may dominate over those of the optical code 212 when conventional edge detection techniques, such as gradient computation, are used to determine the orientation of the optical code 212).
  • The data processing system 210 may operate relatively quickly, so that it generates the interpolated images and attempts to decode them without significantly increasing the processing time.
  • Alternatively, the data processing system 210 may analyze the interpolated images to determine which interpolated image to decode.
  • For example, the interpolated images may be compared (e.g., by edge sharpness) to identify the interpolated image that most accurately reflects the locations of the optical code edges 404.
  • Several possible measures of the amount of signal modulation may be implemented to choose the interpolated image that is most likely to have been interpolated in a direction substantially parallel to the edges 404 of the optical code 212. For example, one measurement may be standard deviation, in which the image is chosen that has the smallest standard deviation of pixel values along a row or column.
  • A second measurement may be based on the differences between adjacent pixels, in which the image is chosen that has the smallest sum of the absolute values of the differences between adjacent pixels.
  • In either case, the interpolated image that represents the edges 404 in higher resolution may be selected for decoding. Choosing one of the interpolated images to decode, instead of decoding both, may be useful in some applications.
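The two modulation measures just mentioned might be computed as follows (a sketch; measuring along rows, and the selection rule using only the adjacent-difference sum, are illustrative choices):

```python
import numpy as np

def modulation_measures(image):
    """Two of the selection measures described above, computed along rows:
    the mean standard deviation of pixel values per row, and the sum of
    absolute differences between horizontally adjacent pixels. Smaller
    values suggest the interpolation ran substantially parallel to the
    code's edges."""
    std_along_rows = float(np.mean(np.std(image, axis=1)))
    adjacent_diffs = float(np.sum(np.abs(np.diff(image, axis=1))))
    return std_along_rows, adjacent_diffs

def pick_image(candidates):
    """Choose the candidate interpolated image with the smallest
    adjacent-difference sum (one plausible rule; the disclosure permits
    either measure)."""
    return min(candidates, key=lambda im: modulation_measures(im)[1])
```

An image interpolated parallel to the edges varies smoothly along that direction, so both measures come out small for the correctly oriented candidate.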
  • The data processing system 210 may also use feedback information from a decoder implemented in the data processing system 210 to identify the orientation of the optical code 212, so that the optical code reader 200 can improve its performance during a subsequent read.
  • For example, the decoder may be able to identify whether the optical code 212 is oriented in a picket fence or ladder orientation without being able to actually decode it. The decoder can then communicate that the optical code 212 was in a particular orientation, so that the data processing system 210 need only interpolate along either the first axes group 310 or the second axes group 312 during a subsequent attempt to read the optical code 212.
  • In the embodiments described above, green pixel data are used to compute interpolated intensity values and to decode the optical code 212.
  • Alternatively, red pixel data, blue pixel data, or a combination of red and blue pixel data may be used in accordance with the above-described embodiments to compute interpolated intensity values corresponding to green pixel locations to decode the optical code 212.
  • Although the colors red, green, and blue and the Bayer pattern have been described above in connection with the color image sensor array 202, other colors and filter patterns may be used without departing from the scope and spirit of the present disclosure.
  • For example, the color image sensor array 202 may include a cyan, yellow, green, and magenta (CYGM) filter; a red, green, blue, and emerald (RGBE) filter; or a white, red, green, and blue (WRGB) filter.
  • the optical code reader 200 and its associated methods are flexible to compensate for the effects of these various filters. For example, single-axis interpolation may be conducted using one or more of the colors of these different filters.
  • Certain embodiments may be capable of achieving one or more of the following advantages: (1) enabling utilization of lower cost color imagers in optical code readers; (2) achieving higher spatial resolution with a color imager by interpolating along - not across - optical code edges; (3) reducing processor loading to allow for higher pixel rate processing or lower processor speeds by performing various operations via an FPGA; and (4) reading higher density optical codes than would otherwise be possible.
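The selection measure described in the bullets above - comparing the two single-axis interpolated images by their sums of absolute differences between adjacent pixels and decoding only the preferred one - can be sketched as follows. This is an illustrative sketch, not the patented implementation; the function names, the use of both horizontal and vertical neighbors, and the tie-breaking rule are assumptions.

```python
import numpy as np

def adjacent_difference_sum(image):
    # Sum of absolute differences between vertically adjacent pixels
    # (diff along axis 0) and horizontally adjacent pixels (diff along
    # axis 1). A smoother image yields a smaller sum.
    return (np.abs(np.diff(image, axis=0)).sum()
            + np.abs(np.diff(image, axis=1)).sum())

def select_interpolated_image(image_a, image_b):
    # Choose the interpolated image with the smaller adjacent-difference
    # sum, per the second measure described above (ties go to image_a).
    if adjacent_difference_sum(image_a) <= adjacent_difference_sum(image_b):
        return image_a
    return image_b
```

A smooth horizontal ramp scores lower than a checkerboard of the same size, so the ramp would be the image handed to the decoder under this measure.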

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Electromagnetism (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Color Television Image Signal Generators (AREA)
  • Editing Of Facsimile Originals (AREA)

Abstract

An optical code reader (200) includes a color image sensor array (202) having pixels (306) of a first set and pixels (304, 308, or 304 and 308) of a second set extending along multiple parallel axes of a first set of axes (310) and multiple parallel axes of a second set of axes (312) transverse to the first set of axes. The pixels of the first set produce data representing sensed light intensity values. In one configuration, the optical code reader includes a data processing system (210) operable to perform single-axis interpolation to obtain interpolated intensity values that correspond to selected locations of pixels of the second set. An interpolated intensity value is obtained for a selected location by using only the intensity values sensed by pixels of the first set that share an axis of the first set of axes with the selected location.
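The single-axis interpolation summarized in the abstract can be illustrated with a small sketch. Here we assume a Bayer-style checkerboard in which green (first-set) pixels occupy locations with even row+column parity, the two sets of axes are the image rows and columns, and each missing location is filled with the mean of its two nearest green neighbors along one axis only. The averaging kernel and layout are assumptions for illustration, not the claimed implementation.

```python
import numpy as np

def single_axis_interpolate(green, axis):
    # green: 2-D array holding sensed green values where (row + col) is
    # even and 0.0 at red/blue locations. Returns a full-resolution image
    # in which each red/blue location receives the mean of its green
    # neighbors along the given axis only (axis=1: along rows,
    # axis=0: along columns). Neighbors outside the array are skipped.
    out = green.astype(float).copy()
    rows, cols = out.shape
    for r in range(rows):
        for c in range(cols):
            if (r + c) % 2 == 1:  # red/blue location: interpolate
                if axis == 1:
                    nbrs = [(r, c - 1), (r, c + 1)]
                else:
                    nbrs = [(r - 1, c), (r + 1, c)]
                vals = [out[rr, cc] for rr, cc in nbrs
                        if 0 <= rr < rows and 0 <= cc < cols]
                out[r, c] = sum(vals) / len(vals)
    return out
```

For a ladder-like pattern of vertical bars, interpolating along columns (axis=0) reproduces the bars exactly, while interpolating along rows (axis=1) averages across the bar edges and blurs them, which is why interpolating along, rather than across, the code edges preserves resolution.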
PCT/US2009/063713 2008-11-20 2009-11-09 High-resolution interpolation for color-imager-based optical code readers WO2010059449A2 (fr)

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US11642508P 2008-11-20 2008-11-20
US61/116,425 2008-11-20
US12/611,869 US20100123009A1 (en) 2008-11-20 2009-11-03 High-resolution interpolation for color-imager-based optical code readers
US12/611,869 2009-11-03

Publications (2)

Publication Number Publication Date
WO2010059449A2 true WO2010059449A2 (fr) 2010-05-27
WO2010059449A3 WO2010059449A3 (fr) 2010-07-29

Family

ID=42171185

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2009/063713 WO2010059449A2 (fr) 2008-11-20 2009-11-09 High-resolution interpolation for color-imager-based optical code readers

Country Status (2)

Country Link
US (1) US20100123009A1 (fr)
WO (1) WO2010059449A2 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107153804A (zh) * 2017-06-07 2017-09-12 福州觉感视觉软件科技有限公司 Stacked two-dimensional code with positioning zones and method for generating and recognizing same

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8479998B2 (en) * 2011-01-31 2013-07-09 Hand Held Products, Inc. Terminal having optical imaging assembly
JP2015154308A (ja) * 2014-02-17 2015-08-24 ソニー株式会社 Image processing device, image processing method, and program
CN110110589A (zh) * 2019-03-25 2019-08-09 电子科技大学 Face classification method based on FPGA parallel computing

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997048231A1 (fr) * 1996-06-14 1997-12-18 Iterated Systems, Inc. Method and system for reconstructing missing chrominance values by interpolation for single-sensor color imaging systems
US20060274171A1 (en) * 2005-06-03 2006-12-07 Ynjiun Wang Digital picture taking optical reader having hybrid monochrome and color image sensor array
US7364081B2 (en) * 2003-12-02 2008-04-29 Hand Held Products, Inc. Method and apparatus for reading under sampled bar code symbols

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3971065A (en) * 1975-03-05 1976-07-20 Eastman Kodak Company Color imaging array
US4642678A (en) * 1984-09-10 1987-02-10 Eastman Kodak Company Signal processing method and apparatus for producing interpolated chrominance values in a sampled color image signal
US5243655A (en) * 1990-01-05 1993-09-07 Symbol Technologies Inc. System for encoding and decoding data in machine readable graphic form
US5373322A (en) * 1993-06-30 1994-12-13 Eastman Kodak Company Apparatus and method for adaptively interpolating a full color image utilizing chrominance gradients
US5506619A (en) * 1995-03-17 1996-04-09 Eastman Kodak Company Adaptive color plan interpolation in single sensor color electronic camera
JP2748263B2 (ja) * 1995-09-04 1998-05-06 松下電器産業株式会社 Barcode reader and image sensor used therein
US5596367A (en) * 1996-02-23 1997-01-21 Eastman Kodak Company Averaging green values for green photosites in electronic cameras
US20020050518A1 (en) * 1997-12-08 2002-05-02 Roustaei Alexander R. Sensor array
US6642962B1 (en) * 1999-09-01 2003-11-04 Neomagic Corp. Merged pipeline for color interpolation and edge enhancement of digital images
US6628330B1 (en) * 1999-09-01 2003-09-30 Neomagic Corp. Color interpolator and horizontal/vertical edge enhancer using two line buffer and alternating even/odd filters for digital camera
US7607581B2 (en) * 2003-11-13 2009-10-27 Metrologic Instruments, Inc. Digital imaging-based code symbol reading system permitting modification of system features and functionalities
US6722569B2 (en) * 2001-07-13 2004-04-20 Welch Allyn Data Collection, Inc. Optical reader having a color imager
US7071978B2 (en) * 2001-07-18 2006-07-04 Hewlett-Packard Development Company, L.P. Image mosaic data reconstruction
CN101118317B (zh) * 2002-02-27 2010-11-03 Cdm光学有限公司 Optimized image processing for wavefront coded imaging systems
US7580070B2 (en) * 2005-03-31 2009-08-25 Freescale Semiconductor, Inc. System and method for roll-off correction in image processing
US7237721B2 (en) * 2005-05-24 2007-07-03 Nokia Corporation Image processing for pattern detection
US7770799B2 (en) * 2005-06-03 2010-08-10 Hand Held Products, Inc. Optical reader having reduced specular reflection read failures
WO2007036055A1 (fr) * 2005-09-30 2007-04-05 Simon Fraser University Procedes et appareil de detection de defauts dans des reseaux d'imagerie par analyse d'images
US7946491B2 (en) * 2006-08-03 2011-05-24 Nokia Corporation Method, apparatus, and computer program product for providing a camera barcode reader
US8091788B2 (en) * 2007-01-11 2012-01-10 Datalogic Scanning, Inc. Methods and systems for optical code reading using virtual scan lines

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997048231A1 (fr) * 1996-06-14 1997-12-18 Iterated Systems, Inc. Method and system for reconstructing missing chrominance values by interpolation for single-sensor color imaging systems
US7364081B2 (en) * 2003-12-02 2008-04-29 Hand Held Products, Inc. Method and apparatus for reading under sampled bar code symbols
US20060274171A1 (en) * 2005-06-03 2006-12-07 Ynjiun Wang Digital picture taking optical reader having hybrid monochrome and color image sensor array

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107153804A (zh) * 2017-06-07 2017-09-12 福州觉感视觉软件科技有限公司 Stacked two-dimensional code with positioning zones and method for generating and recognizing same
CN107153804B (zh) * 2017-06-07 2020-06-30 福州觉感视觉软件科技有限公司 Method for generating and recognizing a stacked two-dimensional code with positioning zones

Also Published As

Publication number Publication date
US20100123009A1 (en) 2010-05-20
WO2010059449A3 (fr) 2010-07-29

Similar Documents

Publication Publication Date Title
US8505823B2 (en) Noise removal from color barcode images
EP2752787B1 (fr) Systèmes et procédés de lecture de code optique à l'aide d'un dispositif d'imagerie couleur
US10638099B2 (en) Extended color processing on pelican array cameras
EP2396744B1 (fr) Formation de l'image d'un code optique haute résolution à l'aide d'un dispositif de formation d'image de couleur
US20230137694A1 (en) Super resolution and color motion artifact correction in a pulsed color imaging system
US9424453B2 (en) Indicia reading terminal with color frame processing
US8079525B1 (en) System and method for decoding barcodes captured with a color image sensor
JP6582987B2 (ja) Video imaging device, video imaging method, coded infrared cut filter, and coded specific-color cut filter
US10147167B2 (en) Super-resolution image reconstruction using high-frequency band extraction
WO2013067671A1 (fr) Terminal de lecture d'indice optique comportant un capteur d'image en couleurs
US20030063185A1 (en) Three-dimensional imaging with complementary color filter arrays
US7589772B2 (en) Systems, methods and devices for multispectral imaging and non-linear filtering of vector valued data
US20100123009A1 (en) High-resolution interpolation for color-imager-based optical code readers
US6827268B2 (en) Imaging device
US7734112B1 (en) Method and system for identifying an object in an electronically acquired image
Spote et al. Joint demosaicing of colour and polarisation from filter arrays
US11423273B2 (en) Detection of machine-readable tags with high resolution using mosaic image sensors
US20220358625A1 (en) Camera and method for acquiring image data
Neves et al. Time-constrained detection of colored objects on raw Bayer data
US20220343092A1 (en) Apparatus and methods for preprocessing images having elements of interest
Tajbakhsh Efficient defect pixel cluster detection and correction for Bayer CFA image sequences
CN116724564A (zh) Image sensor, image data acquisition method, and imaging device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 09828014

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 09828014

Country of ref document: EP

Kind code of ref document: A2