CN114818759A - Arrangement and method for determining a target distance and adjusting a reading parameter of an imaging reader based on the target distance - Google Patents

Arrangement and method for determining a target distance and adjusting a reading parameter of an imaging reader based on the target distance

Info

Publication number
CN114818759A
CN114818759A
Authority
CN
China
Prior art keywords
image, target, sub, imager, aiming
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210540586.5A
Other languages
Chinese (zh)
Inventor
C·谭
C·D·威滕伯格
H·E·库臣布罗德
D·P·戈伦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Symbol Technologies LLC
Original Assignee
Symbol Technologies LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US 15/165,117 (US11009347B2)
Priority claimed from US 15/170,464 (US9800749B1)
Application filed by Symbol Technologies LLC
Publication of CN114818759A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10 Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/10544 Sensing by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K 7/10712 Fixed beam scanning
    • G06K 7/10722 Photodetector array or CCD scanning
    • G06K 7/10732 Light sources
    • G06K 7/10792 Special measures in relation to the object to be scanned
    • G06K 7/10801 Multidistance reading
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N 1/00795 Reading arrangements
    • H04N 1/00798 Circuits or arrangements for the control thereof, e.g. using a programmed control device or according to a measured quantity
    • H04N 1/00816 Determining the reading area, e.g. eliminating reading of margins
    • H04N 1/024 Details of scanning heads; Means for illuminating the original
    • H04N 1/02409 Focusing, i.e. adjusting the focus of the scanning head
    • H04N 1/028 Details of scanning heads for picture information pick-up
    • H04N 1/02815 Means for illuminating the original, not specific to a particular type of pick-up head

Abstract

The distance to a target to be read by image capture within a range of working distances is determined by directing an aiming spot along an aiming axis to the target, capturing a first image of the target containing the aiming spot, and capturing a second image of the target without the aiming spot. Each image is captured in a frame over a field of view having an imaging axis offset from the aiming axis. An image preprocessor compares first image data from the first image to second image data from the second image over a common fractional region of the two frames to obtain a location of the aiming spot in the first image, and determines the distance to the target based on that location.

Description

Arrangement and method for determining a target distance and adjusting a reading parameter of an imaging reader based on the target distance
The present application is a divisional application of the application entitled "Apparatus and method for determining a target distance and adjusting a reading parameter of an imaging reader based on the target distance", filed on 11/5/2017 with application No. 201780031989.4.
Cross Reference to Related Applications
This application claims the benefit of U.S. patent application serial No. 15/165,117, filed on May 26, 2016, and U.S. patent application serial No. 15/170,464, filed on June 1, 2016.
Background
The present invention relates generally to an arrangement and method for determining a distance to a target to be read by image capture within a range of working distances and/or quickly adjusting one or more reading parameters of an imaging reader operable to read a target by image capture within a range of working distances based on the target distance, particularly in an imaging reader having an aiming light assembly offset from the imaging assembly.
Solid state imaging systems or imaging readers have been used in handheld and/or hands-free modes of operation to electro-optically read targets, such as one- and two-dimensional bar code symbol targets and/or non-symbol targets such as documents. A handheld imaging reader includes a housing having a handle held by an operator and an imaging module, also known as a scan engine, supported by the housing and aimed by the operator during reading. The imaging module includes an imaging assembly having a solid-state imager or imaging sensor with an imaging array of photocells or light sensors corresponding to image elements or pixels in an imaging field of view of the imager, and an imaging lens assembly for capturing return light scattered and/or reflected from a target being imaged and for projecting the return light onto the array to begin capture of an image of the target. Such imagers may include one- or two-dimensional Charge Coupled Devices (CCDs) or Complementary Metal Oxide Semiconductor (CMOS) devices and associated circuitry for generating and processing electronic signals corresponding to one- or two-dimensional arrays of pixel data within the imaging field of view. To increase the amount of return light captured by the array in, for example, dark environments, the imaging module also typically includes an illumination assembly for illuminating the target, preferably with a variable level of illumination light reflected and scattered from the target. An aiming light assembly may also be supported by the imaging module for projecting a visible aiming spot on the target.
In some applications, such as in warehouses, it is sometimes necessary for the same reader to read not only distant targets (e.g., on products located on overhead shelves that are located at a distant working distance range on the order of thirty feet to fifty feet from the reader) but also near targets (e.g., on products located at ground level or near the operator that are located at a near working distance range on the order of less than two feet from the reader). A near imager may be provided in the reader for imaging and focusing a near target within a relatively wide imaging field of view, and a far imager may also be provided in the same reader for imaging and focusing a far target within a relatively narrow imaging field of view. Typically, at least one of the imagers (usually the far imager) has a variable focal length, such as a movable lens assembly or a zoom element.
While known imaging readers generally meet their intended purpose, it is challenging for the reader to quickly select the correct imager to read the target, quickly select the correct gain and/or exposure for the selected imager, and quickly select the correct illumination level to illuminate a target that may be located anywhere within an extended range of working distances. It is also challenging to focus a correct imager within an extended working distance range. Contrast-based autofocus, which is common in consumer cameras on smartphones, is notoriously slow because it relies on capturing and processing many images over many consecutive frames over a relatively long period of time to determine the best focus position. In many industrial applications where fast acting, active and dynamic readers are desired, such slow performance is unacceptable.
Accordingly, there is a need to quickly adjust various reading parameters of an imaging reader, such as selecting the correct imager, adjusting the gain and/or exposure of at least one imager, adjusting the illumination level, and focusing at least one imager, for reading a target that may be located anywhere within an extended range of working distances relative to the imaging reader without slowing or degrading reader performance.
Drawings
The accompanying figures, in which like reference numerals refer to identical or functionally-similar elements throughout the separate views and which together with the detailed description below are incorporated in and form part of the specification, serve to further illustrate embodiments of the concepts that include the claimed invention and to explain various principles and advantages of those embodiments.
Fig. 1 is a side view of a portable handheld imaging reader operable to determine a target distance for quickly selecting the correct imager and/or imager gain and/or imager exposure and/or illumination level, and/or quickly focusing the correct imager according to the present disclosure.
Fig. 2 is a schematic diagram of various components including an imaging, illumination, and aiming light assembly supported on an imaging module mounted within the reader of fig. 1.
Fig. 3 is a perspective view of the imaging module of fig. 2 in an isolated state.
Fig. 4 is a cross-sectional view taken on line 4-4 of fig. 2.
FIG. 5 is a diagram depicting the aiming spot on a near target for the reader of FIG. 1.
FIG. 6 is a diagram depicting the aiming spot on a far target for the reader of FIG. 1.
FIG. 7 is a view of an image containing a targeting spot during a coarse determination of the location of the targeting spot in the image.
FIG. 8 is a view of an image containing a targeting spot during fine determination of the location of the targeting spot in the image.
Fig. 9 is a flow chart depicting steps performed in a method of determining a target distance according to the present disclosure.
Skilled artisans will appreciate that elements in the figures are illustrated for simplicity and clarity and have not necessarily been drawn to scale. For example, the dimensions and positioning of some of the elements in the figures may be exaggerated relative to other elements to help to improve understanding of embodiments of the present invention.
The arrangement and method components have been represented, where appropriate, by conventional symbols in the drawings, showing only those specific details that are pertinent to understanding the embodiments of the present invention, so as not to obscure the disclosure with details that will be readily apparent to those of ordinary skill in the art having the benefit of the description herein.
Detailed Description
One aspect of the present disclosure relates to an arrangement for determining a distance to a target to be read by image capture within a range of working distances and/or for adjusting at least one reading parameter of an imaging reader for reading the target by image capture within the range of working distances. The arrangement comprises: an energizable aiming assembly for directing an aiming spot along an aiming axis toward a target when energized; and a controller for activating and deactivating the aiming assembly. The arrangement also includes an imaging assembly for capturing a first image of the target containing the aiming spot with the aiming assembly activated and for capturing a second image of the target without the aiming spot with the aiming assembly deactivated. Each image is captured in a frame over a field of view having an imaging axis offset from the aiming axis. An image preprocessor compares first image data from the first image to second image data from the second image over a common fractional region of the two frames to obtain a location of the aiming spot in the first image, and determines a distance to the target based on the location of the aiming spot in the first image. The controller is further operable to adjust at least one reading parameter based on the determined target distance.
More specifically, during the coarse determination of the target distance, the image preprocessor subdivides the common fractional region into a plurality of sub-frames and compares the first image data and the second image data in each sub-frame to obtain the location of the aiming spot in at least one of the sub-frames. Thereafter, during the fine determination of the target distance, the image preprocessor subdivides the area around the location of the aiming spot into a plurality of sub-regions and compares the first image data and the second image data in each sub-region to obtain the location of the aiming spot in at least one of the sub-regions. Advantageously, the imaging assembly captures each image as an array of pixels having intensity values. The image preprocessor averages the intensity values in each sub-frame and each sub-region to obtain average intensity values, subtracts the average intensity values of the second image from those of the first image, and obtains the position of the aiming spot from the sub-frame or sub-region exhibiting the maximum difference between the average intensity values.
This arrangement is preferably incorporated in an imaging module (also referred to as a scan engine) mounted in an imaging reader, particularly a handheld reader, having a near imager for imaging a near target over a relatively wide imaging field of view and a far imager for imaging a far target over a relatively narrow imaging field of view. The imaging assembly mentioned above preferably comprises a far imager with a variable focal length, such as a movable lens assembly or a variable focal length element. The reader also preferably has an illumination light assembly for generating a variable level of illumination light.
In accordance with the present disclosure, the determined target distance may be used to adjust one or more reading parameters of the imaging reader. For example, the determined target distance may be employed to automatically select which of the imagers is to be used to image the target, and/or to automatically adjust the gain of the selected imager, and/or to automatically adjust the exposure of the selected imager, and/or to automatically adjust the illumination light level, and/or to automatically adjust the focal length of the selected imager. In contrast to known contrast-based autofocus, which is performed by capturing and processing many images over a long period of time, the focusing disclosed herein is more rapid because the determination of the target distance is performed in subframes and in sub-regions of a pair of partial images.
Yet another aspect of the present disclosure relates to a method of determining a distance to a target to be read by image capture within a range of working distances and/or adjusting at least one reading parameter of an imaging reader for reading the target by image capture within the range of working distances. The method is performed by directing the aiming spot along the aiming axis to the target, and then not directing the aiming spot to the target. The method is further performed by capturing a first image of the target containing the aiming spot, capturing a second image of the target without the aiming spot, and capturing each image in a frame over a field of view having an imaging axis offset from the aiming axis. The method is further performed by comparing first image data from the first image to second image data from the second image over a common fractional region of the two frames to obtain a location of the aiming spot in the first image, and by determining a range to the target based on the location of the aiming spot in the first image. The method is still further performed by adjusting at least one reading parameter based on the determined target distance.
Reference numeral 30 in fig. 1 generally identifies an ergonomic imaging reader configured as a pistol-shaped housing having an upper barrel or body 32 and a lower handle 28 rearwardly inclined away from the body 32 at an angle of inclination of, for example, 15° relative to the vertical. The light-transmissive window 26 is located adjacent the front or nose end of the body 32 and is also preferably inclined, for example, at an angle of 15° to the vertical. The imaging reader 30 is held in the operator's hand and is used in a handheld mode in which the trigger 34 is manually depressed to initiate imaging of a target (particularly a bar code symbol) to be read over an extended working distance range, for example on the order of approximately thirty feet to fifty feet from the window 26. Other configurations of housings and readers operating in a hands-free mode may also be employed.
As shown schematically in fig. 2 and more realistically in figs. 3-4, the imaging module 10 is mounted in the reader 30 behind the window 26 and is operable, as described below, to read targets through the window 26 by image capture over an extended range of working distances away from the module 10. The target may be located anywhere in the working distance range between a near working distance (WD1) and a far working distance (WD2). In a preferred embodiment, WD1 is about eighteen inches from the window 26, and WD2 is substantially further away, for example more than about sixty inches from the window 26. The module 10 includes an imaging assembly having a near imaging sensor or imager 12 with a near imaging lens assembly 16, and a far imaging sensor or imager 14 with a far imaging lens assembly 18. The near imaging lens assembly 16 is operative to capture return light from a near target located in a near region of the range (e.g., from about zero inches to about eighteen inches away from the window 26) over a relatively wide, generally rectangular imaging field of view 20 (e.g., about thirty degrees), and to project the captured return light onto the near imager 12. The far imaging lens assembly 18 is operative to capture return light from a far target located in a far region of the range (e.g., more than about sixty inches away from the window 26) over a relatively narrow, generally rectangular imaging field of view 22 (e.g., about sixteen degrees), and to project the captured return light onto the far imager 14. While only two imagers 12, 14 and two imaging lens assemblies 16, 18 are illustrated in fig. 2, it will be understood that more than two imagers may be provided in the module 10.
Each imager 12, 14 is a solid state device, such as a CCD or CMOS imager having a one-dimensional array of addressable image sensors or pixels arranged in a single linear row, or preferably a two-dimensional array of such sensors arranged in mutually orthogonal rows and columns, and each imager 12, 14 is operable for detecting return light captured by the respective imaging lens assembly 16, 18 through the window 26 along the respective near and far imaging axes 24, 36. Each imaging lens assembly is advantageously a Cooke triplet lens. As illustrated in fig. 4, the near imaging lens assembly 16 has a fixed focal length, and the far imaging lens assembly 18 has a variable focal length due to the addition of a zoom element 38 or a movable lens assembly.
As also shown in figs. 2-4, the illumination light assembly is also supported by the imaging module 10 and includes an illumination light source (e.g., at least one Light Emitting Diode (LED) 40) fixedly mounted on an optical axis 42, and an illumination lens assembly with an illumination lens 44 also centered on the optical axis 42. The illumination light assembly is shared by the two imagers 12, 14 and is operable to emit illumination light at a variable illumination level.
As further shown in fig. 2-3, the aiming light assembly is also supported by the imaging module 10 and includes an aiming light source 46 (e.g., a laser) fixedly mounted on an aiming axis 48, and an aiming lens 50 centered on the aiming axis 48. The aiming lens 50 may include diffractive or refractive optical elements and is operable to project a visible aiming light pattern onto the target along the aiming axis 48 prior to reading. As shown in fig. 5-6, the aiming light pattern includes an aiming spot 102, preferably having a generally circular shape.
As further shown in fig. 2, the imagers 12, 14, LED 40 and laser 46 are operatively connected to a controller or programmed microprocessor 52 operable to control the operation of these components. The memory 54 is connected to the controller 52 and is accessible by the controller 52. Preferably, the controller 52 is the same apparatus used for processing return light from the target and for decoding the captured target image. An image pre-processor 56 in a custom Application Specific Integrated Circuit (ASIC) or Field Programmable Gate Array (FPGA) is operatively connected between the imagers 12, 14 and the controller 52 for pre-processing images captured by the imagers 12, 14 as described more fully below. In some applications, the image preprocessor 56 may be integrated with the controller 52.
As described above, it is challenging for the reader 30 to quickly select the correct imager 12 or 14 to read the target, quickly select the correct gain and/or exposure for the selected imager, and select the correct illumination level from the LED 40 to illuminate a target that can be located anywhere within the extended working distance range. It is also challenging to focus the selected imager within an extended working distance range. Contrast-based autofocus, which relies on capturing and processing many images over many consecutive frames over a relatively long period of time to determine the best focus position, is notoriously slow. One aspect of the present disclosure relates to enhancing reader performance by operating the aiming light assembly to function as both a light meter and a rangefinder to determine the distance to a target, and then selecting the correct imager 12 or 14, and/or selecting the correct gain and/or exposure for the selected imager, and/or selecting the correct illumination from the LED 40, and/or focusing the selected imager based on the determined distance.
As shown in fig. 2, the aiming axis 48 is offset from the near and far imaging axes 24, 36 such that the resulting disparity between the aiming spot 102 on the aiming axis 48 and one of the near and far imaging axes 24, 36 provides target distance information. More specifically, the parallax between the aiming axis 48 and either of the near and far imaging axes 24, 36 provides range information from the pixel location of the aiming spot 102 on one of the imaging sensor arrays. It is preferable to use the imaging axis 36 of the far imager 14 by default, as parallax will be greater for the far imager 14 than for the near imager 12. In a preferred embodiment, the distance between the aiming axis 48 and the far imaging axis 36 on the module 10 is about 23 millimeters.
As shown in FIG. 5, a target, configured as a symbol 100 located in the near region of the range, is contained within the narrow field of view 22 of the far imager 14, and preferably, the imaging axis 36 is approximately centered within the narrow field of view 22. As shown in fig. 6, the same symbol 100, located in the far region of the range, is also contained in the narrow field of view 22 of the far imager 14, and preferably, the imaging axis 36 is again approximately centered in the narrow field of view 22. The apparent size of the symbol 100 is larger in fig. 5 than in fig. 6. Within the narrow imaging field of view 22, the symbol 100 is off-center in fig. 5 and more centered in fig. 6. For the default far imager 14, if the symbol 100 were located at an infinite working distance from the reader 30, the aiming spot 102 projected onto the symbol 100 would lie directly on the imaging axis 36. As the symbol 100 gets closer to the reader 30, the aiming spot 102 grows in area, as shown in FIG. 5, and moves away from the imaging axis 36 along a tilted trajectory 104. By determining the position of the aiming spot 102 on the trajectory 104 relative to the imaging axis 36, the working distance to the symbol 100 can be determined. The separation between the aiming spot 102 and the imaging axis 36 is proportional to the inverse of the working distance. Preferably, the position of the aiming spot 102 along the trajectory 104 is pre-calibrated during manufacture of the reader. As also shown in figs. 5-6, the far imager 14 captures an image of the symbol 100 at a particular resolution, in this illustrative case a two-dimensional resolution of 800 pixel rows in height by 1280 pixel columns in width.
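The inverse relation between spot offset and working distance described above can be sketched with a simple pinhole parallax model. This is illustrative only, not the patent's implementation: of the constants below, only the roughly 23-millimeter baseline is stated in the text; the focal length and pixel pitch are assumed values, and a real reader would use the factory-calibrated trajectory rather than this idealized formula.

```python
# Illustrative parallax model: the aiming spot's pixel offset from the
# imaging axis is proportional to the inverse of the working distance.
BASELINE_MM = 23.0      # aiming axis to far imaging axis (stated in the text)
FOCAL_MM = 12.0         # ASSUMED far-imager focal length
PIXEL_PITCH_MM = 0.003  # ASSUMED 3-micron pixel pitch

def spot_offset_px(distance_mm):
    """Expected pixel offset of the aiming spot at a given working distance."""
    return (BASELINE_MM * FOCAL_MM) / (PIXEL_PITCH_MM * distance_mm)

def distance_from_offset_mm(offset_px):
    """Invert the relation: recover working distance from a measured offset."""
    return (BASELINE_MM * FOCAL_MM) / (PIXEL_PITCH_MM * offset_px)
```

Under this model a closer target produces a larger offset, matching the behavior shown in figs. 5-6.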
The image preprocessor 56 mentioned above analyzes the images captured by the far imager 14 in order to determine the location of the aiming spot 102. To minimize cost, the image preprocessor 56 is preferably incorporated in a low-power, low-processing device, preferably without a frame buffer to store the image. As a result, as explained below, the image preprocessor 56 does not analyze each captured image in its entirety, but rather only a fractional region of each captured image, particularly where the aiming spot 102 is expected to appear along the trajectory 104.
More specifically, the controller 52 activates the aiming laser 46 to direct the aiming spot 102 onto the symbol 100. The far imager 14 captures, in a first frame, a first image of all, or preferably a portion, of the symbol 100 with the aiming spot 102 on it. In response, the image preprocessor 56 analyzes only a fractional region of the first image in the first frame. As shown in FIG. 7, the image preprocessor 56 does not analyze pixels in row 0 to about row 400, in row 560 to row 800, or in column 0 to about column 640, because the aiming spot 102 is not expected there, and there is no reason to waste processing power or time analyzing pixels where the aiming spot 102 will not appear. The fractional or remaining region contains only about 160 of the original 800 rows of the complete first image and can therefore be captured and analyzed much faster than the complete first image.
The image preprocessor 56 subdivides the remaining area of the first frame into a matrix of sub-frames or coarse regions. As shown in fig. 7, the remaining area is subdivided into sixteen generally rectangular sub-frames, e.g., four rows by four columns. The subframes need not have the same height, width, or area. It will be appreciated that the remaining area may be subdivided into any number of subframes. The number of sub-frames depends on the accuracy with which the aiming spot 102 is initially roughly positioned in the sub-frames.
The image preprocessor 56 next acquires image data from each of the sub-frames. More specifically, the luminance values of all pixels in each sub-frame are averaged to obtain an average luminance value. The image preprocessor 56 thus obtains a matrix of sixteen average luminance values, one for each sub-frame.
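The sub-frame averaging step above can be sketched as follows, assuming equal-size sub-frames over the fractional region of rows 400-560 and columns 640-1280 from Fig. 7. The patent notes that sub-frames need not be equal in size, and the function name is hypothetical.

```python
import numpy as np

def subframe_means(image, row0=400, row1=560, col0=640, col1=1280,
                   rows=4, cols=4):
    """Average pixel intensity per sub-frame of the fractional region.

    Sketch only: assumes the region divides evenly into a rows x cols grid.
    Returns a (rows, cols) matrix of average luminance values.
    """
    region = image[row0:row1, col0:col1].astype(np.float64)
    h, w = region.shape
    # Reshape into (rows, sub_h, cols, sub_w) blocks and average each block.
    blocks = region.reshape(rows, h // rows, cols, w // cols)
    return blocks.mean(axis=(1, 3))
```

For the 160 x 640 region of Fig. 7 this yields sixteen averages, one per 40 x 160 sub-frame.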
Thereupon, the controller 52 deactivates the aiming laser 46, and the far imager 14 captures, in a second frame, a second image of all, or preferably a portion, of the symbol 100 without the aiming spot 102 on it. As before, the image preprocessor 56 analyzes only a fractional region of the second image in the second frame, and this fractional region is the same as that used in the first image. As before, the image preprocessor 56 obtains the luminance values of all pixels in each sub-frame of the same fractional region, averages the luminance values in each sub-frame to obtain an average luminance value, and thereby obtains a matrix of sixteen average luminance values, one for each sub-frame.
By way of non-limiting numerical example, a matrix of sixteen average brightness values with the aiming assembly deenergized is shown on the left side below, and a matrix of sixteen average brightness values with the aiming assembly energized is shown on the right side below:
(4 x 4 matrices of example average brightness values; figure not reproduced in this text rendering)
the image preprocessor 56 next compares the two matrices by subtracting the average luminance value for each sub-frame, thereby obtaining the following difference matrix of luminance difference values in this numerical example:
(4 x 4 difference matrix of example luminance difference values; figure not reproduced in this text rendering)
it will be observed from the disparity matrix that the luminance difference value in row 1, column 1 is highlighted from all other luminance difference values because it has the largest magnitude or luminance difference. This identifies the location of the aiming spot 102.
If it is desired to determine the location of the aiming spot 102 more accurately, the image preprocessor 56 may subdivide the area around the identified location of the aiming spot 102 into a plurality of sub-regions. As shown in fig. 8, the image preprocessor 56 subdivides the area into a matrix of sub-regions or fine regions, for example into sixteen (e.g., four rows by four columns) generally rectangular sub-regions. The sub-regions need not have the same height, width, or area. It will be appreciated that the area may be subdivided into any number of sub-regions. The number of sub-regions depends on the accuracy desired for the subsequent fine positioning of the aiming spot 102 in a sub-region.
As before, the controller 52 energizes and de-energizes the aiming laser 46, and the image preprocessor 56 obtains one matrix of sixteen average brightness values (one for each sub-region with the aiming laser 46 energized) and another matrix of sixteen average brightness values (one for each sub-region with the aiming laser 46 de-energized). The image preprocessor 56 next compares the two matrices by subtracting the average brightness values sub-region by sub-region, and finely locates the aiming spot 102 by finding the largest brightness difference value in at least one of the sub-regions.
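The coarse-to-fine refinement can be sketched as follows: crop the coarse sub-frame that held the spot, subdivide it again, and repeat the difference search inside the crop. The helper names, the 4×4 grids, and the cropping arithmetic are assumptions for illustration:

```python
import numpy as np

def block_means(img, rows, cols):
    """Average brightness per cell of a rows x cols subdivision."""
    h, w = img.shape
    return np.array([[img[r * h // rows:(r + 1) * h // rows,
                          c * w // cols:(c + 1) * w // cols].mean()
                      for c in range(cols)] for r in range(rows)])

def refine_spot(img_on, img_off, coarse_rc, grid=4):
    """Crop the coarse sub-frame identified in the first pass,
    subdivide the crop into grid x grid sub-regions, and return the
    fine (row, col) of the largest brightness difference inside it."""
    h, w = img_on.shape
    r, c = coarse_rc
    ys = slice(r * h // grid, (r + 1) * h // grid)
    xs = slice(c * w // grid, (c + 1) * w // grid)
    diff = (block_means(img_on[ys, xs], grid, grid)
            - block_means(img_off[ys, xs], grid, grid))
    return np.unravel_index(np.argmax(np.abs(diff)), diff.shape)
```

Each refinement pass multiplies the positional resolution by the grid dimension, which is why the desired accuracy dictates how finely the region is subdivided.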
Returning to FIG. 7, it will be observed that it is not necessary to analyze all sixteen sub-frames, because the aiming spot 102 can only appear in the shaded sub-frames arranged along the locus 104. Restricting the analysis to these sub-frames reduces the likelihood of errors caused by moving objects or flashing light sources that may appear only in the image captured with the aiming laser 46 energized and be mistaken for the aiming spot 102. The same principle of ignoring sub-frames can be applied to the top and bottom rows of sub-regions shown in FIG. 8.
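Restricting the search to the sub-frames along the locus can be expressed as a boolean mask over the difference matrix. The particular mask pattern below is a hypothetical stand-in, since the actual locus depends on the parallax geometry between the aiming axis and the imaging axis:

```python
import numpy as np

# Hypothetical mask: True for the shaded sub-frames along the locus
# where the aiming spot can physically appear. The two-middle-rows
# pattern here is an assumption, not the patent's actual locus.
LOCUS_MASK = np.zeros((4, 4), dtype=bool)
LOCUS_MASK[1:3, :] = True

def locate_spot_masked(diff, mask=LOCUS_MASK):
    """Search only the on-locus sub-frames of the difference matrix,
    so that moving objects or flashing lights elsewhere in the field
    of view cannot be mistaken for the aiming spot."""
    masked = np.where(mask, np.abs(diff), -np.inf)
    return np.unravel_index(np.argmax(masked), diff.shape)
```

A bright distractor in an off-locus sub-frame is simply never considered, which is the error-rejection property the paragraph above describes.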
In operation, once the working distance to the symbol 100 is determined from the aiming spot position, the controller 52 either selects the near imager 12 and activates the illumination light assembly to illuminate the symbol 100 with relatively low-intensity illumination light when the rangefinder determines that the symbol 100 to be imaged and read by the near imager 12 lies in the near region of the range, or selects the far imager 14 and activates the illumination light assembly to illuminate the target with relatively high-intensity illumination light when the rangefinder determines that the symbol 100 to be imaged and read by the far imager 14 lies in the far region of the range.
In addition, once the working distance to the symbol 100 is determined from the aiming spot position, the controller 52 may adjust the focal length of the far imager 14, for example by changing the focal length of the focusing element 38. The controller 52 energizes the LEDs 40 with a variable current to vary the intensity of the illumination light. Still further, the controller 52 may adjust the gain and/or exposure of one or more of the imagers once the working distance to the symbol 100 has been determined from the aiming spot position and/or once the brightness value of each sub-frame has been determined.
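How the determined distance might drive imager selection and parameter adjustment can be sketched as a simple lookup. Every numeric threshold and parameter value below is a placeholder for illustration, not a value from the patent:

```python
def reading_parameters(distance_mm, near_far_threshold_mm=500):
    """Map a determined working distance to an imager choice and
    illustrative reading parameters. All numbers are hypothetical:
    real thresholds, gains, and exposures depend on the optics."""
    if distance_mm < near_far_threshold_mm:
        return {"imager": "near", "illumination": "low",
                "gain": 1.0, "exposure_ms": 4.0}
    return {"imager": "far", "illumination": "high",
            "gain": 2.0, "exposure_ms": 8.0}
```

In a real reader the same lookup could also return a focus setting for the variable-focus far imager, which is the adjustment the preceding paragraph attributes to the focusing element 38.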
As shown in the flow chart of FIG. 9, the method is performed by: activating the aiming light assembly to direct the aiming spot 102 along the aiming axis to the symbol 100 in step 200; capturing a first fractional image of the symbol 100 containing the aiming spot 102 in step 202; and obtaining first image data from the first fractional image in step 204. Next, the aiming light assembly is de-energized in step 206, a second fractional image of the symbol 100 without the aiming spot 102 is captured in step 208, and second image data is obtained from the second fractional image in step 210. The first image data and the second image data are compared to obtain the position of the aiming spot 102 in step 212, and the distance to the symbol 100 is determined based on the position of the aiming spot 102 in step 214. In step 216, based on the determined distance, the correct imager 12 or 14 is selected, and/or the gain and/or exposure of the selected imager is adjusted, and/or the illumination from the LEDs 40 is adjusted, and/or the focal length of the selected imager is adjusted.
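The flow of steps 200 through 216 can be condensed into one routine. All four callables are hypothetical stand-ins for the reader's hardware and calibration, and for brevity the comparison here is per-pixel rather than per-sub-frame:

```python
import numpy as np

def measure_and_configure(capture, aim, params_for_distance, spot_to_distance):
    """Steps 200-216 as one sequence: capture with and without the
    aiming spot, locate the spot by brightness difference, map its
    position to a working distance, and pick reading parameters.
    All four callables are hypothetical stand-ins."""
    aim(True)                                   # step 200: energize aimer
    img_on = capture()                          # steps 202/204: first image
    aim(False)                                  # step 206: de-energize aimer
    img_off = capture()                         # steps 208/210: second image
    # Step 212: compare the two images; per-pixel here for brevity.
    diff = img_on.astype(float) - img_off.astype(float)
    spot = np.unravel_index(np.argmax(np.abs(diff)), diff.shape)
    distance = spot_to_distance(spot)           # step 214: position -> distance
    return params_for_distance(distance)        # step 216: set reading parameters
```

The spot-to-distance mapping exists because the aiming axis is offset from the imaging axis, so the spot's position in the frame shifts with range, as triangulation-based rangefinding requires.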
Specific embodiments have been described in the foregoing specification. However, one of ordinary skill in the art will appreciate that various modifications and changes can be made without departing from the scope of the invention as set forth in the claims below. Accordingly, the specification and figures are to be regarded in an illustrative rather than a restrictive sense, and all such modifications are intended to be included within the scope of the present teachings.
The benefits, advantages, solutions to problems, and any element(s) that may cause any benefit, advantage, or solution to occur or become more pronounced are not to be construed as a critical, required, or essential feature or element of any or all the claims. The invention is defined solely by the appended claims including any amendments made during the pendency of this application and all equivalents of those claims as issued.
Moreover, in this document, relational terms such as first and second, top and bottom, and the like may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. The terms "comprises," "comprising," "has," "having," "includes," "including," "contains," "containing," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises, has, includes, contains a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. An element starting with "comprising," "having," "including," or "containing" does not, without further limitation, preclude the presence of additional identical elements in the process, method, article, or apparatus that comprises, has, contains, or contains the element. The terms "a" and "an" are defined as one or more unless explicitly stated otherwise herein. The terms "substantially", "essentially", "approximately", "about" or any other version of these terms are defined as being close to as understood by one of ordinary skill in the art, and in one non-limiting embodiment the terms are defined to be within 10%, in another embodiment within 5%, in another embodiment within 1%, and in another embodiment within 0.5%. The term "coupled", as used herein, is defined as connected, although not necessarily directly, and not necessarily mechanically. A device or structure that is "configured" in a certain way is configured in at least that way, but may also be configured in ways that are not listed.
It will be appreciated that some embodiments may be comprised of one or more general purpose or special purpose processors (or "processing devices"), such as microprocessors, digital signal processors, custom processors, and Field Programmable Gate Arrays (FPGAs), and unique stored program instructions (including both software and firmware) that control the one or more processors to implement, in conjunction with certain non-processor circuits, some, most, or all of the functions of the methods and/or apparatus described herein. Alternatively, some or all functions could be implemented by a state machine that has no stored program instructions, or in one or more Application Specific Integrated Circuits (ASICs), in which various functions or some combinations of certain of the functions are implemented as custom logic. Of course, a combination of these two approaches may also be used.
Furthermore, one embodiment may be implemented as a computer-readable storage medium having computer-readable code stored thereon for programming a computer (e.g., including a processor) to perform a method as described and claimed herein. Examples of such computer-readable storage media include, but are not limited to, hard disks, CD-ROMs, optical memory devices, magnetic memory devices, ROMs (read only memories), PROMs (programmable read only memories), EPROMs (erasable programmable read only memories), EEPROMs (electrically erasable programmable read only memories), and flash memories. Moreover, it is expected that one of ordinary skill, notwithstanding possibly significant effort and many design choices motivated by, for example, available time, current technology, and economic considerations, when guided by the concepts and principles disclosed herein will be readily capable of generating such software instructions and programs and Integrated Circuits (ICs) with minimal experimentation.
The abstract of the disclosure is provided to allow the reader to quickly ascertain the nature of the technical disclosure. This Abstract is submitted with the understanding that it will not be used to interpret or limit the scope or meaning of the claims. Furthermore, in the foregoing detailed description, it can be seen that various features are grouped together in various embodiments for the purpose of streamlining the disclosure. This method of disclosure is not to be interpreted as reflecting an intention that the claimed embodiments require more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive subject matter lies in less than all features of a single disclosed embodiment. Thus the following claims are hereby incorporated into the detailed description, with each claim standing on its own as a separately claimed subject matter.

Claims (18)

1. An arrangement for determining a distance to a target to be read by image capture within a range of working distances, the arrangement comprising:
an energizable aiming assembly configured to direct an aiming spot along an aiming axis to a target when energized;
a controller configured to energize and de-energize the energizable aiming assembly;
an imaging assembly configured to
capture a first image of the target containing the aiming spot with the energizable aiming assembly energized, and
capture a second image of the target without the aiming spot with the energizable aiming assembly de-energized, each of the first and second images being captured in a respective frame over a field of view having an imaging axis offset from the aiming axis; and
an image preprocessor configured to
compare first image data from the first image with second image data from the second image over a common fractional region of each of the respective frames to obtain a location of the aiming spot in the first image, and
determine a distance to the target based on the location of the aiming spot in the first image,
wherein the energizable targeting assembly, the imaging assembly, and the image pre-processor are incorporated in an imaging module having a first imager configured to image a first target over a first imaging field of view and a second imager configured to image a second target over a second imaging field of view, the first target being positioned closer than the second target, the first field of view being wider than the second field of view, and wherein the imaging assembly is one of the first imager or the second imager.
2. The arrangement of claim 1 wherein the image preprocessor subdivides the common fractional region into a plurality of sub-frames and compares the first image data and the second image data in each of the plurality of sub-frames to obtain the location of the aiming spot in at least one of the plurality of sub-frames.
3. The arrangement of claim 2, wherein the image pre-processor subdivides a region around the location of the aiming spot into a plurality of sub-regions and compares the first image data and the second image data in each of the plurality of sub-regions to obtain the location of the aiming spot in at least one of the plurality of sub-regions.
4. The arrangement of claim 2, wherein the imaging assembly captures each of the first and second images as an array of pixels having brightness values, and wherein the image preprocessor is configured to
average the brightness values in each of the plurality of sub-frames to obtain an average brightness value, and
compare the difference between the average brightness value of each of the plurality of sub-frames of the first image and the average brightness value of each of the plurality of sub-frames of the second image to obtain the location of the aiming spot based on the greatest difference between the average brightness values in at least one of the plurality of sub-frames.
5. The arrangement of claim 1, wherein the imaging assembly is the second imager with variable focal length.
6. The arrangement of claim 5, wherein the controller adjusts a focal length of the second imager based on the determined target distance.
7. The arrangement of claim 5, wherein the controller selects one of the first imager and the second imager and adjusts at least one of a gain and an exposure for the selected imager based on the determined target distance.
8. The arrangement of claim 5, further comprising an illumination light assembly mounted on the module, the illumination light assembly configured to generate a variable level of illumination light, wherein the controller adjusts the level of the illumination light based on the determined target distance.
9. The arrangement of claim 1, further operable to adjust at least one reading parameter of an imaging reader for reading a target, wherein the controller is further configured to adjust the at least one reading parameter of the imaging reader based on the determined target distance.
10. A method of determining a distance to a target to be read by image capture within a range of working distances, the method comprising the steps of:
directing an aiming spot along an aiming axis to the target;
capturing a first image of the target including the aiming spot;
subsequently ceasing to direct the aiming spot at the target;
capturing a second image of the target without the aiming spot, each of the first and second images being captured in a respective frame over a field of view having an imaging axis offset from the aiming axis;
comparing first image data from the first image to second image data from the second image over a common fractional region of each of the respective frames to obtain a location of the aiming spot in the first image; and
determining a distance to the target based on the position of the aiming spot in the first image,
wherein at least one of capturing the first image and capturing the second image is performed by one of a first imager configured to image a first target over a first imaging field of view and a second imager configured to image a second target over a second imaging field of view, the first target being located closer than the second target, the first field of view being wider than the second field of view.
11. The method of claim 10, further comprising subdividing the common fractional region into a plurality of sub-frames, and comparing the first image data and the second image data in each of the plurality of sub-frames to obtain a location of the aiming spot in at least one of the plurality of sub-frames.
12. The method of claim 11, further comprising subdividing an area around the location of the aiming spot into a plurality of sub-areas, and comparing the first image data and the second image data in each of the plurality of sub-areas to obtain the location of the aiming spot in at least one of the plurality of sub-areas.
13. The method of claim 11, wherein capturing the first image and capturing the second image are performed by: capturing each of the first and second images as an array of pixels having brightness values, averaging the brightness values in each of the plurality of sub-frames to obtain an average brightness value, and comparing a difference between the average brightness value of each of the plurality of sub-frames of the first image and the average brightness value of each of the plurality of sub-frames of the second image to obtain the location of the aiming spot based on a maximum difference between the average brightness values in at least one of the plurality of sub-frames.
14. The method of claim 10, wherein one of the first imager and the second imager has a variable focal length.
15. The method of claim 14, further comprising adjusting a focal length of the second imager based on the determined target distance.
16. The method of claim 14, further comprising selecting one of the first imager and the second imager and adjusting at least one of a gain and an exposure for the selected imager based on the determined target distance.
17. The method of claim 14, further comprising illuminating the target with a variable level of illumination light, and adjusting the level of illumination light based on the determined target distance.
18. The method of claim 10, further operable for adjusting at least one reading parameter of an imaging reader for reading a target, the method further comprising adjusting the at least one reading parameter of the imaging reader based on the determined target distance.
CN202210540586.5A 2016-05-26 2017-05-11 Arrangement and method for determining a target distance and adjusting a reading parameter of an imaging reader based on the target distance Pending CN114818759A (en)

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US15/165,117 US11009347B2 (en) 2016-05-26 2016-05-26 Arrangement for, and method of, determining a distance to a target to be read by image capture over a range of working distances
US15/165,117 2016-05-26
US15/170,464 2016-06-01
US15/170,464 US9800749B1 (en) 2016-06-01 2016-06-01 Arrangement for, and method of, expeditiously adjusting reading parameters of an imaging reader based on target distance
PCT/US2017/032138 WO2017205065A1 (en) 2016-05-26 2017-05-11 Arrangement for, and method of, determining a target distance and adjusting reading parameters of an imaging reader based on target distance
CN201780031989.4A CN109154974B (en) 2016-05-26 2017-05-11 Apparatus and method for determining target distance and adjusting reading parameters of an imaging reader based on target distance

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201780031989.4A Division CN109154974B (en) 2016-05-26 2017-05-11 Apparatus and method for determining target distance and adjusting reading parameters of an imaging reader based on target distance

Publications (1)

Publication Number Publication Date
CN114818759A true CN114818759A (en) 2022-07-29

Family

ID=59054173

Family Applications (3)

Application Number Title Priority Date Filing Date
CN201780031989.4A Active CN109154974B (en) 2016-05-26 2017-05-11 Apparatus and method for determining target distance and adjusting reading parameters of an imaging reader based on target distance
CN202210540586.5A Pending CN114818759A (en) 2016-05-26 2017-05-11 Arrangement and method for determining a target distance and adjusting a reading parameter of an imaging reader based on the target distance
CN202210538453.4A Pending CN114818757A (en) 2016-05-26 2017-05-11 Arrangement and method for determining a target distance and adjusting a reading parameter of an imaging reader based on the target distance

Family Applications Before (1)

Application Number Title Priority Date Filing Date
CN201780031989.4A Active CN109154974B (en) 2016-05-26 2017-05-11 Apparatus and method for determining target distance and adjusting reading parameters of an imaging reader based on target distance

Family Applications After (1)

Application Number Title Priority Date Filing Date
CN202210538453.4A Pending CN114818757A (en) 2016-05-26 2017-05-11 Arrangement and method for determining a target distance and adjusting a reading parameter of an imaging reader based on the target distance

Country Status (4)

Country Link
CN (3) CN109154974B (en)
DE (1) DE112017002649T5 (en)
GB (5) GB202207975D0 (en)
WO (1) WO2017205065A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10803265B2 (en) 2018-03-22 2020-10-13 Symbol Technologies, Llc Aiming light patterns for use with barcode readers and devices systems and methods associated therewith
US10452885B1 (en) * 2018-04-17 2019-10-22 Zebra Technologies Corporation Optimized barcode decoding in multi-imager barcode readers and imaging engines
US11790197B2 (en) 2021-10-11 2023-10-17 Zebra Technologies Corporation Miniature long range imaging engine with auto-focus, auto-zoom, and auto-illumination system

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2008027316A2 (en) * 2006-08-31 2008-03-06 Intermec Ip Corp. Auto-focusing method for an automatic data collection device
US20100147957A1 (en) * 2008-12-17 2010-06-17 Vladimir Gurevich Range finding in imaging reader for electro-optically reading indicia
US9413981B2 (en) * 2012-10-19 2016-08-09 Cognex Corporation System and method for determination and adjustment of camera parameters using multi-gain images
US9185306B1 (en) * 2014-05-15 2015-11-10 Symbol Technologies, Llc Imaging module and reader for, and method of, illuminating and imaging targets to be read over an extended range of working distances

Also Published As

Publication number Publication date
GB2565247A (en) 2019-02-06
GB202207975D0 (en) 2022-07-13
CN114818757A (en) 2022-07-29
GB201818127D0 (en) 2018-12-19
CN109154974A (en) 2019-01-04
GB2565247B (en) 2022-02-09
GB202207933D0 (en) 2022-07-13
GB202207955D0 (en) 2022-07-13
GB2598873B (en) 2022-08-10
DE112017002649T5 (en) 2019-03-07
WO2017205065A1 (en) 2017-11-30
CN109154974B (en) 2022-05-24
GB2598873A (en) 2022-03-16

Similar Documents

Publication Publication Date Title
CN107241534B (en) Imaging module and reader for rapidly setting imaging parameters of imager and method thereof
US9800749B1 (en) Arrangement for, and method of, expeditiously adjusting reading parameters of an imaging reader based on target distance
US9185306B1 (en) Imaging module and reader for, and method of, illuminating and imaging targets to be read over an extended range of working distances
US10929623B2 (en) Imaging module and reader for, and method of, reading targets by image capture over a range of working distances with multi-functional aiming light pattern
US9305197B2 (en) Optimizing focus plane position of imaging scanner
US8534556B2 (en) Arrangement for and method of reducing vertical parallax between an aiming pattern and an imaging field of view in a linear imaging reader
US9646188B1 (en) Imaging module and reader for, and method of, expeditiously setting imaging parameters of an imager based on the imaging parameters previously set for a default imager
US11009347B2 (en) Arrangement for, and method of, determining a distance to a target to be read by image capture over a range of working distances
US10534944B1 (en) Method and apparatus for decoding multiple symbology types
US9141833B2 (en) Compact aiming light assembly and imaging module for, and method of, generating an aiming light spot with increased brightness and uniformity from a light-emitting diode over an extended working distance range in an imaging reader
CN109154974B (en) Apparatus and method for determining target distance and adjusting reading parameters of an imaging reader based on target distance
US10491790B2 (en) Imaging module and reader for, and method of, variably illuminating targets to be read by image capture over a range of working distances
US10671824B2 (en) Decoding designated barcode in field of view of barcode reader
US20150371070A1 (en) Efficient optical illumination system and method for an imaging reader
US11853838B2 (en) Systems and approaches for reducing power consumption in industrial digital barcode scanners
CN107871097B (en) Imaging module and reader for reading an object and method
CN110390221B (en) Optimized barcode decoding in a multi-imager barcode reader and imaging engine
US9213880B2 (en) Method of optimizing focus plane position of imaging scanner
US7679724B2 (en) Determining target distance in imaging reader

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination