CN108460307B - Symbol reader with multi-core processor and operation system and method thereof - Google Patents

Symbol reader with multi-core processor and operation system and method thereof

Info

Publication number
CN108460307B
Authority
CN
China
Prior art keywords
image
vision system
core
decoding
cores
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810200359.1A
Other languages
Chinese (zh)
Other versions
CN108460307A (en)
Inventor
L·努恩宁克
R·罗伊特
F·温岑
M·茹森
J·凯斯滕
J·A·内格罗
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cognex Corp
Original Assignee
Cognex Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Family has litigation
First worldwide family litigation filed: https://patents.darts-ip.com/?family=50407267&utm_source=google_patent&utm_medium=platform_link&utm_campaign=public_patent_search&patent=CN108460307(B). "Global patent litigation dataset" by Darts-ip is licensed under a Creative Commons Attribution 4.0 International License.
Priority claimed from US13/645,173 external-priority patent/US10154177B2/en
Priority claimed from US13/645,213 external-priority patent/US8794521B2/en
Application filed by Cognex Corp filed Critical Cognex Corp
Publication of CN108460307A
Application granted
Publication of CN108460307B
Legal status: Active
Anticipated expiration

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 1/00: General purpose image data processing
    • G06T 1/20: Processor architectures; processor configuration, e.g. pipelining
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06K: GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00: Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10: Methods or arrangements for sensing record carriers by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/10544: Sensing record carriers by scanning of the records by radiation in the optical part of the electromagnetic spectrum
    • G06K 7/10821: Further details of bar or optical code scanning devices
    • G06K 7/10831: Arrangement of optical elements, e.g. lenses, mirrors, prisms

Abstract

The present invention provides a vision system camera, and an associated method of operation, having a multi-core processor, a high-speed, high-resolution imager, a field of view extender, an autofocus lens, and a preprocessor connected to the imager for preprocessing image data. The arrangement provides highly desirable acquisition and processing speed, as well as image sharpness, in a wide range of applications. The mechanism effectively scans objects that require a wide field of view, vary in size, and move relatively quickly with respect to the system field of view. The vision system provides a physical package with a variety of physical interconnect interfaces to support various options and control functions. The package optimizes heat exchange with the surrounding environment by arranging components so that internally generated heat is dissipated efficiently, and includes heat-dissipating structures (e.g., fins) to facilitate such heat exchange. The system also allows for a variety of multi-core processes that optimize and load-balance image processing and system operations (e.g., auto-adjustment tasks).

Description

Symbol reader with multi-core processor and operation system and method thereof
Divisional application
The present application is a divisional application of the application entitled "Symbol reader with multi-core processor and operation system and method thereof," application No. 2013104653303, filed October 8, 2013.
Technical Field
The present invention relates to machine vision systems, and more particularly to vision systems capable of acquiring, processing and decoding symbols, such as barcodes.
Background
Vision systems that perform measurement, inspection, and alignment of objects and/or decode symbols (e.g., one-dimensional and two-dimensional bar codes, also referred to as "IDs") are used in a wide range of applications and industries. Such systems are based on the use of an image sensor (also referred to as an "imager") that acquires images (typically grayscale or color, and in one, two, or three dimensions) of an object or target, and processes these acquired images using an onboard or interconnected vision system processor. The processor typically includes both processing hardware and non-transitory computer readable program instructions that execute one or more vision system processes on the image information to produce a desired output. The image information is typically provided as an array of image pixels, each having a particular color and/or intensity. In the example of a symbol reader (also referred to herein as a "camera"), a user or an automated process acquires an image of a target that is believed to contain one or more barcodes, two-dimensional codes, or other symbol types. This image is processed to identify the features of the code, which is then decoded by a decoding program and/or processor to obtain the alphanumeric data inherently represented by the code.
One common application of ID readers is to track and sort objects moving along a route (such as a conveyor belt) in production and logistics operations. The ID reader can be positioned above the route so that it acquires the IDs of all required objects at an appropriate viewing angle as each object moves through its field of view. The focal distance of the reader with respect to the object may vary depending on the placement of the reader relative to the route of motion and the size (e.g., height) of the object. That is, a larger object may bring its ID closer to the reader, while a smaller/flatter object may place its ID farther from the reader. In each case, the ID should appear at sufficient resolution so that it can be properly imaged and decoded. Disadvantageously, most commercially available vision system cameras rely on imaging sensors that define pixel arrays of nearly square dimensions (e.g., near a 1:1 aspect ratio, with more typical ratios being 4:3, 5:4, or 16:9). This width/height ratio is a poor match for reading applications in which objects pass on a conveyor line that is wider than the field of view (FOV) of the camera. More generally, the height of the field of view need only be slightly larger than the ID (or other useful region), while the width of the field of view should be approximately equal to or slightly larger than the width of the conveyor line. In some instances, a line-scan camera may be employed to cope with object movement and a wide field of view. However, such a scheme is not applicable to objects and line mechanisms of certain geometries. Likewise, line-scan (i.e., one-dimensional) imaging sensors tend to be more costly than conventional rectangular-format sensors.
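By way of rough illustration of the width/resolution trade-off described above, the following sketch (not part of the patent; the function name, parameters, and numbers are hypothetical) estimates how many side-by-side fields of view would be needed to cover a conveyor at a minimum decode resolution:

```python
# Illustrative sketch (hypothetical values): estimate how many side-by-side
# fields of view are needed to cover a conveyor at a usable decode resolution.

def strips_needed(conveyor_width_mm: float,
                  sensor_width_px: int,
                  min_px_per_mm: float,
                  overlap_mm: float) -> int:
    """Return the number of horizontal FOV strips required."""
    # Widest scene each strip may cover while keeping the required resolution.
    usable_fov_mm = sensor_width_px / min_px_per_mm
    # Each added strip contributes its width minus the shared overlap.
    effective_mm = usable_fov_mm - overlap_mm
    if effective_mm <= 0:
        raise ValueError("overlap consumes the entire field of view")
    n = 1
    covered = usable_fov_mm
    while covered < conveyor_width_mm:
        n += 1
        covered += effective_mm
    return n

# Example: 2048-px-wide sensor, 2 px/mm needed, 700 mm conveyor, 60 mm overlap.
print(strips_needed(700, 2048, 2.0, 60))   # -> 1 (1024 mm usable width suffices)
print(strips_needed(1600, 2048, 2.5, 60))  # -> 3
```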
In the case of relatively wide objects and/or lines, the lens or imager of a single ID reader may not have a sufficient field of view in the lateral direction to cover the entire width of the route while maintaining the resolution required to accurately image and decode the ID. Failure to image the full width can cause the reader to miss IDs that fall outside its field of view or that pass through the field of view too quickly. One costly way to provide the required width is to employ multiple cameras across the width of the line, typically networked together to share image data and processes. Alternatively, a wider field of view aspect ratio can be achieved for one or more cameras by optically expanding the native field of view of the sensor using a field expander that divides the field of view into a plurality of narrower strips extending across the width of the conveyor line. A challenge in providing such a mechanism is that a strip that is narrower in the upstream-to-downstream direction of the moving line may require a higher frame rate to ensure that the ID is fully captured before it exits the strip. This imposes a substantial processing-speed requirement on the system, and current imager-based decoding systems that acquire over a wide area substantially lack the frame rate required for reliable decoding at high object-passing speeds.
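The frame-rate pressure created by narrower strips can be illustrated with a simple calculation. The sketch below is hypothetical (the parameter names, the two-view requirement, and the numbers are assumptions, not values from the patent):

```python
# Illustrative sketch (hypothetical numbers): minimum frame rate so that a code
# of a given length is seen in full at least `min_views` times while it crosses
# the along-travel extent of one FOVE strip.

def min_frame_rate(strip_length_mm: float,
                   code_length_mm: float,
                   line_speed_mm_s: float,
                   min_views: int = 2) -> float:
    # Travel distance (mm) during which the whole code lies inside the strip.
    window_mm = strip_length_mm - code_length_mm
    if window_mm <= 0:
        raise ValueError("strip is shorter than the code in the travel direction")
    window_s = window_mm / line_speed_mm_s
    return min_views / window_s  # frames per second

# Example: a 2-way FOVE halves a 300 mm strip to 150 mm; 40 mm code at 2.5 m/s.
print(round(min_frame_rate(150, 40, 2500), 1))  # -> 45.5 fps for two full views
```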
A further challenge in operating a vision system based ID reader is that focus and illumination should be set to relatively optimal values to provide a readable ID image for the decoding application. This requires fast analysis of the focal distance and the illumination conditions so that these parameters can be automatically calculated and/or adjusted. With a wider field of view and/or higher object throughput relative to the imaged scene, the processing speed required to perform such functions may not be achievable using conventional vision system based readers.
Typically, to provide such high-speed functionality, the imager/sensor may acquire images at a relatively high frame rate. It is therefore generally desirable to provide an image processing mechanism/process that employs these image frames more efficiently, in ways that increase the system's capacity to adjust parameters and read image data at high rates.
Disclosure of Invention
The present invention overcomes the shortcomings of the prior art by providing a vision system camera, and a cooperating method of operation, having a multi-core processor, a high-speed, high-resolution imager, a field of view extender (FOVE), an autofocus lens, and a preprocessor coupled to the imager for preprocessing image data, which together provide highly desirable acquisition and processing speed, and image clarity, in a wide range of applications. The mechanism efficiently scans objects that require a wide field of view, present useful features at varying sizes and locations, and move relatively quickly with respect to the system field of view. The vision system provides a physical package with a variety of physical interconnect interfaces to support various options and control functions. The package optimizes heat exchange with the surrounding environment by arranging components so that internally generated heat is dissipated efficiently, and includes heat-dissipating structures (e.g., fins) to facilitate such heat exchange. The system also allows for a variety of multi-core processes that optimize and load-balance image processing and system operations (e.g., auto-adjustment tasks).
In an exemplary embodiment, the vision system includes a camera housing that houses an imager and a processor mechanism. The processor mechanism includes (a) a pre-processor interconnected with the imager that receives and pre-processes images from the imager at a first frame rate (e.g., 200 to 300 or more images per second), and (b) a multi-core processor (having multiple cores) that receives the pre-processed images from the pre-processor and performs vision system tasks on them, thereby producing results related to the information in the images. It should be noted that the term "core" as used herein should be broadly construed to include discrete "sets of cores" assigned a particular task. Illustratively, the first frame rate is substantially higher than the second frame rate at which the multi-core processor receives images from the pre-processor. The preprocessor (e.g., an FPGA, ASIC, DSP, etc.) may also be interconnected with a data memory that buffers images from the imager. In various processes, portions or partial images may be buffered based on instructions from the pre-processor where a particular function (e.g., automatic adjustment) does not require the entire image. Likewise, downsampled (sub-sampled) image data may be buffered for processes, such as auto-adjustment, that do not require a full-resolution image when performing a task. Further, the multi-core processor may be interconnected with a data memory that stores operating instructions for each core of the multi-core processor. The memory also stores the image data that is processed by each core according to a schedule. In particular, the schedule enables operations to be assigned selectively to each core for each image, so as to increase the efficiency of result generation. The schedule may command one or more cores to perform system tasks (also referred to as "system operation tasks," which are not directly linked to image processing and decoding), such as automatic adjustments, including illumination control, brightness/exposure, and focus of an autofocus lens. The lens may be a liquid lens or another type of variable-focus lens. The preprocessor may be constructed and arranged to perform such preset automatic adjustment operations based at least in part on information generated by system tasks performed by at least one core. More particularly, the results produced by the cores may include decoded symbols (IDs/codes) imaged from an object.
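As a rough illustration of how such a schedule might assign work, the following minimal sketch (hypothetical; it is not the patent's implementation, and the round-robin policy and names are assumptions) hands decode jobs to most cores while reserving one core for system-operation tasks:

```python
# Minimal scheduling sketch (hypothetical): a round-robin schedule hands decode
# work to most cores while reserving one core for system-operation tasks such
# as auto-focus/auto-exposure updates.

from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class Core:
    core_id: int
    queue: List[Callable[[], None]] = field(default_factory=list)

def build_schedule(cores: List[Core], images: List[bytes],
                   decode: Callable[[bytes], None],
                   system_task: Callable[[], None]) -> None:
    """Assign each image to a decode core; core 0 also runs system tasks."""
    decode_cores = cores[1:] if len(cores) > 1 else cores
    for i, img in enumerate(images):
        core = decode_cores[i % len(decode_cores)]
        core.queue.append(lambda img=img: decode(img))
    # System-operation work (e.g., a lens-focus update) scheduled on core 0.
    cores[0].queue.append(system_task)

cores = [Core(0), Core(1)]
build_schedule(cores, [b"frame1", b"frame2", b"frame3"],
               decode=lambda img: None,
               system_task=lambda: None)
print([len(c.queue) for c in cores])  # -> [1, 3]
```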
In an exemplary embodiment, the camera assembly lens may be optically coupled to a FOVE that divides an image received at the imager into a plurality of partial images along an expanded width. These partial images may be stacked vertically on the imager and include overlap in the width direction. This overlap appears in each partial image and can be made wide enough to completely contain the largest ID/code that must be viewed, ensuring that no symbol is lost because it straddles the boundary between fields of view. Illustratively, each partial image is processed separately by a separate core (or a separate set of cores) of the multi-core processor. To assist in auto-calibration, the FOVE may include a fiducial at a known focal distance relative to the imager, located in the optical path at a position where it can be selectively or partially exposed to the imager, so that image acquisition can be accomplished without significant interference from the fiducial during runtime. A self-calibration process uses the fiducial to determine the focal distance (focus) of the lens. The fiducial may illustratively be located on an optical component of the FOVE. Optionally, the FOVE housing supports an exterior illuminator that is removably attached to the housing by a snap-fit alignment structure and magnets.
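The overlap-sizing rule can be illustrated with a small sketch. The following is hypothetical (the margin value, strip layout, and function names are assumptions) and simply checks that a code which straddles a seam still lies entirely within one strip:

```python
# Sketch (assumed parameters): the lateral overlap between adjacent FOVE fields
# must be at least the widest code expected, so a code straddling the seam
# still appears whole in one of the stacked partial images.

def required_overlap_mm(max_code_width_mm: float, margin_mm: float = 5.0) -> float:
    return max_code_width_mm + margin_mm

def code_fits_in_one_strip(code_left_mm, code_width_mm, strip_right_edges_mm,
                           overlap_mm):
    """True if [code_left, code_left+width] lies fully inside some strip."""
    strip_left = 0.0
    for right in strip_right_edges_mm:
        if code_left_mm >= strip_left and code_left_mm + code_width_mm <= right:
            return True
        strip_left = right - overlap_mm  # next strip starts inside the overlap
    return False

# Two 500 mm strips overlapping by 60 mm: a 50 mm code at x=460 spans the seam
# of strip 1 but fits entirely in strip 2 (which starts at 440).
print(code_fits_in_one_strip(460, 50, [500, 940], 60))  # -> True
```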
The physical package of the camera assembly is constructed of a material with good thermal conductivity, such as an aluminum alloy, to transfer heat more quickly to the surrounding environment. The processor mechanism includes an imager board carrying the imager and a main board carrying the multi-core processor; the main board is biased by a spring-loaded bracket assembly against a side of the camera assembly housing interior, achieving a secure yet removable fit that presses tightly against the interior side wall of the housing for enhanced heat transfer from the main board. To further enhance heat exchange and the tightness of this fit, the main board arranges its raised circuit elements in a profile that follows the inner contour of the camera housing so as to minimize the distance between them. The camera assembly housing also includes a plurality of heat fins on its exterior for heat exchange with the ambient environment. The housing can further support one or more external fans. The housing front face is adapted to mount a removable lens assembly. Such a removable lens assembly may include a liquid lens connected by a cable to a connector on one side (e.g., the front) of the camera assembly housing. Another connector is provided to control optional internal (or external) illumination. The rear of the camera includes a separate I/O board that is connected to the main board by an electronic link. The I/O board includes a plurality of externally exposed connectors for interfacing various data and control functions. One such control/function is an external speed signal (e.g., an encoder signal) from the line that moves relative to the field of view of the camera assembly. The preprocessor and/or multi-core processor is constructed and arranged to perform, based on the speed signal and a plurality of images, at least one of: (a) controlling the focus of the variable lens; (b) determining the focal distance to an imaged object; (c) calibrating the focal distance relative to the moving line; and (d) determining the relative speed of an imaged object. Typically, the camera housing includes a front face and a rear face, each of which is sealingly attached (with a gasket) at a respective seam at each of the opposite ends of the body. Optionally, the seam between one (or both) of the front and rear faces and the body includes a ring of translucent material constructed and arranged to illuminate in one of a plurality of preset colors to provide the user with an indicator of system status. For example, the ring may illuminate green for a good (successful) ID read, and red for no (failed) ID read.
In an embodiment, the preprocessor may be adapted to selectively transfer images from a buffer memory to the multi-core processor for further processing by its cores, based on the preprocessor's identification of useful features (e.g., symbols/IDs/codes).
In an exemplary embodiment, a method for processing images in a vision system includes acquiring images in an imager of a vision system camera at a first frame rate and transmitting at least a portion of the images to a multi-core processor. The transmitted images are processed to produce results in each of a plurality of cores of the multi-core processor according to a schedule, which includes information associated with the images. The processing step may further comprise identifying, in at least one of the plurality of cores, transmitted images that contain a symbol, and performing decoding on the symbol-containing images in another of the plurality of cores, so that one core identifies the presence of a symbol (and optionally provides other information related to the symbol, such as resolution, symbol type, etc.) while another core decodes the symbol that has been identified. Optionally, the processing step may comprise performing image analysis on the transmitted images in at least one of the plurality of cores to identify images having sufficient features for decoding; in other words, that core determines whether the image is sharp enough to be usable for decoding. Another core performs the decoding step on images with sufficient features, so that unusable images are discarded before any attempt is made to locate and/or decode the symbol. In an embodiment, for a transmitted image, the decoding step is performed in at least one of the plurality of cores using a first decoding process (e.g., an algorithm) and in another of the plurality of cores using a second decoding process, so that decoding may succeed in at least one of the decoding processes. Illustratively, the decoding step may entail decoding an image in at least one of the plurality of cores and, after a preset time interval, if (a) the image has not yet finished decoding and (b) it is expected that more time would allow the image to decode, continuing to decode the image in another of the plurality of cores. Alternatively, after the time limit has passed, if it appears likely that more time will lead to a successful decode, the system may allow the same core to continue decoding and assign the next image to a different core. In further embodiments, the system may provide load balancing when multiple image frames contain multiple types of symbols (e.g., one-dimensional and two-dimensional codes). The cores partition the images such that one-dimensional (1D) codes and two-dimensional (2D) codes are provided to the cores in a relatively load-balanced manner.
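A minimal sketch of the find/decode split and the time-limited hand-off follows. It is illustrative only: find_symbols, try_decode, the queues, and the 20 ms budget are assumed placeholders, not the patent's actual routines:

```python
# Sketch of the find/decode split across cores with a time-limited hand-off.
# find_symbols() and try_decode() are hypothetical callables supplied by the
# caller; the 0.02 s budget is an assumed value.

import time
from queue import Queue

def locator_core(frames: Queue, candidates: Queue, find_symbols):
    """Core A: flag frames that appear to contain a symbol and pass them on."""
    while not frames.empty():
        frame = frames.get()
        regions = find_symbols(frame)          # cheap presence/location test
        if regions:
            candidates.put((frame, regions))

def decoder_core(candidates: Queue, overflow: Queue, try_decode,
                 budget_s: float = 0.02):
    """Core B: decode within a time budget; unfinished work is handed off."""
    results = []
    while not candidates.empty():
        frame, regions = candidates.get()
        start = time.monotonic()
        decoded = try_decode(frame, regions, deadline=start + budget_s)
        if decoded is None and time.monotonic() - start >= budget_s:
            overflow.put((frame, regions))     # another core continues later
        elif decoded is not None:
            results.append(decoded)
    return results
```

In this sketch, frames that fail quickly (no-reads) are simply dropped, while frames that run out of budget are requeued for another core, mirroring the hand-off alternative described above.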
In a further embodiment, cores may be assigned to non-decoding system tasks based on the current trigger frequency. A trigger frequency below a threshold allows a core to be used for system tasks such as auto-adjustment, while a higher trigger frequency indicates that the core should be used for decoding (e.g., generating results related to image information). As described above, the various processes associated with core allocation may be intermixed during operation of the vision system, and processing resources (cores) may be reallocated for various purposes.
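A simple way to express this policy is sketched below; the 5 Hz threshold and the one-core reservation are assumptions chosen for illustration only:

```python
# Sketch (threshold and core counts are assumptions): allocate cores between
# decoding and non-decode system tasks based on the current trigger frequency.

def allocate_cores(trigger_hz: float, n_cores: int,
                   low_rate_hz: float = 5.0) -> dict:
    if trigger_hz <= low_rate_hz:
        # Few objects arriving: spare a core for auto-tuning/system tasks.
        return {"decode": max(1, n_cores - 1), "system": 1}
    # High trigger rate: every core works on result generation.
    return {"decode": n_cores, "system": 0}

print(allocate_cores(2.0, 4))   # -> {'decode': 3, 'system': 1}
print(allocate_cores(20.0, 4))  # -> {'decode': 4, 'system': 0}
```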
Drawings
The following description of the invention refers to the accompanying drawings, in which:
FIG. 1 is a schematic diagram of a vision system arranged relative to an exemplary motion pipeline having objects of various sizes and shapes, including IDs or other symbols, each object passing through the field of view of the system according to an exemplary embodiment;
FIG. 2 is a block diagram of circuitry for acquiring and processing image data, and for controlling various system functions, according to an exemplary embodiment;
FIG. 3 is a front perspective view of the vision system camera assembly of FIG. 1 in accordance with an exemplary embodiment;
FIG. 4 is a rear perspective view of the vision system camera assembly of FIG. 1 in accordance with an exemplary embodiment;
FIG. 5 is a side cross-sectional view of the vision system camera assembly taken along line 5-5 of FIG. 3;
FIG. 5A is a rear cross-sectional view of the vision system camera assembly taken along line 5A-5A of FIG. 3;
FIG. 6 is a front perspective view of the vision system camera assembly of FIG. 1 with the internal illumination assembly and lens removed;
FIG. 7 is a perspective view of the vision system of FIG. 1 including a vision system camera assembly and the field of view extender (FOVE) described, the FOVE cooperating with an external beam-type illuminator mounted thereon, in accordance with an exemplary embodiment;
FIG. 7A is a more detailed top cross-sectional view of a coupling according to FIG. 7 disposed between a FOVE housing and a front of a camera assembly;
FIG. 8 is a perspective view of the optical components of the exemplary FOVE of FIG. 7, shown with the housing removed;
FIG. 9 is a plan view of the optical components of the exemplary FOVE of FIG. 7, shown with the housing removed and a wide field of view image being acquired;
FIG. 10 is a schematic view of a stacked arrangement of multiple fields of view provided by the FOVE of FIG. 7 for an imager of a camera assembly;
FIG. 11 is a front view of the FOVE of FIG. 7 with a beam-type illuminator disposed on a bracket relative to the FOVE housing and a coupler mated with the camera assembly of FIG. 1;
FIG. 12 is a top cross-sectional view of a portion of a film-based liquid lens assembly mounted in and controlled by the camera assembly of FIG. 1 in accordance with an exemplary embodiment;
FIG. 13 is a rear perspective view of the inner member of the camera assembly of FIG. 1 with the housing body removed and showing in detail the "360 degree" annular indicator structure between the body and the front thereof;
FIG. 14 is a flowchart of a generalized run of a scheduling algorithm/process for assigning system operation tasks and vision system tasks to cores of a multi-core processor of the vision system of FIG. 1;
FIG. 15 is a block diagram of a multi-core process in which an image frame is divided into portions that are respectively assigned to multiple cores for processing;
FIG. 16 is a block diagram of a multi-core process in which an image frame is allocated to one core for processing while another core performs one or more system tasks;
FIG. 17 is a flowchart showing the dynamic allocation of cores between image-processing tasks and non-image-processing system tasks based on the current trigger frequency;
FIG. 18 is a block diagram of a multi-core process in which the IDs/codes in each image frame are dynamically assigned to the cores in a manner that more effectively balances the processing load across the entire core group;
FIG. 19 is a flowchart showing a decoding process for an ID/code that is reassigned from a first core to a second core after the process exceeds a preset time limit on the first core;
FIG. 20 is a flowchart showing a decoding process for an ID/code that remains assigned to the first core after the process has run on that core beyond a preset time limit;
FIG. 21 is a block diagram of a multi-core process in which IDs/codes in an image frame are allocated to two cores in parallel, where each core performs a different decoding algorithm;
FIG. 22 is a block diagram of a multi-core process in which a series of image frames are each assigned to a different core for processing;
FIG. 23 is a block diagram of a multi-core process in which image frame data is concurrently distributed to a first core running an ID/code lookup process and a second core running an ID/code decoding process based on the looked-up ID/code information provided by the first core;
FIG. 24 is a block diagram of a multi-core process in which image frame data is distributed in parallel to a first core running a vision system process and a second core running an ID/code decoding process based on image information provided by the first core;
FIG. 25 is a block diagram of a multi-core process in which image frame data is distributed in parallel to a first core running an ID/code presence/absence process and a second core running an ID/code location and decoding process based on ID/code presence/absence information provided by the first core;
FIG. 26 is a block diagram of a multi-core process in which image frame data is distributed in parallel to a first core running an image analysis process and a second core running an ID/code location and decoding process based on information about image frame quality and features provided by the first core;
FIG. 27 is a flow chart of a system process for adjusting focus based on comparative measurements from a conveyor/line speed sensor (encoder) and tracking of features on an object through an exemplary vision system field of view;
FIG. 28 is a flow chart of a process for locating a useful feature (ID/code) using a preprocessor (FPGA) connected to an imager and sending only those image frames that appear to contain the useful feature to the multi-core processor for further processing;
FIG. 29 is a side view of the vision system of FIG. 1 showing a self-alignment reference point provided for FOVE and an optional bottom-mounted cooling fan on the vision system camera assembly;
FIG. 29A is a more detailed perspective view of a camera assembly including a bottom-mounted bracket and a cooling fan according to an exemplary embodiment;
FIG. 29B is an exploded perspective view of the camera assembly with the bracket and cooling fan of FIG. 29A;
FIG. 30 is a flow chart of a system process for correcting the non-linearity of the lens drive current curve for focal length/optical power;
FIG. 31 is a flow chart of a system process for determining focus based on an analysis of feature locations in each overlap region of an image of a FOVE projection;
FIG. 32 is a flow chart of a system process for determining the velocity and/or distance of an object through a field of view of the vision system of FIG. 1 from the size change of object features between image frames; and
FIG. 33 is a schematic diagram of an exemplary master-slave mechanism showing a plurality of interconnected camera assemblies and illuminators, according to an embodiment.
Detailed Description
I. Overview of the System
Fig. 1 depicts a vision system 100, also referred to as a "machine vision system," according to an exemplary embodiment. The vision system 100 includes a vision system camera 110, which illustratively includes an integrated (and/or internal) processor mechanism 114. The processor mechanism 114 allows image data acquired by the imager 112 (e.g., a CMOS or CCD sensor, shown in phantom) to be processed to analyze information within the acquired image. The imager 112 is disposed on a mating imager circuit board 113 (also shown in phantom), and the processor mechanism 114 in this embodiment, as described below, has a multi-core architecture comprising at least two separate (discrete) processing cores C1 and C2, which may be provided on a single die according to one embodiment. The processor 114 is disposed on a processor board or "main" board 115, also described below. Likewise, an input/output (I/O) board 117 and a user interface (UI) board 123 are provided, respectively, for interconnection and communication with remote devices and for information display. The functions of the imager 112 and the multi-core processor 114 are described in further detail below. In general, the processor runs a vision system process 119 that takes advantage of the multi-core processor mechanism 114 and runs an ID lookup and decode process 121. Alternatively, all or part of the decoding process may be handled by a dedicated decoder chip on a separate die of the processor 114.
The camera 110 includes a lens assembly 116 that is optionally removable and replaceable with a variety of conventional (or custom) mount-base lens assemblies. The lens assembly may be focused manually or automatically. In one embodiment, the lens assembly 116 may include an auto-focus mechanism based on known systems, such as a commercially available liquid lens system. In one embodiment, the mount base may conform to the geometry of the well-known cine or "C-mount" base; other known or custom geometries are expressly contemplated in alternative embodiments.
As shown, an exemplary field expander (FOVE) 118 is mounted in front of the lens assembly 116. The FOVE expands the width WF of the field of view 120 that the lens assembly 116 would otherwise define at a given focal distance to approximately N times the native width (less the width of any overlap region(s) between the fields of view), while the length LF of the field of view 120 is reduced to approximately 1/N of the native length. The FOVE 118 can be implemented using a variety of mechanisms, and typically includes a set of tilted mirrors that divide the field of view into a series of vertically stacked portions on the imager. In one embodiment, the integrated FOVE is configured with outside mirrors that receive light from different lateral portions of a scene, which may be a line with moving objects (as shown in FIG. 1). Each outer mirror then directs the light to a mating, vertically tilted inner mirror of a beam splitter, which in turn directs the light through an aperture in the FOVE that is substantially aligned with the optical axis of the camera to avoid image distortion. The inner mirrors direct the light from each outer mirror separately onto separate strips on the imager, one vertically stacked (for example) above the other, and the vision system then searches for and analyzes features across the entire image. The fields of view defined by the mirrors include a lateral (width-wise) overlap region that is sized and arranged to ensure that a feature located near the center appears entirely in at least one of the strips. In another embodiment, a moving mirror changes position between acquired image frames so that the full width of the scene is imaged in successive frames. Exemplary FOVE mechanisms, including the FOVE mechanism described herein, are shown and described in U.S. patent application Serial No. 13/367,141, entitled "System and method for visual System View extension," invented by Nunnink et al. This application is incorporated herein by reference as useful background.
In one embodiment, FOVE118 is provided with a first outside mirror that forms an acute angle with respect to the optical axis of the camera and a second outside mirror that forms an opposite acute angle with respect to the opposite side of the optical axis. From the direction of the vision system camera, a beam splitter is located in front of the first and second outside mirrors. The beam splitter is provided with a first reflective surface and a second reflective surface. The exemplary first outside mirror and first reflective surface are arranged to align a first field of view from the scene to the imager along the optical axis. Likewise, the exemplary second outside mirror and second reflective surface are configured to align a second field of view from the scene to the imager along the optical axis. The first field of view is at least partially spaced from the second field of view in a horizontal direction at the scene. In addition, the first outside mirror, the second outside mirror, and the beam splitter are arranged to project each of the first field of view and the second field of view to the imager in a vertically stacked ribbon-like relationship. It should be clear that in the various embodiments herein, a wide variety of FOVE implementations are explicitly contemplated.
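To make the stacked-strip geometry concrete, the following sketch (assumed geometry and numbers; the top strip is taken to be the left field) splits a full frame into its two strips and maps a pixel column in either strip back to a lateral scene coordinate, given the overlap:

```python
# Sketch (assumed geometry): recover the two laterally adjacent fields of view
# from the vertically stacked strips that the FOVE projects onto the imager,
# and map a feature's pixel column in either strip to a scene x-coordinate.

import numpy as np

def split_stacked_frame(frame: np.ndarray):
    """Top half = left field, bottom half = right field (assumed ordering)."""
    h = frame.shape[0] // 2
    return frame[:h, :], frame[h:, :]

def strip_to_scene_x(col_px: int, strip_index: int, strip_width_px: int,
                     overlap_px: int, mm_per_px: float) -> float:
    """Scene x (mm) of a column in strip 0 (left) or strip 1 (right)."""
    offset_px = strip_index * (strip_width_px - overlap_px)
    return (offset_px + col_px) * mm_per_px

frame = np.zeros((2048, 1024), dtype=np.uint8)   # stacked 2x(1024x1024) strips
left, right = split_stacked_frame(frame)
# A feature at column 100 of the right strip, with 120 px overlap, 0.5 mm/px:
print(strip_to_scene_x(100, 1, 1024, 120, 0.5))  # -> 502.0 mm
```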
FOVE allows a field of view sufficient to image objects 122, 124 (e.g., boxes) moving relative to the camera assembly 110 on a moving pipeline 126 at a velocity VL in order to properly acquire useful features (e.g., barcodes 130, 132, 134). As an example, the width WF of the field of view 120 is expanded to approximately match the width WL of the pipeline 126. It is contemplated in alternative embodiments that the object remains stationary and the camera assembly may be moved relative to the object on a rail or other suitable structure (e.g., a robotic arm). For example, two objects 122 and 124 with different heights HO1 and HO2 pass through the field of view 120, respectively. As mentioned above, the height difference is one factor that generally requires the camera assembly to change focal length. The ability to change focus more quickly becomes highly desirable as the subject moves faster through the field of view 120. As such, the ability to more quickly identify useful features and process those features using the vision system processor 114 becomes highly desirable. It is expressly contemplated that multiple vision system camera assemblies with cooperating FOVEs, illuminators, and other accessories may be employed to image objects through a scene. For example, a second vision system 180 (shown in phantom) is provided to image the opposite side of the object. As shown, the additional vision system 180 is connected (via connection 182) to the system 100 described above. This allows for sharing of image data and simultaneous capture and illumination triggering, among other functions (e.g., using a master-slave mechanism of interconnected camera assemblies as described below). Each camera assembly may process image data independently or may execute some or all of the processes in the core of interconnected camera assemblies, in accordance with various multi-core processes as described below. The number, placement and operation of further vision systems is highly variable in various embodiments.
Electronic part of the System
By referring to fig. 2, the circuit layout and functions of the imager circuit board 113, main circuit board 115, I/O circuit board 117, and UI circuit board 123 will be described in more detail. As shown, the imager 112 is located on the imager board 113 and may comprise a commercially available 2-megapixel CMOS grayscale unit, such as the model CMV2000 from CMOSIS of Belgium. Other types and sizes of imagers may be provided in alternative embodiments, including higher or lower resolution imagers, color imagers, multispectral imagers, and the like. Via control and data connections, the imager is operatively connected to an FPGA 210 (or other programmable circuit) that performs image processing procedures in accordance with the exemplary embodiments described below. For purposes of this description, an FPGA or equivalent high-speed processing logic, such as an ASIC, DSP, or the like, may be referred to as an "imager-interfaced" pre-processor that performs initial and/or certain auto-adjustment functions on the stream of image frames received from the imager. Further, although an FPGA is used as an example, any programmable or non-programmable processing logic (or combination of logic devices) that can perform the required pre-processing functions is expressly contemplated for use as the "pre-processor." An exemplary preprocessor circuit is the ECP3 family of FPGAs, commercially available from Lattice Semiconductor of Hillsboro, Oreg. The FPGA 210 is interconnected with a suitably sized non-volatile memory 212 (Flash), which provides configuration data to the FPGA. The FPGA 210 also controls the optional internal illumination 214 (described further below) and the optional variable (e.g., liquid) lens assembly 216 that provides fast autofocus to the camera lens assembly. The preprocessor described herein is adapted to perform certain functions, including but not limited to auto-adjustment, image data conversion, and captured image data storage operations, and a wide variety of additional processes directly related to processing information within an image (e.g., vision system processes, such as finding features) may also be performed by the preprocessor. More generally, the high frame rate of the imager makes such a high-speed processor available (in various embodiments) to run initial processes on the acquired image frames.
One liquid lens assembly capable of rapid operation is the EL-6-18-VIS-LD membrane-based liquid lens, available from Optotune AG of Switzerland. In addition to high-speed operation, this lens illustratively defines a 6 mm aperture, making it well suited to wide-angle imaging and high-speed operation. This exemplary variable lens package has dimensions of 18 × 18.4 × 8.9 (thickness) mm. The control current is between about 0 and 200 mA. The response time is typically less than 2 milliseconds and its correction time is typically less than 10 milliseconds. When this liquid lens is integrated into an exemplary lens assembly, the overall lens assembly provides a field of view of about 20 degrees and a focal distance adjustment range of about 60 millimeters to infinity. In operation, the EL-6-18-VIS-LD is a deformable lens. It comprises an injection-molded container filled with an optical liquid and sealed by an elastic polymer membrane. The deflection of the lens is proportional to the pressure in the liquid. The EL-6-18 employs an electromagnetic actuator that exerts pressure on the container. Thus, the focal distance of the lens is controlled by the current through the actuator coil, and the focal distance decreases with increasing applied current.
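Because the current-to-optical-power relationship is not perfectly linear (see also FIG. 30), one simple control approach is to interpolate a measured calibration table from desired focus distance to drive current. The sketch below is illustrative only; the table values and function names are invented for the example:

```python
# Sketch (calibration values are made up): map a requested focus distance to a
# liquid-lens drive current by interpolating a small calibration table, which
# is one simple way to absorb the non-linearity of the current/power curve.

import bisect

# (focus distance mm, drive current mA) pairs measured during calibration,
# sorted by distance. Hypothetical numbers.
CAL_TABLE = [(60, 190.0), (100, 150.0), (200, 110.0), (500, 70.0), (2000, 35.0)]

def current_for_distance(distance_mm: float) -> float:
    dists = [d for d, _ in CAL_TABLE]
    if distance_mm <= dists[0]:
        return CAL_TABLE[0][1]
    if distance_mm >= dists[-1]:
        return CAL_TABLE[-1][1]
    i = bisect.bisect_left(dists, distance_mm)
    (d0, c0), (d1, c1) = CAL_TABLE[i - 1], CAL_TABLE[i]
    t = (distance_mm - d0) / (d1 - d0)
    return c0 + t * (c1 - c0)   # linear interpolation between calibration points

print(round(current_for_distance(150), 1))  # -> 130.0 mA
```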
A temperature sensor 218 is provided in association with the lens to monitor the operating temperature in the vicinity of the lens. This allows for temperature-based adjustment of the liquid lens, as well as other temperature-related parameters and functions. The temperature sensor resides on the I2C bus 220, which also carries the appropriate control signals, as specified by the lens manufacturer, for the internal illumination 214 and the liquid lens. As described below, additional temperature sensors may be provided on one or more circuit boards (e.g., sensor 288) to monitor the temperature status of various components of the system. As shown, the bus 220 interconnects with the multi-core processor 114 on the main board 115. Likewise, the FPGA 210 is tied to the processor 114 via a Serial Peripheral Interface (SPI) bus 224 and a PCIe bus 226, which respectively carry control and data signals between the units. Illustratively, the SPI bus 224 interface between the FPGA 210 and the processor 114 is employed by the processor 114 to configure the FPGA during system boot. Subsequent configuration, image data, and other system data are transmitted over the PCIe bus 226. The PCIe bus may be configured with dual (2X) lanes. The FPGA 210 is also interconnected via a 16-bit connection with a 64 MB data memory 228 that buffers image data to support the high frame rate of the imager at the imager board level; such image frames can subsequently be employed for downstream image processing or auto-adjustment functions as described below. Typically, a portion of the automatic adjustment may require only a lower-resolution image. Further, a sequence of acquired images may be stored in the memory 228 at a lower resolution (sufficient for FPGA functionality) while the higher-resolution images are sent to the processor 114 for processing as described below. The memory 228 may be of any acceptable type, such as DDR3 dynamic random access memory. Alternatively, another memory type, such as static random access memory (SRAM), may be employed. Suitable supply voltages 230 for the various imager board components are also provided, derived from an external voltage source (typically 120-240 VAC wall current with appropriate transformers, rectifiers, etc.).
A link 232 also illustratively connects the FPGA 210 with an external illumination control connector 234, which is located on the I/O board 117 and exposed at the rear of the housing of the camera assembly 110. Likewise, the link 232 interconnects the FPGA, through a synchronization trigger connection 236 on the I/O board 117, with other interconnected camera assemblies so as to synchronize image acquisition (including illumination triggering). Such an interconnection may occur where multiple camera assemblies simultaneously image multiple sides of a box and/or where the box passes multiple relatively adjacent stations on the line. Synchronization avoids crosstalk between illuminators and other undesirable effects. Generally, it should be noted that in this embodiment the various image acquisition functions and/or processes, including internal and external illumination, focus, and brightness control, are all directly controlled by the fast-running FPGA process 245. This allows the main board processor 114 to concentrate on vision system tasks and image data decoding. Furthermore, synchronization of acquisition also allows multiple camera assemblies to share a single illuminator or illuminator group, since the illuminator (or illuminators) is triggered independently for each camera as that camera acquires an image frame.
Note that an appropriate interface may be provided for the external trigger. Such an external trigger may allow gating of the camera assembly for image acquisition when a moving object is within the field of view. This gating avoids acquiring unnecessary images of the space between objects on the pipeline. A detector or other switching device may be used to provide the gating signal in accordance with conventional techniques.
The FPGA 210 performs certain pre-processing work on the image to improve the speed and efficiency of image data handling. Image data is transferred serially from the imager 112 to the FPGA. All or a portion of the data may be stored temporarily in the data memory 228 for analysis by various FPGA operations. The FPGA 210 converts the serial image data to the PCIe protocol using conventional techniques so that it is compatible with the processor's data bus architecture, and the data is transmitted to the processor 114 over the PCIe bus 226. This image data is then transferred directly into the data memory 244 for subsequent processing by the processor cores C1 and C2. By utilizing multiple cores, many desirable and efficiency-enhancing operations in processing image data can be enabled, as described in detail below. The FPGA 210 is also programmed (e.g., FPGA process 245) to analyze the acquired image data and perform specific system auto-adjustment operations, such as auto-brightness control (e.g., auto-exposure) and auto-focus control (e.g., when the liquid lens assembly 216 is used). Generally, a change in focal distance, such as when objects of different heights are encountered, requires both brightness and focus adjustments. Typically, these operations require a higher image acquisition rate at the imager 112 (e.g., acquisition at about 200-300 image frames per second) to allow additional operations on the image data, while the net decoding rate at the processor 114 is at least 100 frames per second. That is, some images are processed within the FPGA while others are transferred to memory on the main board 115 for vision system processing (e.g., ID lookup and decoding of IDs found in the images) without compromising the maximum frame rate of the processor. More generally, the data memory 228 buffers the acquired image frames and takes some frames (from the excess of available image frames resulting from the high frame rate) for the auto-adjustment functions of the FPGA 210 while passing others to the processor 114 for further processing. This division of functionality between the FPGA 210 and the processor 114 promotes efficiency and a more optimal use of system resources.
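As an illustration of the kind of auto-brightness (auto-exposure) adjustment such a preprocessor might run on a subsampled frame, consider the following sketch; the target level, gain, and exposure limits are assumptions, not values from the patent:

```python
# Minimal auto-exposure sketch (assumed target level and gain), operating on a
# subsampled frame as a preprocessor might: nudge exposure so the mean pixel
# level approaches a target without leaving the valid exposure range.

import numpy as np

def adjust_exposure(frame: np.ndarray, exposure_us: float,
                    target_level: float = 100.0, gain: float = 0.5,
                    exp_min: float = 20.0, exp_max: float = 2000.0) -> float:
    sub = frame[::8, ::8]                      # cheap subsample for speed
    mean_level = float(sub.mean()) + 1e-6      # avoid divide-by-zero
    # Proportional correction toward the target brightness.
    new_exposure = exposure_us * (1.0 + gain * (target_level / mean_level - 1.0))
    return float(np.clip(new_exposure, exp_min, exp_max))

frame = np.full((480, 640), 60, dtype=np.uint8)   # dim synthetic frame
print(round(adjust_exposure(frame, 300.0), 1))    # -> 400.0 us (brighter)
```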
In various embodiments, the FPGA 210 and the memory 228 may be adapted to receive a "burst" of image frames at a high acquisition frame rate, take a portion of the "burst" for performing automatic adjustment, and send the other frames to the processor at a rate suited to the processor's processing speed. The large number of image frames obtained from such a "burst" (e.g., while an object is in the field of view) can be fed out to the processor 114 during the interstitial time before the next object arrives in the field of view, at which point the next "burst" is likewise acquired, stored, and transmitted to the processor 114.
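The burst-then-drain behavior can be sketched as a small buffer object; the keep-every-third-frame rule and the rates implied below are illustrative assumptions only:

```python
# Sketch of the burst-then-drain idea: the preprocessor captures a high-rate
# burst while the object is in view, keeps some frames for its own auto-adjust
# work, and feeds the rest to the multi-core processor during the gap before
# the next object arrives. The keep rule and counts are hypothetical.

from collections import deque

class BurstBuffer:
    def __init__(self, autoadjust_every_n: int = 3):
        self.to_processor = deque()
        self.for_autoadjust = deque()
        self._n = autoadjust_every_n
        self._count = 0

    def on_frame(self, frame) -> None:
        """Called at the imager frame rate (e.g., 200-300 fps) during a burst."""
        self._count += 1
        if self._count % self._n == 0:
            self.for_autoadjust.append(frame)   # used locally, never forwarded
        else:
            self.to_processor.append(frame)

    def drain(self, max_frames: int):
        """Called between bursts; forwards frames at the processor's pace."""
        out = []
        while self.to_processor and len(out) < max_frames:
            out.append(self.to_processor.popleft())
        return out

buf = BurstBuffer()
for i in range(9):
    buf.on_frame(f"frame{i}")
print(len(buf.to_processor), len(buf.for_autoadjust))  # -> 6 3
print(len(buf.drain(4)))                               # -> 4
```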
The terms "process" and/or "processor" as used herein are intended to be broadly construed to include a variety of electronic hardware-based and/or software-based functions and components. Further, the process or processor may be combined with other processes and/or processors or divided into multiple sub-processes or processors. Various different combinations of such subroutines and/or sub-processors may be made in accordance with embodiments herein. As such, it is expressly contemplated that any of the functions, processes, and/or processors described herein can be implemented using electronic hardware, software, or a combination of hardware and software, wherein the software is comprised of a non-transitory computer readable medium of program instructions.
Referring to the main board 115 of FIG. 2, the multi-core processor 114 is shown. Various types, brands, and/or configurations of processors may be employed to carry out the teachings of the embodiments herein. In an exemplary embodiment, the processor 114 comprises a dual-core DSP, such as the model 6672 available from Texas Instruments Incorporated of Dallas, Tex. For purposes of the vision system applications contemplated herein, the processor 114 provides sufficient operating speed and is cost-effective. The term "multi-core" as used herein shall refer to two (i.e., "dual-core") or more discrete processors implemented on a single die and/or packaged within a single on-board circuit chip. Each core is generally capable of independently processing at least a portion of the data stored in the memory 244. The processor 114 is interconnected with a non-volatile memory 240 containing appropriate boot configuration data. This enables the basic functioning of the processor at camera system start-up, including loading of any program code and/or operating system software. The program code/operating system software is stored in a program memory 242, which may be implemented with a variety of solid-state memory devices. In an exemplary embodiment, NOR flash memory with 32 MB capacity and a 16-bit interface is employed. At start-up, the program code is loaded from the flash program memory 242 into the data memory 244. Image data and other data used by processor operations are also stored in the data memory 244 and can be flushed from it when no longer needed by system processes. Various types, sizes, and configurations of memory may be employed. In one embodiment, the memory is a 256 MB DDR3 dynamic random access memory with a 64-bit interface.
Other conventional circuitry for driving the processor and providing other functions (such as code error rejection) is also provided on the main board 115 and interconnected with the processor 114. These circuits may be configured in accordance with conventional techniques and may include a core voltage regulator 246 (e.g., model UCD7242 from Texas Instruments), an LVDS clock generator 248 (e.g., model CDCE62005 from Texas Instruments), and a sequencing microcontroller 250 (e.g., PIC18F45 from Microchip Technology Inc. of Chandler, Arizona). A JTAG interface 252 (e.g., 60-pin and 14-pin) is also provided, interconnected between a port on the processor 114 and the sequencing microcontroller 250. Appropriate voltages (e.g., 1.5V, 1.8V, 2.5V, and 6.2V) are provided to the various circuit elements of the main board 115 from a voltage source 254 on the I/O board, the voltage source 254 being connected to a regulator 260 (e.g., a 24V-to-3.3V regulator), which receives external power from a power source (e.g., a 24V wall transformer) via a suitable cable 262. The main board 115 and the mating processor 114 are connected to the I/O board via a UART on the processor, which connects to a serial connector 266 external to the housing that conforms to the RS-232 standard. This port may be used to control external functions such as alarms, conveyor line stoppages, and the like. The processor also includes a serial gigabit media independent interface (SGMII) connected to an Ethernet port at the rear of the housing via a physical layer chip 268 and a gigabit Ethernet transformer 270. This allows image data and other control information to be transmitted over a network to a remote computer system. Via an interfaced computer and a suitable user interface (e.g., a web-based graphical user interface/one or more browser screens), the user can also program the functionality of the system. In various embodiments (not shown), the camera assembly may optionally be provided with a wireless Ethernet connection, Bluetooth® communication, and the like.
The processor SPI bus 224 is connected to a suitable ATTINY microcontroller 272 (such as available from Atmel Corporation of San Jose, California) that implements interfaces to a 4X optical input (4X OPTO IN) 274 and a 4X optical output (4X OPTO OUT) 276 using conventional techniques. This interface provides "slow" I/O operations, including external strobe trigger inputs, good-read and bad-read outputs, encoder inputs (e.g., counting motion pulses from a moving line assembly), target detection, and various other I/O functions. The bus 224 is also connected to a further ATTINY microcontroller 280 on the UI board 123. This microcontroller is connected to user interface (UI) devices exposed at the rear of the camera assembly housing. These devices include, but are not limited to, an audible tone generator 282 (e.g., a buzzer), one or more control buttons 284, and one or more indicator lights 286 (e.g., LEDs). These devices allow a user to perform various functions, including vision system training, calibration, and the like, and to receive system operating status. This status may include on/off indication, fault warnings, success/failure of ID reads, etc. A common status indicator (LED) may indicate trigger-on, trigger-off, encoder, and target-detection status. Other interface means (not shown), such as a display screen and/or an alphanumeric display, may optionally be provided. The I/O board 117 includes appropriate temperature sensors to monitor the internal temperature.
It should be clear that the placement and location of the components on each of the various boards, as well as the functions of those components, are highly variable. It is expressly contemplated that more or fewer circuit boards may be employed in various embodiments. Likewise, some or all of the functionality of multiple components may be combined into a single circuit, or some or all of the functionality of a particular described component may be split into multiple circuits on one or more boards. Further, the components, interconnect interfaces, bus architectures, and functions depicted in FIG. 2 are merely examples of various circuit layouts that may perform similar functions. Alternative circuit layouts having similar or identical functions will be apparent to those skilled in the art.
Physical encapsulation
Having described the mechanical arrangement of the electronic components on the various circuit boards of the camera assembly, as well as their respective interconnect interfaces and functions, reference is now made to fig. 3-7, which describe the physical structure of the camera assembly 110. Fig. 3-6 depict a camera assembly 110 having a conventional lens 310 and a surrounding inner (annular) illumination assembly 320, according to one embodiment. FIG. 7 is a more detailed external view of camera assembly 110 with an optional FOVE attachment 118 as described in FIG. 1.
The housing 330 of the camera assembly 110 is constructed of a material having suitable rigidity and heat transfer characteristics. In an exemplary embodiment, an aluminum alloy (e.g., 6061) may be used to construct portions or the entirety of the housing. The housing 330 consists of three main parts: a body 332, a front 334, and a rear 336. The body 332 is a single piece with an open interior and is provided with integrally formed longitudinal fins 339 around its perimeter to further assist heat transfer. The front portion 334 and rear portion 336 are each secured to opposite ends of the body using screws seated in holes 338 and 410, respectively. The front 334 and rear 336 are pressed against the ends of the body to form a hermetic seal that protects the internal electronic components from dust, moisture, and other contaminants that may be present in a manufacturing or other processing environment. A gasket 510 (e.g., an O-ring, see FIG. 5) is disposed at each respective end of the body 332 to compressively seal against the front 334 and rear 336. It is noted that the body can be made as an extrusion, with appropriate holes, counterbores, and other machined features applied to the outside and inside.
As shown in FIG. 5, the imager board and mating imager 112 are secured against the front portion 334, with the imager perpendicular to an optical axis OA defined by the lens assembly 310. In this embodiment, a fixed lens assembly 310 is employed, having front and rear convex lenses 512 and 514 in a conventional configuration. For example, the lens assembly is a 16 mm lens assembly with a C-mount base. It is threaded into the camera assembly lens mount 520, which extends from the front portion 334. In alternative embodiments described below, other lens models and mount-base configurations are expressly contemplated.
The lens is surrounded by a wheel-like internal ring illumination assembly 320, which has an outer ring 524 and an illumination circuit board 526 at its front end. The circuit board 526 is supported on three standoffs 528 disposed in a triangular arrangement about the optical axis OA. In this embodiment, illumination is provided by eight high-output LEDs 530 (e.g., OSRAM Dragon LEDs) with mating lenses 532. The LEDs operate at selected, discrete visible and/or near-visible (e.g., infrared) wavelengths. In various embodiments, different LEDs operate at different wavelengths, which may be selected by an illumination control process; for example, some LEDs may operate at green wavelengths while others operate at red wavelengths. Referring to FIG. 6, the illumination assembly 320 has been removed, exposing the front face 610 of the camera assembly 110. The front face 610 includes a pair of multi-pin connectors 614 and 616 located on the imager board, corresponding to the components 214 and 216 shown in FIG. 2. That is, the 5-pin connector 614 is interconnected with the illumination board 526 via a cable (not shown), and the 8-pin connector 616 is connected to control and power the optional liquid lens assembly described below. The front face 610 also includes three pedestals 620 (which may be threaded) to support each illumination circuit board standoff 528. The threaded C-mount base 520 is also visible. Note that the internal illumination assembly 320 is an optional implementation for a vision system camera assembly. In various embodiments described herein, the internal illumination assembly may be omitted and replaced with one or more external illumination assemblies or, in some special cases, ambient illumination.
With particular reference to the cross-sectional view of FIG. 5, the imager board is connected to the main board 115 by a ribbon cable 550, the main board 115 illustratively lying against the top side of the body interior. In this position the main board exchanges heat with the body 332 and the cooperating fins 339 to allow for better heat transfer. The main board 115 may be mounted using fasteners or, as shown, using a bracket member 552 that engages the underside of the main board 115 at locations that do not interfere with the on-board circuit components. The bracket 552 includes a lower extension 553 having an aperture that receives a vertical post 555 extending upwardly in a telescoping fashion from a base 554. The base 554 sits on the bottom side of the housing body 332. The bracket 552 is biased upwardly by a compression spring 556 disposed between the underside of the bracket and the base 554 and surrounding the extension 553 and the vertical post 555. This mechanism allows for insertion or removal of the board by adjusting the position of the bracket 552 relative to the base 554. That is, to install the board 115, the user depresses the bracket 552 against the biasing force of the spring 556, slides the board 115 into the interior of the body 332, and then releases the bracket 552 so that it compressively engages the board 115 and maintains it in position against the top end of the interior of the body 332. Removal is the reverse of this process. The board 115 is held firmly against the body 332 by the spring 556, ensuring adequate heat exchange. In various embodiments, the main board 115 may also include an on-board heat sink that contacts the body 332. Likewise, a thermally conductive paste, or another thermal transfer medium, may be disposed between the contacting portion of the board 115 (e.g., the processor 114) and the inner surface of the body 332. Referring briefly to FIG. 13, as described below, the upper side of the main board 115 may include a thermal gap pad 1330 that fills the gap between the upper portion of the board 115 and the inner surface of the body.
More generally, and with further reference to FIG. 5A, the inner surface 580 of the body 332 is shaped relative to the profile of the main board 115 so as to closely conform to the protrusions, surface-mount components and other circuit components on the main board 115, and these components are in turn placed so as to conform to the shape of the body. That is, the taller components are positioned toward the longitudinal centerline, where the body presents a taller profile, while shorter components are positioned along either side of the longitudinal axis of the main board. More generally, the components are arranged in a plurality of height regions conforming to the geometry of the interior of the body. Where certain circuit components tend to be large or tall (e.g., capacitors), they may be separated into two or more smaller components having the same overall electrical value as the single larger component. A thermal gap filler (e.g., a pad or another medium) is disposed between the board and the interior top, and this placement of components, based on the internal geometry of the body, ensures that the distance between the body and both the short and tall components is minimized. Illustratively, as shown, the multi-core processor is placed in direct contact with the inside of the body (typically with a thin layer of thermally conductive glue therebetween) so that the body acts as an effective heat sink for the processor. Also as shown, the main board 115 is laterally registered relative to the bracket 552 via a vertical post 582 that passes through a hole in the board. This ensures that the bracket and board maintain a predetermined alignment relative to the body. It is noted that although in the described embodiment the cooling is passive, in further embodiments one or more fan units may participate in cooling the interior or exterior of the enclosure. In particular, four mounting holes 588 (two of which are shown in phantom in FIG. 5A) may be provided along the bottom of the body 332. In this embodiment, the holes 588 receive a conventional 60x60mm computer fan. Alternatively, as described below, the holes 588 may receive an intermediate bracket for mounting a fan and/or other specifically contemplated fan mechanisms/sizes. A connector may be provided on the housing, or an external connection may be employed, to connect a suitable voltage adapter and power the fan(s). In addition, an auxiliary cooling mechanism (e.g., liquid cooling) may be used in alternative embodiments. Typically, the system is designed to run at ambient temperatures up to approximately 40 degrees C using passive cooling. However, in environments where the operating temperature may exceed this value, at least one cooling fan may be employed.
As shown in FIG. 5, the I/O board 117 is mounted against the back 336 of the camera assembly housing 330. The I/O board 117 is connected to the rear end of the main board 115 by a ribbon cable 560. Extending from the back side of the I/O board 117 are various rear connectors 420, 422, 424, 426 and 428 (see FIG. 4) that function as described with reference to FIG. 2. The I/O board is likewise interconnected to UI board 123 via ribbon cable 570. As shown, the UI board is exposed to the user along the angled top surface 440 of the back portion 336. In other embodiments, the arrangement and location of the circuit boards on and/or within the body may be changed.
Referring to FIG. 7 and the more detailed cross-sectional view of FIG. 7A, the FOVE 118 is shown with a coupler 710 attached, the coupler 710 including a removable L-shaped bracket 712 at the front of the camera assembly. Bracket 712 includes a vertical plate 714 that faces the camera front 334 and is secured with fasteners, and a horizontal plate 716 that is adapted to have further mounting brackets and support structures secured thereto. The bracket 712 of the coupler 710 may also be used to mount a removable illuminator 750, as described below. The FOVE housing 730 is supported relative to the camera assembly by a set of four vertical posts 732, the posts 732 being fixed into the base bracket on the camera side and fixed to a flange 736 at the rear of the FOVE housing. The flange 736 is secured to the rear of the FOVE housing 730 by suitable fasteners or other securing mechanisms (not shown). The lens assembly 116 is covered by a cylindrical housing 720, the cylindrical housing 720 extending between the front face 610 of the camera assembly 110 and the rear of the FOVE housing 730. The housing 720 is removable and serves to seal the lens and the FOVE housing against dust and contaminants from the external environment. The posts 732, or another acceptable open framework, allow the user to adjust and maintain the lens assembly 116. The vertical posts 732 movably (bold arrow 744) support a sliding block 746, the sliding block 746 being engaged with the sliding lens housing 1692. A pair of joints 747 comprising low-friction bushings encases two (or more) of the vertical posts 732. O-rings 748, 749 are embedded in the inner circumference of the flange 736 and the inner circumference of the opposing vertical plate 714 of the L-shaped bracket 712, respectively. The lens housing 720 may be slid forward from the sealed position depicted in the figures to expose the lens assembly 116 (shown in phantom in FIG. 7A as an exemplary lens type). A thrust shoulder 754 is formed on the vertical plate 714 and defines a central bore (aperture) 756. The shoulder prevents the housing 720 from moving further rearward toward the camera assembly after it is sealingly engaged. Likewise, a rear stop 758 is provided at the front end of the housing 720 to engage the inner face of the flange 736. Sliding the housing 720 forward causes it to enter the interior of the FOVE housing 730 until the sliding block engages the outer wall of the flange 736. This provides sufficient space to access the lens 1697 for adjustment and/or maintenance. The FOVE housing 730 can be constructed of a variety of materials, including various polymers, such as injection-molded, glass-filled polycarbonate, and/or composites, or metals, such as aluminum. In particular, glass-filled polycarbonate minimizes dimensional tolerances due to shrinkage during the molding process. The front end of the FOVE housing is open to the scene and includes a covering transparent window 740.
With further reference to fig. 8 and 9, the housing 730 is removed and the geometry of the FOVE mirror is shown in greater detail. In various embodiments, a variety of optical components and mechanisms may be employed to provide a FOVE, while it is generally contemplated that a FOVE divides a wide image into at least two stacked images (strips), each of which occupies a portion of the imager. In this way, the image height is reduced by about 1/2 (with some overlap), while the width of each swath is (again with some overlap) the full width of the imager. Given the dual core processing capabilities and high image acquisition speed provided by the exemplary camera assembly, various processing techniques may be used to perform this highly efficient and fast processing of the pair of strips (as described below). Illustratively, FOVE118 is based on the above-incorporated U.S. patent application No. 13367141 entitled "System and method for visual field expansion of the visual System" invented by Nunnink et al. Further embodiments of FOVE mechanisms that may be employed in accordance with a vision system camera assembly, as well as mating couplers and accessories, are likewise described as useful background information in the co-pending U.S. patent application No. (acceptance number C12-004CIP (119/0126P1)) entitled "system and method for vision system field of view extension" by Nunnink et al, filed on even date herewith, and the teachings therein are expressly incorporated herein by reference.
As shown in FIG. 8, the optical components of the FOVE include left and right outer mirrors 810 and 812, and stacked and crossed inner mirrors 820 and 822. The outer mirrors 810 and 812 are tilted at different angles; likewise, the inner mirrors 820, 822 are tilted at different angles. Referring to FIG. 9, the fields of view 910 and 912 for each of the outer mirrors 810 and 812 are shown. A slightly overlapping area OR is provided, which is at least as wide as the largest useful feature (e.g., the largest barcode) imaged at the focal distance FD. This ensures that a complete image of the feature appears in at least one of the two fields of view 910, 912. Each of the imaged fields of view 910, 912 is reflected in its entirety by its respective outer mirror onto the inner crossed mirrors 820, 822, as shown. The reflected images are then further reflected to the lens 310, each field of view being vertically stacked relative to the other (a result of the relative tilt of each of the mirrors 810, 812, 820, 822). Thus, as schematically shown in FIG. 10, each of the fields of view 910, 912 is projected onto a respective one of a pair of stacked strip regions 1010, 1012 on the imager 112. A relatively small, vertical overlap region 1030 may be present, which contains image content from both fields of view 910, 912. The overlap in the vertical direction depends on the aperture of the lens assembly and can be minimized using a small aperture setting, such as F:8. The dashed lines 1040 and 1042 on each strip represent the horizontal overlap OR of the fields of view of FIG. 9. This region is analyzed to derive a complete feature (e.g., an ID) that can be fully represented in one strip while being wholly or partially missing from the other strip.
In an exemplary embodiment, by way of representative dimensions, each of the outer mirrors 810, 812 has a horizontal length OML of between 40-120mm, typically 84mm, and a vertical height OMH of between 20-50mm, typically 33mm. Likewise, the crossed inner mirrors 820, 822 illustratively have a horizontal length CML of 30-60mm, typically 53mm, and a vertical height CMH of 10-25mm, typically 21mm. In an exemplary embodiment, the overall horizontal span of the outer mirrors 810, 812 is approximately 235mm, and the spacing MS between each respective outer mirror surface and the mating inner mirror surface (e.g., 810 and 820; 812 and 822) is approximately 100mm. Based on these dimensions, the selected camera lens 310 and appropriate focus adjustment, an overall extended field of view WF of about 60-80cm is covered at high resolution by a single FOVE camera arrangement at a focal distance FD of about 35-40mm. As shown, the FOVE separates the two fields of view 910, 912 into two stacked strips, each of which is approximately 600 pixels in height on the imager, which provides sufficient resolution for decoding barcode features on a fast-moving line.
As shown in fig. 11, the FOVE assembly allows for the removable mounting of an accessory rail-type illuminator 750. The position of the illuminator 750 (or multiple illuminators) relative to the FOVE housing is highly variable in further embodiments. In this embodiment, the illuminator 750 is attached to a bracket 1110, and the bracket 1110 extends forward from the coupler 710 (see FIG. 7) along the underside of the FOVE housing 730. The bracket 1110 and the rail-type illuminator can be permanently or removably engaged, for example, using threaded fasteners (not shown) that pass through the top end of the bracket 1110 and are inserted into threaded holes (not shown) on the top side of the illuminator 750. Mounting holes allow the bracket 1110 to be attached to the L-shaped bracket 712. Although a rail-type illuminator is described, a variety of alternative illumination types and configurations may be employed. The illuminator may include a plurality of multi-wavelength light sources that are selectively operated and/or light sources that are operated at different intensities, angles, or ranges. In alternative embodiments, other attachment mechanisms, such as adhesive strips, hook-and-loop fasteners, screws, etc., may be used to provide a secure and removable mechanical connection between the illumination and bracket components. For example, a U.S. patent application entitled "COMPONENT ATTACHED DEVICES AND RELATED SYSTEMS AND METHODS FOR MACHINE VISION SYSTEMS", by Saul Sanz Rodriguez and Laurens Nunnink (attorney docket No. C12-022), filed on the same day herewith, is incorporated herein by reference as further background information. That application describes techniques for attaching illuminators and other optical accessories to a FOVE assembly or other vision system structure using magnetic assemblies.
Note that the use of a FOVE, as described herein, is one option for extending the field of view to provide a wider aspect ratio (width relative to height). Another available option, in addition to (or instead of) a FOVE, is to use an image sensor configured with, for example, a 1:4 or 1:5 aspect ratio. Such a ratio may be optimal for scanning objects moving along a wider line. Thus, in various embodiments, the sensor for a camera assembly herein may be selected to have a wide aspect ratio, in which the pixel width is a multiple of the pixel height. The exemplary methods and processes for manipulating image data herein may be adapted to process data across such a wide sensor, for example, by handling different regions of the sensor with different cores of the processor.
Referring now to FIG. 12, an exemplary liquid lens assembly 1210 is depicted for use with camera assembly 110, and a mating mounting base 520, according to one embodiment. In this embodiment, the liquid lens unit 1220 (a membrane base unit as described above) is mounted in a housing 1222, the housing 1222 receiving the rectangular shape of the lens unit 1220 using a bracket structure 1230. Various support structures may be employed to secure the lens within assembly 1210. The liquid lens cell illustratively includes a housing 1232 that supports a front offset lens 1240. Behind offset lens 1240 is mounted a variable, liquid-filled, thin-film lens 1244. The lens varies based on electromechanical actuation of the actuator assembly 1250. The actuator assembly, temperature sensor, and other components are connected to the 8-pin connector 616 by a ribbon cable 1256, with the ribbon cable 1256 extending from the liquid lens housing 1232 out of the lens assembly housing 1222. The routing of the cables and/or the size/shape of the enclosure and other components are highly variable. A transparent cover glass 1258 is provided at the rear of the liquid lens unit 1220 to seal it. The received light is transmitted to a suitably mounted rear lens 1260 supported within the housing 1222. The housing includes a mounting assembly 1270 (which may also include a lock ring-not shown in the figures) that threadably secures the lens assembly 1210 to the camera front 610 at the mount 520. As an application of auto-focus, the focusing of liquid lens assembly 1210 is further described below.
Although not shown in the figures, any of the lens assemblies described herein may include various optical filters to attenuate certain wavelengths of light or to provide various effects, such as polarization. Also the luminaire may be provided with various filters. This allows selective imaging of the object when certain types of illumination are projected and received through filters appropriate to the type of illumination.
It should be appreciated that various alternative interfaces and indicators may be provided with the camera assembly according to embodiments herein. With particular reference to FIGS. 3, 4 and 5, and with reference now to FIG. 13, the internal components of the camera assembly are described with the front 334, body 332 and rear 336 portions of the housing removed. The joint between the body 332 and the front 334 includes a ring 1310 of translucent material (e.g., acrylic or polycarbonate) that functions as a light pipe. The translucent ring 1310 may surround a portion of the perimeter of the joint, or, as illustrated, the entire perimeter of the joint (e.g., a "360-degree indicator"). The ring 1310 may be entirely translucent, or only portions thereof may be translucent. Illustratively, the ring 1310 is illuminated by one of a plurality of differently colored light sources (e.g., LEDs, not shown) that are operatively connected to the imager circuit board 113. Light from the LEDs is directed into the ring 1310 via a light pipe or other light-transmissive conduit. Depending on the color and/or timing of the illumination (e.g., one or more colors that flash at a certain rate or in a certain pattern), the ring may be used to indicate various operating states. For example, a good ID read/decode may glow green, while no (e.g., failed or erroneous) ID read/decode may glow red. A flashing red color may indicate a system fault. Other colors, such as yellow, may also be included for various indications. The ring provides a unique and aesthetically pleasing, yet intuitive, way to indicate system status. The number of light sources used around the perimeter to illuminate the ring is highly variable and can be set according to conventional techniques. Although, as shown, the ring 1310 is sandwiched between the body 332 and the front portion 334, it is expressly contemplated that a similar ring may be sandwiched at the joint between the rear portion 336 and the body 332 (not shown) using the principles described above. Additionally, in various embodiments, rings may be provided at both the front and rear joints.
Processing image data in a multi-core processor
The exemplary multi-core processor 114 provides a high degree of processing independence to each of its discrete cores (C1, C2). Absent specific instructions from the user's program, minimal cross-communication is established between the cores to share data. Typically, each core runs its own operating system and operates its loaded program independently of the other. The memory space in the RAM 244 allocated to each core is typically non-contiguous, with minimal shared memory space. Internal buses within the processor provide for the exchange of data between cores, as appropriate, based on user program instructions. Thus, the processor provides the ability to divide image-processing tasks so as to increase the efficiency and speed of processing. The following is a description of various exemplary processes that may be executed using the dual-core functionality of the processor 114.
Referring to FIG. 14, a generalized process 1400 allows the processor to dynamically assign different tasks to each core for execution. A task may be an operation on a single image frame transmitted from the FPGA to the processor, such as a vision system task (e.g., an ID lookup or ID decoding task). The process 1400 allows core operations in the multi-core processor 114 to be optimized so that the cores are used efficiently. That is, if ID lookup consumes somewhat fewer processor resources than ID decoding, one core may be adapted to look up IDs in multiple image frames while another decodes the frames containing found IDs. Similarly, where a frame represents the two halves of a FOVE image, the image may be split between two cores, and so on. Typically, the program data includes one or more scheduling algorithms that may be adapted to operate with the highest efficiency on a particular set of image data. These scheduling algorithms may help the processor predict when each core will become free to perform a given task. An appropriate scheduling algorithm is determined in step 1410 of process 1400, and that algorithm, well suited to a specific set of tasks, is loaded into at least one core in step 1420. This core becomes the dispatcher for the multiple cores and transmits the dispatch plan over the internal bus. As image frames are transferred from the FPGA to the processor cores over the PCIe bus, each frame is monitored and the task to be performed on its image data is identified by the scheduling algorithm (step 1430). The scheduling algorithm assigns the image data and task to the next available core (step 1440). The allocation may be based on a prior estimate of when each core becomes available. When the task on a particular image frame is completed, the scheduling algorithm continues to monitor and assign new tasks and data to the cores. The scheduling algorithm may be employed over time to monitor the observed results of different types of tasks and optimize the priority order of tasks in each core. In summary, one core runs a scheduling algorithm that defines which core receives each task.
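By way of a non-limiting sketch only, the following Python fragment illustrates the kind of dispatch-to-next-available-core behavior described for process 1400, with a worker pool standing in for the processor cores and the internal bus. The task names (find_ids, decode), the frame format, and the dispatch rule are hypothetical placeholders, not the patent's implementation.

```python
# Minimal sketch of dynamic task dispatch across cores (process 1400 analog).
# find_ids/decode are placeholder tasks; frame contents are invented.
from concurrent.futures import ProcessPoolExecutor, as_completed

def find_ids(frame):
    # Placeholder ID-lookup task: report candidate regions found in the frame.
    return ("lookup", frame.get("regions", []))

def decode(frame):
    # Placeholder ID-decoding task: "decode" the previously found regions.
    return ("decode", [r.upper() for r in frame.get("regions", [])])

def dispatch(frames, n_cores=2):
    """Assign each incoming frame's task to the next available core."""
    results = []
    with ProcessPoolExecutor(max_workers=n_cores) as pool:
        futures = []
        for frame in frames:
            task = find_ids if frame["stage"] == "lookup" else decode
            futures.append(pool.submit(task, frame))
        for fut in as_completed(futures):   # cores free up in any order
            results.append(fut.result())
    return results

if __name__ == "__main__":
    frames = [{"stage": "lookup", "regions": ["abc"]},
              {"stage": "decode", "regions": ["xyz"]}]
    print(dispatch(frames))
```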
It should be noted that in the exemplary embodiment, the use of two cores C1 and C2 is exemplary of a multi-core processor, which may include three or more cores. The processes described herein may be adapted to scale up to three or more cores.
The following is a description of a further process for using a multicore processor according to an embodiment:
Referring to the schematic of FIG. 15, a multi-core process 1500 is shown in which the processor 114 receives an image frame 1510 divided into two portions 1520, 1522. The portions may be divided vertically (e.g., the two views provided by the FOVE), horizontally, or by another division method (e.g., alternating pixels). The two (or more) image portions 1520, 1522 are transmitted to the respective cores C1 and C2. Each of the two (or more) partial images is processed and decoded in parallel by its respective core C1, C2. The decoded results 1530, 1532 may be combined and provided to downstream processes, such as an indication of a good or no ID read, with the decoded information transmitted to a remote computer. An overlap is typically provided between the two partial images so that an ID spanning the division is fully contained in the image handled by at least one core. The overlap may vary, but is generally large enough to fully contain an ID of a given size in at least one of the partial images. Where the image is divided by the processor itself, the overlap is provided by sending the overlapping image data to both cores. With a FOVE, the overlap already exists in the acquired images, and the image of each field of view can be transmitted to each core without sharing additional overlapping data. Communication between the cores (bus link 1540) allows consolidation of results and other cross-core communication as needed.
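A minimal sketch of this split-and-decode-in-parallel idea follows, assuming a trivial stand-in "decoder" and an invented overlap width; it is illustrative only and not the disclosed decoding algorithm.

```python
# Minimal sketch of process 1500: split a frame into two overlapping halves,
# decode them on two workers in parallel, then merge the results.
from concurrent.futures import ProcessPoolExecutor

def decode(strip):
    # Stand-in decoder: report which columns of the strip contain "ID" pixels.
    return {i for i, v in enumerate(strip[0]) if v == 1}

def split_with_overlap(frame, overlap=4):
    mid = len(frame[0]) // 2
    left = [row[:mid + overlap] for row in frame]
    right = [row[mid - overlap:] for row in frame]
    return left, right, mid - overlap        # offset of the right strip

def decode_parallel(frame):
    left, right, right_offset = split_with_overlap(frame)
    with ProcessPoolExecutor(max_workers=2) as pool:
        f_left = pool.submit(decode, left)
        f_right = pool.submit(decode, right)
        merged = f_left.result() | {c + right_offset for c in f_right.result()}
    return sorted(merged)

if __name__ == "__main__":
    frame = [[0] * 10 + [1, 1] + [0] * 8]     # a single-row toy "image"
    print(decode_parallel(frame))             # columns found by either core
```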
In further embodiments, process 1500 may be replaced by a stitching process for situations where there is little or no overlap between images (e.g., multiple FOVE images that do not substantially overlap). In such an embodiment, each FOVE image may include a portion (but not all) of an exemplary ID feature set, while both images collectively contain substantially the entire ID feature set. One or more of the cores are employed to identify the interrelationship between the ID fragments in each image and "stitch" them into a complete ID. This may occur during the ID lookup phase of a process, in which the complete ID is assembled and then decoded by one or more cores, or during the decoding process, for example by decoding the portion of the ID present in each image and attempting to merge the individual decoded results.
It is noted that although each of the multi-core processes described herein is shown as using separate cores to execute separate processes, it is expressly contemplated that the term "core" as used herein may refer broadly to a group of cores. Thus, in the case of a four-core processor, one group of two cores may be responsible for one process task, while a second group of two cores may be responsible for another process task. Alternatively, a group of three cores may be responsible for one (higher processing overhead) task, while a single core is responsible for a different (lower processing overhead) task. Alternatively, three or four simultaneous tasks may be performed by assigning each task to an appropriate core and/or group of cores. The scheduling algorithm may also be programmed to dynamically reassign cores to different tasks, depending on the current processing needs of a given task. The appropriate level of processing power (e.g., number of cores) required for a given task may be determined by experimentation, for example by operating different types of tasks and monitoring the speed at which different numbers of cores complete them. Exemplary processes are described below.
Referring to the schematic diagram of fig. 16, a multi-core process 1600 is shown in which the processor 114 receives an image frame 1610 at a core (or set of cores) C1, and C1 performs ID decoding to output a decoding result 1620. The second (or set) core (or cores) C2, in contrast, performs one or more (non-decoding) system-related tasks 1630 that support image acquisition and other system operations through output information 1640, which information 1640 is used for further downstream tasks. Such system tasks 1630 can include (but are not limited to):
focus setting algorithms (including measured distance/calibration and calculated sharpness) and auto brightness (which may include exposure, gain and illumination intensity) algorithms;
JPEG (or other) image data compression, e.g., performed on image frames and then stored and/or transmitted to a remote computer; and/or
Wavefront reconstruction, which is used, for example, in a vision system, using known wavefront coding techniques to improve depth of field.
In the case where the system is performing non-decoded system tasks using one or more cores (e.g., process 1600 of FIG. 16), the assignment of system tasks to certain cores may depend on the current trigger frequency. As shown in fig. 17, the scheduling process 1700 determines the current trigger frequency at step 1710. If the trigger frequency is below a certain threshold, so that fewer cores may perform the required decoding tasks, decision 1720 assigns one or more cores to non-decoding tasks (step 1730). Conversely, when the trigger frequency exceeds a certain threshold (or thresholds), one or more cores (the number of cores may depend on the frequency) are assigned to the decoding task (step 1740). As shown in a simplified dual core embodiment, at a low trigger frequency, one core is assigned to decode and the other core is assigned to system tasks. At a higher trigger frequency, one core (e.g., C1) is assigned to decode, while the one or more other core(s) (e.g., C2) may perform decode and system tasks simultaneously. This applies in particular to dual core systems. In an exemplary multi-core system employing more than two cores, one or more cores may be allocated for decoding while the other core(s) are allocated for both decoding and system tasks.
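A minimal sketch of the trigger-frequency-based allocation in process 1700 follows; the threshold value, core counts and role names are illustrative assumptions rather than values from the disclosure.

```python
# Minimal sketch of trigger-frequency scheduling (process 1700 analog).
def allocate_cores(trigger_hz, n_cores=2, threshold_hz=50.0):
    """Return a mapping of role -> number of cores for the current rate."""
    if trigger_hz < threshold_hz:
        # Low trigger rate: one core decodes, the remainder handle system
        # tasks (auto-focus, auto-brightness, compression, ...).
        return {"decode": 1, "system": n_cores - 1}
    # High trigger rate: every core decodes; system tasks share time on one.
    return {"decode": n_cores, "system_shared": 1}

print(allocate_cores(20.0))    # {'decode': 1, 'system': 1}
print(allocate_cores(120.0))   # {'decode': 2, 'system_shared': 1}
```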
Fig. 18 schematically depicts a process 1800 that employs multiple cores when one-dimensional codes and two-dimensional codes (or other distinct feature types requiring different processing power/decoding time) coexist. Two-dimensional codes typically require more processing resources/time to decode completely. Once the IDs in the images are found, the decoding tasks are scheduled with dynamic load balancing across cores C1 and C2 to optimize the throughput of the system. For example, as shown, two one-dimensional codes 1810 and 1820 are in respective images 1850 and 1860. Likewise, two-dimensional codes 1830 and 1840 are in the respective images. These codes are scheduled so that, for each successive image, the two-dimensional and one-dimensional decoding tasks are swapped between the two cores. In this way, on average, each core C1, C2 produces decoding results 1880, 1890 at approximately the same throughput.
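The alternation can be pictured with the following sketch, in which each image is assumed to contribute one light (1D) and one heavy (2D) decoding task and the heavy task is swapped between the two cores on successive images; the pairing assumption and labels are illustrative only.

```python
# Minimal sketch of 1D/2D load balancing across two cores (process 1800 analog).
def balance(image_pairs):
    """image_pairs: list of (one_d_task, two_d_task), one pair per image."""
    cores = {0: [], 1: []}
    for i, (one_d, two_d) in enumerate(image_pairs):
        heavy_core = i % 2            # alternate which core gets the 2D code
        cores[heavy_core].append(two_d)
        cores[1 - heavy_core].append(one_d)
    return cores

print(balance([("1D-1810", "2D-1830"), ("1D-1820", "2D-1840")]))
# {0: ['2D-1830', '1D-1820'], 1: ['1D-1810', '2D-1840']}
```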
The multi-core process 1900, as shown in FIG. 19, assigns the first core (or group of cores) to decode an image within a maximum time determined by the required maximum throughput of the system (step 1910). If the maximum time is exceeded without completing decoding, decision step 1920 branches to decision step 1930, which determines whether the image is likely decodable given more processing time than the maximum time. If not, the system indicates no read (step 1940). If decoding is presumed possible, a second core (or group of cores) is allocated at step 1950 to continue attempting to decode the image (or further such images) that could not be decoded within the maximum time but that exhibit characteristics suggesting that additional processing time could complete the decoding. In an operational example, possible features indicating that an image could be decoded given more time include: (a) the finder pattern of the code has been located in the image; and/or (b) other codes from a set of codes printed on the object have been located (e.g., a MaxiCode and a barcode are printed on the same package and one has already been located). Alternatively, if it is presumed that an ID may be decoded by spending more time, or by utilizing one or more algorithms different from those currently employed, decision step 1930 may branch (shown in phantom) to step 1960, wherein the system directs the first core, or allocates the second core, to continue processing the ID using a different decoding algorithm. The algorithm may be selected by default or based on certain features of the image and/or of the ID features (e.g., apparent image contrast, etc.) that make a particular algorithm well suited to processing it.
A variation of process 1900 of FIG. 19 is shown in FIG. 20. In the depicted process 2000, the maximum decoding time on a given image has been reached (steps 2010 and 2020). Assuming a feature is present that suggests more processing time could complete the decoding (otherwise a no-read indication is issued in step 2040), the system allows the first core (or group of cores) to continue processing the image and assigns the decoding of the next image to a different core (or group of cores) so that the first core (or group) can complete its decoding task (step 2050).
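A minimal sketch of the timeout-and-handoff behavior in processes 1900/2000 follows. The time budgets, the simulated "work", and the finder-pattern flag are invented for illustration and do not represent the actual decoder.

```python
# Minimal sketch of timeout handoff between cores (processes 1900/2000 analog).
from concurrent.futures import ProcessPoolExecutor
import time

def try_decode(image, budget_s):
    # Simulated decoder: succeeds only if its work fits inside the budget.
    time.sleep(min(image["work_s"], budget_s))
    done = image["work_s"] <= budget_s
    return {"decoded": done, "finder_found": image["finder_found"]}

def decode_with_handoff(image, max_time_s=0.05):
    with ProcessPoolExecutor(max_workers=2) as pool:
        first = pool.submit(try_decode, image, max_time_s).result()
        if first["decoded"]:
            return "decoded on core 1"
        if not first["finder_found"]:
            return "no read"                    # not worth further time
        # A finder pattern was located: hand off to a second core with a
        # larger budget while core 1 remains free for the next image.
        second = pool.submit(try_decode, image, 10 * max_time_s).result()
        return "decoded on core 2" if second["decoded"] else "no read"

if __name__ == "__main__":
    print(decode_with_handoff({"work_s": 0.2, "finder_found": True}))
```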
The multi-core process 2100, as shown in FIG. 21, is used to attempt to decode the ID/code 2110 in an image using multiple decoding algorithms. A first core (or group of cores) C1 attempts to decode the ID/code 2110 with a first decoding algorithm 2120, while a second core (or group of cores) C2 simultaneously (when available) attempts to decode the same ID/code 2110 with a second decoding algorithm 2130. For example, one core C1 attempts to decode the image with an algorithm optimized for high-contrast DataMatrix codes, while the other core C2 employs an algorithm optimized for low-contrast (e.g., DPM) codes. A decoding result or decoding failure 2140, 2150 is output from each of the cores (or core groups) C1, C2. Note that in some instances the two sets of results from the different algorithms may be combined to "stitch" together a complete code or otherwise used to validate the decoding task. This may occur where neither result alone is a complete (or reliable) read of the ID/code.
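A minimal sketch of running two decoding algorithms concurrently and accepting the first success follows; the two "algorithms" are trivial stand-ins keyed on an invented contrast value, not real ID decoders.

```python
# Minimal sketch of racing two decoding algorithms on the same image
# (process 2100 analog). Image fields and decoders are placeholders.
from concurrent.futures import ProcessPoolExecutor, wait, FIRST_COMPLETED

def high_contrast_decode(image):
    return image.get("label") if image.get("contrast", 0) > 0.5 else None

def low_contrast_dpm_decode(image):
    return image.get("label") if image.get("contrast", 0) <= 0.5 else None

def decode_both(image):
    with ProcessPoolExecutor(max_workers=2) as pool:
        futures = {pool.submit(high_contrast_decode, image),
                   pool.submit(low_contrast_dpm_decode, image)}
        while futures:
            done, futures = wait(futures, return_when=FIRST_COMPLETED)
            for fut in done:
                if fut.result() is not None:    # first successful algorithm wins
                    return fut.result()
        return None                              # both algorithms failed

if __name__ == "__main__":
    print(decode_both({"label": "LOT-1234", "contrast": 0.3}))
```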
As shown in FIG. 22, another multi-core process 2200 employs core 1 (C1) through core N (CN). In this process, each of the successive images 1-N (2210, 2212, 2214) is decoded using one core (or group of cores). The cores C1-CN generate decoding results 1-N (2220, 2222, 2224), respectively. As described above, the images may be assigned sequentially to the cores based on a preset order or based on a dynamically determined order. In the case of dynamic allocation (as described above), various factors may be considered, such as the type of code and the speed at which a given image is decoded (e.g., whether the decoding time exceeds a maximum threshold).
FIG. 23 depicts a multi-core process 2300 in which regions containing IDs are located by one core (or group of cores) and the IDs in those regions are decoded by another core (or group of cores). The image frame data 2310 is transmitted to both cores C1 and C2. One core C1 runs a process 2320 that finds regions containing symbol (ID) information, while the other core C2 runs an ID decoding process that uses the region information 2340 (typically passed between the cores over an internal bus), such as the approximate ID locations and the characteristics of the IDs in those regions (e.g., barcode orientation, boundaries, etc.), to concentrate decoding on those regions, speed up the decoding process and produce decoded results 2350 efficiently. Where more than two cores are used, a smaller number of cores may be used for finding and a larger number for decoding (or vice versa).
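This find-then-decode split can be pictured as a two-stage pipeline, sketched below with a queue standing in for the inter-core bus; the region format and the find/decode bodies are hypothetical placeholders.

```python
# Minimal sketch of the locate/decode pipeline across two workers
# (process 2300 analog). Queues stand in for the internal bus.
from multiprocessing import Process, Queue

def find_regions(frames, region_q):
    for frame in frames:
        # Pretend every frame yields one region with an orientation hint.
        region_q.put({"frame": frame, "bbox": (0, 0, 64, 64), "angle": 0})
    region_q.put(None)                        # sentinel: no more frames

def decode_regions(region_q, result_q):
    while True:
        region = region_q.get()
        if region is None:
            break
        result_q.put(f"decoded({region['frame']})")
    result_q.put(None)

if __name__ == "__main__":
    region_q, result_q = Queue(), Queue()
    p1 = Process(target=find_regions, args=(["img1", "img2"], region_q))
    p2 = Process(target=decode_regions, args=(region_q, result_q))
    p1.start(); p2.start()
    while (r := result_q.get()) is not None:
        print(r)
    p1.join(); p2.join()
```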
FIG. 24 depicts a multi-core process 2400. In this embodiment, a first core (or group of cores) C1 processes the image frame data 2410 using various conventional and/or specialized vision system tools 2420 to extract relevant image information (e.g., edges, downsampled pixels, blobs, etc.). The extracted image information 2440 is communicated over the bus to a second core (or group of cores) C2 for decoding by a decoding process 2430, the decoding process 2430 including procedures for interpreting the extracted information to locate ID-like features. This produces the decoding result 2450 (if any).
FIG. 25 depicts a multi-core process 2500 similar to processes 2300 and 2400. A first core (or group of cores) C1 applies an ID presence/absence process 2520 (e.g., adapted to search the image data for ID-like features, such as closely spaced parallel lines and/or DataMatrix-like geometry) to the transmitted image frame data 2510 to determine the presence or absence of an ID/code. This differs from providing location, position or image-characteristic information, in that only the actual presence or absence is determined. This determination establishes whether the image contains an ID/code; if not, the image is discarded without further processing. The presence/absence information 2540 is transmitted to a second core (or group of cores) C2, which uses it either to execute process 2530 or to discard the image data. If an ID/code appears to be present, the second core (or group of cores) C2 employs the ID locating and decoding process 2530 (or processes) to find and decode features in the image having sufficient similarity to symbols. When the decoding process is completed, any decoding result 2550 is output. This and other processes described herein may transmit other ID-related data between cores in addition to (or instead of) ID location data. Such other data may include, but is not limited to, image resolution, ID type, and the like.
A further variation of the multi-core processes 2300, 2400 and 2500 is described in process 2600 of FIG. 26. A first core (or group of cores) C1 analyzes the data of each image frame 2610 to determine whether the image is of sufficient quality and/or content to be processed by a second core (or group of cores) C2. The image analysis process 2620 determines the image characteristics and whether it is worthwhile to perform the ID lookup and decoding process. If so, the first core (or group) C1 instructs (instruction 2640) the second core (or group), which is responsible for the ID lookup/location and decode process 2630 that outputs the decode result 2650. Possible characteristics for determining the sufficiency of the image data include, but are not limited to, image contrast, sharpness/focus quality, and the like. As shown, it is also expressly contemplated that at least a portion of the image analysis process 2620 may be operated within the FPGA using preset algorithms adapted to run within the FPGA. The information derived by such algorithms is then passed to one or more cores (e.g., C1, C2, etc.) and used for ID location and decoding according to process 2630.
It should be clear that any of the above-described multi-core processes can be combined with other multi-core processes in a single runtime operation by means of a scheduling algorithm. For example, auto-focus (process 1600 in FIG. 16) may be performed as a system task in one core during one portion of an image acquisition event for an object, while processing of partial images (e.g., the two portions of a FOVE image) may be performed during a subsequent portion of that image acquisition event. The other processes described above may likewise be performed during other portions of the acquisition event, as appropriate.
V. additional system features and functions
Having described various exemplary embodiments of the electronic, physical packaging, and multi-core processes of the vision systems herein, exemplary features and functions are described further below, which are preferably and advantageously employed to enhance overall operation and versatility.
Typically, determination of the focal distance and rapid adjustment of the lens assembly is desirable on an object-by-object basis, especially where the objects differ in height and/or orientation (as shown in the example of FIG. 1). Typically, conveyor systems and other moving lines include an encoder signal in the form of pulses based on travel distance, the period of which varies with the speed of the line. By knowing the travel-distance increment between pulses, the speed of the line (and of the objects thereon) at any instant in time can be determined. Thus, referring to process 2700 of FIG. 27, the encoder signal is input to the interface of the camera assembly (step 2710) and processed to determine the actual object speed (step 2720). When features (e.g., IDs or other discernible shapes) on the object are identified, their pixel drift may be tracked between image frames (step 2730). The time between frames is known, so the movement of the feature's pixels between frames allows the system to calculate the relative distance to the object (feature). For a given camera lens, the pixel drift is larger at shorter distances and smaller at longer distances. Thus, from the measured pixel drift, the focal distance may be calculated using basic lens equations (step 2740). When the focal distance is calculated, the system can command the FPGA to adjust the liquid lens assembly (or other auto-focus lens) appropriately (step 2750). Generally, a stored list maps lens drive current values to preset focal distances. Once the distance is known, the system sets the current to the corresponding value. Calibration of the lens assembly, to ensure that the current adjustment matches the determined focal distance, may be performed periodically using conventional or custom techniques. In an exemplary embodiment, a known distance to a conveyor may be used to calibrate the focal distance of the liquid lens. A feature (or applied fiducial) on the conveyor is brought into sharp focus by the lens, and that feature is then associated with a known focal distance. The feature may be stationary (e.g., located beside the conveyor in the field of view), or may be on the conveyor belt. Where it is on the conveyor belt, it may optionally be registered to an encoder position, whereby the relatively precise position (in the downstream direction) of the calibration feature within the field of view can be known.
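A minimal sketch of this estimate follows under a simple pinhole-camera assumption: a feature that physically moves dx between frames (known from the encoder) and shifts dp pixels on the imager lies at roughly Z = f*dx/(dp*pixel_pitch). The focal length, pixel pitch and lookup-table values are invented for illustration; the patent's own equations and stored values may differ.

```python
# Minimal sketch of focal-distance estimation from encoder travel and
# pixel drift (process 2700 analog), plus a lookup of lens drive current.
def object_distance_mm(dx_mm, dp_pixels, focal_mm=16.0, pixel_pitch_mm=0.0055):
    """Pinhole model: image shift dp*pitch = f*dx/Z  ->  Z = f*dx/(dp*pitch)."""
    if dp_pixels == 0:
        raise ValueError("feature did not move between frames")
    return focal_mm * dx_mm / (dp_pixels * pixel_pitch_mm)

def lens_current_for_distance(distance_mm, lut):
    """Pick the stored liquid-lens drive current for the nearest distance."""
    return min(lut, key=lambda entry: abs(entry[0] - distance_mm))[1]

# Hypothetical calibration table: (focal distance mm, drive current mA).
LUT = [(300, 42.0), (500, 55.5), (800, 63.0), (1200, 68.5)]

z = object_distance_mm(dx_mm=20.0, dp_pixels=120)
print(round(z), "mm ->", lens_current_for_distance(z, LUT), "mA")
```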
Referring to process 2800 of FIG. 28, the FPGA (or other pre-processor coupled to the imager) may include a program or process that performs a high-speed search for ID/code-like features. The process may use standard ID lookup procedures, such as searching for a pattern of multiple adjacent parallel lines or DataMatrix-like edges. The FPGA transmits only image frames containing such features from the buffer (memory 228) to the processor 114 over the PCIe bus (step 2820), essentially discarding image frames that do not contain codes. The processor then performs a further decoding process on the received image frames using the assigned core(s) (step 2830). The FPGA may also transmit relevant ID location data (if any) to reduce decoding time within the processor 114.
Referring to FIG. 29, the vision system 100 is shown having the camera assembly 110, lens assembly/housing 116, and attached FOVE 118. The FOVE has been provided with one or more applied fiducials 2910, which may comprise a checkerboard pattern of light and dark elements or another clearly discernible pattern. In this embodiment, the fiducial 2910 is applied to a corner of the FOVE window 740, in a relatively small and unobtrusive location (e.g., at a corner) relative to the overall field of view. Alternatively (or in addition), a fiducial 2912 (shown in phantom) may be placed at an appropriate location on a mirror (e.g., large mirror 812, shown in phantom). In general, the fiducial is located on an optical component along the FOVE optical path. Since the distance between the fiducial and the image plane (sensor 112, shown in phantom) can be accurately determined, the focus setting of the liquid lens (or other lens assembly) can be accurately calibrated by focusing on the fiducial. Additional techniques for providing "closed-loop" auto-calibration of a liquid lens (or other variable lens assembly) are shown and described in commonly assigned U.S. patent application Ser. No. 13/563,499, entitled "system and method for determining and controlling focal length in a vision system camera", by Laurens Nunnink et al., the teachings of which are incorporated herein by reference as useful background information. In general, the structures and techniques described in that incorporated application provide the lens assembly with a structure that selectively projects a fiducial pattern onto at least a portion of the optical path during calibration (which may occur dynamically during runtime operation), while allowing some or all of the field of view to remain undisturbed during acquisition of object images in normal runtime operation. This approach substantially eliminates inaccuracies due to manufacturing tolerances, calibration drift over time, and temperature variations of the system and/or lens assembly.
For further explanation, in FIG. 29, the optional fan assembly 2920 described above is mounted to the underside of the camera assembly 110 by screws or other fasteners 2921 as shown. The connection cable 2922 connects to a suitable connector at the rear of the camera assembly. Alternatively, the cable 2922 may be connected to an external power source.
With further reference to the more detailed perspective views of FIGS. 29A and 29B, the exemplary camera assembly 110 (with exemplary lens 2928) can also include an optional bracket 2930 that provides an intermediate mount for the fan 2920. The bracket 2930 includes an annular access opening 2931 sized to match the diameter of the fan blades to allow airflow therethrough. The bracket 2930 also includes fasteners 2932 that secure the bracket to the threaded holes (588 in FIG. 5A) in the bottom of the camera body described above. The fan 2920 is mounted to the outside of the bracket 2930 by fasteners 2936 offset from the bracket fasteners 2932. These fasteners 2936 are received in threaded holes 2937 of the bracket 2930. The fasteners 2936 pass through spacers 2938, which maintain the rigidity of the mounting flange of the fan. The fasteners 2936 also pass through standoffs 2940 that space the fan 2920 from the outer face of the bracket, thereby allowing airflow to exit along the bottom surface. In one embodiment, the standoff spacing may be between approximately 0.5 and 2cm, although a wide range of possible standoff distances is expressly contemplated. Note that it is also expressly contemplated that, in alternative embodiments, the bracket and/or fan may be mounted on one or more sides (e.g., left or right sides) and/or the top side of the camera body. This may depend in part on the mounting mechanism of the camera. The fan may be covered by a conventional safety grill that is part of the fastening mechanism. The bracket 2930 further includes a pair of exemplary tabs 2934 with fastening holes 2944 that may be used as part of a mounting mechanism for suspending the camera assembly (and any mating accessories, such as the FOVE) in an imaged scene.
Referring to fig. 30, the precise operation of the liquid lens (or another variable lens) assembly can be improved by establishing a drive current versus focal distance (or lens optical power) characteristic. That is, the operating curve of the drive current for the lens assembly is typically non-linear over its entire focal range. Process 3000 compensates for this non-linearity. During manufacture, or during calibration, the lens is driven to focus on an object/fiducial at each of a plurality of different known focal distances (step 3010). At each such focus, the actual drive current is measured (step 3020). The process continues through the increments of the plurality of focal distances (decision step 3030 and step 3040) until all focal distances have been visited by the process. Decision step 3030 then branches to step 3050, where the measured drive-current data points are used to generate a drive current versus focal distance (or optical power) characteristic. The characteristic captures any non-linearity, and it may be stored (e.g., as a look-up table or a modeled equation) so that subsequent driving of the lens during operation uses the correction provided by the characteristic. It will be appreciated that the analysis and error correction for the non-linearity of the lens drive current may be achieved using a wide range of techniques that will be apparent to those of skill in the art.
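A minimal sketch of storing and applying such a characteristic follows, using piecewise-linear interpolation between measured points; the sample distances and currents are invented for illustration only.

```python
# Minimal sketch of a drive-current vs. focal-distance characteristic
# (process 3000 analog): record measured points, interpolate at runtime.
from bisect import bisect_left

def build_characteristic(samples):
    """samples: [(focal_distance_mm, measured_current_mA)]; returns sorted table."""
    return sorted(samples)

def current_for_distance(characteristic, distance_mm):
    dists = [d for d, _ in characteristic]
    i = bisect_left(dists, distance_mm)
    if i == 0:
        return characteristic[0][1]
    if i == len(characteristic):
        return characteristic[-1][1]
    (d0, c0), (d1, c1) = characteristic[i - 1], characteristic[i]
    t = (distance_mm - d0) / (d1 - d0)
    return c0 + t * (c1 - c0)            # piecewise-linear interpolation

char = build_characteristic([(300, 41.2), (500, 54.8), (800, 63.9), (1200, 69.1)])
print(round(current_for_distance(char, 650), 2))   # interpolated current (mA)
```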
Referring to FIG. 31, a process 3100 is shown that determines focus based on the overlap regions in a FOVE image. The image frame 3110 is divided into two portions 3120 and 3122, corresponding to each side of the overall extended field of view of the FOVE. Each of the image portions 3120 and 3122 includes a cooperating overlap region 3130, 3132 as described above. Each of the overlap regions 3130, 3132 contains one or more discernible features (e.g., the X 3140 and barcode 3142). These features may be any contrasting elements that are visible in both overlap regions. The system identifies these features in each overlap region and determines their relative positions and sizes (step 3150). These parameters vary with focal distance in a known manner. In step 3160, process 3100 compares the positional shift (and size difference, if any) against known values corresponding to particular focal distances. More generally, the process works in the manner of a coincidence rangefinder. The focal distance corresponding to the observed values is then used to set the focus of the lens assembly in step 3170. This process, and the other automatic adjustment processes described herein, can be programmed on the FPGA, or can use system task functions in one or more cores of the processor 114, which return information to the FPGA so that focus adjustments can be performed by the FPGA.
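The rangefinder comparison can be sketched as follows, where the positional shift (disparity) of the same feature between the two overlap regions is matched against calibration points recorded at known focal distances; all numeric values are hypothetical.

```python
# Minimal sketch of the coincidence-rangefinder comparison (process 3100 analog).
def distance_from_disparity(disparity_px, calibration):
    """calibration: [(disparity_px, focal_distance_mm)] recorded at known
    distances. Returns the focal distance of the closest stored point."""
    return min(calibration, key=lambda c: abs(c[0] - disparity_px))[1]

CAL = [(210, 300), (130, 500), (82, 800), (55, 1200)]   # hypothetical points

# Feature 'X' found at column 412 in the left strip's overlap region and
# at column 284 in the right strip's overlap region:
disparity = 412 - 284
print(distance_from_disparity(disparity, CAL), "mm")     # -> 500
```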
As shown in fig. 32, another process 3200, which more generally determines the speed and distance of objects passing through the field of view, is useful in auto-focus and other automatic adjustment processes. In this embodiment, the system identifies one or more features on the object, typically some or all of the edges of the object itself or another closed or semi-closed element. At step 3220, the process records and stores the size of the feature (or features). The process then determines whether a next image frame containing the feature (or features) is available and/or whether enough frames have been obtained to make the determination (decision step 3230). If a next frame is to be processed, the process returns to step 3220 and the size of the feature (or features) in that frame is recorded/stored. This continues until no more frames are available or enough frames have been processed. Decision step 3230 then branches to step 3240, where the change in size between image frames is calculated. Then, at step 3250, the process calculates the relative distance and speed of the object, given that the time base between image frames is known, and by means of stored information (e.g., a characteristic curve or look-up table) relating a given change in size over time to relative distance and speed. This can be used to control the focus of the lens assembly.
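A minimal sketch of the size-change calculation follows under a pinhole assumption: for a feature of fixed physical size, apparent pixel size is inversely proportional to distance, so the sequence of sizes yields relative distances and an approach-speed estimate. The constant k, frame interval and sizes are invented for illustration, and real systems would use a stored characteristic as described above.

```python
# Minimal sketch of relative distance/speed from feature size change
# (process 3200 analog). Units are arbitrary because k is unknown.
def relative_distance_and_speed(sizes_px, dt_s, k=100000.0):
    """sizes_px: feature size in consecutive frames; returns (distances, speeds)."""
    distances = [k / s for s in sizes_px]                  # Z proportional to 1/size
    speeds = [(distances[i] - distances[i + 1]) / dt_s     # positive = approaching
              for i in range(len(distances) - 1)]
    return distances, speeds

dists, speeds = relative_distance_and_speed([90.0, 100.0, 112.5], dt_s=0.02)
print([round(d, 1) for d in dists], [round(v, 1) for v in speeds])
```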
Referring to fig. 33, an exemplary arrangement of two camera assemblies M and S (with the FOVE omitted) is located on opposite sides of the scene to image the front and back of an object 3310 having multiple IDs 3312 on different surfaces, only some of which are in the field of view of each camera, but all of which (e.g., front 3320, top 3322, and back 3324) are completely imaged by the two camera assemblies M and S together. Each camera assembly M and S includes a respective illuminator MI and SI. The cameras M and S are arranged in a master-slave configuration, with a rear-mounted RS-485 connector 3330 on the assembly M (which is part of the communication interface provided by the camera assembly and communicates with the processor 114) connected to a Y-cable 3332. The Y-cable includes opposing male and female connectors 3334. One of the connectors 3336 connects to an opposing connector 3338, the connector 3338 being connected to the assembly S via a second Y-cable 3340, the second Y-cable 3340 having a further connector 3342 for connecting additional slave units. To avoid cross-talk between illuminators, the processor of assembly M controls its image acquisition and its illumination triggering at a time TM, and controls the image acquisition/illumination of assembly S at a non-overlapping time TS. The acquisition times TM and TS are offset by a predetermined interval, which ensures that the image acquisition of each camera assembly is not disturbed by the other. The images may be processed by any core in each camera assembly, or may be processed by cores in both camera assemblies that share image data between the cameras using an appropriate connection, such as a network connection (270 of FIG. 2). For example, one set of cores may be adapted to look for IDs in all images, while another set may be adapted to decode all images. Additional camera assemblies may be connected by appropriate cables to implement an extended master-slave arrangement (or other control mechanism).
Summary of the invention
It should be clear that the embodiments described above for a vision system employing a vision system camera with a multi-core processor, a high-speed, high-resolution imager, a FOVE, an auto-focus lens, and a pre-processor connected to the imager for pre-processing image data provide highly desirable acquisition and processing speeds, as well as image sharpness, in a wide range of applications. More particularly, the arrangement effectively scans objects that require a wide field of view, that have useful features of varying size and location, and that move relatively quickly with respect to the system field of view. The vision system provides a physical package with a variety of physical interconnect interfaces to support various options and control functions. The package optimizes heat exchange with the surrounding environment by arranging components to efficiently dissipate internally generated heat, and includes heat-dissipating structures (e.g., fins) to facilitate such heat exchange. The system also allows a variety of multi-core processes to optimize and load-balance image processing and system operations (e.g., auto-adjustment tasks). Moreover, it is expressly contemplated that the above-described methods and procedures for operating the camera assembly and performing vision system/decoding tasks may be combined in various ways to achieve desired processing results. Likewise, the procedures can be switched according to processing conditions (e.g., procedure 2100 can be used and then switched to procedure 2300 as appropriate, etc.). Likewise, given more than two cores, multiple procedures may execute concurrently (e.g., procedure 2500 executing in two of four cores while procedure 2600 executes concurrently in the other two of the four cores).
Exemplary embodiments of the present invention are described above in detail. Many modifications and additions may be made to the invention without departing from the spirit and scope thereof. The features of each different embodiment described above may be combined with the features of other described embodiments as appropriate to provide a multiplicity of combinations of features relating to the new embodiments. Additionally, while various separate embodiments of the apparatus and method of the present invention have been described above, it is to be understood that this description is merely illustrative of the application of the principles of the invention. For example, various directional and orientational terms used herein, such as "vertical," "horizontal," "upper," "lower," "bottom," "top," "side," "front," "rear," "left," "right," and the like, are used solely as relative conventions and not as absolute orientations relative to a fixed coordinate system, such as gravity. Also, although not depicted, it is expressly contemplated that various mounting mechanisms supported by various structures (e.g., top boom, ceiling post, beam, etc.) may be used to secure the camera assembly and other vision system components relative to the imaging scene, as the case may be. Likewise, although FOVE is shown as a dual field expander, it is expressly contemplated that FOVE may expand the field of view to three or more fields of view, each suitably projected as a partial image on the imager. Also, while the described FOVE expansion is performed along the "width" dimension, it is expressly contemplated that the term "width" may be substituted for "height" herein, where such an application is desired. Thus, the expansion may occur along either of the width and the height. Likewise, it is expressly contemplated that the internal or external illumination may include wavelengths that project visible and/or invisible (e.g., near infrared light) for a particular function, such as calibration, while the imager may be adapted to uniquely read such wavelengths during a particular task, such as calibration. Further, although each of the FPGAs and processors are shown herein as performing certain functions, it is expressly contemplated that some functions may be switched in any of these configurations. In alternative embodiments, most of the tasks and functions may be performed by a multi-core processor, while the hardware/firmware-based functions performed by the FPGA may be minimized or the FPGA may be omitted altogether, which facilitates different circuitry adapted to send image data from the image sensor to the processor in the appropriate format at the appropriate time. Accordingly, this description is to be taken only by way of example and not to otherwise limit the scope of the invention.

Claims (49)

1. A vision system, comprising:
a multi-core processor that receives images captured by an imager, the multi-core processor performing one or more system operation tasks and one or more vision system tasks on the images to produce results related to information in the images, wherein the multi-core processor is constructed and arranged to run according to a schedule that causes each of a plurality of cores to be assigned to process the one or more system operation tasks or the one or more vision system tasks;
wherein the one or more system operation tasks include at least one of (a) one or more focus setting algorithms; (b) one or more automatic brightness algorithms; (c) one or more image compression algorithms; and (d) one or more wavefront reconstruction algorithms; and
wherein the one or more vision system tasks include an ID decoding task or an ID lookup task.
2. The vision system of claim 1, wherein the schedule controls the images such that each of the images is selectively processed at each core to increase efficiency of result generation.
3. The vision system of claim 2, wherein the schedule directs at least one of the cores to perform the one or more system operation tasks without producing a result.
4. The vision system of claim 1, further comprising a preprocessor that, based at least in part on information generated by the one or more system operation tasks performed in at least one of the cores, performs at least a predetermined auto-adjustment operation, the auto-adjustment operation including at least one of (a) one or more focus setting algorithms; and (b) one or more automatic brightness algorithms.
5. The vision system of claim 4, wherein the one or more focus setting algorithms include auto-focusing of a liquid lens.
6. The vision system of claim 1, wherein the result includes decoded symbol information from an object containing a symbol code.
7. The vision system of claim 1, further comprising a field of view expander (FOVE) that divides an image received at the imager into a plurality of partial images along one of an expanded width and an expanded height.
8. The vision system of claim 7, wherein each of the partial images is processed by a respective core of the multi-core processor.
9. The vision system of claim 8, wherein each image or partial image includes an overlap region with respect to another partial image, and each core processes the overlap region separately.
10. The vision system of claim 8, wherein each partial image includes a portion of a symbol code, and wherein each core identifies and separately processes the portion to produce results that are stitched together to include decoded symbol information.
11. The vision system of claim 1, further comprising an interface for an external speed signal corresponding to a line moving relative to a field of view of the camera assembly.
12. The vision system of claim 4, wherein at least one of the preprocessor and the multi-core processor is constructed and arranged, based on the speed signal and a plurality of images of a moving object, to perform at least one of:
(a) controlling a focus of a variable lens,
(b) determining a focal distance to the imaged object,
(c) calibrating the focus relative to the moving line, and
(d) determining a relative speed of the imaged object.
13. The vision system of claim 1, further comprising a preprocessor that selectively transmits a portion of the images from the imager to the multi-core processor, the preprocessor processing other images from the imager for system control, including automatic adjustment.
14. The vision system of claim 13, wherein the preprocessor selectively transmits information to the multi-core processor for further processing based on the preprocessor's identification of a feature of interest, the information being at least one of (a) the feature of interest and (b) an image containing the feature of interest.
15. The vision system of claim 14, wherein the feature of interest is a symbol.
16. The vision system of claim 1, wherein the multi-core processor is constructed and arranged to process the partial images from each image separately in each of the plurality of cores.
17. The vision system of claim 1, wherein the multi-core processor is constructed and arranged to decode symbols in the images in at least one of the cores, and the multi-core processor is constructed and arranged to (a) identify symbols contained within the images in at least one of the cores and (b) decode the symbols in the images containing the identified symbols in another of the cores.
18. The vision system of claim 17, wherein the multi-core processor is constructed and arranged to provide, to another of the cores, information relating to at least one of: (a) a location of the symbol in the image containing the symbol, and (b) other features related to the symbol in the image containing the symbol.
19. The vision system of claim 1, wherein the multi-core processor is constructed and arranged to perform image analysis on the images to identify images having sufficient features for decoding in at least one of the cores and to perform the step of decoding images having sufficient features for decoding in another of the cores.
20. The vision system of claim 1, wherein the multi-core processor is constructed and arranged to process images in at least one of the cores using a first decoding process and in another of the cores using a second decoding process.
21. The vision system of claim 1, wherein the multi-core processor is constructed and arranged to decode an image containing a symbol from the plurality of images in at least one of the cores and, if after a preset time interval (a) decoding of the image is not complete and (b) decoding of the image is likely to be completed given more time, to decode the image in another of the cores.
22. The vision system of claim 1, wherein the multi-core processor is constructed and arranged to decode an image containing a symbol from the plurality of images in at least one core and, if after a preset time interval (a) decoding of the image is not complete and (b) decoding of the image is likely to be completed given more time, to continue decoding of the image in the at least one core and to decode another image from the plurality of images in another of the cores.
23. The vision system of claim 1, wherein the multi-core processor is constructed and arranged to separately process partial images each containing a portion of an image, wherein the image contains symbols of a first type and symbols of a second type, and wherein the multi-core processor is further constructed and arranged to decode the partial images using each of the plurality of cores such that processing of the symbols of the first type and the symbols of the second type is load-balanced among the cores.
24. The vision system of claim 23, wherein the first type of symbol is a one-dimensional type of barcode and the second type of symbol is a two-dimensional type of barcode.
25. The vision system of claim 1, wherein the cores are configured such that, based on a measured current trigger frequency of image capture by the imager, at least one core performs non-decoding system operation tasks if the trigger frequency is within a preset threshold, and performs decoding tasks without performing system operation tasks if the trigger frequency exceeds the preset threshold.
26. The vision system of claim 25, wherein the non-decoding system operation task is an auto-adjustment task, said auto-adjustment task including at least one of (a) one or more focus setting algorithms; and (b) one or more automatic brightness algorithms.
27. The vision system of claim 1, further comprising a preprocessor that preprocesses images received from the imager to identify symbols in at least some of the images prior to transmitting the images to the multi-core processor, such that the transmitted images include images having symbols, the multi-core processor being constructed and arranged to decode the symbols in the images in at least one of the cores.
28. The vision system of claim 1, wherein said one or more focus setting algorithms comprise at least one of: measuring distance, calibrating and calculating sharpness.
29. The vision system of claim 1, wherein said one or more automatic brightness algorithms comprise at least one of: exposure, gain and illumination intensity.
30. The vision system of claim 1, wherein said one or more image compression algorithms comprise JPEG image data compression performed on image frames.
31. The vision system of claim 1, wherein said one or more wavefront reconstruction algorithms include a wavefront coding technique configured to improve depth of field.
32. A vision system, comprising:
a preprocessor that selectively stores images received from an imager at a frame rate and that transmits at least a portion of the images to a multi-core processor that processes information in the images among a plurality of cores to produce results, the preprocessor employing at least some of the stored images for one or more system operation tasks, wherein the one or more system operation tasks include at least one of (a) one or more focus setting algorithms; (b) one or more automatic brightness algorithms; (c) one or more image compression algorithms; and (d) one or more wavefront reconstruction algorithms.
33. A method of processing an image in a vision system, comprising the steps of:
capturing an image at a first frame rate in an imager of a vision system camera;
communicating at least a portion of the image to a multi-core processor; and
processing the transmitted image in each of a plurality of cores of the multi-core processor to produce a result containing information related to the image, according to a schedule that causes each of the plurality of cores to be assigned to process system operation tasks, including camera auto-tuning, or to process vision system tasks, including an ID decoding task or an ID lookup task.
34. A vision system, comprising:
a camera comprising an imager and a processor mechanism, the processor mechanism comprising:
(a) a preprocessor, interconnected with the imager, that receives and preprocesses images of an object having a symbol code from the imager at a first frame rate; and
(b) a multi-core processor that receives preprocessed images from the preprocessor, dynamically allocates one or more vision system tasks and one or more system operation tasks to be performed by one or more respective cores of the multi-core processor, and performs the one or more system operation tasks and the one or more vision system tasks on at least one image to produce a result related to information in the image;
wherein the one or more system operation tasks include (a) one or more focus setting algorithms; (b) one or more automatic brightness algorithms; (c) one or more image compression algorithms; or (d) one or more wavefront reconstruction algorithms, and the one or more vision system tasks include an ID decoding task or an ID lookup task.
35. A vision processing method, comprising:
capturing one or more image frames of an object having a symbol code;
sending one or more image frames from a preprocessor to a multi-core processor;
dynamically allocating, by a scheduler, one or more vision system tasks to be performed on the image frames by one or more respective cores of the multi-core processor;
dynamically allocating, by a scheduler, one or more system operation tasks to be executed by one or more respective cores of the multi-core processor;
performing, by one or more respective cores of the multi-core processor, one or more vision system tasks on an image frame; and
executing, by one or more respective cores of the multi-core processor, one or more system operation tasks;
wherein the one or more system operation tasks include (a) one or more focus setting algorithms; (b) one or more automatic brightness algorithms; (c) one or more image compression algorithms; or (d) one or more wavefront reconstruction algorithms, and the one or more vision system tasks include an ID decoding task or an ID lookup task.
36. The method of claim 35, wherein dynamically allocating is based on a prior estimate of when the one or more respective cores will be available to perform the one or more tasks.
37. The method of claim 35, wherein the image frame is divided into two image portions.
38. The method of claim 37, wherein the image frame divided into two image portions is divided vertically or horizontally.
39. The method of claim 38, wherein performing the one or more vision system tasks comprises:
decoding a first one of the two image portions by a first one of the one or more respective cores and generating a first decoding result; and
decoding a second one of the two image portions by a second one of the one or more respective cores and generating a second decoding result.
40. The method of claim 39, further comprising:
merging the first decoding result and the second decoding result into a merged decoding result; and
providing the merged decoding result to a downstream process.
41. The method of claim 40, wherein the downstream process comprises one or more of: indicating a good ID read or no ID read; or transmitting the merged decoding result to a remote computer.
42. The method of claim 37, wherein the two image portions at least partially overlap.
43. The method of claim 35, wherein the symbol code comprises one or more of a 1D code or a 2D code.
44. The method of claim 35, wherein performing the one or more vision system tasks comprises:
decoding a first image frame of the one or more image frames with a first core of the one or more respective cores using a first decoding algorithm; and
decoding the first image frame of the one or more image frames with a second core of the one or more respective cores using a second decoding algorithm that is different from the first decoding algorithm.
45. The method of claim 35, wherein dynamically allocating comprises: assigning a first core of the one or more respective cores to decode an image frame of the one or more image frames within a predetermined maximum time.
46. The method of claim 35, wherein a first core of the one or more respective cores exclusively performs ID lookup tasks and a second core of the one or more respective cores exclusively performs ID decoding tasks.
47. The method of claim 46, wherein ID lookup results are sent from the first core to the second core.
48. The method of claim 35, further comprising:
monitoring image frames transmitted from the preprocessor to the multi-core processor.
49. The method of claim 35, wherein the one or more system operation tasks include at least one of: illumination control, brightness exposure, and focusing of the auto-focus lens.
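By way of illustration only (and not as part of the claims), the following minimal sketch shows one way the split/decode/merge flow recited in claims 37 through 42 might be realized: an image frame is divided into two overlapping portions, each portion is decoded in its own core, and the partial results are merged for a downstream process. This is not the claimed implementation; split_with_overlap, decode_portion, and merge_results are hypothetical stand-ins for the recited steps.

    # Minimal sketch (assumptions noted above): split a frame into two
    # overlapping portions, decode each on a separate core, merge the results.
    from concurrent.futures import ProcessPoolExecutor
    from typing import List, Tuple

    def split_with_overlap(frame: List[List[int]], overlap: int) -> Tuple[list, list]:
        """Divide a 2D pixel array vertically into left/right portions sharing 'overlap' columns."""
        width = len(frame[0])
        mid = width // 2
        left = [row[: mid + overlap] for row in frame]
        right = [row[mid - overlap:] for row in frame]
        return left, right

    def decode_portion(portion: List[List[int]]) -> str:
        """Hypothetical decoder stand-in: returns a fake partial result describing the portion."""
        return f"partial({len(portion)}x{len(portion[0])})"

    def merge_results(first: str, second: str) -> str:
        """Combine two partial decode results into one merged result."""
        return first + "+" + second

    if __name__ == "__main__":
        frame = [[(x + y) % 256 for x in range(64)] for y in range(16)]  # stand-in image frame
        left, right = split_with_overlap(frame, overlap=4)
        with ProcessPoolExecutor(max_workers=2) as pool:   # one worker per core
            first, second = pool.map(decode_portion, (left, right))
        merged = merge_results(first, second)
        print("merged decode result:", merged)             # handed to a downstream process

A larger overlap region increases the chance that a symbol straddling the split line is fully contained in at least one portion, at the cost of some duplicated decoding work.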
CN201810200359.1A 2012-10-04 2013-10-08 Symbol reader with multi-core processor and operation system and method thereof Active CN108460307B (en)

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
US13/645,173 2012-10-04
US13/645,173 US10154177B2 (en) 2012-10-04 2012-10-04 Symbology reader with multi-core processor
US13/645,213 US8794521B2 (en) 2012-10-04 2012-10-04 Systems and methods for operating symbology reader with multi-core processor
US13/645,213 2012-10-04
CN201310465330.3A CN103714307B (en) 2012-10-04 2013-10-08 With the symbol reader of polycaryon processor and its runtime and method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
CN201310465330.3A Division CN103714307B (en) 2012-10-04 2013-10-08 With the symbol reader of polycaryon processor and its runtime and method

Publications (2)

Publication Number Publication Date
CN108460307A CN108460307A (en) 2018-08-28
CN108460307B true CN108460307B (en) 2022-04-26

Family

ID=50407267

Family Applications (3)

Application Number Title Priority Date Filing Date
CN201810200359.1A Active CN108460307B (en) 2012-10-04 2013-10-08 Symbol reader with multi-core processor and operation system and method thereof
CN202210397986.5A Pending CN114970580A (en) 2012-10-04 2013-10-08 Symbol reader with multi-core processor and operation system and method thereof
CN201310465330.3A Active CN103714307B (en) 2012-10-04 2013-10-08 With the symbol reader of polycaryon processor and its runtime and method

Family Applications After (2)

Application Number Title Priority Date Filing Date
CN202210397986.5A Pending CN114970580A (en) 2012-10-04 2013-10-08 Symbol reader with multi-core processor and operation system and method thereof
CN201310465330.3A Active CN103714307B (en) 2012-10-04 2013-10-08 With the symbol reader of polycaryon processor and its runtime and method

Country Status (2)

Country Link
CN (3) CN108460307B (en)
DE (1) DE102013110899B4 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3159731B1 (en) * 2015-10-19 2021-12-29 Cognex Corporation System and method for expansion of field of view in a vision system
CN105469131A (en) * 2015-12-30 2016-04-06 深圳市创科自动化控制技术有限公司 Implicit two-dimensional code and reading and recognizing device thereof
CN106937047B (en) * 2017-03-08 2019-08-09 苏州易瑞得电子科技有限公司 Adaptive focusing visual identity method, system and the equipment of symbolic feature
CN107358135B (en) * 2017-08-28 2020-11-27 北京奇艺世纪科技有限公司 Two-dimensional code scanning method and device
DE102017128032A1 (en) * 2017-11-27 2019-05-29 CRETEC GmbH Code reader and method for online verification of a code
US10776972B2 (en) 2018-04-25 2020-09-15 Cognex Corporation Systems and methods for stitching sequential images of an object
CN112747677A (en) * 2020-12-29 2021-05-04 广州艾目易科技有限公司 Optical positioning method and system for multiple processors
US11717973B2 (en) 2021-07-31 2023-08-08 Cognex Corporation Machine vision system with multispectral light assembly
US20230030276A1 (en) * 2021-07-31 2023-02-02 Cognex Corporation Machine vision system and method with multispectral light assembly

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1300406A (en) * 1999-04-07 2001-06-20 讯宝科技公司 Imaging engine and technology for reading postal codes
CN102034076A (en) * 2009-10-01 2011-04-27 手持产品公司 Low power multi-core decoder system and method
CN102596002A (en) * 2009-10-30 2012-07-18 卡尔斯特里姆保健公司 Intraoral camera with liquid lens
WO2012125296A2 (en) * 2011-03-16 2012-09-20 Microscan Systems, Inc. Multi-core distributed processing for machine vision applications

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5166745A (en) * 1990-05-01 1992-11-24 The Charles Stark Draper Laboratory, Inc. Rapid re-targeting, space-based, boresight alignment system and method for neutral particle beams
DE19639854A1 (en) 1996-09-27 1998-06-10 Vitronic Dr Ing Stein Bildvera Method and device for detecting optically detectable information applied to potentially large objects
US6766515B1 (en) * 1997-02-18 2004-07-20 Silicon Graphics, Inc. Distributed scheduling of parallel jobs with no kernel-to-kernel communication
US7494064B2 (en) 2001-12-28 2009-02-24 Symbol Technologies, Inc. ASIC for supporting multiple functions of a portable data collection device
US8146823B2 (en) * 2002-01-18 2012-04-03 Microscan Systems, Inc. Method and apparatus for rapid image capture in an image system
US20040169771A1 (en) * 2003-01-02 2004-09-02 Washington Richard G Thermally cooled imaging apparatus
US6690451B1 (en) * 2003-02-06 2004-02-10 Gerald S. Schubert Locating object using stereo vision
JP4070778B2 (en) * 2005-05-13 2008-04-02 株式会社ソニー・コンピュータエンタテインメント Image processing system
AT504940B1 (en) 2007-03-14 2009-07-15 Alicona Imaging Gmbh METHOD AND APPARATUS FOR THE OPTICAL MEASUREMENT OF THE TOPOGRAPHY OF A SAMPLE
US20090072037A1 (en) * 2007-09-17 2009-03-19 Metrologic Instruments, Inc. Autofocus liquid lens scanner
CN101546276B (en) * 2008-03-26 2012-12-19 国际商业机器公司 Method for achieving interrupt scheduling under multi-core environment and multi-core processor
US20100097444A1 (en) * 2008-10-16 2010-04-22 Peter Lablans Camera System for Creating an Image From a Plurality of Images
CN101299194B (en) * 2008-06-26 2010-04-07 上海交通大学 Heterogeneous multi-core system thread-level dynamic dispatching method based on configurable processor
CN101466041B (en) * 2009-01-16 2010-09-15 清华大学 Task scheduling method for multi-eyepoint video encode of multi-nuclear processor
CN101710986B (en) * 2009-11-18 2012-05-23 中兴通讯股份有限公司 H.264 parallel decoding method and system based on isostructural multicore processor
US8700943B2 (en) * 2009-12-22 2014-04-15 Intel Corporation Controlling time stamp counter (TSC) offsets for mulitple cores and threads
US8711248B2 (en) * 2011-02-25 2014-04-29 Microsoft Corporation Global alignment for high-dynamic range image generation
CN102625108B (en) * 2012-03-30 2014-03-12 浙江大学 Multi-core-processor-based H.264 decoding method

Also Published As

Publication number Publication date
CN103714307B (en) 2018-04-13
CN108460307A (en) 2018-08-28
DE102013110899A1 (en) 2014-04-30
CN103714307A (en) 2014-04-09
CN114970580A (en) 2022-08-30
DE102013110899B4 (en) 2019-07-04

Similar Documents

Publication Publication Date Title
US11606483B2 (en) Symbology reader with multi-core processor
CN108460307B (en) Symbol reader with multi-core processor and operation system and method thereof
US8794521B2 (en) Systems and methods for operating symbology reader with multi-core processor
JP7257092B2 (en) Constant Magnification Lens for Vision System Cameras
US8646690B2 (en) System and method for expansion of field of view in a vision system
US9857575B2 (en) System and method for expansion of field of view in a vision system
US10445544B2 (en) System and method for expansion of field of view in a vision system
US11422257B2 (en) Lens assembly with integrated feedback loop and time-of-flight sensor
WO2019174435A1 (en) Projector and test method and device therefor, image acquisition device, electronic device, readable storage medium
CN104923923A (en) Laser positioning cutting system based on large-format visual guidance and distortion rectification
CN102693407A (en) Auto-exposure method using continuous video frames under controlled illumination
WO2020041734A1 (en) Shelf-viewing camera with multiple focus depths
CN103051839A (en) Device and method for intelligently adjusting light supplementation angle
CN116264637A (en) Lens assembly with integrated feedback loop and time-of-flight sensor
US20220012444A1 (en) Handheld id-reading system with integrated illumination assembly
CN110231749A (en) Camera
EP3081926B1 (en) System and method for acquiring images of surface texture
CN104866798A (en) Bar-code identifying and reading engine for obtaining image data
CN106304831A (en) A kind of integrated camera structure of chip mounter
US11223814B2 (en) Imaging optics for one-dimensional array detector
EP2898676B1 (en) Three dimensional image capture system with image conversion mechanism and method of operation thereof
US11966810B2 (en) System and method for expansion of field of view in a vision system
KR101133653B1 (en) Board inspection apparatus and board inspection method using the apparatus
US20200160012A1 (en) System and method for expansion of field of view in a vision system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant