CN108460307A - Symbol reader with multi-core processor, and system and method for operating same - Google Patents
Symbol reader with multi-core processor, and system and method for operating same Download PDF Info
- Publication number
- CN108460307A CN108460307A CN201810200359.1A CN201810200359A CN108460307A CN 108460307 A CN108460307 A CN 108460307A CN 201810200359 A CN201810200359 A CN 201810200359A CN 108460307 A CN108460307 A CN 108460307A
- Authority
- CN
- China
- Prior art keywords
- image
- core
- vision system
- symbol
- decoding
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/10544—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum
- G06K7/10821—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum further details of bar or optical code scanning devices
- G06K7/10831—Arrangement of optical elements, e.g. lenses, mirrors, prisms
Abstract
The present invention provides a vision system camera, and an associated method of operation, having a multi-core processor, a high-speed, high-resolution imager, a field-of-view expander, an auto-focus lens, and a preprocessor connected to the imager that preprocesses image data. Together, the camera and method deliver highly desirable acquisition and processing speed, along with image resolution, across a wide range of applications. The arrangement efficiently scans objects that require a wide field of view, vary in size, and move relatively quickly with respect to the system's field of view. The physical package of the vision system provides a variety of physical interconnection interfaces to support various options and control functions. The package efficiently dissipates internally generated heat by arranging components to optimize heat exchange with the surrounding environment, and includes heat-dissipating structures (e.g. fins) that facilitate such heat exchange. The system also enables a variety of multi-core optimizations and load balancing of image processing and system operation (e.g. auto-adjustment tasks).
Description
Divisional application
This application is a divisional of the application with Application No. 2013104653303, filed October 8, 2013, and entitled "Symbol Reader With Multi-Core Processor and Operating System and Method Thereof".
Technical field
The present invention relates to machine vision systems, and more particularly to vision systems capable of acquiring, processing and decoding symbols (e.g. barcodes).
Background
Vision systems that measure and inspect objects, calibrate, and/or decode symbols (e.g. one-dimensional and two-dimensional barcodes, also termed "IDs") are widely used in a range of applications and industries. Such systems are based upon the use of an image sensor (also termed an "imager"), which acquires images (typically grayscale or color, and in one, two or three dimensions) of an object or target, and processes these acquired images using an on-board or interconnected vision system processor. The processor generally includes both processing hardware and non-transitory computer-readable program instructions that perform one or more vision system processes based upon the processed image information to generate a desired output. The image information is typically provided within an array of image pixels, each having a different color and/or intensity. In the example of a symbol reader (also termed herein a "camera"), a user or automated process acquires an image of a target that is believed to contain one or more barcodes, 2D codes, or other symbol types. The image is processed to identify barcode features, which are then decoded by a decoding process and/or processor to obtain the inherent alphanumeric data represented by the code.
A common application for ID readers is tracking and sorting objects moving along a line (e.g. a conveyor) in manufacturing and logistics operations. The ID reader can be positioned over the line so as to acquire the respective IDs of all expected objects at an appropriate viewing angle as each object moves through its field of view. Depending upon the placement of the reader relative to the moving line and the size (e.g. height) of the objects, the focal distance of the reader with respect to an object can vary. That is, taller objects may bring the IDs thereon closer to the reader, while the IDs contained on shorter/flatter objects may reside further from the reader. In each case, the ID should appear at sufficient resolution to be properly imaged and decoded. Disadvantageously, the image sensors relied upon by the most readily available vision system cameras are limited to pixel arrays that are dimensionally close to square (e.g. aspect ratios of roughly 1:1, or more typically 4:3, 5:4 or 16:9). This aspect ratio does not fit well with the requirements of a reading application in which objects pass on a conveyor line that is wide relative to the camera's field of view (FOV). More generally, the height of the FOV need only be slightly larger than the ID (or other region of interest), while the width of the FOV should be approximately equal to, or slightly larger than, the width of the conveyor line. In some instances, a line-scan camera can be employed to address object motion and a wide field of view. However, such an approach is unsuited to certain object geometries and line arrangements. Likewise, line-scan (i.e. one-dimensional) image sensors tend to cost more than conventional rectangular-format sensors.
In the case of larger objects and/or relatively wide conveyor lines, the lens or imager of a single ID reader may not have a sufficient field of view in the widthwise direction to cover the entire width of the line while maintaining the resolution needed to accurately image and decode an ID. Failing to image the full width can cause the reader to miss IDs that pass outside of its field of view, or pass through it too quickly. One costly way to provide the needed width is to employ multiple cameras across the width of the line, typically networked together to share image data and processes. Alternatively, a wider field-of-view aspect ratio can be obtained from one or more cameras by employing a field-of-view expander to expand the native field of view of the sensor, wherein the field-of-view expander divides the field of view into a plurality of narrower strips that extend across the width of the conveyor line. A challenge presented by such an arrangement is that the narrower section along the upstream-to-downstream direction of the moving line may require a higher frame rate to ensure that an ID is fully captured before it moves out of that section. This places demands on system processing speed, and decoding systems based upon current imagers, acquiring over a wider region, generally lack the frame rate needed to decode reliably at high object throughput speeds.
A further challenge in operating vision-system-based ID readers is that focus and illumination should be set to relatively optimal values so as to provide readable ID images for the decoding application. This requires rapid analysis of focal distance and lighting conditions so that these parameters can be automatically computed and/or adjusted. Where the field of view is wider and/or the object throughput is high relative to the imaged scene, readers based upon conventional vision systems may be unable to attain the processing speed needed to perform such functions.
In general, to provide such high-speed functionality, the imager/sensor can acquire images at a relatively high frame rate. It is therefore desirable to provide image-handling mechanisms/processes that efficiently employ the image frames in a variety of ways, so that parameters can be adjusted, and image data read, at high speed, thereby improving overall system capability.
Summary of the invention
The present invention overcomes disadvantages of the prior art by providing a vision system camera, and an associated method of operation, having a multi-core processor, a high-speed, high-resolution imager, a field-of-view expander (FOVE), an auto-focus lens, and a preprocessor connected to the imager that preprocesses image data. Together, the camera and method deliver highly desirable acquisition and processing speed, along with image resolution, across a wide range of applications. The arrangement efficiently scans objects that require a wide field of view, vary in size and in the location of useful features, and move relatively quickly with respect to the system's field of view. The physical package of the vision system provides a variety of physical interconnection interfaces to support various options and control functions. The package efficiently dissipates internally generated heat by arranging components to optimize heat exchange with the surrounding environment, and includes heat-dissipating structures (e.g. fins) that facilitate such heat exchange. The system also enables a variety of multi-core optimizations and load balancing of image processing and system operation (e.g. auto-adjustment tasks).
In an illustrative embodiment, a vision system includes a camera housing that contains an imager and a processor arrangement. The processor arrangement includes (a) a preprocessor, interconnected with the imager, that receives and preprocesses images from the imager at a first frame rate (e.g. 200 to 300 or more images per second), and (b) a multi-core processor (having a plurality of cores) that receives the preprocessed images from the preprocessor and performs vision system tasks thereon, thereby generating results related to information in the images. Note that the term "core" as used herein should be taken broadly to include a discrete "group of cores" assigned to a particular task. Illustratively, the first frame rate is substantially higher than the second frame rate at which the multi-core processor receives images from the preprocessor. The preprocessor (e.g. an FPGA, ASIC, DSP, etc.) can also be interconnected with a data memory that buffers the images from the imager. In various processes, where a given function (e.g. auto-adjustment) does not require the entire image, a portion of the image, or partial images, can be buffered under the direction of the preprocessor. Similarly, sub-sampled image data can be buffered for use in performing tasks in certain processes that do not require a full-resolution image, such as auto-adjustment. In addition, the multi-core processor can be interconnected with a data memory that stores operating instructions for each of the cores. This memory likewise stores image data that is processed by each of the cores in accordance with a schedule. In particular, the schedule directs each image to be selectively processed by each of the cores so as to increase the resulting efficiency. The schedule can direct one or more cores to perform system tasks (also termed "system operation tasks", which are not directly related to the image processing and decoding tasks), such as auto-adjustment functions, e.g. illumination control, brightness/exposure, and focusing of the auto-focus lens. The lens can be a liquid lens or another type of variable-focus lens. The preprocessor can be constructed and arranged to perform at least such predetermined auto-adjustment operations based at least in part upon information generated by the system tasks performed by the cores. More particularly, the results generated by the cores can include decoded symbols (IDs/codes) imaged from an object.
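The schedule-driven division of labor described above — frames routed selectively to cores, with a core reservable for system tasks such as illumination control and auto-focus — can be sketched as follows. This is a toy model under assumed names (`DispatchTable`, `assign`); the patent does not specify any particular implementation:

```python
from itertools import cycle

class DispatchTable:
    """Toy model of the schedule: each incoming frame is assigned to one
    decode core in round-robin order; one core can optionally be reserved
    for system tasks (illumination, exposure, auto-focus)."""

    def __init__(self, num_cores, reserve_system_core=False):
        first = 1 if reserve_system_core else 0
        self.decode_cores = cycle(range(first, num_cores))
        self.system_core = 0 if reserve_system_core else None

    def assign(self, frame_id):
        # Each frame is selectively directed to exactly one decode core.
        return {"frame": frame_id, "core": next(self.decode_cores)}

table = DispatchTable(num_cores=4, reserve_system_core=True)
plan = [table.assign(f) for f in range(6)]
# Cores 1..3 receive frames round-robin; core 0 stays free for system tasks.
```

In an actual reader the schedule would be driven by runtime conditions (trigger rate, task mix) rather than a fixed round-robin, but the routing structure is the same.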
In an illustrative embodiment, the camera assembly lens can be optically connected to a FOVE, which divides the image received by the imager into a plurality of widthwise partial images along an extended width. These partial images can be stacked vertically on the imager and include a widthwise overlap. The overlap can appear in each partial image, and can be sufficiently wide to fully image the largest ID/code expected to be viewed, thereby ensuring that no symbol is lost to the split between fields of view. Illustratively, each partial image is processed by a discrete core (or discrete group of cores) of the multi-core processor. To assist auto-calibration, the FOVE can include a fiducial at a known focal distance relative to the imager, located on the optical path at a position where it can be selectively or partially exposed to the imager, so that runtime image acquisition can be completed without significant interference from any of the fiducials. A self-calibration process employs the fiducial to measure the focal distance (focus) of the lens. Illustratively, the fiducial can reside on an optical component of the FOVE. Optionally, the FOVE housing supports an external illuminator, which is removably attached to the housing by interengaging alignment structures and magnets.
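The overlap requirement — that the largest expected code always falls entirely within at least one strip — can be checked with a small geometric sketch. The function names are hypothetical; the guarantee holds whenever the code is no wider than the overlap:

```python
def split_fov(total_width, num_strips, overlap):
    """Divide a wide field of view into overlapping widthwise strips
    (which the FOVE stacks vertically on the imager).  Each strip has
    width s where n*s - (n-1)*overlap = total_width."""
    strip = (total_width + (num_strips - 1) * overlap) / num_strips
    return [(i * (strip - overlap), i * (strip - overlap) + strip)
            for i in range(num_strips)]

def fully_contained(code_left, code_width, strips):
    """True if the code appears whole in at least one strip."""
    return any(lo <= code_left and code_left + code_width <= hi
               for lo, hi in strips)

strips = split_fov(100, 2, 20)   # → [(0.0, 60.0), (40.0, 100.0)]
```

A code of width 20 (equal to the overlap) fits in some strip at every position; a code of width 35 straddling the seam fits in neither, illustrating why the overlap must match the largest expected code.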
The physical package of the camera assembly is constructed from a material with good thermal conductivity (e.g. an aluminum alloy) so as to transfer heat more quickly to the surrounding environment. The processor arrangement includes an imager board, which contains the imager, and a main board, which contains the multi-core processor. The main board is biased against a side of the camera housing interior by a spring-loaded bracket assembly, thereby achieving a secure yet removable engagement, pressed closely against the inner side wall of the camera assembly housing to improve heat transfer from the main board. To further enhance heat exchange and close engagement, the main board includes a raised profile of circuit elements arranged to follow the inner profile of the interior side of the camera housing, minimizing the distance between them. The exterior of the camera assembly housing likewise includes a plurality of heat-dissipating fins for heat exchange with the surrounding environment. The housing can further support one or more external fans. The front of the housing is adapted to mount a removable lens assembly. Such a removable lens assembly can include a liquid lens, connected by a cable to a connector on a side (e.g. the front) of the camera assembly housing. Another connector is provided to control optional internal (or external) illumination. The rear of the camera includes a discrete I/O board, connected to the main board by an electronic link. The I/O board includes a plurality of externally exposed connectors for various data and control function interfaces. One such control/function is an external speed signal (e.g. an encoder signal) for a conveyor line moving relative to the field of view of the camera assembly. The preprocessor and/or the multi-core processor are constructed and arranged to perform, based upon the speed signal and a plurality of images, at least one of the following operations: (a) controlling the focus of the variable lens; (b) measuring the focal distance to an imaged object; (c) calibrating the focal distance to the line; and (d) measuring the relative speed of an imaged object. In general, the camera housing includes a front face and a rear face, each sealingly attached (using gasket seals) at a respective seam to an opposing end of the body. Optionally, one (or both) of the front and rear faces includes, at its seam with the body, a ring made of a translucent material, constructed and arranged to be illuminated in one of a plurality of predetermined colors, so as to provide the user with an indicator of a corresponding system status. For example, the ring can illuminate green for a good (successful) ID read and red for no (failed) ID read.
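Operations (b) and (d) above rest on simple kinematics relating encoder pulses and inter-frame feature motion to line and object speed. A minimal sketch, with assumed parameter names and scale factors (none of these values come from the patent):

```python
def line_speed_mm_s(pulse_count, pulses_per_mm, interval_s):
    """Convert encoder pulses observed over an interval into line speed
    (the external speed signal delivered through the I/O board)."""
    return (pulse_count / pulses_per_mm) / interval_s

def object_speed_mm_s(feature_shift_px, mm_per_px, frame_interval_s):
    """Estimate the relative speed of an imaged object from the pixel
    shift of a tracked feature between two frames (operation (d))."""
    return feature_shift_px * mm_per_px / frame_interval_s

# An object riding on the conveyor should yield roughly the encoder speed;
# a mismatch suggests the tracked feature is not at the calibrated height,
# which is one way such comparisons can feed focal-distance correction.
```

Comparing the two estimates is only an illustration of how an encoder signal and image tracking could be combined; the patent leaves the exact correction procedure to the flow diagram of Fig. 27.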
In one embodiment, based upon the identification of useful features (e.g. symbols/IDs/codes) by the preprocessor, the preprocessor can be adapted to selectively transmit images from a buffer memory to the multi-core processor for further processing by the cores of the multi-core processor.
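The preprocessor's selective forwarding can be approximated by a crude feature gate: only frames showing barcode-like edge activity are passed from the buffer to the multi-core stage. The edge-counting heuristic below is purely illustrative and stands in for whatever feature finder the FPGA actually runs:

```python
def looks_like_symbol(frame, edge_threshold=40, min_edges=4):
    """Crude presence test: count strong horizontal intensity transitions
    (barcode-like edges) in a grayscale frame given as a list of rows."""
    edges = sum(abs(row[i + 1] - row[i]) > edge_threshold
                for row in frame for i in range(len(row) - 1))
    return edges >= min_edges

def forward_candidates(buffered_frames):
    """Send only feature-bearing frames onward to the multi-core stage."""
    return [f for f in buffered_frames if looks_like_symbol(f)]

bar_like = [[0, 255, 0, 255, 0, 255]]   # many sharp transitions
flat     = [[10, 12, 11, 10, 12, 13]]   # near-uniform background
```

The payoff of such a gate is that the (comparatively slow) multi-core stage never spends cycles on empty frames, which is precisely the division of labor the high-frame-rate preprocessor enables.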
In an illustrative embodiment, a method for processing images in a vision system includes capturing images with the imager of a vision system camera at a first frame rate, and transmitting at least a portion of the images to a multi-core processor. The transmitted images are processed so as to generate, in each of a plurality of cores of the multi-core processor according to a schedule, results that contain information related to the images. The processing step can further comprise the steps of identifying, in at least one of the plurality of cores, transmitted images that contain a symbol, and performing decoding on the symbol-containing images in another of the plurality of cores, so that one core identifies the presence or absence of a symbol (and optionally provides other information related to the symbol, such as its resolution and symbol type), and another core decodes the identified symbol. Optionally, the processing step can include the step of performing image analysis on the transmitted images in at least one of the plurality of cores so as to identify images having sufficient features to allow decoding. In other words, this core determines whether an image is sufficiently clear to be usable for decoding. Another core performs the step of decoding on the images having sufficient features, thereby discarding unusable images before attempting to locate and/or decode a symbol. In one embodiment, the step of decoding is performed on a transmitted image using a first decoding process (e.g. algorithm) in at least one of the plurality of cores, and using a second decoding process in another of the plurality of cores, so that decoding can succeed in at least one of the decoding processes. Illustratively, the step of decoding can require that an image be decoded in at least one of the plurality of cores, and, after a predetermined time interval, if (a) the image has not yet been decoded, and (b) it appears that the image would be decodable given more time, the image continues to be decoded in another of the plurality of cores. Alternatively, if after the time limit it appears likely that spending more time would yield a successful decode, the system can allow that core to continue decoding and assign the next image to a different core. In a further embodiment, where multiple image frames contain symbols of multiple types (e.g. 1D codes and 2D codes), the system can provide load balancing. The cores divide the images in a manner that spreads the relative load of one-dimensional (1D) and two-dimensional (2D) codes evenly across the cores.
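The 1D/2D load-balancing idea — spreading expensive 2D decodes and cheap 1D decodes evenly over the cores — can be sketched with a greedy least-loaded assignment. The relative cost figures are assumptions for illustration, not values from the patent:

```python
import heapq

# Hypothetical relative decode costs: 2D matrix codes are assumed to be
# substantially more expensive to decode than 1D barcodes.
COST = {"1D": 1, "2D": 4}

def balance(codes, num_cores):
    """Greedy load balancing: each code found in the image frames goes
    to the currently least-loaded core (min-heap of (load, core_id))."""
    heap = [(0, core) for core in range(num_cores)]  # sorted => valid heap
    plan = {core: [] for core in range(num_cores)}
    for code in codes:
        load, core = heapq.heappop(heap)
        plan[core].append(code)
        heapq.heappush(heap, (load + COST[code], core))
    return plan

plan = balance(["2D", "1D", "1D", "2D", "1D", "1D"], 2)
# Both cores end up with a total load of 6 (one 2D plus two 1D each).
```

A real scheduler would also weigh image size and symbology, but the greedy rule already demonstrates the even division of 1D/2D load the embodiment calls for.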
In a further embodiment, cores can be allocated to non-decoding system tasks based upon the current trigger rate. A trigger rate below a threshold allows a core to be used for system tasks, such as auto-adjustment, while a higher trigger rate indicates that the core should be used for decoding (e.g. generating results related to image information). As described above, the various core-allocation processes can be intermixed during operation of the vision system, and processing resources (cores) can be reallocated for a variety of purposes.
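The trigger-rate rule above amounts to a simple threshold policy. A sketch with an assumed threshold value (the patent does not specify one):

```python
def allocate_cores(trigger_rate_hz, num_cores, threshold_hz=20.0):
    """Dynamic split between decoding and system tasks: at low trigger
    rates a spare core runs auto-adjustment (illumination, exposure,
    focus); at high rates every core decodes."""
    if trigger_rate_hz < threshold_hz:
        return {"decode": num_cores - 1, "system": 1}
    return {"decode": num_cores, "system": 0}
```

Because the policy is re-evaluated as the trigger rate changes, cores naturally drift between decoding and housekeeping duties as line conditions vary.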
Description of the drawings
The specification reference attached drawing of the present invention below, wherein:
Fig. 1 is a diagram of a vision system arranged relative to an exemplary moving conveyor line carrying objects of various sizes and shapes, each containing an ID or other symbol, in which each object passes through the field of view of the system, according to an illustrative embodiment;
Fig. 2 is a block diagram of circuitry for acquiring and processing image data, and for controlling various system functions, according to an illustrative embodiment;
Fig. 3 is a front perspective view of the vision system camera assembly according to the illustrative embodiment of Fig. 1;
Fig. 4 is a rear perspective view of the vision system camera assembly according to the illustrative embodiment of Fig. 1;
Fig. 5 is a side cross-sectional view of the vision system camera assembly taken along line 5-5 of Fig. 3;
Fig. 5A is a rear cross-sectional view of the vision system camera assembly taken along line 5A-5A of Fig. 3;
Fig. 6 is a front perspective view of the vision system camera assembly of Fig. 1, with the internal illumination assembly and lens removed;
Fig. 7 is a perspective view of the vision system according to the illustrative embodiment of Fig. 1, including the vision system camera assembly combined with a field-of-view expander (FOVE) and an external bar-type illuminator mounted on the FOVE;
Fig. 7A is a more detailed top cross-sectional view of the coupling arranged between the FOVE housing and the front of the camera assembly of Fig. 7;
Fig. 8 is a perspective view of the optical components of the illustrative FOVE of Fig. 7, shown with the housing removed;
Fig. 9 is a plan view of the optical components of the illustrative FOVE of Fig. 7, shown with the housing removed, while acquiring an image of a wide field of view;
Fig. 10 is a diagram of the stacking arrangement of the multiple fields of view provided by the FOVE of Fig. 7 to the imager of the camera assembly;
Fig. 11 is a front view of the FOVE of Fig. 7, with the bar-type illuminator positioned on a bracket relative to the FOVE housing, and the coupling that mates with the camera assembly of Fig. 1;
Fig. 12 is a top partial plan view of a membrane-based liquid lens assembly residing within, and controlled by, the camera assembly according to the illustrative embodiment of Fig. 1;
Fig. 13 is a rear perspective view of the internal components of the camera assembly of Fig. 1, with the outer housing body removed, detailing the "360-degree" ring indicator structure between the body and its front face;
Fig. 14 is a flow diagram of the generalized operation of a scheduling algorithm/process that allocates cores of the multi-core processor of the vision system of Fig. 1 between system operation tasks and vision system tasks;
Fig. 15 is a block diagram of a multi-core process in which an image frame is divided into a plurality of portions that are respectively assigned to a plurality of cores for processing;
Fig. 16 is a block diagram of a multi-core process in which an image frame is assigned to one core for processing, while another core performs one or more system tasks;
Fig. 17 is a flow diagram showing the dynamic allocation of cores between image processing and non-image-processing system tasks, based upon the current trigger rate;
Fig. 18 is a block diagram of a multi-core process in which the IDs/codes within each image frame are dynamically assigned to cores in a manner that more effectively balances the processing load across the overall group of cores;
Fig. 19 is a flow diagram showing the assignment of the decoding process for an ID/code to a second core after the process has exceeded a predetermined time limit on a first core;
Fig. 20 is a flow diagram showing the continued assignment of the decoding process for an ID/code to the first core after the process has exceeded a predetermined time limit on that core;
Fig. 21 is a block diagram of a multi-core process in which the IDs/codes in an image frame are assigned in parallel to two cores, each of which runs a different decoding algorithm;
Fig. 22 is a block diagram of a multi-core process in which each of a series of image frames is assigned to a different core for processing;
Fig. 23 is a block diagram of a multi-core process in which image frame data is assigned in parallel to a first core running an ID/code-finding process, and a second core running an ID/code-decoding process based upon the found-code information provided by the first core;
Fig. 24 is a block diagram of a multi-core process in which image frame data is assigned in parallel to a first core running a vision system process, and a second core running an ID/code-decoding process based upon the image information provided by the first core;
Fig. 25 is a block diagram of a multi-core process in which image frame data is assigned in parallel to a first core running an ID/code presence/absence process, and a second core running ID/code-location and decoding processes based upon the presence/absence information provided by the first core;
Fig. 26 is a block diagram of a multi-core process in which image frame data is assigned in parallel to a first core running an image analysis process, and a second core running ID/code-location and decoding processes based upon information related to image frame quality and features provided by the first core;
Fig. 27 is a flow diagram of a system process for adjusting focal distance based upon a comparison of measurements from a conveyor/line speed transducer (encoder) with the tracking of features on objects passing through the field of view of the illustrative vision system;
Fig. 28 is a flow diagram of a process that employs the preprocessor (FPGA) connected to the imager to locate useful features (IDs/codes) and to transmit apparently unique image frames containing such features to the multi-core processor for further processing;
Fig. 29 is a side view of the vision system of Fig. 1, showing a self-calibration fiducial provided to the FOVE and an optional bottom-mounted cooling fan on the vision system camera assembly;
Fig. 29A is a more detailed perspective view of a camera assembly including a bottom-mounted bracket and cooling fan, according to an illustrative embodiment;
Fig. 29B is an exploded perspective view of the camera assembly, bracket and cooling fan of Fig. 29A;
Fig. 30 is a flow diagram of a system process for calibrating out nonlinearity in the curve of lens drive current versus focal distance/optical power;
Fig. 31 is a flow diagram of a system process for measuring focal distance according to an analysis of the feature sets in each overlap region of the images projected by the FOVE;
Fig. 32 is a flow diagram of a system process for measuring the speed of, and/or distance to, an object passing through the field of view of the vision system of Fig. 1 by means of the change in size of object features between image frames; and
Fig. 33 is a diagram of an exemplary master/slave arrangement according to an embodiment, showing a plurality of interconnected camera assemblies and illuminators.
Detailed description
I. System overview
Fig. 1 depicts a vision system 100, also termed a "machine vision system", according to an illustrative embodiment. The vision system 100 includes a vision system camera 110, which illustratively includes an integral (and/or internal) processor arrangement 114. The processor arrangement 114 allows image data acquired by the imager 112 (shown in phantom), such as a CMOS or CCD sensor, to be processed so as to analyze the information within the acquired image. The imager 112 resides on an associated imaging circuit board 113 (also shown in phantom). The processor arrangement 114 in this embodiment includes a multi-core architecture, described further below, comprising at least two separate (discrete) processing cores C1 and C2, which, according to an embodiment, can be configured on a single die (e.g. a chip). As described below, the processor 114 resides on a processor board, or "main" board, 115. Likewise, an input/output (I/O) board 117 and a user interface (UI) board 123 are separately provided for communication with, and information display to, remote devices. The functions of the imager 112 and the multi-core processor 114 are described in further detail below. In general, the processor runs a vision system process 119, which takes advantage of the multi-core processor arrangement 114, and also runs an ID-finding and decoding process 121. Optionally, all or part of the decoding process can be handled by a dedicated decoder chip, on a chip separate from the processor 114.
The camera 110 includes a lens assembly 116, which is optionally removable and replaceable with a variety of conventional (or custom) mounting-base lens assemblies. The lens assembly can be focused manually or automatically. In one embodiment, the lens assembly 116 can include an automatic-focusing (auto-focus) mechanism based upon a known system, such as a commercially available liquid lens system. In one embodiment, the mounting base can be defined by the well-known cine (or "C-mount") base geometry; other geometries, known or custom, are expressly contemplated in alternate embodiments.
As shown, an illustrative field-of-view expander (FOVE) 118 is mounted in front of the lens assembly 116. The FOVE allows expansion of the width WF of the field of view 120 that the lens assembly 116 would otherwise define at a given focal distance, to N times the original width (less the width of any overlap region, or regions, between the fields of view), while reducing the length LF of the field of view 120 to 1/N of its original length. The FOVE 118 can be implemented using a variety of arrangements, typically comprising a set of angled mirrors that divide the field of view into a series of vertically divided portions on the imager. In one embodiment, the above-described FOVE is constructed so that its outer mirrors are oriented to receive light from different widthwise portions of the scene, which can be a moving line of objects (as shown in Fig. 1). The light from each outer mirror is then directed to an associated vertically tilted inner mirror of a beam splitter, which in turn directs the light substantially in line with the optical axis of the camera through the aperture, thereby avoiding image distortion. The inner mirrors respectively direct the light from each outer mirror onto a discrete strip on the imager, with one strip vertically stacked (for example) on top of the other, and the vision system then searches and analyzes the features of the overall image. The fields of view defined by the mirrors include a widthwise overlap region of a certain size, provided to ensure that features at the center fully appear in at least one of the strips. In another embodiment, a moving mirror changes position between acquired image frames, so that the full width of the scene is imaged across successive frames. Exemplary FOVE arrangements, including those described herein, are shown and described in U.S. Patent Application Serial No. 13/367,141, entitled "SYSTEM AND METHOD FOR EXPANSION OF FIELD OF VIEW IN A VISION SYSTEM", by Nunnink et al., which is incorporated herein by reference as useful background information.
In one embodiment, the FOVE 118 is provided with a first outer mirror oriented at an acute angle with respect to the optical axis of the camera, and a second outer mirror oriented at an opposed acute angle on the opposite side of the optical axis. Taken in a direction from the vision system camera, a beam splitter is located in front of the first outer mirror and the second outer mirror. The beam splitter provides a first reflecting surface and a second reflecting surface. The first outer mirror and the first reflecting surface are illustratively arranged to direct a first field of view of the scene along the optical axis onto the imager. Similarly, the second outer mirror and the second reflecting surface are illustratively arranged to direct a second field of view of the scene along the optical axis onto the imager. The first field of view is at least partially separated from the second field of view in the horizontal direction at the scene. In addition, the first outer mirror, the second outer mirror and the beam splitter are arranged to project each of the first field of view and the second field of view onto the imager as strips in a vertically stacked relationship. It should be clear that a wide variety of FOVE implementations are expressly contemplated in the various embodiments herein.
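As a rough numerical illustration of the geometry described above (an editor's sketch, not part of the patented design), the expanded field can be modeled as N stacked strips that each contribute the native lens width, less the overlap shared at each seam, while the usable height shrinks by the same factor N:

```python
def expanded_fov(width, length, n_strips, overlap):
    """Approximate field of view after an N-way FOVE split.

    The expander divides the imager into n_strips stacked bands, so the
    usable width grows to roughly n_strips * width minus the overlap
    consumed at each of the (n_strips - 1) seams, while the usable
    length (height) shrinks to 1/n_strips of the original.
    Illustrative model only; all parameter names are the editor's.
    """
    new_width = n_strips * width - (n_strips - 1) * overlap
    new_length = length / n_strips
    return new_width, new_length

# E.g., a 2-strip FOVE with a 10-unit seam overlap nearly doubles the width:
print(expanded_fov(100, 60, 2, 10))  # (190, 30.0)
```

This mirrors the statement that width WF becomes N times the original (less any overlap regions) while length LF becomes 1/N of the original.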
The FOVE renders the field of view sufficient to image objects 122, 124 (for example, boxes) moving at a speed VL on a conveyor line 126 relative to the camera assembly 110, so as to properly acquire features of interest (such as barcodes 130, 132, 134). By way of example, the width WF of the field of view 120 is extended to a width WL that approximately matches that of the conveyor line 126. In alternate embodiments, it is contemplated that the objects remain stationary while the camera assembly moves relative to the objects on a track or other appropriate structure (such as a robot manipulator). For example, two objects 122 and 124 with differing heights HO1 and HO2, respectively, pass through the field of view 120. As described above, the height difference is one factor that typically requires the camera assembly to change focal distance. As objects move more rapidly through the field of view 120, the ability to change focus more quickly becomes highly desirable. Likewise, the ability to more quickly identify features of interest and process those features with the vision system processor 114 becomes highly desirable. It is expressly contemplated that a plurality of vision system camera assemblies, with cooperating FOVEs, illuminators and other accessories, can be employed to image objects in the scene. For example, a second vision system 180 (shown in phantom) is provided to image the opposing side of the objects. As shown, this additional vision system 180 is linked (via connection 182) to the above-described system 100. This allows common image data and synchronized acquisition and illumination triggers, along with other functions (such as a master-slave arrangement of the interconnected camera assemblies, as described below). In accordance with the various multi-core processes described below, each camera assembly can process image data independently, or can carry out some or all of the processes in the cores of the interconnected camera assemblies. The number, placement and operation of further vision systems is highly variable in various embodiments.
II. The Electronic Section of the System
With reference to Fig. 2, the circuit wiring and functions of the imager circuit board 113, main circuit board 115, I/O circuit board 117 and UI circuit board 123 are described in further detail. As shown, the imager 112 resides on the imager board 113 and can comprise a commercially available two-megapixel CMOS grayscale unit, such as the model CMV2000 from CMOSIS of Belgium. Other types and sizes of imagers, including higher- or lower-resolution imagers, color imagers, multi-spectral imagers, and the like, can be provided in alternate embodiments. Via control and data connections, the imager is operatively connected to an FPGA 210 (or another programmable circuit), which carries out image-processing processes in accordance with the illustrative embodiments described below. For the purposes of this description, the FPGA, or equivalent high-speed processing logic such as an ASIC, DSP or the like, can be termed an "imager-interconnected" "pre-processor" that performs initial-stage processing and/or certain auto-regulation functions on the stream of image frames received from the imager. In turn, while an FPGA is shown by way of example, any programmable or non-programmable processing logic (or plurality of logics) that performs the required pre-processing functions is expressly contemplated for use as the "pre-processor". An exemplary pre-processor circuit is the ECP3 family of FPGAs, available from Lattice Semiconductor of Hillsboro, Oregon. The FPGA 210 interconnects with an appropriately sized non-volatile memory 212 (Flash), which provides configuration data to the FPGA. The FPGA 210 also controls the optional internal illumination 214 (described further below) and an optional variable (e.g. liquid) lens assembly 216 that provides fast auto-focus to the camera lens assembly. Likewise, the pre-processor described herein is adapted to perform certain functions including, but not limited to, auto-regulation, image data conversion and storage of acquired image data; various additional processes directly related to processing the information in images (such as vision system processes) can also be performed by the pre-processor, for example feature-finding and the like. More generally, the high frame rate of the imager makes the use of such a high-speed processor desirable (in various embodiments) to operate the initial processes on the acquired image frames.
One type of rapidly operating liquid lens assembly is the EL-6-18-VIS-LD membrane-based liquid lens, available from Optotune AG of Switzerland. In addition to high-speed operation, this lens illustratively defines a 6-millimeter aperture, rendering it highly suitable for wide-area imaging and high-speed operation. This illustrative variable lens package has a size of 18 × 18.4 × 8.9 (thickness) mm. The controlled current is between approximately 0 and 200 mA. Response time is typically less than 2 milliseconds and its settling time is typically less than 10 milliseconds. With this liquid lens integrated into an illustrative lens assembly, the overall lens assembly provides a field of view of approximately 20 degrees and a focus tuning range of approximately 60 millimeters to infinity. In operation, the EL-6-18-VIS-LD is a shape-changing lens. It consists of an injection-molded container that is filled with an optical liquid and sealed with an elastic polymer membrane. The deflection of the lens is proportional to the pressure in the liquid. The EL-6-18 employs an electromagnetic actuator that exerts pressure on the container. Hence, the focal distance of the lens is controlled by the current flowing through the coil of the actuator. The focal distance decreases with increasing applied current.
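The current-to-focus relationship just described can be sketched with a hypothetical linear model (an editor's illustration only; `power_at_zero` and `gain` are invented constants, not datasheet values): coil current raises liquid pressure, which raises optical power, which shortens focal length.

```python
def lens_power_for_current(current_ma, power_at_zero=0.0, gain=0.05):
    """Hypothetical linear model of a current-driven liquid lens.

    The electromagnetic actuator raises the pressure in the lens roughly
    in proportion to coil current, so optical power (diopters) rises and
    focal length falls as current increases. Constants are illustrative.
    """
    if not 0.0 <= current_ma <= 200.0:  # the stated 0-200 mA drive range
        raise ValueError("drive current must be within 0-200 mA")
    return power_at_zero + gain * current_ma

def focal_length_mm(power_diopters):
    """Focal length in millimeters for a given optical power in diopters."""
    return 1000.0 / power_diopters
```

Under this model, doubling the current from 100 mA to 200 mA doubles the optical power and halves the focal length, consistent with the qualitative behavior stated above.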
A temperature sensor 218 is arranged in association with the lens to monitor the operating temperature in the vicinity of the lens. This allows adjustment of the liquid lens based upon temperature, as well as other temperature-dependent parameters and functions. The temperature sensor resides on an I2C bus 220, which also controls the internal illumination 214 and the liquid lens using appropriate control signals, which are specified by the lens manufacturer. As described below, additional temperature sensors can be provided on one or more of the circuit boards (e.g. sensor 288) to monitor the temperature state of various components of the system. As shown, the bus 220 interconnects with the multi-core processor 114 on the main board 115. Likewise, the FPGA 210 is tied to the processor 114 via a serial peripheral interface (SPI) bus 224 and a PCIe bus 226, which respectively carry control and data signals between the units. Illustratively, the SPI bus 224 interface (interconnection) between the FPGA 210 and the processor 114 is used by the processor 114 to configure the FPGA during system startup. Subsequent configuration, image data and other system data communication is carried over the PCIe bus 226. The PCIe bus can be configured as two lanes (2X).
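The temperature-based lens adjustment mentioned above can be sketched as a first-order correction applied to the commanded drive current (an editor's illustration; the coefficient and reference temperature are invented, and the actual correction is specified by the lens manufacturer):

```python
def compensated_current(base_ma, temp_c, ref_temp_c=25.0, coeff_ma_per_c=0.2):
    """Apply a first-order temperature correction to the lens drive current.

    The sensor beside the lens reports its operating temperature; a simple
    controller can then trim the commanded current around a reference
    temperature, clamped to the lens's 0-200 mA drive range. All constants
    here are illustrative, not manufacturer values.
    """
    corrected = base_ma + coeff_ma_per_c * (temp_c - ref_temp_c)
    return min(200.0, max(0.0, corrected))

# At the reference temperature the command passes through unchanged:
print(compensated_current(100.0, 25.0))  # 100.0
```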
The FPGA 210 is also interconnected, via a 16-bit connection, with a 64 MB data memory 228, which allows buffering of image data in support of the imager's high frame rate at the imager-board level; such image frames are then available for downstream image processing or auto-regulation functions as described below. In general, a portion of the auto-regulation may require the use of lower-resolution images. In turn, a sequence of acquired images can be stored in the memory 228 at lower resolution (in accordance with FPGA functions), while higher-resolution images are transmitted to the processor 114 for the processes described below. The memory 228 can be of any acceptable type, such as DDR3 dynamic random access memory. Alternatively, another memory type, for example static random access memory (SRAM), can be employed. Appropriate supply voltages 230 for the various imager-board components are also provided, derived from an external voltage source (typically 120-240 VAC wall current with appropriate transformers, rectifiers, and the like).
The FPGA 210 is also illustratively connected by a link 232 to an external illumination control connector 234, which resides on the I/O board 117 and is exposed at the exterior of the rear housing of the camera assembly 110. Likewise, the link 232 interconnects a synchronization trigger connection 236 on the I/O board 117 with the FPGA, so that image acquisition (including illumination triggering) can be synchronized with other interconnected camera assemblies. Such interconnection can occur where multiple camera assemblies simultaneously image multiple sides of a box, and/or where boxes pass a plurality of relatively adjacent stations along the conveyor line. Synchronization avoids crosstalk between illuminators and other undesirable effects. In general, it is noted that in this embodiment the various image-acquisition functions and/or processes, including internal/external illumination, focus and brightness control, are all controlled directly by the fast-running FPGA process 245. This allows the main-board processor 114 to concentrate its operation on vision system tasks and decoding of image data. In addition, synchronization of acquisition also allows multiple camera assemblies to share a single illuminator or illuminator group, because the illuminator (or illuminators) is triggered for each respective camera as that camera acquires an image frame.

It is noted that an appropriate interface can be provided for an external trigger. Such an external trigger allows the camera assembly to be gated, so that image acquisition occurs while a moving object is within the field of view. This gating avoids acquiring unnecessary images of the space between objects on the conveyor line. A photodetector or other switching device can be used to provide the gating signal in accordance with conventional techniques.
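The gating behavior just described amounts to discarding frames captured while no object is present; a minimal sketch (editor's illustration, with the detector signal modeled as a parallel boolean sequence) follows:

```python
def gated_frames(frames, object_present):
    """Keep only frames captured while the part-presence detector is asserted.

    frames and object_present are parallel sequences; a photodetector (or
    similar switching device) asserts object_present while a box is in the
    field of view, so frames from the gaps between boxes are discarded.
    """
    return [f for f, present in zip(frames, object_present) if present]

# Frame "b" falls in the gap between two boxes and is dropped:
print(gated_frames(["a", "b", "c", "d"], [True, False, True, True]))
# ['a', 'c', 'd']
```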
The FPGA 210 provides certain pre-processing operations on the images to improve the speed and efficiency of image-data handling. Image data is transferred serially from the imager 112 to the FPGA. All or part of the data can be stored temporarily in the data memory 228 so that various FPGA operations can analyze it. The FPGA 210 converts the serial image data, using conventional techniques, to the PCIe protocol so as to be compatible with the processor's data bus architecture, and transmits it over the PCIe bus 226 to the processor 114. The image data is then sent directly to the data memory 244 for subsequent processing by the processor cores C1 and C2. The use of multiple cores allows a number of desirable, efficiency-enhancing operations in the handling of image data, described in further detail below.
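One plausible way to exploit the two cores named above is to alternate buffered frames between them so that each core decodes roughly half the stream independently; this round-robin scheme is an editor's sketch, not necessarily the scheduling actually used in the device:

```python
from itertools import cycle

def assign_frames_to_cores(frames, cores=("C1", "C2")):
    """Round-robin assignment of buffered frames to processor cores.

    Each core can independently process its share of the data held in
    memory, so alternating incoming frames between C1 and C2 keeps both
    cores busy. Illustrative scheduling policy only.
    """
    schedule = {}
    core_iter = cycle(cores)
    for frame in frames:
        schedule.setdefault(next(core_iter), []).append(frame)
    return schedule

# Five frames split between the two cores:
print(assign_frames_to_cores([1, 2, 3, 4, 5]))
# {'C1': [1, 3, 5], 'C2': [2, 4]}
```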
The FPGA 210 is also programmed (e.g. FPGA process 245) to analyze the acquired image data so as to perform specific system auto-regulation operations, such as automatic brightness control (e.g. auto-exposure) and automatic focus control (e.g. when the liquid lens assembly 216 is employed). Typically, where the focal distance changes, for example when objects of differing height are encountered, both brightness and focus tend to require adjustment. In general, these operations require a higher image acquisition rate from the imager 112 (e.g. acquisition at a rate of approximately 200-300 image frames per second) to allow the additional operations on the image data, while maintaining a net decode rate at the processor 114 of at least 100 frames per second. That is, some images are processed in the FPGA while others are transferred to the memory on the main board 115 for vision system processing (such as finding IDs and decoding found IDs in images), without compromising the maximum frame rate of the processor. More generally, the data memory 228 buffers the acquired image frames (from the surplus of available image frames afforded by the high frame rate), using some frames for the auto-regulation functions of the FPGA 210 while transmitting others to the processor 114 for further processing. This division of labor between the FPGA 210 and the processor 114 facilitates a more optimal use of efficiency and system resources.
In various embodiments, the FPGA 210 and memory 228 can be adapted to receive a "burst" of image frames at a high acquisition frame rate, to use a portion of the frames in the "burst" for performing auto-regulation, and to transmit other frames to the processor at a speed suited to the processor's processing speed. The high volume of image frames acquired in a "burst" (for example, while an object is in the field of view) can be fed out to the processor 114 during the interstitial period before the point in time at which the next object reaches the field of view, whereupon the next "burst" is triggered, acquired, stored and transferred to the processor 114.
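The buffering policy above can be sketched as a simple partition of a burst: every Nth frame stays on the imager board for brightness/focus auto-regulation, and the remainder is forwarded to the multi-core processor during the interstitial period. The 1-in-3 ratio is the editor's invention for illustration; the actual split would depend on acquisition and decode rates:

```python
def split_burst(burst, keep_for_autoadjust_every=3):
    """Partition a burst of frames between FPGA auto-regulation and decoding.

    Of a high-rate burst (e.g. 200-300 fps), every Nth frame is retained
    for auto-adjust while the rest are queued for transfer to the
    processor at a rate it can sustain. Ratio is illustrative only.
    """
    autoadjust, forward = [], []
    for i, frame in enumerate(burst):
        if i % keep_for_autoadjust_every == 0:
            autoadjust.append(frame)
        else:
            forward.append(frame)
    return autoadjust, forward

# A six-frame burst: two frames kept for auto-adjust, four forwarded.
print(split_burst(list(range(6))))  # ([0, 3], [1, 2, 4, 5])
```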
As used herein, the terms "process" and/or "processor" should be taken broadly to include a variety of electronic-hardware-based and/or software-based functions and components. Moreover, a depicted process or processor can be combined with other processes and/or processors, or divided into various sub-processes or sub-processors. Such sub-processes and/or sub-processors can be variously combined in accordance with the embodiments herein. Likewise, it is expressly contemplated that any function, process and/or processor herein can be implemented using electronic hardware, software consisting of a non-transitory computer-readable medium of program instructions, or a combination of hardware and software.
With reference to the main board 115 of Fig. 2, the multi-core processor 114 is shown. Various types, brands and/or configurations of processor can be employed to carry out the teachings of the embodiments herein. In an illustrative embodiment, the processor 114 comprises a dual-core DSP, such as the model 6672 available from Texas Instruments of Dallas, Texas. For the purposes of the vision system applications contemplated herein, this processor 114 operates at sufficient speed and with acceptable cost-performance. The term "multi-core" as used herein shall refer to two (i.e. "dual-core") or more discrete processors that are instantiated on a single die and/or packaged within a single board-mounted circuit chip. Each core is generally capable of independently processing at least a portion of the data stored in the memory 244. The processor 114 interconnects with a non-volatile memory 240 that contains appropriate boot configuration data. This enables basic operation of the processor at camera system startup, including the loading of any program code and/or operating system software. The program code/operating system software is stored in a program memory 242, which can be configured using a variety of solid-state memory devices. In an illustrative embodiment, a NOR Flash memory with 32 MB capacity and a 16-bit interface is employed. On startup, program code is loaded from the flash program memory 242 into the data memory 244. The image data operated upon by the processor, and other data, is also stored in the data memory 244, and can be flushed from the data memory when system processes no longer require it. Memories of various types, sizes and configurations can be employed. In one embodiment, the memory is a 256 MB DDR3 dynamic random access memory with a 64-bit interface.
Other conventional circuits used to drive the processor and provide other functions (for example, code-error exclusion) are also provided on the main board 115 and interconnected with the processor 114. These circuits can be configured in accordance with conventional techniques, and can include a core voltage regulator 246 (for example, the model UCD7242 from Texas Instruments), an LVDS clock generator 248 (such as the model CDCE62005 from Texas Instruments) and a sequencing microcontroller 250 (such as the PIC18F45 from Microchip Technology Inc. of Chandler, Arizona). JTAG interfaces 252 (e.g. 60-pin and 14-pin) are also interconnected between respective ports on the processor 114 and the sequencing microcontroller 250. Appropriate voltages (e.g. 1.5V, 1.8V, 2.5V and 6.2V) are provided to the various circuit components of the main board 115 by a voltage source 254 on the I/O board, which is connected with a regulator 260 (e.g. a 24V-to-3.3V regulator). External power is thereby received from a supply (e.g. a 24V wall transformer) via an appropriate cable 262. The main board 115 and the coordinated processor 114 connect, via a UART carried on the processor, to an RS-232-compliant serial connector 266 located at the exterior of the housing on the I/O board. This port can be used to control external functions, such as alarms, conveyor-line shutoff switches, and the like. The processor further includes a serial gigabit media-independent interface (SGMII) that connects, via a physical-layer chip 268 and a gigabit Ethernet transformer 270, to the Ethernet port of the rear housing. This allows image data and other control information to be transmitted over a network to a remote computer system. Via an interfaced computer and an appropriate user interface (e.g. a web-based graphical user interface/one or more browser screens), the user is also allowed to program the functions of the system. In various embodiments (not shown), the camera assembly can optionally also be provided with a wireless Ethernet connection, other wireless communication, and the like.
The processor SPI bus 224 connects to an appropriate ATTINY microcontroller 272 (e.g. available from the Atmel Corporation of San Jose, California), which interfaces, using conventional techniques, with a 4x optical input (4X OPTO IN) 274 and a 4x optical output (4X OPTO OUT) 276. These interfaces provide "slow" I/O operations, including external strobe trigger input, good-read output and bad-read output, encoder input (e.g. registering motion pulse counts on a moving conveyor line), object detection, and various other I/O functions. The bus 224 also connects to a further ATTINY microcontroller 280 on the UI board 123. This microcontroller is connected to user interface (UI) devices on the exterior of the camera assembly rear housing. Such devices include, but are not limited to, a tone generator 282 (e.g. a beeper), one or more control buttons 284 and one or more indicator lights 286 (e.g. LEDs). These devices allow the user to perform various functions, including vision system training, calibration and the like, and to receive system operating status, which can include ID-read on/off functions, fault alerts, read success/failure, etc. A common status indicator (LED) can be associated with trigger-on, trigger-off, encoder and object-detection states. Other optional interface devices (not shown), such as a display screen and/or alphanumeric display, can also be provided. The I/O board 117 includes an appropriate temperature sensor to monitor internal temperature.
It should be clear that the placement and positioning of components on each of the various boards, and the functions of those components, are highly variable. It is expressly contemplated that more or fewer circuit boards can be employed in various embodiments. Likewise, some or all of the functions of a plurality of components can be consolidated into a single circuit, or some or all of the functions of a given component can be divided among a plurality of circuits on one or more boards. In addition, the components, interconnections, bus architectures and functions described in Fig. 2 are only exemplary of a wide range of wiring arrangements that can perform equivalent functions. Alternative wiring layouts with similar or identical functionality should be clear to those of skill in the art.
III. Physical Package
Having described the arrangement of electronic components on the various circuit boards of the camera assembly, and their respective interconnections and functions, reference is now made to Figs. 3-7, which depict the physical arrangement of the camera assembly 110. Figs. 3-6 depict the camera assembly 110 according to one embodiment, with a conventional lens 310 and an annular internal ring illumination assembly 320. Fig. 7 is a more detailed external view of the camera assembly 110 with the optional FOVE attachment 118 described in Fig. 1.
The housing 330 of the camera assembly 110 is constructed from a material having appropriate rigidity and heat-transfer characteristics. In an illustrative embodiment, an aluminum alloy (e.g. 6061) can be used to construct some or all of the housing. The body 332 is also provided with integrally formed longitudinal fins 339 around its perimeter to further assist heat transfer. The housing 330 consists of three main parts: the body 332, the front 334 and the rear 336. The body 332 is a unitary piece with an open interior. The front 334 and rear 336 are each secured to opposing ends of the body by screws seated in holes 338 and 410. The front 334 and rear 336 are compressed against the ends of the body to form a gas-tight seal, which protects the internal electronic components from contact with dust, moisture and other contaminants that may be present in the manufacturing process or another operating environment. A gasket 510 (e.g. an O-ring; see Fig. 5) is seated at each respective end of the body 332 to compressively seal against the front 334 and rear 336. It is noted that the body can be formed as an extruded structure, with counterbores formed through appropriate holes and other machined shapes applied to the exterior and interior.
As shown in Fig. 5, the imager board and the associated imager 112 are secured against the front 334, with the imager perpendicular to the optical axis OA defined by the lens assembly 310. In this embodiment, a fixed lens assembly 310 is employed, having conventionally configured front and rear convex lenses 512 and 514. For example, the lens assembly is a 16 mm lens assembly with a C-mount base. It is threaded into the camera assembly lens base 520, which extends from the front 334. Other lens models and mount configurations are expressly contemplated in the alternate embodiments described below.
The lens is surrounded by a wheel-shaped internal ring illumination assembly 320, which has an outer ring 524 carrying an illumination circuit board 526 at its front end. The circuit board 526 is supported on three posts 528 arranged in a triangular orientation about the optical axis OA. In this embodiment, illumination is provided by eight high-output LEDs 530 with associated lenses 532 (e.g. OSRAM Dragon LEDs). The LEDs operate at selected, discrete visible and/or near-visible (e.g. infrared) wavelengths. In various embodiments, different LEDs operating at different wavelengths can be selected by the illumination control process. For example, some LEDs can operate at a green wavelength while others operate at a red wavelength. Reference is made to Fig. 6, in which the illumination assembly 320 has been removed to expose the front face 610 of the camera assembly 110. The front face 610 includes a pair of multi-pin connectors 614 and 616, located on the imager board and analogous to the depicted components 214 and 216 of Fig. 2. That is, the 5-pin connector 614 is interconnected via a cable (not shown) with the illumination board 526. The 8-pin connector 616 connects control and power for the optional liquid lens assembly described below. The front face 610 also includes three bases 620 (which can be threaded) to support each illumination circuit board post 528. The threaded C-mount base 520 is also visible. It is noted that the internal illumination assembly 320 is an optional implementation for the vision system camera assembly. In various embodiments herein, the internal illumination assembly can be omitted and substituted with one or more external illumination assemblies or, in some special cases, ambient illumination.
Referring particularly to the sectional view of Fig. 5, the imager board is connected by a ribbon cable 550 to the main board 115, which illustratively rests against the top side of the body interior. In this position the main board exchanges heat with the body 332 and the associated fins 339, allowing for better heat transfer. The main board 115 can be mounted using fasteners or, as shown, mounted using a bracket assembly 552 that engages the underside of the main board 115 at a position free of interference with on-board circuit components. The bracket 552 includes a lower extension 553 with a hole that telescopically rides over a vertical post 555 extending upwardly from a base 554. The base 554 is seated on the bottom side of the housing body 332. The bracket 552 is biased upwardly by a compression spring 556 seated between the underside of the bracket and the base 554, surrounding the extension 553 and the post 555. This mechanism allows the board to be inserted or removed by adjusting the position of the bracket 552 relative to the base 554. That is, to install the board 115, the user depresses the bracket 552 against the biasing force of the spring 556, slides the board 115 into the interior of the body 332, and then releases the bracket 552 so that it engages the board 115 under pressure and maintains it in position against the top of the body interior. Removal is the reverse of this process. The board 115 is held firmly in engagement against the body 332 by the spring 556, ensuring adequate heat exchange. In various embodiments, the main board 115 can also include an on-board heat sink that is connected to the body 332. Likewise, thermally conductive paste or another heat-transfer material can be placed between the contacting portions of the board 115 (e.g. the processor 114) and the inner surface of the body 332. Referring briefly to Fig. 13, as described below, the top side of the main board 115 can include a thermal gap pad 1330 that fills the gap between the top of the board 115 and the inner surface of the body.
More generally, and referring also to Fig. 5A, the inner surface 580 of the body 332 is contoured relative to the profile of the main board 115 so that it closely conforms to the protrusions, surface-mounted components and circuit-element shapes on the main board 115, with those components mounted so as to accommodate the shape of the body. That is, taller components are placed close to the longitudinal centerline, where the body presents a higher profile, while shorter components are placed along either side of the longitudinal axis of the main board. More generally, components are divided into a plurality of height zones in accordance with the interior geometry of the body. Where certain circuit elements tend to be large or tall (e.g. capacitors), they can be divided into two or more smaller components having the same collective electronic value as the single large component. A thermal gap filler (e.g. a pad or another medium) is provided between the board and the interior top, and this placement of components, based on the interior geometry of the body, ensures that the distance between the body and both short and tall components is minimized. Illustratively, as shown, the multi-core processor is arranged to directly contact the interior of the body (typically with a layer of thermally conductive paste therebetween), such that the body acts as an effective heat sink for the processor. As also shown, the main board 115 is indexed laterally relative to the bracket 552 via posts 582 passing through holes in the board. This ensures that the bracket and the board maintain a predetermined alignment relative to the body. It is noted that while cooling is passive in the depicted embodiment, one or more fan units can participate in cooling the interior or exterior of the housing in further embodiments. In particular, four mounting holes 588 can be arranged along the bottom of the body 332 (two of which are shown in phantom in Fig. 5). In this embodiment, these holes 588 receive a conventional 60x60 mm computer fan. Optionally, as described below, the holes 588 can receive an intermediary bracket used to mount a fan, and/or other fan mechanisms/sizes are expressly contemplated. A connector can be provided on the housing, or an external plug can be used, to connect an appropriate voltage adapter and power the fan (or fans). In addition, auxiliary cooling mechanisms (e.g. liquid cooling) can be used in alternate embodiments. In general, the system is designed to run with ambient cooling at up to approximately 40 degrees C. However, for use in certain environments where the operating temperature can exceed this value, at least one cooling fan can be employed.
As shown in Fig. 5, the I/O board 117 is mounted against the rear 336 of the camera assembly housing 330. The I/O board 117 is connected by a ribbon cable 560 to the rear end of the main board 115. The various rear connectors 420, 422, 424, 426 and 428 (see Fig. 4), whose functions are described with reference to Fig. 2, extend from the rear side of the I/O board 117. The I/O board is likewise interconnected with the UI board 123 via a ribbon cable 570. As shown, the UI board is exposed to the user along the angled top surface 440 of the rear 336. In alternate embodiments, the arrangement and position of the body and/or the internal circuit boards can be varied.
Referring to the more detailed sectional views of Fig. 7 and Fig. 7A, the FOVE 118 is shown attached by a coupling 710 that includes a removable L-bracket 712 at the front of the camera assembly. The bracket 712 includes a vertical plate 714, which faces the camera front face 334 and is secured thereto with fasteners, and a horizontal plate 716, which is adapted to allow further mounting brackets and support structures to be secured to it. The bracket 712 of the coupling 710 can also be used to mount a removable illuminator 750, as described below. The FOVE housing 730 is supported relative to the camera assembly by a set of four posts 732, which are secured to the base bracket on the camera side and to the rear wall of the FOVE housing. A flange 736 is secured to the rear of the FOVE housing 730 by appropriate fasteners or other securing mechanisms (not shown in the figure). The lens assembly 116 is covered by a cylindrical shroud 720 that extends between the front face (610) of the camera assembly 110 and the rear of the FOVE housing 730. The shroud 720 is removable and serves to seal the lens and the FOVE housing against dust and to prevent contaminants in the external environment from penetrating inside. The open frame defined by the posts 732 allows the user to access, adjust and maintain the lens assembly 116. The shroud is movable (double arrow 744) on sliding blocks 746 that engage the sliding lens cover 1692. A pair of couplings 747 containing low-friction bushings surround two (or more) of the posts 732. O-rings 748, 749 are seated, respectively, against the inner circumference of the flange 736 and the inner periphery of the vertical plate 714 of the opposing L-bracket 712. The lens shroud 720 can be slid forward out of the depicted sealed position to expose the lens assembly 116 (shown in phantom in Fig. 7A as an exemplary lens type). A stop shoulder 754 is formed on the vertical plate 714 and defines a central orifice 756. The shoulder prevents the shroud 720 from continuing to move toward the camera assembly after it sealingly engages. Similarly, a rear stop 758 is arranged at the front end of the shroud 720 to engage the inner face of the flange 736. Sliding the shroud 720 forward moves it into the interior of the FOVE housing 730 until the sliding block engages the outer wall of the flange 736. This provides sufficient space to access the lens 1697 for adjustment and/or maintenance. The FOVE housing 730 can be constructed from a variety of materials, including various polymers, such as injection-molded, glass-filled polycarbonate, and/or composites or metals, such as aluminum. In particular, glass-filled polycarbonate minimizes the dimensional tolerances caused by shrinkage during the molding process. The front end of the FOVE housing opens onto the scene and is covered by a transparent window 740.
With further reference to Figs. 8 and 9, the housing 730 is removed in the figures and the geometry of the FOVE mirrors is shown in greater detail. In various embodiments, a variety of optical components and mechanisms can be used to provide the FOVE; in general, it is contemplated that the FOVE divides a wide image into at least two stacked images (strips), each of which occupies a portion of the imager. In this way, the image height is reduced by about 1/2 (with some overlap), while the width of each strip is the full width of the imager (likewise with some overlap). Given that the illustrative camera assembly provides dual-core processing capability and a high image acquisition rate, various processing techniques can be used to perform efficient and rapid processing of the strips (as described below). Illustratively, the FOVE 118 is based on the above-incorporated U.S. Patent Application No. 13/367,141, entitled "SYSTEM AND METHOD FOR EXPANSION OF FIELD OF VIEW IN A VISION SYSTEM", by Nunnink et al. Further embodiments of the FOVE mechanism usable with the vision system camera assembly, along with cooperating couplings and attachments, are similarly described as useful background information in the commonly assigned continuation-in-part U.S. Patent Application (docket no. C12-004CIP (119/0126P1)), entitled "SYSTEM AND METHOD FOR EXPANSION OF FIELD OF VIEW IN A VISION SYSTEM", by Nunnink et al., filed on the same date, the teachings of which are expressly incorporated herein by reference.
As shown in Fig. 8, the optical components of the FOVE include a left outer mirror 810 and a right outer mirror 812, as well as stacked, crossing inner mirrors 820 and 822. The outer mirrors 810 and 812 are tilted at different angles. Likewise, the inner mirrors 820, 822 are tilted at different angles. Referring to Fig. 9, the fields of view 910 and 912 of the respective outer mirrors 810 and 812 are shown. A slightly overlapping region OR is provided that is at least as wide as the largest useful feature to be imaged at the focal distance FD (such as the largest barcode). This ensures that a complete image of that feature appears in at least one of the two fields of view 910, 912. Each of the fields of view 910, 912 is fully reflected by its respective outer mirror onto the crossing inner mirrors 820, 822, as shown in the figure. The reflected images are then further reflected into the lens 310, with each field of view stacked vertically relative to the other (a result of the relative tilts of the mirrors 810, 812, 820, 822). Thus, as shown schematically in Fig. 10, each of the fields of view 910, 912 is projected onto the imager 112 in a respective one of a pair of stacked strip regions 1010, 1012. A relatively small vertical overlap region 1030, containing image content from both fields of view 910, 912, can be provided. The overlap in the vertical direction can be minimized by using a small aperture setting on the lens assembly, such as F:8. The dashed lines 1040 and 1042 on each strip represent the horizontal overlap of the fields of view, OR, of Fig. 9. This region is analyzed in order to obtain a complete feature (such as an ID) that may appear in full in one strip while being wholly or partly missing from the other strip.
In an exemplary embodiment, by way of example and with representative dimensions, each of the outer mirrors 810, 812 has a horizontal length OML of between 40 and 120 mm, typically 84 mm, and a vertical height OMH of between 20 and 50 mm, typically 33 mm. Similarly, the crossing inner mirrors 820, 822 illustratively have a horizontal length CML of 30-60 mm, typically 53 mm, and a vertical height CMH of 10-25 mm, typically 21 mm. In an exemplary embodiment, the overall horizontal span of the outer mirrors 810, 812 is about 235 mm, and the spacing MS between each respective outer mirror surface and its cooperating inner mirror surface (e.g. 210 and 220; 212 and 222) is about 100 mm. Based on advance measurements made with the selected camera lens 310 and appropriate focus adjustment, a single camera covers, via the FOVE mechanism, an overall extended field of view WF of about 60-80 cm at high resolution for a focal distance FD of about 35-40 mm. As shown, the FOVE divides the two fields of view 910, 912 into two stacked strips, each about 600 pixels high on the imager, which provides sufficient resolution for reliable decoding of barcode features on a fast-moving production line.
As shown in Fig. 11, the FOVE assembly permits the removable mounting of an attached bar-type illuminator 750. The position of the illuminator 750 (or multiple illuminators) relative to the FOVE housing is highly variable in further embodiments. In this embodiment, the illuminator 750 is attached to a bracket 1110 along the underside of the FOVE housing 730; the bracket 1110 extends forward from the coupling 710 (see Fig. 7). The bracket 1110 and the bar-type illuminator can be joined permanently or removably, for example using threaded fasteners (not shown) passed through the top of the bracket 1110 and into threaded holes (not shown) on the top side of the illuminator 750. The bracket can be connected to mounting holes of the L-bracket 712. Although a bar-type illuminator is described, various alternative illumination types and configurations can be used. The illuminator may include multiple light sources of multiple wavelengths, operated selectively and/or at different brightnesses, angles or ranges. In alternative embodiments, other attachment mechanisms, such as adhesive tape, hook-and-loop fasteners, screws and the like, can be used to provide a firm and removable mechanical connection between the illumination and the bracket component. For example, the commonly assigned U.S. Patent Application (docket no. C12-022), entitled "COMPONENT ATTACHED DEVICES AND RELATED SYSTEMS AND METHODS FOR MACHINE VISION SYSTEMS", by Saul Sanz Rodriguez and Laurens Nunnink, filed on the same date, is incorporated herein by reference as further background information. That application describes techniques for attaching illuminators and other optical accessories to the FOVE assembly or other vision system structures using magnetic assemblies.
It is noted that, as described herein, the use of a FOVE is one option for expanding the FOV to provide a wider aspect ratio (width relative to height). Another option, which can be used to supplement (or replace) the FOVE, is to employ an image sensor configured with, for example, a 1:4 or 1:5 aspect ratio. Such a ratio can be optimal for scanning objects moving along a wide production line. Thus, in various embodiments, the sensor for the camera assembly herein can be chosen to be a sensor with a wide aspect ratio, in which the pixel width is a multiple of the pixel height. The illustrative methods and processes for operating on image data are readily adapted to handle the data from a wide sensor, for example by operating on different regions of the sensor with different cores of the processor.
Referring now to Fig. 12, an illustrative liquid lens assembly 1210 is described according to an embodiment, for use with the camera assembly 110 and the cooperating mounting base 520. In this embodiment, the liquid lens unit 1220 (a membrane-based unit as described above) is mounted in an outer housing 1222, which accommodates the rectangular shape of the lens unit 1220 using a carrier structure 1230. Various support structures can be employed to secure the lens within the assembly 1210. The liquid lens unit illustratively includes a housing 1232 that supports a front offset lens 1240. A variable, liquid-filled membrane lens 1244 is mounted behind the offset lens 1240. This lens changes shape based on the electromechanical action of an actuator assembly 1250. The actuator assembly, a temperature sensor and other components are connected to the 8-pin connector 616 by a ribbon cable 1256, which extends from the liquid lens housing 1232 to the outside of the lens assembly housing 1222. The routing of the cable and/or the sizes/shapes of the housings and other components are highly variable. A transparent cover glass 1258 is arranged at the rear of the liquid lens unit 1220 to seal it. The received light is then transferred to appropriate fixed rear lenses 1260 supported in the housing 1222. The housing includes a mounting assembly 1270 (which can also include a lock ring, not shown in the figure) that threads the lens assembly 1210 into the mounting base 520 on the camera front face 610. The focusing of the liquid lens assembly 1210, as applied to auto-focus, is described further below.
Although not shown in the figures, any of the lens assemblies described herein may include various filters to attenuate light of certain wavelengths or to provide various effects, for example polarization. The illuminator may likewise be provided with various filters. This allows selective imaging of an object when certain types of illumination are projected and received through a filter matched to that illumination type.
It will be apparent to the skilled artisan that, according to the embodiments herein, various optional interfaces and indicators can be provided on the camera assembly. Referring in particular to Figs. 3, 4 and 5, and now also to Fig. 13, the internal components of the camera assembly are described with the front 334, the body cover 332 and the rear 336 of the housing removed. The joint between the body 332 and the rear 336 includes a ring 1310 of translucent material (acrylic or polycarbonate), which acts as a light pipe. The translucent ring 1310 can surround part of the perimeter of the joint or, as depicted, the entire circumference of the joint (e.g. a "360-degree indicator"). The ring 1310 can be fully transparent or partly transparent. Illustratively, the ring 1310 is illuminated by one of a plurality of differently colored light sources (such as LEDs, not shown in the figure), which are operatively connected to the imager circuit board 113. The light of the LEDs is guided to the ring 1310 via light pipes or other light-transmitting conduits. Depending on the color and/or timing of the illumination (e.g. one or more colors blinking at a certain rate or in a certain pattern), the ring can be used to indicate various operating states. For example, a good ID read and/or decode can be shown as green, while a no-read (e.g. failed or erroneous) ID read/decode can be shown as red. A blinking red can indicate a system fault. Other colors, such as yellow, can also be included for various indications. The ring provides a unique, attractive and intuitive way of indicating system status. The number of light sources used to illuminate the ring around the circumference is highly variable and can be arranged according to conventional techniques. Although, as shown, the ring 1310 is sandwiched between the body 332 and the front 334, it is expressly contemplated that a similar ring could be sandwiched at the joint between the rear 336 (not shown) and the body 332, using the principles described above. Moreover, in various embodiments, rings can be provided at both the front joint and the rear joint.
IV. Processing Image Data in the Multi-Core Processor
The illustrative multi-core processor 114 gives each discrete core (C1, C2) a high degree of processing independence. Without specific user instructions, minimal crosstalk to share data occurs between processes. Typically, each core runs its own operating system and executes its loaded programs independently of the other. The memory space in the RAM 244 corresponding to each core is typically discrete, with minimal shared memory space. An internal bus in the processor provides data exchange between the cores as appropriate, based on user program instructions. Processes are thereby given the ability to divide up image processing tasks so as to improve processing efficiency and speed. Below are descriptions of various illustrative processes that can be executed using the dual-core capability of the processor 114.
Referring to Fig. 14, as shown, a generalized procedure 1400 allows the processor to dynamically assign different tasks to each core for execution. A task can be an operation on a single image frame sent to the processor from the FPGA. The task can be a vision system task, for example an ID-finding or ID-decoding task. Procedure 1400 allows the operation of the cores in the multi-core processor 114 to be optimized so that the cores are used efficiently. That is, if ID finding consumes fewer processor resources than ID decoding, one core can be adapted to find IDs in multiple images while another core decodes the image frames containing found IDs. Likewise, where a frame represents the two halves of a FOVE image, the image can be divided between the two cores, and so on. In general, the program data includes one or more scheduling algorithms that can be adapted to operate with peak efficiency on a specific set of image data. These scheduling algorithms can assist by estimating when each core of the processor will become idle to execute a given task. An appropriate scheduling algorithm, well suited to a specific group of tasks, is determined in step 1410 of procedure 1400, and in step 1420 that group of tasks is loaded onto at least one core. That core becomes the scheduler for the multiple cores and transmits the schedule over the internal bus. When an image frame is sent from the FPGA to a core of the processor over the PCIe bus, the frame is monitored, and the task to be executed on the image data is identified by the scheduling algorithm (step 1430). The scheduling algorithm assigns the image data and the task to the next available core (step 1440). The assignment can be based on a pre-estimate of when a core will become available. When the task on a particular image frame is complete, the procedure continues to monitor and assign new tasks and data to the cores. Over time, the scheduling algorithm can be used to monitor the observed results of different types of tasks and to optimize the priority of the tasks on each core. One core runs the scheduling algorithm that determines which core receives each task.
It will be noted that the use of the two cores C1 and C2 in this illustrative embodiment is exemplary of a multi-core processor, which may include three or more cores. The processes described herein can be readily generalized to three or more cores. Below are descriptions of further processes using the multi-core processor, according to embodiments:
Referring to the diagram of Fig. 15, a multi-core process 1500 is shown in which the processor 114 receives an image frame 1510 divided into two portions 1520, 1522. The portions can be divided vertically (e.g. the two fields of view provided by the FOVE), horizontally, or by another division method (for example, alternating pixels). The two (or more) image portions 1520, 1522 are sent to respective cores C1 and C2. The two (or more) partial images are each processed and decoded in parallel by their respective cores C1, C2. The decoding results 1530, 1532 can be merged and provided to downstream processes, for example an indication of a good ID read or of no ID read, and the decoded information can be transmitted to a remote computer. An overlap is typically provided between the two partial images so that an ID lying between the images is adequately identified by at least one core. The overlap is variable, but is typically large enough to fit an ID of an expected size entirely within at least one of the partial images. Where the image is divided by the processor itself, the overlap is provided by sending the overlapping image data to both cores simultaneously. With a FOVE, the overlap is already present in the acquired image, and the image of each field of view can be transmitted to each core without sharing additional overlap. Communication between the cores (bus link 1540) allows merging of the results and other desired inter-core communication.
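As a rough illustration of process 1500, the following Python sketch divides a frame's rows into two overlapping bands and merges per-core results. The band geometry and the duplicate-dropping merge are simplifying assumptions for illustration, not the patent's exact method; the overlap parameter plays the role of the expected maximum ID size.

```python
def split_with_overlap(height, overlap):
    """Divide a frame of `height` rows into two bands (portions 1520/1522)
    whose shared rows are wide enough to contain an ID of expected size."""
    half = height // 2
    return (0, half + overlap // 2), (half - overlap // 2, height)

def merge_results(results_c1, results_c2):
    """Merge per-core decode results (1530/1532), dropping duplicates read
    by both cores because the symbol fell inside the overlap band."""
    merged = list(results_c1)
    merged.extend(r for r in results_c2 if r not in merged)
    return merged
```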
In a further embodiment, for cases in which there is little or no overlap between the images (such as multiple FOVE images with substantially no overlap), process 1500 can be replaced by a stitching process. Thus, in this embodiment, each FOVE image may include part (but not all) of an exemplary ID feature set, while the two images together contain substantially the entire ID feature set. One or more of the cores is used to identify the mutual connection between the ID fragments in the respective images and "stitch" them into one complete ID. This can occur during the ID-finding stage of the process, in which the complete ID is assembled and then decoded by one or more cores, or during the decoding process, in which the process decodes the partial ID in each image and attempts to merge the individual decoding results.
It is noted that while each multi-core process described herein is executed as a discrete process using discrete cores as shown in the figures, it is expressly contemplated that the term "core" as used herein can broadly refer to a group of cores. Thus, in the case of a four-core processor, one group of two cores can be responsible for one process task and a second group of two cores can be responsible for another process task. Optionally, a group of three cores can be responsible for a (higher processing overhead) task while a single core is responsible for a different (lower processing overhead) task. Optionally, two simultaneous tasks, or four simultaneous tasks, can be executed by assigning the tasks to appropriate processor cores and/or core groups. The scheduling algorithm can also be programmed to dynamically reassign cores to different tasks according to the current processing needs of a given task. The appropriate level of processing capability (e.g. number of cores) required for a given task can be determined by experimenting with different numbers of processors, running different types of tasks, and monitoring the speed at which the tasks are completed. Such processes are described below.
Referring to the diagram of Fig. 16, a multi-core process 1600 is shown in which the processor 114 receives an image frame 1610 at one core (or group of cores) C1, which performs ID decoding to output a decoding result 1620. A second core (or group of cores) C2, by contrast, executes one or more (non-decoding) system-related tasks 1630, supporting image acquisition and other system operations via output information 1640, which is used by tasks further downstream. Such system tasks 1630 may include (but are not limited to):
focus-setting algorithms (including measuring distance/calibration and computing sharpness) and auto-brightness algorithms (which may include exposure, gain and illumination intensity);
JPEG (or other) image data compression, for example executed on an image frame that is then stored and/or transferred to a remote computer; and/or
wavefront reconstruction, used, for example, in a vision system employing known wavefront coding techniques to improve the depth of field.
In a system that uses one or more cores to execute non-decoding system tasks (such as process 1600 of Fig. 16), the assignment of system tasks to particular cores may depend on the current trigger frequency. As shown in Fig. 17, a scheduling process 1700 determines the current trigger frequency in step 1710. If the trigger frequency is below a certain threshold, so that fewer cores can execute the required decoding tasks, decision step 1720 assigns one or more cores to non-decoding tasks (step 1730). Conversely, when the trigger frequency exceeds a certain threshold (or thresholds), one or more cores (the number possibly depending on the frequency) are assigned to decoding tasks (step 1740). In the simplified dual-core embodiment shown, at a low trigger frequency one core is assigned to decoding and the other core is assigned to system tasks. At a higher trigger frequency, one core (e.g. C1) is assigned to decoding, while the other core (or cores) (e.g. C2) can simultaneously execute both decoding and system tasks. This is particularly applicable to a dual-core system. In an illustrative many-core system using more than two cores, one or more cores can be assigned to decoding while another core (or cores) is simultaneously assigned to both decoding and system tasks.
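Steps 1710-1740 amount to a role assignment keyed on trigger frequency. The sketch below is a hedged Python illustration; the single threshold, the role labels, and the policy of giving extra cores a combined "decode+system" role at high frequency are assumptions drawn from the simplified dual-core description, not a definitive implementation.

```python
def assign_roles(trigger_hz, threshold_hz, num_cores=2):
    """Assign each core a role based on the current trigger frequency:
    below the threshold one core is freed for system tasks; above it,
    every core participates in decoding (extra cores also run system tasks)."""
    roles = {0: "decode"}  # one core always decodes
    if trigger_hz < threshold_hz:
        roles.update({c: "system" for c in range(1, num_cores)})
    else:
        roles.update({c: "decode+system" for c in range(1, num_cores)})
    return roles
```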
Referring to the diagram of Fig. 18, a process 1800 is described that uses multiple cores when one-dimensional and two-dimensional codes (or other distinct types of features requiring different processing capabilities/decoding times) are present simultaneously. Two-dimensional codes generally require more processing resources/time to fully decode. Once the IDs in the images are found, they are scheduled for dynamic load balancing of the tasks on each of the cores C1 and C2, to optimize the throughput of the system. For example, as shown, two one-dimensional codes 1810 and 1820 appear in respective images 1850 and 1860. Similarly, two two-dimensional codes 1830 and 1840 appear in respective images. These codes are organized so that, on each successive image, the one-dimensional and two-dimensional decoding tasks can switch between the two cores. In this way, on average, each core C1, C2 produces decoding results 1880, 1890 representing the same amount of processing.
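The load balancing of process 1800 can be approximated with a greedy least-loaded rule, shown below as a Python sketch. The patent describes alternating 1D and 2D tasks between cores; assigning each task to the currently less-loaded core is an assumed stand-in that produces the same alternating, throughput-evening effect when costs differ, with the per-type costs in `cost` also assumed.

```python
def balance_decode_tasks(tasks, cost):
    """Greedy dynamic load balancing: each decode task goes to whichever
    of two cores has less accumulated work, so cheap 1D and costly 2D
    codes interleave and per-core throughput evens out on average."""
    load = [0.0, 0.0]          # accumulated decode cost per core
    plan = {0: [], 1: []}
    for name, kind in tasks:
        core = 0 if load[0] <= load[1] else 1
        plan[core].append(name)
        load[core] += cost[kind]
    return plan, load
```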
The multi-core process 1900 shown in Fig. 19 assigns a first core (or group of cores) to decode an image within a maximum time determined by the highest throughput of the system (step 1910). If the maximum time is exceeded without the decode completing, decision step 1920 branches to decision step 1930, which determines whether the image could be decoded given more processing time than the maximum. If not, the system indicates a no-read (step 1940). If decoding is deemed possible, then in step 1950 a second core (or group of cores) is assigned to attempt further decoding of the image, or of additional images that could not be decoded within the maximum time (but that have characteristics suggesting the decode could be completed with more processing time). In one operational example, the characteristics suggesting an image could be decoded given more time include: (a) the finder pattern of the code has been found in the image; and/or (b) another code from a group of codes printed on the object has already been found (e.g. a Maxicode and a barcode are printed on the same package and one of them has been found). Optionally, if an ID hypothesis might be decoded with more time, or could be decoded using one or more algorithms different from the one currently employed, decision step 1930 can branch (shown in phantom) to step 1960, in which the system directs the first core, or reassigns the second core, to continue processing the ID using a different decoding algorithm. That algorithm can be a default selection, or can be chosen based on certain features of the image and/or ID (such as apparent image contrast, etc.) that make the algorithm particularly suitable for processing it.
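The timeout-and-handoff logic of process 1900 (steps 1910-1950) can be sketched as a small dispatch function. In this hedged Python illustration, `decode` and `looks_promising` are hypothetical callables standing in for the first core's time-budgeted decoder and the "decodable with more time" test (e.g. a finder pattern was located); they are not APIs from the patent.

```python
def dispatch_decode(image, decode, max_time, looks_promising):
    """First core decodes within a maximum time budget (step 1910); on
    timeout, images judged decodable given more time are queued for a
    second core (step 1950), otherwise a no-read is reported (step 1940)."""
    result, _elapsed = decode(image, budget=max_time)
    if result is not None:
        return ("decoded", result)
    if looks_promising(image):       # e.g. finder pattern already located
        return ("retry_on_core2", None)
    return ("no_read", None)
```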
A variant of the process 1900 of Fig. 19 is shown in Fig. 20. In the depicted process 2000, the maximum decoding time on a given image has been reached (steps 2010 and 2020). Provided the decode appears completable given more processing time (otherwise a no-read indication is issued in step 2040), the system allows the first core (or group of cores) to continue processing the image, and assigns the decoding of the next image to a different core (or group of cores), so that the first core (or group of cores) can complete its decoding task (step 2050).
The multi-core process 2100 shown in Fig. 21 is used to attempt to decode an ID/code 2110 in an image using multiple decoding algorithms. A first core (or group of cores) C1 attempts to decode the ID/code 2110 with a first decoding algorithm 2120, while (where applicable) a second core (or group of cores) C2 simultaneously attempts to decode the same ID/code 2110 with a second decoding algorithm 2130. For example, one core C1 attempts to decode the image with an algorithm optimized for high-contrast DataMatrix codes, while the other core C2 uses an algorithm optimized for low-contrast (DPM) codes. A decoding result, or a decode failure, 2140, 2150 is output from each core (or core group) C1, C2. Note that in some instances the two sets of results from the different algorithms can be merged, to "stitch" them into a complete code or otherwise verify the decoding task. This can occur where neither result alone is a complete (or reliable) read of the ID/code.
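A minimal sketch of process 2100 in Python, with threads standing in for the cores: the same image is handed to several decoding algorithms concurrently, and each algorithm's result or failure is collected, as in outputs 2140/2150. The algorithm names and the dict-of-callables interface are illustrative assumptions.

```python
from concurrent.futures import ThreadPoolExecutor

def decode_with_algorithms(image, algorithms):
    """Run several decoding algorithms on the same image concurrently
    (one per core/thread) and report each algorithm's result, with None
    standing for a decode failure."""
    with ThreadPoolExecutor(max_workers=len(algorithms)) as pool:
        futures = {name: pool.submit(fn, image)
                   for name, fn in algorithms.items()}
        return {name: f.result() for name, f in futures.items()}
```

A downstream step could then merge partial results from the different algorithms when neither alone is a complete read, as the text describes.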
Fig. 22 shows another multi-core process 2200 using core 1 (C1) through core N (CN). In this process, each core (or group of cores) is used to decode one of a series of successive images 1-N (2210, 2212, 2214). The cores C1-CN respectively produce decoding results 1-N (2220, 2222, 2224). As described above, the images can be assigned to the cores in a preset order or in a dynamically determined order. Where dynamic assignment is used (as described above), various factors can be taken into account, such as the code type and the speed of decoding a given image (e.g. the decoding time exceeding a maximum threshold).
Fig. 23 describes a multi-core process 2300 in which regions containing IDs are located by one core (or group of cores) and the IDs in those regions are decoded on another core (or group of cores). Image frame data 2310 is transmitted simultaneously to cores C1 and C2. One core, C1, runs a process 2320 for finding regions that contain symbol (ID) information, while the other core, C2, runs an ID decoding process (typically communicated between the cores over the internal bus) that concentrates on the approximate ID information and on the ID features conveyed for those regions (such as barcode orientation, boundaries, etc.), so as to accelerate the decoding process and efficiently produce a decoding result 2350. Where more than two cores are used, the finding can be performed with a smaller number of cores while the decoding uses more cores (or vice versa).
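The find/decode division of labor in process 2300 can be sketched as a two-stage pipeline. In this hedged Python illustration, `find` and `decode` are hypothetical callables standing in for the finder process 2320 on core C1 and the hint-guided decoder on core C2; the region dictionaries carrying orientation/bounds hints are an assumed data shape, not the patent's actual interface.

```python
def locate_then_decode(frame, find, decode):
    """A finder stage (core C1) emits candidate regions plus hints such
    as orientation and bounds; the decode stage (core C2) works only on
    those regions, skipping a full-image search."""
    results = []
    for region in find(frame):        # runs on C1 in the patent's scheme
        text = decode(frame, region)  # runs on C2, guided by the hints
        if text is not None:
            results.append(text)
    return results
```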
Fig. 24 describes a multi-core process 2400. In this embodiment, a first core (or group of cores) C1 processes the image frame data 2410 using various conventional and/or specialized vision system tools 2420 in order to extract relevant image information (such as edges, down-sampled pixels, blobs, etc.). The extracted image information 2440 is sent over the bus to a second core (or group of cores) C2, where it is decoded by a decoding process 2430, which includes procedures for interpreting the extracted information to screen for ID-like features. A decoding result 2450 (if any) is thereby produced.
Fig. 25 describes a multi-core process 2500 similar to processes 2300 and 2400. A first core (or group of cores) C1 applies an ID presence/absence process 2520 to the transmitted image frame data 2510 (for example, a process suited to searching for ID-like features, such as closely spaced parallel lines and/or DataMatrix-like geometry in the image data) to determine the presence or absence of an ID/code. This differs from determining position, location or image feature information, in that only the actual presence or absence is determined. This decides whether the image contains an ID/code; if not, the image is discarded without further processing. The presence/absence information 2540 is transferred to a second core (or group of cores) C2, which accordingly either executes process 2530 or discards the image data. If an ID/code is indicated as present, the second core (or group of cores) C2 uses an ID locating and decoding process 2530 (or processes) to find and decode the image based on sufficient similarity to a presented symbol. When the decoding process is complete, any decoding result 2550 is output. In addition to ID location data (or instead of it), this and the other processes described herein can transfer other ID-related data between cores. Such other data may include, but is not limited to, image resolution, ID type, etc.
In a further variant of multi-core processes 2300, 2400 and 2500, depicted as process 2600 of Fig. 26, a first core (or group) C1 analyzes the data of each image frame 2610 to determine whether the image has sufficient quality and/or content to merit processing by the second core (or group) C2. An image analysis process 2620 determines image characteristics and decides whether an ID find and decode process is worth executing. If so, the first core (or group) C1 instructs the second core (or group) (instruction 2640), which undertakes the ID find/locate and decode process 2630 and outputs decoded results 2650. Characteristics usable to determine the adequacy of the image data include, but are not limited to, image contrast, sharpness/focus quality, etc. As shown, it is likewise expressly contemplated that at least part of the image analysis process 2620 can be operated in the FPGA, using preloaded algorithms suited to run in the FPGA. Information derived by such algorithms is then transmitted to one or more cores (e.g. C1, C2, etc.), which employ it in locating and decoding IDs according to process 2630.
It should be clear that any of the above multi-core processes can be combined with other multi-core processes in a single runtime operation by a scheduling algorithm. For example, auto-focus (process 1600 of Fig. 16) can be run in one core as a system task during a portion of the image acquisition of a given object, while processing of partial images (e.g. the two parts of an FOVE image) can be executed during a subsequent portion of that acquisition event. Other processes described above can likewise be executed during other portions of the acquisition event as appropriate.
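The schedule described above can be pictured as a simple dispatch table mapping acquisition phases to per-core assignments. This is only an illustrative sketch; the phase and task names below are invented, not taken from the disclosure.

```python
# Hypothetical dispatch table: for each phase of an acquisition event,
# which task each core (or core group) runs. Auto-focus runs as a
# system task on C1 during acquisition, while C2 keeps decoding.
SCHEDULE = {
    "acquire_part_1": {"C1": "autofocus",         "C2": "decode_previous"},
    "acquire_part_2": {"C1": "process_fove_left", "C2": "process_fove_right"},
}

def tasks_for(phase):
    """Return the per-core task assignment for the given acquisition phase."""
    return SCHEDULE[phase]
```

A real scheduler would of course select and reorder such entries dynamically (e.g. based on trigger frequency), but the table form captures the core-to-task mapping the text describes.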
V. Additional System Features and Functions
Having described various illustrative embodiments of the electronics, physical package and multi-core processes of the vision system herein, the illustrative features and functions described below are desirably and advantageously employed to enhance overall operation and usability.
Typically, determination of focal distance and rapid adjustment of the lens assembly is desirable on an object-by-object basis, particularly where objects differ in height and/or orientation (as shown in exemplary Fig. 1). In general, conveyors and other moving lines are adapted to include an encoder signal in the form of pulses based upon motion distance, whose period varies with line speed. By knowing the motion-distance increment between pulses, the speed of the line (and of objects thereon) at any time can be determined. Thus, with reference to process 2700 of Fig. 27, the encoder signal is input to an interface of the camera assembly (step 2710) and processed to determine actual object speed (step 2720). When features on an object (such as IDs or other recognizable shapes) are identified, their pixel drift between image frames can be tracked (step 2730). The time between frames is known, so the frame-to-frame motion of pixels within the feature allows the system to compute the relative focal distance to the object (feature). With a diverging lens, pixel drift increases at relatively short distances and decreases at relatively long distances. Thus, from the measured pixel drift, a fundamental equation can be used to compute the focal distance (step 2740). When the focal distance is computed, the system can command the FPGA to appropriately adjust the liquid lens assembly (or other auto-focus lens) (step 2750). In general, a stored list of current values corresponds to preset focal distances; once the distance is known, the default current is set to the corresponding value. Calibration of the lens assembly, ensuring that the current adjustment matches the determined focal distance, can thus be performed regularly using conventional or custom techniques. In an illustrative embodiment, the known distance to a conveyor can be used to correct the focal distance of the liquid lens. A feature on the conveyor belt (or an applied fiducial) is brought into sharp focus by the lens, and that feature is then set as the known focal distance. This feature can be fixed (e.g. located to the side of the conveyor within the field of view), or can travel with the conveyor. Where it travels with the conveyor, an encoder position can optionally be incorporated, whereby the relatively accurate (downstream) position of the calibration feature within the field of view is known.
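The two computations of Fig. 27 — line speed from encoder pulses, and focal distance from tracked pixel drift — can be sketched as follows. The "fundamental equation" is not given in the text, so the inverse-proportional drift model and both constants below are assumptions for illustration only.

```python
DISTANCE_PER_PULSE_MM = 0.5   # encoder calibration constant (assumed)
DRIFT_CONSTANT = 50_000.0     # pixels * mm; hypothetical diverging-lens model

def line_speed_mm_s(pulse_count, interval_s):
    """Belt/object speed from encoder pulses counted in a known interval
    (steps 2710-2720): speed = pulses * distance-per-pulse / time."""
    return pulse_count * DISTANCE_PER_PULSE_MM / interval_s

def focal_distance_mm(pixel_drift_per_frame):
    """Relative focal distance from tracked feature drift between frames
    (steps 2730-2740). Per the diverging-lens behavior described in the
    text, larger drift implies a nearer object; modeled here as d = k/drift."""
    return DRIFT_CONSTANT / pixel_drift_per_frame
```

The computed distance would then be mapped to a stored lens drive current and handed to the FPGA (step 2750).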
With reference to process 2800 of Fig. 28, the FPGA (or another preprocessor connected with the imager) can include a program or process that performs a high-speed search for ID/code-like features (step 2810). The process can use standard ID search routines, for example searching for patterns of multiple adjacent parallel lines, or edges resembling a DataMatrix. The FPGA transmits from the buffer (memory 228) over the PCIe bus only those image frames containing such features to the processor 114 (step 2820), essentially eliminating image frames that contain no code. The processor then performs a further decoding process on the received image frames using the assigned core (or cores) (step 2830). The FPGA can also transmit relevant ID position data (if any) to shorten the decoding time in the processor 114.
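The prefilter of Fig. 28 amounts to a predicate applied per frame before transmission. A minimal sketch, with the caveat that `looks_like_id()` below is a hypothetical stand-in for the FPGA's parallel-line/DataMatrix pattern test, and frames are modeled as plain dictionaries rather than buffered pixel data:

```python
def looks_like_id(frame):
    # Hypothetical stand-in for the FPGA's ID-like-feature search:
    # treat a frame as a candidate if it recorded enough edge hits.
    return frame.get("edge_count", 0) > 10

def frames_to_forward(frames):
    """Keep only frames in which an ID-like feature was found (step 2820);
    frames with no code are dropped and never reach the processor."""
    return [f for f in frames if looks_like_id(f)]
```

The payoff is that the multi-core processor's decode cores (step 2830) spend no cycles on empty frames; only the filtered subset crosses the PCIe bus.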
With reference to Fig. 29, the vision system 100 is shown with camera assembly 110, lens assembly/cover 116 and attached FOVE 118. The FOVE has been provided with one or more applied fiducials 2910, which can include a checkerboard pattern of light and dark elements, or another clearly recognizable pattern. In this embodiment, the fiducial 2910 is applied to a corner of the FOVE window 740, at a location that is relatively small and remote with respect to the overall field of view (e.g. in a corner). Optionally (or additionally), a fiducial 2912 (shown in phantom) can be placed at an appropriate location on a mirror (e.g. the large mirror 812 — shown in phantom). In general, the fiducial is located along an optical component of the FOVE optical path. Because the distance between the fiducial and the image plane (sensor 112 — shown in phantom) can be accurately determined, by focusing upon the fiducial the focal distance of the liquid lens (or other lens assembly) can be accurately calibrated. Additional techniques for providing automatic "closed-loop" calibration for a liquid lens (or other variable lens assembly) are shown and described in commonly assigned U.S. Patent Application Serial No. 13/563,499, entitled SYSTEM AND METHOD FOR DETERMINING AND CONTROLLING FOCAL DISTANCE IN A VISION SYSTEM CAMERA, by Laurens Nunnink, et al., the teachings of which are incorporated herein by reference as useful background information. In general, the structures and techniques described in the incorporated application entail providing the lens assembly with a structure that selectively projects a reference pattern onto at least part of the optical path during calibration (which can occur dynamically, on-the-fly, during runtime operation), but that allows some or all of the field of view to remain undisturbed during acquisition of object images in normal runtime operation. This approach substantially eliminates inaccuracies due to manufacturing tolerances, calibration drift over time of use, and the temperature of the system and/or lens assembly.
To further illustrate, in Fig. 29 the above-described optional fan assembly 2920 is shown mounted by screws or other fasteners 2921 to the bottom side of the camera assembly 110. A connecting cable 2922 is attached to an appropriate connector at the rear of the camera assembly. Optionally, the cable 2922 can be connected to an external power supply.
With further reference to the more detailed perspective views of Figs. 29A and 29B, the illustrative camera assembly 110 (with exemplary lens 2928) can also include an optional bracket 2930 that provides an intermediary assembly with respect to the fan 2920. The bracket 2930 includes an annular inlet/outlet 2931, sized to match the diameter of the fan blades so that air flows through the annular inlet/outlet. The bracket 2930 also includes fasteners 2932 that secure the bracket to the above-described threaded holes in the bottom of the camera body (588 of Fig. 5A). The fan 2920 is mounted to the outside of the bracket 2930 by fasteners 2936 that are offset from the bracket fasteners 2932. These fasteners 2938 are seated in threaded holes 2937 of the bracket 2930. The fasteners 2936 pass through washers 2938, which maintain the rigidity of the fan mounting flange. The fasteners 2936 also pass through standoffs 2940 that space the fan 2920 away from the outer plate, thereby allowing airflow to be discharged from the bottom face. In one embodiment, this separation spacing can be between approximately 0.5 and 2 cm, although a wide range of possible offset distances is expressly contemplated. Note that it is likewise expressly contemplated that, in alternate embodiments, the bracket and/or fan can be mounted on one or more sides (e.g. the left or right side) and/or the top side of the camera body. This can depend in part upon the mounting arrangement of the camera. The fan can be covered by a conventional safety grill as part of the securing arrangement. The bracket 2930 also includes a pair of illustrative tabs 2934 with fastener holes 2944, which can serve as part of a mounting arrangement for suspending the camera assembly (and any cooperating attachments, such as an FOVE, over an imaged scene).
With reference to Fig. 30, precise operation of the liquid lens (or other variable lens) assembly can be improved by establishing a characteristic curve of drive current versus focal distance (or lens optical power). That is, the operating curve of drive current for the lens assembly is typically nonlinear over its full focal range, and process 3000 accounts for this nonlinearity. During manufacture, or during a calibration period, the lens is driven to focus upon an object/fiducial at various known focal distances (step 3010). At each such focus, the actual drive current is measured (step 3020). The process continues stepping through multiple focal distances (decision step 3030 and step 3040) until all focal distances have been traversed and tested. Decision step 3030 then branches to step 3050, in which the drive-current data points are used to generate a characteristic curve of drive current versus focal distance (or optical power). This characteristic curve represents any nonlinearity, and it can be stored (e.g. as a look-up table or a model) so that the lens can thereafter be driven during runtime using the correction provided by the characteristic curve. Clearly, nonlinearity analysis and error correction for the lens drive current can be accomplished using a wide range of techniques that should be apparent to those of skill in the art.
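One of the many possible realizations of the stored characteristic curve of Fig. 30 is a look-up table with linear interpolation between calibration points. The sample points below are invented for illustration; real values would come from the step-3020 measurements.

```python
CALIBRATION = [  # (focal_distance_mm, measured_drive_current_mA), from step 3020
    (200, 42.0), (400, 55.5), (800, 61.0), (1600, 63.5),
]

def drive_current_for(distance_mm):
    """Runtime lookup over the stored characteristic curve (step 3050):
    linearly interpolate between the bracketing calibration points,
    clamping outside the calibrated range."""
    pts = sorted(CALIBRATION)
    if distance_mm <= pts[0][0]:
        return pts[0][1]
    for (d0, c0), (d1, c1) in zip(pts, pts[1:]):
        if distance_mm <= d1:
            t = (distance_mm - d0) / (d1 - d0)
            return c0 + t * (c1 - c0)
    return pts[-1][1]  # beyond last calibrated distance
```

A piecewise-linear table is only one choice; as the text notes, a fitted model of the nonlinearity would serve equally well.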
With reference to Fig. 31, a process 3100 is shown in which focal distance is measured based upon the overlap regions in FOVE images. An image frame 3110 is divided into two parts 3120 and 3122, each corresponding to one side of the overall expanded width of the FOVE. Each of image parts 3120 and 3122 includes a respective overlap region 3130 and 3132, coordinated as described above. Within each of the overlap regions 3130, 3132 reside one or more recognizable features (e.g. an X 3140 and a bar code 3142). These features can be any contrasting elements visible in both overlap regions. The system identifies these features in each overlap region and measures their relative positions and sizes (step 3150). At different focal distances, these parameters vary according to a known measurement scale. In step 3160, the process 3100 compares the positional offsets (and size differences, if any) against known values for corresponding focal distances. More generally, the process operates in the manner of a coincidence range finder. The value of the corresponding focal distance is then used in step 3170 to set the focal distance of the lens assembly. This process, and the other automatic adjustment processes described herein, can be programmed on the FPGA, or can employ a system task function on one or more cores of the processor 114, with information returned to the FPGA so that the FPGA can perform the focus adjustment.
As shown in Fig. 32, another process 3200 serves to more generally measure the speed of, and distance to, objects passing through the field of view, which is useful in auto-focus and other automatic adjustment processes. In this embodiment, the system identifies one or more features on an object — typically some or all of the edges of the object itself, or another closed or semi-closed element. In step 3220, the process records and stores the size of this feature (or features). The process then looks for a next image frame containing the feature(s) (decision step 3230), and/or determines whether enough frames have been acquired to make a determination. If a next frame is to be processed, the process returns to step 3220 and records/stores the size of the feature(s) in that next frame. This continues until no frames remain, or enough frames have been processed. Decision step 3230 then branches to step 3240, in which the change in size between image frames is computed. Then, in step 3250, given knowledge of the time base between image frames and of relative distance information as a function of the rate of change of size over time (e.g. a characteristic curve or look-up table), the process computes the relative distance and speed of the object. This can be used to control the focus of the lens assembly.
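The size-change computation of Fig. 32 (steps 3220-3250) can be sketched under a simple pinhole-style assumption that apparent feature size scales inversely with distance. The text instead references a characteristic curve or look-up table; the inverse model and `SIZE_CONSTANT` below are hypothetical simplifications.

```python
SIZE_CONSTANT = 100_000.0  # pixels * mm; hypothetical calibration value

def distance_mm(feature_size_px):
    """Relative distance from apparent feature size, assuming size ~ 1/distance."""
    return SIZE_CONSTANT / feature_size_px

def approach_speed_mm_s(sizes_px, frame_interval_s):
    """Object speed toward the camera from the feature-size change across
    a sequence of frames (steps 3240-3250), given a known frame time base."""
    d = [distance_mm(s) for s in sizes_px]
    return (d[0] - d[-1]) / (frame_interval_s * (len(d) - 1))
```

A growing feature thus yields a positive approach speed, and the per-frame distance estimate can feed the lens-focus control directly.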
With reference to Fig. 33, an exemplary arrangement of two camera assemblies M and S (FOVEs omitted) is positioned on opposite sides of a scene so as to image the front and back of an object 3310 bearing multiple IDs 3312 on differing surfaces, only some of which reside within the field of view of each camera, but all of which (e.g. front 3320, top 3322 and back 3324) are fully imaged by the two camera assemblies M and S together. Each camera assembly M and S includes a respective illuminator MI and SI. Notably, the cameras M and S are arranged in a master-slave arrangement, in which an RS-485 connector 3330 mounted on the rear of assembly M (part of the communication interface provided on the camera assembly, communicating with the processor 114) is connected to a Y-cable 3332. The Y-cable includes opposing male and female connectors 3334. One of the connectors (3336) joins an opposing connector 3338, which is connected via a second Y-cable 3340 to assembly S; the second Y-cable 3340 includes a further connector 3342 for attaching additional slave units. To avoid crosstalk between illuminators, the processor of assembly M controls its own image acquisition and illumination trigger at a time TM, and controls image acquisition/illumination of assembly S at a discrete time TS. The acquisition times TM and TS are offset via a preset time base, ensuring that the image acquisition of each camera assembly is free of interference from the other. Images can be processed by any core within each camera, or image data can be shared between the cameras via an appropriate connection (e.g. the network connection (270 of Fig. 2)), so that any core in either of the two camera assemblies processes the shared image data. For example, one group of cores can be adapted to find IDs in all images, while another group can be adapted to decode all images. Additional camera assemblies can be connected by appropriate cabling, thereby implementing an extended master-slave arrangement (or another control arrangement).
VI. Conclusion
It should be clear that the above-described embodiments of a vision system — employing a vision system camera with a multi-core processor, a high-speed, high-resolution imager, an FOVE, an auto-focus lens, and an imager-connected preprocessor for pre-processing image data — provide highly desirable acquisition and processing speed, as well as image sharpness, in a wide range of applications. More particularly, this arrangement effectively serves scanning applications that require a wide field of view, in which useful features vary in size and location, and in which objects move relatively rapidly with respect to the system's field of view. The physical package of the vision system provides a variety of physical interconnection interfaces to support various options and control functions. The package effectively dissipates internally generated heat through component placement that optimizes heat exchange with the ambient environment, and includes heat-dissipating structures (e.g. fins) to facilitate such exchange. The system also enables a variety of multi-core processes that optimize and load-balance image processing and system operation (e.g. automatic adjustment tasks). At the same time, it is expressly contemplated that the above-described methods and procedures for operating the camera assembly and performing vision system/decoding tasks can be combined in various ways to achieve a desired processing result. Likewise, procedures can be switched depending upon processing conditions (e.g. process 2100 can be used and then, depending upon conditions, switched to process 2300, etc.). Similarly, given more than two cores, multiple procedures can be performed simultaneously (e.g. process 2500 is carried out in two of four cores while process 2600 is simultaneously performed in the other two of the four cores).
The foregoing has been a detailed description of illustrative embodiments of the invention. Various modifications and additions can be made without departing from the spirit and scope of this invention. Features of each of the various embodiments described above may be combined with features of other described embodiments as appropriate in order to provide a multiplicity of feature combinations in associated new embodiments. Furthermore, while the foregoing describes a number of separate embodiments of the apparatus and method of the present invention, what has been described herein is merely illustrative of the application of the principles of the present invention. For example, the various directional and orientational terms used herein, such as "vertical", "horizontal", "up", "down", "bottom", "top", "side", "front", "rear", "left", "right", and the like, are used only as relative conventions and not as absolute orientations with respect to a fixed coordinate system, such as gravity. Also, while not shown, it is expressly contemplated that a variety of mounting arrangements supported by various structures (e.g. overhead booms, ceiling posts, beams, etc.) can, as appropriate, be used to fix the camera assembly and other vision system components with respect to the imaged scene. Likewise, while the FOVE shown is a dual-field expander, it is expressly contemplated that the FOVE can expand the field of view into three or more fields, each appropriately projected as a partial image on the imager. Also, while the FOVE expansion herein occurs along the "width" dimension, it is expressly contemplated that the term "width" can be substituted for "height" herein where such an application is desired; thus expansion can occur along either or both of width and height. Similarly, it is expressly contemplated that internal or external illumination can include projected visible and/or invisible (e.g. near-infrared) wavelengths used for specific functions, such as calibration, and that the imager can be adapted to uniquely read such wavelengths during particular tasks, such as calibration. In addition, while the FPGA and processor herein are each shown performing certain functions, it is expressly contemplated that some functions can be shifted into either of these structures. In alternate embodiments, most tasks and functions can be performed by the multi-core processor, and the hardware/firmware-based functions performed by the FPGA can be reduced to a minimum, or the FPGA can be omitted entirely in favor of a circuit adapted to transmit image data from the image sensor to the processor in an appropriate format at appropriate times. Accordingly, this description is meant to be taken only by way of example, and not to otherwise limit the scope of this invention.
Claims (30)
1. A vision system comprising:
a multi-core processor that receives images captured by an imager, the multi-core processor performing system operation tasks and vision system tasks on the images to generate results related to information in the images, wherein the multi-core processor is constructed and arranged to operate according to a schedule, the schedule assigning each of a plurality of cores to process either a system operation task or a vision system task.
2. The vision system of claim 1, wherein the schedule controls the images so that each of the images is selectively processed in each of the cores to increase efficiency in generating the results.
3. The vision system of claim 2, wherein the schedule controls at least one of the cores to perform system operation tasks free of generating the results.
4. The vision system of claim 1, further comprising a preprocessor that performs at least predetermined automatic adjustment operations based at least in part upon information generated by the system operation tasks performed in the at least one of the cores.
5. The vision system of claim 4, wherein the automatic adjustment includes at least one of illumination control, brightness exposure and gain, and focus of an auto-focus lens.
6. The vision system of claim 5, wherein the auto-focus lens comprises a liquid lens.
7. The vision system of claim 1, wherein the results include decoded symbol information from an object containing a symbol code.
8. The vision system of claim 1, further comprising a field of view expander (FOVE) that divides the image received at the imager into a plurality of partial images taken along one of an expanded width and an expanded height.
9. The vision system of claim 8, wherein each of the partial images is respectively processed by a core of the multi-core processor.
10. The vision system of claim 9, wherein each image or partial image includes an overlap region with respect to another partial image, and each core respectively processes the overlap region.
11. The vision system of claim 9, wherein each partial image contains a portion of a symbol code, and wherein each core respectively identifies and processes the portion to generate results, the results being stitched together to include decoded symbol information.
12. The vision system of claim 1, further comprising an interface for an external speed signal corresponding to a line moving relative to a field of view of the camera assembly.
13. The vision system of claim 12, wherein at least one of the preprocessor and/or the multi-core processor is constructed and arranged to perform, based upon the speed signal and a plurality of images of a moving object, at least one of the following operations:
(a) controlling focus of a variable lens,
(b) measuring a focal distance to the imaged object,
(c) calibrating a focal distance to the line, and
(d) measuring a relative speed of the imaged object.
14. The vision system of claim 1, further comprising a preprocessor that selectively transmits a portion of the images from the imager to the multi-core processor, the preprocessor processing other images from the imager for use in system control, including automatic adjustment.
15. The vision system of claim 1, wherein the preprocessor, based upon its identification of features of interest, selectively transmits information to the multi-core processor for further processing, the information being at least one of (a) the features of interest and (b) images containing the features of interest.
16. The vision system of claim 15, wherein the features of interest are symbols.
17. The vision system of claim 1, wherein the multi-core processor is constructed and arranged to respectively process partial images from each image in each of the plurality of cores.
18. The vision system of claim 1, wherein the multi-core processor is constructed and arranged to decode symbols in images in at least one of the cores, and the multi-core processor is constructed and arranged to (a) identify, in the at least one core, symbols contained in images and (b) decode the symbols, in another of the cores, in the images containing identified symbols.
19. The vision system of claim 18, wherein the multi-core processor is constructed and arranged to provide, to the another of the cores, information related to at least one of: (a) a location of the symbol in the image containing the symbol, and (b) other features related to the symbol in the image containing the symbol.
20. The vision system of claim 1, wherein the multi-core processor is constructed and arranged to perform image analysis on images in at least one of the cores to identify images having sufficient features for decoding, and to perform a decoding step, in another of the cores, on images having sufficient features for decoding.
21. The vision system of claim 1, wherein the multi-core processor is constructed and arranged to process images using a first decoding process in at least one of the cores and to process images using a second decoding process in another of the cores.
22. The vision system of claim 1, wherein the multi-core processor is constructed and arranged to decode an image containing a symbol, from a plurality of images, in at least one of the cores, and, after a predetermined time interval, if (a) the image has not been fully decoded and (b) it is probable that the decoding of the image can be completed by spending further time, to decode the image in another of the cores.
23. The vision system of claim 1, wherein the multi-core processor is constructed and arranged to decode an image containing a symbol, from a plurality of images, in at least one of the cores, and, after a predetermined time interval, if (a) the image has not been fully decoded and (b) it is probable that the decoding of the image can be completed by spending further time, to continue decoding the image in the at least one of the cores and to decode another image from the plurality of images in another of the cores.
24. The vision system of claim 1, wherein the multi-core processor is constructed and arranged to respectively process partial images that each contain a portion of each image, wherein the images contain symbols of a first type and symbols of a second type, and wherein the multi-core processor is further constructed and arranged to decode the partial images using each of the plurality of cores so that the symbols of the first type and the symbols of the second type are processed in a load-balanced manner among the cores.
25. The vision system of claim 24, wherein the symbols of the first type are one-dimensional bar codes and the symbols of the second type are two-dimensional bar codes.
26. The vision system of claim 1, wherein the cores are arranged so that, based upon a measured current trigger frequency of image acquisition by the imager, if the trigger frequency is within a predetermined threshold, at least one of the cores performs non-decoding system operation tasks, and if the trigger frequency exceeds the predetermined threshold, the at least one of the cores performs decoding tasks free of performing system operation tasks.
27. The vision system of claim 26, wherein the non-decoding system task is an automatic adjustment task.
28. The vision system of claim 1, further comprising a preprocessor that, prior to transmitting images to the multi-core processor, preprocesses the images received from the imager to identify symbols in at least some of the images, so that the transmitted images include images having symbols, the multi-core processor being constructed and arranged to decode the symbols in the images in at least one of the cores.
29. A vision system comprising:
a preprocessor that selectively stores images received from an imager at a frame rate, the preprocessor transmitting at least some of the images to a multi-core processor that processes information in the images in a plurality of cores to generate results, the preprocessor employing at least some of the stored images for vision system automatic adjustment tasks.
30. A method for processing images in a vision system comprising the steps of:
capturing images at a first frame rate in an imager of a vision system camera;
transmitting at least a portion of the images to a multi-core processor; and
processing the transmitted images, according to a schedule, in each of a plurality of cores of the multi-core processor to generate results containing information related to the images, the schedule assigning each of the plurality of cores either to process system operation tasks, including camera automatic adjustment, or to process vision system tasks, including image processing tasks.
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/645,213 | 2012-10-04 | ||
US13/645,213 US8794521B2 (en) | 2012-10-04 | 2012-10-04 | Systems and methods for operating symbology reader with multi-core processor |
US13/645,173 | 2012-10-04 | ||
US13/645,173 US10154177B2 (en) | 2012-10-04 | 2012-10-04 | Symbology reader with multi-core processor |
CN201310465330.3A CN103714307B (en) | 2012-10-04 | 2013-10-08 | With the symbol reader of polycaryon processor and its runtime and method |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310465330.3A Division CN103714307B (en) | 2012-10-04 | 2013-10-08 | With the symbol reader of polycaryon processor and its runtime and method |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108460307A true CN108460307A (en) | 2018-08-28 |
CN108460307B CN108460307B (en) | 2022-04-26 |
Family
ID=50407267
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210397986.5A Pending CN114970580A (en) | 2012-10-04 | 2013-10-08 | Symbol reader with multi-core processor and operation system and method thereof |
CN201310465330.3A Active CN103714307B (en) | 2012-10-04 | 2013-10-08 | With the symbol reader of polycaryon processor and its runtime and method |
CN201810200359.1A Active CN108460307B (en) | 2012-10-04 | 2013-10-08 | Symbol reader with multi-core processor and operation system and method thereof |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210397986.5A Pending CN114970580A (en) | 2012-10-04 | 2013-10-08 | Symbol reader with multi-core processor and operation system and method thereof |
CN201310465330.3A Active CN103714307B (en) | 2012-10-04 | 2013-10-08 | With the symbol reader of polycaryon processor and its runtime and method |
Country Status (2)
Country | Link |
---|---|
CN (3) | CN114970580A (en) |
DE (1) | DE102013110899B4 (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3159731B1 (en) * | 2015-10-19 | 2021-12-29 | Cognex Corporation | System and method for expansion of field of view in a vision system |
CN105469131A (en) * | 2015-12-30 | 2016-04-06 | Shenzhen Chuangke Automation Control Technology Co., Ltd. | Hidden two-dimensional code and reading and recognition device therefor |
CN106937047B (en) * | 2017-03-08 | 2019-08-09 | Suzhou Yiruide Electronic Technology Co., Ltd. | Adaptive-focus visual recognition method, system and device for symbol features |
CN107358135B (en) * | 2017-08-28 | 2020-11-27 | Beijing QIYI Century Science & Technology Co., Ltd. | Two-dimensional code scanning method and device |
DE102017128032A1 (en) * | 2017-11-27 | 2019-05-29 | CRETEC GmbH | Code reader and method for online verification of a code |
US10776972B2 (en) | 2018-04-25 | 2020-09-15 | Cognex Corporation | Systems and methods for stitching sequential images of an object |
CN112747677A (en) * | 2020-12-29 | 2021-05-04 | Guangzhou Aimooe Technology Co., Ltd. | Optical positioning method and system for multiple processors |
US11717973B2 (en) | 2021-07-31 | 2023-08-08 | Cognex Corporation | Machine vision system with multispectral light assembly |
US20230030276A1 (en) * | 2021-07-31 | 2023-02-02 | Cognex Corporation | Machine vision system and method with multispectral light assembly |
Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1300406A (en) * | 1999-04-07 | 2001-06-20 | Symbol Technologies, Inc. | Imaging engine and technology for reading postal codes |
US6766515B1 (en) * | 1997-02-18 | 2004-07-20 | Silicon Graphics, Inc. | Distributed scheduling of parallel jobs with no kernel-to-kernel communication |
US20080128503A1 (en) * | 2002-01-18 | 2008-06-05 | Microscan Systems, Inc. | Method and apparatus for rapid image capture in an image system |
CN101299194A (en) * | 2008-06-26 | 2008-11-05 | Shanghai Jiao Tong University | Heterogeneous multi-core system thread-level dynamic dispatching method based on configurable processor |
US20090066706A1 (en) * | 2005-05-13 | 2009-03-12 | Sony Computer Entertainment Inc. | Image Processing System |
US20090072037A1 (en) * | 2007-09-17 | 2009-03-19 | Metrologic Instruments, Inc. | Autofocus liquid lens scanner |
CN101466041A (en) * | 2009-01-16 | 2009-06-24 | Tsinghua University | Task scheduling method for multi-viewpoint video encoding on a multi-core processor |
CN101546276A (en) * | 2008-03-26 | 2009-09-30 | International Business Machines Corporation | Method for interrupt scheduling in a multi-core environment, and multi-core processor |
US20100097444A1 (en) * | 2008-10-16 | 2010-04-22 | Peter Lablans | Camera System for Creating an Image From a Plurality of Images |
CN101710986A (en) * | 2009-11-18 | 2010-05-19 | ZTE Corporation | H.264 parallel decoding method and system based on a homogeneous multi-core processor |
CN102034076A (en) * | 2009-10-01 | 2011-04-27 | Hand Held Products, Inc. | Low power multi-core decoder system and method |
US20110154090A1 (en) * | 2009-12-22 | 2011-06-23 | Dixon Martin G | Controlling Time Stamp Counter (TSC) Offsets For Multiple Cores And Threads |
CN102596002A (en) * | 2009-10-30 | 2012-07-18 | Carestream Health, Inc. | Intraoral camera with liquid lens |
US20120218442A1 (en) * | 2011-02-25 | 2012-08-30 | Microsoft Corporation | Global alignment for high-dynamic range image generation |
WO2012125296A2 (en) * | 2011-03-16 | 2012-09-20 | Microscan Systems, Inc. | Multi-core distributed processing for machine vision applications |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5166745A (en) * | 1990-05-01 | 1992-11-24 | The Charles Stark Draper Laboratory, Inc. | Rapid re-targeting, space-based, boresight alignment system and method for neutral particle beams |
DE19639854A1 (en) | 1996-09-27 | 1998-06-10 | Vitronic Dr Ing Stein Bildvera | Method and device for detecting optically detectable information applied to potentially large objects |
US7494064B2 (en) | 2001-12-28 | 2009-02-24 | Symbol Technologies, Inc. | ASIC for supporting multiple functions of a portable data collection device |
US20040169771A1 (en) * | 2003-01-02 | 2004-09-02 | Washington Richard G | Thermally cooled imaging apparatus |
US6690451B1 (en) * | 2003-02-06 | 2004-02-10 | Gerald S. Schubert | Locating object using stereo vision |
AT504940B1 (en) | 2007-03-14 | 2009-07-15 | Alicona Imaging Gmbh | METHOD AND APPARATUS FOR THE OPTICAL MEASUREMENT OF THE TOPOGRAPHY OF A SAMPLE |
CN102625108B (en) * | 2012-03-30 | 2014-03-12 | Zhejiang University | Multi-core-processor-based H.264 decoding method |
2013
- 2013-10-01 DE DE102013110899.7A patent/DE102013110899B4/en not_active Revoked
- 2013-10-08 CN CN202210397986.5A patent/CN114970580A/en active Pending
- 2013-10-08 CN CN201310465330.3A patent/CN103714307B/en active Active
- 2013-10-08 CN CN201810200359.1A patent/CN108460307B/en active Active
Patent Citations (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6766515B1 (en) * | 1997-02-18 | 2004-07-20 | Silicon Graphics, Inc. | Distributed scheduling of parallel jobs with no kernel-to-kernel communication |
CN1300406A (en) * | 1999-04-07 | 2001-06-20 | Symbol Technologies, Inc. | Imaging engine and technology for reading postal codes |
US20080128503A1 (en) * | 2002-01-18 | 2008-06-05 | Microscan Systems, Inc. | Method and apparatus for rapid image capture in an image system |
US20090066706A1 (en) * | 2005-05-13 | 2009-03-12 | Sony Computer Entertainment Inc. | Image Processing System |
US20090072037A1 (en) * | 2007-09-17 | 2009-03-19 | Metrologic Instruments, Inc. | Autofocus liquid lens scanner |
CN101546276A (en) * | 2008-03-26 | 2009-09-30 | International Business Machines Corporation | Method for interrupt scheduling in a multi-core environment, and multi-core processor |
CN101299194A (en) * | 2008-06-26 | 2008-11-05 | Shanghai Jiao Tong University | Heterogeneous multi-core system thread-level dynamic dispatching method based on configurable processor |
US20100097444A1 (en) * | 2008-10-16 | 2010-04-22 | Peter Lablans | Camera System for Creating an Image From a Plurality of Images |
CN101466041A (en) * | 2009-01-16 | 2009-06-24 | Tsinghua University | Task scheduling method for multi-viewpoint video encoding on a multi-core processor |
CN102034076A (en) * | 2009-10-01 | 2011-04-27 | Hand Held Products, Inc. | Low power multi-core decoder system and method |
CN102596002A (en) * | 2009-10-30 | 2012-07-18 | Carestream Health, Inc. | Intraoral camera with liquid lens |
CN101710986A (en) * | 2009-11-18 | 2010-05-19 | ZTE Corporation | H.264 parallel decoding method and system based on a homogeneous multi-core processor |
US20110154090A1 (en) * | 2009-12-22 | 2011-06-23 | Dixon Martin G | Controlling Time Stamp Counter (TSC) Offsets For Multiple Cores And Threads |
US20120218442A1 (en) * | 2011-02-25 | 2012-08-30 | Microsoft Corporation | Global alignment for high-dynamic range image generation |
WO2012125296A2 (en) * | 2011-03-16 | 2012-09-20 | Microscan Systems, Inc. | Multi-core distributed processing for machine vision applications |
Also Published As
Publication number | Publication date |
---|---|
CN108460307B (en) | 2022-04-26 |
CN103714307A (en) | 2014-04-09 |
CN114970580A (en) | 2022-08-30 |
DE102013110899B4 (en) | 2019-07-04 |
CN103714307B (en) | 2018-04-13 |
DE102013110899A1 (en) | 2014-04-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103714307B (en) | Symbol reader with multi-core processor and its operating system and method | |
US11606483B2 (en) | Symbology reader with multi-core processor | |
CN103338322B (en) | System and method for field-of-view expansion in a vision system | |
US8794521B2 (en) | Systems and methods for operating symbology reader with multi-core processor | |
CN102589475B (en) | Three dimensional shape measurement method | |
CN102693407B (en) | Automatic exposure method using successive video frames under controlled lighting conditions | |
CN104923923A (en) | Laser positioning cutting system based on large-format visual guidance and distortion rectification | |
CN105022980B (en) | Bar code image recognition apparatus | |
RU2498931C2 (en) | Data collection system for locking of bottles and its application (versions) | |
CN104838255B (en) | Measurement of fiber direction in carbon fiber material and manufacture of objects with carbon fiber composite structures | |
CN103842259A (en) | Device and method for aligning containers | |
CN105700280A (en) | Structured-Light Projector And Three-Dimensional Scanner Comprising Such A Projector | |
CN108537082A (en) | Device and method using dual-target automatic exposure | |
CN104520698B (en) | Optical method for inspecting transparent or translucent containers bearing visual motifs | |
CN106529365B (en) | Automatic pricing machine | |
US11676366B1 (en) | Methods to detect image features from variably-illuminated images | |
US20230030276A1 (en) | Machine vision system and method with multispectral light assembly | |
CN102590226A (en) | Detection system for detecting transparent packaging film with patterns | |
CA2633744A1 (en) | Counting device for small series | |
CN102737436A (en) | Subject discriminating apparatus and coin discriminating apparatus | |
CN114577805A (en) | MiniLED backlight panel defect detection method and device | |
CN202915911U (en) | Shooting device for distance measurement | |
CN106483734A (en) | Light fixture | |
CN109284407A (en) | Device for automatic labeling of training data sets for an intelligent sales counter | |
WO2023134304A1 (en) | Optical information collection apparatus and optical information collection method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||