WO2024121556A1 - Sensor - Google Patents

Sensor

Info

Publication number
WO2024121556A1
Authority
WO
WIPO (PCT)
Prior art keywords
sensing
sensor
signal
aperture
view
Application number
PCT/GB2023/053147
Other languages
French (fr)
Inventor
Martin Mcnestry
Original Assignee
Videojet Technologies Inc.
Application filed by Videojet Technologies Inc. filed Critical Videojet Technologies Inc.
Publication of WO2024121556A1 publication Critical patent/WO2024121556A1/en


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B41: PRINTING; LINING MACHINES; TYPEWRITERS; STAMPS
    • B41J: TYPEWRITERS; SELECTIVE PRINTING MECHANISMS, i.e. MECHANISMS PRINTING OTHERWISE THAN FROM A FORME; CORRECTION OF TYPOGRAPHICAL ERRORS
    • B41J11/00: Devices or arrangements of selective printing mechanisms, e.g. ink-jet printers or thermal printers, for supporting or handling copy material in sheet or web form
    • B41J11/0095: Detecting means for copy material, e.g. for detecting or sensing presence of copy material or its leading or trailing end
    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01P: MEASURING LINEAR OR ANGULAR SPEED, ACCELERATION, DECELERATION, OR SHOCK; INDICATING PRESENCE, ABSENCE, OR DIRECTION, OF MOVEMENT
    • G01P3/00: Measuring linear or angular speed; Measuring differences of linear or angular speeds
    • G01P3/64: Devices characterised by the determination of the time taken to traverse a fixed distance
    • G01P3/68: Devices characterised by the determination of the time taken to traverse a fixed distance using optical means, i.e. using infrared, visible, or ultraviolet light
    • B41J3/00: Typewriters or selective printing or marking mechanisms characterised by the purpose for which they are constructed
    • B41J3/407: Typewriters or selective printing or marking mechanisms characterised by the purpose for which they are constructed for marking on special material
    • B41J3/4073: Printing on three-dimensional objects not being in sheet or web form, e.g. spherical or cubic objects
    • B41J3/40733: Printing on cylindrical or rotationally symmetrical objects, e.g. on bottles
    • B65: CONVEYING; PACKING; STORING; HANDLING THIN OR FILAMENTARY MATERIAL
    • B65B: MACHINES, APPARATUS OR DEVICES FOR, OR METHODS OF, PACKAGING ARTICLES OR MATERIALS; UNPACKING
    • B65B61/00: Auxiliary devices, not otherwise provided for, for operating on sheets, blanks, webs, binding material, containers or packages
    • B65B61/26: Auxiliary devices, not otherwise provided for, for operating on sheets, blanks, webs, binding material, containers or packages for marking or coding completed packages

Definitions

  • the present invention relates to a sensor for detecting a succession of objects carried past the sensor on a conveyor, and a method of detecting an object.
  • a processing station may be a marking or printing station.
  • the objects may be products such as manufactured articles or packaged food stuffs and a printer may be located at the printing station and used to print product and batch information, “use by” dates etc.
  • the printer may be a non-contact printer such as an industrial ink jet printer or a laser marking system (i.e. using a laser to print by directing a laser beam at an object so as to mark the object by changing a surface characteristic of the object).
  • the printer may be a continuous ink jet printer, for example an electrostatic deflection continuous inkjet printer.
  • Figure 1 illustrates schematically a known printing station 100.
  • a printer 101 having a printhead 102 and a main body 107 is positioned adjacent to a conveyor 103.
  • Products 104 are caused to move in a movement direction 105 past a marking location 106.
  • the printhead 102 applies a mark 104a to each of the products 104.
  • In order to position the marking correctly on each object, it is known to use a sensor 108 upstream of the printer 101 to detect an approaching object 104 and trigger printing. To position the printing correctly, the system also needs to delay the start of printing, following detection of the approaching object, by the time it takes the object to travel from its position when it is detected to its position for the start of printing. It is known to calculate this delay from the distance to be travelled (which is known) and the line speed (i.e. the speed at which objects are carried past the printer by a conveyor). The line speed may also be used to adjust the printing operation to ensure the correct spacing of the printing in the direction of movement of the objects, and to adjust other factors that control print quality.
  • the line speed may be detected using a shaft encoder 110, or alternatively a second sensor may be used, spaced from the first sensor in the direction of travel of the objects, and the line speed can be calculated from the time taken for an object to travel from one sensor to the other.
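As a minimal illustration of the timing arithmetic described above (a sketch only; the function names and example figures are hypothetical, not taken from the patent):

```python
# Sketch of the trigger-delay calculation: line speed from two sensors a
# known distance apart, then the delay between detection and printing.
# All names and values are illustrative assumptions.

def line_speed_mm_s(sensor_spacing_mm: float, transit_time_s: float) -> float:
    """Line speed from the spacing of two sensors and the time an
    object takes to travel from the first sensor to the second."""
    return sensor_spacing_mm / transit_time_s

def print_delay_s(delay_distance_mm: float, speed_mm_s: float) -> float:
    """Time to wait, after detection, before starting to print."""
    return delay_distance_mm / speed_mm_s

# e.g. sensors 20 mm apart, 0.04 s transit -> 500 mm/s line speed;
# a sensor 150 mm upstream of the marking location -> 0.3 s delay.
speed = line_speed_mm_s(20.0, 0.04)
print(print_delay_s(150.0, speed))  # 0.3
```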
  • the sensor 108 or sensors typically comprise a photocell.
  • each sensor may be constructed as a light source and a photodetector positioned close together, so that the photodetector detects light originating from the light source and reflected by an object when the object is present.
  • the sensor may comprise a reflector 108a positioned on the other side of the conveyor 103 from the sensor 108.
  • In order for the sensors to detect the presence of an object reliably, they must be able to distinguish between the signal received when an object is present and the signal received in the absence of any object.
  • aspects of the present disclosure provide a low-cost sensor configured to detect the speed of an object and optionally, both speed and location of an object.
  • a sensor configured to detect an object moving along a predefined movement path.
  • the sensor comprises a first sensing element, a second sensing element, and a sensor housing configured to shield the first sensing element and the second sensing element, the sensor housing comprising a first wall portion defining a first sensing aperture and a second wall portion defining a second sensing aperture.
  • the first sensing aperture is configured to allow electromagnetic radiation to travel towards the first sensing element, the first sensing aperture and the first sensing element defining a first field of view, the first field of view extending from the first sensing aperture in a sensing direction and comprising a first plurality of radiation paths from locations within the first field of view, via the first sensing aperture, to the first sensing element.
  • the second sensing aperture is configured to allow electromagnetic radiation to travel towards the second sensing element, the second sensing aperture and the second sensing element defining a second field of view, the second field of view extending from the second sensing aperture in the sensing direction and comprising a second plurality of radiation paths from locations within the second field of view, via the second sensing aperture, to the second sensing element.
  • the sensor housing is configured to block paths of radiation between locations external to the first field of view and the first sensing element and locations external to the second field of view and the second sensing element.
  • the first aperture and the second aperture are separated by a sensor separation distance from each other in a direction parallel to the predefined movement path of the object.
  • An aperture may refer to an opening in the housing e.g. a circular opening, or an elongate opening, such as a slot.
  • the sensor housing may comprise a single housing shielding both of the first and second sensing elements and providing each of the first and second sensing apertures.
  • the sensor housing may comprise multiple housing or wall portions, one shielding the first sensing element and providing the first sensing aperture, and a second shielding the second sensing element and providing the second sensing aperture.
  • the sensor housing may be configured to shield the first and second sensing elements from radiation reflected from, or otherwise originating from, locations outside of the respective fields of view.
  • the sensor housing may comprise a single housing enclosing both of the first and second sensing elements and providing each of the first and second sensing apertures.
  • features described in relation to the first sensing aperture, first sensing element and sensor housing may be applied to the second sensing aperture and second sensing element as appropriate. That is, the first and second sensing arrangements may be generally similar, or even identical, to one another, but spatially separated.
  • the predefined movement path may be defined by a production line or packaging line.
  • the predetermined movement path may be substantially perpendicular to the first and second sensing directions.
  • a movement direction of the object may be in either of two directions along the predetermined movement path (e.g. left-right past the sensor, or right-left past the sensor).
  • the first and second pluralities of radiation paths from locations within the first and second fields of view to the respective first and second sensing elements do not pass through a lens.
  • the fields of view are preferably not defined by a lens. That is, the fields of view are defined by the apertures and sensor geometry, rather than by lenses.
  • the primary optical components between the sensing elements and the object are the apertures, and not lenses.
  • the apertures may, in some cases, be covered by a transparent window.
  • a transparent window would serve to prevent the ingress of debris (e.g. ink or dust) into the sensing cavity, but would not have any optical power, and would not, therefore, act as a lens.
  • the apertures may, in some cases, be covered by a low power lens.
  • a low power lens may serve to prevent the ingress of debris (e.g. ink or dust) into the sensing cavity, but would not have any significant optical power, and would not, therefore, significantly contribute to defining the fields of view.
  • the first sensing aperture may have a width in the direction parallel to the predefined movement path that is less than 5 mm.
  • the first sensing aperture may have a height perpendicular to the direction parallel to the predefined movement path that is greater than or equal to the width.
  • the first sensing aperture may have a height of at least 5 mm.
  • a first sensing length may be defined between the first sensing aperture and the first sensing element.
  • the first sensing length may be at least 5 times a width of the first aperture in the direction parallel to the predefined movement path.
  • the first sensing length may be at least 3 times a width of the first sensing element in the direction parallel to the predefined movement path.
  • the first sensing length may be at least 3 times the larger of the width of the aperture in the direction parallel to the predefined movement path and the width of the sensing element in the same direction.
  • the second aperture and second sensing element may have similar dimensions, and dimensional relationships to the first aperture and first sensing element.
  • the first sensing element and the second sensing element may have substantially the same dimensions.
  • the first sensing aperture and the second sensing aperture may have substantially the same dimensions.
  • a spatial relationship between the first aperture and the first sensing element may be substantially the same as a spatial relationship between the second aperture and the second sensing element.
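As an illustration only, a small check of the dimensional guidelines quoted in the preceding items (the limits are those stated above; the function name and example values are assumptions):

```python
# Sketch of the stated dimensional guidelines for an aperture/element
# pair; the function name and example dimensions are assumptions.

def geometry_ok(aperture_w_mm: float, element_w_mm: float,
                sensing_len_mm: float) -> bool:
    """True if an aperture/element pair meets the stated guidelines."""
    return (aperture_w_mm < 5.0                       # aperture width under 5 mm
            and sensing_len_mm >= 5 * aperture_w_mm   # length >= 5 x aperture width
            and sensing_len_mm >= 3 * element_w_mm    # length >= 3 x element width
            and sensing_len_mm >= 3 * max(aperture_w_mm, element_w_mm))

print(geometry_ok(1.0, 2.0, 10.0))  # True: 10 mm >= 5 mm and >= 6 mm
```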
  • An internal surface of the housing may comprise a surface having a low reflectance.
  • a surface having a low reflectance may suppress internal reflections, minimising the risk that stray radiation will reach the sensing element from locations other than the sensing region.
  • the sensor may further comprise a first radiation source configured to emit a first beam of electromagnetic radiation in the sensing direction.
  • a first sensing region may be defined by an overlap between the first beam of electromagnetic radiation and the first field of view.
  • the first radiation source may be configured such that no direct radiation path exists from the first radiation source to the first and second sensing elements.
  • the first radiation source may be provided externally of the housing.
  • in the sensing region, radiation from the source can be reflected from an object and directed towards the sensing element.
  • An object outside of the sensing region will either not be within the field of view of the sensor, or not be illuminated by the radiation source. As such, objects outside each field of view will not cause radiation from the radiation source to be reflected from an object and directed towards the respective sensing element.
  • a central axis of the first beam of emitted electromagnetic radiation may be perpendicular to the predefined movement path.
  • a central axis of the first field of view may be perpendicular to the predefined movement path.
  • at each location in the first sensing region there may exist a direct radiation path from the first radiation source, and a direct radiation path to the first sensing element, via the first sensing aperture.
  • An offset angle may be defined between a central axis of the first beam of electromagnetic radiation emitted by the first radiation source and a central axis of the first field of view.
  • the sensor may further define a minimum sensing distance between the first sensing aperture and the onset of the first sensing region in the sensing direction.
  • the sensor may further define a maximum sensing distance between the first sensing aperture and the end of the first sensing region in the sensing direction.
  • the sensing direction may preferably be a direction extending from the sensor, perpendicular to the predefined movement path.
  • the sensor may further comprise a second radiation source configured to emit a second beam of electromagnetic radiation in the sensing direction.
  • a second sensing region may be defined by an overlap between the second beam of electromagnetic radiation and the second field of view.
  • the second beam of radiation may have equivalent spatial characteristics to those of the first beam of radiation.
  • each radiation beam can be oriented similarly with respect to the respective field of view.
  • the first beam of radiation and/or the second beam of radiation may have a divergence angle of less than around 16 degrees.
  • the first beam of radiation and/or the second beam of radiation may have a divergence angle of around 8 degrees.
  • a central axis of the first beam of electromagnetic radiation and a central axis of the second beam of electromagnetic radiation may be parallel to one another, and perpendicular to the predefined movement path.
  • the first and second radiation sources may be configured such that no direct radiation path exists from the first and second radiation sources to the first and second sensing elements.
  • the first and second radiation sources may each be provided externally of the housing. In the sensing regions, radiation from the source can be reflected from an object and directed towards the sensing element. An object outside of the sensing region will either not be within the field of view of the sensor, or not be illuminated by the radiation source. As such, objects outside each field of view will not cause radiation from the respective radiation source to be reflected from an object and directed towards the respective sensing element.
  • at each location in each of the first and second sensing regions there may exist a direct radiation path from the respective radiation source, and a direct radiation path to the respective sensing element, via the respective sensing aperture.
  • Each of the first field of view and the second field of view may have a parallel viewing angle extending in a direction parallel to the predefined movement path and a perpendicular viewing angle extending in a direction perpendicular to the predefined movement path, the parallel viewing angle being less than the perpendicular viewing angle.
  • the viewing angle may be determined by the sensing element size, the aperture size and the separation between the sensing element and the aperture.
  • the viewing angle may be referred to as a divergence angle.
  • Selection of the divergence angle of the beam of radiation, the viewing angle of the field of view, and the offset angle between the beam of radiation and the field of view allows the minimum sensing distance and maximum sensing distance to be determined and controlled.
  • the sensor may be configured to pulse the first radiation source at a first pulse frequency and to obtain signals from the first sensing element at the first pulse frequency.
  • the second radiation source may be pulsed in a similar manner.
  • the first and second radiation sources may be pulsed at different frequencies to each other.
  • the first and second radiation sources may be pulsed at different times to each other. In this way, it is possible to allow some spatial overlap between the first and second sensing regions (thereby allowing closer sensor spacing), while still avoiding cross-talk between the two sensors.
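A minimal sketch of one possible time-multiplexed pulsing scheme of the kind described above (the scheduling scheme, names and the 100 kHz figure are illustrative assumptions, not the patent's specified implementation):

```python
# Sketch (an assumed scheme) of time-multiplexed pulsing: the two LEDs
# fire on alternating time slots and each sensing element is read only
# in its own source's slot, so a reflection of one beam cannot register
# on the other channel.

import itertools

def pulse_schedule(pulse_period_s: float):
    """Yield (time, active_source): sources 1 and 2 alternate, each
    firing once per period, half a period apart from the other."""
    for n in itertools.count():
        yield n * pulse_period_s / 2, 1 if n % 2 == 0 else 2

sched = pulse_schedule(10e-6)  # 100 kHz pulse rate per source (assumed)
for _ in range(4):
    print(next(sched))
# (0.0, 1), (5e-06, 2), (1e-05, 1), (1.5e-05, 2)
```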
  • a central axis of the first field of view and a central axis of the second field of view may be parallel to each other, and perpendicular to the predefined movement path.
  • the central axis of the first field of view and the central axis of the second field of view may be separated from one another by around 5 mm or less in the direction parallel to the predefined movement path.
  • the sensor may be configured to generate an object detection signal based on a first analog signal received from the first sensing element and/or a second analog signal received from the second sensing element.
  • the sensor may be configured to receive the first and/or second analog signals from the respective first and second sensing elements and to generate a respective detection signal based on each of the respective first and second analog signals.
  • the sensor may be configured to generate an object detection signal based on the first and/or second detection signals.
  • the sensor may be configured to generate an object speed signal based upon a first analog signal received from the first sensing element and a second analog signal received from the second sensing element.
  • the object speed signal may comprise a quadrature output signal comprising a first quadrature signal and a second quadrature signal, each of the first and second quadrature signals comprising a periodic signal component having a frequency that is proportional to the speed of the object, and a phase difference between the first and second quadrature signals being controlled based upon a direction of the object.
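The quadrature encoding described above can be illustrated with a short sketch (the scale factor relating speed to output frequency, and all names, are hypothetical; only the structure of a common frequency with a direction-dependent 90 degree phase shift follows the text):

```python
# Sketch of a quadrature output: frequency proportional to speed, with
# the sign of the 90 degree phase shift encoding direction.

import math

def quadrature_levels(t_s: float, speed_mm_s: float, forward: bool,
                      hz_per_mm_s: float = 1.0) -> tuple:
    """Square-wave levels (A, B) at time t. The common frequency is
    proportional to speed; B lags A by 90 degrees when the object
    moves forward and leads by 90 degrees when it moves in reverse."""
    phase = 2 * math.pi * (speed_mm_s * hz_per_mm_s) * t_s
    shift = -math.pi / 2 if forward else math.pi / 2
    a = 1 if math.sin(phase) >= 0 else 0
    b = 1 if math.sin(phase + shift) >= 0 else 0
    return a, b
```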
  • the sensor may comprise a sensor control circuit configured to receive a first sensor signal from the first sensing element.
  • the sensor control circuit may comprise one or more of: an amplifier configured to generate a first amplified sensor signal based upon the first sensor signal; a filter configured to generate a first filtered sensor signal based upon the first sensor signal; a peak detector configured to generate a first amplitude signal indicative of an amplitude of the first sensor signal; an analog to digital convertor configured to generate a first digital sensor signal based on the first sensor signal; and a signal processor configured to generate an object detection signal based upon the first sensor signal.
  • Each of the processing steps may be performed sequentially. As such, by generating a signal “based upon the first sensor signal”, it will be appreciated that one or more intermediate steps may also be performed.
  • any of the first sensor signal, the first amplified sensor signal, the first filtered sensor signal and the first amplitude signal may be referred to as a first analog signal.
  • the filter may be a band-pass filter, configured to exclude signal components at frequencies other than the first pulse frequency or multiples thereof.
  • the amplifier may be configured to generate the first amplified sensor signal based upon the first sensor signal.
  • the filter may be configured to generate the first filtered sensor signal based upon the first amplified sensor signal.
  • the peak detector may be configured to generate the first amplitude signal based upon the first filtered sensor signal.
  • the analog to digital convertor may be configured to generate the first digital sensor signal based upon the first amplitude signal.
  • the signal processor may be configured to generate the object detection signal based upon the first digital sensor signal.
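A simplified, hedged model of this processing chain follows. In the sensor described, the stages up to the ADC are analog hardware; this purely digital sketch only illustrates the ordering and data flow, and all names and parameter values are assumptions:

```python
# Simplified digital model of the chain (amplifier -> band-pass filter
# -> peak/envelope detector -> ADC), with assumed gain, filter order
# and resolution.

import numpy as np
from scipy.signal import butter, sosfilt

def process_channel(raw: np.ndarray, fs_hz: float, pulse_hz: float,
                    gain: float = 100.0, bits: int = 12) -> np.ndarray:
    amplified = gain * raw                                  # amplifier
    sos = butter(2, [0.9 * pulse_hz, 1.1 * pulse_hz],       # band-pass centred
                 btype="bandpass", fs=fs_hz, output="sos")  # on the pulse rate
    filtered = sosfilt(sos, amplified)
    envelope = np.abs(filtered)                             # crude peak detector
    full_scale = envelope.max() if envelope.max() > 0 else 1.0
    codes = np.clip(envelope / full_scale * (2**bits - 1),  # ADC: quantise to
                    0, 2**bits - 1).astype(int)             # 12-bit codes
    return codes
```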
  • the sensor control circuit may be further configured to receive a second sensor signal from the second sensing element and to perform equivalent processing steps thereon.
  • a printer or marking system comprising a printhead or marking head configured to cause a mark to be created on an object moving in the expected movement direction past a marking location, and a sensor according to the first aspect.
  • the printer or marking system may be configured to receive an object detection signal from the sensor; and initiate printing or marking based upon the object detection signal.
  • the printer or marking system may further receive an object movement signal indicating a direction and/or speed of the object. Printing or marking may be controlled based upon the object movement signal.
  • a method of detecting an object comprises: defining a first field of view, the first field of view extending from a first sensing aperture defined by a first wall portion in a sensing direction and comprising a first plurality of radiation paths from locations within the first field of view, via the first sensing aperture, to a first sensing element; defining a second field of view, the second field of view extending from a second sensing aperture defined by a second wall portion in the sensing direction and comprising a second plurality of radiation paths from locations within the second field of view, via the second sensing aperture, to a second sensing element; blocking paths of radiation between locations external to the first and second fields of view and the respective sensing elements; receiving, by the first sensing element, electromagnetic radiation reflected by an object present at a sensing location within the first field of view; generating, by the first sensing element, a first sensor signal indicative of an amount of radiation incident upon the first sensing element; receiving, by the second sensing element, electromagnetic radiation reflected by an object present at a sensing location within the second field of view; generating, by the second sensing element, a second sensor signal indicative of an amount of radiation incident upon the second sensing element; and generating an object detection signal based upon the first and/or second sensor signals.
  • the method may further comprise, for each of the first and second sensing elements: amplifying the sensor signal; filtering the amplified signal to generate a filtered sensor signal; generating an amplitude signal based on the filtered sensor signal; and converting the amplitude signal to a digital sensor signal.
  • the method may further comprise generating the object detection signal based upon first and/or second digital sensor signals.
  • the method may further comprise, for each of the first and second sensing elements: generating a baseline sensor signal based upon past values of the sensor signal; and generating a comparison result by comparing the baseline sensor signal with a current sensor signal.
  • the method may further comprise generating the object detection signal based upon the first and/or second comparison results.
  • the past values may comprise a plurality of discrete past values (e.g. ADC sample values).
  • the baseline sensor signal may be generated by combining (e.g. averaging) the plurality of discrete values.
  • the past values may comprise a continuous analog signal extending over a period of time. Such a continuous signal may be averaged by an analog component (e.g. a capacitor, an integrator) to generate the baseline sensor signal.
  • Generating the baseline signal may be based on past values of the sensor signal representing at least 5 seconds duration. Generating the baseline signal may be performed at periodic intervals. Generating the baseline signal may be performed at a predetermined time (e.g. at product start-up as part of an initialisation routine).
  • the current sensor signal for each sensing element may be generated based on a plurality of sensor signal values received from the respective sensing element.
  • the current sensor signal may comprise the amplitude signal.
  • Each sensor signal may comprise a plurality of sensor signal values, each representing a respective point in time.
  • Each of the plurality of sensor signal values may be generated by sampling an analog signal (e.g. the amplitude signal) with an ADC.
  • Generating the detection signal may comprise identifying a plurality of consecutive sensor signal values of the plurality of sensor signal values that satisfy a predetermined criterion.
  • the predetermined criterion may comprise the plurality of consecutive sensor signal values each exceeding a difference threshold from the baseline signal.
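A minimal sketch of the baseline-and-threshold detection logic described in the preceding items, assuming discrete ADC samples and an averaged baseline (the history length, threshold and run length are illustrative values, not figures from the patent):

```python
# Sketch of baseline tracking plus a consecutive-sample criterion.

from collections import deque

class ObjectDetector:
    def __init__(self, history_len=1000, threshold=50, run_needed=3):
        self.history = deque(maxlen=history_len)  # past ADC samples
        self.threshold = threshold                # counts from baseline
        self.run_needed = run_needed              # consecutive samples
        self.run = 0

    def update(self, sample: int) -> bool:
        """Feed one ADC sample; True once enough consecutive samples
        differ from the averaged baseline by more than the threshold."""
        baseline = (sum(self.history) / len(self.history)
                    if self.history else sample)
        self.history.append(sample)
        self.run = self.run + 1 if abs(sample - baseline) > self.threshold else 0
        return self.run >= self.run_needed
```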
  • Similar processing may be performed for signals received from each of the sensing elements.
  • the method may further comprise generating an object movement signal based upon the first sensor signal and/or the second sensor signal.
  • the object movement signal may comprise an object direction signal and/or an object speed signal.
  • the object direction signal may be generated based on a comparison between the first and second sensor signals.
  • the object speed signal may be generated based on a temporal separation between first and second detection signals.
  • the object speed signal may be generated further based on a spatial separation between the first and second sensing locations.
  • the object speed signal may be generated based on generated object speed data satisfying a speed criterion.
  • the speed criterion may be the speed being greater than a minimum detection speed (e.g. 5 mm/s) and less than a maximum detection speed (e.g. 5 m/s).
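As an illustration, a sketch of the speed calculation and plausibility gating described above (the speed limits are those quoted in the text; the 5 mm separation in the usage example is an assumption):

```python
# Sketch of speed from the temporal and spatial separation of the two
# detection events, gated by a plausibility window.

def object_speed_mm_s(separation_mm: float, t1_s: float, t2_s: float,
                      min_mm_s: float = 5.0, max_mm_s: float = 5000.0):
    """Speed from the detection times at the two sensing locations, or
    None if outside the accepted range. The sign of (t2 - t1) also
    indicates the direction of travel."""
    dt = t2_s - t1_s
    if dt == 0:
        return None
    speed = separation_mm / abs(dt)
    if not (min_mm_s <= speed <= max_mm_s):
        return None  # implausible: reject rather than report
    return speed if dt > 0 else -speed

print(object_speed_mm_s(5.0, 0.000, 0.010))  # 500.0 (sensor 1 -> sensor 2)
```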
  • the method may further comprise generating a first quadrature signal and a second quadrature signal, each of the first and second quadrature signals comprising a periodic signal component having a frequency that is proportional to the speed of the object, and controlling the phase difference between the first and second quadrature signals based upon a direction of the object.
  • the method may further comprise controlling an industrial printer or marking system based upon the object detection signal.
  • Controlling the printer or marking system may comprise initiating a printing or marking operation upon receipt of the detection signal.
  • Generating the object detection signal may further comprise comparing a current first sensor signal and a current second sensor signal and generating the object detection signal if a predetermined criterion is satisfied.
  • the predetermined criterion may comprise a degree of similarity between the first current sensor signal and the second current sensor signal, or the first comparison result and the second comparison result.
  • By assessing a similarity between the first and second signals and/or the first and second comparison results, it is possible to compare the shape of the first and second signals. In this way, anomalous object detection results can be minimised, since in a correct detection result the waveforms of the first and second signals would be expected to be time-shifted versions of one another.
  • a detection result may be rejected if the signals do not match sufficiently.
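One plausible way to implement such a check is a normalised cross-correlation between the two channel waveforms; the sketch below is an assumption about how "matching sufficiently" might be assessed, not the patent's stated method:

```python
# Normalised cross-correlation scores highly when one waveform is a
# time-shifted copy of the other, and poorly for unrelated signals.

import numpy as np

def signals_match(sig1: np.ndarray, sig2: np.ndarray,
                  min_corr: float = 0.8) -> bool:
    """True if the best-aligned normalised correlation of two
    equal-length waveforms reaches the acceptance threshold."""
    a = (sig1 - sig1.mean()) / (sig1.std() if sig1.std() > 0 else 1.0)
    b = (sig2 - sig2.mean()) / (sig2.std() if sig2.std() > 0 else 1.0)
    corr = np.correlate(a, b, mode="full") / len(a)
    return float(corr.max()) >= min_corr
```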
  • the method may further comprise controlling the industrial printer or marking system based upon an object movement signal (e.g. an object speed signal).
  • Figure 1 shows a top view of a known printing station
  • Figure 2 shows a top view of a printing station including a sensor according to the invention
  • Figure 3a shows a front view of a sensor as shown in Figure 2 in more detail
  • Figure 3b shows a top cross-section view of a sensor as shown in Figure 2 in more detail
  • Figures 4a and 4b show a sensor as shown in Figure 2 in more detail
  • Figure 5 shows a side cross-section view of part of a sensor as shown in Figure 2 in more detail
  • Figure 6 shows a block diagram of a control circuit for a sensor as shown in Figure 2.
  • Figure 2 illustrates schematically a printing station 200 according to the invention.
  • a printer 201 having a printhead 202 is positioned adjacent to a conveyor 203.
  • Products 204 are caused to move along a predefined movement path MP in an expected movement direction 205 past a marking location 206.
  • the printhead 202 applies a mark 204a to each of the products 204.
  • a sensor 208 constructed according to the invention is positioned upstream of the printer 201 to detect an approaching object 204 and trigger printing.
  • in use, the direction of movement of the products 204 is known (i.e. the expected movement direction 205). However, during configuration, this may not be known. Moreover, the printer 201 may not know the direction of movement (i.e. whether left-to-right or right-to-left). It is desirable, therefore, to provide information regarding the direction of movement along the predefined movement path MP to the printer.
  • references to the predefined movement path MP and the expected movement direction 205 may be used interchangeably in some instances. That is, where reference is made to distances, or directions, the terms “predefined movement path” MP and “expected movement direction” 205 may be used interchangeably, since a direction parallel or perpendicular to one of these terms is equivalent to a direction parallel or perpendicular to the other of these terms.
  • the printer may be an industrial printer or marking system.
  • the printer may be a non-contact printer such as, for example, a continuous inkjet printer (with a single, or multiple jets), a drop on demand inkjet printer, or a laser marking system.
  • the printer will be controlled separately from the location of objects 204 to be marked. That is, unlike a document printing system (e.g. a desktop printer) in which the printer typically controls both the printhead and the medium upon which the image is to be printed, or a laser writing system in which an object is secured to a stage that is under common control with the laser beam, the industrial printer or marking system in the present case may itself have no direct knowledge or control of the location of the objects being marked.
  • the printer or marking system may be arranged next to or above a production line or conveyor belt 203, and may be required to print on objects 204 as and when they appear at the marking location 206.
  • the supply of objects may be intermittent, with variable spacing and speed.
  • the printer 201 may have no internally available source of information regarding the location or speed of objects to be marked.
  • the printer 201 may, for example, be an ink jet printer. It may comprise means for deflecting the ink drops in flight, so that different drops can travel to different destinations.
  • the ink is electrically conductive when wet, and the printer comprises an arrangement of electrodes to trap electric charges on the ink drops and create electrostatic fields in order to deflect the charged drops.
  • the ink jet printer has a print head that is separate from the main printer body and is connected to the main printer body by a flexible connector sometimes known as a conduit or umbilical that carries fluid and electrical connections between the print head and the main printer body.
  • the printhead 202 may include a droplet generator that receives pressurised ink and allows it to exit through an orifice to form a jet of ink, a charge electrode for trapping electric charges on drops of ink, deflection electrodes for creating an electrostatic field for deflecting charged drops of ink, and a gutter for collecting drops of ink that are not used for printing.
  • the umbilical will include fluid lines, for example for providing pressurised ink to the ink gun and for applying suction to the gutter and transporting ink from the gutter back to the main printer body, and electrical lines, for example to provide a drive signal to a piezoelectric crystal or the like for imposing pressure vibrations on the ink jet, to provide electrical connections for the charge electrode and the deflection electrodes, and to provide drive currents for any valves that may be included in the print head.
  • the sensor 208 is configured to detect objects approaching the marking location 206 in the expected direction of movement 205.
  • the sensor 208 detects objects as they enter a sensing location 209.
  • the sensing location 209 extends from the sensor 208 in a sensing direction 210.
  • the sensor 208 is optimally arranged such that the sensing direction 210 is perpendicular to the expected direction of movement 205 of the object 204 on the conveyor 203.
  • the printer 201 is configured to delay the start of printing, following detection of an approaching object by the sensor 208, by the time it takes the object to travel a delay distance 211 from the sensing location 209 to the marking location 206.
  • the printer 201 further comprises a printer main body 207 (e.g. housing an ink supply, or a laser source), and a printer controller 212.
  • Figures 3a and 3b show the sensor 208 in more detail.
  • Figure 3a shows a front view of the sensor 208, with the viewing position being that of an object in the sensing location 209.
  • Figure 3b shows a cross-section view looking from above, with the cross section taken along line A-A’, shown in Figure 3a.
  • the sensor 208 comprises a first sensing element 300 and a first sensing aperture 302.
  • the first sensing aperture 302 is configured to allow a limited portion of external electromagnetic radiation to travel towards the first sensing element 300. Together, the first sensing aperture 302 and the first sensing element 300 define a first field of view 304.
  • the first field of view 304 extends from the first sensing aperture 302 in the sensing direction 210.
  • the first field of view 304 comprises a first plurality of radiation paths 304a, 304b, from locations within the first field of view 304, via the first sensing aperture 302, to the first sensing element 300.
  • the first field of view 304 comprises a first central axis 306, which extends from the sensor 208 in the sensing direction 210.
  • the sensor further comprises a second sensing element 310 and a second sensing aperture 312.
  • the second sensing aperture 312 is configured to allow a limited portion of external electromagnetic radiation to travel towards the second sensing element 310.
  • the second sensing aperture 312 and the second sensing element 310 define a second field of view 314.
  • the second field of view 314 extends from the second sensing aperture 312 in the sensing direction 210.
  • the second field of view 314 comprises a second plurality of radiation paths 314a, 314b, from locations within the second field of view 314, via the second sensing aperture 312, to the second sensing element 310.
  • the second field of view 314 comprises a second central axis 316, which extends from the sensor 208 in the sensing direction 210.
  • the first central axis 306 and the second central axis 316 are parallel to one another.
  • the sensor 208 further comprises a sensor housing 320 configured to enclose and separate the first sensing element 300 and the second sensing element 310.
  • the sensor housing 320 further defines the first sensing aperture 302 and the second sensing aperture 312.
  • the sensor housing 320 further prevents any radiation from the second sensing aperture 312 from reaching the first sensing element 300, or any radiation from the first sensing aperture 302 from reaching the second sensing element 310.
  • the sensor housing 320 comprises a first wall 320a defining the first sensing aperture 302 and a second wall portion 320b defining the second sensing aperture 312.
  • a first sensing cavity 301 is formed enclosing the first sensing element 300 and volume of space between the first sensing aperture 302 and first sensing element 300.
  • the first sensing element 300, first sensing cavity 301 and first sensing aperture 302 together comprise a first sensing arrangement 303.
  • a second sensing cavity 311 is formed enclosing the second sensing element 310 and volume of space between the second sensing aperture 312 and second sensing element 310.
  • the second sensing element 310, second sensing cavity 311 and second sensing aperture 312 together comprise a second sensing arrangement 313.
  • a third wall portion 320c separates the first and second sensing cavities 301, 311.
  • first and second sensing arrangements 303, 313 are preferably substantially similar. That is, by providing first and second sensing elements 300, 310 having substantially the same dimensions, first and second sensing apertures 302, 312 having substantially the same dimensions, and spatial relationships between the first aperture 302 and the first sensing element 300, and the second aperture 312 and the second sensing element 310 that are substantially the same, it is possible to provide similar fields of view. In some cases, first and second sensing arrangements 303, 313 may be identical to one another, but spatially separated in the expected direction of movement 205.
  • the sensor housing 320 is configured to block paths of radiation 322a, 322b between locations external to the first field of view 304 and the first sensing element 300 and locations external to the second field of view 314 and the second sensing element 310.
  • the paths 322a, 322b may be blocked by the external surface of the housing 320 (see e.g. path 322b, which is blocked by wall portion 320a), or alternatively may be blocked as a result of the internal structure (e.g. wall portion 320c) after passing through one of the apertures 302, 312 (e.g. path 322a).
  • the sensor housing 320 defines the first sensing cavity 301 fully enclosing (except for the first aperture 302) the first sensing element 300, and the second sensing cavity 311 fully enclosing (except for the second aperture 312) the second sensing element 310.
  • the sensor housing 320 may not fully enclose the first and second sensing elements 300, 310. Rather the sensor housing 320 may be configured to shield the first and second sensing elements 300, 310, and to define the first and second fields of view 304, 314. That is, the sensor housing 320 is configured to shield the first and second sensing elements 300, 310 from radiation reflected from, or otherwise originating from, locations outside of the respective fields of view 304, 314.
  • Providing fully enclosed sensing cavities 301, 311 is one way of providing such shielding, but other possibilities exist.
  • openings other than the apertures 302, 312 may be provided.
  • other openings may be provided and the housing arranged such that no direct paths exist for radiation to pass from a shielded region surrounding the fields of view 304, 314 to the first and second sensing elements 300, 310.
  • the housing 320 is configured to block all direct paths of radiation to the first and second sensing elements 300, 310, other than those passing through the first and second apertures 302, 312.
  • the first aperture 302 and the second aperture 312 are separated by a sensor separation distance 324 from each other in the expected direction of movement 205 of the object.
  • the separation distance 324 may be measured from like parts of the first and second apertures 302, 312. Equivalently, the separation distance 324 may be measured between the first and second central axes 306, 316 of the first and second fields of view 304, 314.
  • Figure 4a shows a cross-section of the first sensing arrangement, the cross-section taken along the line A-A’ shown in Figure 3a.
  • Figure 4b shows a cross-section of the first sensing arrangement 303, the cross-section taken along the line B-B’ shown in Figure 3a. That is, Figure 4a shows a view that is similar to part of the view shown in Figure 3b, whereas Figure 4b shows a view that is looking from a direction parallel to the direction of movement 205.
  • the first sensing aperture 302 has an aperture width 302w in the expected direction of movement 205.
  • the second sensing aperture 312 has an aperture width 312w in the expected direction of movement 205 (Figures 3a, 3b).
  • the first sensing aperture 302 has a height 302h perpendicular to the expected direction of movement 205, and perpendicular to the width 302w.
  • the second sensing aperture 312 has a height 312h (not shown) perpendicular to the expected direction of movement 205, and perpendicular to the width 312w.
  • the first sensing element 300 has a width 300w in the expected direction of movement 205, and a height 300h perpendicular to the expected direction of movement 205, and perpendicular to the width 300w.
  • the second sensing element 310 has a width 310w in the expected direction of movement 205, and a height 310h (not shown) perpendicular to the expected direction of movement 205, and perpendicular to the width 310w.
  • the sensor has a first sensing length 300l (also referred to as aperture separation) between the first sensing aperture 302 and the first sensing element 300 and a second sensing length 310l between the second sensing aperture 312 and the second sensing element 310.
  • the first field of view 304 has a parallel viewing angle 304a extending in a direction parallel to the expected direction of movement of the object 205 and a perpendicular viewing angle 304p extending in a direction perpendicular to the expected direction of movement of the object 205.
  • the viewing angles may be referred to as divergence angles.
  • the parallel and perpendicular viewing angles are defined by the geometry of the first sensing element 300, the first aperture 302, and the first sensing length 300l. Taking each viewing angle as the full angle subtended by the edge rays that just clear both the sensing element and the aperture, the relationships are:

    304a = 2 × arctan((300w + 302w) / (2 × 300l))    Equation (1)

    304p = 2 × arctan((300h + 302h) / (2 × 300l))    Equation (2)
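By way of a check, a short sketch evaluating these relationships (the example dimensions are hypothetical, chosen only to satisfy the guidelines stated earlier):

```python
# Numeric check of equations (1) and (2) with assumed dimensions.

import math

def viewing_angle_deg(element_mm: float, aperture_mm: float,
                      sensing_length_mm: float) -> float:
    """Full edge-ray viewing angle of an element/aperture pair."""
    return math.degrees(
        2 * math.atan((element_mm + aperture_mm) / (2 * sensing_length_mm)))

# e.g. a 2 mm element behind a 1 mm wide, 10 mm tall slot, 15 mm away:
print(viewing_angle_deg(2.0, 1.0, 15.0))   # parallel angle, ~11.4 degrees
print(viewing_angle_deg(2.0, 10.0, 15.0))  # perpendicular angle, ~43.6 degrees
```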
  • the apertures 302, 312 cooperate with the sensing elements 300, 310 to define the fields of view 304, 314. Radiation that is reflected from the portion of the objects within the fields of view 304, 314 is permitted to travel towards the respective sensing elements 300, 310. However, radiation reflected from, or otherwise originating from, locations outside of the fields of view is not permitted to reach the sensing elements 300, 310.
  • the apertures 302, 312 are provided by openings in the housing 320.
  • the openings may be any suitable shape or size, for example circular, or elongate, such as a slot.
  • the configuration of the sensor in this way, with apertures, rather than lenses, provides a mechanically simple sensor that is relatively insensitive to contamination (e.g. by ink, or dust).
  • the first and second pluralities of radiation paths 304a, 304b, 314a, 314b (see Figure 3b) from locations within the first and second fields of view to the respective first and second sensing elements do not, therefore, pass through a lens. That is, the extent of the field of view for each sensing element is not defined by a lens. Rather, the field of view for each sensing element is defined by the respective aperture 302, 312, sensing element 300, 310, and the geometry of the housing 320, as described above with reference to equations (1) and (2).
  • the primary optical components between the sensing elements and a detected object are therefore apertures, and not lenses.
  • By avoiding the use of a lens to define the field of view, a single focal distance is avoided, allowing sensing across a broad range of depths. Further still, the use of a lens may limit the geometry of the sensor in some way (e.g. the spacing between the first and second apertures 302, 312).
  • the apertures may, in some cases, be covered by a transparent window.
  • a transparent window would serve to prevent the ingress of debris (e.g. ink or dust) into the sensing cavity, but would not have any optical power, and would not, therefore, act as a lens.
  • a lens having a very low optical power could be used in some examples. Such a lens would not significantly affect the extent of the field of view.
  • the sensor housing 320 may comprise a single housing enclosing (or shielding) both of the first and second sensing elements 300, 310, and providing each of the first and second sensing apertures 302, 312.
  • the sensor housing may comprise multiple housing portions, one enclosing (or shielding) the first sensing element and providing the first sensing aperture, and a second enclosing (or shielding) the second sensing element and providing the second sensing aperture.
  • the first wall portion 320a and second wall portion 320b may be removably connected to the remainder of the housing 320, and are not necessarily integrally formed therewith.
  • part, or even all, of the sensor housing 320 may be provided by components of a printer in which the sensor is integrated.
  • a printer housing, or printhead housing may provide some or all of the shielding of blocking functionality described herein as being provided by the sensor housing 320.
  • An internal surface 321 of the housing 320 may comprise a low reflectance surface.
  • a low reflectance surface may suppress internal reflections, minimising the risk that stray radiation will reach the sensing element from locations other than the sensing region.
  • Such a housing 320 may allow a high gain to be applied to the sensing elements 300, 310, thereby maximising sensitivity. If the housing was omitted, and ambient light was permitted to reach the sensing elements 300, 310, it would be more difficult to identify signal changes caused by objects entering the sensing location 209.
  • some stray light may be able to reach the sensing elements 300, 310 in some instances.
  • the amount of such radiation is reduced to the extent possible.
  • Figure 5 shows a further cross-section of the first sensing arrangement 303, equivalent to the view shown in Figure 4b, but with additional parts included.
  • the first sensing arrangement further comprises a first radiation source 340.
  • the first radiation source 340 comprises a light emitting diode.
  • the first radiation source 340 is configured to emit a first beam of electromagnetic radiation 342 in the sensing direction 210. It will be understood that the first beam of electromagnetic radiation 342 does not need to extend directly in the sensing direction 210, but should have a component in that direction.
  • the first beam of electromagnetic radiation 342 comprises a central axis 344 perpendicular to the expected direction of movement of the object 205 (which is in a direction into or out of the page in the orientation shown in Figure 5).
  • the first beam of radiation 342 has a divergence angle 342a. In one arrangement, the divergence angle 342a may be around 8 degrees.
  • An offset angle 348a is defined between the central axis 344 of the first beam of electromagnetic radiation emitted by the first radiation source 340 and the central axis 306 of the first field of view 304.
  • a first sensing region 346 is defined by an overlap between the first beam of electromagnetic radiation 342 and the first field of view 304, and is shown hatched both horizontally and vertically in Figure 5.
  • radiation from the first radiation source 340 can be reflected from an object and allowed to travel towards the first sensing element 300.
  • An object outside of the first sensing region 346 may reflect the first beam of electromagnetic radiation 342, but any reflected radiation will not be within the field of view 304 of the sensing element 300, and so will not be detected.
  • similarly, an object outside of the first sensing region 346 that is outside of the extent of the first beam of electromagnetic radiation 342 will not reflect any radiation towards the sensing element 300, and so will also not be detected. As such, objects outside of the first sensing region 346 will not cause radiation from the radiation source 340 to be reflected from an object and allowed to travel towards the sensing element 300.
  • a first object OBJ1 is partially in the first field of view 304. Since this object OBJ1 is not within the first beam of radiation 342, it will not be detected by the first sensing arrangement 303.
  • a second object OBJ2 is fully in the first field of view 304 and within the first beam of radiation 342, and will therefore be detectable by the first sensing arrangement 303.
  • a third object OBJ3 is partially in the beam of radiation 342, but entirely outside the first field of view 304, and will therefore not be detectable by the first sensing arrangement 303.
  • the first sensing region 346 may thus comprise a region in which, at each location there exists a radiation path from the first radiation source 340, and a radiation path to the first sensing element 300, via the first sensing aperture 302.
  • the radiation paths may each comprise direct radiation paths.
  • the radiation source and/or the sensing element may be provided with one or more mirrors configured to reflect radiation in a predetermined and intended way.
  • one or more mirrors may be used within the sensor housing in order to direct the radiation beam, or the paths of radiation incident upon the sensing element.
  • the first sensing arrangement comprises a minimum sensing distance SENSE_MIN between the first sensing aperture 302 and the onset of the first sensing region 346 in the sensing direction, and a maximum sensing distance SENSE_MAX between the first sensing aperture 302 and the end of the first sensing region 346 in the sensing direction.
  • the offset angle 348a between the beam of radiation 342 and the field of view 304 is around 25 degrees.
  • the divergence angle of the beam of radiation 342a is around 3 degrees, and the viewing angle of the field of view 304a is around 25 degrees.
  • Such an arrangement may provide for a minimum sensing distance SENSE_MIN of 1.5 mm and a maximum sensing distance SENSE_MAX of 30 mm.
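The interplay of offset angle, divergence angle and viewing angle can be illustrated with a simple 2D edge-ray model. This model, the 2 mm aperture-to-LED baseline and the resulting distances are assumptions for illustration; they do not reproduce the 1.5 mm / 30 mm figures above:

```python
# 2D edge-ray sketch: the aperture sits at the origin, the LED above
# it; the field of view tilts up and the beam tilts down by half the
# offset angle, and the overlap of the two wedges along the sensing
# (x) axis bounds the sensing region.

import math

def edge_cross_x(y1_mm, angle1_deg, y2_mm, angle2_deg):
    """x where a ray from (0, y1) at angle1 meets a ray from (0, y2)
    at angle2 (angles measured from the sensing axis)."""
    denom = math.tan(math.radians(angle1_deg)) - math.tan(math.radians(angle2_deg))
    return math.inf if denom <= 0 else (y2_mm - y1_mm) / denom

offset, view, div, baseline = 25.0, 25.0, 3.0, 2.0  # degrees and mm (assumed)

# onset: the field of view's upper edge meets the beam's lower edge
sense_min = edge_cross_x(0.0, offset / 2 + view / 2,
                         baseline, -offset / 2 - div / 2)
# end: the field of view's lower edge meets the beam's upper edge
sense_max = edge_cross_x(0.0, offset / 2 - view / 2,
                         baseline, -offset / 2 + div / 2)
print(round(sense_min, 1), round(sense_max, 1))  # ~2.8 mm, ~10.3 mm
```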
  • the selection of the minimum and maximum sensing distances allows spurious detection events to be minimised. For example, if a sensor is configured to look horizontally across a conveyor, detection signals will not be generated by reflections from highly reflective objects passing at the other side of the conveyor, provided they are further from the sensor than the maximum sensing distance. A convenient sensing distance range may be 1.5 mm (minimum) to 30 mm (maximum). Similarly, if a sensor is configured to look vertically down onto a conveyor, detection signals will not be generated by reflections from highly reflective parts of the conveyor itself.
  • the sensor 208 may typically be oriented such that a bisecting line 345 between the central axis 306 of the first field of view 304 and the central axis 344 of the beam of radiation 342 is aligned with the sensing direction 210. It is noted that when illustrated without a radiation source (as shown in Figures 3b, 4a, 4b), the sensing direction is shown parallel with the central axis 306 of the first field of view 304. However, this is not a requirement, and alignment between the bisecting line 345 and the sensing direction 210 may be preferred (although again, it is not essential).
  • where the sensing direction 210 is configured to be in a horizontal direction, the central axis 306 of the first field of view 304 and the central axis 344 of the beam of radiation 342 will each be 12.5 degrees from the horizontal (for an offset angle of 25 degrees).
  • Such an arrangement may provide increased sensitivity, for example where an object being detected has a low level of reflectivity for the radiation emitted from the radiation source.
  • the marking direction of the marking head 202 is generally arranged to be perpendicular to the surface being marked because this gives the most accurate mark. For most accurate and consistent object detection, the sensing direction 210 and the marking direction are optimally parallel.
  • Aligning the bisecting center-line 345 with the sensing direction 210 will mean that bisecting center-line 345 is perpendicular to the surface being detected which will maximise the light reflected in the direction of the field of view 304 of the sensing arrangement 303.
  • the sensor 208 may further comprise a second radiation source configured to emit a second beam of electromagnetic radiation in the sensing direction.
  • a second sensing region is defined by an overlap between the second beam of electromagnetic radiation and the second field of view.
  • the second radiation source may have similar beam characteristics to those of the first radiation source.
  • a central axis of the second beam of radiation may be parallel to the central axis 344 of the first beam of radiation, and perpendicular to the expected direction of movement 205 of the object.
  • the second radiation source may be arranged in a similar manner to that shown in Figure 5, although with the orientation being relative to the second sensing arrangement 313, rather than the first sensing arrangement 303.
  • the first and second radiation sources are preferably configured such that no direct radiation paths exist from the first and second radiation sources to the first sensing element or the second sensing element.
  • the first and second radiation sources may be provided externally of the housing 320 (although could alternatively be provided within a single housing, with suitable internal light blocking structures).
  • the operation of the sensor 208 is controlled by a sensor control circuit 220, as shown schematically in Figure 6.
  • the sensor control circuit 220 comprises a first LED driver 221 configured to provide a drive signal 222 to the first radiation source 340, and a second LED driver 231 configured to provide a drive signal 232 to the second radiation source 350.
  • the first sensing element 300 is configured to generate a first sensor signal 223.
  • the sensor control circuit 220 further comprises a first amplifier 224 configured to receive the first sensor signal 223.
  • a first amplified sensor signal 225 is then passed to a band-pass filter 226 to produce a first filtered sensor signal 227.
  • the sensor control circuit 220 further comprises a second amplifier 234 configured to receive a second sensor signal 233 generated by the second sensing element 310 and produce a second amplified sensor signal 235, and a second band-pass filter 236 for filtering the second amplified signal 235 to produce a second filtered sensor signal 237.
  • the filtered signals 227, 237 are each passed to a respective peak detector 228, 238, where further processing is performed to smooth the filtered signals 227, 237 and to generate respective first and second amplitude signals 229, 239 indicative of an amplitude of the filtered sensor signals 227, 237.
  • the first and second amplitude signals 229, 239 are passed to a signal processor 240, for further processing.
  • the signal processor 240 may comprise an analog-to-digital convertor configured to generate first and second digital sensor signals. Any of the first and second sensor signals, the first and second amplified sensor signals, the first and second filtered sensor signals and the first and second amplitude signals may be referred to as analog sensor signals.
  • the first and second sensing elements 300, 310 may each be photodiodes, such as, for example, part number SFH2440L manufactured by OSRAM Opto Semiconductors GmbH, Germany. The use of simple, single-pixel photodiodes provides a convenient sensing element, especially in view of their highly linear output, low noise and high-speed operation.
  • the radiation sources 340, 350 may be red LEDs, such as, for example, the VLCS5830 part manufactured by Vishay Intertechnology, Inc., United States of America.
  • the first LED driver 221 applies a pulsed driving signal 222 to the first radiation source 340.
  • a first pulse frequency of 100 kHz may be convenient (although other frequencies can be selected as required).
  • Changes in the first sensor signal 223 are amplified by the amplifier 224 to produce the first amplified sensor signal 225.
  • Components of the signal 225 at the first pulse frequency are then selectively filtered by the band-pass filter 226 which removes substantially all of the signal except for the 100 kHz component.
  • the peak detector 228 creates the first amplitude signal 229 based on the amplitude of the first filtered sensor signal 227.
  • This amplitude signal 229 is representative of the amount of radiation from the first radiation source 340 which impinges on the first sensing element 300 and is passed to the signal processor 240. In this way it is possible to obtain radiation level signals from the first sensing element 300 at the first pulse frequency only.
  • By pulsing the radiation source 340 and filtering the sensor signal at a corresponding frequency, it is possible to suppress ambient radiation, such as that from daylight or electric lighting, incident upon the sensing element 300. In particular, radiation that is able to impinge upon the sensing element 300, but which has not originated from the radiation source 340, will be rejected by the band-pass filter 226, since it will not have a component at the pulse frequency.
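  • By way of illustration, the following Python sketch simulates this pulse-and-filter scheme. The 100 kHz pulse frequency matches the example above; the simulation sample rate, signal amplitudes and ambient waveform are illustrative assumptions rather than values from this description.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 1_000_000           # simulation sample rate (assumption)
f_pulse = 100_000        # first pulse frequency, as in the example above
t = np.arange(0, 0.002, 1 / fs)

# Reflection of the pulsed LED: a 100 kHz square wave whose amplitude
# depends on whether a reflective object is present in the sensing region.
object_present = t > 0.001
led = 0.5 * (1 + np.sign(np.sin(2 * np.pi * f_pulse * t)))
reflected = 0.3 * led * np.where(object_present, 1.0, 0.05)

# Ambient radiation: daylight (DC) plus 100 Hz mains flicker (assumption).
ambient = 2.0 + 0.5 * np.sin(2 * np.pi * 100 * t)
sensor_signal = reflected + ambient + 0.01 * np.random.randn(t.size)

# Band-pass filtering around the pulse frequency rejects everything that
# did not originate from the pulsed radiation source.
b, a = butter(2, [0.8 * f_pulse, 1.2 * f_pulse], btype="bandpass", fs=fs)
filtered = lfilter(b, a, sensor_signal)

amplitude = np.abs(filtered)   # crude stand-in for the peak detector 228
print("mean amplitude, no object:", amplitude[~object_present].mean())
print("mean amplitude, object:   ", amplitude[object_present].mean())
```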
  • the second radiation source 350 may be pulsed in a similar manner.
  • the signal processor 240 converts the first amplitude signal 229 to a series of digital values (e.g. by taking samples at a sampling frequency), creating a first digital signal. Digital signal values may then be processed in various ways to generate various further control signals.
  • a primary function of the signal processor 240 is to generate an object detection signal 241.
  • the object detection signal 241 may be generated by identifying a high signal level (e.g. when a highly reflective object has entered the sensing region 346). If the sensor is configured to generate an ‘active high’ output signal (i.e. the signal has a low level when no object is present), a transition from a low signal value to a high signal value (e.g. the crossing of a signal threshold level) in the amplitude signal 229 can be used to identify that an object has entered the sensing region 346.
  • a first (binary) detection signal 242 may thus be identified by the signal processor 240 based on the received (and digitised) first amplitude signal 229.
  • a similar second detection signal 243 may be identified by the signal processor 240 based on the second amplitude signal 239, based on a signal level change caused by an object entering the sensing region associated with the second sensing element 310.
  • the same detection method (for example, crossing a similar threshold) is used for both sensing arrangements 303, 313, so that they operate in the same way. This allows a time period T between the two identified signals 242, 243 to be used to generate object movement data 244.
  • the signal processor 240 is further able to calculate data indicating the speed of the object (e.g. the linear speed of the object in the direction 205), according to the following equation:
  • V = S / T , Equation (3), where:
  • V is the speed of the object;
  • S is the separation distance (e.g. distance 324); and
  • T is the time period between the two detection signals 242, 243. It will be understood that the data indicating the speed of the object may be in any convenient form, and is not required to have units of metres per second.
  • the data indicating the speed of the object may be referred to as object speed data 244a.
  • the object speed data 244a may be validated based on the data satisfying a speed criterion.
  • the speed criterion may be the speed being greater than a minimum detection speed (e.g. 5 mm/s) and less than a maximum detection speed (e.g. 5 m/s). In this way, it is possible to discount spurious speed data.
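  • A minimal sketch of Equation (3) together with the speed criterion is given below; the function name and the default limits (taken from the 5 mm/s and 5 m/s examples above) are illustrative.

```python
def object_speed(separation_mm: float, period_s: float,
                 v_min_mm_s: float = 5.0, v_max_mm_s: float = 5000.0):
    """Equation (3), V = S / T, with the speed criterion applied.

    Returns the speed in mm/s, or None when the value falls outside the
    plausible detection range (discounting spurious speed data).
    """
    if period_s <= 0:
        return None
    v = separation_mm / period_s
    return v if v_min_mm_s <= v <= v_max_mm_s else None

# Example: apertures 5 mm apart, detection signals 25 ms apart -> 200 mm/s.
print(object_speed(5.0, 0.025))
```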
  • the object speed data 244a and the object direction data 244b may together be referred to as object movement data 244.
  • the signal processor 240 may be configured to generate one or more output signals for interfacing with a printer.
  • the output signals may include the object detection signal 241.
  • the output signals may further (or instead) include an object speed signal 245.
  • the object speed signal 245 may be derived from the object speed data 244a, and in some instances may simply comprise a speed value that can be interpreted by the printer (or other device).
  • the object speed signal 245 may comprise a periodic signal component (e.g. a square waveform) having a frequency that is proportional to the speed of the object.
  • the signal processor 240 is configured to synthesize and output quadrature pulse output signals 245a and 245b, which together form the object speed signal 245.
  • Such signals mimic the two output signals of a standard quadrature shaft encoder.
  • a standard quadrature shaft encoder output signal comprises two square wave signals with a slight phase difference between them (e.g. 90 degrees).
  • the direction of encoder rotation is given by which of the two square wave signals leads (i.e. transitions high or low before the other one of the two square wave signals), and the speed of rotation is proportional to the frequency of the pulses. That is, each of the first quadrature output signal 245a, and the second quadrature output signal 245b has a periodic signal component (e.g. a square waveform) having a frequency that is proportional to the speed of the object.
  • the phase difference between the two signals (e.g. +90 degrees, or -90 degrees) is controlled based upon the direction of the object relative to the sensor.
  • the processor 240 may calculate the necessary timings of two such square wave signals based on the object speed data 244a and the object direction data 244b and synthesize the equivalent square wave output signals 245a and 245b.
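  • The quadrature synthesis described above might be sketched as follows. The pulses-per-millimetre scale factor, sample rate and function name are assumptions for illustration; only the speed-proportional frequency and the ±90 degree phase relationship are taken from the description.

```python
import numpy as np

def synthesize_quadrature(speed_mm_s: float, direction: int,
                          duration_s: float = 0.01, fs: int = 100_000,
                          pulses_per_mm: float = 10.0):
    """Synthesize two square waves mimicking a quadrature shaft encoder.

    The pulse frequency is proportional to object speed, and the sign of
    the 90 degree phase shift encodes the direction of movement.
    """
    f_hz = speed_mm_s * pulses_per_mm
    t = np.arange(0, duration_s, 1 / fs)
    phase = 2 * np.pi * f_hz * t
    shift = np.pi / 2 if direction >= 0 else -np.pi / 2
    sig_a = (np.sin(phase) >= 0).astype(int)          # e.g. signal 245a
    sig_b = (np.sin(phase - shift) >= 0).astype(int)  # e.g. signal 245b
    return sig_a, sig_b

a_245, b_245 = synthesize_quadrature(200.0, direction=+1)
```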
  • the signal processor 240 may also be configured to generate the object detection signal 241 based on either (or both) of the first and second detection signals 242, 243.
  • the object speed signal 245 may be used to adjust the printing operation. For example, printing may be adjusted to ensure the correct spacing of the printing in the direction of movement of the object, and/or to adjust other factors that control print quality.
  • a sensor baseline signal may be measured and stored for each sensing element.
  • the signal processor 240 may store a baseline sensor signal based upon past values of the amplitude signals 229, 239.
  • the past values may comprise a plurality of discrete past values (e.g. ADC sample values) with the baseline sensor signal being generated by combining (e.g. averaging) the plurality of discrete values.
  • the baseline signal may, for example, be based on past values of the amplitude signal representing at least 5 seconds duration when no object is present in the sensing region 346 (although different durations or configurations can be used).
  • Generating the baseline signal may be performed at periodic intervals, and/or at a predetermined time (e.g. at product start-up as part of an initialisation routine). In this way, it is possible to determine a representative baseline signal against which current sensor values can be compared. A comparison result is then generated by comparing the respective baseline signal with a current value of the amplitude signal 229, 239. The detection signal 241 may then be generated based on the comparison result.
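  • A simple sketch of baseline generation and comparison, under the assumption that baseline samples are collected while no object is present; the helper names are illustrative.

```python
import numpy as np

def make_baseline(quiescent_samples: np.ndarray) -> float:
    """Baseline: the average of past amplitude values captured while no
    object is present (e.g. at least 5 seconds of history)."""
    return float(quiescent_samples.mean())

def compare_to_baseline(current_value: float, baseline: float) -> float:
    """Comparison result: how far the current amplitude value sits above
    the stored baseline."""
    return current_value - baseline

# 5 s of quiescent samples at 40,000 samples per second.
quiet = 0.8 + 0.002 * np.random.randn(5 * 40_000)
baseline = make_baseline(quiet)
print(compare_to_baseline(1.15, baseline))  # object raising the signal level
```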
  • the analog amplitude signals may also be processed by the processor 240 based on a plurality of amplitude signal values digitally sampled over a short period of time.
  • Each of the amplitude signals may comprise a plurality of digital values, each representing signal amplitude at a respective point in time.
  • Each of the plurality of amplitude signal values may be generated by sampling an analog signal with an ADC (e.g. as part of the signal processor 240).
  • the ADC sampling rate may be selected based upon the expected speed of object movement. A sampling rate of 40,000 samples per second may be suitable.
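  • As a worked check of this sampling rate (assuming the 5 m/s maximum detection speed and the 1 mm aperture width mentioned elsewhere in this description):

```python
def samples_across_aperture(speed_mm_s: float, aperture_w_mm: float,
                            sample_rate_hz: float) -> float:
    """Number of ADC samples taken while an object edge crosses the
    aperture width: a quick adequacy check of the sampling rate."""
    return (aperture_w_mm / speed_mm_s) * sample_rate_hz

# At a 5 m/s maximum detection speed, a 1 mm aperture is crossed in
# 0.2 ms, giving 8 samples at 40,000 samples per second.
print(samples_across_aperture(5000.0, 1.0, 40_000))
```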
  • the past values may comprise a continuous analog signal extending over a period of time.
  • a continuous signal may be averaged by an analog component (e.g. a capacitor, an integrating amplifier - not shown) to generate the baseline sensor signal, which is then provided to the signal processor 240.
  • peak detectors 228, 238 may also perform a signal smoothing operation.
  • alternative combinations of digital and/or analog components may be selected to perform a similar function.
  • Generating the object detection signal 241 may comprise identifying a plurality of consecutive sensor signal values of the plurality of sensor signal values (or values of amplitude signals 229, 239) that satisfy a predetermined criterion.
  • the object detection signal 241 may be generated when the sampled amplitude signal 229 is above a threshold value for a predetermined period of time (e.g. 0.1 to 1 millisecond). Such a period may correspond to 4-40 samples, when using the sampling rate of 40,000 samples per second.
  • the predetermined criterion may comprise the plurality of consecutive sensor output signal values each exceeding a difference threshold from the baseline signal.
  • a threshold of around 300 mV above a baseline signal for the particular sensor signal may provide robust detection while also effectively rejecting noise and avoiding false triggering.
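  • A sketch combining the consecutive-sample criterion with the ~300 mV difference threshold; the run length of 8 samples (0.2 ms at 40,000 samples per second) is one illustrative choice within the 0.1 to 1 millisecond window suggested above.

```python
import numpy as np

def object_detected(amplitudes_v: np.ndarray, baseline_v: float,
                    threshold_v: float = 0.3, n_consecutive: int = 8) -> bool:
    """Detection criterion: a run of consecutive samples each exceeding
    the baseline by the difference threshold (around 300 mV).

    At 40,000 samples per second, n_consecutive = 8 corresponds to
    0.2 ms, inside the 0.1 to 1 millisecond window suggested above.
    """
    above = (amplitudes_v - baseline_v) > threshold_v
    run = 0
    for sample_above in above:
        run = run + 1 if sample_above else 0
        if run >= n_consecutive:
            return True
    return False
```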
  • Other detection arrangements may be preferred, depending on the particular characteristics of the signals.
  • the signal processor 240 may further generate the object detection signal 241 based upon a combination of the first amplitude signal 229 and the second amplitude signal 239.
  • the object detection signal 241 is typically generated when an object is detected in the sensing region.
  • generating the object detection signal 241 may further comprise comparing the first and second detection signals 242, 243.
  • generating the object detection signal 241 may comprise comparing how the first amplitude signal 229 varies over time and how the second amplitude signal 239 varies over time, and generating the object detection signal 241 only if a predetermined criterion is satisfied.
  • the predetermined criterion may comprise a degree of similarity between the first amplitude signal 229 over time and the second amplitude signal 239 over time.
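  • One way to implement such a similarity criterion is peak normalised cross-correlation, sketched below; the correlation threshold is an illustrative assumption.

```python
import numpy as np

def signals_similar(sig_a: np.ndarray, sig_b: np.ndarray,
                    min_correlation: float = 0.9) -> bool:
    """Check whether two equal-length amplitude signals are time-shifted
    versions of one another, via peak normalised cross-correlation.

    A genuine object passing both apertures should produce near-identical
    waveforms separated in time; dissimilar waveforms suggest an
    anomalous detection that may be rejected.
    """
    a = (sig_a - sig_a.mean()) / (sig_a.std() + 1e-12)
    b = (sig_b - sig_b.mean()) / (sig_b.std() + 1e-12)
    xcorr = np.correlate(a, b, mode="full") / len(a)
    return float(xcorr.max()) >= min_correlation
```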
  • additional processing may be performed on the amplitude signals 229, 239.
  • the shape of the signals 229, 239 may be determined and used to generate information regarding a detected object.
  • a rate of change in signal value for one or both signals may be used to determine the shape of an object.
  • An abrupt signal value change may be considered to indicate a square edge (e.g. a box), whereas a gradual change (e.g. a curve) may be considered to indicate a rounded object (e.g. a can or bottle).
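  • A minimal sketch of such shape classification based on the maximum rate of change of the amplitude signal; the rate threshold is an arbitrary illustrative value and, in practice, would be normalised by the measured object speed.

```python
import numpy as np

def classify_edge(amplitude_v: np.ndarray, dt_s: float,
                  abrupt_v_per_s: float = 5000.0) -> str:
    """Classify a detection waveform by its maximum rate of change: an
    abrupt rise suggests a square edge (e.g. a box), while a gradual
    rise suggests a rounded object (e.g. a can or bottle)."""
    rate = np.abs(np.diff(amplitude_v)) / dt_s
    return "square edge" if rate.max() > abrupt_v_per_s else "rounded object"
```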
  • a speed value (e.g. speed data 244a, or speed signal 245) generated based on a separation between the first and second detection signals 242, 243 may be used to quantify a shape.
  • Such information may be used to confirm that the object shape corresponds to an expected object shape, allowing improved printing control. For example, if a printer is configured to print on a round product, but senses square boxes, a warning may be generated.
  • the processor 240 may thus provide additional outputs beyond those illustrated.
  • the parallel viewing angle 304a may be less than the perpendicular viewing angle 304p. That is, a relatively small viewing angle 304a in the direction of movement 205 allows a more abrupt transition to be determined between an object being outside of the field of view and inside the field of view. Further, a relatively wide viewing angle 304p in a perpendicular direction allows an increased amount of radiation to reach the sensor, since a larger surface area of the object can be viewed.
  • the narrow field of view in the direction of movement may be achieved by providing a narrow aperture width 302w (see Figures 3a, 3b, 4a) in the direction of movement 205.
  • the aperture width 302w may be less than 5 mm.
  • the aperture width is around 1 mm.
  • the first sensing aperture 302 may thus have a height 302h perpendicular to the expected direction of movement 205 of the object that is greater than or equal to the width 302w.
  • the aperture may have a height of around 6 mm.
  • the viewing angles 304a, 304p are further influenced by the sensing length 300I.
  • a larger sensing length 300I results in a smaller viewing angle 304a, 304p for a given aperture and sensing element size.
  • the sensing element 300 may have a width 300w in the direction of movement 205 of 2.65 mm.
  • the sensing element 300 may have a height 300h perpendicular to the direction of movement 205 of 2.65 mm.
  • alternative sensor geometry may be used.
  • a square sensor may be preferred since such sensors are widely available, low-cost components.
  • the sensing length 300I is at least 10 times the width of the aperture 302w in the expected direction of movement of the object 205.
  • the sensing length 300I may be at least 5 times the width 300w of the first sensing element in the expected direction of movement of the object 205.
  • the sensing length 300I may be at least 10 times greater than a maximum of the width of the aperture 302w in the expected direction of movement of the object 205, and the width of the sensing element 300w in the expected direction of movement of the object 205. In this way, a narrow field of view in the expected direction of movement can be provided. Such a narrow field of view provides high sensitivity to edges of objects, and also rapid detection of objects.
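  • These relationships follow from simple pinhole geometry (no lens), as the following sketch illustrates. The 26.5 mm sensing length is an assumed value chosen to satisfy the 10x relationship above; the aperture and element dimensions are those given in the preceding examples.

```python
import math

def viewing_angle_deg(aperture_w_mm: float, element_w_mm: float,
                      sensing_length_mm: float) -> float:
    """Full viewing angle of an aperture/element pair in one direction:
    the extreme ray joins one edge of the sensing element to the opposite
    edge of the aperture (pinhole geometry, no lens)."""
    half = math.atan((aperture_w_mm + element_w_mm) / (2 * sensing_length_mm))
    return math.degrees(2 * half)

L_mm = 26.5  # assumed sensing length: 10 x the 2.65 mm element width
print(viewing_angle_deg(1.0, 2.65, L_mm))  # narrow angle 304a (movement dir.)
print(viewing_angle_deg(6.0, 2.65, L_mm))  # wider angle 304p (perpendicular)
```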
  • the second aperture 312, second sensing element 310, and second sensing arrangement 313 may have similar dimensions, and dimensional relationships, to those of the first aperture 302, first sensing element 300, and first sensing arrangement 303. In this way, a similar field of view is provided for each sensor, albeit separated by the sensor separation distance 324 in the expected direction of movement 205 of the object. It is further noted above that the first and second fields of view are separated in the expected direction of movement 205 by a separation distance 324. It can further be seen from Figure 3b, that the central axis of the first field of view 306 and a central axis of the second field of view 316 are parallel to each other, and perpendicular to the expected direction of movement of the object 205.
  • Arranging the fields of view in this way ensures that wherever an object is placed in the sensing direction (e.g. close to the sensor, or far away from the sensor) it will be detected by both sensing arrangements 303, 313 at an equivalent distance and in a similar way. That is, if the axes 306, 316 were not parallel, then the effective sensor separation distance would vary as a function of the distance from the sensor in the sensing direction 210.
  • the effective sensor separation distance would vary as a function of angle between the axes 306, 316 and the direction of movement of the object 205.
  • a separation distance 324 is defined between the first and second central axes 306, 316.
  • a separation distance 324 of around 5 mm or less results in a sensor capable of detecting small products, with a high degree of accuracy.
  • a small separation distance (e.g. less than 10 mm) minimises the risk that consecutive detection signals from the two sensing elements 300, 310 will not relate to the same object. For example, where an object is small in the direction of movement 205, and where a series of similar objects are advanced along a conveyor, there is a risk that detection signals generated a short time apart could relate to different products.
  • two detection signals could be generated either by a) a fast moving object being detected sequentially by the first and second sensing arrangements 303, 313, or b) a first slow moving object being detected by the first sensing arrangement 303 and a second slow moving object being detected by the second sensing arrangement 313.
  • the processing described above (e.g. that of the signal processor 240) may be performed by a processor that is part of the sensor 208.
  • the sensor 208 may thus comprise a standalone component that can provide an object detection signal 241 and/or an object speed signal 245 to a printer.
  • a printer or marking system may be configured to receive an object detection signal 241 from the sensor (e.g. a detection signal that has been generated by a processor 240 of the sensor) and to initiate (or adjust) printing or marking based upon the object detection signal 241 and/or speed and/or direction signal (e.g. an object movement signal).
  • the printer or marking system may include a printhead or marking head (e.g. printhead 201) that is configured to cause a mark to be created on an object 204 moving in the expected movement direction 205 past a marking location.
  • processing described above may be performed by a processor that is part of the printer 201 (or other marking system), such as the printer controller 212.
  • the sensor 208 may be provided as part of the printer 201.
  • the sensor 208 may thus generate detection signals that are processed by the printer controller 212 in order to control printing.
  • the analog output signals (e.g. signals 229, 239) may be provided directly to the printer controller 212 for processing into a detection signal which is used internally by the controller 212 to control printing.
  • the signal processor 240 can be implemented in any convenient way, including as an application specific integrated circuit (ASIC), a field programmable gate array (FPGA), or a microprocessor connected to a memory storing processor readable instructions, the instructions being arranged to control the sensor (e.g. LED drivers 221, 231) and the microprocessor being arranged to read and execute the instructions stored in the memory.
  • the processor 240 may be provided by a plurality of controller devices each of which is charged with carrying out some of the processing functions attributed to the processor 240, or even to a printer controller (where present).
  • the sensor 208 may be integrated into a marking head of a printer or marking system.
  • the sensor 208 has been described in the context of printing and marking. However, the sensor could also be used to determine speed and/or location information for a variety of different processing purposes.
  • a packaging line or apparatus may receive a detection signal (or even an analog sensor output) and use the signal, in a similar manner to that described above with reference to the printer, to initiate some action. Some examples may include the filling or sealing of a container, the removal of an object from a processing line, or the application of a label to a passing object.
  • first and second radiation sources 340, 350 may be pulsed at different frequencies to each other.
  • first and second radiation sources may be pulsed at different times to each other. In this way, it is possible to allow some spatial overlap between the first and second sensing regions (thereby allowing closer sensor spacing), while still avoiding cross-talk between the two sensors.
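  • A sketch of the frequency-separation variant: two sources pulsed at different frequencies, with a band-pass filter recovering only the contribution of the first source. Both pulse frequencies and all amplitudes are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, lfilter

fs = 1_000_000                      # simulation sample rate (assumption)
t = np.arange(0, 0.001, 1 / fs)

# Two sources pulsed at different frequencies (both values illustrative).
f1, f2 = 100_000, 77_000
led1 = (np.sin(2 * np.pi * f1 * t) >= 0).astype(float)
led2 = (np.sin(2 * np.pi * f2 * t) >= 0).astype(float)

# One sensing element receives reflections of both beams where the
# sensing regions overlap; band-pass filtering at f1 recovers only the
# contribution originating from the first radiation source.
mixed = 0.6 * led1 + 0.4 * led2
b, a = butter(2, [0.9 * f1, 1.1 * f1], btype="bandpass", fs=fs)
only_led1 = lfilter(b, a, mixed)
```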
  • a background sensor output signal may be obtained from the sensing element 300 when no radiation is emitted from the radiation source 340.
  • an illuminated sensor output signal may be obtained from the sensing element 300 when radiation is being emitted from the radiation source 340.
  • pulsing of the radiation sources may be omitted entirely.
  • the radiation sources 340, 350 themselves may be omitted entirely.
  • the sensing elements may rely on the reflection of ambient light into the apertures 302, 312, with the presence of an object 204 in the field of view changing the light levels reaching the sensing elements 300, 310.
  • the sensor 208 comprises two sensing assemblies 303, 313. It will be appreciated, however, that the sensor may comprise a different number of sensing assemblies. For example, three or more sensing assemblies may be provided. Each of the sensing assemblies may be generally as described above. A common radiation source may be provided for all sensing assemblies, or a separate radiation source may be provided for each of the sensing assemblies.
  • the sensor 208 is shown facing the conveyor 203 such that the sensing direction 210 extends in a substantially horizontal direction.
  • various dimensions and angles have been referred to as “width”, “height”, “horizontal” or “vertical”. It will be appreciated, however, that the sensor may be oriented in any convenient way. For example, the sensor may point downwards from above a production line (and similarly, a printer may be configured to mark the top of products). The terminology used is not intended to imply any restriction to the orientation of the sensor.
  • an aperture “width” is considered to be a dimension of the aperture in the direction of movement 205
  • an aperture “height” is considered to be a dimension of the aperture in a perpendicular direction to both the direction of movement 205, and the sensing direction 210.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Geophysics And Detection Of Objects (AREA)

Abstract

A sensor configured to detect an object moving along a predefined movement path. The sensor comprises a first sensing element, a second sensing element, and a sensor housing configured to shield the first sensing element and the second sensing element, the sensor housing comprising a first wall portion defining a first sensing aperture and a second wall portion defining a second sensing aperture. The first sensing aperture is configured to allow electromagnetic radiation to travel towards the first sensing element, the first sensing aperture and the first sensing element defining a first field of view, the first field of view extending from the first sensing aperture in a sensing direction and comprising a first plurality of radiation paths from locations within the first field of view, via the first sensing aperture, to the first sensing element. The second sensing aperture is configured to allow electromagnetic radiation to travel towards the second sensing element, the second sensing aperture and the second sensing element defining a second field of view, the second field of view extending from the second sensing aperture in the sensing direction and comprising a second plurality of radiation paths from locations within the second field of view, via the second sensing aperture, to the second sensing element. The sensor housing is configured to block paths of radiation between locations external to the first field of view and the first sensing element and locations external to the second field of view and the second sensing element. The first aperture and the second aperture are separated by a sensor separation distance from each other in a direction parallel to the predefined movement path of the object.

Description

Sensor
The present invention relates to a sensor for detecting a succession of objects carried past the sensor on a conveyor, and a method of detecting an object.
In many manufacturing and industrial processing facilities objects are conveyed by a conveying system past a variety of different processing stations. A processing station may be a marking or printing station. The objects may be products such as manufactured articles or packaged food stuffs and a printer may be located at the printing station and used to print product and batch information, “use by” dates etc. The printer may be a non-contact printer such as an industrial ink jet printer or a laser marking system (i.e. using a laser to print by directing a laser beam at an object so as to mark the object by changing a surface characteristic of the object). The printer may be a continuous ink jet printer, for example an electrostatic deflection continuous inkjet printer.
Figure 1 illustrates schematically a known printing station 100. A printer 101 having a printhead 102 and a main body 107 is positioned adjacent to a conveyor 103. Products 104 are caused to move in a movement direction 105 past a marking location 106. At the marking location, the printhead 102 applies a mark 104a to each of the products 104.
In order to position the mark correctly on each object, it is known to use a sensor 108 upstream of the printer 101 to detect an approaching object 104 and trigger printing. In order to position the printing correctly, the system also needs to delay the start of printing, following detection of the approaching object, by the time it takes the object to travel the distance from its position when it is detected to its position for the start of printing. It is known to calculate this delay from a distance to be travelled (which is known) and the line speed (i.e. the speed at which objects are carried past the printer by a conveyor). The line speed may also be used to adjust the printing operation to ensure the correct spacing of the printing in the direction of movement of the objects and to adjust other factors that control print quality. The line speed may be detected using a shaft encoder 110, or alternatively a second sensor may be used, spaced from the first sensor in the direction of travel of the objects, and the line speed can be calculated from the time taken for an object to travel from one sensor to the other. The sensor 108 or sensors typically comprise a photocell. For example, each sensor may be constructed as a light source and a photodetector positioned close together, so that the photodetector detects light originating from the light source and reflected by an object when the object is present. Alternatively, the sensor may comprise a reflector 108a positioned on the other side of the conveyor 103 from the sensor 108.
In order for the sensors to detect the presence of an object reliably, they must be able to distinguish between the signal received when an object is present and the signal received in the absence of any object.
It is an object of the present disclosure to provide an improved sensor for detecting objects moving at a sensing location.
Aspects of the present disclosure provide a low-cost sensor configured to detect the speed of an object and optionally, both speed and location of an object.
According to a first aspect of the invention there is provided a sensor configured to detect an object moving along a predefined movement path. The sensor comprises a first sensing element, a second sensing element, and a sensor housing configured to shield the first sensing element and the second sensing element, the sensor housing comprising a first wall portion defining a first sensing aperture and a second wall portion defining a second sensing aperture. The first sensing aperture is configured to allow electromagnetic radiation to travel towards the first sensing element, the first sensing aperture and the first sensing element defining a first field of view, the first field of view extending from the first sensing aperture in a sensing direction and comprising a first plurality of radiation paths from locations within the first field of view, via the first sensing aperture, to the first sensing element. The second sensing aperture is configured to allow electromagnetic radiation to travel towards the second sensing element, the second sensing aperture and the second sensing element defining a second field of view, the second field of view extending from the second sensing aperture in the sensing direction and comprising a second plurality of radiation paths from locations within the second field of view, via the second sensing aperture, to the second sensing element. The sensor housing is configured to block paths of radiation between locations external to the first field of view and the first sensing element and locations external to the second field of view and the second sensing element. The first aperture and the second aperture are separated by a sensor separation distance from each other in a direction parallel to the predefined movement path of the object.
By providing a sensor in which apertures, rather than lenses, are configured to cooperate with the sensing elements to define the fields of view, it is possible to provide a mechanically simple sensor that is relatively insensitive to contamination (e.g. by ink, or dust). By avoiding the use of a lens to define the field of view, a single focal distance is avoided, allowing sensing across a broad range of depths to be performed.
An aperture may refer to an opening in the housing e.g. a circular opening, or an elongate opening, such as a slot.
By providing two such apertures separated at a sensor separation distance it is possible to determine a movement speed of an object.
The sensor housing may comprise a single housing shielding both of the first and second sensing elements and providing each of the first and second sensing apertures. Alternatively, the sensor housing may comprise multiple housings or wall portions, one shielding the first sensing element and providing the first sensing aperture, and a second one shielding the second sensing element and providing the second sensing aperture.
The sensor housing may be configured to shield the first and second sensing elements from radiation reflected from, or otherwise originating from, locations outside of the respective fields of view.
The sensor housing may comprise a single housing enclosing both of the first and second sensing elements and providing each of the first and second sensing apertures.
Optional features of the first sensing aperture, first sensing element and sensor housing may be applied to the second sensing aperture, and second sensing element as appropriate. That is, the first and second sensing arrangements may be generally similar, or even identical, to one another, but spatially separated. The predefined movement path may be defined by a production line or packaging line. The predetermined movement path may be substantially perpendicular to the first and second sensing directions. A movement direction of the object may be in either of two directions along the predetermined movement path (e.g. left-right past the sensor, or right-left past the sensor).
Optionally, the first and second pluralities of radiation paths from locations within the first and second fields of view to the respective first and second sensing elements do not pass through a lens.
The fields of view are preferably not defined by a lens. That is, the fields of view are defined by the apertures and sensor geometry, rather than by lenses. The primary optical components between the sensing elements and the object are the apertures, and not lenses.
The apertures may, in some cases, be covered by a transparent window. Such a transparent window would serve to prevent the ingress of debris (e.g. ink or dust) into the sensing cavity, but would not have any optical power, and would not, therefore, act as a lens.
The apertures may, in some cases, be covered by a low power lens. Such a low power lens may serve to prevent the ingress of debris (e.g. ink or dust) into the sensing cavity, but would not have any significant optical power, and would not, therefore, significantly contribute to defining the fields of view.
The first sensing aperture may have a width in the direction parallel to the predefined movement path that is less than 5 mm.
The first sensing aperture may have a height perpendicular to the direction parallel to the predefined movement path that is greater than or equal to the width.
The first sensing aperture may have a height of at least 5 mm.
A first sensing length may be defined between the first sensing aperture and the first sensing element. The first sensing length may be at least 5 times a width of the first aperture in the direction parallel to the predefined movement path.
The first sensing length may be at least 3 times a width of the first sensing element in the direction parallel to the predefined movement path.
The first sensing length may be at least 3 times greater than a maximum of the width of the aperture in the direction parallel to the predefined movement path, and the width of the sensing element in the direction parallel to the predefined movement path.
The second aperture and second sensing element may have similar dimensions, and dimensional relationships to the first aperture and first sensing element.
By providing a large aperture separation (as compared to the aperture width) it is possible to provide a narrow field of view in the direction of (expected) object travel. Such a narrow field of view provides high sensitivity to edges of objects, and also rapid detection of objects.
The first sensing element and the second sensing element may have substantially the same dimensions.
The first sensing aperture and the second sensing aperture may have substantially the same dimensions.
A spatial relationship between the first aperture and the first sensing element may be substantially the same as a spatial relationship between the second aperture and the second sensing element.
In this way, similar fields of view are provided, and the sensing signals generated by each of the sensing elements are similar, but separated in time, when an object moves past both sensing apertures in sequence.
An internal surface of the housing may comprise a surface having a low reflectance. A surface having a low reflectance may suppress internal reflections, minimising the risk that stray radiation will reach the sensing element from locations other than the sensing region.
The sensor may further comprise a first radiation source configured to emit a first beam of electromagnetic radiation in the sensing direction. A first sensing region may be defined by an overlap between the first beam of electromagnetic radiation and the first field of view.
The first radiation source may be configured such that no direct radiation path exists from the first radiation source to the first and second sensing elements.
The first radiation source may be provided externally of the housing.
In the sensing region, radiation from the source can be reflected from an object and directed towards the sensing element. An object outside of the sensing region will either not be within the field of view of the sensor, or not be illuminated by the radiation source. As such, objects outside each field of view will not cause radiation from the radiation source to be reflected from an object and directed towards the respective sensing element.
A central axis of the first beam of emitted electromagnetic radiation may be perpendicular to the predefined movement path.
A central axis of the first field of view may be perpendicular to the predefined movement path.
At each location in the first sensing region there may exist a direct radiation path from the first radiation source, and a direct radiation path to the first sensing element, via the first sensing aperture.
An offset angle may be defined between a central axis of the first beam of electromagnetic radiation emitted by the first radiation source and a central axis of the first field of view. The sensor may further define a minimum sensing distance between the first sensing aperture and the onset of the first sensing region in the sensing direction.
The sensor may further define a maximum sensing distance between the first sensing aperture and the end of the first sensing region in the sensing direction.
The sensing direction may preferably be a direction extending from the sensor, perpendicular to the predefined movement path.
The sensor may further comprise a second radiation source configured to emit a second beam of electromagnetic radiation in the sensing direction.
A second sensing region may be defined by an overlap between the second beam of electromagnetic radiation and the second field of view.
The second beam of radiation may have equivalent spatial characteristics to those of the first beam of radiation.
Providing a separate radiation source for each sensing element allows improved accuracy, since each radiation beam can be oriented similarly with respect to the respective field of view.
The first beam of radiation and/or the second beam of radiation may have a divergence angle of less than around 16 degrees. The first beam of radiation and/or the second beam of radiation may have a divergence angle of around 8 degrees.
A central axis of the first beam of electromagnetic radiation and a central axis of the second beam of electromagnetic radiation may be parallel to one another, and perpendicular to the predefined movement path.
The first and second radiation sources may be configured such that no direct radiation path exists from the first and second radiation sources to the first and second sensing elements.
The first and second radiation sources may each be provided externally of the housing. In the sensing regions, radiation from the source can be reflected from an object and directed towards the sensing element. An object outside of the sensing region will either not be within the field of view of the sensor, or not be illuminated by the radiation source. As such, objects outside each field of view will not cause radiation from the respective radiation source to be reflected from an object and directed towards the respective sensing element.
At each location in each of the first and second sensing regions there may exist a direct radiation path from the respective radiation source, and a direct radiation path to the respective sensing element, via the respective sensing aperture.
Each of the first field of view and the second field of view may have a parallel viewing angle extending in a direction parallel to the predefined movement path and a perpendicular viewing angle extending in a direction perpendicular to the predefined movement path, the parallel viewing angle being less than the perpendicular viewing angle.
The viewing angle may be determined by the sensing element size, the aperture size and the separation between the sensing element and the aperture. The viewing angle may be referred to as a divergence angle.
Selection of the divergence angle of the beam of radiation, the viewing angle of the field of view, and the offset angle between the beam of radiation and the field of view allows the minimum sensing distance and maximum sensing distance to be determined and controlled.
The sensor may be configured to pulse the first radiation source at a first pulse frequency and to obtain signals from the first sensing element at the first pulse frequency.
By pulsing the radiation source and sampling the sensor at a corresponding frequency, it is possible to suppress ambient radiation incident upon the sensor. The second radiation source may be pulsed in a similar manner. The first and second radiation sources may be pulsed at different frequencies to each other. The first and second radiation sources may be pulsed at different times to each other. In this way, it is possible to allow some spatial overlap between the first and second sensing regions (thereby allowing closer sensor spacing), while still avoiding cross-talk between the two sensors.
A central axis of the first field of view and a central axis of the second field of view may be parallel to each other, and perpendicular to the predefined movement path.
The central axis of the first field of view and the central axis of the second field of view may be separated from one another by around 5 mm or less in the direction parallel to the predefined movement path.
The sensor may be configured to generate an object detection signal based on a first analog signal received from the first sensing element and/or a second analog signal received from the second sensing element.
The sensor may be configured to receive the first and/or second analog signals from the respective first and second sensing elements and to generate a respective detection signal based on each of the respective first and second analog signals.
The sensor may be configured to generate an object detection signal based on the first and/or second detection signals.
The sensor may be configured to generate an object speed signal based upon a first analog signal received from the first sensing element and a second analog signal received from the second sensing element.
The object speed signal may comprise a quadrature output signal comprising a first quadrature signal and a second quadrature signal, each of the first and second quadrature signals comprising a periodic signal component having a frequency that is proportional to the speed of the object, and a phase difference between the first and second quadrature signals being controlled based upon a direction of the object. The sensor may comprise a sensor control circuit configured to receive a first sensor signal from the first sensing element.
The sensor control circuit may comprise one or more of: an amplifier configured to generate a first amplified sensor signal based upon the first sensor signal; a filter configured to generate a first filtered sensor signal based upon the first sensor signal; a peak detector configured to generate a first amplitude signal indicative of an amplitude of the first sensor signal; an analog to digital convertor configured to generate a first digital sensor signal based on the first sensor signal; and a signal processor configured to generate an object detection signal based upon the first sensor signal.
Each of the processing steps may be performed sequentially. As such, by generating a signal “based upon the first sensor signal”, it will be appreciated that one or more intermediate steps may also be performed.
Any of the first sensor signal, the first amplified sensor signal, the first filtered sensor signal and the first amplitude signal may be referred to as a first analog signal.
The filter may be a band-pass filter, configured to exclude signal components at frequencies other than the first pulse frequency or multiples thereof.
The amplifier may be configured to generate the first amplified sensor signal based upon the first sensor signal. The filter may be configured to generate the first filtered sensor signal based upon the first amplified sensor signal. The peak detector may be configured to generate the first amplitude signal based upon the first filtered sensor signal. The analog to digital convertor may be configured to generate the first digital sensor signal based upon the first amplitude signal. The signal processor may be configured to generate the object detection signal based upon the first digital sensor signal.
The sensor control circuit may be further configured to receive a second sensor signal from the second sensing element and to perform equivalent processing steps thereon. There is also provided a printer or marking system comprising a printhead or marking head configured to cause a mark to be created on an object moving in the expected movement direction past a marking location, and a sensor according to the first aspect.
The printer or marking system may be configured to receive an object detection signal from the sensor; and initiate printing or marking based upon the object detection signal.
The printer or marking system may further receive an object movement signal indicating a direction and/or speed of the object. Printing or marking may be controlled based upon the object movement signal.
According to a second aspect of the invention there is provided a method of detecting an object. The method comprises: defining a first field of view, the first field of view extending from a first sensing aperture defined by a first wall portion in a sensing direction and comprising a first plurality of radiation paths from locations within the first field of view, via the first sensing aperture, to a first sensing element; defining a second field of view, the second field of view extending from a second sensing aperture defined by a second wall portion in a sensing direction and comprising a second plurality of radiation paths from locations within the second field of view, via the second sensing aperture, to a second sensing element; blocking paths of radiation between locations external to the first and second fields of view and the respective sensing elements; receiving, by the first sensing element, electromagnetic radiation reflected by an object present at a sensing location within the first field of view; generating, by the first sensing element, a first sensor signal indicative of an amount of radiation incident upon the first sensing element; receiving, by the second sensing element, electromagnetic radiation reflected by an object present at a sensing location within the second field of view; generating, by the second sensing element, a second sensor signal indicative of an amount of radiation incident upon the second sensing element; and generating an object detection signal based upon at least one of the first sensor signal and the second sensor signal. By defining fields of view with apertures acting as the primary optical components, it is possible to provide a mechanically simple sensor that is relatively insensitive to contamination (e.g. by ink, or dust), while still permitting accurate edge detection of passing objects, and determination of movement direction and speed.
The method may further comprise, for each of the first and second sensing elements: amplifying the sensor signal; filtering the amplified signal to generate a filtered sensor signal; generating an amplitude signal based on the filtered sensor signal; and converting the amplitude signal to a digital sensor signal. The method may further comprise generating the object detection signal based upon first and/or second digital sensor signals.
The method may further comprise, for each of the first and second sensing elements: generating a baseline sensor signal based upon past values of the sensor signal; and generating a comparison result by comparing the baseline sensor signal with a current sensor signal. The method may further comprise generating the object detection signal based upon the first and/or second comparison results.
The past values may comprise a plurality of discrete past values (e.g. ADC sample values). The baseline sensor signal may be generated by combining (e.g. averaging) the plurality of discrete values. Alternatively, the past values may comprise a continuous analog signal extending over a period of time. Such a continuous signal may be averaged by an analog component (e.g. a capacitor, an integrator) to generate the baseline sensor signal.
Generating the baseline signal may be based on past values of the sensor signal representing at least 5 seconds duration. Generating the baseline signal may be performed at periodic intervals. Generating the baseline signal may be performed at a predetermined time (e.g. at product start-up as part of an initialisation routine).
The current sensor signal for each sensing element may be generated based on a plurality of sensor signal values received from the respective sensing element. The current sensor signal may comprise the amplitude signal. Each sensor signal may comprise a plurality of sensor signal values, each representing a respective point in time. Each of the plurality of sensor signal values may be generated by sampling an analog signal (e.g. the amplitude signal) with an ADC.
Generating the detection signal may comprise identifying a plurality of consecutive sensor signal values of the plurality of sensor signal values that satisfy a predetermined criterion. The predetermined criterion may comprise the plurality of consecutive sensor signal values each exceeding a difference threshold from the baseline signal.
Similar processing may be performed for signals received from each of the sensing elements.
The method may further comprise generating an object movement signal based upon the first sensor signal and/or the second sensor signal.
The object movement signal may comprise an object direction signal and/or an object speed signal.
The object direction signal may be generated based on a comparison between the first and second sensor signals.
The object speed signal may be generated based on a temporal separation between first and second detection signals. The object speed signal may be generated further based on a spatial separation between the first and second sensing locations.
The object speed signal may be generated based on generated object speed data satisfying a speed criterion. The speed criterion may be the speed being greater than a minimum detection speed (e.g. 5 mm/s) and less than a maximum detection speed (e.g. 5 m/s).
The method may further comprise generating a first quadrature signal and a second quadrature signal, each of the first and second quadrature signals comprising a periodic signal component having a frequency that is proportional to the speed of the object, and controlling the phase difference between the first and second quadrature signals based upon a direction of the object. The method may further comprise controlling an industrial printer or marking system based upon the object detection signal.
Controlling the printer or marking system may comprise initiating a printing or marking operation upon receipt of the detection signal.
Generating the object detection signal may further comprise comparing a current first sensor signal and a current second sensor signal and generating the object detection signal if a predetermined criterion is satisfied. The predetermined criterion may comprise a degree of similarity between the first current sensor signal and the second current sensor signal, or the first comparison result and the second comparison result.
By assessing a similarity between the first and second signals and/or first and second comparison results, it is possible to compare the shape of the first and second signals. In this way, anomalous object detection results can be minimised, since in a correct detection result the waveforms of the first and second signals would be expected to be time shifted versions of one another.
A detection result may be rejected if the signals do not match sufficiently.
The method may further comprise controlling the industrial printer or marking system based upon an object movement signal (e.g. an object speed signal).
Further aspects of the invention and optional features are set out in the accompanying claims.
Embodiments of the present invention, given by way of non-limiting example, will be described with reference to the following drawings.
Figure 1 shows a top view of a known printing station;
Figure 2 shows a top view of a printing station including a sensor according to the invention;
Figure 3a shows a front view of a sensor as shown in Figure 2 in more detail;
Figure 3b shows a top cross-section view of a sensor as shown in Figure 2 in more detail;
Figures 4a and 4b show a sensor as shown in Figure 2 in more detail;
Figure 5 shows a side cross-section view of part of a sensor as shown in Figure 2 in more detail; and
Figure 6 shows a block diagram of a control circuit for a sensor as shown in Figure 2.
Figure 2 illustrates schematically a printing station 200 according to the invention. A printer 201 having a printhead 202 is positioned adjacent to a conveyor 203. Products 204 are caused to move along a predefined movement path MP in an expected movement direction 205 past a marking location 206. At the marking location, the printhead 202 applies a mark 204a to each of the products 204.
In order to position the mark correctly on each object, a sensor 208 constructed according to the invention is positioned upstream of the printer 201 to detect an approaching object 204 and trigger printing.
Generally speaking, during marking operations, the direction of movement of the products 204 is known (i.e. the expected movement direction 205). However, during configuration, this may not be known. Moreover, the printer 201 may not know the direction of movement (i.e. whether left-right, or right-left). It is desirable, therefore, to provide information regarding the direction of movement along the predefined movement path MP to the printer.
It will be appreciated that references to the predefined movement path MP and the expected movement direction 205 may be used interchangeably in some instances. That is, where reference is made to distances, or directions, the terms “predefined movement path” MP and “expected movement direction” 205 may be used interchangeably, since a direction parallel or perpendicular to one of these terms is equivalent to a direction parallel or perpendicular to the other of these terms.
The printer may be an industrial printer or marking system. For example, the printer may be a non-contact printer such as, for example, a continuous inkjet printer (with a single, or multiple jets), a drop on demand inkjet printer, or a laser marking system. Generally, the printer will be controlled separately from the location of objects 204 to be marked. That is, unlike a document printing system (e.g. a desktop printer) in which the printer typically controls both the printhead and the medium upon which the image is to be printed, or a laser writing system in which an object is secured to a stage that is under common control with the laser beam, the industrial printer or marking system in the present case may itself have no direct knowledge or control of the location of the objects being marked. For example, the printer or marking system may be arranged next to or above a production line or conveyor belt 203, and may be required to print on objects 204 as and when they appear at the marking location 206. The supply of objects may be intermittent, with variable spacing and speed. The printer 201 may have no internally available source of information regarding the location or speed of objects to be marked.
The printer 201 may, for example, be an ink jet printer. It may comprise means for deflecting the ink drops in flight, so that different drops can travel to different destinations. Typically, the ink is electrically conductive when wet, and the printer comprises an arrangement of electrodes to trap electric charges on the ink drops and create electrostatic fields in order to deflect the charged drops. Normally, the ink jet printer has a print head that is separate from the main printer body and is connected to the main printer body by a flexible connector sometimes known as a conduit or umbilical that carries fluid and electrical connections between the print head and the main printer body.
The printhead 202 may include a droplet generator that receives pressurised ink and allows it to exit through an orifice to form a jet of ink, a charge electrode for trapping electric charges on drops of ink, deflection electrodes for creating an electrostatic field for deflecting charged drops of ink, and a gutter for collecting drops of ink that are not used for printing. The umbilical will include fluid lines, for example for providing pressurised ink to the ink gun and for applying suction to the gutter and transporting ink from the gutter back to the main printer body, and electrical lines, for example to provide a drive signal to a piezoelectric crystal or the like for imposing pressure vibrations on the ink jet, to provide electrical connections for the charge electrode and the deflection electrodes, and to provide drive currents for any valves that may be included in the print head.

The sensor 208 is configured to detect objects approaching the marking location 206 in the expected direction of movement 205. The sensor 208 detects objects as they enter a sensing location 209. The sensing location 209 extends from the sensor 208 in a sensing direction 210. The sensor 208 is optimally arranged such that the sensing direction 210 is perpendicular to the expected direction of movement 205 of the object 204 on the conveyor 203.
The printer 201 is configured to delay the start of printing, following detection of an approaching object by the sensor 208, by the time it takes the object to travel a delay distance 211 from the sensing location 209 to the marking location 206.
The printer 201 further comprises a printer main body 207 (e.g. housing an ink supply, or a laser source), and a printer controller 212.
Figures 3a and 3b show the sensor 208 in more detail. Figure 3a shows a front view of the sensor 208, with the viewing position being that of an object in the sensing location 209. Figure 3b shows a cross-section view looking from above, with the cross section taken along line A-A’, shown in Figure 3a.
The sensor 208 comprises a first sensing element 300 and a first sensing aperture 302. The first sensing aperture 302 is configured to allow a limited portion of external electromagnetic radiation to travel towards the first sensing element 300. Together, the first sensing aperture 302 and the first sensing element 300 define a first field of view 304. The first field of view 304 extends from the first sensing aperture 302 in the sensing direction 210. The first field of view 304 comprises a first plurality of radiation paths 304a, 304b, from locations within the first field of view 304, via the first sensing aperture 302, to the first sensing element 300. The first field of view 304 comprises a first central axis 306, which extends from the sensor 208 in the sensing direction 210.
The sensor further comprises a second sensing element 310 and a second sensing aperture 312. The second sensing aperture 312 is configured to allow a limited portion of external electromagnetic radiation to travel towards the second sensing element 310. Together, the second sensing aperture 312 and the second sensing element 310 define a second field of view 314. The second field of view 314 extends from the second sensing aperture 312 in the sensing direction 210. The second field of view 314 comprises a second plurality of radiation paths 314a, 314b, from locations within the second field of view 314, via the second sensing aperture 312, to the second sensing element 310. The second field of view 314 comprises a second central axis 316, which extends from the sensor 208 in the sensing direction 210. The first central axis 306 and the second central axis 316 are parallel to one another.
The sensor 208 further comprises a sensor housing 320 configured to enclose and separate the first sensing element 300 and the second sensing element 310. The sensor housing 320 further defines the first sensing aperture 302 and the second sensing aperture 312. The sensor housing 320 further prevents any radiation from the second sensing aperture 312 from reaching the first sensing element 300, or any radiation from the first sensing aperture 302 from reaching the second sensing element 310. The sensor housing 320 comprises a first wall portion 320a defining the first sensing aperture 302 and a second wall portion 320b defining the second sensing aperture 312.
A first sensing cavity 301 is formed enclosing the first sensing element 300 and the volume of space between the first sensing aperture 302 and the first sensing element 300. The first sensing element 300, first sensing cavity 301 and first sensing aperture 302 together comprise a first sensing arrangement 303.
A second sensing cavity 311 is formed enclosing the second sensing element 310 and the volume of space between the second sensing aperture 312 and the second sensing element 310. The second sensing element 310, second sensing cavity 311 and second sensing aperture 312 together comprise a second sensing arrangement 313.
A third wall portion 320c separates the first and second sensing cavities 301, 311.
The first and second sensing arrangements 303, 313 are preferably substantially similar. That is, by providing first and second sensing elements 300, 310 having substantially the same dimensions, first and second sensing apertures 302, 312 having substantially the same dimensions, and spatial relationships between the first aperture 302 and the first sensing element 300, and the second aperture 312 and the second sensing element 310 that are substantially the same, it is possible to provide similar fields of view. In some cases, first and second sensing arrangements 303, 313 may be identical to one another, but spatially separated in the expected direction of movement 205.
The sensor housing 320 is configured to block paths of radiation 322a, 322b between locations external to the first field of view 304 and the first sensing element 300 and locations external to the second field of view 314 and the second sensing element 310. The paths 322a, 322b may be blocked by the external surface of the housing 320 (see e.g. path 322b, which is blocked by wall portion 320a), or alternatively may be blocked as a result of the internal structure (e.g. wall portion 320c) after passing through one of the apertures 302, 312 (e.g. path 322a).
In the described and illustrated arrangement, the sensor housing 320 defines the first sensing cavity 301 fully enclosing (except for the first aperture 302) the first sensing element 300, and the second sensing cavity 311 fully enclosing (except for the second aperture 312) the second sensing element 310. However, in some embodiments, the sensor housing 320 may not fully enclose the first and second sensing elements 300, 310. Rather, the sensor housing 320 may be configured to shield the first and second sensing elements 300, 310, and to define the first and second fields of view 304, 314. That is, the sensor housing 320 is configured to shield the first and second sensing elements 300, 310 from radiation reflected from, or otherwise originating from, locations outside of the respective fields of view 304, 314.
Providing fully enclosed sensing cavities 301, 311 (as illustrated) is one way of providing such shielding, but other possibilities exist. For example, openings other than the apertures 302, 312 may be provided. In some circumstances other openings may be provided and the housing arranged such that no direct paths exist for radiation to pass from a shielded region surrounding the fields of view 304, 314 to the first and second sensing elements 300, 310. In such an arrangement, some radiation (e.g. ambient radiation) may enter the sensing cavities 301, 311, but may still be substantially prevented from reaching the first and second sensing elements 300, 310 (e.g. by preventing direct radiation paths, and reducing internal reflections to the extent that any stray radiation reaching the first and second sensing elements 300, 310 is minimised). Preferably, the housing 320 is configured to block all direct paths of radiation to the first and second sensing elements 300, 310, other than those passing through the first and second apertures 302, 312.
The first aperture 302 and the second aperture 312 are separated by a sensor separation distance 324 from each other in the expected direction of movement 205 of the object. The separation distance 324 may be measured from like parts of the first and second apertures 302, 312. Equivalently, the separation distance 324 may be measured between the first and second central axes 306, 316 of the first and second fields of view 304, 314.
Figure 4a shows a cross-section of the first sensing arrangement, the cross-section taken along the line A-A’ shown in Figure 3a. Figure 4b shows a cross-section of the first sensing arrangement 303, the cross-section taken along the line B-B’ shown in Figure 3a. That is, Figure 4a shows a view that is similar to part of the view shown in Figure 3b, whereas Figure 4b shows a view that is looking from a direction parallel to the direction of movement 205.
The first sensing aperture 302 has an aperture width 302w in the expected direction of movement 205. Similarly, the second sensing aperture 312 has an aperture width 312w in the expected direction of movement 205 (Figures 3a, 3b).
The first sensing aperture 302 has a height 302h perpendicular to the expected direction of movement 205, and perpendicular to the width 302w. The second sensing aperture 312 has a height 312h (not shown) perpendicular to the expected direction of movement 205, and perpendicular to the width 312w.
The first sensing element 300 has a width 300w in the expected direction of movement 205, and a height 300h perpendicular to the expected direction of movement 205, and perpendicular to the width 300w. The second sensing element 310 has a width 310w in the expected direction of movement 205, and a height 310h (not shown) perpendicular to the expected direction of movement 205, and perpendicular to the width 310w.
The sensor has a first sensing length 300I (also referred to as aperture separation) between the first sensing aperture 302 and the first sensing element 300, and a second sensing length 310I between the second sensing aperture 312 and the second sensing element 310.
The first field of view 304 has a parallel viewing angle 304a extending in a direction parallel to the expected direction of movement of the object 205 and a perpendicular viewing angle 304p extending in a direction perpendicular to the expected direction of movement of the object 205. The viewing angles may be referred to as divergence angles.
The parallel and perpendicular viewing angles are defined by the geometry of the first sensing element 300, the first aperture 302, and the first sensing length 300I according to the following equations:

304a = 2 arctan((300w + 302w) / (2 × 300I))   Equation (1)

304p = 2 arctan((300h + 302h) / (2 × 300I))   Equation (2)
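By way of illustration, Equations (1) and (2) may be evaluated numerically. The following Python sketch is illustrative only: the function names are assumptions, and the example dimensions (a 2.65 mm sensing element, a 1 mm aperture, and a 20 mm sensing length) are those quoted later in this description, reproducing the 10.43 degree viewing angle and the roughly 2.83 mm field of view width at 10 mm noted there.

```python
import math

def viewing_angle_deg(element_mm, aperture_mm, sensing_length_mm):
    """Equations (1)/(2): full viewing (divergence) angle of an
    aperture-defined field of view."""
    half = math.atan((element_mm + aperture_mm) / (2 * sensing_length_mm))
    return 2 * math.degrees(half)

def fov_width_mm(element_mm, aperture_mm, sensing_length_mm, distance_mm):
    """Width of the field of view at a given distance beyond the aperture."""
    return aperture_mm + (element_mm + aperture_mm) * distance_mm / sensing_length_mm

print(f"{viewing_angle_deg(2.65, 1.0, 20.0):.2f} degrees")  # 10.43
print(f"{fov_width_mm(2.65, 1.0, 20.0, 10.0):.3f} mm")      # 2.825, i.e. ~2.83 mm
```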
In use, radiation (e.g. visible light) is reflected from objects 204 moving along the conveyor 203 past the sensor 208. The apertures 302, 312 cooperate with the sensing elements 300, 310 to define the fields of view 304, 314. Radiation that is reflected from the portion of the objects within the fields of view 304, 314 is permitted to travel towards the respective sensing elements 300, 310. However, radiation reflected from, or otherwise originating from, locations outside of the fields of view is not permitted to reach the sensing elements 300, 310.
The apertures 302, 312 are provided by openings in the housing 320. The openings may be any suitable shape or size. For example, circular, or an elongate opening, such as a slot. The configuration of the sensor in this way, with apertures, rather than lenses, provides a mechanically simple sensor that is relatively insensitive to contamination (e.g. by ink, or dust).
The first and second pluralities of radiation paths 304a, 304b, 314a, 314b (see Figure 3b) from locations within the first and second fields of view to the respective first and second sensing elements do not, therefore, pass through a lens. That is, the extent of the field of view for each sensing element is not defined by a lens. Rather, the field of view for each sensing element is defined by the respective aperture 302, 312, sensing element 300, 310, and geometry of the housing 320, as described above with reference to Equations (1) and (2). The primary optical components between the sensing elements and a detected object are therefore apertures, and not lenses.
By avoiding the use of a lens to define the field of view, a single focal distance is avoided, allowing sensing across a broad range of depths to be performed. Further still, the use of a lens may limit the geometry of the sensor in some way (e.g. the spacing between the first and second apertures 302, 312).
While lenses are avoided, the apertures may, in some cases, be covered by a transparent window. Such a transparent window would serve to prevent the ingress of debris (e.g. ink or dust) into the sensing cavity, but would not have any optical power, and would not, therefore, act as a lens. A lens having a very low optical power could be used in some examples. Such a lens would not significantly affect the extent of the field of view.
The sensor housing 320 may comprise a single housing enclosing (or shielding) both of the first and second sensing elements 300, 310, and providing each of the first and second sensing apertures 302, 312. Alternatively, the sensor housing may comprise multiple housing portions, one enclosing (or shielding) the first sensing element and providing the first sensing aperture, and a second one enclosing (or shielding) the second sensing element and providing the second sensing aperture. For example, the first wall portion 320a and second wall portion 320b may be removably connected to the remainder of the housing 320, and are not necessarily integrally formed therewith.
In a further alternative, part, or even all, of the sensor housing 320 may be provided by components of a printer in which the sensor is integrated. For example, a printer housing, or printhead housing, may provide some or all of the shielding or blocking functionality described herein as being provided by the sensor housing 320.
An internal surface 321 of the housing 320 may comprise a low reflectance surface. A low reflectance surface may suppress internal reflections, minimising the risk that stray radiation will reach the sensing element from locations other than the sensing region. Such a housing 320 may allow a high gain to be applied to the sensing elements 300, 310, thereby maximising sensitivity. If the housing were omitted, and ambient light were permitted to reach the sensing elements 300, 310, it would be more difficult to identify signal changes caused by objects entering the sensing location 209.
It will, of course, be appreciated that some stray light (e.g. ambient light) may be able to reach the sensing elements 300, 310 in some instances. However, it is preferred that the extent of such radiation is reduced to the extent possible.
Figure 5 shows a further cross-section of the first sensing arrangement 303, equivalent to the view shown in Figure 4b, but with additional parts included.
The first sensing arrangement further comprises a first radiation source 340. The first radiation source 340 comprises a light emitting diode. The first radiation source 340 is configured to emit a first beam of electromagnetic radiation 342 in the sensing direction 210. It will be understood that the first beam of electromagnetic radiation 342 does not need to extend directly in the sensing direction 210, but should have a component in that direction.
The first beam of electromagnetic radiation 342 comprises a central axis 344 perpendicular to the expected direction of movement of the object 205 (which is in a direction into or out of the page in the orientation shown in Figure 5). The first beam of radiation 342 has a divergence angle 342a. In one arrangement, the divergence angle 342a may be around 8 degrees.
An offset angle 348a is defined between the central axis 344 of the first beam of electromagnetic radiation emitted by the first radiation source 340 and the central axis 306 of the first field of view 304.
A first sensing region 346 is defined by an overlap between the first beam of electromagnetic radiation 342 and the first field of view 304, and is shown hatched both horizontally and vertically in Figure 5. In the first sensing region 346, radiation from the first radiation source 340 can be reflected from an object and allowed to travel towards the first sensing element 300. An object that is outside of the first sensing region 346 but within the first beam of electromagnetic radiation 342 may reflect that radiation, but any reflected radiation will not be within the field of view 304 of the sensing element 300, and so will not be detected. Similarly, an object that is outside of the first sensing region 346 because it is outside of the range of the first beam of electromagnetic radiation 342 will not reflect any radiation towards the sensing element 300, and so will also not be detected. As such, objects outside of the first sensing region 346 will not cause radiation from the radiation source 340 to be reflected towards the sensing element 300.
As shown in Figure 5, a first object OBJ1 is partially in the first field of view 304. Since this object OBJ1 is not within the first beam of radiation 342, it will not be detected by the first sensing arrangement 303. A second object OBJ2 is fully in the first field of view 304 and within the first beam of radiation 342, and will therefore be detectable by the first sensing arrangement 303. A third object OBJ3 is partially in the beam of radiation 342, but entirely outside the first field of view 304, and will therefore not be detectable by the first sensing arrangement 303.
The first sensing region 346 may thus comprise a region in which, at each location there exists a radiation path from the first radiation source 340, and a radiation path to the first sensing element 300, via the first sensing aperture 302. The radiation paths may each comprise direct radiation paths.
It will, of course, be appreciated that, while a simple embodiment is described here for clarity of explanation, other embodiments will fall within the scope of the invention. For example, where it is discussed that direct radiation paths exist from the radiation source, and to the sensing elements, it is possible that indirect radiation paths may also be constructed. Indeed, the radiation source and/or the sensing element may be provided with one or more mirrors configured to reflect radiation in a predetermined and intended way. For example, in order to provide flexibility in the alignment and design of the sensor 208, one or more mirrors may be used within the sensor housing in order to direct the radiation beam, or the paths of radiation incident upon the sensing element. As such, while it is generally the case in the described examples, it is not necessary for a direct radiation path to exist from the first radiation source 340 to all locations within the first sensing region 346. Similarly, it is not necessary for a direct radiation path to exist from all locations within the first sensing region 346 to the first sensing element 300, via the first sensing aperture 302.
The first sensing arrangement comprises a minimum sensing distance SENSE_MIN between the first sensing aperture 302 and the onset of the first sensing region 346 in the sensing direction, and a maximum sensing distance SENSE_MAX between the first sensing aperture 302 and the end of the first sensing region 346 in the sensing direction. Selection of the divergence angle of the beam of radiation 342a, the viewing angle of the field of view 304a, and the offset angle 348a between the beam of radiation 342 and the field of view 304, as well as the relative positions of the first sensing arrangement 303 and first radiation source 340, allows the minimum sensing distance SENSE_MIN and maximum sensing distance SENSE_MAX to be determined and controlled.
In a preferred configuration, the offset angle 348a between the beam of radiation 342 and the field of view 304 is around 25 degrees. The divergence angle of the beam of radiation 342a is around 3 degrees, and the viewing angle of the field of view 304a is around 25 degrees. Such an arrangement may provide for a minimum sensing distance SENSE_MIN of 1.5 mm and a maximum sensing distance SENSE_MAX of 30 mm.
It will be understood that the selection of the minimum and maximum sensing distances allows spurious detection events to be minimised. For example, if a sensor is configured to look horizontally across a conveyor, detection signals will not be generated by reflections from highly reflective objects passing at the other side of the conveyor, provided they are further from the sensor than the maximum sensing distance. A convenient sensing distance range may be 1.5 mm (minimum) to 30 mm (maximum). Similarly, if a sensor is configured to look vertically down onto a conveyor, detection signals will not be generated by reflections from highly reflective parts of the conveyor itself.
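The way these angles bound the sensing region can be illustrated with a small two-dimensional model. In the Python sketch below, the aperture is placed at the origin and the radiation source 3 mm to one side, with the bisecting line along the x axis; the 3 mm baseline is an assumed value chosen purely for illustration (the description above does not specify it), so the printed distances show the behaviour of the geometry rather than reproducing the 1.5 mm and 30 mm example figures.

```python
import numpy as np

def ray(origin, angle_deg):
    """A ray as (origin, unit direction), angle measured from the x axis."""
    a = np.radians(angle_deg)
    return np.array(origin, dtype=float), np.array([np.cos(a), np.sin(a)])

def intersect(r1, r2):
    """Intersection point of two rays, or None if parallel/behind an origin."""
    (p, d), (q, e) = r1, r2
    denom = d[0] * e[1] - d[1] * e[0]
    if abs(denom) < 1e-12:
        return None
    w = q - p
    t = (w[0] * e[1] - w[1] * e[0]) / denom   # distance along r1
    s = (w[0] * d[1] - w[1] * d[0]) / denom   # distance along r2
    return p + t * d if t >= 0 and s >= 0 else None

offset, fov_half, beam_half = 12.5, 12.5, 1.5      # degrees, as above
aperture, source = (0.0, 0.0), (0.0, -3.0)         # mm (baseline assumed)

# Field of view tilted one way, beam tilted the other: 25 degrees apart.
fov = [ray(aperture, -offset - fov_half), ray(aperture, -offset + fov_half)]
beam = [ray(source, offset - beam_half), ray(source, offset + beam_half)]

pts = [p for f in fov for b in beam if (p := intersect(f, b)) is not None]
xs = [p[0] for p in pts]
print(f"sensing region spans ~{min(xs):.1f} mm to ~{max(xs):.1f} mm")
```

Narrowing the divergence angles, or moving the assumed source position closer to the aperture, pulls SENSE_MIN and SENSE_MAX in accordingly.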
In use, the sensor 208 may typically be oriented such that a bisecting line 345 between the central axis 306 of the first field of view 304, and the central axis 344 of the beam of radiation 342 is aligned with the sensing direction 210. It is noted that when illustrated without a radiation source (as shown in Figures 3b, 4a, 4b), the sensing direction is shown parallel with the central axis 306 of the first field of view 304. However, this is not a requirement, and alignment between the bisecting line 345 and the sensing direction 210 may be preferred (although again, is not essential).
In such a preferred arrangement, where the sensing direction 210 is configured to be in a horizontal direction, the central axis 306 of the first field of view 304, and the central axis 344 of the beam of radiation 342 will each be 12.5 degrees from the horizontal (for an offset angle of 25 degrees). Such an arrangement may provide increased sensitivity, for example where an object being detected has a low level of reflectivity for the radiation being emitted from the radiation source. The marking direction of the marking head 202 is generally arranged to be perpendicular to the surface being marked because this gives the most accurate mark. For most accurate and consistent object detection, the sensing direction 210 and the marking direction are optimally parallel. Aligning the bisecting centre-line 345 with the sensing direction 210 will mean that the bisecting centre-line 345 is perpendicular to the surface being detected, which will maximise the light reflected in the direction of the field of view 304 of the sensing arrangement 303.
The sensor 208 may further comprise a second radiation source configured to emit a second beam of electromagnetic radiation in the sensing direction. A second sensing region is defined by an overlap between the second beam of electromagnetic radiation and the second field of view. The second radiation source may have similar beam characteristics to those of the first radiation source. A central axis of the second beam of radiation may be parallel to the central axis 344 of the first beam of radiation, and perpendicular to the expected direction of movement 205 of the object. The second radiation source may be arranged in a similar manner to that shown in Figure 5, although with the orientation being relative to the second sensing arrangement 313, rather than the first sensing arrangement 303.
While not essential, providing separate radiation sources for each of the sensing elements may allow improved accuracy. For example, since each radiation beam can be oriented similarly with respect to the respective field of view it is possible to reduce parallax errors. The first and second radiation sources are preferably configured such that no direct radiation paths exist from the first and second radiation sources to the first sensing element or the second sensing element. The first and second radiation sources may be provided externally of the housing 320 (although could alternatively be provided within a single housing, with suitable internal light blocking structures).
The operation of the sensor 208 is controlled by a sensor control circuit 220, as shown schematically in Figure 6. The sensor control circuit 220 comprises a first LED driver 221 configured to provide a drive signal 222 to the first radiation source 340, and a second LED driver 231 configured to provide a drive signal 232 to the second radiation source 350. The first sensing element 300 is configured to generate a first sensor signal 223.
The sensor control circuit 220 further comprises a first amplifier 224 configured to receive the first sensor signal 223. A first amplified sensor signal 225 is then passed to a first band-pass filter 226 to produce a first filtered sensor signal 227.
The sensor control circuit 220 further comprises a second amplifier 234 configured to receive a second sensor signal 233 generated by the second sensing element 310 and produce a second amplified sensor signal 235, and a second band-pass filter 236 for filtering the second amplified signal 235 to produce a second filtered sensor signal 237.
The filtered signals 227, 237 are each passed to a respective peak detector 228, 238, where further processing is performed to smooth the filtered signals 227, 237 and to generate respective first and second amplitude signals 229, 239 indicative of an amplitude of the filtered sensor signals 227, 237.
The first and second amplitude signals 229, 239 are passed to a signal processor 240, for further processing. The signal processor 240 may comprise an analog-to-digital convertor configured to generate first and second digital sensor signals. Any of the first and second sensor signals, the first and second amplified sensor signals, the first and second filtered sensor signals and the first and second amplitude signals may be referred to as analog sensor signals. The first and second sensing elements 300, 310 may each be photodiodes, such as, for example, part number SFH2440L manufactured by OSRAM Opto Semiconductors GmbH, Germany. The use of simple, single pixel, photodiodes provides a convenient sensing element, especially in view of their highly linear output, low noise and high speed operation.
The radiation sources 340, 350 may be red LEDs, such as, for example, the VLCS5830 part manufactured by Vishay Intertechnology, Inc., United States of America.
Further operations will be described with reference to the first sensing arrangement 303, although similar operations are generally (although not necessarily) performed for both sensing arrangements 303, 313.
In use, the first LED driver 221 applies a pulsed driving signal 222 to the first radiation source 340. A first pulse frequency of 100 kHz may be convenient (although other frequencies can be selected as required).
When no objects are present in the sensing region 346, no radiation is reflected back to the sensing element 300. On the other hand, when an object 204 enters the sensing region 346, radiation emitted by the radiation source 340 is reflected by the object 204 back towards the sensor 208. The reflected radiation passes through the aperture 302 and impinges on the sensing element 300. The first sensor signal 223 changes in response to the change in radiation intensity impinging on the sensing element 300.
Changes in the first sensor signal 223 are amplified by the amplifier 224 to produce the first amplified sensor signal 225. Components of the signal 225 at the first pulse frequency are then selectively filtered by the band-pass filter 226 which removes substantially all of the signal except for the 100 kHz component. The peak detector 228 creates the first amplitude signal 229 based on the amplitude of the first filtered sensor signal 227. This amplitude signal 229 is representative of the amount of radiation from the first radiation source 340 which impinges on the first sensing element 300 and is passed to the signal processor 240. In this way it is possible to obtain radiation level signals from the first sensing element 300 at the first pulse frequency only. By pulsing the radiation source 340 and filtering the sensor signal at a corresponding frequency, it is possible to suppress ambient radiation, such as that from daylight or electric lighting, incident upon the sensing element 300. In particular, radiation that is able to impinge upon the sensing element 300, but which has not originated from the radiation source 340 will be rejected by the band-pass filter 226, since it will not have a component at the pulse frequency. The second radiation source 350 may be pulsed in a similar manner.
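This pulse-and-filter scheme can be sketched numerically as follows. The Python example below (assuming NumPy and SciPy are available) is illustrative only: the amplitudes, the mains-flicker ambient term and the noise level are invented for the demonstration, and the Hilbert-envelope step merely stands in for the peak detector 228.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs, f_pulse = 1_000_000, 100_000           # sample rate and pulse frequency (Hz)
t = np.arange(0, 0.002, 1 / fs)            # 2 ms of simulated signal

# Simulated photodiode signal: 100 kHz pulses reflected only while an
# object is present (after 1 ms), plus ambient light and noise.
pulses = (np.sin(2 * np.pi * f_pulse * t) > 0).astype(float)
present = (t > 0.001).astype(float)
raw = (0.3 * present * pulses
       + 0.5 * (1 + np.sin(2 * np.pi * 100 * t))    # mains-flicker ambient
       + 0.02 * np.random.randn(t.size))            # broadband noise

# Band-pass around the pulse frequency, then take the envelope: the
# ambient component has no 100 kHz content and is rejected.
b, a = butter(2, [90e3 / (fs / 2), 110e3 / (fs / 2)], btype="band")
envelope = np.abs(hilbert(filtfilt(b, a, raw)))

print(envelope[: t.size // 2].mean())   # near zero: no object, ambient rejected
print(envelope[t.size // 2:].mean())    # clearly larger: object reflecting pulses
```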
The signal processor 240 converts the first amplitude signal 229 to a series of digital values (e.g. by taking samples at a sampling frequency), creating a first digital signal. Digital signal values may then be processed in various ways to generate various further control signals.
A primary function of the signal processor 240 is to generate an object detection signal 241. The object detection signal 241 may be generated by identifying a high signal level (e.g. when a highly reflective object has entered the sensing region 346). If the sensor is configured to generate an ‘active high’ output signal (i.e. the signal has a low level when no object is present), a transition from a low signal value to a high signal value (e.g. the crossing of a signal threshold level) in the amplitude signal 229 can be used to identify that an object has entered the sensing region 346. A first (binary) detection signal 242 may thus be identified by the signal processor 240 based on the received (and digitised) first amplitude signal 229.
A similar second detection signal 243 may be identified by the signal processor 240 based on the second amplitude signal 239, based on a signal level change caused by an object entering the sensing region associated with the second sensing element 310.
In many instances it is preferred that the same detection method, for example crossing a similar threshold, is used for both sensor arrangements 303, 313, so that they operate in the same way. This allows a time period T between the two identified signals 242, 243 to be used to generate object movement data 244.
In more detail, by comparing the timing of the first and second detection signals 242, 243 (i.e. the temporal separation), and knowing the sensor separation distance 324, the signal processor 240 is further able to calculate data indicating the speed of the object (e.g. the linear speed of the object in the direction 205), according to the following equation:
V = S / T   Equation (3)
where V is the speed of the object, S is the separation distance (e.g. distance 324), and T is the time period between the two detection signals 242, 243. It will be understood that the data indicating the speed of the object may be in any convenient form, and is not required to have units of metres per second.
The data indicating the speed of the object may be referred to as object speed data 244a. The object speed data 244a may be validated based on the data satisfying a speed criterion. The speed criterion may be the speed being greater than a minimum detection speed (e.g. 5 mm/s) and less than a maximum detection speed (e.g. 5 m/s). In this way, it is possible to discount spurious speed data.
Further, by comparing the order in which the first and second detection signals 242, 243 are generated, it is also possible to generate object direction data 244b. The object speed data 244a and the object direction data 244b may together be referred to as object movement data 244.
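The movement-data calculation and validation described above might be sketched as follows; the function and its return convention are illustrative assumptions rather than a prescribed implementation.

```python
def object_movement(t_first, t_second, separation_mm,
                    v_min=0.005, v_max=5.0):
    """Speed and direction from the two detection times (Equation (3):
    V = S / T), validated against the 5 mm/s to 5 m/s window.

    Returns (speed in m/s, direction) or None for a spurious reading;
    direction is +1 when the first sensing arrangement detects first."""
    dt = t_second - t_first            # seconds between detections
    if dt == 0:
        return None
    speed = (separation_mm / 1000.0) / abs(dt)
    if not v_min <= speed <= v_max:
        return None                    # discount spurious speed data
    return speed, (1 if dt > 0 else -1)
```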
The signal processor 240 may be configured to generate one or more output signals for interfacing with a printer. The output signals may include the object detection signal 241. The output signals may further (or instead) include an object speed signal 245. The object speed signal 245 may be derived from the object speed data 244a, and in some instances may simply comprise a speed value that can be interpreted by the printer (or other device). The object speed signal 245 may comprise a periodic signal component (e.g. a square waveform) having a frequency that is proportional to the speed of the object.
In the illustrated example, the signal processor 240 is configured to synthesize and output quadrature pulse output signals 245a and 245b, which together form the object speed signal 245. Such signals mimic the two output signals of a standard quadrature shaft encoder. A standard quadrature shaft encoder output signal comprises two square wave signals with a slight phase difference between them (e.g. 90 degrees). The direction of encoder rotation is given by which of the two square wave signals leads (i.e. transitions high or low before the other one of the two square wave signals), and the speed of rotation is proportional to the frequency of the pulses. That is, each of the first quadrature output signal 245a, and the second quadrature output signal 245b has a periodic signal component (e.g. a square waveform) having a frequency that is proportional to the speed of the object. The phase difference between the two signals (e.g. +90 degrees, or -90 degrees) is controlled based upon the direction of the object relative to the sensor.
Such quadrature encoders are routinely used in industrial processing environments to monitor movement of production lines (e.g. conveyor belts). The processor 240 may calculate the necessary timings of two such square wave signals based on the object speed data 244a and the object direction data 244b and synthesize the equivalent square wave output signals 245a and 245b.
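A sketch of such synthesis follows; the millimetres-per-pulse scale factor is an assumed parameter, since the encoder resolution being emulated is not specified above.

```python
import numpy as np

def quadrature_outputs(speed_mm_s, direction, mm_per_pulse=1.0,
                       fs=10_000, duration_s=0.1):
    """Synthesize the two square wave signals 245a, 245b: pulse
    frequency proportional to object speed, with a +/-90 degree phase
    between the channels encoding direction."""
    f = speed_mm_s / mm_per_pulse                    # pulses per second
    t = np.arange(0, duration_s, 1 / fs)
    phase = 2 * np.pi * f * t
    shift = direction * np.pi / 2                    # lead or lag by 90 degrees
    ch_a = (np.sin(phase) > 0).astype(int)
    ch_b = (np.sin(phase - shift) > 0).astype(int)
    return ch_a, ch_b
```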
The signal processor 240 may also be configured to generate the object detection signal 241 based on either (or both) of the first and second detection signals 242, 243.
When received by a printer, the object speed signal 245 may be used to adjust the printing operation. For example, printing may be adjusted to ensure the correct spacing of the printing in the direction of movement of the object, and/or to adjust other factors that control print quality.
In order to enhance the sensitivity of the sensor, and to better discriminate against background noise, or sensor sensitivity variation, a sensor baseline signal may be measured and stored for each sensing element. In particular, the signal processor 240 may store a baseline sensor signal based upon past values of the amplitude signals 229, 239. The past values may comprise a plurality of discrete past values (e.g. ADC sample values) with the baseline sensor signal being generated by combining (e.g. averaging) the plurality of discrete values. The baseline signal may, for example, be based on past values of the amplitude signal representing at least 5 seconds duration when no object is present in the sensing region 346 (although different durations or configurations can be used). Generating the baseline signal may be performed at periodic intervals, and/or at a predetermined time (e.g. at product start-up as part of an initialisation routine). In this way, it is possible to determine a representative baseline signal against which current sensor values can be compared. A comparison result is then generated by comparing the respective baseline signal with a current value of the amplitude signal 229, 239. The detection signal 241 may then be generated based on the comparison result.
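A minimal sketch of this baseline-and-compare step, assuming digitised amplitude samples (the 40 kS/s rate and 0.3 V threshold used here are the example values given in the following paragraphs):

```python
import numpy as np

def store_baseline(samples, fs=40_000, window_s=5.0):
    """Average at least 5 s of object-free amplitude samples into a
    baseline level for one sensing element."""
    n = int(fs * window_s)
    return float(np.mean(samples[-n:]))

def compare_to_baseline(current_value, baseline, threshold_v=0.3):
    """Comparison result: True when the current amplitude value sits
    more than threshold_v above the stored baseline."""
    return (current_value - baseline) > threshold_v
```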
The analog amplitude signals may also be processed by the processor 240 based on a plurality of amplitude signal values digitally sampled over a short period of time. Each of the amplitude signals may comprise a plurality of digital values, each representing signal amplitude at a respective point in time. Each of the plurality of amplitude signal values may be generated by sampling an analog signal with an ADC (e.g. as part of the signal processor 240). The ADC sampling rate may be selected based upon the expected speed of object movement. A sampling rate of 40,000 samples per second may be suitable.
In some embodiments, the past values may comprise a continuous analog signal extending over a period of time. Such a continuous signal may be averaged by an analog component (e.g. a capacitor, an integrating amplifier - not shown) to generate the baseline sensor signal, which is then provided to the signal processor 240.
It will be understood that the above described peak detectors 228, 238 may also perform a signal smoothing operation. However, in some embodiments alternative combinations of digital and/or analog components may be selected to perform a similar function.
Generating the object detection signal 241 may comprise identifying a plurality of consecutive sensor signal values of the plurality of sensor signal values (or values of amplitude signals 229, 239) that satisfy a predetermined criterion. For example, the object detection signal 241 may be generated when the sampled amplitude signal 229 is above a threshold value for a predetermined period of time (e.g. 0.1 to 1 millisecond). Such a period may correspond to 4-40 samples, when using the sampling rate of 40,000 samples per second. Further, where a comparison with a baseline signal is performed, the predetermined criterion may comprise the plurality of consecutive sensor output signal values each exceeding a difference threshold from the baseline signal. For example, for a signal exhibiting a noise level of around 100 mV, a threshold of around 300 mV above a baseline signal for the particular sensor signal may provide robust detection while also effectively rejecting noise and avoiding false triggering. Other detection arrangements may be preferred, depending on the particular characteristics of the signals.
By treating the sensor output signals in this way, it is possible to reduce the effects of signal noise, since individual sensor readings that are high or low can be effectively ignored, and a consistent sensor signal acted upon.
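That consecutive-sample criterion might be sketched as follows (8 consecutive samples at 40 kS/s corresponds to 0.2 ms, within the 0.1 to 1 millisecond window suggested above; the names are illustrative):

```python
def object_detected(samples, baseline, threshold_v=0.3, n_consecutive=8):
    """Assert detection only once n_consecutive samples in a row each
    exceed the baseline by threshold_v, so that isolated high or low
    readings caused by noise are ignored."""
    run = 0
    for value in samples:
        run = run + 1 if (value - baseline) > threshold_v else 0
        if run >= n_consecutive:
            return True
    return False
```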
The signal processor 240 may further generate the object detection signal 241 based upon a combination of the first amplitude signal 229 and the second amplitude signal 239. For example, the object detection signal 241 may usually be generated when an object is detected in the sensing region. However, generating the object detection signal 241 may further comprise comparing the first and second detection signals 242, 243.
Further, generating the object detection signal 241 may comprise comparing how the first amplitude signal 229 varies over time and how the second amplitude signal 239 varies over time, and generating the object detection signal 241 only if a predetermined criterion is satisfied. The predetermined criterion may comprise a degree of similarity between the first amplitude signal 229 over time and the second amplitude signal 239 over time.
This could be achieved in the signal processor 240 by storing ADC readings of both amplitude signals 229, 239 into memory at regular intervals and continuously pattern matching the two signals to see if a similar enough pattern of readings is repeated in both sets of readings. It will be appreciated that in the event that a similar enough pattern is found in both sets of stored readings, one pattern will be time delayed compared to the other, because the sensors are separated by the separation distance 324. It will also be appreciated that while the distance 324 is fixed, the time delay between the two matched patterns will vary with the speed of the moving object being detected. Nevertheless, with modern signal processors or FPGA chips it should be possible to perform pattern matching between both sets of stored ADC readings with sufficient speed to generate the needed output signals 241, 245 in time to print on the object as it goes past the marking location 206.
By assessing a similarity between the first and second signals it is possible to compare the shape of the first and second signals over time. In this way, anomalous object detection results can be minimised, since in a correct detection result the waveforms of the first and second signals would be expected to be time-shifted versions of one another. A detection result may thus be rejected if the signals do not match sufficiently (e.g. if they differ by more than a certain percentage or a standard deviation or some derivative thereof).
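One way to realise such pattern matching is normalised cross-correlation between the two stored buffers of ADC readings, as sketched below; the 0.8 correlation threshold is an assumed rejection criterion, not a value from this description.

```python
import numpy as np

def matched_delay_s(first, second, fs=40_000, min_peak=0.8):
    """Estimate the time delay between the two amplitude signals, or
    return None when the best match is too weak to be trusted (the
    waveforms are then not time-shifted versions of one another)."""
    a = (first - first.mean()) / (first.std() + 1e-12)
    b = (second - second.mean()) / (second.std() + 1e-12)
    corr = np.correlate(a, b, mode="full") / len(a)
    if corr.max() < min_peak:
        return None                           # signals too dissimilar
    lag = int(corr.argmax()) - (len(b) - 1)   # lag in samples; sign gives order
    return lag / fs
```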
In some examples, additional processing may be performed on the amplitude signals 229, 239. For example, the shape of the signals 229, 239 may be determined and used to generate information regarding a detected object. For example, a rate of change in signal value for one or both signals may be used to determine the shape of an object. An abrupt signal value change may be considered to indicate a square edge (e.g. a box), whereas a gradual change (e.g. a curve) may be considered to indicate a rounded object (e.g. a can or bottle). Further, a speed value (e.g. speed data 244a, or speed signal 245) generated based on a separation between the first and second detection signals 242, 243 may be used to quantify a shape (e.g. an object length in the movement direction 205). Such information may be used to confirm that the object shape corresponds to an expected object shape, allowing improved printing control. For example, if a printer is configured to print on a round product, but senses square boxes, a warning may be generated. The processor 240 may thus provide additional outputs than those illustrated.
It will be appreciated that the above described processing uses a particular combination of analog and digital signal processing steps. However, this particular combination is not required. For example, a signal could be digitised at an earlier point in the processing, and one or more of the amplifying, filtering, or peak detection steps performed digitally, or even omitted entirely. Alternatively, some of the processing steps performed by the processor 240 could be performed by analog components.

As described above with reference to Figures 3 and 4 and Equations (1) and (2), the field of view 304, 314 of each of the sensing arrangements 303, 313 is defined by the geometry of the apertures 302, 312 and the sensing elements 300, 310. Further, as described above with reference to Figure 6, the sensor 208 is configured to generate detection signals when an object passes into each of the fields of view 304, 314.
It will be appreciated, therefore, that in order to accurately detect moving objects the areas of the fields of view should be well defined. Moreover, it will also be appreciated that there exists a trade-off between providing a wide field of view, so as to allow a maximum amount of radiation to reach the sensing elements 300, 310 and a narrow field of view, which would provide a sharper detection signal.
It has been realised that providing a relatively narrow field of view in the direction of movement 205, in combination with a relatively tall field of view in a perpendicular direction, provides a useful compromise. In this way, the parallel viewing angle 304a may be less than the perpendicular viewing angle 304p. That is, a relatively small viewing angle 304a in the direction of movement 205 allows a more abrupt transition to be determined between an object being outside of the field of view and inside the field of view. Further, a relatively wide viewing angle 304p in a perpendicular direction allows an increased amount of radiation to reach the sensor, since a larger surface area of the object can be viewed.
The narrow field of view in the direction of movement may be achieved by providing a narrow aperture width 302w (see Figures 3a, 3b, 4a) in the direction of movement 205. The aperture width 302w may be less than 5 mm. Preferably, the aperture width is around 1 mm.
To compensate for the reduction in radiation reaching the sensing element 300 due to the narrow aperture width, a larger aperture height 302h may be used. The first sensing aperture 302 may thus have a height 302h perpendicular to the expected direction of movement 205 of the object that is greater than or equal to the width 302w. For example, the aperture may have a height of around 6 mm.
Different aperture geometry may be used as required. As described above with reference to Equations (1) and (2), the viewing angles 304a, 304p are further influenced by the sensing length 300I. A larger sensing length 300I results in a smaller viewing angle 304a, 304p for a given aperture and sensing element size.
The sensing element 300 may have a width 300w in the direction of movement 205 of 2.65 mm. The sensing element 300 may have a height 300h perpendicular to the direction of movement 205 of 2.65 mm. Of course, alternative sensor geometry may be used. However, a square sensor may be preferred, since such sensors are widely available, low cost components.
In an embodiment, the sensing length 300I is at least 10 times the width of the aperture 302w in the expected direction of movement of the object 205. The sensing length 300I may be at least 5 times the width 300w of the first sensing element in the expected direction of movement of the object 205.
In an embodiment, the sensing length 300I may be at least 10 times greater than a maximum of the width of the aperture 302w in the expected direction of movement of the object 205, and the width of the sensing element 300w in the expected direction of movement of the object 205. In this way, a narrow field of view in the expected direction of movement can be provided. Such a narrow field of view provides high sensitivity to edges of objects, and also rapid detection of objects.
In the above described example, with a sensor having a width 300w of 2.65 mm, an aperture width 302w of 1 mm, and a 20 mm aperture separation 300I, the field of view would subtend 10.43 degrees, and at a distance (from the aperture) of 10 mm would have a width of just 2.83 mm.
It will be understood that the second aperture 312, second sensing element 310, and second sensing arrangement 313 may have similar dimensions, and dimensional relationships, to those of the first aperture 302, first sensing element 300, and first sensing arrangement 303. In this way, a similar field of view is provided for each sensor, albeit separated by the sensor separation distance 324 in the expected direction of movement 205 of the object. It is further noted above that the first and second fields of view are separated in the expected direction of movement 205 by a separation distance 324. It can further be seen from Figure 3b, that the central axis of the first field of view 306 and a central axis of the second field of view 316 are parallel to each other, and perpendicular to the expected direction of movement of the object 205. Arranging the fields of view in this way ensures that wherever an object is placed in the sensing direction (e.g. close to the sensor, or far away from the sensor) it will be detected by both sensing arrangements 303, 313 at an equivalent distance and in a similar way. That is, if the axes 306, 316 were not parallel, then the effective sensor separation distance would vary as a function of the distance from the sensor in the sensing direction 210.
Similarly, if the axes 306, 316 were not both perpendicular to the direction of movement of the object 205, then the effective sensor separation distance would vary as a function of angle between the axes 306, 316 and the direction of movement of the object 205.
Further, in order to provide an accurate and high speed sensor, it is desirable to provide a small separation distance 324 between the first and second central axes 306, 316. For example, a separation distance 324 of around 5 mm or less results in a sensor capable of detecting small products, with a high degree of accuracy. Moreover, a small separation distance (e.g. less than 10 mm) minimises the risk that consecutive detection signals from the two sensing elements 300, 310 will not relate to the same object. For example, where an object is small in the direction of movement 205, and where a series of similar objects are advanced along a conveyor, there is a risk that detection signals generated a short time apart could relate to different products. For example, two detection signals could be generated either by a) a fast moving object being detected sequentially by the first and second sensing arrangements 303, 313, or b) a first slow moving object being detected by the first sensing arrangement 303 and a second slow moving object being detected by the second sensing arrangement 313. However, by making the separation of the two sensing arrangements small relative to the size of the object being detected, it is possible to reduce this risk significantly.
The various processing steps described above with reference to the signal processor 240 may be performed by a processor that is part of the sensor 208. The sensor 208 may thus comprise a standalone component that can provide an object detection signal 241 and/or an object speed signal 245 to a printer. As described above, such a sensor 208 may provide quadrature detection signals 245a, 245b, allowing product speed and direction to be determined. A printer or marking system may be configured to receive an object detection signal 241 from the sensor (e.g. a detection signal that has been generated by a processor 240 of the sensor) and to initiate (or adjust) printing or marking based upon the object detection signal 241 and/or speed and/or direction signal (e.g. an object movement signal).
The printer or marking system may include a printhead or marking head (e.g. printhead 202) that is configured to cause a mark to be created on an object 204 moving in the expected movement direction 205 past a marking location.
Alternatively, some or all of the processing described above may be performed by a processor that is part of the printer 201 (or other marking system), such as the printer controller 212.
In some examples, the sensor 208 may be provided as part of the printer 201. The sensor 208 may thus generate detection signals that are processed by the printer controller 212 in order to control printing. Alternatively, the analog output signals (e.g. 229, 239) may be provided directly to the printer controller 212 for processing into a detection signal which is used internally by the controller 212 to control printing.
Reference has been made in the preceding description to the signal processor 240, and various functions have been attributed to the processor 240. It will be appreciated that the signal processor 240 can be implemented in any convenient way including as an application specific integrated circuit (ASIC), field programmable gate array (FPGA) or a microprocessor connected to a memory storing processor readable instructions, the instructions being arranged to control the sensor (e.g. LED drivers 221, 231) and the microprocessor being arranged to read and execute the instructions stored in the memory. Furthermore, it will be appreciated that in some embodiments the processor 240 may be provided by a plurality of controller devices each of which is charged with carrying out some of the processing functions attributed to the processor 240, or even to a printer controller (where present). The sensor 208 may be integrated into a marking head of a printer or marking system.
While various examples have been described above, it will be appreciated that various alternatives may exist to the specific details described.
For example, the sensor 208 has been described in the context of printing and marking. However, the sensor could also be used to determine speed and/or location information for a variety of different processing purposes. For example, a packaging line or apparatus may receive a detection signal (or even an analog sensor output) and use the signal, in a similar manner to that described above with reference to the printer, to initiate some action. Some examples may include the filling or sealing of a container, the removal of an object from a processing line, or the application of a label to a passing object.
Further, while the above examples describe the first and second radiation sources 340, 350 being caused to pulse at a common pulse frequency, the first and second radiation sources 340, 350 may be pulsed at different frequencies to each other. Alternatively, or additionally, the first and second radiation sources may be pulsed at different times to each other. In this way, it is possible to allow some spatial overlap between the first and second sensing regions (thereby allowing closer sensor spacing), while still avoiding cross-talk between the two sensors.
In some examples, alternative ambient radiation suppression techniques may be used. For example, a background sensor output signal may be obtained from the sensing element 300 when no radiation is emitted from the radiation source 340, and an illuminated sensor output signal may be obtained from the sensing element 300 when radiation is being emitted from the radiation source 340. By comparison between the background signal and the illuminated signal it is possible to reduce the impact of stray radiation incident upon the sensor that does not originate from the radiation source (via the object).
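A sketch of this background-subtraction alternative follows; the read() and enable() calls stand for assumed hardware interfaces, since no API is defined above.

```python
def ambient_suppressed_reading(sensing_element, radiation_source):
    """Take one reading with the source off and one with it on;
    subtracting the two cancels steady ambient illumination."""
    radiation_source.enable(False)       # background only
    background = sensing_element.read()
    radiation_source.enable(True)        # background plus reflections
    illuminated = sensing_element.read()
    return illuminated - background
```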
In further examples, pulsing of the radiation sources may be omitted entirely.
In yet further examples, the radiation sources 340, 350 themselves may be omitted entirely. In such an example, the sensing elements may rely on the reflection of ambient light into the apertures 302, 312, with the presence of an object 204 in the field of view changing the light levels reaching the sensing elements 300, 310.
In the above described examples the sensor 208 comprises two sensing assemblies 303, 313. It will be appreciated, however, that the sensor may comprise a different number of sensing assemblies. For example, three or more sensing assemblies may be provided. Each of the sensing assemblies may be generally as described above. A common radiation source may be provided for all sensing assemblies, or a separate radiation source may be provided for each of the sensing assemblies.
In the above described examples the sensor 208 is shown facing the conveyor 203 such that the sensing direction 210 extends in a substantially horizontal direction. In view of this, various dimensions and angles have been referred to as “width”, “height”, “horizontal” or “vertical”. It will be appreciated, however, that the sensor may be oriented in any convenient way. For example, the sensor may point downwards from above a production line (and similarly, a printer may be configured to mark the top of products). The terminology used is not intended to imply any restriction to the orientation of the sensor. As such, an aperture “width” is considered to be a dimension of the aperture in the direction of movement 205, whereas an aperture “height” is considered to be a dimension of the aperture in a perpendicular direction to both the direction of movement 205, and the sensing direction 210.
It will be appreciated that the examples described above are for all purposes exemplary, not limiting. Various modifications can be made to the described embodiments without departing from the spirit and scope of the present invention, which is defined by the following claims.
CLAIMS:
1. A sensor configured to detect an object moving along a predefined movement path, the sensor comprising:
a first sensing element;
a second sensing element; and
a sensor housing configured to shield the first sensing element and the second sensing element, the sensor housing comprising a first wall portion defining a first sensing aperture and a second wall portion defining a second sensing aperture;
wherein:
the first sensing aperture is configured to allow electromagnetic radiation to travel towards the first sensing element, the first sensing aperture and the first sensing element defining a first field of view, the first field of view extending from the first sensing aperture in a sensing direction and comprising a first plurality of radiation paths from locations within the first field of view, via the first sensing aperture, to the first sensing element;
the second sensing aperture is configured to allow electromagnetic radiation to travel towards the second sensing element, the second sensing aperture and the second sensing element defining a second field of view, the second field of view extending from the second sensing aperture in the sensing direction and comprising a second plurality of radiation paths from locations within the second field of view, via the second sensing aperture, to the second sensing element;
the sensor housing is configured to block paths of radiation between locations external to the first field of view and the first sensing element and locations external to the second field of view and the second sensing element; and
the first aperture and the second aperture are separated by a sensor separation distance from each other in a direction parallel to the predefined movement path of the object.
2. The sensor according to claim 1, wherein the first and second pluralities of radiation paths from locations within the first and second fields of view to the respective first and second sensing elements do not pass through a lens.
3. The sensor according to claim 1 or 2, wherein the first sensing aperture has a width in the direction parallel to the predefined movement path that is less than 5 mm.
4. The sensor according to claim 3, wherein the first sensing aperture has a height perpendicular to the direction parallel to the predefined movement path that is greater than or equal to the width.
5. The sensor according to any preceding claim, wherein a first sensing length is defined between the first sensing aperture and the first sensing element; wherein:
the first sensing length is at least 5 times a width of the first sensing aperture in the direction parallel to the predefined movement path; and/or
the first sensing length is at least 3 times a width of the first sensing element in the direction parallel to the predefined movement path; and/or
the first sensing length is at least 3 times greater than the maximum of the width of the first sensing aperture in the direction parallel to the predefined movement path and the width of the first sensing element in the direction parallel to the predefined movement path.
6. The sensor according to any preceding claim, wherein: the first sensing element and the second sensing element have substantially the same dimensions; the first sensing aperture and the second sensing aperture have substantially the same dimensions; and a spatial relationship between the first sensing aperture and the first sensing element is substantially the same as a spatial relationship between the second sensing aperture and the second sensing element.
7. The sensor according to any preceding claim, further comprising a first radiation source configured to emit a first beam of electromagnetic radiation in the sensing direction; wherein a first sensing region is defined by an overlap between the first beam of electromagnetic radiation and the first field of view.
8. The sensor according to claim 7, wherein an offset angle is defined between a central axis of the first beam of electromagnetic radiation emitted by the first radiation source and a central axis of the first field of view; the sensor further defining:
a minimum sensing distance between the first sensing aperture and the onset of the first sensing region in the sensing direction; and
a maximum sensing distance between the first sensing aperture and the end of the first sensing region in the sensing direction.
9. The sensor according to claim 7 or 8, further comprising a second radiation source configured to emit a second beam of electromagnetic radiation in the sensing direction; wherein a second sensing region is defined by an overlap between the second beam of electromagnetic radiation and the second field of view.
10. The sensor according to claim 9, wherein a central axis of the first beam of electromagnetic radiation and a central axis of the second beam of electromagnetic radiation are parallel to one another, and perpendicular to the predefined movement path.
11. The sensor according to any preceding claim, wherein each of the first field of view and the second field of view has a parallel viewing angle extending in a direction parallel to the predefined movement path and a perpendicular viewing angle extending in a direction perpendicular to the predefined movement path, the parallel viewing angle being less than the perpendicular viewing angle.
12. The sensor according to claim 7 or any preceding claim dependent thereon, wherein the sensor is configured to pulse the first radiation source at a first pulse frequency and to obtain signals from the first sensing element at the first pulse frequency.
13. The sensor according to any preceding claim, wherein a central axis of the first field of view and a central axis of the second field of view are parallel to each other, and perpendicular to the predefined movement path.
14. The sensor according to claim 13, wherein the central axis of the first field of view and the central axis of the second field of view are separated from one another by around 5 mm or less in the direction parallel to the predefined movement path.
15. The sensor according to any preceding claim, configured to generate an object detection signal based on a first analog signal received from the first sensing element and/or a second analog signal received from the second sensing element.
16. The sensor according to any preceding claim, configured to generate an object speed signal based upon a first analog signal received from the first sensing element and a second analog signal received from the second sensing element.
17. The sensor according to claim 16, wherein the object speed signal comprises a quadrature output signal comprising a first quadrature signal and a second quadrature signal, each of the first and second quadrature signals comprising a periodic signal component having a frequency that is proportional to the speed of the object, and a phase difference between the first and second quadrature signals being controlled based upon a direction of the object.
18. The sensor according to any preceding claim, comprising a sensor control circuit configured to receive a first sensor signal from the first sensing element, the sensor control circuit comprising one or more of:
an amplifier configured to generate a first amplified sensor signal based upon the first sensor signal;
a filter configured to generate a first filtered sensor signal based upon the first sensor signal;
a peak detector configured to generate a first amplitude signal indicative of an amplitude of the first sensor signal;
an analog to digital converter configured to generate a first digital sensor signal based on the first sensor signal; and
a signal processor configured to generate an object detection signal based upon the first sensor signal.
19. A printer or marking system comprising:
a printhead or marking head configured to cause a mark to be created on an object moving along the predefined movement path past a marking location; and
a sensor according to any preceding claim;
wherein the printer or marking system is configured to:
receive an object detection signal from the sensor; and
initiate printing or marking based upon the object detection signal.
20. A method of detecting an object, the method comprising:
defining a first field of view, the first field of view extending in a sensing direction from a first sensing aperture defined by a first wall portion and comprising a first plurality of radiation paths from locations within the first field of view, via the first sensing aperture, to a first sensing element;
defining a second field of view, the second field of view extending in the sensing direction from a second sensing aperture defined by a second wall portion and comprising a second plurality of radiation paths from locations within the second field of view, via the second sensing aperture, to a second sensing element;
blocking paths of radiation between locations external to the first and second fields of view and the respective sensing elements;
receiving, by the first sensing element, electromagnetic radiation reflected by an object present at a sensing location within the first field of view;
generating, by the first sensing element, a first sensor signal indicative of an amount of radiation incident upon the first sensing element;
receiving, by the second sensing element, electromagnetic radiation reflected by an object present at a sensing location within the second field of view;
generating, by the second sensing element, a second sensor signal indicative of an amount of radiation incident upon the second sensing element; and
generating an object detection signal based upon at least one of the first sensor signal and the second sensor signal.
21. The method of claim 20, further comprising:
for each of the first and second sensing elements:
amplifying the sensor signal;
filtering the amplified signal to generate a filtered sensor signal;
generating an amplitude signal based on the filtered sensor signal; and
converting the amplitude signal to a digital sensor signal; and
generating the object detection signal based upon the first and/or second digital sensor signals.
22. The method of claim 20 or 21, further comprising:
for each of the first and second sensing elements:
generating a baseline sensor signal based upon past values of the sensor signal; and
generating a comparison result by comparing the baseline sensor signal with a current sensor signal; and
generating the object detection signal based upon the first and/or second comparison results.
23. The method according to any one of claims 20 to 22, further comprising generating an object movement signal based upon the first sensor signal and/or the second sensor signal.
24. The method according to any one of claims 20 to 23, further comprising generating a first quadrature signal and a second quadrature signal, each of the first and second quadrature signals comprising a periodic signal component having a frequency that is proportional to the speed of the object, and controlling the phase difference between the first and second quadrature signals based upon a direction of the object.
25. The method according to any one of claims 20 to 24, further comprising controlling an industrial printer or marking system based upon the object detection signal.
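In relation to the pulsed operation recited in claim 12, sampling the sensing element in step with the pulsed radiation source permits, for example, ambient radiation to be rejected by differencing lit and unlit samples. The following Python sketch is purely illustrative and forms no part of the claimed subject matter; read_adc and set_source are hypothetical stand-ins for hardware access.

```python
# Illustrative sketch only: synchronous sampling of a pulsed radiation source.
# read_adc and set_source are hypothetical callables standing in for hardware.
from typing import Callable

def synchronous_read(read_adc: Callable[[], float],
                     set_source: Callable[[bool], None]) -> float:
    """Sample once with the source on and once with it off within one pulse
    period; the difference cancels the unmodulated ambient contribution."""
    set_source(True)
    lit = read_adc()
    set_source(False)
    dark = read_adc()
    return lit - dark

# Minimal demonstration with simulated hardware: 0.3 of reflected signal
# rides on a constant 0.5 ambient level.
_state = {"on": False}
demo_adc = lambda: 0.5 + (0.3 if _state["on"] else 0.0)
demo_source = lambda on: _state.update(on=on)
print(synchronous_read(demo_adc, demo_source))  # ~0.3: ambient rejected
```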
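In relation to claims 16 and 17, an object speed may, for example, be derived from the time taken for an object to traverse the sensor separation distance, and a quadrature pair may be synthesised whose frequency tracks that speed and whose phase ordering encodes direction. The sketch below is purely illustrative; the separation, scaling constant and timestamps are hypothetical assumptions rather than values taken from this disclosure.

```python
# Illustrative sketch only: speed from two detection timestamps, and a
# quadrature pair whose phase order encodes direction of travel.
import math

SENSOR_SEPARATION_M = 0.005  # hypothetical 5 mm separation between apertures

def object_speed(t_first_s: float, t_second_s: float) -> float:
    """Speed from the time taken to traverse the fixed separation distance."""
    dt = t_second_s - t_first_s
    if dt == 0:
        raise ValueError("coincident detections: speed undefined")
    return SENSOR_SEPARATION_M / abs(dt)

def quadrature_sample(speed_m_s: float, direction: int, t_s: float) -> tuple:
    """One sample of a quadrature pair: the frequency of the periodic component
    is proportional to speed; direction (+1/-1) sets which signal leads."""
    hz_per_m_per_s = 1000.0            # hypothetical scaling constant
    f = hz_per_m_per_s * speed_m_s     # frequency proportional to speed
    phase = 2.0 * math.pi * f * t_s
    a = 1 if math.sin(phase) >= 0.0 else 0
    b = 1 if math.sin(phase + direction * math.pi / 2.0) >= 0.0 else 0
    return a, b

speed = object_speed(0.000, 0.010)  # 5 mm traversed in 10 ms -> 0.5 m/s
print(speed, quadrature_sample(speed, direction=1, t_s=0.0003))
```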
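A minimal model of the amplify, filter, amplitude-detect and digitise steps of claim 21 might take the following form; the gain, smoothing factor, decay rate and converter parameters are hypothetical assumptions.

```python
# Illustrative sketch of the claim 21 chain: amplify -> filter -> amplitude ->
# digitise. All numeric parameters are hypothetical assumptions.

GAIN = 20.0        # amplifier gain
ALPHA = 0.2        # single-pole low-pass smoothing factor
ADC_BITS = 10      # converter resolution
V_REF = 3.3        # converter reference voltage

def process(samples: list) -> list:
    digital = []
    filtered = 0.0
    peak = 0.0
    for v in samples:
        amplified = GAIN * v                          # amplify
        filtered += ALPHA * (amplified - filtered)    # low-pass filter
        peak = max(peak * 0.999, abs(filtered))       # decaying peak detector
        code = round(min(peak, V_REF) / V_REF * (2 ** ADC_BITS - 1))
        digital.append(code)                          # digitised amplitude
    return digital

print(process([0.0, 0.01, 0.05, 0.04, 0.0])[-1])
```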
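Similarly, the baseline comparison of claim 22 might be modelled with an exponential moving average of past sensor values as the baseline, as in the illustrative sketch below; the smoothing factor and threshold are hypothetical.

```python
# Illustrative sketch of claim 22: a baseline tracked from past sensor values,
# compared against the current sample to flag an object. Values hypothetical.

BASELINE_ALPHA = 0.01   # slow baseline adaptation
THRESHOLD = 0.1         # detection margin above baseline

def detect(samples: list) -> list:
    results = []
    baseline = samples[0] if samples else 0.0
    for v in samples:
        results.append(v - baseline > THRESHOLD)      # comparison result
        baseline += BASELINE_ALPHA * (v - baseline)   # update from past values
    return results

print(detect([0.0, 0.01, 0.5, 0.5, 0.02]))  # flags the large reflected samples
```

A slow baseline adaptation allows gradual drift, for example in ambient light levels, to be tracked without masking the step change produced by a passing object.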

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GBGB2218416.2A GB202218416D0 (en) 2022-12-07 2022-12-07 Sensor
GB2218416.2 2022-12-07

Publications (1)

Publication Number Publication Date
WO2024121556A1 (en) 2024-06-13

Family

ID=84926673

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2023/053147 WO2024121556A1 (en) 2022-12-07 2023-12-06 Sensor

Country Status (2)

Country Link
GB (1) GB202218416D0 (en)
WO (1) WO2024121556A1 (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120169805A1 (en) * 2011-01-05 2012-07-05 Bateson John E Sensor
US20190202200A1 (en) * 2016-05-25 2019-07-04 Linx Printing Technologies Ltd. Printer for printing onto a succession of objects
US20200094108A1 (en) * 2018-09-20 2020-03-26 Catalyst Sports Llc Bat speed measuring device
CN217455416U (en) * 2022-06-08 2022-09-20 青岛灯塔味业有限公司 Bottled assembly line code missing prevention code spraying device

Also Published As

Publication number Publication date
GB202218416D0 (en) 2023-01-18

Similar Documents

Publication Title
CN111133329B (en) Method for calibrating a time-of-flight system and time-of-flight system
EP3147690B1 (en) Circuit device, optical detector, object detector, sensor, and movable device
US20090086189A1 (en) Clutter Rejection in Active Object Detection Systems
EP3165946A1 (en) Object detector, sensor, and movable device
US4634855A (en) Photoelectric article sensor with facing reflectors
EP1903299A1 (en) Method and system for acquiring a 3-D image of a scene
EP3225977B1 (en) Method and sensor system for detecting particles
US20140285817A1 (en) Limited reflection type photoelectric sensor
US20140240719A1 (en) Real-time measurement of relative position data and/or of geometrical dimensions of a moving body using optical measuring means
JP2012132917A (en) Photoelectronic sensor, and method for object detection and distance measurement
JP2009063339A (en) Scanning type range finder
KR102122142B1 (en) How to detect malfunctions of laser scanners, laser scanners and automobiles
US10761209B2 (en) Triangulation light sensor
CN115720634A (en) LIDAR system with fog detection and adaptive response
CN113661409A (en) Distance measuring system and method
CN114902075A (en) Fog detector with specially shaped lens for vehicle
WO2024121556A1 (en) Sensor
JP2019028039A (en) Distance measurement device and distance measurement method
US20140138518A1 (en) Optical detection apparatus
US9927343B2 (en) Apparatus and method for determining a size of particles in a spray jet
US20090099736A1 (en) Vehicle pre-impact sensing system and method
DE102015004903A1 (en) Optoelectronic measuring device
JP2017106894A (en) Method of detecting objects
CN110476080B (en) Lidar device and method for scanning a scanning angle and for analyzing a treatment detector
US20120169805A1 (en) Sensor