WO2017173501A1 - Method and system for validating resolvable detail across a depth of field - Google Patents

Method and system for validating resolvable detail across a depth of field Download PDF

Info

Publication number
WO2017173501A1
Authority
WO
WIPO (PCT)
Prior art keywords
colour
image capture
capture device
degrees
edge
Prior art date
Application number
PCT/AU2017/050306
Other languages
French (fr)
Inventor
Rhys Ernst Hill
Original Assignee
Lbt Innovations Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from AU2016901315A external-priority patent/AU2016901315A0/en
Application filed by Lbt Innovations Limited filed Critical Lbt Innovations Limited
Publication of WO2017173501A1 publication Critical patent/WO2017173501A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10024Color image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30196Human being; Person
    • G06T2207/30201Face

Definitions

  • the present invention relates to a method and system for testing operation of an electronic image capture device and system. Particularly, but not exclusively, the present invention relates to the use of visual elements at multiple distances from an image capture device, and which bound a desired depth of field, to permit assessment of the resolvable detail of the image capture system across the depth of field. Additionally, the method and system include visual elements which permit the assessment of the gross colour response of the image capture system.
  • changes in focus, changes in the aperture of the lens, or changes in the distance of an object from an image capture system may result in a change in the resolvable detail of the object. For example, if the focus of an image capture device drifts then the distance to the focal plane will change. Moreover, if the aperture of the image capture device is widened (e.g. the lens is opened up) then the nearest and farthest distances from the image capture device at which an object will appear acceptably sharp become closer together (the depth of field decreases). This effect is reversed when the aperture is narrowed.
  • the size of the depth of field of an image capture device and the distance of the focal plane from the image capture device, relative to an object to be imaged, are major determinants of the resolution of the object in a captured image.
  • resolution of an electronic image capture system is defined by the system's ability to capture and process finely spaced details. That is, the resolution indicates the highest spatial frequency that any image capture system can produce. However, measuring resolution in this way can be insufficient to test whether an electronic image capture system is operating correctly.
  • a more accurate metric for measuring the resolvable detail of a captured image is to use the spatial frequency response (SFR) of the electronic image capture system.
  • SFR spatial frequency response
  • the SFR can be used to assess contrast loss as spatial frequency increases. Generally, as the distance between visual elements decreases (e.g. the spatial frequency increases), the contrast between those elements decreases to a point whereby the difference between the elements can no longer be sufficiently distinguished.
  • test charts In order to measure the resolvable detail of an electronic image capture system, a test chart can be used such as that provided by the International Standard: ISO 12233:2014 Photography - Electronic still picture imaging - Resolution and spatial frequency responses.
  • ISO 12233:2014 Photography - Electronic still picture imaging - Resolution and spatial frequency responses.
  • test charts consist of a card containing visual elements which can be analysed to determine the SFR of a captured image of the chart.
  • the use of a test chart provides information on the resolvable detail of the image capture device at one distance. As such it provides no information on the depth of field of the image capture system.
  • the image capture system and any cell culture plate support are configured in an appropriate manner so that an image of the cell culture plate, and any associated bacteria growth, can be produced with enough resolvable detail to allow accurate analysis.
  • this task is complicated by the fact that the depth of culture medium in any given plate can vary and, as such, the distance of the surface of the cell culture medium (and any colonies thereon) from the image capture system can also vary, even if the bases of the plates are positioned at a consistent distance.
  • the depth of field of the image capture device is such that it can produce an image of the surface of the medium in the plate (and any associated colonies) with sufficient resolvable detail to allow accurate analysis.
  • the present invention provides a method of testing operation of an image capture system, the method including: providing a first visual element including a face at a first distance from an image capture device and a second visual element including a face at a second distance from the image capture device, the face of each of the first and second visual elements including a first colour and a second colour wherein the transition between the first colour and the second colour is defined by at least one straight edge, the edge oriented such that it is between 2 degrees and 88 degrees rotated from vertical or horizontal relative to the orientation of the image capture device; capturing an image of the faces of both the first and second visual element with the image capture device; measuring resolvable detail of the captured image by analysing the at least one straight edge of the first and the second visual element; and determining whether the resolvable detail exceeds a minimum operational value for each of the first and second visual elements.
  • the invention provides a system for testing operation of an image capture device, the system including: an image capture device; a first visual element including a face at a first distance from the image capture device and a second visual element including a face at a second distance from the image capture device, the face of each of the first and second visual elements including a first colour and a second colour wherein the transition between the first colour and the second colour is defined by at least one straight edge, the edge oriented such that it is between 2 degrees and 88 degrees rotated from vertical or horizontal relative to the orientation of the image capture device; and a processor for measuring resolvable detail of the image capture device by analysing the at least one straight edge of the first visual element and the at least one straight edge of the second visual element in a captured image and determining whether the resolvable detail exceeds a minimum operational value for each of the first and second visual elements.
  • Providing a first visual element and a second visual element permits the assessment of the resolvable detail at two distances from the image capture device corresponding to the first and second distance of the first and second visual elements.
  • when the resolvable detail for each of the first and second visual elements is above the minimum operational value, the depth of field of the image capture device is considered acceptable. Consequently, the image capture device will be able to provide an image of an object at a distance at or between the distances of the first and second visual elements with resolvable detail at or above the minimum operational value.
  • the system for testing includes an electronic image capture device.
  • the image capture device is an electronic camera.
  • the system for testing operation of an image capture device includes a support for supporting objects to be imaged, the support being positionable at a predetermined distance from the image capture device.
  • the system under test is used for imaging and analysing microbial growth on a solid culture medium in a culture plate, and the minimum operational value is a sufficient value for the image capture system to be used for this analysis. Therefore, it is preferable for the support to be adapted to support a culture plate for containing a culture medium such as agar.
  • the support defines a plane and is adapted to abut the culture plate. The support may abut the underside of the culture plate or may encompass and abut the circumference of the culture plate.
  • a culture plate for microbiology generally consists of a round disc with raised edges which define a circular well for deposition of liquid medium, which sets in the plate to form a solid culture medium.
  • Such culture plates typically include a lid for minimizing contamination of the medium and for assisting in preventing dehydration of the medium.
  • the combination of the medium and the plate is hereinafter referred to throughout the specification as a "culture plate”.
  • the image capture system has been found, in one example, to provide sufficiently accurate images of microbial growth on the culture plate in order to provide a microbiological assessment.
  • This assessment may be performed manually by a skilled laboratory technologist. Alternatively, the assessment may be automated and performed using a classifier that has been trained using a machine learning algorithm. Images obtained using the apparatus may be processed and used as input into the classifier.
  • An example of an electronic image capture system, including such a classifier, is described in the Applicant's Australian patent 2012225196 titled
  • the method and systems allow for the evaluation of the resolvable detail of the image capture device at two distances from the image capture device.
  • the first visual element and the second visual element are associated with the support.
  • the first visual element and second visual element are mounted on, or fixed to, the support.
  • the first and second visual elements are mounted relative to the support such that they have a defined spatial relationship with the support but are independent of the support. Associating the visual elements with the support allows for the evaluation of the resolvable detail within a range relative to the support.
  • the distance to the face of the first visual element from the image capture device is the maximum expected distance to an object being supported. In some embodiments the distance to the face of the second visual element from the image capture device is the minimum expected distance to an object being supported.
  • the desired range for assessment of the resolvable detail extends from the bottom of the well of the plate to the upper edge of the rim of the plate, when supported. Consequently, it can be established that the surface of the culture medium in the plate, and any associated microorganisms, can be imaged at or above a specified resolvable detail irrespective of the depth of the culture medium in the plate.
  • the distance to the face of the first visual element from the image capture device is 1 mm or less than the distance of the support from the image capture device. In some embodiments the distance to the face of the second visual element from the image capture device is 15mm or less than the distance of the support from the image capture device.
  • the first and second visual elements include a face including a first colour and a second colour.
  • the transition from the first colour to the second colour is defined by at least one straight edge.
  • this edge is slanted relative to either vertical or horizontal relative to the orientation of the image capture system.
  • a slanted edge is considered a straight edge wherein the direction of the edge is between 2 degrees and 22.5 degrees from vertical or horizontal relative to the orientation of the image capture system.
  • the direction of the edge is between 2 degrees and 10 degrees relative to horizontal or vertical relative to the image capture device.
  • the direction of the edge is 5 degrees relative to horizontal or vertical relative to the image capture device.
  • e-SFR Spatial Frequency Response
  • the term "operational value" as used in the context of resolvable detail refers to the required resolvable detail needed to allow analysis of the imaged object.
  • SFR is a multi-valued metric that assesses the contrast loss as a function of spatial frequency.
  • the minimum operational value will be expressed as a required contrast (modulation level or SFR values in the range of 1 to 0) for a minimum spatial frequency.
  • the image capture device will meet the minimum operational value when the minimum required SFR occurs at or above a desired spatial frequency.
  • Any suitable SFR can be chosen, however it is advantageous to select a SFR in the linear range of a SFR curve.
  • the SFR is measured at 0.5 (or 50% contrast).
  • the spatial frequency will be expressed as a sampling frequency (cycles/pixel or cycles per mm), line width per picture height (LW/PH) or lines per pixel.
  • the minimum operational value for the electronic image capture device may be met when the SFR is 0.5 at or above the spatial frequency of 10 lines per pixel.
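By way of illustration only, the pass/fail decision described above can be sketched as a small check against a measured SFR curve. The structure and function names below are assumptions for illustration (they are not the SFRUtils/SFRChart code described later), and the check assumes the SFR falls monotonically with spatial frequency.

```cpp
#include <cstddef>
#include <vector>

// Hypothetical container for one measured SFR curve: contrast values (1..0)
// sampled at ascending spatial frequencies (e.g. cycles/pixel or LW/PH).
struct SfrCurve {
    std::vector<double> frequency;
    std::vector<double> contrast;
};

// True when the contrast at the required spatial frequency is still at or above
// the required level (e.g. 0.5), i.e. the minimum operational value is met.
bool meetsMinimumOperationalValue(const SfrCurve& curve,
                                  double requiredContrast,
                                  double requiredFrequency)
{
    for (std::size_t i = 0; i < curve.frequency.size(); ++i) {
        if (curve.frequency[i] >= requiredFrequency) {
            return curve.contrast[i] >= requiredContrast;
        }
    }
    return false; // the curve does not extend to the required frequency
}
```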
  • the visual elements include at least two straight edges defining a transition between the first colour and the second colour.
  • the operational value of each of the visual elements can be an average of each of these two edges.
  • one of the at least two edges is slanted relative to vertical and a second of the at least two edges is slanted relative to horizontal in relation to the orientation of the image capture device.
  • one of the at least two straight edges is between 2 degrees and 22.5 degrees rotated from vertical relative to the orientation of the image capture device and a second edge of the at least two straight edges is between 2 degrees and 22.5 degrees rotated from horizontal relative to the orientation of the image capture device.
  • the direction of the at least two edges is between 2 degrees and 10 degrees relative to horizontal or vertical relative to the image capture device. In some embodiments, the direction of the at least two edges is 5 degrees relative to horizontal or vertical relative to the image capture device. In some embodiments, the at least two edges are perpendicular to each other.
  • in some embodiments, the visual elements include at least four straight edges defining a transition between the first colour and the second colour.
  • a first and second edge of the at least four straight edges is between 2 and 22.5, or 2 to 10 degrees, or 5 degrees rotated from vertical relative to the orientation of the image capture device and a third and fourth edge of the at least four straight edges is between 2 degrees and 22.5 degrees, or 2 to 10 degrees, or 5 degrees rotated from horizontal relative to the orientation of the image capture device thereby providing at least two edges for assessment of the vertical and horizontal resolvable detail.
  • the first and second edge are parallel to each other and the third and fourth edge are parallel to each other.
  • the operational value of each of the visual elements can be an average of each of these four edges.
  • the operational value for the vertical resolvable detail can be derived by averaging the operational value for each of the slanted horizontal edges
  • the operational value for the horizontal resolvable detail can be derived by averaging the operational value for each of the slanted vertical edges
  • horizontal resolution values are measured in the longer image dimension which corresponds to the horizontal direction for a "landscape” image orientation
  • vertical resolution values are measured in the shorter image dimension which corresponds to the vertical direction for a "landscape” image orientation.
  • the alignment is expressed relative to the orientation of the rows and columns of the photo-elements which comprise the sensor array.
  • the slanted vertical line(s) of the first and second visual elements can be used to assess the horizontal resolution and the slanted horizontal line(s) of the first and second visual elements can be used to assess the vertical image resolution
  • the edges are provided by the visual elements having a background including the first colour and a shape provided on the background, the shape including a second colour.
  • the two colours being coplanar.
  • the shape provided on the background is a general "L" shape, slanted off vertical and horizontal. This provides two lines slanted relative to vertical, and two lines slanted relative to horizontal. As would be understood, other shapes can be used such as squares, rectangles or parallelograms.
  • Optical systems create perspective effects whereby objects of the same size, but at differing distances from the image capture device will appear different in size. For example the further an object is from the image capture device, the smaller it will appear in a captured image.
  • the first visual element is further from the image capture device than the second visual element and the face of the first visual element appears larger than the second visual element in a captured image.
  • the increase in size of the first visual element, relative to the second visual element will depend on the distance between the two visual elements as a proportion of the total distance from the image capture device.
  • the first visual element appears between 8% and 12% larger than the second visual element in the captured image. In some embodiments the first visual element appears approximately 10% larger than the second visual element.
  • the method includes the step of compensating for the perspective effects in the captured image.
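As an illustrative sketch only, the perspective compensation can be approximated with a simple pinhole-camera scaling: the apparent size of a face varies inversely with its distance from the lens, so coordinates measured on the farther element can be scaled by the ratio of the two distances. The structure and function below are assumptions for illustration, not the patented method, and coordinates are assumed to be measured from the optical axis.

```cpp
// Axis-aligned region expressed in pixels, with (x, y) measured from the
// optical axis (principal point) of the image capture device.
struct Region {
    double x, y, width, height;
};

// Scale a region defined for the farther visual element so that it matches the
// apparent size of the nearer element, assuming a simple pinhole model in which
// apparent size is inversely proportional to distance from the lens.
Region compensatePerspective(const Region& regionAtFarElement,
                             double farDistanceMm,
                             double nearDistanceMm)
{
    const double scale = farDistanceMm / nearDistanceMm; // ~1.10 for a ~10% difference
    return { regionAtFarElement.x * scale,
             regionAtFarElement.y * scale,
             regionAtFarElement.width * scale,
             regionAtFarElement.height * scale };
}
```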
  • the first and second colours of the visual elements have low contrast relative to each other.
  • the first and second colours are different shades of grey.
  • the first colour is between 15% and 25% black.
  • the first colour is (approximately) 20% black.
  • the first colour is Pantone 427C.
  • the second colour is between 75% black and 85% black. In some embodiments the second colour is (approximately) 80% black.
  • the second colour is Pantone 425C.
  • the first colour is used in the background of the first and second visual elements.
  • the first and second visual elements can be used to measure and validate the resolvable detail of the image capture system.
  • the method relates generally to testing the operation of the image capture system and device.
  • the first and second aspects of the present invention include a third and fourth visual element together with the first and second visual elements, the third visual element including a third colour and the fourth visual element including a fourth colour.
  • the method further includes the steps of providing the third and fourth visual element; measuring the colour of the third and fourth visual elements in the captured image; and determining the gross colour response of the image capture system from the measured colour of the third and fourth visual elements in the captured image. Consequently, the term "operation of an image capture system/device" encompasses both resolvable detail and gross colour response of the image capture device and system. In some embodiments, the method and system only relate to testing the resolvable detail.
  • the third colour is selected to permit assessment of the white balance of the captured image.
  • the third colour is a shade of grey.
  • the third colour is (approximately) 50% black, preferably the third colour is Pantone Cool Gray 7C.
  • the fourth colour is selected to assess the colour response of the image capture system, preferably the fourth colour is Pantone 674C. That is, the third and fourth colours are selected so that they have different properties relative to each other, making gross errors easy to spot and permitting detection of major camera faults.
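The gross colour checks lend themselves to simple region statistics. The following sketch, with hypothetical names and tolerances, illustrates one way such checks might be implemented: the grey (third) element should have roughly equal red, green and blue means (white balance), while the coloured (fourth) element should stay close to a stored reference value. This is illustrative only and is not the implementation described in the patent.

```cpp
#include <cmath>
#include <cstddef>
#include <cstdint>
#include <vector>

// Assumed simple colour type; the real system would use its own image types.
struct Rgb {
    double r, g, b;
};

// Mean colour over a rectangular region of a packed 8-bit RGB image (row-major).
Rgb meanColour(const std::vector<std::uint8_t>& rgb, int imageWidth,
               int x0, int y0, int w, int h)
{
    double r = 0.0, g = 0.0, b = 0.0;
    for (int y = y0; y < y0 + h; ++y) {
        for (int x = x0; x < x0 + w; ++x) {
            const std::size_t i = 3 * (static_cast<std::size_t>(y) * imageWidth + x);
            r += rgb[i];
            g += rgb[i + 1];
            b += rgb[i + 2];
        }
    }
    const double n = static_cast<double>(w) * h;
    return { r / n, g / n, b / n };
}

// Grey (third) element: the three channel means should agree within a tolerance.
bool whiteBalanceOk(const Rgb& grey, double tolerance)
{
    const double mean = (grey.r + grey.g + grey.b) / 3.0;
    return std::fabs(grey.r - mean) <= tolerance &&
           std::fabs(grey.g - mean) <= tolerance &&
           std::fabs(grey.b - mean) <= tolerance;
}

// Coloured (fourth) element: the measured colour should lie near a stored reference.
bool colourResponseOk(const Rgb& measured, const Rgb& reference, double tolerance)
{
    return std::fabs(measured.r - reference.r) <= tolerance &&
           std::fabs(measured.g - reference.g) <= tolerance &&
           std::fabs(measured.b - reference.b) <= tolerance;
}
```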
  • the visual elements are positioned such that they are in the field of view of the image capture device together with the object to be imaged. This permits the testing of the operation of the image capture device and system in each image captured during operation. As such the visual elements can be used as internal controls within an image to ensure that the image was captured within appropriate operational values and parameters.
  • Figure 1 illustrates an apparatus for the imaging and analysis of microbial growth on culture plates which is suitable for use with an embodiment of the method and system of the present invention.
  • Figure 2 illustrates a layout of a system suitable for imaging a culture plate.
  • Figure 3 illustrates the physical layout of the first, second, third and fourth visual elements relative to the field of view of the image capture system.
  • Figure 4 illustrates the apparent (perspective) view of the first, second, third and fourth visual elements in a captured image.
  • Figure 5 illustrates the dimension and layout of the third and fourth visual element relative to one of the first or second visual elements.
  • Figure 6 illustrates the location of the regions of interest relative to the L- shape on the first or second visual elements.
  • Figure 7 illustrates a cropped region of interest to be processed for determination of the resolvable detail.
  • Figure 8 illustrates a spatial frequency curve for a slanted horizontal and vertical edge using a focussed and defocussed electronic camera.
  • Figure 9 illustrates a flow chart of the method of an embodiment of the present invention.
  • Figure 1 illustrates an embodiment of an apparatus for the automated imaging and analysis of microbial growth plates as described in the Applicant's published PCT application WO2012/119190 A1 (herein incorporated by way of this reference), which is suitable for use with the method and system of the present invention.
  • Figure 1 shows an embodiment of an apparatus 100 for use in analysing microbial growth on a medium in a culture plate 102 in the form of an agar plate.
  • the apparatus 100 includes the following components.
  • An image capture device 104 in the form of a high resolution digital camera 106 of machine vision quality with an appropriate fixed focal length lens is positioned above a ring light 110.
  • the ring light 110 has a large diameter relative to the diameter of the culture plate 102. In this example, the ring light has a diameter of 180mm.
  • the ring light 110 contains several hundred white LEDs arranged in a circular array and a diffuser. This light provides low angle, diffused side lighting to enable the culture plate to be uniformly illuminated.
  • the ring light 110 is positioned around 40 mm above an opaque cover 112 that forms part of the frame 118, and thus about 30 mm above the culture plate 102. The positioning of the ring light 110 so that light from the white LEDs impinges on the surface of the culture plate 102 at a low angle prevents a specular reflection of the LEDs from a central surface of the medium being captured by the image capture device 104.
  • a lighting device 114 in the form of a flat panel light based on an array of white LEDs behind a diffuser.
  • the lighting device 114 is located about 150 mm below the opaque cover 112. This distance is chosen so that light from the ring light 110 falls on the baffles rather than the light 114, to reduce rear illumination of the culture plate 102.
  • a support 116 for supporting the culture plate 102 in the direct field of view of the image capture device 104 is a transparent glass stage that is 3 mm thick. The glass may be replaced if it becomes scratched over time.
  • the support 116 includes two or more triangle shaped transparent positioning elements for positioning the culture plate 102 on the support. The apexes of the triangles point towards the centre of the support for placement of the culture plate 102 so that the apexes touch the circumference of the culture plate 102.
  • a frame 118 positions the image capture device 104, support 116, ring light 110 and lighting device 114 relative to each other.
  • the frame 118 is made of an opaque material, such as sheet metal or plastic, which reduces the amount of light entering the apparatus 100.
  • the internal surfaces of the apparatus 100 are blackened where possible to reduce reflection of light from the internal surfaces into the lens 108.
  • the frame 118 includes a door 120 providing an access path for a human operator to place the culture plate 102 on the support 116.
  • a robotic plate-handling device may use the access path to place the culture plate 102 precisely on the support 116 for imaging, and then to remove the culture plate to a designated output channel/slide.
  • the culture plate may be placed in an output channel representing one of the up to four categories described above.
  • the opaque cover 112 is an aluminium plate that extends across the width of the frame 118 and effectively splits the frame 118 into a top enclosure 122 and bottom enclosure 124.
  • the opaque cover 112 includes a hole 126 to allow light from the lighting device 114 to transmit through to the culture plate 102.
  • the width of the hole 126 is just slightly larger than the width of the culture plate 102 (which is 90mm in this example and is typically between 88 and 100 mm) and is less than the diameter of the ring light 110. This prevents light emitted from the ring light 110 from reflecting from the bottom surface 128 of the frame 118 or the surface of the flat panel light 114 and back through the culture plate 102.
  • the frame 118 also includes light baffles 130 positioned below the opaque cover 112.
  • Means 131 for changing the position of the ring light 110 relative to the support 116 are also provided in the form of a rack and pinion assembly.
  • the frame 118, opaque cover 112 and light baffles 130 define a cavity 132 such that the support 116 supports the culture plate 102 between the image capture device 104 and the cavity 132.
  • the support (glass stage) 116 seals the cavity 132 and prevents unwanted material from falling into the cavity 132.
  • the opaque cover 112 prevents light from the ring light 110 from illuminating visible areas of the cavity 132. In this configuration, the cavity 132 looks like a black background.
  • a side angle light 134 is used to illuminate the culture plate 102 from an angle to highlight any surface topography on the agar, such as dimples or a granular texture.
  • An alternative to the side angle light 134 is to activate only some of the LEDs in the ring light 110, such that the culture plate 102 is illuminated from one direction only.
  • a processing means such as a computer 136 is connected to the image capture device 104, the ring light 110 and the lighting device 114 via a physical or wireless interface.
  • the computer 136 may include a processor 138 and memory 140 storing software 142 for activating the different components, capturing raw data and processing the data.
  • a library of images, metadata and other information may be stored at the computer 136, or may be accessible at the computer 136 via a network
  • An image acquisition process using the system 100 may be suitable for obtaining images for use in classifying microbial growth on the plate 102 using a trained machine learning classifier, or in training such a classifier.
  • the process may be a manual process where the steps of placement, image capture, and analysis of growth parameters are performed by a human operator, but it will also be appreciated that many of the steps of the process may be automated and performed by software or by a robotic device.
  • FIG. 2 illustrates an embodiment of a system in accordance with an aspect of the present invention.
  • a system for testing the operation of an image capture device 201 is illustrated; the image capture device 201, comprising a sensor 203 and a lens 205, is supported by a frame (not shown) above a support 207 at a distance from the support.
  • the lens used in the illustrated system has a focal length of 16mm which provides a field of view of 30°45′.
  • the support 207 is provided by a planar platform; however, alternative supports may be used such as a plate gripper that supports the culture plate in place by gripping the plate edges.
  • a plate gripper provides the advantage of being able both to support the culture plate (without obstructing the view of the surface of the culture plate, or impeding the rear lighting of the base of the culture plate) and to permit manipulation of the culture plate.
  • a culture plate 209 is shown positioned at an imaging station which is central to the optical axis (dashed and dotted line) of the image capture device 201.
  • the imaging station may include a plate gripper assembly (not shown) which places, locates and removes the culture plate from a predetermined position relative to the camera.
  • the plate gripper encompasses the plate and can manipulate the positioning of the plate.
  • the plate gripper supports the plate relative to the camera and therefore performs the role of the support 207.
  • the dashed lines 211 and 213 indicate the horizontal and vertical field of view of the image capture system 201.
  • the illustrated embodiment of the system includes a light source 215 positioned between the image capture device 201 and the support 207 which can be used to illuminate the image capture device-facing (front) surface of the culture plate. Additionally, the system includes a light source 217 positioned on the side of the support opposite to the image capture device 201 which permits the illumination of the rear of the culture plate. Suitable light sources are known in the art; however, in a preferred form the front 215 and rear 217 light sources are provided by ring lights. The use of ring lights is particularly advantageous for several reasons. Firstly, with respect to the front ring light 215, the ring shape of the light allows for imaging of the plate (as well as any additional visual elements) through the aperture of the light.
  • the ring light 215 provides substantially uniform lighting thereby providing substantially consistent light across the surface of the culture plate which assists in minimising shadow.
  • the light is mounted in relatively close proximity (95mm) to the support 207; this results in a low angle of incidence for light falling on the culture plate, preventing specular reflection of the light source from a central surface of the medium being captured by the image capture device.
  • the ring light 215 may include a plurality of LEDs arranged in a circular array and a diffuser associated with the LEDs.
  • the LEDs may be any suitable LEDs.
  • LEDs may be evenly spaced around the array in a single ring or in multiple rings. In one arrangement, the LEDs may be selectively illuminable so that, for example, only half or a smaller fraction of the ring light is activated at one time. This may provide angled lighting of the culture plate 209, and may be useful in highlighting surface topography of the medium. To ensure a uniform distribution of light intensity the number of LEDs in the array may be greater than 50, preferably greater than 180 so that an LED is spaced every 2 degrees. This is further assisted by the diffuser to smooth out the light distribution.
  • the ring light may be a fluorescent light or a plurality of fibre optic sources with a diffuser.
  • the rear ring light 217 allows for back lighting of the culture plate 209.
  • the support 207 upon which the culture plate 209 is positioned permits the transit of light. This can be achieved by several means, such as providing an aperture in the support 207 (such as when the support is provided by plate grippers), or the support 207 can be made of a transparent material.
  • a first visual element 219 and a second visual element 221 are mounted relative to the support 207.
  • the uppermost surface of the first visual element 219 is further from the image capture device 201 than the uppermost surface of the second visual element 221.
  • the upper surface of the first visual element 219 and the upper surface of the second visual element 221 bound the minimum and maximum expected distance to the medium in a supported culture plate 209. This permits the testing of the resolvable detail of the image capture device across this range, thereby ensuring that a sufficient operational value for the resolvable detail is achieved for any possible culture medium depth.
  • the distance between the visual elements can be greater or less than the depth of the well of the culture plate.
  • Figure 3 illustrates a top view of the physical layout of the first visual element 219 and the second visual element 221 in relation to a positioned culture plate 209 in the image station. Further illustrated in Figure 3 is the field of view for the image capture device at the lower visual element (118.5mm x 99.5mm) and the field of view at the upper target (108.5mm x 91mm).
  • Figure 4 illustrates a perspective view of the first visual element 219 and the second visual element 221 in a captured image. As can be seen by comparing Figure 3 and Figure 4, the perspective view of the second visual element 221 (being the visual element further from the image capture system) in Figure 4 results in the first visual element 219 appearing larger (relative to the second visual element 221) than its actual physical size.
  • the perspective effect in the captured image can be compensated for during measurement of the resolvable detail of the visual elements.
  • the physical size of the first visual element 219 and second visual element 221 may be different such that they appear of equal size in a captured image.
  • Figure 3 and Figure 4 also illustrate the positioning of the first and second visual elements (219 and 221 ) within the field of view of the image capture device and relative to a culture plate 209 when positioned at an image station on the support 207.
  • Figure 5 illustrates the dimensions and layout of one of the first or second visual elements.
  • the visual elements include a face including a background having a first colour 501 and a shape 503 on the background, the shape having a second colour.
  • the first and second colours of the visual elements have low contrast relative to each other. By using colours with low contrast, more reliable and reproducible results can be achieved compared to high contrast colours such as black and white. High contrast colours, such as black and white, can produce aberrant readouts which are determined to be artificially sharp.
  • the background colour and the colour of the shape are different shades of grey.
  • the background colour is Pantone 427C which substantially corresponds to 20% black and the shape is Pantone 425C which substantially corresponds to 80% black.
  • the illustrated shape is a general "L" shape which is slanted to provide at least one slanted edge which defines a transition between the colour of the background and the colour of the shape.
  • the L-shape provides four slanted edges, two which are slanted relative to vertical (509 and 511) and two which are slanted relative to horizontal (513 and 515). This allows for assessment of four slanted edges for each of the first and second visual elements.
  • the L-shape is rotated 5 degrees anticlockwise, thereby providing two vertical edges which are rotated 5 degrees anticlockwise relative to vertical (509 and 511) and two horizontal edges which are rotated 5 degrees anticlockwise relative to horizontal (513 and 515).
  • the slanted edges can be in the range of 2 degrees to 22.5 degrees and may be slanted clockwise.
  • the slanted edges of the L-shape do not need to be parallel and as such the L-shape may provide two vertical edges of differing angles relative to the image capture device and two horizontal edges of differing angles relative to the image capture device.
  • the third visual element 505 includes a third colour and the fourth visual element 507 includes a fourth colour.
  • the third visual element 505 and the fourth visual element 507 can be used to test the gross colour response of the image capture device from the measured colour of the third and fourth visual elements in a captured image.
  • the third visual element 505 is used to test the white balance of the image capture system. Consequently, the third visual element 505 is grey in colour, preferably the colour is (approximately) 50% black, preferably the colour is Pantone Cool Gray 7C.
  • the fourth visual element 507 can be used to test colour response of the image capture system. Consequently, the fourth visual element 507 includes the colour Pantone 674C.
  • the third and fourth visual elements 505, 507 are associated with, and coplanar with, one of the first or second visual elements.
  • the third and fourth visual elements are positioned within the field of view of the image capture device and therefore provide a control for the gross colour in each captured image.
  • the gross colour response of the image capture device can be altered primarily by changes in light. In the context of the apparatus and system illustrated in Figures 1 and 2, changes in light may happen as the ring lights (215 and 217) age. Alternatively, changes in gross colour response may occur as a result of aberrant processing of the digital signal from the sensor, or problems in the sensor itself.
  • the resolvable detail of at least one of the straight edges of each of the visual elements needs to be measured and the operational value of the resolvable detail needs to be determined to test if it exceeds a minimum operational value.
  • This process is initiated by identifying a rectangular region of interest (ROI) which includes at least a portion of one of the provided slanted straight edges and a portion of the background adjacent the edge.
  • ROI rectangular region of interest
  • Figure 6 illustrates the relative positioning of each ROI 601, 603, 605, 607 for each of the slanted straight edges 509, 511, 513, 515 provided on each of the visual elements. Additionally, the layout and measurements of the L-shape on the background of the first or second visual element are illustrated.
  • Figure 7 illustrates a crop of one of the ROI of a slanted horizontal edge 701.
  • These four ROI for each of the first and second visual elements 219, 221 are processed to measure an edge Spatial Frequency Response (e-SFR), wherein the SFR is output as an SFR curve of SFR values against spatial frequency (pixel utilisation levels), and the SFR indicates the resolvable detail for the electronic image capture system 201.
  • e-SFR Spatial Frequency Response
  • SFR Spatial Frequency Response
  • the SFR algorithm is executed on each edge of the shape in the first and second visual elements 219, 221 resulting in 8 measurements. These measurements are then combined and compared against a minimum utilisation level at a given spatial frequency value to obtain a pass or fail result for that test of the electronic image capture system 201 .
  • the measurements of the SFR are displayed as curves and are averaged to yield a single curve. That is, the e-SFR for the four edges of each shape are averaged to generate an averaged SFR which is compared against the minimum operational value for the electronic image capture system 201.
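Averaging the per-edge curves is straightforward when each e-SFR is sampled on the same spatial frequency axis. The function below is an illustrative sketch, not the SFRUtils/SFRChart code; it assumes all input curves have the same length and sampling.

```cpp
#include <cstddef>
#include <vector>

// Average several SFR curves (one per slanted edge) sampled on a common
// spatial frequency axis; the averaged curve is then compared against the
// minimum operational value for the image capture system.
std::vector<double> averageSfrCurves(const std::vector<std::vector<double>>& curves)
{
    std::vector<double> averaged(curves.front().size(), 0.0);
    for (const std::vector<double>& curve : curves) {
        for (std::size_t i = 0; i < averaged.size(); ++i) {
            averaged[i] += curve[i];
        }
    }
    for (double& value : averaged) {
        value /= static_cast<double>(curves.size());
    }
    return averaged;
}
```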
  • the specific minimum operational value for any given system will be dependent on the requirements of the system. However, in the exemplified context of an apparatus for imaging and analysing microbial growth on cell culture plates, the minimum operational value is set as an SFR curve achieving a utilisation level of at least 10 line pairs per pixel at a spatial frequency value of 0.5.
  • Figure 8 illustrates SFR curves averaged in the above described way for six tests, with respect to the minimum utilisation level of 10 line pairs per pixel at a spatial frequency value of 0.5.
  • the six tests of the electronic image capture system 201 are simulated tests with the lens 205 of the system having its focus changed to simulate movement of the focal plane and associated depth of field. It can be seen that two tests displayed as the curves marked as "Focussed" 801, 803 passed the minimum threshold utilisation level of 10 line pairs per pixel at a spatial frequency value of 0.5. Accordingly, the resolvable detail of the electronic image capture system here exceeds the minimum operational value for the electronic image capture system at the particular distance to the visual element being tested.
  • the tests marked with focus levels -1 (809), -0.5 (807), 1 (811) and 0.5 (805) fail the minimum threshold utilisation level of 10 line pairs per pixel at a spatial frequency value of 0.5.
  • the code for the implementation of the testing operation to be implemented by the processor 138 of the computer 136 is broken into two separate pieces.
  • a set of low-level functions is contained in a C++ class called SFRUtils.
  • a set of higher-level functions is contained in a class called SFRChart. This class implements the selection and region-of-interest (ROI) extraction for the testing operation.
  • the ROIs are cropped from the image of the test chart taken using the electronic image capture system and passed down to SFRUtils.
  • the SFRUtils contains the low-level functions used to compute the SFR of a particular region.
  • the key method in this class is processHorizontalROI which, when supplied with an image of an ROI, produces the SFR for that region.
  • the steps involved include: determining the location of the step between the two colours (e.g. light grey and dark grey) and fitting a line to it; using the line to straighten the edges and super-sample them, forming a 4x resolution 1D image of the transition from dark to light grey; differentiating the image to form an edge-spread-function (ESF); weighting the ESF with a Hamming window; and computing the absolute value of the discrete Fourier transform to yield the SFR.
  • ESF edge-spread-function
  • step 1 to locate the transition from dark to light grey, the position of the step between the two colours is found in each column of the image and a straight line is fitted to these positions.
  • step 2 once the line has been computed, the image is straightened and super-sampled. This is achieved by using the line equation with each column index to determine a floating point y coordinate. The column is then shifted by the difference between y and the middle of the image. The shift is performed by multiplying the difference by 4, converting to an integer and then adding the entire column to a vector 4 times the height of the image. Every column in the image is added in this fashion, the end result being a vector containing a 4x sampled 1D copy of the edge. Each row is then divided by the number of samples which contributed to it.
  • step 3 the 1D version of the image is differentiated.
  • a simple numeric differentiation is used where the previous element is subtracted from the next element, and the result is divided by two to give the derivative of the current element.
  • step 4 the ESF is now weighted with a Hamming window to reduce the influence of elements towards the edge of the ESF, and focus on the inner elements around the transition itself.
  • step 5 the absolute value of the discrete Fourier transform is computed to yield the SFR itself.
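For illustration, the five steps above can be condensed into a single routine. The sketch below assumes a greyscale, row-major ROI and a pre-fitted edge line y = slope·x + intercept (step 1 is omitted); the function name, data layout and the plain O(N²) DFT are assumptions made here for clarity, not the processHorizontalROI implementation.

```cpp
#include <cmath>
#include <complex>
#include <cstddef>
#include <vector>

// Illustrative e-SFR computation for one horizontal-edge ROI (steps 2 to 5).
// 'roi' is a greyscale image stored row-major; the edge position in column x is
// assumed to be y = slope * x + intercept, as produced by the line fit of step 1.
std::vector<double> computeSfrForRoi(const std::vector<double>& roi,
                                     int width, int height,
                                     double slope, double intercept)
{
    const double kPi = 3.14159265358979323846;

    // Step 2: shift each column so the edge aligns, accumulating into a
    // 4x super-sampled 1D profile of the dark-to-light transition.
    const int bins = 4 * height;
    std::vector<double> profile(bins, 0.0);
    std::vector<int> counts(bins, 0);
    for (int x = 0; x < width; ++x) {
        const double shift = (slope * x + intercept) - 0.5 * height;
        for (int y = 0; y < height; ++y) {
            const int bin = static_cast<int>(std::lround(4.0 * (y - shift)));
            if (bin >= 0 && bin < bins) {
                profile[bin] += roi[static_cast<std::size_t>(y) * width + x];
                counts[bin] += 1;
            }
        }
    }
    for (int i = 0; i < bins; ++i) {
        if (counts[i] > 0) profile[i] /= counts[i];
    }

    // Step 3: central-difference derivative of the 1D profile (the text calls
    // this the ESF; in ISO 12233 terminology it is the line spread function).
    std::vector<double> derivative(bins, 0.0);
    for (int i = 1; i + 1 < bins; ++i) {
        derivative[i] = (profile[i + 1] - profile[i - 1]) / 2.0;
    }

    // Step 4: Hamming window to emphasise the transition region.
    for (int i = 0; i < bins; ++i) {
        derivative[i] *= 0.54 - 0.46 * std::cos(2.0 * kPi * i / (bins - 1));
    }

    // Step 5: magnitude of the discrete Fourier transform, normalised to its
    // zero-frequency value, gives the SFR as a function of spatial frequency.
    std::vector<double> sfr(bins / 2, 0.0);
    for (int k = 0; k < bins / 2; ++k) {
        std::complex<double> sum(0.0, 0.0);
        for (int n = 0; n < bins; ++n) {
            sum += derivative[n] * std::polar(1.0, -2.0 * kPi * k * n / bins);
        }
        sfr[k] = std::abs(sum);
    }
    const double dc = sfr[0];
    if (dc > 0.0) {
        for (double& value : sfr) value /= dc;
    }
    return sfr;
}
```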
  • the system can easily approximate the appropriate positioning of the ROI relative to the visual elements.
  • the coordinate positions of the visual elements in a captured image are illustrated in Figure 4.
  • Figures 5 and 6 illustrate the coordinates and layout of the various visual elements and the positioning of the ROI on the first and second visual elements relative to the field of view of the image capture system. The dimensions illustrated in Figure 6 are correct for the furthest of the first and second visual elements.
  • the positioning of the ROI and shape on the closer of the first and second visual element are the same but scaled up by the appropriate factor as a result of the generated perspective effect. While positioning of the visual elements relative to the image capture system is generally fixed, the actual position of the visual elements in the captured image may vary by approximately ±1.25mm from their expected positions. Consequently, the algorithm used in measuring the resolvable detail of the first and second visual elements 219, 221 and the gross colour of the third and fourth visual elements 505, 507 includes a localisation step, to obtain an accurate position for the target.
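One plausible way to implement the localisation step (the patent does not prescribe a particular algorithm) is a small template search around the expected position, covering the ±1.25mm tolerance converted into pixels. The sketch below uses a sum-of-squared-differences match; the names are hypothetical and bounds checking is omitted for brevity.

```cpp
#include <cstddef>
#include <limits>
#include <vector>

struct Offset {
    int dx, dy;
};

// Search a small window around the expected target position for the offset that
// best matches a stored greyscale template of the visual element (SSD score).
// Images are row-major; the caller ensures the search window stays inside the image.
Offset localiseTarget(const std::vector<double>& image, int imageWidth,
                      const std::vector<double>& templ, int templWidth, int templHeight,
                      int expectedX, int expectedY, int searchRadiusPx)
{
    Offset best{0, 0};
    double bestScore = std::numeric_limits<double>::max();
    for (int dy = -searchRadiusPx; dy <= searchRadiusPx; ++dy) {
        for (int dx = -searchRadiusPx; dx <= searchRadiusPx; ++dx) {
            double ssd = 0.0;
            for (int ty = 0; ty < templHeight; ++ty) {
                for (int tx = 0; tx < templWidth; ++tx) {
                    const std::size_t idx =
                        static_cast<std::size_t>(expectedY + dy + ty) * imageWidth +
                        static_cast<std::size_t>(expectedX + dx + tx);
                    const double diff =
                        image[idx] - templ[static_cast<std::size_t>(ty) * templWidth + tx];
                    ssd += diff * diff;
                }
            }
            if (ssd < bestScore) {
                bestScore = ssd;
                best = {dx, dy};
            }
        }
    }
    return best;
}
```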
  • the method 901 includes the steps of: providing 903 a first visual element at a first distance to an image capture device.
  • the first visual element 219 as per the above embodiments includes a face, the face of the first visual element 219 includes a first colour 501 and a second colour 511, wherein the transition between the first colour and the second colour is defined by at least one straight edge; providing 905 a second visual element 221 as per the above embodiments, which includes a face, the face of the second visual element 221 includes a first colour 501 and a second colour 511, wherein the transition between the first colour and the second colour is defined by at least one straight edge; capturing 907 an image of the faces of both the first 219 and second 221 visual elements with the image capture device 201; measuring 909 the resolvable detail of the captured image by analysing the at least one straight edge of each of the first 219 and second 221 visual elements; and determining 911 whether the resolvable detail exceeds a minimum operational value for each of the first 219 and second 221 visual elements.

Landscapes

  • Engineering & Computer Science (AREA)
  • Quality & Reliability (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Investigating Or Analysing Materials By Optical Means (AREA)

Abstract

A method and system for testing operation of an electronic image capture device and system, including the use of visual elements at multiple distances from an image capture device, and which bound a desired depth of field, to permit assessment of the resolvable detail of the image capture system across the depth of field. Additionally, the method and system include visual elements which permit the assessment of the gross colour response of the image capture system.

Description

Title of Invention
Method and System for Validating Resolvable Detail across a Depth of Field
Technical Field
[0001] The present invention relates to a method and system for testing operation of an electronic image capture device and system. Particularly, but not exclusively, the present invention relates to the use of visual elements at multiple distances from an image capture device, and which bound a desired depth of field, to permit assessment of the resolvable detail of the image capture system across the depth of field. Additionally, the method and system include visual elements which permit the assessment of the gross colour response of the image capture system.
Background of Invention
[0002] When capturing an image of an object, it is desirable to reproduce the features of the object as accurately as possible. This includes accurately reproducing features such as colour temperature, hue and saturation, and capturing the image at a sufficient resolution to allow for the identification and distinguishing of fine details. Consequently, it is important to be able to assess and validate the operation of an image capture system.
[0003] Many factors can influence the image quality of an image capture system including lighting, lens quality and colour, focus, the aperture and focal length of the lens used, the distance of the object to be imaged from the image capture system and, in the case of a digital image capture device, the number of photo-elements on the camera sensor and the internal processing of the device such as compression and correction of the image. Changes in any of these factors can result in gross changes in the captured images. For example, a change in temperature or hue of the lighting can change the white balance and colour saturation of the image.
[0004] Furthermore, changes in focus, changes in the aperture of the lens, or changes in the distance of an object from an image capture system may result in a change in the resolvable detail of the object. For example, if the focus of an image capture device drifts then the distance to the focal plane will change. Moreover, if the aperture of the image capture device is widened (e.g. the lens is opened up) then the nearest and farthest distances from the image capture device at which an object will appear acceptably sharp become closer together (the depth of field decreases). This effect is reversed when the aperture is narrowed.
[0005] The size of the depth of field of an image capture device and the distance of the focal plane from the image capture device, relative to an object to be imaged, are major determinants of the resolution of the object in a captured image.
[0006] In its simplest definition, resolution of an electronic image capture system is defined by the system's ability to capture and process finely spaced details. That is, the resolution indicates the highest spatial frequency that any image capture system can produce. However, measuring resolution in this way can be insufficient to test whether an electronic image capture system is operating correctly. A more accurate metric for measuring the resolvable detail of a captured image is to use the spatial frequency response (SFR) of the electronic image capture system. The SFR can be used to assess contrast loss as spatial frequency increases. Generally, as the distance between visual elements decreases (e.g. the spatial frequency increases), the contrast between those elements decreases to a point whereby the difference between the elements can no longer be sufficiently distinguished.
[0007] In order to measure the resolvable detail of an electronic image capture system, a test chart can be used such as that provided by the International Standard: ISO 12233:2014 Photography - Electronic still picture imaging - Resolution and spatial frequency responses. As will be appreciated by those persons skilled in the art, test charts consist of a card containing visual elements which can be analysed to determine the SFR of a captured image of the chart. However, the use of a test chart provides information on the resolvable detail of the image capture device at one distance. As such, it provides no information on the depth of field of the image capture system.
[0008] One application where the resolvable detail of an image capture system is important is the field of automated pathology services such as analysis of solid growth culture medium plates in pathology laboratories. Automated pathology image capture and analysis offers the ability to rapidly increase the efficiency and decrease the cost of pathology services by reducing manual processing of pathology samples. However, there are technical challenges that need to be overcome to allow reproducible and reliable results. For example, it is important that the image capture system and any cell culture plate support are configured in an appropriate manner so that an image of the cell culture plate, and any associated bacteria growth, can be produced with enough resolvable detail to allow accurate analysis. However, this task is complicated by the fact that the depth of culture medium in any given plate can vary and, as such, the distance of the surface of the cell culture medium (and any colonies thereon) from the image capture system can also vary, even if the bases of the plates are positioned at a consistent distance. Consequently, there is a need to ensure that, irrespective of the depth of the medium in the plate, the depth of field of the image capture device is such that it can produce an image of the surface of the medium in the plate (and any associated colonies) with sufficient resolvable detail to allow accurate analysis.
[0009] Before turning to a summary of the present invention, it must be appreciated that the above description of the prior art has been provided merely as background to explain the context of the invention. It is not to be taken as an admission that any of the material referred to was published or known, or was a part of the common general knowledge in the relevant art.
Summary of Invention
[0010] In one aspect, the present invention provides a method of testing operation of an image capture system, the method including: providing a first visual element including a face at a first distance from an image capture device and a second visual element including a face at a second distance from the image capture device, the face of each of the first and second visual elements including a first colour and a second colour wherein the transition between the first colour and the second colour is defined by at least one straight edge, the edge oriented such that it is between 2 degrees and 88 degrees rotated from vertical or horizontal relative to the orientation of the image capture device; capturing an image of the faces of both the first and second visual element with the image capture device; measuring resolvable detail of the captured image by analysing the at least one straight edge of the first and the second visual element; and determining whether the resolvable detail exceeds a minimum operational value for each of the first and second visual elements.
[0011] In another aspect, the invention provides a system for testing operation of an image capture device, the system including: an image capture device; a first visual element including a face at a first distance from the image capture device and a second visual element including a face at a second distance from the image capture device, the face of each of the first and second visual elements including a first colour and a second colour wherein the transition between the first colour and the second colour is defined by at least one straight edge, the edge oriented such that it is between 2 degrees and 88 degrees rotated from vertical or horizontal relative to the orientation of the image capture device; and a processor for measuring resolvable detail of the image capture device by analysing the at least one straight edge of the first visual element and the at least one straight edge of the second visual element in a captured image and determining whether the resolvable detail exceeds a minimum operational value for each of the first and second visual elements.
[0012] Providing a first visual element and a second visual element permits the assessment of the resolvable detail at two distances from the image capture device corresponding to the first and second distance of the first and second visual elements. When the resolvable detail for each of the first and second visual elements is above the minimum operational value, the depth of field of the image capture device is considered acceptable. Consequently, the image capture device will be able to provide an image of an object at a distance at or between the distances of the first and second visual elements with a resolvable detail at or above the minimum operational value.
[0013] In some embodiments, the system for testing includes an electronic image capture device. In some embodiments the image capture device is an electronic camera.
[0014] In some embodiments, the system for testing operation of an image capture device includes a support for supporting objects to be imaged, the support being positionable at a predetermined distance from the image capture device. In one embodiment, the system under test is used for imaging and analysing microbial growth on a solid culture medium in a culture plate, and the minimum operational value is a sufficient value for the image capture system to be used for this analysis. Therefore, it is preferable for the support to be adapted to support a culture plate for containing a culture medium such as agar. In some embodiments, the support defines a plane and is adapted to abut the culture plate. The support may abut the underside of the culture plate or may encompass and abut the circumference of the culture plate.
[0015] As will be understood by those persons skilled in the art, a culture plate for microbiology (also known as a Petri dish) generally consists of a round disc with raised edges which define a circular well for deposition of liquid medium which sets in the plate to form a solid culture medium. Such culture plates typically include a lid for minimizing contamination of the medium and for assisting in preventing dehydration of the medium. The combination of the medium and the plate is hereinafter referred to throughout the specification as a "culture plate".
[0016] The image capture system has been found, in one example, to provide sufficiently accurate images of microbial growth on the culture plate in order to provide a microbiological assessment. This assessment may be performed manually by a skilled laboratory technologist. Alternatively, the assessment may be automated and performed using a classifier that has been trained using a machine learning algorithm. Images obtained using the apparatus may be processed and used as input into the classifier. An example of an electronic image capture system, including such a classifier, is described in the Applicant's Australian patent 2012225196 titled "Method and Software for Analysing Microbial Growth", the contents of which are herein incorporated by reference.
[0017] The method and systems allow for the evaluation of the resolvable detail of the image capture device at two distances from the image capture device. In some embodiments, the first visual element and the second visual element are associated with the support. Preferably the first visual element and second visual element are mounted on, or fixed to, the support. Alternatively, the first and second visual elements are mounted relative to the support such that they have a defined spatial relationship with the support but are independent of the support. Associating the
visual elements with the support allows for the evaluation of the resolvable detail within a range relative to the support.
[0018] In some embodiments the distance to the face of the first visual element from the image capture device is the maximum expected distance to an object being supported. In some embodiments the distance to the face of the second visual element from the image capture device is the minimum expected distance to an object being supported. In the preferred embodiment of imaging a cell culture plate, the desired range for assessment of the resolvable detail extends from the bottom of the well of the plate to the upper edge of the rim of the plate, when supported. Consequently, it can be established that the surface of the culture medium in the plate, and any associated microorganisms, can be imaged at or above a specified resolvable detail irrespective of the depth of the culture medium in the plate. In some embodiments the distance to the face of the first visual element from the image capture device is 1 mm or less than the distance of the support from the image capture device. In some embodiments the distance to the face of the second visual element from the image capture device is 15 mm or less than the distance of the support from the image capture device.
[0019] The first and second visual elements include a face including a first colour and a second colour. The transition from the first colour to the second colour is defined by at least one straight edge. Importantly, this edge is slanted relative to either vertical or horizontal relative to the orientation of the image capture system. In one embodiment, a slanted edge is considered a straight edge wherein the direction of the edge is between 2 degrees and 22.5 degrees from vertical or horizontal relative to the orientation of the image capture system. In some embodiments the direction of the edge is between 2 degrees and 10 degrees relative to horizontal or vertical relative to the image capture device. In some embodiments the direction of the edge is 5 degrees relative to horizontal or vertical relative to the image capture device.
[0020] The use of a slanted edge on the face of the visual elements allows for an edge-based Spatial Frequency Response (e-SFR) to be determined for each of the first and second visual elements as set out in the international standard for Photography - Electronic still picture imaging - Resolution and spatial frequency response - ISO 12233:2014, the contents of which are herein incorporated by way of this reference.
[0021] In the context of the above international standard, the term "operational value" as used in the context of resolvable detail refers to the required resolvable detail needed to allow analysis of the imaged object. As would be understood by persons skilled in the art, SFR is a multi-valued metric that assesses the contrast loss as a function of spatial frequency. In this context the minimum operational value will be expressed as a required contrast (modulation level or SFR values in the range of 1 to 0) for a minimum spatial frequency. In other words, the image capture device will meet the minimum operational value when the minimum required SFR occurs at or above a desired spatial frequency. Any suitable SFR can be chosen; however, it is advantageous to select an SFR in the linear range of an SFR curve. Therefore, in some embodiments the SFR is measured at 0.5 (or 50% contrast). In some embodiments, whereby the image capture device uses a digital sensor, the spatial frequency will be expressed as a sampling frequency (cycles/pixel or cycles per mm), line width per picture height (LW/PH) or lines per pixel. For example, the minimum operational value for the electronic image capture device may be met when the SFR is 0.5 at or above the spatial frequency of 10 lines per pixel.
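By way of illustration only, the following C++ sketch shows how a measured SFR curve might be checked against such a minimum operational value. The data structure, function name and example thresholds are assumptions made for illustration and are not taken from the specification; the check simply assumes the curve is sampled in order of increasing spatial frequency.

```cpp
#include <vector>

// One sample of a measured SFR curve: contrast (0 to 1) at a spatial
// frequency expressed in the units used by the system under test
// (e.g. lines per pixel).
struct SfrSample {
    double frequency;
    double contrast;
};

// Returns true when the curve still holds the required contrast at (or just
// beyond) the required spatial frequency, i.e. the device meets the minimum
// operational value. Samples are assumed ordered by increasing frequency.
bool meetsMinimumOperationalValue(const std::vector<SfrSample>& curve,
                                  double requiredContrast,    // e.g. 0.5
                                  double requiredFrequency)   // e.g. 10.0
{
    for (const SfrSample& sample : curve) {
        if (sample.frequency >= requiredFrequency) {
            return sample.contrast >= requiredContrast;
        }
    }
    return false;  // the curve never reaches the required frequency
}
```

In the context of paragraph [0012], the image capture system would be regarded as acceptable only when a check of this kind succeeds for both the first and the second visual element.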
[0022] In some embodiments the visual elements include at least two straight edges defining a transition between the first colour and the second colour. As such the operational value of each of the visual elements can be an average of each of these two edges. In one form of this embodiment, one of the at least two edges is slanted relative to vertical and a second of the at least two edges is slanted relative to horizontal in relation to the orientation of the image capture device. In some embodiments one of the at least two straight edges is between 2 degrees and 22.5 degrees rotated from vertical relative to the orientation of the image capture device and a second edge of the at least two straight edges is between 2 degrees and 22.5 degrees rotated from horizontal relative to the orientation of the image capture device. In some embodiments, the direction of the at least two edges is between 2 degrees and 10 degrees relative to horizontal or vertical relative to the image capture device. In some embodiments, the direction of the at least two edges is 5 degrees relative to horizontal or vertical relative to the image capture device. In some embodiments, the at least two edges are perpendicular to each other.

[0023] In some embodiments, the visual elements include at least four straight edges defining a transition between the first colour and the second colour. In some embodiments, a first and second edge of the at least four straight edges is between 2 degrees and 22.5 degrees, or 2 to 10 degrees, or 5 degrees rotated from vertical relative to the orientation of the image capture device and a third and fourth edge of the at least four straight edges is between 2 degrees and 22.5 degrees, or 2 to 10 degrees, or 5 degrees rotated from horizontal relative to the orientation of the image capture device, thereby providing at least two edges for assessment of the vertical and horizontal resolvable detail. In some embodiments the first and second edge are parallel to each other and the third and fourth edge are parallel to each other. As such the operational value of each of the visual elements can be an average of each of these four edges. Alternatively, the operational value for the vertical resolvable detail can be derived by averaging the operational value for each of the slanted horizontal edges, and the horizontal resolvable detail can be derived by averaging the operational value for each of the slanted vertical edges.
[0024] As would be understood by persons skilled in the art, horizontal resolution values are measured in the longer image dimension which corresponds to the horizontal direction for a "landscape" image orientation, while vertical resolution values are measured in the shorter image dimension which corresponds to the vertical direction for a "landscape" image orientation. In the case of an electronic image system, the alignment is expressed relative to the orientation of the rows and columns of the photo-elements which comprise the sensor array. The slanted vertical line(s) of the first and second visual elements can be used to assess the horizontal resolution and the slanted horizontal line(s) of the first and second visual elements can be used to assess the vertical image resolution.
[0025] In some embodiments, the edges are provided by the visual elements having a background including the first colour and a shape provided on the background, the shape including the second colour, the two colours being coplanar. In some embodiments, the shape provided on the background is a general "L" shape, slanted off vertical and horizontal. This provides two lines slanted relative to vertical, and two lines slanted relative to horizontal. As would be understood, other shapes can be used such as squares, rectangles or parallelograms.

[0026] Optical systems create perspective effects whereby objects of the same size, but at differing distances from the image capture device, will appear different in size. For example, the further an object is from the image capture device, the smaller it will appear in a captured image. As the distance to the first visual element from the image capture device differs from the distance to the second visual element, the two elements appear at different apparent sizes; in some embodiments the first visual element is further from the image capture device than the second visual element and the face of the first visual element appears larger than the second visual element in a captured image. The increase in size of the first visual element, relative to the second visual element, will depend on the distance between the two visual elements as a proportion of the total distance from the image capture device. In some embodiments the first visual element appears between 8% and 12% larger than the second visual element in the captured image. In some embodiments the first visual element appears approximately 10% larger than the second visual element. As would be understood, subsequent analysis of the first and second visual elements will need to take this perspective effect in the captured image into account. Therefore, in some embodiments, the method includes the step of compensating for the perspective effects in the captured image.
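As a purely illustrative aside, a size difference of roughly 10% follows from a simple pinhole projection, under which apparent size is inversely proportional to distance. The C++ sketch below uses placeholder distances, not values from the specification.

```cpp
#include <iostream>

// Under a pinhole model the apparent size of an object is inversely
// proportional to its distance from the camera, so the element nearer the
// camera appears larger by the ratio of the two distances.
double apparentScaleRatio(double nearDistanceMm, double farDistanceMm)
{
    return farDistanceMm / nearDistanceMm;
}

int main()
{
    // Placeholder distances: two element faces 15 mm apart at roughly
    // 145-160 mm from the lens give a scale ratio of about 1.10, i.e. the
    // nearer face appears about 10% larger than the farther one.
    std::cout << "scale ratio = " << apparentScaleRatio(145.0, 160.0) << "\n";
    return 0;
}
```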
[0027] Preferably, the first and second colours of the visual elements have low contrast relative to each other. By using colours with low contrast, more reliable and reproducible results can be achieved compared to high contrast colours such as black and white. In some embodiments the first and second colours are different shades of grey. In some embodiments the first colour is between 15% and 25% black. In some embodiments the first colour is (approximately) 20% black. In some embodiments the first colour is Pantone 427C. In some embodiments the second colour is between 75% black and 85% black. In some embodiments the second colour is
(approximately) 80% black. In some embodiments the second colour is Pantone 425C. In some embodiments the first colour is used as the background colour of the first and second visual elements.
[0028] While the first and second visual elements can be used to measure and validate the resolvable detail, the method relates generally to testing the operation of the image capture system and device. As such, in some embodiments the first and second aspects of the present invention include a third and fourth visual element together with the first and second visual elements, the third visual element including a third colour and the fourth visual element including a fourth colour. In some embodiments of the first aspect of the invention, the method further includes the steps of providing the third and fourth visual element; measuring the colour of the third and fourth visual elements in the captured image; and determining the gross colour response of the image capture system from the measured colour of the third and fourth visual elements in the captured image. Consequently, the term "operation of an image capture system/device" encompasses both resolvable detail and gross colour response of the image capture device and system. In some embodiments, the method and system only relate to testing the resolvable detail.
[0029] In some embodiments of the invention, the third colour is selected to permit assessment of the white balance of the captured image. In some embodiments the third colour is a shade of grey. In some embodiments the third colour is (approximately) 50% black; preferably the third colour is Pantone Cool Gray 7C. In some embodiments the fourth colour is selected to assess the colour response of the image capture system; preferably the fourth colour is Pantone 674C. That is, the colours are selected so that they have different properties relative to each other, which makes gross errors easy to spot and permits detection of major camera faults.
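By way of illustration only, a gross colour check of the kind described above might compare the mean colour measured over each patch against stored expectations: the grey (third) patch should have roughly equal red, green and blue channels when the white balance is correct, and both patches should sit close to reference values recorded when the system was known to be operating correctly. The C++ structures, the tolerances and the idea of stored reference values below are assumptions for illustration, not details taken from the specification.

```cpp
#include <cmath>

// Mean channel values measured over a patch, on a 0-255 scale.
struct Rgb {
    double r, g, b;
};

// White balance: on a neutral grey patch the three channels should agree to
// within a small tolerance once the camera is correctly balanced.
bool whiteBalanceOk(const Rgb& greyPatch, double tolerance)
{
    const double maxC = std::fmax(greyPatch.r, std::fmax(greyPatch.g, greyPatch.b));
    const double minC = std::fmin(greyPatch.r, std::fmin(greyPatch.g, greyPatch.b));
    return (maxC - minC) <= tolerance;
}

// Gross colour response: the measured patch colour should sit close to a
// stored reference captured when the system was known to be operating well.
bool colourResponseOk(const Rgb& measured, const Rgb& reference, double tolerance)
{
    const double dr = measured.r - reference.r;
    const double dg = measured.g - reference.g;
    const double db = measured.b - reference.b;
    return std::sqrt(dr * dr + dg * dg + db * db) <= tolerance;
}
```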
[0030] Preferably, the visual elements are positioned such that they are in the field of view of the image capture device together with the object to be imaged. This permits the testing of the operation of the image capture device and system in each image captured during operation. As such the visual elements can be used as internal controls within an image to ensure that the image was captured within appropriate operational values and parameters.
Brief Description of Drawings
[0031 ] The invention is further illustrated in the following embodiments described with reference to the accompanying drawings. The drawings are for the purpose of describing particular embodiments only and are not intended to be limiting with respect to the above description.
[0032] Figure 1 illustrates an apparatus suitable for the imaging and analysis of microbial growth on culture plates which is suitable for use with an embodiment of the method and system of the present invention.

[0033] Figure 2 illustrates a layout of a system suitable for imaging a culture plate.
[0034] Figure 3 illustrates the physical layout of the first, second, third and fourth visual elements relative to the field of view of the image capture system.
[0035] Figure 4 illustrates the apparent (perspective) view of the first, second, third and fourth visual elements in a captured image.
[0036] Figure 5 illustrates the dimension and layout of the third and fourth visual element relative to one of the first or second visual elements.
[0037] Figure 6 illustrates the location of the regions of interest relative to the L-shape on the first or second visual elements.
[0038] Figure 7 illustrates a cropped region of interest to be processed for determination of the resolvable detail.
[0039] Figure 8 illustrates a spatial frequency curve for a slanted horizontal and vertical edge using a focussed and defocussed electronic camera.
[0040] Figure 9 illustrates a flow chart of the method of an embodiment of the present invention.
Detailed Description
[0041] Figure 1 illustrates an embodiment of an apparatus for the automated imaging and analysis of microbial growth plates as described in the Applicant's published PCT application WO2012/119190 A1 (herein incorporated by way of this reference) which is suitable for use with the method and system of the present invention. Figure 1 shows an embodiment of an apparatus 100 for use in analysing microbial growth on a medium in a culture plate 102 in the form of an agar plate. The apparatus 100 includes the following components.
[0042] An image capture device 104 in the form of a high resolution digital camera 106 of machine vision quality with an appropriate fixed focal length lens is positioned above a ring light 110.

[0043] The ring light 110 has a large diameter relative to the diameter of the culture plate 102. In this example, the ring light has a diameter of 180 mm. The ring light 110 contains several hundred white LEDs arranged in a circular array and a diffuser. This light provides low angle, diffused side lighting to enable the culture plate to be uniformly illuminated. The ring light 110 is positioned around 40 mm above an opaque cover 112 that forms part of the frame 118, and thus about 30 mm above the culture plate 102. The positioning of the ring light 110 so that light from the white LEDs impinges on the surface of the culture plate 102 at a low angle prevents a specular reflection of the LEDs from a central surface of the medium being captured by the image capture device 104.
[0044] A lighting device 114 in the form of a flat panel light based on an array of white LEDs behind a diffuser. The lighting device 114 is located about 150 mm below the opaque cover 112. This distance is chosen so that light from the ring light 110 falls on the baffles rather than the light 114, to reduce rear illumination of the culture plate 102.
[0045] A support 116 for supporting the culture plate 102 in the direct field of view of the image capture device 104. The support 116 is a transparent glass stage that is 3 mm thick. The glass may be replaced if it becomes scratched over time. The support 116 includes two or more triangle shaped transparent positioning elements for positioning the culture plate 102 on the support. The apexes of the triangles point towards the centre of the support for placement of the culture plate 102 so that the apexes touch the circumference of the culture plate 102.
[0046] A frame 118 positions the image capture device 104, support 116, ring light 110 and lighting device 114 relative to each other. The frame 118 is made of an opaque material, such as sheet metal or plastic, which reduces the amount of light entering the apparatus 100. The internal surfaces of the apparatus 100 are blackened where possible to reduce reflection of light from the internal surfaces into the lens 108.
[0047] The frame 118 includes a door 120 providing an access path for a human operator to place the culture plate 102 on the support 116. Alternatively, a robotic plate-handling device may use the access path to place the culture plate 102 precisely on the support 116 for imaging, and then to remove the culture plate to a designated output channel/slide. For example, the culture plate may be placed in an output channel representing one of the up to four categories described above.
[0048] The opaque cover 112 is an aluminium plate that extends across the width of the frame 118 and effectively splits the frame 118 into a top enclosure 122 and bottom enclosure 124. The opaque cover 112 includes a hole 126 to allow light from the lighting device 114 to transmit through to the culture plate 102. The width of the hole 126 is just slightly larger than the width of the culture plate 102 (which is 90 mm in this example and is typically between 88 and 100 mm) and is less than the diameter of the ring light 110. This prevents light emitted from the ring light 110 from reflecting from the bottom surface 128 of the frame 118 or the surface of the flat panel light 114 and back through the culture plate 102.
[0049] The frame 118 also includes light baffles 130 positioned below the opaque cover 112.
[0050] Means 131 for changing the position of the ring light 110 relative to the support 116 are also provided in the form of a rack and pinion assembly.
[0051] The frame 118, opaque cover 112 and light baffles 130 define a cavity 132 such that the support 116 supports the culture plate 102 between the image capture device 104 and the cavity 132. The support (glass stage) 116 seals the cavity 132 and prevents unwanted material from falling into the cavity 132. When the ring light 110 is illuminated and the lighting device 114 is off, the opaque cover 112 prevents light from the ring light 110 from illuminating visible areas of the cavity 132. In this configuration, the cavity 132 looks like a black background.
[0052] A side angle light 134 is used to illuminate the culture plate 102 from an angle to highlight any surface topography on the agar, such as dimples or a granular texture. An alternative to the side angle light 134 is to activate only some of the LEDs in the ring light 110, such that the culture plate 102 is illuminated from one direction only.
[0053] A processing means such as a computer 136 is connected to the image capture device 104, the ring light 110 and the lighting device 114 via a physical or wireless interface. The computer 136 may include a processor 138 and memory 140 storing software 142 for activating the different components, capturing raw data and processing the data.
[0054] A library of images, metadata and other information may be stored at the computer 136, or may be accessible at the computer 136 via a network.
[0055] It will be appreciated that different components may be substituted for any of the above described components of the device, and that the distance between components and position of components may be adjusted. For example, although the camera 106 and lens 108 are shown inside the frame 118, in another example, they could be positioned outside the frame 118, with the lens 108 protruding through a hole in the top surface of the frame 118. Also, the width of the frame 118 could be decreased to reduce the overall size of the system 100.
[0056] An image acquisition process using the system 100 may be suitable for obtaining images for use in classifying microbial growth on the plate 102 using a trained machine learning classifier, or in training such a classifier. In a manual process, the steps of placement, image capture, and analysis of growth parameters may be performed by a human operator, but it will also be appreciated that many of the steps of the process may be automated and performed by software or by a robotic device.
[0057] Figure 2 illustrates an embodiment of a system in accordance with an aspect of the present invention. Specifically, in a system for testing the operation of an image capture device 201, the image capture device, comprising a sensor 203 and a lens 205, is supported by a frame (not shown) above a support 207 at a distance from the support. The lens used in the illustrated system has a focal length of 16 mm which provides a field of view of 30°45'.
[0058] In the illustrated embodiment the support 207 is provided by a planar platform; however, alternative supports may be used, such as a plate gripper that supports the culture plate in place by gripping the plate edges. A plate gripper provides the advantage of being both able to support the culture plate (without obstructing the view of the surface of the culture plate, or impeding the rear lighting of the base of the culture plate), and permitting manipulation of the culture plate. A culture plate 209 is shown positioned at an imaging station which is central to the optical axis (dashed and dotted line) of the image capture device 201. The imaging station may include a plate gripper assembly (not shown) which places, locates and removes the culture plate from a predetermined position relative to the camera. In one form the plate gripper encompasses the plate and can manipulate the positioning of the plate. In this form the plate gripper supports the plate relative to the camera and therefore performs the role of the support 207. The dashed lines 211 and 213 indicate the horizontal and vertical field of view of the image capture system 201.
[0059] The illustrated embodiment of the system includes a light source 215 positioned in between the image capture device 201 and the support 207 which can be used to illuminate the image capture device facing- (front-) surface of the culture plate. Additionally, the system includes a light source 217 positioned on the side of the support opposite to the image capture device 201 which permits the illumination of the rear of the culture plate. Suitable light sources are known in the art; however, in a preferred form the front 215 and rear 217 light sources are provided by ring lights. The use of ring lights is particularly advantageous for several reasons. Firstly, with respect to the front ring light 215, the ring shape of the light allows for imaging of the plate (as well as any additional visual elements) through the aperture of the light. Furthermore, the ring light 215 provides substantially uniform lighting, thereby providing substantially consistent light across the surface of the culture plate which assists in minimising shadow. Preferably the light is mounted at a relatively close proximity (95 mm) to the support 207; this results in a low angle of incidence for light falling on the culture plate, preventing specular reflection of the light source from a central surface of the medium being captured by the image capture device.
[0060] The ring light 215 may include a plurality of LEDs arranged in a circular array and a diffuser associated with the LEDs. Alternatively, the LEDs may themselves produce diffuse light and a separate diffuser may not be required. LEDs may be evenly spaced around the array in a single ring or in multiple rings. In one arrangement, the LEDs may be selectively illuminable so that, for example, only half or a smaller fraction of the ring light is activated at one time. This may provide angled lighting of the culture plate 209, and may be useful in highlighting surface topography of the medium. To ensure a uniform distribution of light intensity the number of LEDs in the array may be greater than 50, preferably greater than 180 so that an LED is spaced every 2 degrees. This is further assisted by the diffuser to smooth out the light distribution. In other alternatives, the ring light may be a fluorescent light or a plurality of fibre optic sources with a diffuser.
[0061] The rear ring light 217 allows for back lighting of the culture plate 209. As such, the support 207 upon which the culture plate 209 is positioned permits the transit of light. This can be achieved by several means, such as providing an aperture in the support 207 (such as when the support is provided by plate grippers), or the support 207 can be made of a transparent material.
[0062] A first visual element 219 and a second visual element 221 are mounted relative to the support 207. As can be seen, the uppermost surface of the first visual element 219 is further from the image capture device 201 than the uppermost surface of the second visual element 221. Preferably the upper surface of the first visual element 219 and the upper surface of the second visual element 221 bound the minimum and maximum expected distance to the medium in a supported culture plate 209. This permits the testing of the resolvable detail of the image capture device across this range, thereby ensuring that a sufficient operational value for the resolvable detail is achieved for any possible culture medium depth. However, as would be understood, the distance between the visual elements can be greater or less than the depth of the well of the culture plate.
[0063] Figure 3 illustrates a top view of the physical layout of the first visual element 219 and the second visual element 221 in relation to a positioned culture plate 209 in the image station. Further illustrated in Figure 3 is the field of view for the image capture device at the lower visual element (118.5 mm x 99.5 mm) and the field of view at the upper target (108.5 mm x 91 mm). Figure 4 illustrates a perspective view of the first visual element 219 and the second visual element 221 in a captured image. As can be seen by comparing Figure 3 and Figure 4, the perspective view of the second visual element 221 (being the visual element further from the image capture system) in Figure 4 results in the first visual element 219 appearing larger (relative to the second visual element 221) than its actual physical size. As such the perspective effect in the captured image can be compensated for during measurement of the resolvable detail of the visual elements. Alternatively, the physical size of the first visual element 219 and second visual element 221 may be different such that they appear of equal size in a captured image. A further alternative is that there is no compensation made for the perspective effect resultant from the different distances of the first visual element 219 and the second visual element 221 from the image capture device 201.
[0064] Figure 3 and Figure 4 also illustrate the positioning of the first and second visual elements (219 and 221) within the field of view of the image capture device and relative to a culture plate 209 when positioned at an image station on the support 207.
[0065] Figure 5 illustrates the dimensions and layout of one of the first or second visual elements. As can be seen, the visual elements include a face including a background having a first colour 501 and a shape 503 on the background, the shape having a second colour. Preferably, the first and second colours of the visual elements have low contrast relative to each other. By using colours with low contrast, more reliable and reproducible results can be achieved compared to high contrast colours such as black and white. High contrast colours, such as black and white, can produce aberrant readouts which are determined to be artificially sharp. In the exemplified embodiment of the invention illustrated in Figures 3 to 7, the background colour and the colour of the shape are different shades of grey. Preferably, the background colour is Pantone 427C which substantially corresponds to 20% black and the shape is Pantone 425C which substantially corresponds to 80% black.
However, it is to be understood that while the presently embodied colours have been demonstrated to function in the context of the present invention, a wide range of greys and colours may be suitable for use.
[0066] The illustrated shape is a general "L" shape which is slanted to provide at least one slanted edge which defines a transition between the colour of the
background and the colour of the shape. In the exemplified embodiment, the L-shape provides four slanted edges, two of which are slanted relative to vertical (509 and 511) and two of which are slanted relative to horizontal (513 and 515). This allows for assessment of four slanted edges for each of the first and second visual elements.
[0067] In the illustrated embodiment the L-shape is rotated 5 degrees anticlockwise, thereby providing two vertical edges which are rotated 5 degrees anticlockwise relative to vertical (509 and 511) and two horizontal edges which are rotated 5 degrees anticlockwise relative to horizontal (513 and 515). However, in accordance with the invention the slanted edges can be in the range of 2 degrees to 22.5 degrees and may be slanted clockwise. Furthermore, the slanted edges of the L-shape do not need to be parallel and as such the L-shape may provide two vertical edges of differing angles relative to the image capture device and two horizontal edges of differing angles relative to the image capture device.
[0068] Furthermore, it is to be understood that other shapes are suitable for use in the present invention. For example, squares or rectangles could be used, as could parallelograms, or any shape which provides at least one straight edge with an appropriate angle.
[0069] Further illustrated in Figure 5 are a third visual element 505 and a fourth visual element 507. The third visual element 505 includes a third colour and the fourth visual element 507 includes a fourth colour. The third visual element 505 and the fourth visual element 507 can be used to test the gross colour response of the image capture device from the measured colour of the third and fourth visual elements in a captured image. In the illustrated embodiment of Figure 5, the third visual element 505 is used to test the white balance of the image capture system. Consequently, the third visual element 505 is grey in colour; preferably the colour is (approximately) 50% black, preferably Pantone Cool Gray 7C. The fourth visual element 507 can be used to test the colour response of the image capture system. Consequently, the fourth visual element 507 includes the colour Pantone 674C.
[0070] In a preferred embodiment the third and fourth visual elements 505, 507 are associated with, and coplanar with, one of the first or second visual elements. Like the first and second visual elements, the third and fourth visual elements are positioned within the field of view of the image capture device and therefore provide a control for the gross colour in each captured image. As would be understood, the gross colour response of the image capture device can be altered primarily by changes in light. In the context of the apparatus and system illustrated in Figures 1 and 2, changes in light may happen as the ring lights (215 and 217) age. Alternatively, changes in gross colour response may occur as a result of aberrant processing of the digital signal from the sensor, or problems in the sensor itself.

[0071] Upon capturing an image of the face of the first and second visual elements, the resolvable detail of at least one of the straight edges of each of the visual elements needs to be measured and the operational value of the resolvable detail needs to be determined to test if it exceeds a minimum operational value. This process is initiated by identifying a rectangular region of interest (ROI) which includes at least a portion of one of the provided slanted straight edges and a portion of the background adjacent the edge.
[0072] Figure 6 illustrates the relative positioning of each ROI 601, 603, 605, 607 for each of the slanted straight edges 509, 511, 513, 515 provided on each of the visual elements. Additionally, the layout and measurements of the L-shape on the background of the first or second visual element is illustrated.
[0073] Figure 7 illustrates a crop of one of the ROI of a slanted horizontal edge 701. In the embodiment there are a possible four ROI for each of the first and second visual elements 219, 221: two for each of the slanted vertical edges 509, 511 and two for each of the slanted horizontal edges 513, 515. These four ROI for each of the first and second visual elements 219, 221 are processed to measure an edge Spatial Frequency Response (e-SFR), wherein the SFR is output as an SFR curve of SFR values per spatial frequency (pixel utilisation levels), and the SFR indicates the resolvable detail for the electronic image capture system 201.
[0074] The method of characterising an edge Spatial Frequency Response (SFR) is provided by an SFR algorithm in the above mentioned ISO standard (ISO
12233:2014). This method allows measurement of the resolvable detail that a particular optical system, including the camera, can capture. The SFR method relies on the availability of at least two straight edges, with approximately 5 degrees to each axis, and the edges must transition from light grey to dark grey. Figure 7 shows the edge 701 of one region of interest transitioning from light grey in the background 501 to dark grey in the L-shape 503.
[0075] The SFR algorithm is executed on each edge of the shape in the first and second visual elements 219, 221 resulting in 8 measurements. These measurements are then combined and compared against a minimum utilisation level at a given spatial frequency value to obtain a pass or fail result for that test of the electronic image capture system 201. The measurements of the SFR are displayed as curves and are averaged to yield a single curve. That is, the e-SFR for the four edges of each shape are averaged to generate an averaged SFR which is compared against the minimum operational value for the electronic image capture system 201.
[0076] The specific minimum operational value for a given system will be dependent on the requirements of the system. However, in the exemplified context of an apparatus for imaging and analysing microbial growth on cell culture plates, the minimum operational value is set as an SFR curve with an SFR value above 0.5 at a utilisation level of 10 line pairs per pixel.
[0077] Figure 8 illustrates SFR curves averaged in the above described way for six tests, with respect to the minimum utilisation level of 10 line pairs per pixel at an SFR value of 0.5. The six tests of the electronic image capture system 201 are simulated tests with the lens 205 of the system having its focus changed to simulate movement of the focal plane and associated depth of field. It can be seen that two tests displayed as the curves marked as "Focussed" 801, 803 passed the minimum threshold utilisation level of 10 line pairs per pixel at an SFR value of 0.5. Accordingly, the resolvable detail of the electronic image capture system here exceeds the minimum operational value for the electronic image capture system at the particular distance to the visual element being tested. The tests marked with focus levels -1 (809), -0.5 (807), 1 (811) and 0.5 (805) fail the minimum threshold utilisation level of 10 line pairs per pixel at an SFR value of 0.5.
[0078] In one embodiment, the code for the implementation of the testing operation to be implemented by the processor 138 of the computer 136 is broken into two separate pieces. A set of low-level functions is contained in a C++ class called SFRUtils. A set of higher-level functions is contained in a class called SFRChart. This class implements the selection and region-of-interest (ROI) extraction for the testing operation. Also, the ROIs are cropped from the image of the test chart taken using the electronic image capture system and passed down to SFRUtils.
[0079] The SFRUtils class contains the low-level functions used to compute the SFR of a particular region. The key method in this class is processHorizontalROI which, when supplied an image of an ROI, produces the SFR for that region. The steps involved include: determining the location of the step between the two colours (e.g. light grey and dark grey) and fitting a line to it; using the line to straighten the edges and super-sample them, forming a 4x resolution 1D image of the transition from dark to light grey; differentiating the image to form an edge-spread-function (ESF); applying a Hamming window to the differentiated function to weight samples in the centre more highly than those at the edges; and taking the absolute value of the discrete Fourier transform of the processed ESF to yield the SFR.
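The last three of these steps can be sketched compactly in C++, starting from the 4x super-sampled one-dimensional edge profile produced in step 2 (the first two steps are sketched after paragraphs [0080] and [0081]). This is an illustrative sketch, not the SFRUtils implementation: the function name, the naive discrete Fourier transform and the DC normalisation are assumptions made for clarity.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Steps 3-5 of the edge-SFR computation: differentiate the super-sampled
// edge profile, weight it with a Hamming window, and take the magnitude of
// its discrete Fourier transform. Bin k of the result corresponds to
// k/profile.size() cycles per super-sampled pixel (multiply by 4 for cycles
// per original pixel). The result is normalised so that the DC bin equals 1.
std::vector<double> edgeProfileToSfr(const std::vector<double>& profile)
{
    const std::size_t n = profile.size();
    if (n < 8) return {};   // too short to produce a useful curve

    // Step 3: simple central-difference derivative of the edge profile.
    std::vector<double> derivative(n, 0.0);
    for (std::size_t i = 1; i + 1 < n; ++i) {
        derivative[i] = (profile[i + 1] - profile[i - 1]) / 2.0;
    }

    // Step 4: Hamming window so samples near the transition dominate.
    const double pi = 3.14159265358979323846;
    for (std::size_t i = 0; i < n; ++i) {
        const double w = 0.54 - 0.46 * std::cos(2.0 * pi * i / (n - 1));
        derivative[i] *= w;
    }

    // Step 5: magnitude of a naive discrete Fourier transform.
    std::vector<double> sfr(n / 2 + 1, 0.0);
    for (std::size_t k = 0; k < sfr.size(); ++k) {
        double re = 0.0, im = 0.0;
        for (std::size_t i = 0; i < n; ++i) {
            const double angle = -2.0 * pi * static_cast<double>(k * i) / n;
            re += derivative[i] * std::cos(angle);
            im += derivative[i] * std::sin(angle);
        }
        sfr[k] = std::sqrt(re * re + im * im);
    }

    // Normalise against the zero-frequency (DC) response.
    const double dc = sfr[0];
    if (dc > 0.0) {
        for (double& value : sfr) value /= dc;
    }
    return sfr;
}
```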
[0080] Each of the above steps can be broken down into an additional set of steps. In step 1 , to locate the transition from dark to light grey, the image is
differentiated to find edges. Within each column of the image, the maximum of the gradient is computed, which yields the position of the edge. Once the edge positions have been located, a linear fit is computed, which is then tuned via iteratively reweighted least squares (IRLS). IRLS requires solving a linear system of equations and currently uses a matrix inverse to determine the solution to the system.
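A minimal C++ sketch of this first step is given below, assuming the region of interest is supplied as a row-major grid of grey values with the edge running roughly horizontally. The closed-form 2x2 solve and the simple reweighting function stand in for the matrix inverse and weighting scheme referred to above, which are not detailed in the specification.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Line { double slope, intercept; };   // edge row = slope * column + intercept

// Step 1 of the edge-SFR computation: for each column of the region of
// interest, take the row where the vertical gradient is largest as the edge
// position, then fit a line to those positions. The fit is refined with a few
// rounds of iteratively reweighted least squares (IRLS) so that columns whose
// edge estimate is an outlier contribute less. Assumes a non-empty ROI.
Line fitEdgeLine(const std::vector<std::vector<double>>& roi)   // roi[row][col]
{
    const std::size_t rows = roi.size();
    const std::size_t cols = roi.front().size();

    // Edge position per column = row of maximum absolute gradient.
    std::vector<double> x(cols), y(cols);
    for (std::size_t c = 0; c < cols; ++c) {
        double best = -1.0;
        std::size_t bestRow = 0;
        for (std::size_t r = 1; r < rows; ++r) {
            const double g = std::fabs(roi[r][c] - roi[r - 1][c]);
            if (g > best) { best = g; bestRow = r; }
        }
        x[c] = static_cast<double>(c);
        y[c] = static_cast<double>(bestRow);
    }

    // Weighted least squares, repeated with weights shrunk for large residuals.
    std::vector<double> w(cols, 1.0);
    Line line{0.0, 0.0};
    for (int iteration = 0; iteration < 5; ++iteration) {
        double sw = 0, sx = 0, sy = 0, sxx = 0, sxy = 0;
        for (std::size_t c = 0; c < cols; ++c) {
            sw  += w[c];
            sx  += w[c] * x[c];
            sy  += w[c] * y[c];
            sxx += w[c] * x[c] * x[c];
            sxy += w[c] * x[c] * y[c];
        }
        const double det = sw * sxx - sx * sx;
        if (std::fabs(det) < 1e-12) break;
        line.slope = (sw * sxy - sx * sy) / det;
        line.intercept = (sxx * sy - sx * sxy) / det;

        // Reweight: columns far from the current fit get smaller weights.
        for (std::size_t c = 0; c < cols; ++c) {
            const double residual =
                std::fabs(y[c] - (line.slope * x[c] + line.intercept));
            w[c] = 1.0 / (1.0 + residual);
        }
    }
    return line;
}
```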
[0081] In step 2, once the line has been computed, the image is straightened and super-sampled. This is achieved by using the line equation with each column index to determine a floating-point y coordinate. The column is then shifted by the difference between y and the middle of the image. The shift is performed by multiplying the difference by 4, converting to an integer and then adding the entire column to a vector 4 times the height of the image. Every column in the image is added in this fashion, the end result being a vector containing a 4x sampled 1D copy of the edge. Each row is then divided by the number of samples which contributed to it.
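A hedged C++ sketch of this second step follows, reusing the Line structure from the previous sketch. The exact rounding and the direction of the shift are illustrative choices and may differ from the SFRUtils implementation; the overall effect is the one described above: each column is aligned on the fitted edge, accumulated into quarter-pixel bins, and each bin is divided by its sample count.

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

struct Line { double slope, intercept; };   // edge row = slope * column + intercept

// Step 2 of the edge-SFR computation: straighten the slanted edge and build a
// 4x super-sampled 1-D profile of the transition.
std::vector<double> superSampleEdge(const std::vector<std::vector<double>>& roi,
                                    const Line& edge)           // roi[row][col]
{
    const std::size_t rows = roi.size();
    const std::size_t cols = roi.front().size();
    const int factor = 4;

    std::vector<double> sum(factor * rows, 0.0);
    std::vector<int> count(factor * rows, 0);

    for (std::size_t c = 0; c < cols; ++c) {
        // Where the fitted edge crosses this column, and its offset from the
        // centre of the column.
        const double edgeRow = edge.slope * static_cast<double>(c) + edge.intercept;
        const double offset = edgeRow - static_cast<double>(rows) / 2.0;
        const int shift = static_cast<int>(std::lround(offset * factor));

        // Drop every pixel of the column into the corresponding quarter-pixel
        // bin of the straightened profile.
        for (std::size_t r = 0; r < rows; ++r) {
            const int bin = static_cast<int>(r) * factor - shift;
            if (bin >= 0 && bin < static_cast<int>(sum.size())) {
                sum[bin] += roi[r][c];
                count[bin] += 1;
            }
        }
    }

    // Average the accumulated samples; bins that received no samples stay zero.
    for (std::size_t i = 0; i < sum.size(); ++i) {
        if (count[i] > 0) sum[i] /= count[i];
    }
    return sum;
}
```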
[0082] In step 3, the 1D version of the image is differentiated. A simple numeric differentiation is used where the next element is subtracted from the previous element, and then divided by two to give the derivative of the current element.
[0083] In step 4, the ESF is now weighted with a Hamming window to reduce the influence of elements towards the edge of the ESF, and focus on the inner elements around the transition itself.
[0084] In step 5, the absolute value of the discrete Fourier transform is computed to yield the SFR itself.

[0085] As the visual elements are positioned relative to the plate support, and are at a fixed position relative to the image capture device (when in proper alignment), the system can easily approximate the appropriate positioning of the ROI relative to the visual elements. The coordinate positions of the visual elements in a captured image are illustrated in Figure 4. Figures 5 and 6 illustrate the coordinates and layout of the various visual elements and the positioning of the ROI on the first and second visual elements relative to the field of view of the image capture system. The dimensions illustrated in Figure 6 are correct for the furthest of the first and second visual elements. However, the positioning of the ROI and shape on the closer of the first and second visual elements are the same but scaled up by the appropriate factor as a result of the generated perspective effect. While the positioning of the visual elements relative to the image capture system is generally fixed, the actual position of the visual elements in the captured image may vary by approximately ±1.25 mm from their expected positions. Consequently, the algorithm used in measuring the resolvable detail of the first and second visual elements 219, 221 and the gross colour of the third and fourth visual elements 505, 507 includes a localisation step, to obtain an accurate position for the target.
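The localisation step is not described in detail. One straightforward way to realise it, sketched below purely for illustration, is to slide a stored template of the element over a small window around its nominal position and keep the offset with the smallest sum of squared differences. The template, the window size in pixels and the omission of bounds checking (the caller is assumed to keep the search window inside the image) are assumptions, not details from the specification.

```cpp
#include <cstddef>
#include <limits>
#include <vector>

struct Offset { int dx, dy; };

// Slide a template of a visual element over a small window centred on its
// nominal image position and return the offset giving the smallest sum of
// squared differences. searchRadius would be chosen to cover roughly
// +/- 1.25 mm expressed in pixels.
Offset localiseTarget(const std::vector<std::vector<double>>& image,
                      const std::vector<std::vector<double>>& target,
                      int nominalX, int nominalY, int searchRadius)
{
    Offset best{0, 0};
    double bestScore = std::numeric_limits<double>::max();

    for (int dy = -searchRadius; dy <= searchRadius; ++dy) {
        for (int dx = -searchRadius; dx <= searchRadius; ++dx) {
            double score = 0.0;
            for (std::size_t r = 0; r < target.size(); ++r) {
                for (std::size_t c = 0; c < target[r].size(); ++c) {
                    const std::size_t row = static_cast<std::size_t>(nominalY + dy) + r;
                    const std::size_t col = static_cast<std::size_t>(nominalX + dx) + c;
                    const double d = image[row][col] - target[r][c];
                    score += d * d;
                }
            }
            if (score < bestScore) {
                bestScore = score;
                best = Offset{dx, dy};
            }
        }
    }
    return best;
}
```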
[0086] The above steps are repeated for each of the first and second visual elements 219, 221 and the operational value of each of these visual elements is determined to test if it meets the minimum requirement. As such, an SFR curve (such as that illustrated in Figure 8) will be generated for each of the first and second visual elements, and the curve will be assessed relative to the selected minimum utilisation level relative to the desired SFR. Resultantly, when the average resolvable detail of the straight edges of each of the first and the second visual elements exceeds the minimum operational value, it can be determined that the resolvable detail in the range of distances between the closest and furthest visual elements will also be above the minimum operational value. As such any object imaged within this range will be imaged at an appropriate resolvable detail.
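A minimal sketch of this combination step is given below: the per-edge SFR curves of one visual element are averaged sample by sample (they share the same frequency sampling when every region of interest is processed identically), and the averaged curve is then tested against the minimum operational value. The index-based threshold check mirrors the earlier sketch and its example values are illustrative only.

```cpp
#include <cstddef>
#include <vector>

// Average several SFR curves (one per slanted edge of a visual element) into
// a single curve, sample by sample. Assumes at least one curve and equal
// curve lengths.
std::vector<double> averageSfrCurves(const std::vector<std::vector<double>>& curves)
{
    std::vector<double> average(curves.front().size(), 0.0);
    for (const std::vector<double>& curve : curves) {
        for (std::size_t i = 0; i < average.size(); ++i) average[i] += curve[i];
    }
    for (double& value : average) value /= static_cast<double>(curves.size());
    return average;
}

// The visual element passes when the averaged curve still holds the required
// contrast at the bin corresponding to the required spatial frequency
// (e.g. the bin for 10 line pairs per pixel, with a required contrast of 0.5).
bool elementPasses(const std::vector<double>& averagedCurve,
                   std::size_t requiredFrequencyBin,
                   double requiredContrast)
{
    return requiredFrequencyBin < averagedCurve.size()
        && averagedCurve[requiredFrequencyBin] >= requiredContrast;
}
```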
[0087] Turning now to Figure 9, there is shown a summary of a method 901 of testing operation of an electronic image capture system. The method 901 includes the steps of: providing 903 a first visual element at a first distance from an image capture device, the first visual element 219, as per the above embodiments, including a face, the face including a first colour 501 and a second colour 503, wherein the transition between the first colour and the second colour is defined by at least one straight edge; providing 905 a second visual element 221 at a second distance from the image capture device, the second visual element, as per the above embodiments, including a face, the face including the first colour 501 and the second colour 503, wherein the transition between the first colour and the second colour is defined by at least one straight edge; capturing 907 an image of the faces of both the first 219 and second 221 visual elements with the image capture device 201; measuring 909 resolvable detail of the captured image by analysing the at least one straight edge of each of the first 219 and second 221 visual elements; and determining 911 whether the resolvable detail exceeds a minimum operational value for each of the first 219 and second 221 visual elements.
[0088] It is to be understood that various alterations, additions and/or
modifications may be made to the parts previously described without departing from the ambit of the present invention, and that, in the light of the above teachings, the present invention may be implemented in a variety of manners as would be understood by the skilled person.

Claims

1 . A method of testing operation of an image capture system, the method including:
providing a first visual element including a face at a first distance from an image capture device and a second visual element including a face at a second distance from the image capture device, the face of each of the first and second visual elements including a first colour and a second colour wherein the transition between the first colour and the second colour is defined by at least one straight edge, the edge oriented such that it is between 2 degrees and 88 degrees rotated from vertical or horizontal relative to the orientation of the image capture device;
capturing an image of the faces of both the first and second visual element with the image capture device;
measuring resolvable detail of the captured image by analysing the at least one straight edge of the first and the second visual element; and
determining whether the resolvable detail exceeds a minimum operational value for each of the first and second visual elements.
2. The method of claim 1 , wherein the edge is oriented such that it is between 2 degrees and 22.5 degrees rotated from vertical or horizontal.
3. The method of claim 1 or claim 2, wherein the visual elements include at least two straight edges defining a transition between the first colour and the second colour.
4. The method of claim 3, wherein one of the at least two straight edges is between 2° and 22.5° rotated from vertical relative to the orientation of the image capture device and a second edge of the at least two straight edges is between 2° and 22.5° rotated from horizontal relative to the orientation of the image capture device.
5. The method of claim 3 or claim 4, wherein the at least two straight edges are perpendicular to each other.
6. The method of claim 1 , wherein the visual elements include at least four straight edges defining a transition between the first colour to the second colour.
7. The method of claim 6, wherein a first and second edge of the at least four straight edges is between 2 degrees and 88 degrees rotated from vertical relative to the orientation of the image capture device and a third and fourth edge of the at least four straight edges is between 2 degrees and 88 degrees rotated from horizontal relative to the orientation of the image capture device.
8. The method of claim 6, wherein a first and second edge of the at least four straight edges is between 2 degrees and 22.5 degrees rotated from vertical relative to the orientation of the image capture device and a third and fourth edge of the at least four straight edges is between 2 degrees and 22.5 degrees rotated from horizontal relative to the orientation of the image capture device.
9. The method of claim 7 or claim 8, wherein the first and second edge are parallel to each other and the third and fourth edge are parallel to each other.
10. The method of any one of claims 1 to 9, wherein the visual elements have a background having the first colour and shape provided on the background, the shape having the second colour.
1 1 . The method of claim 10, wherein the shape is a general "L" shape.
12. The method of any one of claims 1 to 1 1 , wherein the first and second colours are different shades of grey.
13. The method of any one of claims 1 to 12, wherein the first colour is Pantone 427C.
14. The method of any one of claims 1 to 13, wherein the second colour is
Pantone 425C.
15. The method of any one of claims 1 to 14, further comprising:
providing a third and fourth visual element together with the first and second visual elements, the third visual element including a third colour and the fourth visual element including a fourth colour;
measuring the colour of the third and fourth visual elements in the captured image; and
determining the gross colour response of the image capture device from the measured colour of the third and fourth visual elements in the captured image.
16. The method of claim 15, wherein the third colour is selected to permit testing of the white balance of the captured image.
17. The method of claim 15 or claim 16, wherein the third colour is a shade of grey, preferably 50% black.
18. The method of any one of claims 15 to 17, wherein the third colour is Pantone Cool Gray 7C.
19. The method of any one of claims 15 to 18, wherein the fourth colour is selected to test the colour response of the image capture device.
20. The method of any one of claims 15 to 19, wherein the fourth colour is Pantone 674C.
21 . A system for testing operation of an image capture device, the system including:
an image capture device;
a first visual element including a face at a first distance from the image capture device and a second visual element including a face at a second distance from the image capture device, the face of each of the first and second visual elements including a first colour and a second colour wherein the transition between the first colour and the second colour is defined by at least one straight edge, the edge oriented such that it is between 2 degrees and 88 degrees rotated from vertical or horizontal relative to the orientation of the image capture device; and
a processor for measuring resolvable detail of the image capture device by analysing the at least one straight edge of the first visual element and the at least one straight edge of the second visual element in a captured image and determining whether the resolvable detail exceeds a minimum operational value for each of the first and second visual elements.
22. The system of claim 21 , wherein the edge is oriented such that it is between 2 degrees and 22.5 degrees rotated from vertical or horizontal.
23. The system of claim 21 or claim 22 further including a support for supporting objects to be imaged, the support positioned at a predetermined distance from the image capture device.
24. The system of claim 23, wherein the first visual element and the second visual element are associated with the support.
25. The system of claim 23 or claim 24, wherein the first visual element and the second visual element are mounted on the support.
26. The system of any one of claims 23 to 25, wherein the distance to the face of the first visual element from the image capture device is the maximum expected distance to an object being supported.
27. The system of any one of claims 23 to 26, wherein the distance to the face of the second visual element from the image capture device is the minimum distance expected to an object on the support.
28. The system of any one of claims 21 to 27, wherein the visual elements include at least two straight edges defining a transition between the first colour to the second colour.
29. The system of claim 28, wherein one of the at least two straight edges is between 2 degrees and 88 degrees rotated from vertical relative to the orientation of the image capture device and a second edge of the at least two straight edges is between 2 degrees and 88 degrees rotated from horizontal relative to the orientation of the image capture device.
30. The system of claim 28, wherein one of the at least two straight edges is between 2 degrees and 22.5 degrees rotated from vertical relative to the orientation of the image capture device and a second edge of the at least two straight edges is between 2 degrees and 22.5 degrees rotated from horizontal relative to the
orientation of the image capture device.
31 . The system of any one of claims 28 to 30, wherein the at least two straight edges are perpendicular to each other.
32. The system of any one of claims 21 to 27, wherein the visual elements include at least four straight edges defining a transition between the first colour to the second colour.
33. The system of claim 32, wherein a first and second edge of the at least four straight edges is between 2 degrees and 88 degrees rotated from vertical relative to the orientation of the image capture device and a third and fourth edge of the at least four straight edges is between 2 degrees and 88 degrees rotated from horizontal relative to the orientation of the image capture device.
34. The system of claim 32, wherein a first and second edge of the at least four straight edges is between 2° and 22.5° rotated from vertical relative to the orientation of the image capture device and a third and fourth edge of the at least four straight edges is between 2° and 22.5° rotated from horizontal relative to the orientation of the image capture device.
35. The system of claim 33 or claim 34, wherein the first and second edge are parallel to each other and the third and fourth edge are parallel to each other.
36. The system of any one of claims 21 to 35, wherein the visual elements have a background having the first colour and shape provided on the background, the shape having the second colour.
37. The system of claim 36, wherein the shape is a general "L" shape.
38. The system of any one of claims 21 to 37, wherein the first and second colours are different shades of grey.
39. The system of any one of claims 21 to 38, wherein the first colour is Pantone 427C.
40. The system of any one of claims 22 to 39, wherein the second colour is
Pantone 425C.
41 . The system of any one of claims 21 to 40, further comprising a third and fourth visual element, the third visual element including a third colour and the fourth visual element including a fourth colour.
42. The system of claim 41 , wherein the third colour is selected to permit testing of the white balance of the captured image.
43. The system of claim 41 or claim 42, wherein the third colour is a shade of grey.
44. The system of any one of claims 41 to 43, wherein the third colour is Pantone Cool Gray 7C.
45. The system of any one of claims 41 to 44, wherein the fourth colour is selected to test colour response.
46. The system of any one of claims 41 to 45, wherein the fourth colour is Pantone 674C.