GB2606136A - Digital assessment process - Google Patents


Info

Publication number
GB2606136A
GB2606136A
Authority
GB
United Kingdom
Prior art keywords
image
features
anchor
assessment system
quality control
Prior art date
Legal status
Pending
Application number
GB2104655.2A
Other versions
GB202104655D0 (en)
Inventor
Lai Petersen Jesper
Le Blanc Robert
Current Assignee
Health First Systems Ltd
Original Assignee
Health First Systems Ltd
Priority date
Filing date
Publication date
Application filed by Health First Systems Ltd filed Critical Health First Systems Ltd
Priority claimed from GB2104655.2A
Publication of GB202104655D0
Publication of GB2606136A
Legal status: Pending

Classifications

    • G06T7/74 Determining position or orientation of objects or cameras using feature-based methods involving reference images or patches
    • G16H50/20 ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • G06T7/0012 Biomedical image inspection
    • G06T7/10 Segmentation; Edge detection
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T7/90 Determination of colour characteristics
    • A61B2576/00 Medical imaging apparatus involving image processing or analysis
    • G01N33/50 Chemical analysis of biological material, e.g. blood, urine; Testing involving biospecific ligand binding methods; Immunological testing
    • G01N33/53 Immunoassay; Biospecific binding assay; Materials therefor
    • G06T2207/10024 Color image
    • G06T2207/20061 Hough transform
    • G06T2207/20132 Image cropping
    • G06T2207/30072 Microarray; Biochip, DNA array; Well plate
    • G06T2207/30204 Marker
    • G16H30/40 ICT for processing medical images, e.g. editing

Abstract

A method of assessing the result of a medical test comprises recording a digital image of a testing device 10. The testing device includes at least one anchor feature 20 and a test area 12. The anchor features have at least one known property. The method further comprises locating at least one anchor feature on the digital image and ascertaining the location of the test area on the digital image. The location of the test area is ascertained by comparing a property of the anchor features on the image to a property of the anchor features on the testing device. The method further comprises determining whether a visual indication 18 exists on the test area and outputting a positive or negative test result depending on that determination. The test area may comprise a quality control feature 16 in addition to the visual indication. The visual indication may be a line or may be of a certain colour. The quality control feature may be a line or may be of a certain colour.

Description

DIGITAL ASSESSMENT PROCESS
The present invention relates to a digital assessment process for medical tests in which the result is indicated visually, for example by the presence or absence of a visual indication.
BACKGROUND TO THE INVENTION
Inexpensive medical diagnostic and screening tests have now been developed, in which a sample is dosed onto a substrate and causes indicia such as a visible test line to become apparent if a reagent is present. This information may be used as a diagnostic aid or screening test for a disease. A human operative is required to assess the presence or absence of the indicia to determine the result of the test.
A particular example of such a test is one used for bowel cancer screening, commonly known as a faecal immunochemical test. In one example, the test comprises applying a sample taken from water from a toilet bowl to a test area of a testing device. After processing, a line appears on the test area if globin from human haemoglobin is present in sufficient quantity in the sample. This provides an indication that blood is present in the stool which may indicate the presence or possible development of bowel cancer in the test subject. The testing device may be made of plastic or laminated cardboard and includes printed instructions for performing the test. It is thus extremely cheap to produce and is designed to be disposable.
A disadvantage of tests of this type is that they are affected by human performance factors during the assessment of the test result, i.e. a human operative may fail to accurately discern the presence or absence of a quality control line or the test line or may incorrectly record the result. In medical screening, consistently accurate results are important as false negatives and false positives can each have significant negative consequences.
The object of the invention is to provide an automatic assessment process which accurately assesses medical screening tests.
STATEMENT OF INVENTION
According to the present invention, there is provided a method of assessing the result of a medical test, the method comprising: a) recording a digital image of a testing device, including in one example a test card, the testing device including a plurality of anchor features and a test area, the anchor features having a known location relative to the test area; and b) using automated image processing means to: i. locate the anchor features on the digital image; ii. ascertain the location of the test area on the digital image from the anchor features located in step (b.i); and iii. determine whether a visual indication exists on the test area, and outputting a positive or negative test result depending on that determination.
The method allows a computer system to output a positive or negative test result based on the digital image of the testing device. Computer systems operate in a predictable and deterministic manner, and a human operative is not required to read the test result by this method, so the result of the test may be ascertained in a more reliable and accurate manner by utilising this method.
The role of human unreliability in the medical test has been largely mitigated, if not removed entirely. This enhances the accuracy and reliability of medical screening without needing to change the medical or scientific basis of the test itself.
In one example, the anchor features may be parts of a single larger anchor formation or they may be a plurality of distinct anchor features. The anchor features allow the test area in the digital image to be located by providing a spatial reference frame that is easily recognised in the digital image. The position of the test area can then be interpolated from this spatial reference frame using known information about the physical location of the test area and anchor features on the testing device, compensating for the unknown position and orientation of the testing device relative to the image capturing means. The computer system can thus reliably locate the test area on the digital image and identify whether the visual indication is present, without the test results being affected by different orientations of the testing device.
The digital image may be recorded by a camera. Cameras are readily available, for example in smartphones or as individual digital cameras, and so this allows the test to be processed at any location where the digital image can be taken, stored, or transmitted. The method of the invention is able to compensate for the fact that the testing device may be inconsistently positioned in the frame of the camera in each instance of a test.
The digital image of the testing device may be stored for later access. Storing the digital image of the testing device creates a digital record of the testing device, which can be used to back-check results later for accuracy, and allows easier record maintenance and audit processes in a laboratory, clinic, or other patient management context. The digital record could be stored on a database together with final and/or any intermediate results, and can be forwarded to a doctor or clinic if requested.
Known properties of the anchor features may include relative distances between pairs of anchor features, a ratio of sizes between a pair of anchor features and a shape of at least one anchor feature. Note that the plurality of anchor features could be points on a single larger anchor formation, for example, the corners of a triangle. In this case, 'relative distances between pairs of anchor features' is equivalent to the shape of the single larger anchor formation.
A hue filter may be applied to the digital image to exclude parts of the image outside a hue range including a known colour of the anchor features.
Applying a hue filter reduces the amount of digital data "noise" in the image by filtering out all parts of the image outside the range, leaving only parts of the image that are likely to form part of the anchor features.
It will be understood that 'exclude' here means to exclude from consideration in further steps relating to detection of the anchor features. Pixels excluded in this step may later be considered in the search for, for example, a quality control feature, which may be of a different hue to the anchor features.
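By way of illustration, such a hue filter can be sketched with the Python standard library alone. The hue window below is a hypothetical range for blue anchor features, not a value taken from this disclosure:

```python
import colorsys

# Hypothetical hue window for blue anchor features (degrees on a
# 0-360 hue wheel); the true values depend on the printed card.
HUE_MIN, HUE_MAX = 200.0, 260.0

def hue_mask(pixels):
    """Return a boolean mask keeping only pixels whose hue falls in
    the anchor-feature window. `pixels` is a list of (r, g, b) tuples
    with channel values in 0..255."""
    mask = []
    for r, g, b in pixels:
        h, s, v = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        mask.append(HUE_MIN <= h * 360.0 <= HUE_MAX)
    return mask

# A saturated blue pixel survives the filter; a red one does not.
print(hue_mask([(30, 60, 220), (220, 40, 40)]))  # → [True, False]
```

In practice the conversion and thresholding would be applied across the whole image at once, typically with a vectorised routine such as OpenCV's inRange.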
Step (b.i) may include the sub-steps of: 1. extracting features from the digital image, the features being any of: edges, contours and circles; 2. determining whether the correct anchor features have been extracted, and if not: 3. adjusting parameters of the digital image or the feature extraction algorithm and returning to step 1.
The features may be extracted from the image using a feature extraction algorithm. Examples of suitable feature extraction algorithms which may be useful in embodiments are the Canny method, Hough transformations and the Harris corner detector.
The parameter adjustment may also be based on conditions in the image, for example the average brightness of the image.
Feature extraction allows further computation to be performed automatically that may not be possible with a raw image file.
The correct anchor features may be considered to have been identified if the same number of anchor features has been identified on the image as is known to exist on the testing device, and/or when the relative distances between the anchor features identified on the image match the relative distances between the anchor features on the testing device to within a set tolerance.
Note that the relative distances between the anchor features are used both to verify that the correct anchor features have been identified, and also used to ascertain the location of the test area later. This is possible if the tolerance threshold for rejecting a candidate anchor feature on the basis of incorrect relative distances is higher than a set minimum. This minimum reflects the largest difference of the relative distances between the anchor features from their expected relative distances that is likely to be due to a slight skew of the testing device in the frame, as opposed to due to misidentification of the anchor features. It is assumed that users will capture images of the testing device in which the skew is relatively small, i.e. it more or less looks like a plan view to the user's eye, so that large deviations from the expected geometry may be assumed to be due to misidentification of the anchor features.
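The distance-ratio check described above can be sketched as follows. The expected pairwise distances and the 15% tolerance are illustrative assumptions, not values from this disclosure:

```python
import math
from itertools import combinations

# Expected pairwise distances (in card units) between the three anchor
# features on the physical testing device; values here are illustrative.
EXPECTED = [40.0, 50.0, 30.0]
TOLERANCE = 0.15  # relative deviation allowed for mild skew

def distances(points):
    return sorted(math.dist(a, b) for a, b in combinations(points, 2))

def anchors_plausible(candidates):
    """Accept candidate anchors only if the count matches and the
    pairwise distances agree with the card geometry, up to scale."""
    if len(candidates) != 3:
        return False
    got = distances(candidates)
    exp = sorted(EXPECTED)
    # Compare shapes, not absolute sizes: normalise by the longest edge.
    scale = got[-1] / exp[-1]
    return all(abs(g - e * scale) <= TOLERANCE * e * scale
               for g, e in zip(got, exp))

print(anchors_plausible([(0, 0), (80, 0), (80, 60)]))  # same shape, scaled: True
print(anchors_plausible([(0, 0), (10, 0), (20, 0)]))   # degenerate: False
```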
At least one anchor feature may include a known pattern, and determining whether the correct anchor features have been extracted may include recognising that pattern. This provides a further means of verifying that the correct anchor features have been identified.
The darkness or lightness and the colour temperature are examples of parameters that may be adjusted. Other metrics known in the field of image processing may be used in embodiments. Many feature extraction algorithms, including the ones mentioned above, are sensitive to lighting conditions, noise and other imperfections within the image. Adjusting parameters of the image allows control for timing influences (for example, differences in the amount of time allowed to elapse between dosing the testing device with a test sample and recording the image), colour variability, intensity differences and variable lighting. Without correct adjustment, the required anchor features may not be extracted; instead, erroneous candidate anchor features may be extracted.
The iterative steps of extracting, checking and adjusting allow the parameter space to be explored in a systematic manner, ensuring that sufficiently accurate results are achieved, sufficiently accurate results meaning that the candidate anchor features do correspond to the actual anchor features, and not just to noise in the image.
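The extract-check-adjust loop can be outlined as below. Here extract_anchors is a stand-in stub for a real extractor (for example Canny edge detection followed by shape matching); it succeeds only within a band of the adjusted brightness parameter, purely to exercise the control flow:

```python
def extract_anchors(image, brightness):
    # Stub: pretend the extractor finds the 3 anchors only when the
    # adjusted brightness lies in a usable band. A real implementation
    # would run edge/contour/circle detection on the adjusted image.
    return [1, 2, 3] if 20 <= brightness <= 40 else []

def find_anchors(image, expected_count=3, max_iterations=10):
    """Iteratively extract candidate anchors, check the count, and
    adjust a parameter until the extraction succeeds or gives up."""
    brightness = 0
    for _ in range(max_iterations):
        candidates = extract_anchors(image, brightness)
        if len(candidates) == expected_count:
            return candidates, brightness
        brightness += 5  # adjust a parameter and try again
    return None, brightness

anchors, b = find_anchors(image=None)
print(anchors, b)  # anchors found after a few parameter adjustments
```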
Step (b.ii) may include comparing relative sizes of the anchor features on the image to the known relative sizes of the anchor features on the testing device.
This provides an estimate of the degree of distortion of the testing device in the image due to the orientation of the testing device relative to the image recording means, so that this may be corrected for when locating the test area in step (b.ii). Again, the anchor features could form part of a larger anchor formation, in which case the relative sizes of the anchor features are equivalent to the shape of the larger anchor formation.
Relative distances on the testing device between at least three anchor features may be known, and relative distances between those anchor features on the image may be compared to the relative distances between those anchor features on the testing device. This provides another method of assessing the skew of the testing device so that it may be taken into account when locating the test area. Ratios of the relative distances between at least three anchor features on the image to the relative distances between those anchor features on the testing device may be calculated.
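As a sketch of how the anchor geometry can fix the test area's position: three anchor correspondences fully determine an affine map from card coordinates to image coordinates, which can then project the known card-space location of the test area into the image. All coordinates below are illustrative, not the real card layout:

```python
def solve_affine(card_pts, img_pts):
    """Solve x' = a*x + b*y + c (per output axis) from three point
    pairs via Cramer's rule."""
    (x0, y0), (x1, y1), (x2, y2) = card_pts
    det = x0 * (y1 - y2) - y0 * (x1 - x2) + (x1 * y2 - x2 * y1)

    def solve_axis(v0, v1, v2):
        a = (v0 * (y1 - y2) - y0 * (v1 - v2) + (v1 * y2 - v2 * y1)) / det
        b = (x0 * (v1 - v2) - v0 * (x1 - x2) + (x1 * v2 - x2 * v1)) / det
        c = (x0 * (y1 * v2 - y2 * v1) - y0 * (x1 * v2 - x2 * v1)
             + v0 * (x1 * y2 - x2 * y1)) / det
        return a, b, c

    us = [p[0] for p in img_pts]
    vs = [p[1] for p in img_pts]
    return solve_axis(*us), solve_axis(*vs)

def map_point(affine, p):
    (ax, bx, cx), (ay, by, cy) = affine
    return (ax * p[0] + bx * p[1] + cx, ay * p[0] + by * p[1] + cy)

# Card scaled 2x and shifted by (100, 50) in the image:
card = [(0, 0), (40, 0), (0, 30)]
img = [(100, 50), (180, 50), (100, 110)]
affine = solve_affine(card, img)
print(map_point(affine, (20, 15)))  # test-area centre in image coords
```

A perspective (homography) model would need a fourth correspondence; the affine sketch assumes the skew is small, consistent with the near-plan-view assumption above.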
When the test area has been located, the image may be cropped to include only the test area; that is to say, further processing may take place on only that part of the image identified as the test area. Once the anchor points have been identified and used to compensate for the positioning of the testing device relative to the image capturing means, the rest of the image is no longer required and may be removed.
A quality control feature may be present on the test area, to indicate whether the test process has been completed successfully or whether the test process has failed and is incomplete. If the control feature indicates the test process was completed properly, the test result may then be assessed. If not, the test process is invalid and no result may be determined from the test.
The method may further include the step of determining whether the quality control feature is visible, and outputting a test result of 'failed' if the quality control feature is not visible, as the quality control feature provides a means of identifying whether the test has worked correctly. If the quality control feature is not visible, the test has failed. The quality control feature also provides an additional positioning means by which the area at which the visual indication is expected to appear may be identified.
The quality control feature may be of a known predetermined colour. In this case, a hue filter may be applied for identifying only areas of the known predetermined colour on the digital image.
This reduces the amount of noise in the image by filtering out all parts of the image that are not of the known predetermined colour, leaving only parts of the image that are likely to form part of the quality control feature. Other indicia on the testing device, for example the anchor features, should be of a different hue to the quality control feature.
A feature extraction algorithm may be applied to the digital image to extract features which may represent the quality control feature, the features forming a candidate quality control feature.
The quality control feature may be a line. Using a line for the quality control feature is advantageous because a line is readily recognisable within the digital image by, for example, a line detection algorithm. A line also yields orientation data, which can be used to aid correct determination of whether the visual indication is present, as described later.
The feature extraction algorithm may be a line transformation algorithm and the extracted features may be line items.
The extremities of the smallest rectangle surrounding the extracted features may be identified. The rectangle gives an estimated outline of the candidate quality control feature.
A visual strength of the candidate quality control feature may be calculated by evaluating the percentage of the total area of the rectangle that the extracted features fill. This provides an objective numerical measure of the visual strength of the candidate quality control feature.
The quality control feature may be determined to be visible if the visual strength of the candidate quality control feature is above a predetermined threshold, and the quality control feature may be determined not to be visible if the visual strength of the candidate quality control feature is below the predetermined threshold.
The method thus provides an objective and repeatable determination of whether the test has been a success or a failure. This avoids false positives and negatives that may arise when human technicians believe that a test has succeeded when in fact it has failed.
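The visual-strength calculation and threshold decision described above can be sketched as follows; the 30% threshold is an illustrative assumption:

```python
# Illustrative threshold; a real system would calibrate this value.
STRENGTH_THRESHOLD = 0.30

def visual_strength(pixels):
    """Fraction of the smallest enclosing rectangle filled by the
    extracted feature pixels. `pixels` is a list of (x, y) tuples."""
    xs = [p[0] for p in pixels]
    ys = [p[1] for p in pixels]
    width = max(xs) - min(xs) + 1
    height = max(ys) - min(ys) + 1
    return len(set(pixels)) / (width * height)

def feature_visible(pixels):
    return visual_strength(pixels) >= STRENGTH_THRESHOLD

# A dense horizontal run fills its bounding rectangle completely,
# whereas two stray pixels spanning a large box do not.
line = [(x, 5) for x in range(10, 30)]
print(visual_strength(line), feature_visible(line))
```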
The image may then be cropped further. This reduces the number of pixels that must be considered by feature extraction algorithms, increasing the efficiency of the method, and removes noise from the image.
The extent of the cropping may be determined by the extent of the quality control feature on the image. Using the quality control feature provides an efficient way to determine the area to be cropped to, as the quality control feature was already identified earlier in the method, so no further computation is required.
If the quality control feature is a line, the line may form an edge of the test area. The orientation and position of the quality control feature can thus be used to determine the area at which the visual indication may be expected to appear. In an embodiment, the testing device may have the visual indication immediately above the quality control line, the visual indication being a line parallel with the quality control line.
The quality control feature helps to locate more precisely the area where the visual indication may appear, so that the image may be cropped to be smaller than could be done using the anchor features alone as guides, further reducing the amount of noise in the image.
The quality control feature may be cropped from the image. The visual indication (if present) is likely to be fainter or even significantly fainter than the quality control feature. This makes later processing steps less effective if the quality control feature is not removed from consideration.
The image may then be converted to greyscale. Conversion of the image to greyscale simplifies computations performed on the image, as each operation is applied to a scalar field only, where previously the operand was a vector field. The visual indication is distinguished from the background by intensity, not colour, so a greyscale image will suffice for identification of the visual indication.
The contrast of the image may then be enhanced by calculating a histogram of the greyscale image and equalising that histogram. Before this step, the image was likely to be low-contrast. Enhancing the contrast of the image improves the performance of the feature extraction algorithms.
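Histogram equalisation can be sketched in plain Python as below; a production pipeline would more likely call a library routine such as OpenCV's equalizeHist:

```python
def equalise(image, levels=256):
    """Histogram-equalise a greyscale image given as a list of rows
    of integer pixel values in 0..levels-1."""
    flat = [p for row in image for p in row]
    hist = [0] * levels
    for p in flat:
        hist[p] += 1
    # Cumulative distribution function, then remap to the full range.
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    n = len(flat)
    cdf_min = next(c for c in cdf if c > 0)

    def remap(p):
        return round((cdf[p] - cdf_min) / (n - cdf_min) * (levels - 1))

    return [[remap(p) for p in row] for row in image]

# A low-contrast strip (values clustered in 100..103) is stretched
# across the full 0..255 range.
print(equalise([[100, 101, 102, 103]]))  # → [[0, 85, 170, 255]]
```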
A background light pattern may then be subtracted from the image to enhance the signal to noise ratio. This is especially necessary if the above described contrast enhancement has been performed, as contrast enhancement may increase the strength of background noise in the image. The test area includes a large background area that is not part of the expected visual indication. This background area is monochrome. The background area will reflect and scatter light into the image recording device, that light not being part of the signal the method endeavours to detect. Removing this light pattern expedites detection of the signal.
The background light pattern may be obtained by applying a Gaussian blur to the image. The expected visual indication is a highly spatially localised feature, being a narrow strip of the test area. It is therefore represented chiefly in high spatial frequency components in the Fourier transformation of the image. The background area is large in extent and substantially uniform and is therefore represented chiefly in low spatial frequencies. Convolving a Gaussian kernel with the image applies a simple multiplicative Gaussian filter in Fourier space, removing high spatial frequency components and leaving the low spatial frequency components. This results in an image of the background. Subtraction of the resulting image from the unfiltered image therefore partially isolates the visual indication if present. This significantly improves the line recognition process.
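The blur-and-subtract idea is easiest to see in one dimension: a Gaussian blur estimates the slowly varying background, and subtracting that estimate isolates the narrow line. The sigma and kernel radius below are illustrative assumptions:

```python
import math

def gaussian_kernel(sigma, radius):
    k = [math.exp(-(i * i) / (2 * sigma * sigma))
         for i in range(-radius, radius + 1)]
    s = sum(k)
    return [v / s for v in k]  # normalise so weights sum to 1

def blur(signal, sigma=3.0, radius=9):
    kernel = gaussian_kernel(sigma, radius)
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, w in enumerate(kernel):
            idx = min(max(i + j - radius, 0), len(signal) - 1)  # clamp edges
            acc += w * signal[idx]
        out.append(acc)
    return out

def subtract_background(signal):
    bg = blur(signal)  # low-frequency background estimate
    return [v - b for v, b in zip(signal, bg)]

# A flat background of 100 with a narrow dip (the faint line) at index 20.
profile = [100.0] * 40
profile[20] = 80.0
residual = subtract_background(profile)
print(min(residual))  # the line stands out strongly after subtraction
```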
Presence of the visual indication may then be identified by applying a feature extraction algorithm to the image to extract features which may represent the visual indication, the features forming a candidate visual indication. This converts the data in the image into a form well suited to further computation.
The visual indication may be a line. A line is easily recognised by feature extraction algorithms and provides other advantages which will become apparent below.
The feature extraction algorithm may be a Hough line transformation algorithm and the extracted features may be line items. This efficiently provides accurate results when the visual indication is a line.
If the quality control feature is or includes a line (the quality control line), the visual indication line may be parallel or orthogonal to the quality control line. This allows the orientation of the quality control line, which is readily ascertained by virtue of its being a line, to be used as a guide and a check in seeking the visual indication line as explained below.
Line items that are not parallel to the quality control feature line may be removed. This provides a convenient method of distinguishing signal from noise, signal meaning features that truly represent the visual indication line. This is necessary because the noise to signal ratio of the area of the image under consideration is higher than it was in the step of identifying the quality control feature because the visual indication line is likely to be much fainter than the quality control line. Some incidental groupings of pixels may therefore be recognised as lines. However, these lines will be aligned at random. Signal lines therefore have a greater probability of being parallel to the quality control line. This step therefore removes a higher proportion of noise lines than signal lines, decreasing the noise to signal ratio.
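The parallelism filter can be sketched as follows. Line items are represented as endpoint pairs, and the 10 degree tolerance is an illustrative assumption:

```python
import math

ANGLE_TOLERANCE = math.radians(10)  # illustrative tolerance

def angle(line):
    (x1, y1), (x2, y2) = line
    return math.atan2(y2 - y1, x2 - x1) % math.pi  # direction, ignoring sense

def keep_parallel(candidates, qc_line):
    """Keep only line items roughly parallel to the quality control line."""
    qc = angle(qc_line)

    def close(a, b):
        d = abs(a - b)
        return min(d, math.pi - d) <= ANGLE_TOLERANCE  # wrap at 180 degrees

    return [ln for ln in candidates if close(angle(ln), qc)]

qc = ((0, 0), (100, 0))            # horizontal quality control line
lines = [((5, 12), (60, 13)),      # near-horizontal: likely signal
         ((10, 10), (12, 50))]     # near-vertical: noise
print(keep_parallel(lines, qc))
```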
The extremities of the smallest rectangle surrounding the extracted features may then be identified. The rectangle gives an estimated outline of the candidate visual indication.
A visual strength of the candidate visual indication may then be calculated by evaluating the percentage of the total area of the rectangle that the extracted features fill. This provides a numerical measure of the visual strength of the candidate visual indication.
The visual indication may be determined to exist if the visual strength of the candidate visual indication is above a predetermined threshold, and determined not to exist if the visual strength of the candidate visual indication is below the predetermined threshold.
The method thus provides an objective and repeatable determination of whether the test has had a positive or a negative result. This avoids false positives and negatives that may arise when human technicians believe that a test has had a positive result when in fact the result has been negative or vice versa, or just wrongly record results, for example transposing results for two different tests carried out in the same laboratory.
At least one anchor feature may be characteristic to the medical test to be performed by the testing device, and the method may further include the step of recognising that anchor feature and identifying the medical test to be performed by the testing device. This allows the same system to be applied to multiple medical tests without risk of confusion.
According to a second aspect of the invention, there is provided an assessment system for assessing the result of a medical test on a testing device, the system comprising: digital image recording means for recording an image of the testing device; automated image processing means adapted to locate a plurality of anchor features on the digital image, ascertain the location of a test area on the digital image from the anchor features, and determine whether a visual indication exists on the test area; and means adapted to output a positive or negative test result depending on the determination of whether a visual indication exists on the test area.
Preferable and/or optional features of the assessment system are set out in claims 46 to 88.
BRIEF DESCRIPTION OF THE DRAWINGS
For a better understanding of the present invention, and to show more clearly how it may be carried into effect, reference will now be made by way of example only to the accompanying drawings, in which: Figure 1 shows a schematic of a testing device; Figure 2 shows a digital image of the testing device with crop outlines 30 superimposed; and Figure 3 shows a partially processed and cropped digital image of the testing device.
DESCRIPTION OF PREFERRED EMBODIMENTS
Referring firstly to Figure 1, a testing device is indicated generally at 10. The testing device is a flat section of laminated card. It could also be produced from plastic or any other inexpensive waterproof material. In this embodiment, the test is a bowel cancer screening test, but the invention could equally well be applied to any medical test in which a visual indication is used. To perform the test, a subject uses a long-handled paintbrush to apply water from a toilet bowl to the test card after using the toilet. If there is a concentration of blood in the stool of the test subject, a haemoglobin sensing test strip is activated, producing a visual indication of the presence of haemoglobin. A quality control line appears on the test card to indicate whether the test assessment process has been completed correctly. The test line to indicate the presence of globin in the sample is located parallel to and close to the quality control line. It is likely to be significantly fainter than the quality control line. In some embodiments the haemoglobin test line and quality control line are present on a test element which is made of a different material to the rest of the test card. The test element may even be movable relative to the rest of the test card. The result of the test is ascertained by visually examining the test card for the presence of the visual indication. Presence of blood in the stool may indicate the presence of or the potential development of bowel cancer and thus require further testing.
The invention provides a method of automatically assessing whether a visual indication 18 is present on a test area 12 of the testing device 10. In this embodiment, the test area 12 is a haemoglobin screening test area 12. The invented method may be implemented in computer software. A high-resolution photograph of the testing device 10 and test area 12 is taken. The photograph may be taken with a camera, for example, a smartphone or stand-alone digital camera. The photograph is stored in digital format. The system of the invention then processes this digital image through a set of steps to achieve a reliable analysis of the bowel cancer screening test. Processing of the digital image may occur locally, i.e. at the location of the test, for example on a laptop computer receiving data from the smartphone or camera used to record the image, or it may occur remotely, for example by uploading the image to a server and performing the processing on the server.
The initial processing step is to identify the test card 10 and establish a reference frame in the image. The method identifies a set of known anchor features 20 that are characteristic to the bowel cancer screening testing device 10. These anchor features 20 are geometric shapes that occur on the testing device with known properties including position, size, relative distance, colour and pattern. In this case, they are single-digit numbers enclosed in circles, in specific locations at specific distances relative to each other and to the position of the test area 12. There are three anchor features, and they are blue.
To reduce noise in the digital image, all parts of the image outside a colour range including that colour are removed. This is done by converting the digital representation of the image, which is typically in the RGB (Red, Green, Blue) colour space, to the HSV (Hue, Saturation, Value) colour space, then applying a hue threshold filter to retain only a hue range containing the expected colour.
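The colour-space conversion and hue threshold described above can be sketched as follows. This is an illustrative, standard-library-only sketch working pixel by pixel on a list of RGB tuples; a production implementation would more likely use a vectorised image library, and the function name, 0-1 hue scale and threshold values are assumptions rather than part of the invention:

```python
import colorsys

def hue_mask(pixels, hue_lo, hue_hi):
    # Keep only pixels whose hue falls inside [hue_lo, hue_hi] (hue on a
    # 0-1 scale); everything else is zeroed out, removing off-colour noise.
    out = []
    for r, g, b in pixels:
        h, _, _ = colorsys.rgb_to_hsv(r / 255.0, g / 255.0, b / 255.0)
        out.append((r, g, b) if hue_lo <= h <= hue_hi else (0, 0, 0))
    return out
```

For the blue anchor features, a hue window around 0.67 (blue in the 0-1 HSV hue scale) would pass the anchors and suppress red or neutral regions.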
The method then uses at least one feature extraction algorithm to extract features from the digital image such as edges, contours and circles. The algorithm may be any of the Canny method, Hough Transformations and the Harris corner detector. Several algorithms may be used sequentially or together.
These algorithms are sensitive to lighting conditions, noise and other imperfections within the source image. Parameters that affect the operation of the algorithms are adjusted based on conditions of the given image.
If the expected anchor features are not found on a first pass, input parameters that define each algorithm's sensitivity are adjusted, properties of the image are adjusted, and the feature extraction algorithms are reapplied. This is repeated until a set of features that uniquely matches the characteristics of the expected anchor features 20 is identified. This is considered the case when both the expected number of anchor features 20 has been identified and the identified anchor features 1, 2, 3 are substantially of the expected position, size and relative distance to each other.
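The apply-check-adjust loop can be sketched as follows. The detector itself is abstracted away as a callable, and the function name, the idea of a descending list of sensitivity settings and the settings values themselves are illustrative assumptions:

```python
def find_anchors(image, detect, expected_count, settings=(0.9, 0.7, 0.5, 0.3)):
    # Re-run the feature detector with progressively relaxed sensitivity
    # settings until exactly the expected number of anchor features is
    # returned; give up if no setting produces a unique match.
    for s in settings:
        features = detect(image, s)
        if len(features) == expected_count:
            return features
    return None
```

In practice the loop would also check positions, sizes and relative distances of the candidates before accepting them, as the text describes.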
Once the anchor features 20 are identified, the next step is to locate the test area 12 in the digital image. The test area is located by using the relative position and dimension detail extracted from the identified anchor features 20 on the source image, and the known characteristics of the anchor features 20 on the physical testing device 10.
Ratios of the average size of the anchor features 20 and their relative distances from one another as identified on the digital image to their actual relative sizes and distances on the testing device 10 are calculated. This allows for correction of perspectival skew in the image arising from the orientation of the testing device in three dimensions relative to the image capturing means. For example, anchor feature 2 may appear smaller than anchor feature 1 in the image when they are the same size on the testing device due to perspective.
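The distance-ratio calculation can be sketched as follows; the feature points are assumed to be given as (x, y) centres in matching order, and the function name is illustrative:

```python
import math

def scale_ratios(img_pts, card_pts):
    # Ratio of each pairwise anchor distance in the image to the same
    # distance on the physical card. Equal ratios indicate a uniform
    # scale; unequal ratios indicate perspective skew to correct for.
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])
    n = len(img_pts)
    return [dist(img_pts[i], img_pts[j]) / dist(card_pts[i], card_pts[j])
            for i in range(n) for j in range(i + 1, n)]
```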
The position of the testing area 12 in the image can then be ascertained based on knowledge of its position on the testing device relative to the anchor features 20. For example, the test area may be known to be located halfway between anchor features 1 and 2 on the physical test card, which after perspective correction may be known to correspond to one third of the way between features 1 and 2 in the image.
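Once a corrected fraction along the line between two anchors is known, locating the test area reduces to a simple interpolation between the anchor centres in the image. A minimal sketch, with an illustrative function name:

```python
def interp_point(anchor_a, anchor_b, frac):
    # Point lying `frac` of the way from anchor_a to anchor_b in image
    # coordinates, e.g. frac = 1/3 for the perspective-corrected example.
    return tuple(a + frac * (b - a) for a, b in zip(anchor_a, anchor_b))
```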
Note that the relative distances between the anchor features 20 are used both to verify that the correct anchor features 20 have been identified and to ascertain the location of the test area later. This is possible if the tolerance threshold for rejecting a candidate anchor feature 20 is higher than a set minimum. This minimum reflects the largest deviation of the relative distances between the anchor features from their expected values that is likely to be due to a slight skew of the testing device in the frame, as opposed to misidentification of the anchor features 20. It is assumed that users will capture images of the testing device in which the skew is relatively small, so that large deviations from the expected geometry may be assumed to be due to misidentification of the anchor features 20.
Once the test area 12 has been identified, the image is cropped to the test area.
In Figures 1 and 2, the larger rectangle 12 indicates the extent of this cropping.
The next step is to assess whether a quality control line 16 is visible on the test area 12. This is done by converting the digital image to the HSV colour space and applying a hue threshold in the range of an anticipated colour of the quality control line 16. This colour is different to the colour of the anchor features. The colour of the anchor features and other elements of the testing device 10 should be chosen to distinguish the quality control line 16 from the rest of the testing device 10 and from anticipated surroundings of the testing device 10 that may appear in the digital image. In this embodiment, it is red.
Following this, a Hough line transformation algorithm is applied to the cropped and hue filtered digital image. This extracts a number of individual line items from interconnected pixels that have passed the colour threshold filter.
The set of line items is then iterated over and the four extremities of the virtual rectangle surrounding these points (i.e. corner points 1: (X(min), Y(min)); 2: (X(min), Y(max)); 3: (X(max), Y(max)); 4: (X(max), Y(min))) are identified. This is a candidate quality control line.
The visual strength of the candidate quality control line is then calculated by assessing the percentage of the total area of the virtual rectangle that the line items fill. If the visual strength exceeds a minimum threshold, this will trigger recognition of the quality control line 16.
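The visual-strength calculation, used here for the quality control line and again later for the candidate visual indication, can be sketched as follows; the detected line items are assumed to be reduced to a set of (x, y) pixel coordinates, and the function name is illustrative:

```python
def visual_strength(points):
    # Fraction of the bounding rectangle of the detected line pixels that
    # those pixels actually fill; a solid, well-formed line scores close
    # to 1.0, while scattered noise scores much lower.
    xs = [x for x, _ in points]
    ys = [y for _, y in points]
    width = max(xs) - min(xs) + 1
    height = max(ys) - min(ys) + 1
    return len(points) / (width * height)
```

The acceptance threshold applied to this value would be tuned empirically for the test card and camera conditions.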
The final step is the identification of whether a visual indication line 18 exists on the test area 12. The first part of this step is to further crop the part of the image being analysed to a smaller image, cropped vertically based on the X(min) and X(max) values identified previously. The extent of this cropping is indicated on Figure 1 by a dashed line 22, and on Figure 2 by the smaller rectangle 22. The cropped area is shown in Figure 3. The quality control line 16 is so located on the testing device 10 as to form an edge of a rectangle containing the expected location of the visual indication. This rectangle is indicated in a dashed line in figure 1. The image is cropped to this rectangle to exclude the quality control line.
The result is an image of the area of the test area 12 where the visual indication line 18 would appear. The purpose is to remove any remaining parts of the image that do not form part of the test area 12. Any noise on the image outside the test area 12 could impact the correctness and reliability of the discernment of the visual indication line 18.
Once cropped, the image is converted to the greyscale spectrum. The image at this point has a very low range of contrast, which can complicate reliable identification of a visual indication, as this may be faint (much more so than the quality control line). In Figure 2, the visual indication 18 is significantly fainter than the quality control line 16. To enhance the contrast, the histogram of the grayscale range of the image is calculated and equalised to obtain a histogram with uniform distribution of values.
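Histogram equalisation of the greyscale crop can be sketched as follows, on an image given as a flat list of 8-bit values; this is the classic cumulative-histogram mapping, and the function name is illustrative:

```python
def equalise(gray):
    # Map each grey value through the normalised cumulative histogram so
    # that a narrow band of input values is stretched across 0-255,
    # making a faint line far easier to separate from the background.
    hist = [0] * 256
    for v in gray:
        hist[v] += 1
    cdf, total = [], 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    scale = 255 / (len(gray) - cdf_min) if len(gray) > cdf_min else 0
    return [round((cdf[v] - cdf_min) * scale) for v in gray]
```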
A potential side effect of histogram equalisation is increased background noise. A further aspect of the invention is to compensate for this by making use of the fact that the background of the test area 12 is generally monochrome. An average light pattern of the cropped test area is calculated by applying a Gaussian blur and subtracting that light pattern from the grayscale image. The kernel of the Gaussian blur should be large to obtain a smooth representation of the background. The signal ratio of any slightly darker feature that may appear on the image is thus enhanced.
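The background-subtraction step can be sketched in one dimension as follows. For simplicity a box blur stands in here for the large-kernel Gaussian blur described above; for a broadly monochrome background and a wide window the effect is comparable, and the function name and default radius are illustrative:

```python
def subtract_background(row, radius=3):
    # Estimate the slowly varying background of one greyscale scanline
    # with a wide moving average, then subtract it so that faint dark
    # features become clear negative excursions around zero.
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(row[i] - sum(row[lo:hi]) / (hi - lo))
    return out
```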
The last part of this step is the recognition of the visual indication line 18 using feature extraction algorithms. In this embodiment, line transformation algorithms are used to extract a set of line items. As there is more noise in the image, some incidental groupings of pixels may be recognised as lines. The visual indication line, if present, is known to be orientated on the testing device parallel to the quality control line. Line items arising from incidental groupings of pixels will be orientated at random. Line items that are not parallel to the quality control line are removed. This increases the signal to noise ratio of the image. In the case that the quality control line and the visual indication line are present on a test element that is movable relative to the rest of the test card, this provides an important means of verifying that a line may be part of the visual indication line. In this case the visual indication line may move slightly relative to the anchor points but is in a fixed orientation relative to the quality control line.
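The orientation filter described above can be sketched as follows; line items are assumed to be given as endpoint pairs, angles are compared modulo 180 degrees since line segments have no direction, and the function name and tolerance are illustrative:

```python
import math

def parallel_to(lines, qc_angle_deg, tol_deg=10.0):
    # Keep only line segments roughly parallel to the quality control
    # line; randomly orientated noise segments are discarded.
    def angle(seg):
        (x1, y1), (x2, y2) = seg
        return math.degrees(math.atan2(y2 - y1, x2 - x1)) % 180.0
    def diff(a, b):
        d = abs(a - b) % 180.0
        return min(d, 180.0 - d)
    return [seg for seg in lines if diff(angle(seg), qc_angle_deg) <= tol_deg]
```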
The set of line items is then iterated over and the four extremities of the virtual rectangle surrounding these points (i.e. corner points 1: (X(min), Y(min)); 2: (X(min), Y(max)); 3: (X(max), Y(max)); 4: (X(max), Y(min))) are identified. This is a candidate visual indication.
Finally, based on a threshold of the percentage of the virtual rectangle taken up by any identified lines, a result of either 'negative' or 'positive' is output as an indication of the presence or absence of a visual indication line 18 on the testing device 10. This information is captured and replaces the need for a manual interpretation by a lab technician.
These embodiments are provided by way of example only, and various changes and modifications will be apparent to persons skilled in the art without departing from the scope of the present invention as defined by the appended claims.

Claims (88)

  1. A method of assessing the result of a medical test, the method comprising: a. recording a digital image of a testing device, the testing device including a plurality of anchor features and a test area, the anchor features having a known location relative to the test area; b. using automated image processing means to i. locate the anchor features on the digital image; ii. ascertain the location of the test area on the digital image from the anchor features located in step (b.i); and iii. determine whether a visual indication exists on the test area, and outputting a positive or negative test result depending on that determination.
  2. 2. A method as claimed in claim 1, in which the anchor features are part of a single larger anchor formation.
  3. 3. A method as claimed in claim 1, in which there is a plurality of distinct anchor features.
  4. 4. A method as claimed in any preceding claim, in which the digital image is recorded by a camera.
  5. 5. A method as claimed in any preceding claim, in which the digital image is stored for later access.
  6. 6. A method as claimed in claim 1, in which one of a relative distance between at least two pairs of anchor features, a ratio of sizes between a pair of anchor features or a shape of the or each anchor feature is known.
  7. 7. A method as claimed in any preceding claim, in which a colour of at least one anchor feature is known, and step (b) includes applying a hue filter to exclude parts of the image outside a hue range including that colour.
  8. 8. A method as claimed in any preceding claim, in which step (b.i) includes the sub-steps of: 1. extracting features from the digital image, the features being any of: edges; contours and circles; 2. determining whether the correct anchor features have been extracted, and if not: 3. adjusting parameters of the digital image or the feature extraction algorithm based on conditions of the image and returning to step 2.
  9. 9. A method as claimed in claim 8, in which the features are extracted from the image using a feature extraction algorithm.
  10. 10. A method as claimed in claim 9, in which the feature extraction algorithm used is at least one of: the Canny method, Hough transformations and the Harris corner detector.
  11. 11. A method as claimed in any of claims 8 to 10, in which the correct anchor features are considered to have been identified if the same number of anchor features has been identified on the image as is known to exist on the testing device.
  12. 12. A method as claimed in any of claims 8 to 11, in which the correct anchor features are considered to have been identified if the relative distances between the anchor features identified on the image match the relative distances between the anchor features on the testing device to within a set tolerance.
  13. A method as claimed in any of claims 8 to 12, in which at least one anchor feature includes a known pattern, and determining whether the correct anchor features have been extracted includes recognising that pattern.
  14. 14. A method as claimed in any preceding claim, in which relative sizes of the anchor features on the image are compared with known relative sizes of the anchor features on the testing device for estimating and correcting for distortion of the testing device in the image.
  15. A method as claimed in any preceding claim, in which a ratio of a relative distance between a pair of anchor features on the image to the relative distance between that pair of anchor features on the testing device is calculated.
  16. 16. A method as claimed in any preceding claim, in which the image is cropped to include only the test area.
  17. 17. A method as claimed in any preceding claim, in which a quality control feature is present on the test area.
  18. 18. A method as claimed in claim 17, including the step of determining whether the quality control feature is visible.
  19. A method as claimed in claim 18, in which a test result of 'failed' is output if the quality control feature is not visible.
  20. 20. A method as claimed in any of claims 17 to 19, in which the quality control feature is of a known predetermined colour and a hue filter is applied for identifying only areas of the known predetermined colour on the digital image.
  21. 21. A method as claimed in any of claims 17 to 20, in which a feature extraction algorithm is applied to the digital image to extract features which form a candidate quality control feature.
  22. 22. A method as claimed in any of claims 17 to 21, in which the quality control feature is a line.
  23. 23. A method as claimed in any of claims 17 to 22 in which the feature extraction algorithm is a line transformation algorithm and the extracted features are line items.
  24. 24. A method as claimed in any of claims 21 to 23, in which the extremities of the smallest rectangle surrounding the extracted features are identified.
  25. 25. A method as claimed in claim 24, in which a visual strength of the candidate quality control feature is calculated by evaluating the percentage of the total area of the rectangle that the extracted features fill.
  26. 26. A method as claimed in claim 25, in which the quality control feature is determined to be visible if the visual strength of the candidate quality control feature is above a predetermined threshold, and the quality control feature is determined not to be visible if the visual strength of the candidate quality control feature is below the predetermined threshold.
  27. 27. A method as claimed in any preceding claim in which the image is cropped.
  28. 28. A method as claimed in claim 27, in which the extent of the cropping is determined by the extent of the quality control feature on the image.
  29. 29. A method as claimed in claim 27 or claim 28, in which the image is cropped to include only the test area.
  30. 30. A method as claimed in any of claims 27 to 29, in which the quality control feature is cropped from the image.
  31. 31. A method as claimed in any preceding claim in which at least a part of the image is converted to grayscale.
  32. 32. A method as claimed in any preceding claim in which the contrast of at least part of the image is enhanced.
  33. 33. A method as claimed in claim 32 in which the contrast is enhanced by equalising a histogram of the image.
  34. 34. A method as claimed in any preceding claim, in which a background light pattern is subtracted from at least part of the image to enhance the signal to noise ratio.
  35. 35. A method as claimed in claim 34, in which the background light pattern is obtained by applying a Gaussian blur to at least part of the image.
  36. A method as claimed in any preceding claim, in which a feature extraction algorithm is applied to the image and features are extracted to determine the presence of the visual indication.
  37. 37. A method as claimed in any preceding claim, in which the visual indication is a line.
  38. 38. A method as claimed in claim 37 when dependent upon claim 36, in which the feature extraction algorithm is a Hough line transformation algorithm and the extracted features are line items.
  39. 39. A method as claimed in claim 37 or claim 38 when dependent upon claim 27, in which the visual indication line is parallel or orthogonal to the quality control feature line.
  40. 40. A method as claimed in claim 39, in which line items that are not parallel to the quality control line are removed.
  41. 41. A method as claimed in any of claims 36 to 40, in which the extremities of the smallest rectangle surrounding the extracted features are identified.
  42. A method as claimed in claim 41, in which a visual strength of a candidate visual indication is calculated by evaluating the percentage of the total area of the rectangle that the extracted features fill.
  43. 43. A method as claimed in claim 42, in which the visual indication is determined to exist if the visual strength of the candidate visual indication is above a predetermined threshold, and determined not to exist if the visual strength of the candidate visual indication is below the predetermined threshold.
  44. A method as claimed in any preceding claim, in which at least one anchor feature is characteristic to the medical test to be performed by the testing device, further including the step of recognising that anchor feature and identifying the medical test to be performed by the testing device.
  45. An assessment system for assessing the result of a medical test on a testing device, the system comprising: digital image recording means for recording an image of the testing device, automated image processing means adapted to locate a plurality of anchor features on the digital image; ascertain the location of a test area on the digital image from the anchor features; and determine whether a visual indication exists on the test area, and means for outputting a positive or negative test result depending on the determination of whether a visual indication exists on the test area.
  46. 46. An assessment system as claimed in claim 45, in which the anchor features are part of a single larger anchor formation.
  47. 47. An assessment system as claimed in claim 45, in which there is a plurality of distinct anchor features.
  48. 48. An assessment system as claimed in any of claims 45 to 47, in which the digital image recording means is a camera.
  49. An assessment system as claimed in any of claims 45 to 48, also including storage means adapted to store the digital image for later access.
  50. An assessment system as claimed in claim 45, in which one of a relative distance between at least two pairs of anchor features, a ratio of sizes between a pair of anchor features or a shape of the or each anchor feature is known.
  51. An assessment system as claimed in any of claims 45 to 50, in which a colour of at least one anchor feature is known, and the image processing means is further adapted to apply a hue filter to exclude parts of the image outside a hue range including that colour.
  52. An assessment system as claimed in any of claims 45 to 51, in which the image processing means is further adapted to: 1. extract features from the digital image, the features being any of: edges; contours and circles; 2. determine whether the correct anchor features have been extracted, and if not: 3. adjust parameters of the digital image or the feature extraction algorithm based on conditions of the image and return to step 2.
  53. 53. An assessment system as claimed in claim 52, in which the image processing means includes a feature extraction algorithm for extracting features from the digital image.
  54. An assessment system as claimed in claim 53, in which the feature extraction algorithm is at least one of: the Canny method, Hough transformations and the Harris corner detector.
  55. An assessment system as claimed in any of claims 52 to 54, in which the correct anchor features are considered to have been identified if the same number of anchor features has been identified on the image as is known to exist on the testing device.
  56. An assessment system as claimed in any of claims 52 to 55, in which the means for determining whether the correct anchor features have been extracted makes that determination positively if the relative distances between the anchor features identified on the image match the relative distances between the anchor features on the testing device to within a set tolerance and negatively if not.
  57. 57. An assessment system as claimed in any of claims 52 to 56, further adapted to recognise a known pattern of at least one anchor feature for helping to determine whether the correct anchor features have been extracted.
  58. 58. An assessment system as claimed in any of claims 45 to 57, in which the means for determining whether a visual indication exists on the test area is further adapted to compare relative sizes of the anchor features on the image with known relative sizes of the anchor features on the testing device for estimating and correcting for distortion of the testing device in the image.
  59. 59. An assessment system as claimed in any of claims 45 to 58, in which the means for determining whether a visual indication exists on the test area is further adapted to calculate a ratio of a relative distance between a pair of anchor features on the image to the relative distance between that pair of anchor features on the testing device.
  60. An assessment system as claimed in any of claims 45 to 59, in which the means for determining whether a visual indication exists on the test area is further adapted to crop the image to include only the test area.
  61. 61. An assessment system as claimed in any of claims 45 to 60, in which a quality control feature is present on the test area.
  62. 62. An assessment system as claimed in claim 61, which is adapted to determine whether the quality control feature is visible.
  63. 63. An assessment system as claimed in claim 62, in which a test result of 'failed' is output if the quality control feature is not visible.
  64. 64. An assessment system as claimed in any of claims 61 to 63, in which the assessment system is adapted to apply a hue filter for identifying only areas of a known predetermined colour of the quality control feature on the digital image.
  65. 65. An assessment system as claimed in any of claims 61 to 64, which is further adapted to apply a feature extraction algorithm to the digital image to extract features which form a candidate quality control feature.
  66. 66. An assessment system as claimed in any of claims 61 to 65, in which the quality control feature is a line.
  67. An assessment system as claimed in any of claims 61 to 66, in which the feature extraction algorithm is a line transformation algorithm and the extracted features are line items.
  68. 68. An assessment system as claimed in any of claims 61 to 67, further adapted to identify the extremities of the smallest rectangle surrounding the extracted features.
  69. 69. An assessment system as claimed in claim 68, further adapted to calculate a visual strength of the candidate quality control feature by evaluating the percentage of the total area of the rectangle that the extracted features fill.
  70. 70. An assessment system as claimed in claim 69, in which the quality control feature is determined to be visible if the visual strength of the candidate quality control feature is above a predetermined threshold, and the quality control feature is determined not to be visible if the visual strength of the candidate quality control feature is below the predetermined threshold.
  71. 71. An assessment system as claimed in any of claims 45 to 70, further adapted to crop the image.
  72. An assessment system as claimed in claim 71, in which the extent of the cropping is determined by the extent of the quality control feature on the image.
  73. 73. An assessment system as claimed in claim 71 or claim 72, in which the image is cropped to include only the test area.
  74. 74. An assessment system as claimed in any of claims 71 to 73, in which the quality control feature is cropped from the image.
  75. 75. An assessment system as claimed in any of claims 45 to 74, further adapted to convert at least a part of the image to grayscale.
  76. 76. An assessment system as claimed in any of claims 45 to 75, further adapted to enhance the contrast of at least part of the image.
  77. 77. An assessment system as claimed in claim 76 in which the contrast is enhanced by equalising a histogram of the image.
  78. 78. An assessment system as claimed in any of claims 45 to 77, further adapted to subtract a background light pattern from at least part of the image to enhance the signal to noise ratio.
  79. 79. An assessment system as claimed in claim 78, in which the background light pattern is obtained by applying a Gaussian blur to at least part of the image.
  80. 80. An assessment system as claimed in any of claims 45 to 79, further adapted to apply a feature extraction algorithm to the image to determine the presence of the visual indication.
  81. 81. An assessment system as claimed in any of claims 45 to 80, in which the visual indication is a line.
  82. 82. An assessment system as claimed in claim 81 when dependent upon claim 80, in which the feature extraction algorithm is a Hough line transformation algorithm and the extracted features are line items.
  83. 83. An assessment system as claimed in claim 81 or claim 82 when dependent upon claim 71, in which the visual indication line is parallel or orthogonal to the quality control feature line.
  84. 84. An assessment system as claimed in claim 83, further adapted to remove line items that are not parallel to the quality control line.
  85. An assessment system as claimed in any of claims 80 to 84, further adapted to identify the extremities of the smallest rectangle surrounding the extracted features.
  86. An assessment system as claimed in claim 85, further adapted to calculate a visual strength of a candidate visual indication by evaluating the percentage of the total area of the rectangle that the extracted features fill.
  87. An assessment system as claimed in claim 86, further adapted to determine the visual indication to exist if the visual strength of the candidate visual indication is above a predetermined threshold, and to determine it not to exist if the visual strength of the candidate visual indication is below the predetermined threshold.
  88. An assessment system as claimed in any of claims 45 to 87, in which at least one anchor feature is characteristic to the medical test to be performed by the testing device, further including means of recognising that anchor feature and identifying the medical test to be performed by the testing device.
GB2104655.2A 2021-03-31 2021-03-31 Digital assessment process Pending GB2606136A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB2104655.2A GB2606136A (en) 2021-03-31 2021-03-31 Digital assessment process


Publications (2)

Publication Number Publication Date
GB202104655D0 GB202104655D0 (en) 2021-05-12
GB2606136A true GB2606136A (en) 2022-11-02

Family

ID=75783649


Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190384890A1 (en) * 2018-06-15 2019-12-19 Reliant Immune Diagnostics, Inc. System and method for digital remote primary, secondary, and tertiary color calibration via smart device in analysis of medical test results
WO2021092595A1 (en) * 2019-11-07 2021-05-14 Essenlix Corporation Improvements of lateral flow assay and vertical flow assay


