WO2023034441A1 - Imaging test strips - Google Patents

Imaging test strips

Info

Publication number
WO2023034441A1
Authority
WO
WIPO (PCT)
Prior art keywords
smartphone
candidate
analyte
identifier
strength
Prior art date
Application number
PCT/US2022/042243
Other languages
French (fr)
Inventor
Mayank Kumar
Kevin J. Miller
Russell Joseph CONWAY
Jeffrey Douglas CONWAY
Thomas QUARRE
Steven Scherf
Siddarth Satish
Keng-Tsai LIN
Original Assignee
Exa Health, Inc.
Priority date
Filing date
Publication date
Application filed by Exa Health, Inc.
Publication of WO2023034441A1


Classifications

    • G16H 10/40 — ICT specially adapted for the handling or processing of patient-related medical or healthcare data for data related to laboratory analysis, e.g. patient specimen analysis
    • G16H 30/40 — ICT specially adapted for the handling or processing of medical images for processing medical images, e.g. editing
    • G16H 40/40 — ICT specially adapted for the management of medical equipment or devices, e.g. scheduling maintenance or upgrades
    • G16H 40/63 — ICT specially adapted for the operation of medical equipment or devices for local operation
    • G16H 50/20 — ICT specially adapted for computer-aided diagnosis, e.g. based on medical expert systems
    • G16H 50/70 — ICT specially adapted for mining of medical data, e.g. analysing previous cases of other patients
    • G06N 20/10 — Machine learning using kernel methods, e.g. support vector machines [SVM]
    • G06N 3/02, G06N 3/08 — Computing arrangements based on biological models; neural networks; learning methods
    • Y02A 90/10 — Information and communication technologies [ICT] supporting adaptation to climate change, e.g. for weather forecasting or climate simulation

Definitions

  • the subject matter disclosed herein generally relates to the technical field of special-purpose machines that facilitate analysis of test strips, including software-configured computerized variants of such special-purpose machines and improvements to such variants, and to the technologies by which such special-purpose machines become improved compared to other special-purpose machines that facilitate analysis of test strips.
  • the present disclosure addresses systems and methods to facilitate neural network analysis of test strips.
  • LFA Lateral Flow Assay
  • LFA test strips are cost-effective, simple, rapid, and portable tests (e.g., contained within LFA testing devices) that have become popular in biomedicine, agriculture, food science, and environmental science, and have attracted considerable interest for their potential to provide instantaneous diagnostic results directly to patients.
  • LFA-based tests are widely used in hospitals, physicians’ offices, and clinical laboratories for qualitative and quantitative detection of specific antigens and antibodies, as well as for products of gene amplification.
  • LFA tests have widespread and growing applications (e.g., in pregnancy tests, malaria tests, COVID-19 antibody tests, COVID-19 antigen tests, or drug tests) and are well-suited for point-of-care (POC) applications.
  • FIG. 1 is an illustration of a scan card with colorimetric calibration guides (e.g., color calibration swatches) and radiometric calibration guides (e.g., calibration lines), according to some example embodiments.
  • FIG. 2 is a graph illustrating how different smartphone models recorded different test line strengths, according to some example embodiments.
  • FIGS. 3-8 are graphs illustrating per-phone calibration curves for various smartphones, as learned using a calibration dataset, according to some example embodiments.
  • FIGS. 9-14 are graphs illustrating examples of the vector of calibration line strength, as measured for some smartphone models, according to some example embodiments.
  • FIG. 15 is a scatter plot of the test line strength at a certain antigen concentration versus calibration line strength at a certain calibration line index, according to some example embodiments.
  • FIG. 16 is a bar graph illustrating a distribution of line strength measurements across different smartphone models, according to some example embodiments.
  • FIG. 17 is a spatial graph that illustrates example two-dimensional (2D) data points indicating glare versus non-glare, according to some example embodiments.
  • FIGS. 18 and 19 are bar graphs illustrating reductions in the likelihood of encountering glare, according to some example embodiments.
  • FIG. 20 is a schematic diagram illustrating design concepts that facilitate achieving high quality imaging of an LFA test strip, according to some example embodiments.
  • FIGS. 21-25 and FIGS. 26-31 are sets of dimensioned views of the lightbox, according to some example embodiments.
  • FIG. 32 is a block diagram illustrating components of a machine (e.g., a computer system, such as a smartphone), according to some example embodiments, able to read instructions from a machine-readable medium and perform any one or more of the methodologies discussed herein.
  • FIG. 33 is a flowchart illustrating operations in a method of imaging an LFA test kit, according to some example embodiments.
  • Example methods facilitate analysis of test strips (e.g., an LFA test strip within an LFA test kit or other LFA test device), including analysis of a test strip by one or more neural networks, and example systems (e.g., special-purpose machines configured by special-purpose software) are configured to facilitate such analysis of test strips. Examples merely typify possible variations. Unless explicitly stated otherwise, structures (e.g., structural components, such as modules) are optional and may be combined or subdivided, and operations (e.g., in a procedure, algorithm, or other function) may vary in sequence or be combined or subdivided. In the following description, for purposes of explanation, numerous specific details are set forth to provide a thorough understanding of various example embodiments. It will be evident to one skilled in the art, however, that the present subject matter may be practiced without these specific details.
  • LFA test strips usually have a designated control line region and a test line region. Typically, results can be interpreted within 5-30 minutes after putting a sample within the designated sample well of the LFA test device (e.g., an LFA test kit).
  • the LFA test device may take the example form of an LFA test cassette, and the LFA test device typically has at least one sample well for receiving the sample to be applied to an LFA test strip inside the LFA test device.
  • the results can be read by a trained healthcare practitioner (HCP) in a qualitative manner, such as by visually determining the presence or absence of a test result line appearing on the LFA test strip.
  • The methods and systems (e.g., computer systems, such as smartphones or other mobile devices) discussed herein may use the following technologies individually or in any suitable combination:
  • (6) one or more methods of providing smartphone-independent semi-quantitative readings of LFA test strips such as by: a. training one or more neural networks using unique smartphone specific IDs (e.g., learned embeddings), or b. using the printed lines on a scan card (e.g., measured embeddings);
  • one or more algorithms for improving the sensitivity and robustness of LFA test strip readings, including, for example: a. glare avoidance by restricting the angle of imaging to angles beyond +/- 5 degrees of head-on image capture to minimize the impact of glare (e.g., in combination with a glare guardrail), b. a glare guardrail using edge density to reject images with high glare, c. a blur guardrail to reject images of LFA test strips where such images are blurred due to focus blur, motion blur, or both, during image capture, and d. a Control Line Insufficient Fluid (CLIF) detector.
  • any one or more of the methods and systems disclosed herein can be used to facilitate use of a trained neural network to interpret LFA test results, captured in images of LFA test cassettes, where such images are taken by various different smartphone makes, models, and individual devices, for a variety of applications, such as malaria tests, COVID-19 antibody tests, COVID-19 antigen tests, cancer tests, and the like, and can be adapted to work with any number of different makes, models, or other types of LFA test devices (e.g., various LFA test cassettes) that house LFA test strips.
  • FIG. 1 is an illustration of a scan card with colorimetric calibration guides (e.g., color calibration swatches, shown as areas with various stippling and cross-hatching) and radiometric calibration guides (e.g., calibration lines), according to some example embodiments.
  • the scan card implements a new design that includes color swatches specifically configured (e.g., sized, positioned, colored, or any suitable combination thereof) to aid in light and color normalization of a captured image of an LFA test strip across different smartphone models, for example, to perform on-the-fly light and color normalization to ensure that the normalized image has a calibrated or otherwise standardized color distribution, histogram, or both, regardless and independent of the smartphone model used for imaging the LFA test strip or the lighting conditions available while imaging the LFA test strip.
  • the scan card also includes calibration lines printed on the side of the scan card. These calibration lines are specifically configured (e.g., sized, positioned, colored, or any suitable combination thereof) to depict the range of line strength usually seen in an LFA test strip’s test line or its control line and would aid in performing a per-device radiometric calibration of the LFA test strip reader.
  • the radiometric calibration may be helpful where different smartphone models (e.g., different smartphone camera models) may map the intensities of test lines and control lines differently onto the red-green-blue (RGB) color space.
  • a set of eight color swatches is printed near the top of the scan card to help in light and color normalization. These swatches are configured (e.g., selected in terms of color, brightness, size, or any combination thereof) to span the range of the RGB color space and to print reliably on the paper substrate of the scan card.
  • a set of eight calibration lines (e.g., color lines, indicated with different types of stippling or cross-hatching) is printed in the right side region of the scan card, the left side region of the scan card, or both.
  • the lines in each set of lines have varying line strength that spans the range of line strength usually seen in an LFA test strip. These printed lines aid in radiometric calibration per smartphone. Black separator lines are printed in between the calibration lines to facilitate localization of the faint lines within the image.
  • the sets of lines may be printed both on the left side region and the right side region, for example, to provide redundancy for avoiding problems associated with occlusion, glare (e.g., from a flash or other light source) falling on one or the other sets of lines, or other problematic lighting condition.
  • one or more quick response (QR) codes may appear on the scan card, for example, to facilitate alignment of the scan card, and thus facilitate the above-described normalizations for colorimetry (e.g., white balance or other color correction), brightness (e.g., line strength), or both.
  • a set of color swatches with the same colors used in the above-described set of calibration lines are also printed near the bottom of the scan card, for example, to aid in any QC processes that require using a spectrophotometer to ensure that the correct colors have been printed onto the scan card.
  • Such swatches for this purpose may be configured to be at least a predetermined minimum size, for example, to facilitate proper checking by a spectrophotometer.
  • One or more of the following methods may help to make an LFA test strip reader more independent of smartphone makes, models, or individual devices.
  • Such methods include one or more methods to provide semi-quantitative results of LFA test strip readings (e.g., selected from several available levels of extent, such as, a negative value, a weak or mild positive value, and a strong positive value) or fully quantitative results of LFA test strip readings (e.g., a floating point value that numerically encodes or otherwise represents line strength or concentration), in a smartphone-agnostic manner.
  • Such methods may also provide qualitative (e.g., positive vs negative) results of LFA test strip readings, and accordingly, one or more of the methods disclosed herein may be equally applicable to any qualitative LFA test strip reader and may improve its performance across different smartphone models.
  • FIG. 2 is a graph illustrating how different smartphone models recorded different test line strengths for a set of twenty test cassettes at the limit of detection (LOD) and at 2xLOD concentration of a heat-inactivated virus.
  • As shown in FIG. 2, a Samsung S8 phone would record a line strength of 0.06 AU, whereas an iPhone 11 Pro Max would record a line strength of 0.18 AU, representing a 3x increase in the measured test line strength for the same test cassettes when imaged from different smartphone models.
  • any one or more of several online and offline approaches to perform smartphone calibration may be useful in obtaining smartphone-agnostic results in line strength measurement from widely varying smartphone models.
  • This approach allows one to learn a calibration curve for a specific individual device (e.g., a specific individual smartphone).
  • the resulting smartphone-specific calibration curve can then be used to map the test line strength measurement from that smartphone to the concentration of N-protein that could be smartphone-agnostic.
  • the smartphone-specific calibration curve can be used to map the test line strength measured from that smartphone to a reference test line strength, as if measured using a reference smartphone.
  • Such a per-phone calibration may be performed by learning a functional relationship between the concentration of N-protein / heat-inactivated virus and the test line strength that is measured by that smartphone in the operating range of the LFA test strip.
  • the exact range (e.g., span) of concentration may vary, for example, based on the application, the test cassette type, the test cassette lot, the N-protein or heat-inactivated virus used, or any suitable combination thereof. Accordingly, the range can be determined (e.g., decided or selected) based on the specific type of calibration desired (e.g., calibration within a linear-range only or a full-range calibration).
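  • As one hypothetical illustration of such a per-phone offline calibration, the Python sketch below fits measured test line strength against log concentration and numerically inverts the fit; the function names, polynomial model form, and operating range are assumptions for illustration only, not part of the disclosure:

```python
# Hypothetical per-phone offline calibration sketch (names, model form, and
# constants assumed). Fits measured test line strength as a function of
# log10(concentration), then inverts the fit to map a new strength reading
# to a phone-agnostic concentration estimate.
import numpy as np

def fit_calibration_curve(concentrations, line_strengths, degree=2):
    """Fit line strength vs. log10(concentration) with a low-order polynomial."""
    return np.polyfit(np.log10(concentrations), line_strengths, degree)

def strength_to_concentration(strength, coeffs, c_range=(1e-3, 10.0)):
    """Numerically invert the fitted curve over the assay's operating range."""
    log_grid = np.linspace(np.log10(c_range[0]), np.log10(c_range[1]), 2000)
    predicted = np.polyval(coeffs, log_grid)
    return 10 ** log_grid[np.argmin(np.abs(predicted - strength))]

# Example usage with made-up calibration data for one smartphone model:
conc = np.array([0.02, 0.04, 0.08, 0.16, 0.32])      # concentration (ng/ml)
strength = np.array([0.03, 0.06, 0.11, 0.19, 0.30])  # measured line strength (AU)
coeffs = fit_calibration_curve(conc, strength)
print(strength_to_concentration(0.12, coeffs))       # phone-agnostic estimate
```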
  • A calibration dataset may be built by using different smartphone models to image one or more reference LFA test cassettes inside a lightbox.
  • the lightbox may be designed or otherwise configured to ensure that the imaging of the LFA test cassettes can happen under a constant ambient lighting condition, imaging angle, and imaging distance, thus minimizing the impact of these covariates in the measurement process.
  • each image of the LFA test cassette is analyzed to obtain a measure of line strength for the test line. For example, the following equation may be used: CV Line Strength = (intensity of the background − intensity of the peak of the test line) / intensity of the background (Eq. 1)
  • FIGS. 3-8 are graphs illustrating per-phone calibration curves for various smartphones, as learned using a calibration dataset. Each calibration curve can be used to make phone-agnostic predictions.
  • the Y-axis shows per-phone measured test line strength, and the X-axis shows the concentration of N-protein used.
  • the concentration and the line strength measurement can be used to obtain a phone-agnostic concentration measurement, given the image and the known phone model.
  • the line strength measured by a specific smartphone (e.g., a Samsung S8) can thereby be mapped to the corresponding line strength of a reference smartphone (e.g., an iPhone 11 Pro Max).
  • the captured image depicts at least one set of the calibration lines printed on the side regions of the scan card.
  • the image that depicts these calibration lines can be used to extract calibration line strength using one or more computer vision techniques.
  • Calibration line strength can be measured in any of various ways. For example, one way would be to define calibration line strength in a manner similar to the CV Line Strength discussed above:
  • Calibration line strength = (Intensity of the background paper − intensity of the peak of any of the printed lines) / Intensity of the background paper (Eq. 2)
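  • A minimal sketch of Eq. 2 appears below (variable names and the crop orientation are assumptions; the printed line is assumed to run across the rows of the crop):

```python
# Minimal sketch of Eq. 2 (names and orientation assumed): compute calibration
# line strength from a grayscale crop of one printed line plus nearby
# background paper pixels.
import numpy as np

def calibration_line_strength(line_crop, background_crop):
    """(background intensity - peak line intensity) / background intensity."""
    background = float(np.mean(background_crop))
    profile = line_crop.mean(axis=1)   # 1D intensity profile across the line
    peak = float(profile.min())        # darkest point = peak of the printed line
    return (background - peak) / background
```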
  • Each of the printed calibration lines on the scan card may have a different calibration line strength, and the vector of the measured calibration line strengths corresponding to the set of printed lines can be treated as the signature of the smartphone, representing (e.g., reflecting) how the smartphone’s camera and image processing pipeline maps the printed colors on the scan card to the RGB color space.
  • FIGS. 9-14 are graphs illustrating examples of the vector of calibration line strength, as measured for some smartphone models. That is, FIGS. 9-14 illustrate examples of per-phone calibration vectors obtained by extracting calibration line strength from images of calibration lines printed on the scan card.
  • Each of these calibration vectors can be used to learn a per-phone online radiometric calibration for a specific smartphone model, for example, by fitting a relationship between the calibration line index and the calibration line strength, thus allowing online calibration of any new smartphone (e.g., even an unseen new smartphone).
  • One or more of various analysis techniques may be used in such online calibration to make smartphone-based reading and analysis of LFA test strips more smartphone-independent (e.g., smartphone-agnostic).
  • Some examples of such techniques include:
  • (5) feeding the calibration lines into a neural network alongside a crop of an image of the LFA test strip for example: a. by feeding in the colors of the calibration lines, the parameters of a learned color-correction function, or a learned index-to-color function; or b. by feeding in an image of the calibration lines, and then causing the neural network to learn the exact function to be learned based on the input image.
  • printed calibration lines on a scan card may be imaged and processed to obtain calibration line strength, which can then be used to perform online per-phone calibration.
  • the ranges of colors, line strengths, swatch sizes, line dimensions, or any suitable combination thereof may be chosen (e.g., selected) to be similar to those of the test line, the control line, or both, for a particular LFA test strip. This may have the effect of ensuring that the calibration curve learned using the printed calibration lines can be applied to the particular LFA test strip’s test line, control line, or both.
  • FIG. 15 is a scatter plot of the test line strength at a certain antigen concentration versus calibration line strength at a certain calibration line index, which shows across several smartphone models (e.g., from both the iPhone family and the Android family) that, as the measured test line strength increases, so does the calibration line strength, verifying the above-proposed approach for online calibration.
  • FIG. 15 plots test line strength at a 0.08 ng/ml antigen concentration versus calibration line strength at line index 2 and clearly indicates two clusters of smartphone models, as well as a linear relationship between test line strength and calibration line strength. Based on the scatter plot shown in FIG. 15, the represented smartphones may be clustered into two categories: (i) low-end smartphones for reading LFA test strips, and (ii) high-end smartphones for reading LFA test strips.
  • a smartphone can be categorized as low-end or high-end solely based on its measurement of calibration line strength.
  • a system or method may be configured (e.g., set) to have a lower limit for the calibration line strength at a certain line index, which would allow that system or method to automatically reject an unseen candidate smartphone (e.g., proposed to be used as an LFA test strip reader).
  • Such selecting or rejecting of smartphones may be directly correlated to their actual performance in reading test lines, control lines, or both, and provide better results compared to relying solely on camera parameters (e.g., sensor resolution, bit-depth, read-out noise, or any suitable combination thereof).
  • selecting or rejecting of smartphones may also take into account any modification, corruption, or enhancement of image data performed by the image processing pipeline present in different operating systems of different smartphone vendors. This may have the effect of ensuring that the criteria for selection or rejection of smartphones account for both hardware (e.g., camera) and software (e.g., image processing pipeline) in the automatic decision-making process.
  • one or both of the sets (e.g., palettes) of calibration lines on the scan card includes a series of lines that have different colors on each side (e.g., between the top and bottom QR codes). These lines may be separated by black separator lines (e.g., black separator bars), which may make them easier to locate using one or more computer vision techniques.
  • a suitably configured system or method detects the QR codes and their corners.
  • the system or method takes the corners of the top and bottom QR codes and uses these corners in a homography transform to find a search space around the line palette.
  • the system or method locates the black separator bars by applying a threshold to the grayscale values to get a mask.
  • the suitably configured system or method applies a box blur before computing the mask.
  • the system or method computes the connected components and discards any component whose area is not within a certain range, relative to the search area. If the system or method does not detect the expected number of black separator bars (e.g., 8 bars), the system or method may flag the line palette, the scan card, or both, as unextractable. Otherwise, the system or method sorts the black bar masks by projecting their centers onto the difference vector between the centers of the QR codes. Then, the system or method takes pairs of adjacent black bar masks and fits rectangles (e.g., encompassing rectangles) around them, for example, by performing a tight rotated rectangle fit. The system or method may shrink the width and height of each rectangle to obtain a rectangle that contains most of the pixels of the calibration line between the black bars. The system or method may also extract background pixels to the left, the right, or both, of the calibration lines by further manipulating the rectangle.
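  • The sketch below illustrates these localization steps with OpenCV; the thresholds, kernel sizes, and area bounds are illustrative assumptions, not values specified by the disclosure:

```python
# Sketch of black separator bar localization (assumed constants): box blur,
# dark-pixel threshold, connected components, area filtering relative to the
# search area, sorting along the QR-to-QR axis, and tight rotated rectangle fits.
import cv2
import numpy as np

def find_separator_bars(gray_roi, qr_top_center, qr_bottom_center,
                        dark_thresh=60, min_frac=0.002, max_frac=0.05):
    blurred = cv2.blur(gray_roi, (5, 5))              # box blur before masking
    _, mask = cv2.threshold(blurred, dark_thresh, 255, cv2.THRESH_BINARY_INV)
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    roi_area = gray_roi.shape[0] * gray_roi.shape[1]
    bars = []
    for i in range(1, n):                             # label 0 is the background
        area = stats[i, cv2.CC_STAT_AREA]
        if min_frac * roi_area <= area <= max_frac * roi_area:
            bars.append((centroids[i], labels == i))
    # Sort the bar masks by projecting their centers onto the QR-to-QR axis.
    axis = np.asarray(qr_bottom_center, float) - np.asarray(qr_top_center, float)
    axis /= np.linalg.norm(axis)
    bars.sort(key=lambda b: float(np.dot(b[0], axis)))
    if len(bars) != 8:                                # expected separator bar count
        return None                                   # flag the palette unextractable
    # Fit a tight rotated rectangle around each bar mask (points as x, y).
    return [cv2.minAreaRect(np.argwhere(m)[:, ::-1].astype(np.float32))
            for _, m in bars]
```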
  • the calibration lines may be fully or partially obscured, corrupted, or otherwise degraded by glare.
  • glare only impacts one side of the scan card, though in certain scenarios, glare may impact both sides of the scan card.
  • in some scenarios, glare adversely impacts one side (e.g., the left side) of the scan card, while the set of calibration lines printed on the other side (e.g., the right side) of the scan card is unextractable for other reasons.
  • a suitably configured system or method selects the calibration lines from the side that has the least amount of glare, for example, to minimize corruption of the measured calibration line strength.
  • a suitably configured system or method may use the color of the black separation lines (e.g., black separation bars) to quantify glare.
  • these masks by definition only include dark pixels.
  • the system or method may isolate the intersections of adjacent rectangles, optionally with a slight reduction of width and height in the rectangles to avoid accidentally including any background pixels. The system or method then takes the average color of each black region, then normalizes the average color based on the average color of all background regions, and then converts the normalized average color to grayscale with equal RGB channel weights.
  • the system or method then takes the maximum grayscale value of the black regions and uses that maximum grayscale value as a glare score.
  • the system or method compares this glare score to a predefined threshold score and decides whether a line palette is acceptable. If both line palettes are acceptable, the system or method may then choose the line palette that has the lower glare score.
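  • A hedged sketch of this glare-scoring and palette-selection logic appears below; the threshold value and data layout are assumptions for illustration only:

```python
# Sketch of the black-region glare score (assumed threshold and layout):
# average each black region's color, normalize by the mean background color,
# convert to gray with equal RGB weights, and take the maximum as the
# palette's glare score.
import numpy as np

def palette_glare_score(image_rgb, black_region_masks, background_mask):
    bg_mean = image_rgb[background_mask].mean(axis=0)   # mean RGB of the paper
    scores = []
    for mask in black_region_masks:
        region_mean = image_rgb[mask].mean(axis=0)      # average color of a black region
        normalized = region_mean / bg_mean              # lighting normalization
        scores.append(float(normalized.mean()))         # grayscale, equal RGB weights
    return max(scores)

def choose_palette(left_score, right_score, threshold=0.35):
    """Pick the acceptable line palette with the lower glare score."""
    candidates = [(s, side) for s, side in
                  ((left_score, "left"), (right_score, "right")) if s < threshold]
    return min(candidates)[1] if candidates else None   # None => both rejected
```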
  • a suitably configured system or method uses a detection algorithm, such as Faster R-CNN, SSD, CenterNet, DETR, or the like, to detect the line palette, the black separator bars, or both.
  • the black separator bars may be easier to detect, as some of the lines in the palette may be extremely faint.
  • a suitably configured system or method uses a segmentation net, such as UNet, to segment the black separator bars, the calibration lines, or both.
  • the system or method trains a neural network to regress the corners of the calibration lines, the corners of the black bars, or both.
  • the system or method uses template matching to detect the black separator bars, the calibration lines, or both.
  • the system or method computes a one-dimensional (1D) color profile along a line from one QR code to another QR code, and the peaks and troughs are used to locate the calibration lines, the black separator bars, or both.
  • the system or method uses a homography transform to directly locate the palette of calibration lines based on locations of the corners of the QR codes, although such example embodiments may be less robust to bending of the scan card.
  • in certain example embodiments, a suitably configured system or method implements the edge density algorithm described below for detecting glare in images of LFA test strips as an alternative way to compute or otherwise generate a glare score for each black separator bar.
  • the maximum per-bar glare score may then be used as the glare score of the entire palette of calibration lines.
  • the system or method trains a neural network to classify glare.
  • a goal of quantitative analysis of an image of an LFA test strip may be to predict the concentration of analyte, whereas for semi-quantitative analysis, a goal may be to classify the strength of the analyte (e.g., the SARS-CoV-2 virus), given an image of the LFA test strip.
  • FIG. 16 is a bar graph illustrating a distribution of line strength measurements across different smartphone models, in which each row represents a smartphone model and each colored box-plot within that row corresponds to one of four specified (e.g., desired) levels of an LFA test strip reader, namely: (i) negative (in red), (ii) weak positive (in green), (iii) mild positive (in blue), and (iv) strong positive (in purple).
  • FIG. 16 illustrates that the line strength measurement for a specific smartphone model is separable for each of these semi-quantitative levels, whereas the line strength measurements across different smartphone models are not separable.
  • suitably configured systems and methods perform semi-quantitative or quantitative reading and analysis of images depicting LFA test strips, in a manner that can be generalized across several different makes and models of smartphones.
  • a suitably configured system or method may use any trainable (e.g., learnable) artificial intelligence (AI) model, such as a neural network, that takes as input the image of the LFA test strip and an identifying (e.g., uniquely identifying among a set of smartphones) vector that represents the specific smartphone (e.g., a smartphone ID).
  • a neural network may be trained to perform one or more regression or classification tasks to directly predict the semi-quantitative or quantitative output corresponding to the input image of the LFA test strip.
  • Such an approach may be considered as an early fusion approach, in which the phone ID vector is directly fed as an input to a trainable Al model.
  • There are several ways to obtain a smartphone ID vector. One way is to use the parameters of the calibration curve obtained as part of an offline calibration (e.g., as described above) of each smartphone as an input to the neural network. Another way is to directly use the calibration line strength vector obtained as part of an online calibration (e.g., as described above) as a smartphone ID vector.
  • the online approach may provide benefits in being able to generalize to new or otherwise unseen smartphone models that are not available during the training phase of the neural network.
  • Another way to obtain a smartphone ID vector is to use a one-hot encoding vector, and stack an embedding layer of the neural network as the first stage to process the one-hot smartphone ID vector to obtain smartphone embeddings.
  • Such an approach would be able to learn any non-linear dependency between smartphone IDs and the way test lines or control lines appear in captured images of LFA test strips.
  • a suitably configured system or method combines any two or more of these approaches to obtain a smartphone ID vector, while training a neural network.
  • a suitably configured system or method uses a backbone neural network model (e.g., with a backbone architecture) that is common across all smartphone models to process an image of an LFA test strip.
  • the output of the backbone model may be a multidimensional vector representing the test strip image.
  • the system or method may add (e.g., concatenate) the smartphone ID vector of any type to the output vector of the backbone model, and then train a separate classification or regression model to combine both the image-level features and the smartphone ID to obtain a phone-agnostic semi-quantitative or quantitative readout (e.g., prediction) from the image of the LFA test strip.
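  • An illustrative late-fusion sketch of this architecture appears below, in PyTorch; the backbone choice, feature dimensions, and number of output levels are assumptions for illustration, not specified by the disclosure:

```python
# Illustrative late-fusion sketch (architecture and dimensions assumed): a
# shared backbone encodes the test strip crop, the smartphone ID vector is
# concatenated with the image features, and a small head predicts one of
# four semi-quantitative levels (negative / weak / mild / strong positive).
import torch
import torch.nn as nn
import torchvision.models as models

class PhoneAgnosticReader(nn.Module):
    def __init__(self, phone_id_dim=8, num_levels=4):
        super().__init__()
        backbone = models.resnet18(weights=None)   # common across phone models
        feat_dim = backbone.fc.in_features
        backbone.fc = nn.Identity()                # expose the image feature vector
        self.backbone = backbone
        self.head = nn.Sequential(
            nn.Linear(feat_dim + phone_id_dim, 64),
            nn.ReLU(),
            nn.Linear(64, num_levels),
        )

    def forward(self, strip_crop, phone_id_vec):
        features = self.backbone(strip_crop)                 # (B, feat_dim)
        fused = torch.cat([features, phone_id_vec], dim=1)   # concatenate phone ID
        return self.head(fused)                              # logits over levels

# phone_id_vec could be the measured calibration line strength vector (online
# calibration) or a learned embedding of a one-hot smartphone ID.
model = PhoneAgnosticReader()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 8))
```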
  • the backbone neural network may be or include a full-fledged neural network designed to predict the test line strength, control line strength, line locations, or any suitable combination thereof.
  • the system or method then stacks a second-stage classification or regression model to directly operate on the line strength prediction, the line presence logits, the smartphone ID vector, or any suitable combination thereof, for example, to perform a quantitative or semi-quantitative prediction of readout results from an image of an LFA test strip.
  • One or more other suitable variants of neural network architecture may be employed to obtain a neural network architecture that optimally combines the smartphone ID information with the image of the LFA test strip to obtain a phone-agnostic model.
  • Another challenge in ensuring that LFA test strip readers work on a large variety of smartphone models is handling the various image quality issues present in different smartphone models.
  • Apart from varying radiometric and colorimetric parameters, smartphones generally also vary in terms of the placement of the photographic flash (e.g., the smartphone’s flashlight) with respect to the camera position, the way their hardware focuses the image of the target LFA test strip, and issues that may arise due to noisy camera sensors.
  • a suitably configured system or method as described herein solves one or more challenges that one or more specific models of smartphone may face due to their individual specific designs, and such solutions may involve one or more special algorithms to deal with bad quality images, one or more special procedures to capture a good quality image in spite of limiting hardware, or any suitable combination thereof.
  • flash and camera placement for some smartphone models is such that there is a high probability that a captured image of an LFA test strip may be glary, for example, due to the flash or any other bright light source in the vicinity.
  • Glare generally happens when the light falling on the LFA test strip reflects off the wet surface of the test strip and is recorded by the camera, such as where surface reflection is greater than body reflection.
  • a faint test line or a faint control line may be occluded or overpowered by the glare and thus become unusable (e.g., invisible or otherwise unable to be accurately detected in strength, color, or size) in the image of the LFA test strip.
  • This glare may have the effect of reducing analysis sensitivity, which may result in false negative predictions.
  • a blurry image of an LFA test strip may cause faint test lines, faint control lines, or both, to be unusable (e.g., invisible or otherwise unable to be accurately detected in strength, color, or size). This blur may result in loss of sensitivity in the analysis of the image, which may result in false negative predictions.
  • Blurry images may also pose challenges regarding detection of LFA test strips (e.g., within LFA test cassettes), other algorithmic computer vision tasks, or both.
  • Since the glare on the test strip is dependent on the exact angle at which the smartphone is held during imaging, glare may be avoided by restricting imaging of the LFA test strip to exclude camera angles known to cause a higher incidence of glare on the LFA test strip.
  • Glare, in the example form of direct surface reflection, happens predominantly when the imaging angle between the smartphone (e.g., configured and operating as an LFA test strip reader) and the LFA test strip is within a 5 degree deviation (e.g., tilt) from directly head-on.
  • FIG. 17 is a spatial graph that illustrates example two-dimensional (2D) data points indicating glare versus non-glare, where the x-axis is the tilting angle left or right away from head-on with respect to horizontal displacement, also known as the yaw angle, and the y-axis is the tilting angle up or down away from head-on with respect to vertical displacement, also known as the pitch angle.
  • Each data point is a combination of yaw angle and pitch angle, and red data points represent angle combinations where glare was observed, while blue data points represent angle combinations where glare was not observed.
  • glare predominantly happens within a +/- 5 degree window in both pitch and yaw angles.
  • By restricting the imaging angle accordingly, a suitably configured system or method may effectively reduce the chance of glare.
  • a 40% reduction in the likelihood of glare may be obtained by implementing a threshold of 5 degrees of tilt.
  • an 80% reduction may be obtained by implementing a threshold of 10 degrees of tilt.
  • FIGS. 18 and 19 are bar graphs illustrating such reductions in the likelihood of encountering glare. Therefore, in various example embodiments, a suitably configured system or method implements a general methodology for reading an image of an LFA test strip using a smartphone, for example, by requiring a minimum tilt angle before image capture (as sketched below).
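  • A minimal sketch of such an angle-based guardrail follows; the function name and the exact window value are assumptions (the disclosure discusses 5 and 10 degree windows):

```python
# Minimal angle-based glare guardrail sketch (names and window assumed):
# glare is most likely when both pitch and yaw are near head-on, so reject
# a capture while the camera sits inside the near-head-on window.
def glare_risk(pitch_deg: float, yaw_deg: float, window_deg: float = 5.0) -> bool:
    """True when the camera is close enough to head-on that glare is likely."""
    return abs(pitch_deg) <= window_deg and abs(yaw_deg) <= window_deg

# Example: prompt the user to tilt the phone while glare_risk(pitch, yaw) is
# True; a 10-degree window corresponds to the larger (~80%) reduction in the
# likelihood of glare discussed above.
```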
  • a typical (e.g., standard) time to obtain a usable readout from an average LFA test strip begins around 15 minutes after applying the biological sample into the sample well and may extend up to 20-30 minutes after the sample application.
  • the LFA test strip may be insufficiently (e.g., not fully) developed.
  • test lines may reduce in strength over time thereafter, and there may be situations, after a period of time, when the concentration of analyte is significantly higher than indicated by the strength of the test line. Therefore, it may be helpful to combine multiple images, captured over time, of an LFA test strip to improve the sensitivity of the reading and analysis of the combined images, to reduce the chances of glare impacting the reading and analysis, or both.
  • a suitably configured system or method may implement one or more of the following procedural operations, which allow monitoring of a test line (e.g., as a test line signal) over time within a test strip reading window (e.g., an optimal LFA test strip readout window).
  • Such a reading window may be defined as a period of time, for example, from 15 minutes after applying the sample to 30 minutes after applying the sample.
  • Example operations include:
  • perform reading and analysis of the test lines, control lines, or both, depicted in only those images, among the multiple acquired images, that are well-developed and clear (e.g., clean) in depicting the LFA test strip.
  • Systems and methods that implement one or more of these procedural operations may optimize the readout from a smartphone-based LFA test strip reader, as the LFA test strip develops over time after application of the sample and exhibits different strengths of one or more control lines or test lines over time.
  • Another approach to avoid glare in images is to develop or otherwise implement a guardrail algorithm that would automatically detect glare in an image of an LFA test strip, reject the image, and prompt the end user to take remedial action, such as recapturing the image or adjusting the lighting conditions and then recapturing the image.
  • a suitably configured system or method may implement all or part of the following guardrail algorithm to detect glare in a captured image that depicts the LFA test strip of an LFA test cassette.
  • the guardrail algorithm may compute edge density (e.g., the percentage of pixels that are edge pixels) within the image region that depicts the result well, and may reject the image when the edge density is too high, as sketched below.
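  • The following sketch illustrates the edge density computation with a Canny detector; the Canny thresholds and the density cutoff are assumptions for illustration:

```python
# Edge-density glare guardrail sketch (thresholds assumed): glare produces
# strong spurious edges, so a high fraction of edge pixels in the result-well
# crop is used to reject the image.
import cv2

def glare_guardrail(result_well_gray, low=50, high=150, max_density=0.10):
    """Return True (reject image) when the fraction of edge pixels is too high."""
    edges = cv2.Canny(result_well_gray, low, high)   # expects an 8-bit grayscale crop
    density = float((edges > 0).mean())              # fraction of edge pixels
    return density > max_density                     # True => prompt a recapture
```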
  • a significant downside of glare is that glare can cause a false negative reading. Under certain conditions, glare may cause a false positive reading, but glare is unlikely to cause a strong false positive reading.
  • a suitably configured system or method may omit (e.g., skip) the glare guardrail and thus reduce or avoid the risk of a false alarm.
  • a suitably configured system or method implements an edge detection algorithm other than Canny, such as Sobel or Laplace with a threshold.
  • the system or method may also apply blurring before edge detection.
  • the system or method may aggregate the edge strengths, for example, by an average, an L2 average, an L3 average, etc., by inputting (e.g., plugging) individual edge strengths into a learnable function (e.g., a sigmoid) before aggregating the edge strengths, or any suitable combination thereof.
  • the system or method may cause a spatial weighting map to be learned (e.g., by any one or more of the Al models discussed above), instead of taking a uniform average across a predetermined region.
  • a suitably configured system or method computes one or more features of an image.
  • computed image features include: edge density, edge histogram, color histogram, color variance, color entropy, local-binary -pattern histogram, or any suitable combination thereof.
  • the system or method may then feed one or more of these computed features into a classifier (e.g., an SVM, a logistic regression, or a neural network).
  • a suitably configured system or method trains a neural network, such as a convolutional neural network or a vision transformer, to predict the presence of glare in the result well of the LFA test cassette or in a cropped region of the image in which the result well appears.
  • Real data, synthetic data, or both may be used by the system or method to train the neural network.
  • Suitable synthetic data may be synthesized (e.g., by the suitably configured system or method) by using salt-and-pepper noise, Perlin noise, data generated by a Markov network (e.g., fitted to actual glare images), or any suitable combination thereof. In many situations, very local dependencies exist between or among the pixels of an image that depicts glare.
  • a guardrail approach may similarly be used to directly reject blurry images that may be acquired by a smartphone camera unable to focus on the LFA test strip.
  • Such a guardrail approach may include accordingly instructing the end user to place the smartphone’s camera a bit further away and then to recapture the image of the LFA test strip.
  • a suitably configured system or method may implement one or more of the following procedural operations.
  • the system or method automatically selects (e.g., chooses) a region of the image, where the region is expected to have sharp edges.
  • This region may depict text on the surface of the LFA test cassette, such as text that is nearest to the test line of the LFA test strip.
  • the region may additionally or alternatively depict one or more of the QR codes on the scan card.
  • the system or method may locate the former by taking a homography transform from a region detected and labelled as "inner-testkit" (e.g., in implementing a glare guardrail).
  • the system or method converts the edge region to grayscale and normalizes its lighting, for example, by setting the average grayscale value to a predefined constant.
  • This normalization technique may work especially well, because blurring is a linear operation and therefore should have no influence on the average intensity.
  • the suitably configured system or method computes an edge strength map, for example, using a Sobel filter, a Laplace filter, some other filter, or any suitable combination thereof.
  • the system or method may also apply a smoothing filter (e.g., a box blur, a Gaussian blur, a median filter, or any suitable combination thereof) to reduce or eliminate false edges due to noise.
  • the system or method may then aggregate the edge strengths, for example, by taking a predefined percentile (e.g., the 90th percentile), the mean, the median, the standard deviation, the L2 average, the L3 average, or any suitable combination thereof. In many situations, taking the predefined percentile provides good results, because the edge strengths tend to follow a bimodal distribution. If the predefined percentile is below a certain threshold, then the system or method may flag the image as blurry and ask the user to reimage (e.g., recapture the image).
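  • A sketch of this blur guardrail pipeline follows; the target mean, percentile, and sharpness threshold are assumptions for illustration:

```python
# Blur guardrail sketch (constants assumed): normalize lighting in a region
# expected to contain sharp edges, compute a Sobel edge strength map, aggregate
# with a high percentile, and compare against a sharpness threshold.
import cv2
import numpy as np

def is_blurry(edge_region_gray, target_mean=128.0, percentile=90,
              min_sharpness=25.0):
    gray = edge_region_gray.astype(np.float32)
    gray *= target_mean / max(float(gray.mean()), 1e-6)   # light normalization
    gray = cv2.GaussianBlur(gray, (3, 3), 0)              # suppress sensor noise
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    strength = np.sqrt(gx ** 2 + gy ** 2)                 # edge strength map
    return np.percentile(strength, percentile) < min_sharpness  # True => reimage
```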
  • a suitably configured system or method takes the grayscale histogram, instead of the edge strength histogram, and looks at the height of one or more bins that correspond to a gray color (e.g., somewhere between black and white). Since light normalization ensures that any text will be black, and the background will be white, the only gray pixels should come from blurring. Thus, if the height of the one or more gray bins is above a predetermined threshold height, then the system or method may flag the image as blurry.
  • a suitably configured system or method extracts one or more image features, such as the Sobel 90th percentile, the gray bin height, a gray histogram, an edge strength histogram, or any suitable combination thereof, and feeds one or more of these image features into a classifier (e.g., an SVM, a logistic regression, or a neural network) to predict blurriness.
  • the system or method uses a convolutional neural network, a vision transformer, or both, to predict blurriness. Any one or more of these methodologies may be trained by the system or method based on real data, synthetic data, or both.
  • the system or method may simulate focus blur with a Gaussian kernel, simulate motion blur with a line kernel, or both. In some situations, the sigma of the Gaussian kernel and the width of the line kernel are used by the system or method to quantify the amount of blur, to provide extra supervision during training, or both.
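  • An illustrative sketch of such synthetic blur generation follows; the kernel construction details are assumptions, and sigma and line width can double as regression targets for extra supervision:

```python
# Synthetic blur generation sketch (kernel details assumed): Gaussian blur
# simulates focus blur, and a normalized line kernel simulates motion blur.
import cv2
import numpy as np

def synth_focus_blur(image, sigma):
    return cv2.GaussianBlur(image, (0, 0), sigma)     # kernel size derived from sigma

def synth_motion_blur(image, length, angle_deg=0.0):
    kernel = np.zeros((length, length), np.float32)
    kernel[length // 2, :] = 1.0 / length             # horizontal line kernel
    rot = cv2.getRotationMatrix2D((length / 2, length / 2), angle_deg, 1.0)
    kernel = cv2.warpAffine(kernel, rot, (length, length))
    kernel /= max(float(kernel.sum()), 1e-6)          # keep brightness constant
    return cv2.filter2D(image, -1, kernel)
```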
  • When an LFA test strip has insufficient buffer fluid, a characteristic two-level color pattern may result (e.g., light on top, dark on bottom), and this two-level color pattern may cause a false detection of a control line. Even if no false detection of a control line occurs, it is generally unsafe to attempt to interpret an LFA test strip that has insufficient buffer fluid. Therefore, in various example embodiments, a suitably configured system or method implements a CLIF guardrail, which attempts to detect this two-level color pattern.
  • a suitably configured system or method counts the number of light-to-dark horizontal edges and the number of dark-to-light horizontal edges and then compares these counts to determine whether the light-to-dark horizontal edges outnumber the dark-to-light horizontal edges.
  • the characteristic two-level CLIF color pattern has one light-to-dark horizontal edge, while a sufficiently strong control line has both a light-to-dark horizontal edge and a dark-to-light horizontal edge. Accordingly, a system or method that implements this edge-counting rule is able to detect and flag the presence of a CLIF condition depicted in an image of an LFA test strip.
  • a suitably configured system or method starts by converting the RGB values to one channel, for example, by taking only the green channel.
  • the system or method may take a gray channel or perform any other weighted sum or transform to turn three color component (e.g., RGB) channels into one color component channel (e.g., green only).
  • the system or method may calculate an average across the rows to obtain a 1D profile.
  • the system or method may then use a smoothing filter (e.g., uniform or Gaussian) to remove noise from this 1D profile and thus obtain a reliable 1D gradient based on this 1D profile.
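  • A minimal sketch of this 1D profile computation follows; the smoothing sigma and the use of SciPy are assumptions for illustration:

```python
# 1D profile sketch (parameters assumed): take the green channel of the strip
# crop, average across rows, smooth, and differentiate to obtain a reliable
# 1D gradient.
import numpy as np
from scipy.ndimage import gaussian_filter1d

def strip_profile_gradient(strip_rgb, sigma=2.0):
    green = strip_rgb[:, :, 1].astype(np.float32)   # single color-component channel
    profile = green.mean(axis=1)                    # 1D profile, one value per row
    smoothed = gaussian_filter1d(profile, sigma)    # remove noise before differentiating
    return np.gradient(smoothed)                    # 1D derivative of the profile
```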
  • a suitably configured system or method may use the following state-machine rule to identify one or more low triggers, one or more high triggers, or any suitable combination thereof:
  • the system or method may safeguard against duplicate triggers by merging low triggers with nearby adjacent low triggers and merging high triggers with nearby adjacent high triggers. Additionally, or alternatively, the system or method may further safeguard against duplicate triggers from fluctuations near the predetermined thresholds by having a buffer zone around each threshold and checking that the derivative fully crossed both ends of the buffer zone before identifying the corresponding point as low trigger or as a high trigger.
  • a suitably configured system or method guards against two potential failures.
  • the edges of a control line might be unclear or otherwise difficult to ascertain (e.g., “on the fence”), such that the control line has only a high trigger or only a low trigger.
  • a suitably configured system or method may use the following logic to ensure that the two corresponding triggers are paired (e.g., “latched”) together: make the low trigger threshold greater in magnitude than the high trigger threshold, and remove any high trigger that does not appear shortly after a low trigger. (Rule 2)
  • This logic ensures that the system or method will detect both edges of the control line or neither, without impacting the parity of the triggers.
  • the second potential failure is that the characteristic two-level CLIF color pattern may not always be monotonic, since surface tension can cause the color pattern to become dark and then become slightly lighter.
  • the system or method may find the peak derivative magnitudes of the triggers and apply the following rule: only allow a high trigger if its magnitude is at least some multiplier (e.g., 0.333) times the magnitude of the low trigger that it follows. (Rule 3)
  • the system or method may check whether the low triggers outnumber the high triggers. If yes, then the system or method may flag the image as exhibiting a CLIF condition.
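  • A hedged sketch combining Rules 2 and 3 with the final trigger count follows; the thresholds, latch window, and merge distance are assumptions (only the 0.333 multiplier is given as an example above):

```python
# CLIF trigger-counting sketch (thresholds and windows assumed): find low
# (light-to-dark) and high (dark-to-light) triggers in the 1D gradient, latch
# highs to a preceding low (Rule 2), enforce the magnitude ratio (Rule 3),
# and flag CLIF when the low triggers outnumber the kept high triggers.
import numpy as np

def clif_flag(gradient, low_thresh=-4.0, high_thresh=2.0,
              latch_window=15, rule3_mult=0.333):
    lows = [i for i, g in enumerate(gradient) if g < low_thresh]
    highs = [i for i, g in enumerate(gradient) if g > high_thresh]

    def merge(idxs, gap=3):            # merge adjacent duplicate triggers
        merged = []
        for i in idxs:
            if not merged or i - merged[-1] > gap:
                merged.append(i)
        return merged

    lows, highs = merge(lows), merge(highs)
    kept_highs = []
    for h in highs:
        prior = [l for l in lows if 0 < h - l <= latch_window]   # Rule 2
        if prior and abs(gradient[h]) >= rule3_mult * abs(gradient[prior[-1]]):
            kept_highs.append(h)                                  # Rule 3
    return len(lows) > len(kept_highs)   # True => flag a CLIF condition
```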
  • a suitably configured system or method uses a “level set” algorithm, instead of looking at derivatives.
  • a “level set” algorithm finds the y-coordinate that maximizes the difference between the average intensity of the pixels above y and the average intensity of the pixels below y.
  • Linear detrending could be used by the system or method to keep light gradients from influencing this difference. Dilations may be used by the system or method to minimize the impact of the control line, the test line, or both.
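  • A short sketch of this "level set" alternative follows (detrending and dilations are omitted for brevity):

```python
# "Level set" sketch: find the row y that maximizes the difference between
# the mean intensity above y and the mean intensity below y.
import numpy as np

def level_set_split(profile):
    best_y, best_diff = None, -np.inf
    for y in range(1, len(profile) - 1):
        diff = profile[:y].mean() - profile[y:].mean()   # light above, dark below
        if diff > best_diff:
            best_y, best_diff = y, diff
    return best_y, best_diff
```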
  • a suitably configured system or method trains a convolutional neural network, a vision transformer, or some other neural network to predict the presence or absence of a two-level CLIF color pattern, for example, from the image of the LFA test strip or from the 1D row-average profile.
  • This neural network may be trained by the system or method based on real examples, synthetic examples (e.g., when real examples are difficult to reproduce), or both. Additionally, or alternatively, the neural network may be pretrained on synthetic examples and then fine-tuned on real examples.
  • Synthetic examples may be generated (e.g., by the system or the method) by creating a two-level color pattern and then adding some noise (e.g., Perlin noise), blurring, or both, and then synthesizing one or more control lines, test lines, or both. Additionally, or alternatively, the system or method may take real data and create synthetic examples by stretching and contracting, repeating and cropping, or both, the known light and dark sections of the CLIF pattern. The location of the CLIF pattern may be used by the system or method as extra supervision in the training of the neural network.
  • certain combinations of image features may be fed by the system or method into an SVM, a logistic regression, a neural network, or some other classifier to learn to detect and flag CLIF patterns.
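  • A minimal sketch of this classifier option using scikit-learn; the feature vectors (e.g., level-set gap, trigger parity, detrended variance) and labels are purely illustrative stand-ins for values computed by a pipeline like the one above.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Illustrative toy features: [level-set gap, lows > highs, detrended variance]
X = np.array([[0.02, 0, 0.001],
              [0.25, 1, 0.040],
              [0.03, 0, 0.002],
              [0.30, 1, 0.055]])
y = np.array([0, 1, 0, 1])        # 1 = CLIF pattern present
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.22, 1, 0.030]]))   # likely flagged as CLIF
```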
  • An end-goal of a smartphone-based LFA test strip reader is to ensure robust operation (e.g., reading and analyzing images of LFA test strips) under ambient light settings and under varied imaging conditions, such as different imaging distances and imaging angles and across different smartphone models.
  • Examples of these covariates include: (i) imaging distance, (ii) imaging angle, (iii) ambient lighting, (iv) location of smartphone camera and flash relative to the LFA test strip, and (v) various forms of blur (e.g., motion blur).
  • a special lightbox may be utilized for image capture.
  • the special lightbox provides improved (e.g., optimal) imaging conditions that may avoid problems, such as glare on the test strip, a blurry image of the test strip, shadows or directional lighting falling on the test strip, or any combination thereof.
  • any one or more of the systems and methods discussed herein may incorporate use of this special lightbox, for example, to facilitate improved (e.g., optimal) and repeatable imaging of an LFA test strip (e.g., within an LFA test cassette).
  • Dimensions, angles, and other physical parameters of the special lightbox may provide improved (e.g., optimal) results when an LFA test strip is imaged by any mobile camera device, such as a smartphone.
  • the lightbox may be constructed using off-the-shelf cardboard material to provide an optical enclosure for smartphone-based LFA test strip readers.
  • one or more features present in the lightbox may include:
  • an optimal and generalized imaging window for smartphone cameras, e.g., an imaging window 3.81 cm (1.5 inches) wide x 4.445 cm (1.75 inches) long, such that the imaging window allows most smartphone cameras and their flash to fit within the imaging window and therefore allows a single lightbox to be usable across widely varying smartphone hardware;
  • an imaging platform on the top surface inclined at a 7.5 degree pitch angle, which may tilt the smartphone (e.g., running an LFA test strip reader app) up for optimal imaging, such as to avoid glare from the smartphone flash falling on the test strip region of an LFA test cassette;
  • FIG. 20 is a schematic diagram illustrating design concepts that facilitate achieving high quality imaging of an LFA test strip (e.g., within an LFA test cassette), according to some example embodiments.
  • the left half of FIG. 20 contains two top views of the special lightbox.
  • the leftmost top view shows the exterior of the lightbox, with the imaging window in the center of the top surface of the lightbox.
  • the rightmost top view shows the interior of the lightbox (e.g., with the top surface removed), with the LFA test cassette visible in the center of the top view.
  • the LFA test cassette may be placed at a central location within a designated and marked region (e.g., on the bottom surface of the lightbox).
  • the right half of FIG. 20 contains a side elevation view of the lightbox and illustrates the angled design of the lightbox, such that any smartphone placed on the top surface for imaging an LFA test strip underneath is always positioned at a certain pitch angle for avoiding or minimizing glare in the resulting captured image of the LFA test strip.
  • FIGS. 21-25 and FIGS. 26-31 are sets of dimensioned views of the lightbox, according to some example embodiments.
  • the lightbox blocks ambient light from reaching an LFA test cassette placed inside the lightbox, and the lightbox provides a standardized lighting environment for capturing an image of the LFA test cassette.
  • the lightbox holds smartphones placed on the top surface at a consistent distance and angle (e.g., pitch angle) relative to the LFA test cassette.
  • the height from the smartphone’s camera lens to the LFA test cassette may be 12.7 cm (5 inches).
  • the lightbox has an imaging window (e.g., a cutout in the top surface) that accommodates the lenses and flashes of various smartphone models, facilitating capture of images without obstruction of the imaging hardware.
  • the dimensions of the imaging window may be 4.445 cm (1.75 inches) long x 3.81 cm (1.5 inches) wide.
  • the imaging window may be centrally located on the top surface (e.g., top plane) of the lightbox.
  • the lightbox is constructed using Uline S-15058 cardboard.
  • the base dimensions of the lightbox may be 35.2425 cm (13.875 inches) long x 25.0825 cm (9.875 inches) wide.
  • the top dimensions of the lightbox may be 34.925 cm (13.75 inches) long x 27.6225 cm (10.875 inches) wide.
  • the front surface (e.g., front plane) dimensions of the lightbox may be 22.5425 cm (8.875 inches) long x 10.795 cm (4.25 inches) wide.
  • the back surface (e.g., backplane) dimensions of the lightbox may be 22.5425 cm (8.875 inches) long x 14.605 cm (5.75 inches) wide.
  • the side surfaces (e.g., side planes) of the lightbox may each be 34.925 cm (13.75 inches) long.
  • FIG. 32 is a block diagram illustrating components of a machine 1100, according to some example embodiments, able to read instructions 1124 from a machine-readable medium 1122 (e.g., a non-transitory machine-readable medium, a machine-readable storage medium, a computer-readable storage medium, or any suitable combination thereof) and perform any one or more of the methodologies discussed herein, in whole or in part.
  • FIG. 32 shows the machine 1100 in the example form of a computer system (e.g., a computer) within which the instructions 1124 (e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine 1100 to perform any one or more of the methodologies discussed herein may be executed, in whole or in part.
  • the machine 1100 may operate as a standalone device or may be communicatively coupled (e.g., networked) to other machines.
  • the machine 1100 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a distributed (e.g., peer-to-peer) network environment.
  • the machine 1100 may be a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a cellular telephone, a smart phone, a set-top box (STB), a personal digital assistant (PDA), a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions 1124, sequentially or otherwise, that specify actions to be taken by that machine.
  • the machine 1100 includes a processor 1102 (e.g., one or more central processing units (CPUs), one or more graphics processing units (GPUs), one or more digital signal processors (DSPs), one or more application specific integrated circuits (ASICs), one or more radio-frequency integrated circuits (RFICs), or any suitable combination thereof), a main memory 1104, and a static memory 1106, which are configured to communicate with each other via a bus 1108.
  • the processor 1102 contains solid-state digital microcircuits (e.g., electronic, optical, or both) that are configurable, temporarily or permanently, by some or all of the instructions 1124 such that the processor 1102 is configurable to perform any one or more of the methodologies described herein, in whole or in part.
  • a set of one or more microcircuits of the processor 1102 may be configurable to execute one or more modules (e.g., software modules) described herein.
  • the processor 1102 is a multicore CPU (e.g., a dual-core CPU, a quad-core CPU, an 8-core CPU, or a 128-core CPU) within which each of multiple cores behaves as a separate processor that is able to perform any one or more of the methodologies discussed herein, in whole or in part.
  • beneficial effects described herein may be provided by the machine 1100 with at least the processor 1102, these same beneficial effects may be provided by a different kind of machine that contains no processors (e.g., a purely mechanical system, a purely hydraulic system, or a hybrid mechanical-hydraulic system), if such a processor-less machine is configured to perform one or more of the methodologies described herein.
  • the machine 1100 may further include a graphics display 1110 (e.g., a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, a cathode ray tube (CRT), or any other display capable of displaying graphics or video).
  • the machine 1100 may also include an alphanumeric input device 1112 (e.g., a keyboard or keypad), a pointer input device 1114 (e.g., a mouse, a touchpad, a touchscreen, a trackball, a joystick, a stylus, a motion sensor, an eye tracking device, a data glove, or other pointing instrument), a data storage 1116, an audio generation device 1118 (e.g., a sound card, an amplifier, a speaker, a headphone jack, or any suitable combination thereof), and a network interface device 1120.
  • the data storage 1116 (e.g., a data storage device) includes the machine-readable medium 1122 (e.g., a tangible and non-transitory machine-readable storage medium) on which are stored the instructions 1124 embodying any one or more of the methodologies or functions described herein.
  • the instructions 1124 may also reside, completely or at least partially, within the main memory 1104, within the static memory 1106, within the processor 1102 (e.g., within the processor’s cache memory), or any suitable combination thereof, before or during execution thereof by the machine 1100. Accordingly, the main memory 1104, the static memory 1106, and the processor 1102 may be considered machine-readable media (e.g., tangible and non-transitory machine-readable media).
  • the instructions 1124 may be transmitted or received over a network 190 via the network interface device 1120.
  • the network interface device 1120 may communicate the instructions 1124 using any one or more transfer protocols (e.g., hypertext transfer protocol (HTTP)).
  • the machine 1100 may be a portable computing device (e.g., a smart phone, a tablet computer, or a wearable device) and may have one or more additional input components 1130 (e.g., sensors or gauges).
  • Examples of such input components 1130 include an image input component (e.g., one or more cameras), an audio input component (e.g., one or more microphones), a direction input component (e.g., a compass), a location input component (e.g., a global positioning system (GPS) receiver), an orientation component (e.g., a gyroscope), a motion detection component (e.g., one or more accelerometers), an altitude detection component (e.g., an altimeter), a temperature input component (e.g., a thermometer), and a gas detection component (e.g., a gas sensor).
  • Input data gathered by any one or more of these input components 1130 may be accessible and available for use by any of the modules described herein (e.g., with suitable privacy notifications and protections, such as opt-in consent or opt-out consent, implemented in accordance with user preference, applicable regulations, or any suitable combination thereof).
  • the term “memory” refers to a machine-readable medium able to store data temporarily or permanently and may be taken to include, but not be limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, and cache memory. While the machine-readable medium 1122 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions.
  • the term “machine-readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of carrying (e.g., storing or communicating) the instructions 1124 for execution by the machine 1100, such that the instructions 1124, when executed by one or more processors of the machine 1100 (e.g., processor 1102), cause the machine 1100 to perform any one or more of the methodologies described herein, in whole or in part.
  • a “machine-readable medium” refers to a single storage apparatus or device, as well as cloud-based storage systems or storage networks that include multiple storage apparatus or devices.
  • the term “machine-readable medium” shall accordingly be taken to include, but not be limited to, one or more tangible and non-transitory data repositories (e.g., data volumes) in the example form of a solid-state memory chip, an optical disc, a magnetic disc, or any suitable combination thereof.
  • a “non-transitory” machine-readable medium specifically excludes propagating signals per se.
  • the instructions 1124 for execution by the machine 1100 can be communicated via a carrier medium (e.g., a machine-readable carrier medium).
  • Examples of a carrier medium include a non-transient carrier medium (e.g., a non-transitory machine-readable storage medium, such as a solid-state memory that is physically movable from one place to another place) and a transient carrier medium (e.g., a carrier wave or other propagating signal that communicates the instructions 1124).
  • Modules may constitute software modules (e.g., code stored or otherwise embodied in a machine-readable medium or in a transmission medium), hardware modules, or any suitable combination thereof.
  • a “hardware module” is a tangible (e.g., non-transitory) physical component (e.g., a set of one or more processors) capable of performing certain operations and may be configured or arranged in a certain physical manner.
  • one or more computer systems or one or more hardware modules thereof may be configured by software (e.g., an application or portion thereof) as a hardware module that operates to perform operations described herein for that module.
  • a hardware module may be implemented mechanically, electronically, hydraulically, or any suitable combination thereof.
  • a hardware module may include dedicated circuitry or logic that is permanently configured to perform certain operations.
  • a hardware module may be or include a special-purpose processor, such as a field programmable gate array (FPGA) or an ASIC.
  • a hardware module may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations.
  • a hardware module may include software encompassed within a CPU or other programmable processor. It will be appreciated that the decision to implement a hardware module mechanically, hydraulically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.
  • the phrase “hardware module” should be understood to encompass a tangible entity that may be physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein.
  • the phrase “hardware-implemented module” refers to a hardware module. Considering example embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where a hardware module includes a CPU configured by software to become a special-purpose processor, the CPU may be configured as respectively different special-purpose processors (e.g., each included in a different hardware module) at different times.
  • Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over circuits and buses) between or among two or more of the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory (e.g., a memory device) to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information from a computing resource).
  • processors may be temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions described herein.
  • processor-implemented module refers to a hardware module in which the hardware includes one or more processors. Accordingly, the operations described herein may be at least partially processor-implemented, hardware-implemented, or both, since a processor is an example of hardware, and at least some operations within any one or more of the methods discussed herein may be performed by one or more processor-implemented modules, hardware-implemented modules, or any suitable combination thereof.
  • processors may perform operations in a “cloud computing” environment or as a service (e.g., within a “software as a service” (SaaS) implementation). For example, at least some operations within any one or more of the methods discussed herein may be performed by a group of computers (e.g., as examples of machines that include processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an application program interface (API)). The performance of certain operations may be distributed among the one or more processors, whether residing only within a single machine or deployed across a number of machines.
  • the one or more processors or hardware modules may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the one or more processors or hardware modules may be distributed across a number of geographic locations.
  • FIG. 33 is a flowchart illustrating operations in a method 3300 of imaging an LFA test kit, according to some example embodiments.
  • the method 3300 may be performed partly or fully by one or more machines (e.g., computer systems, smartphones, or other devices), such as the machine 1100 discussed with respect to FIG. 32 (e.g., implementing one or more operations discussed above with respect to FIG. 16).
  • the method 3300 includes one or more of operations 3310, 3320, 3330, 3340, 3350, or 3360.
  • operations 3310, 3320, and 3330 may be performed by one machine (e.g., a computer system), and operations 3340, 3350, and 3360 may be performed by another machine (e.g., a smartphone).
  • a machine accesses training data that includes reference values of analyte strength with corresponding reference smartphone identifiers and corresponding reference images.
  • Each of the reference images may depict a corresponding reference test strip (e.g., a reference LFA test strip) photographed by a corresponding reference smartphone identified by a corresponding reference smartphone identifier.
  • the machine trains an artificial intelligence (AI) model, based on the training data accessed in operation 3310.
  • the machine trains the AI model to output a predicted value of analyte strength based on a candidate smartphone identifier and a candidate image that depicts a candidate test strip (e.g., a candidate LFA test strip) photographed by a candidate smartphone identified by the candidate smartphone identifier.
  • the machine (e.g., the computer system) provides the trained AI model to the candidate smartphone (e.g., to enable the candidate smartphone to perform operations 3340, 3350, and 3360 of the method 3300).
  • a machine obtains an artificial intelligence (AI) model (e.g., from another machine that performed operation 3330).
  • the obtained AI model is trained to output a predicted value of analyte strength based on a candidate smartphone identifier and a candidate image that depicts a candidate test strip (e.g., a candidate LFA test strip) photographed by a candidate smartphone identified by the candidate smartphone identifier.
  • the AI model may be trained based on training data that includes reference values of analyte strength with corresponding reference smartphone identifiers and corresponding reference images, and each of the reference images may depict a corresponding reference test strip (e.g., a reference LFA test strip) photographed by a corresponding reference smartphone identified by a corresponding reference smartphone identifier.
  • the machine (e.g., the smartphone) generates the predicted value of analyte strength by inputting the candidate smartphone identifier and the candidate image into the AI model obtained in operation 3340.
  • the AI model outputs the predicted value of analyte strength.
  • the machine (e.g., the smartphone) causes presentation of the predicted value of analyte strength, as generated in operation 3350.
  • the machine may itself present the generated predicted value of analyte strength.
  • the machine may send the generated predicted value of analyte strength to a different machine (e.g., a smartwatch communicatively coupled to the smartphone) and cause that different machine to present the generated predicted value of analyte strength.
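  • The train-then-predict flow described above (operations 3310 through 3360) might look like the following PyTorch sketch. The architecture, tensor sizes, and hyperparameters are illustrative assumptions, not the disclosed design; the random tensors stand in for reference images, smartphone identifiers, and analyte-strength values.

```python
import torch
import torch.nn as nn

class StripReader(nn.Module):
    """Sketch of a model in the spirit of method 3300: predicts analyte
    strength from a test-strip image plus a smartphone identifier."""
    def __init__(self, num_phone_models=64, embed_dim=8):
        super().__init__()
        self.backbone = nn.Sequential(            # tiny CNN over the strip image
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.phone_embed = nn.Embedding(num_phone_models, embed_dim)
        self.head = nn.Linear(16 * 4 * 4 + embed_dim, 1)   # analyte strength

    def forward(self, image, phone_id):
        feats = self.backbone(image)
        ident = self.phone_embed(phone_id)        # learned smartphone identifier
        return self.head(torch.cat([feats, ident], dim=1)).squeeze(1)

# One toy training step (operations 3310-3320) and one inference (3340-3360).
model = StripReader()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.rand(4, 3, 64, 64)                 # stand-in reference images
phone_ids = torch.tensor([0, 1, 2, 1])            # stand-in reference identifiers
strengths = torch.rand(4)                         # stand-in reference values
loss = nn.functional.mse_loss(model(images, phone_ids), strengths)
opt.zero_grad()
loss.backward()
opt.step()
with torch.no_grad():
    predicted = model(torch.rand(1, 3, 64, 64), torch.tensor([1]))
```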
  • a first example provides a method comprising: accessing, by one or more processors of a machine, training data that includes reference values of analyte strength with corresponding reference smartphone identifiers and corresponding reference images, each of the reference images depicting a corresponding reference test strip (e.g., a reference LFA test strip) photographed by a corresponding reference smartphone identified by a corresponding reference smartphone identifier; training, by the one or more processors of the machine and based on the training data, an artificial intelligence (AI) model to output a predicted value of analyte strength based on a candidate smartphone identifier and a candidate image that depicts a candidate test strip (e.g., a candidate LFA test strip) photographed by a candidate smartphone identified by the candidate smartphone identifier; and providing, by the one or more processors of the machine, the trained AI model to the candidate smartphone.
  • a second example provides a method according to the first example, wherein: a reference smartphone identifier among the reference smartphone identifiers includes a vector of parameters that define a camera calibration curve of a smartphone model of the reference smartphone identified by the reference smartphone identifier.
  • a third example provides a method according to the first example or the second example, wherein: a reference smartphone identifier among the reference smartphone identifiers includes a vector of embeddings derived from a one-hot vector that encodes a smartphone model of the reference smartphone identified by the reference smartphone identifier.
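  • As an illustration of the identifier encodings recited in the second and third examples, a smartphone identifier might be realized either as a vector of camera calibration curve parameters or as a learned embedding derived from a one-hot encoded smartphone model; the values and dimensions below are hypothetical.

```python
import torch
import torch.nn as nn

# Option A: identifier = parameters defining the phone's camera calibration
# curve (hypothetical polynomial coefficients from a per-model fit).
calibration_identifier = torch.tensor([0.92, 0.15, -0.01])

# Option B: identifier = vector of embeddings derived from a one-hot vector
# that encodes the smartphone model.
num_models, embed_dim = 64, 8
one_hot = nn.functional.one_hot(torch.tensor(3), num_models).float()
to_embedding = nn.Linear(num_models, embed_dim, bias=False)
embedding_identifier = to_embedding(one_hot)   # learned during training
```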
  • a fourth example provides a method according to any of the first through third examples, wherein: the candidate smartphone identifier includes a vector of parameters that define a camera calibration curve of a smartphone model of the candidate smartphone identified by the candidate smartphone identifier.
  • a fifth example provides a method according to any of the first through fourth examples, wherein: the candidate smartphone identifier includes a vector of parameters that define a camera calibration curve of the candidate smartphone identified by the candidate smartphone identifier.
  • a sixth example provides a method according to any of the first through fifth examples, wherein: the reference values of analyte strength indicate reference concentrations of an analyte; and the predicted value of analyte strength indicates a predicted concentration of the analyte.
  • a seventh example provides a method according to any of the first through fifth examples, wherein: the reference values of analyte strength indicate reference classifications of reference concentrations of an analyte; and the predicted value of analyte strength indicates a predicted classification of a candidate concentration of the analyte.
  • An eighth example provides a machine-readable medium (e.g., a non-transitory machine-readable storage medium) comprising instructions that, when executed by one or more processors of a machine, cause the machine to perform operations comprising: accessing training data that includes reference values of analyte strength with corresponding reference smartphone identifiers and corresponding reference images, each of the reference images depicting a corresponding reference test strip (e.g., a reference LFA test strip) photographed by a corresponding reference smartphone identified by a corresponding reference smartphone identifier; based on the training data, training an artificial intelligence (AI) model to output a predicted value of analyte strength based on a candidate smartphone identifier and a candidate image that depicts a candidate test strip (e.g., a candidate LFA test strip) photographed by a candidate smartphone identified by the candidate smartphone identifier; and providing the trained AI model to the candidate smartphone.
  • a ninth example provides a machine-readable medium according to the eighth example, wherein: a reference smartphone identifier among the reference smartphone identifiers includes a vector of parameters that define a camera calibration curve of a smartphone model of the reference smartphone identified by the reference smartphone identifier.
  • a tenth example provides a machine-readable medium according to the eighth example or the ninth example, wherein: a reference smartphone identifier among the reference smartphone identifiers includes a vector of embeddings derived from a one-hot vector that encodes a smartphone model of the reference smartphone identified by the reference smartphone identifier.
  • An eleventh example provides a machine-readable medium according to any of the eighth through tenth examples, wherein: the candidate smartphone identifier includes a vector of parameters that define a camera calibration curve of a smartphone model of the candidate smartphone identified by the candidate smartphone identifier.
  • a twelfth example provides a machine-readable medium according to any of the eighth through eleventh examples, wherein: the candidate smartphone identifier includes a vector of parameters that define a camera calibration curve of the candidate smartphone identified by the candidate smartphone identifier.
  • a thirteenth example provides a machine-readable medium according to any of the eighth through twelfth examples, wherein: the reference values of analyte strength indicate reference concentrations of an analyte; and the predicted value of analyte strength indicates a predicted concentration of the analyte.
  • a fourteenth example provides a machine-readable medium according to any of the eighth through twelfth examples, wherein: the reference values of analyte strength indicate reference classifications of reference concentrations of an analyte; and the predicted value of analyte strength indicates a predicted classification of a candidate concentration of the analyte.
  • a fifteenth example provides a system (e.g., a server system or other computer system) comprising: one or more processors; and a memory storing instructions that, when executed by at least one processor among the one or more processors, cause the system to perform operations comprising: accessing training data that includes reference values of analyte strength with corresponding reference smartphone identifiers and corresponding reference images, each of the reference images depicting a corresponding reference test strip (e.g., a reference LFA test strip) photographed by a corresponding reference smartphone identified by a corresponding reference smartphone identifier; based on the training data, training an artificial intelligence (AI) model to output a predicted value of analyte strength based on a candidate smartphone identifier and a candidate image that depicts a candidate test strip (e.g., a candidate LFA test strip) photographed by a candidate smartphone identified by the candidate smartphone identifier; and providing the trained AI model to the candidate smartphone.
  • a sixteenth example provides a system according to the fifteenth example, wherein: a reference smartphone identifier among the reference smartphone identifiers includes a vector of parameters that define a camera calibration curve of a smartphone model of the reference smartphone identified by the reference smartphone identifier.
  • a seventeenth example provides a system according to the fifteenth example or the sixteenth example, wherein: a reference smartphone identifier among the reference smartphone identifiers includes a vector of embeddings derived from a one-hot vector that encodes a smartphone model of the reference smartphone identified by the reference smartphone identifier.
  • An eighteenth example provides a system according to any of the fifteenth through seventeenth examples, wherein: the candidate smartphone identifier includes a vector of parameters that define a camera calibration curve of a smartphone model of the candidate smartphone identified by the candidate smartphone identifier.
  • a nineteenth example provides a system according to any of the fifteenth through eighteenth examples, wherein: the candidate smartphone identifier includes a vector of parameters that define a camera calibration curve of the candidate smartphone identified by the candidate smartphone identifier.
  • a twentieth example provides a system according to any of the fifteenth through nineteenth examples, wherein: the reference values of analyte strength indicate reference concentrations of an analyte; and the predicted value of analyte strength indicates a predicted concentration of the analyte.
  • a twenty-first example provides a method comprising: obtaining, by one or more processors of a smartphone, an artificial intelligence (AI) model trained to output a predicted value of analyte strength based on a candidate smartphone identifier and a candidate image that depicts a candidate test strip (e.g., a candidate LFA test strip) photographed by a candidate smartphone identified by the candidate smartphone identifier, the AI model being trained based on training data that includes reference values of analyte strength with corresponding reference smartphone identifiers and corresponding reference images, each of the reference images depicting a corresponding reference test strip (e.g., a reference LFA test strip) photographed by a corresponding reference smartphone identified by a corresponding reference smartphone identifier; generating, by the one or more processors of the smartphone, the predicted value of analyte strength by inputting the candidate smartphone identifier and the candidate image to the AI model, the AI model outputting the predicted value of analyte strength; and presenting, by the one or more processors of the smartphone, the predicted value of analyte strength.
  • a twenty-second example provides a machine-readable medium (e.g., a non-transitory machine-readable storage medium) comprising instructions that, when executed by one or more processors of a machine, cause the machine to perform operations comprising: obtaining an artificial intelligence (AI) model trained to output a predicted value of analyte strength based on a candidate smartphone identifier and a candidate image that depicts a candidate test strip (e.g., a candidate LFA test strip) photographed by a candidate smartphone identified by the candidate smartphone identifier, the AI model being trained based on training data that includes reference values of analyte strength with corresponding reference smartphone identifiers and corresponding reference images, each of the reference images depicting a corresponding reference test strip (e.g., a reference LFA test strip) photographed by a corresponding reference smartphone identified by a corresponding reference smartphone identifier; generating the predicted value of analyte strength by inputting the candidate smartphone identifier and the candidate image to the AI model, the AI model outputting the predicted value of analyte strength; and presenting the predicted value of analyte strength.
  • a twenty-third example provides a system (e.g., a smartphone or other computer system) comprising: one or more processors; and a memory storing instructions that, when executed by at least one processor among the one or more processors, cause the system to perform operations comprising: obtaining an artificial intelligence (AI) model trained to output a predicted value of analyte strength based on a candidate smartphone identifier and a candidate image that depicts a candidate test strip (e.g., a candidate LFA test strip) photographed by a candidate smartphone identified by the candidate smartphone identifier, the AI model being trained based on training data that includes reference values of analyte strength with corresponding reference smartphone identifiers and corresponding reference images, each of the reference images depicting a corresponding reference test strip (e.g., a reference LFA test strip) photographed by a corresponding reference smartphone identified by a corresponding reference smartphone identifier; generating the predicted value of analyte strength by inputting the candidate smartphone identifier and the candidate image to the AI model, the AI model outputting the predicted value of analyte strength; and presenting the predicted value of analyte strength.
  • a twenty-fourth example provides a carrier medium carrying machine-readable instructions for controlling a machine to carry out the operations (e.g., method operations) performed in any one of the previously described examples.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Public Health (AREA)
  • Biomedical Technology (AREA)
  • Epidemiology (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Data Mining & Analysis (AREA)
  • Business, Economics & Management (AREA)
  • General Business, Economics & Management (AREA)
  • Databases & Information Systems (AREA)
  • Pathology (AREA)
  • Radiology & Medical Imaging (AREA)
  • Nuclear Medicine, Radiotherapy & Molecular Imaging (AREA)
  • Investigating Or Analysing Biological Materials (AREA)

Abstract

One or more of the methods and systems disclosed herein can be used to facilitate use of a trained neural network to interpret, for example, lateral flow assay (LFA) test results, captured in images of LFA test cassettes, where such images are taken by various different smartphone makes, models, and individual devices, for a variety of applications.

Description

IMAGING TEST STRIPS
PRIORITY CLAIM
[0000] This application claims the priority benefit of U.S. Provisional Patent Application No. 63/239,537, filed September 1, 2021, which is incorporated herein by reference in its entirety.
TECHNICAL FIELD
[0001] The subject matter disclosed herein generally relates to the technical field of special-purpose machines that facilitate analysis of test strips, including software-configured computerized variants of such special-purpose machines and improvements to such variants, and to the technologies by which such special-purpose machines become improved compared to other special-purpose machines that facilitate analysis of test strips. Specifically, the present disclosure addresses systems and methods to facilitate neural network analysis of test strips.
BACKGROUND
Lateral Flow Assay (LFA) is a type of paper-based platform used to detect the concentration of an analyte in a liquid sample. LFA test strips are cost-effective, simple, rapid, and portable tests (e.g., contained within LFA testing devices) that have become popular in biomedicine, agriculture, food science, and environmental science, and have attracted considerable interest for their potential to provide instantaneous diagnostic results directly to patients. LFA-based tests are widely used in hospitals, physicians’ offices, and clinical laboratories for qualitative and quantitative detection of specific antigens and antibodies, as well as for products of gene amplification. LFA tests have widespread and growing applications (e.g., in pregnancy tests, malaria tests, COVID-19 antibody tests, COVID-19 antigen tests, or drug tests) and are well-suited for point-of-care (POC) applications.
BRIEF DESCRIPTION OF THE DRAWINGS
[0002] Some embodiments are illustrated by way of example and not limitation in the figures of the accompanying drawings.
[0003] FIG. 1 is an illustration of a scan card with colorimetric calibration guides (e.g., color calibration swatches) and radiometric calibration guides (e.g., calibration lines), according to some example embodiments.
[0004] FIG. 2 is a graph illustrating how different smartphone models recorded different test line strengths, according to some example embodiments.
[0005] FIGS. 3-8 are graphs illustrating per-phone calibration curves for various smartphones, as learned using a calibration dataset, according to some example embodiments.
[0006] FIGS. 9-14 are graphs illustrating examples of the vector of calibration line strength, as measured for some smartphone models, according to some example embodiments.
[0007] FIG. 15 is a scatter plot of the test line strength at a certain antigen concentration versus calibration line strength at a certain calibration line index, according to some example embodiments.
[0008] FIG. 16 is a bar graph illustrating a distribution of line strength measurements across different smartphone models, according to some example embodiments.
[0009] FIG. 17 is a spatial graph that illustrates example two-dimensional (2D) data points indicating glare versus non-glare, according to some example embodiments.
[0010] FIGS. 18 and 19 are bar graphs illustrating reductions in the likelihood of encountering glare, according to some example embodiments.
[0011] FIG. 20 is a schematic diagram illustrating design concepts that facilitate achieving high quality imaging of an LFA test strip, according to some example embodiments.
[0012] FIGS. 21-25 and FIGS. 26-31 are sets of dimensioned views of the lightbox, according to some example embodiments.
[0013] FIG. 32 is a block diagram illustrating components of a machine (e.g., a computer system, such as a smartphone), according to some example embodiments, able to read instructions from a machine-readable medium and perform any one or more of the methodologies discussed herein.
[0014] FIG. 33 is a flowchart illustrating operations in a method of imaging an LFA test kit, according to some example embodiments.
DETAILED DESCRIPTION
[0015] Example methods (e.g., algorithms) facilitate analysis of test strips (e.g., an LFA test strip within an LFA test kit or other LFA test device), including analysis of a test strip by one or more neural networks, and example systems (e.g., special-purpose machines configured by special-purpose software) are configured to facilitate such analysis of test strips. Examples merely typify possible variations. Unless explicitly stated otherwise, structures (e.g., structural components, such as modules) are optional and may be combined or subdivided, and operations (e.g., in a procedure, algorithm, or other function) may vary in sequence or be combined or subdivided. In the following description, for purposes of explanation, numerous specific details are set forth to provide a thorough understanding of various example embodiments. It will be evident to one skilled in the art, however, that the present subject matter may be practiced without these specific details.
[0016] LFA test strips usually have a designated control line region and a test line region. Typically, results can be interpreted within 5-30 minutes after putting a sample within the designated sample well of the LFA test device (e.g., an LFA test kit). The LFA test device (e.g., LFA test kit) may take the example form of an LFA test cassette, and the LFA test device typically has at least one sample well for receiving the sample to be applied to an LFA test strip inside the LFA test device. The results can be read by a trained healthcare practitioner (HCP) in a qualitative manner, such as by visually determining the presence or absence of a test result line appearing on the LFA test strip.
[0017] However, qualitative assessment by a human HCP may be subjective and error prone, particularly for faint lines that are difficult to visually identify. Instead, quantitative assessment of line presence or absence, such as by measuring line intensity or another indicator of line strength, may be more desirable for accurate reading of faint test result lines. Fully or partially quantitative approaches directly quantify the intensity or strength of the test result line or can potentially determine the concentration of the analyte in the sample based on the quantified intensity or other quantified strength of the test result line. Dedicated hardware devices to acquire images of LFA test strips and image processing software to perform colorimetric analysis to determine line strength often rely upon control of dedicated illumination, blockage of external lighting, and expensive equipment and software to function properly. More flexible and less expensive approaches may be beneficial.
[0018] The methods and systems (e.g., computer systems, such as smartphones or other mobile devices) discussed herein implement one or more technologies for smartphone-based LFA reading and analysis consistently and reliably across multiple types (e.g., makes, models, or both) of smartphone. These technologies may be used individually or in any suitable combination, and include:
[0019] (1) smartphone camera calibration using a specially designed scan card with calibration color swatches and calibration lines printed on the scan card, such that: a. the calibration color swatches help normalize for color variations across different smartphone models, and b. the calibration lines help normalize for different response curves of different cameras in the smartphones;
[0020] (2) one or more methods of learning a smartphone-to-smartphone relationship such that line strength measurements from one smartphone can be derived using line strength measurements from another smartphone, which may be useful in setting smartphone-independent (e.g., smartphone-agnostic) thresholds for quality control (QC) processes, where such methods may: a. utilize one or more lightboxes designed for QC purposes, or b. reduce variability in LFA images by using such one or more lightboxes, or both;
[0021] (3) one or more methods of reading and analyzing LFA test strips in a smartphone-independent manner by using per-device (e.g., per-smartphone) calibration, which may facilitate whitelisting smartphone models after performing in-house experimentation, such as experiments with a lightbox;
[0022] (4) one or more methods of reading and analyzing LFA test strips in a smartphone-independent manner by using printed lines on a scan card, using a color calibration matrix of swatches for white balancing, or both, which may facilitate whitelisting smartphone models or individual smartphones (e.g., as individual devices) by end users (e.g., smartphone customers);
[0023] (5) one or more methods of providing semi-quantitative or quantitative readings (e.g., readouts) of LFA test strips (e.g., as exhibiting weak, mild, or strong lines), in contrast to purely qualitative readouts (e.g., positive or negative), which may be beneficial for types of tests where line strength, analyte concentration, or both, have diagnostic value;
[0024] (6) one or more methods of providing smartphone-independent semi-quantitative readings of LFA test strips, such as by: a. training one or more neural networks using unique smartphone specific IDs (e.g., learned embeddings), or b. using the printed lines on a scan card (e.g., measured embeddings);
[0025] (7) one or more methods of improving the sensitivity of LFA test strip readings by combining multiple images captured over time from a single LFA test strip, which can improve the sensitivity of such readings without compromising on specificity;
[0026] (8) one or more algorithms (e.g., guardrail algorithms or other image processing algorithms) for improving the sensitivity and robustness of LFA test strip readings, including, for example: a. glare avoidance by restricting the angle of imaging to angles beyond +/- 5 degrees of head-on image capture to minimize the impact of glare (e.g., in combination with a glare guardrail), b. a glare guardrail using edge density to reject images with high glare, c. a blur guardrail to reject images of LFA test strips where such images are blurred due to focus blur, motion blur, or both, during image capture, and d. a Control Line Insufficient Fluid (CLIF) detector.
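By way of illustration, the blur guardrail (item (8)c above) and the glare guardrail (item (8)b) could be sketched with OpenCV as follows. The thresholds are assumed placeholders, variance of the Laplacian is one common sharpness proxy, and Canny edges are one plausible realization of the edge-density measure named above:

```python
import cv2

def blur_guardrail(image_bgr, min_sharpness=100.0):
    """Reject captures blurred by focus or motion: low variance of the
    Laplacian indicates a soft image (threshold is illustrative)."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    return cv2.Laplacian(gray, cv2.CV_64F).var() >= min_sharpness

def glare_guardrail(image_bgr, max_edge_density=0.15):
    """Reject captures with heavy glare: glare tends to produce dense edges
    around saturated regions, so cap the fraction of edge pixels."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(gray, 100, 200)
    return (edges > 0).mean() <= max_edge_density
```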
[0027] By virtue of one or more of these technologies, implemented individually or in combination, any one or more of the methods and systems disclosed herein can be used to facilitate use of a trained neural network to interpret LFA test results, captured in images of LFA test cassettes, where such images are taken by various different smartphone makes, models, and individual devices, for a variety of applications, such as malaria tests, COVID-19 antibody tests, COVID-19 antigen tests, cancer tests, and the like, and can be adapted to work with any number of different makes, models, or other types of LFA test devices (e.g., various LFA test cassettes) that house LFA test strips.
Scan Card for Smartphone-Independent LFA Imaging
[0028] FIG. 1 is an illustration of a scan card with colorimetric calibration guides (e.g., color calibration swatches, shown as areas with various stippling and cross-hatching) and radiometric calibration guides (e.g., calibration lines), according to some example embodiments.
[0029] The scan card implements a new design that includes color swatches specifically configured (e.g., sized, positioned, colored, or any suitable combination thereof) to aid in light and color normalization of a captured image of an LFA test strip across different smartphone models, for example, to perform on-the-fly light and color normalization to ensure that the normalized image has a calibrated or otherwise standardized color distribution, histogram, or both, regardless and independent of the smartphone model used for imaging the LFA test strip or the lighting conditions available while imaging the LFA test strip.
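One plausible realization of such swatch-based normalization (a sketch, not the disclosed algorithm) is to fit a linear color correction that maps the swatch colors observed in a captured image to their known reference values, then apply that correction to the whole image:

```python
import numpy as np

def fit_color_correction(observed_rgb, reference_rgb):
    """Least-squares 3x3 color matrix (plus offset) mapping observed swatch
    colors to their known reference values.  Rows are swatches."""
    A = np.hstack([observed_rgb, np.ones((observed_rgb.shape[0], 1))])
    M, *_ = np.linalg.lstsq(A, reference_rgb, rcond=None)
    return M                                   # shape (4, 3)

def apply_color_correction(image_rgb, M):
    """Normalize an HxWx3 image with the fitted correction."""
    h, w, _ = image_rgb.shape
    flat = image_rgb.reshape(-1, 3).astype(float)
    flat = np.hstack([flat, np.ones((flat.shape[0], 1))]) @ M
    return np.clip(flat, 0, 255).reshape(h, w, 3)
```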
[0030] Additionally, the scan card includes calibration lines printed on the side of the scan card. These calibration lines are specifically configured (e.g., sized, positioned, colored, or any suitable combination thereof) to depict the range of line strength usually seen in an LFA test strip’s test line or its control line and aid in performing a per-device radiometric calibration of the LFA test strip reader. The radiometric calibration may be helpful where different smartphone models (e.g., different smartphone camera models) may map the intensities of test lines and control lines differently onto the red-green-blue (RGB) color space.
[0031] As shown in FIG. 1, a set of eight color swatches is printed near the top of the scan card to help in light and color normalization. These swatches are configured (e.g., selected in terms of color, brightness, size, or any combination thereof) to span the range of the RGB color space and to print reliably on the paper substrate of the scan card.
[0032] In addition, as shown in FIG. 1, a set of eight calibration lines (e.g., color lines, indicated with different types of stippling or cross-hatching) is printed in the right side region of the scan card, the left side region of the scan card, or both. The lines in each set of lines have varying line strength that spans the range of line strength usually seen in an LFA test strip. These printed lines aid in radiometric calibration per smartphone. Black separator lines are printed in between the calibration lines to facilitate localization of the faint lines within the image. The sets of lines may be printed both on the left side region and the right side region, for example, to provide redundancy for avoiding problems associated with occlusion, glare (e.g., from a flash or other light source) falling on one or the other sets of lines, or other problematic lighting condition.
[0033] Furthermore, as shown in FIG. 1, one or more quick response (QR) codes may appear on the scan card, for example, to facilitate alignment of the scan card, and thus facilitate the above-described normalizations for colorimetry (e.g., white balance or other color correction), brightness (e.g., line strength), or both.
[0034] In some example embodiments, a set of color swatches with the same colors used in the above-described set of calibration lines is also printed near the bottom of the scan card, for example, to aid in any QC processes that require using a spectrophotometer to ensure that the correct colors have been printed onto the scan card. Such swatches may be configured to be at least a predetermined minimum size, for example, to facilitate proper checking by a spectrophotometer.
Smartphone-Independent LFA Test Strip Reader
[0035] One or more of the following methods may help to make an LFA test strip reader more independent of smartphone makes, models, or individual devices. Such methods include one or more methods to provide semi-quantitative results of LFA test strip readings (e.g., selected from several available levels of extent, such as a negative value, a weak or mild positive value, and a strong positive value) or fully quantitative results of LFA test strip readings (e.g., a floating point value that numerically encodes or otherwise represents line strength or concentration), in a smartphone-agnostic manner. Such methods may also provide qualitative (e.g., positive vs. negative) results of LFA test strip readings, and accordingly, one or more of the methods disclosed herein may be equally applicable to any qualitative LFA test strip reader and may improve its performance across different smartphone models.
[0036] FIG. 2 is a graph illustrating how different smartphone models recorded different test line strengths for a set of twenty test cassettes at the limit of detection (LOD) and at 2xLOD concentration of a heat-inactivated virus. As shown in FIG. 2, a Samsung S8 phone would record a line strength of 0.06 AU, whereas an iPhone 11 Pro Max would record a line strength of 0.18 AU, representing a 3x increase in the measured test line strength for the same test cassettes when imaged from different smartphone models. According to the systems and methods discussed herein, any one or more of several online and offline approaches to perform smartphone calibration may be useful in obtaining smartphone-agnostic results in line strength measurement from widely varying smartphone models.
Approach 1: offline calibration per-device (e.g., per-phone)
[0037] This approach allows one to learn a calibration curve for a specific individual device (e.g., a specific individual smartphone). The resulting smartphone-specific calibration curve can then be used to map the test line strength measurement from that smartphone to an N-protein concentration estimate that is smartphone-agnostic. Alternatively, or additionally, the smartphone-specific calibration curve can be used to map the test line strength measured from that smartphone to a reference test line strength, as if measured using a reference smartphone.
[0038] Such a per-phone calibration may be performed by learning a functional relationship between the concentration of N-protein / heat-inactivated virus and the test line strength that is measured by that smartphone in the operating range of the LFA test strip. The exact range (e.g., span) of concentration may vary, for example, based on the application, the test cassette type, the test cassette lot, the N-protein or heat-inactivated virus used, or any suitable combination thereof. Accordingly, the range can be determined (e.g., decided or selected) based on the specific type of calibration desired (e.g., calibration within a linear range only or a full-range calibration).
[0039] As a first step to learn the per-phone calibration, collect a calibration dataset using different smartphone models to image one or more reference LFA test cassettes inside a lightbox. The lightbox may be designed or otherwise configured to ensure that the imaging of the LFA test cassettes can happen under a constant ambient lighting condition, imaging angle, and imaging distance, thus minimizing the impact of these covariates in the measurement process. Once a multi-smartphone reference dataset for calibration is collected, each image of the LFA test cassette is analyzed to obtain a measure of line strength for the test line. For example, the following equation may be used:
CV Line Strength = (Intensity of background strip pixels - Intensity of the peak of the test line pixel) / Intensity of background strip pixels (Eq. 1)
Then, a relationship can be learned between the concentration of N-protein and the line strength measured by each of the smartphones being calibrated.
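By way of illustration only, the following Python sketch shows one plausible way to compute Eq. 1 from a 1D intensity profile of the imaged strip; the function name, the profile representation, and the selection of background pixels are assumptions made for this sketch rather than details taken from the description above.

```python
import numpy as np

def cv_line_strength(profile: np.ndarray, line_start: int, line_end: int) -> float:
    """Compute CV Line Strength per Eq. 1.

    `profile` is a 1D array of mean pixel intensities per row along the
    strip; `line_start`/`line_end` bound the rows expected to contain the
    test line (both are hypothetical inputs for illustration).
    """
    # Background intensity: strip pixels outside the expected test line region.
    background = np.concatenate([profile[:line_start], profile[line_end:]])
    background_intensity = float(background.mean())
    # Peak of the test line: the darkest point within the line region.
    line_peak_intensity = float(profile[line_start:line_end].min())
    return (background_intensity - line_peak_intensity) / background_intensity
```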
[0040] FIGS. 3-8 are graphs illustrating per-phone calibration curves for various smartphones, as learned using a calibration dataset. Each calibration curve can be used to make phone-agnostic predictions. The Y-axis shows per-phone measured test line strength, and the X-axis shows the concentration of N-protein used.
[0041] Once a relationship is established between the concentration and the line strength measurement, e.g., as shown per-phone in FIGS. 3-8, these calibration curves can be used to obtain a phone-agnostic concentration measurement, given the image and the known phone model. Alternatively, or additionally, the line strength measured by a specific smartphone (e.g., a Samsung S8) can be used to predict the equivalent line strength that would be measured using a reference smartphone (e.g., an iPhone 11 Pro Max), thus making the line strength measurement from the specific smartphone phone-agnostic, as its line strength can be mapped to an equivalent reference smartphone's line strength value.
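As a non-authoritative sketch of such a per-phone calibration, the following Python code fits an assumed four-parameter logistic curve (the description above does not specify a functional form) to hypothetical lightbox measurements and numerically inverts it to map a measured line strength to a phone-agnostic concentration. All numeric values are invented placeholders.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic4(conc, a, b, c, d):
    """Four-parameter logistic dose-response curve (an assumed functional form)."""
    return d + (a - d) / (1.0 + (conc / c) ** b)

# Hypothetical lightbox measurements for one phone model (placeholders only).
concs = np.array([0.01, 0.02, 0.04, 0.08, 0.16, 0.32])        # ng/ml
strengths = np.array([0.005, 0.015, 0.03, 0.06, 0.10, 0.14])  # AU

params, _ = curve_fit(logistic4, concs, strengths,
                      p0=[0.0, 1.0, 0.08, 0.15], maxfev=10000)

def strength_to_concentration(strength: float) -> float:
    """Numerically invert the fitted per-phone curve to obtain a
    phone-agnostic concentration estimate."""
    grid = np.linspace(concs.min(), concs.max(), 10001)
    return float(grid[np.argmin(np.abs(logistic4(grid, *params) - strength))])
```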
[0042] The above-described per-phone calibration methodology can also be used for one or more of several downstream applications, such as:
(1) performing smartphone-agnostic qualitative decision-making based on smartphone-agnostic line strength measurement of an LFA test strip (e.g., as a feature in any decision function);
(2) performing smartphone-agnostic semi-quantitative or quantitative prediction by using the smartphone-agnostic concentration prediction or smartphone-agnostic line strength prediction as features; and
(3) determining a smartphone-agnostic threshold for line strength that can be used for QC in the manufacturing of LFA test cassettes, for example, where a single smartphone model may perform QC in a manner that results in all the tested smartphones being compatible with the manufactured lot of test cassettes.
Approach 2: online calibration per-device (e.g., per-phone)
[0043] One potential downside of the offline per-phone calibration methodology discussed above is that support for any new smartphone model generally involves collection of experimental data using that new smartphone model. This approach may encounter operational challenges, as new smartphones may be released frequently into the market and may thus entail the generation of a new calibration curve for each new smartphone (e.g., as part of a benchtop study). In such situations, it may be more helpful to set up an online calibration mechanism that allows a new smartphone to be calibrated by an end user (e.g., at the customer end) without performing controlled lightbox experiments of the type discussed in Approach 1 above. Such an online, user-focused approach is discussed here.

[0044] For performing online calibration, the smartphone to be calibrated captures an image of the scan card discussed above with respect to FIG. 1. In particular, the captured image depicts at least one set of the calibration lines printed on the side regions of the scan card. The image that depicts these calibration lines can be used to extract calibration line strength using one or more computer vision techniques. Calibration line strength can be measured in any of various ways. For example, one way would be to define calibration line strength in a manner similar to the CV Line Strength discussed above:
Calibration Line Strength = (Intensity of the background paper - Intensity of the peak of any of the printed lines) / Intensity of the background paper (Eq. 2)
[0045] Each of the printed calibration lines on the scan card may have a different calibration line strength, and the vector of the measured calibration line strengths corresponding to the set of printed lines can be treated as the signature of the smartphone, representing (e.g., reflecting) how the smartphone’s camera and image processing pipeline maps the printed colors on the scan card to the RGB color space. FIGS. 9-14 are graphs illustrating examples of the vector of calibration line strength, as measured for some smartphone models. That is, FIGS. 9-14 illustrate examples of per-phone calibration vectors obtained by extracting calibration line strength from images of calibration lines printed on the scan card. Each of these calibration vectors can be used to learn a per-phone online radiometric calibration for a specific smartphone model, for example, by fitting a relationship between the calibration line index and the calibration line strength, thus allowing online calibration of any new smartphone (e.g., even an unseen new smartphone).
[0046] One or more of various analysis techniques may be used in such online calibration to make smartphone-based reading and analysis of LFA test strips more smartphone-independent (e.g., smartphone-agnostic). Some examples of such techniques include the following (a sketch of techniques (1) and (3) appears after this list):
(1) fitting a function that maps calibration line index to strength, and then feeding the measured test line strength into the inverse of this function to obtain a phone-agnostic feature;
(2) making a kernel that compares test line strength to calibration line strength, and then using the kernel to get a weighted sum of the indices;
(3) for each calibration line, taking a sigmoid of test line strength minus calibration line strength, and then adding up the sigmoids to obtain a smartphone-agnostic prediction feature;
(4) directly feeding the vector of calibration line strength into a neural network that is then trained to perform smartphone-agnostic qualitative or quantitative LFA test strip readings, analyses, predictions, or any suitable combination thereof, for example, where the neural network learns a non-linear relationship between calibration lines and the test and control lines; and
(5) feeding the calibration lines into a neural network alongside a crop of an image of the LFA test strip, for example:
a. by feeding in the colors of the calibration lines, the parameters of a learned color-correction function, or a learned index-to-color function; or
b. by feeding in an image of the calibration lines, and then causing the neural network to learn the exact function to be learned based on the input image.
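By way of illustration, the following Python sketch implements techniques (1) and (3) from the list above under simplifying assumptions (a monotonically increasing calibration strength vector, and an arbitrary sigmoid temperature); the function names are hypothetical.

```python
import numpy as np

def inverse_index_feature(test_line_strength: float,
                          calib_strengths: np.ndarray) -> float:
    """Technique (1): invert the calibration-line-index -> strength mapping.

    `calib_strengths[i]` is this phone's measured strength for the
    calibration line at index i, assumed monotonically increasing; the
    returned 'effective index' is intended to be phone-agnostic.
    """
    indices = np.arange(len(calib_strengths), dtype=float)
    # np.interp performs the inverse lookup when strengths are sorted.
    return float(np.interp(test_line_strength, calib_strengths, indices))

def sigmoid_sum_feature(test_line_strength: float,
                        calib_strengths: np.ndarray,
                        temperature: float = 0.01) -> float:
    """Technique (3): sum sigmoids of (test strength - calibration strength).

    `temperature` controls the sharpness of each sigmoid and is an
    illustrative assumption, not a value from the description above.
    """
    z = (test_line_strength - np.asarray(calib_strengths, dtype=float)) / temperature
    return float(np.sum(1.0 / (1.0 + np.exp(-z))))
```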
[0047] As noted above, printed calibration lines on a scan card may be imaged and processed to obtain calibration line strength, which can then be used to perform online per-phone calibration. As part of the design of the scan card, the ranges of colors, line strengths, swatch sizes, line dimensions, or any suitable combination thereof, may be chosen (e.g., selected) to be similar to those of the test line, the control line, or both, for a particular LFA test strip. This may have the effect of ensuring that the calibration curve learned using the printed calibration lines can be applied to the particular LFA test strip’s test line, control line, or both.
[0048] FIG. 15 is a scatter plot of the test line strength at a certain antigen concentration versus calibration line strength at a certain calibration line index, which shows, across several smartphone models (e.g., from both the iPhone family and the Android family), that, as the measured test line strength increases, so does the calibration line strength, verifying the above-proposed approach for online calibration. In particular, FIG. 15 plots test line strength at a 0.08 ng/ml antigen concentration versus calibration line strength at line index 2 and clearly indicates two clusters of smartphone models, as well as a linear relationship between test line strength and calibration line strength. Based on the scatter plot shown in FIG. 15, the represented smartphones may be clustered into two categories: (i) low-end smartphones for reading LFA test strips, and (ii) high-end smartphones for reading LFA test strips. As evidenced by the data in FIG. 15, a smartphone can be categorized as low-end or high-end solely based on its measurement of calibration line strength.
[0049] Furthermore, using the above-discussed online calibration, a system or method may be configured (e.g., set) to have a lower limit for the calibration line strength at a certain line index, which would allow that system or method to automatically reject an unseen candidate smartphone (e.g., proposed to be used as an LFA test strip reader). Such selecting or rejecting of smartphones may be directly correlated to their actual performance in reading test lines, control lines, or both, and may provide better results compared to relying solely on camera parameters (e.g., sensor resolution, bit-depth, read-out noise, or any suitable combination thereof). Moreover, such selecting or rejecting of smartphones may also take into account any modification, corruption, or enhancement of image data performed by the image processing pipeline present in different operating systems of different smartphone vendors. This may have the effect of ensuring that the criteria for selection or rejection of smartphones account for both hardware (e.g., camera) and software (e.g., image processing pipeline) in the automatic decision-making process.
Calibration line extraction from images of scan card
[0050] According to various example embodiments, one or both of the sets (e.g., palettes) of calibration lines on the scan card includes a series of lines that have different colors on each side (e.g., between the top and bottom QR codes). These lines may be separated by black separator lines (e.g., black separator bars), which may make them easier to locate using one or more computer vision techniques.
[0051] As one example of how to extract a set of calibration lines (e.g., a palette of calibration lines), a suitably configured system or method detects the QR codes and their corners. To extract a line palette for one side of the scan card, the system or method takes the corners of the top and bottom QR codes and uses these corners in a homography transform to find a search space around the line palette. The system or method locates the black separator bars by applying a threshold to the grayscale values to get a mask. To handle small dot patterns that may have resulted from the printing of the scan card, the suitably configured system or method applies a box blur before computing the mask. Once the mask is computed, the system or method computes the connected components and discards any component whose area is not within a certain range, relative to the search area. If the system or method does not detect the expected number of black separator bars (e.g., 8 bars), the system or method may flag the line palette, the scan card, or both, as unextractable. Otherwise, the system or method sorts the black bar masks by projecting their centers onto the difference vector between the centers of the QR codes. Then, the system or method takes pairs of adjacent black bar masks and fits rectangles (e.g., encompassing rectangles) around them, for example, by performing a tight rotated rectangle fit. The system or method may shrink the width and height of each rectangle to obtain a rectangle that contains most of the pixels of the calibration line between the black bars. The system or method may also extract background pixels to the left, the right, or both, of the calibration lines by further manipulating the rectangle.
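A minimal Python/OpenCV sketch of the separator-bar localization described above follows; the threshold, blur size, and area bounds are illustrative assumptions, and later steps such as sorting the bars and fitting rotated rectangles are omitted.

```python
import cv2
import numpy as np

def find_separator_bars(search_region_gray: np.ndarray,
                        dark_thresh: int = 80,
                        min_area_frac: float = 0.005,
                        max_area_frac: float = 0.05):
    """Locate candidate black separator bars in a grayscale crop around the
    line palette. All numeric values are illustrative assumptions."""
    # Box blur suppresses printing dot patterns before thresholding.
    blurred = cv2.blur(search_region_gray, (5, 5))
    # Dark pixels become foreground in the mask.
    mask = (blurred < dark_thresh).astype(np.uint8)
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(mask)
    total_area = mask.shape[0] * mask.shape[1]
    bars = []
    for i in range(1, num):  # label 0 is the background component
        frac = stats[i, cv2.CC_STAT_AREA] / total_area
        # Keep only components whose area is plausible for a separator bar.
        if min_area_frac <= frac <= max_area_frac:
            bars.append((tuple(centroids[i]), stats[i]))
    return bars
```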
[0052] In some situations, the calibration lines may be fully or partially obscured, corrupted, or otherwise degraded by glare. Usually, such glare only impacts one side of the scan card, though in certain scenarios, glare may impact both sides of the scan card. In other scenarios, glare adversely impacts one side (e.g., the left side) of the scan card, but the set of calibration lines printed on the other side (e.g., the right side) of the scan card is unextractable for other reasons. Accordingly, a suitably configured system or method selects the calibration lines from the side that has the least amount of glare, for example, to minimize corruption of the measured calibration line strength. To ensure rejection of line palettes adversely affected by glare, or to select the palette least affected by glare when both sides have extractable calibration lines, a suitably configured system or method may use the color of the black separation lines (e.g., black separation bars) to quantify glare. Although masks for the black separation bars may have already been computed, these masks by definition only include dark pixels. To obtain a more reliable mask, the system or method may isolate the intersections of adjacent rectangles, optionally with a slight reduction of width and height in the rectangles to avoid accidentally including any background pixels. The system or method then takes the average color of each black region, normalizes the average color based on the average color of all background regions, and converts the normalized average color to grayscale with equal RGB channel weights. The system or method then takes the maximum grayscale value of the black regions and uses that maximum grayscale value as a glare score. The system or method compares this glare score to a predefined threshold score and decides whether a line palette is acceptable. If both line palettes are acceptable, the system or method may then choose the line palette that has the lower glare score.
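The following Python sketch illustrates the glare-score computation described above, assuming the black regions and background regions have already been cropped from the image; the function name and data layout are assumptions.

```python
import numpy as np

def palette_glare_score(black_regions, background_regions) -> float:
    """Score glare for one line palette from its black separator regions.

    Each element of both lists is an HxWx3 RGB crop. Glare brightens the
    black regions, so a higher score indicates more glare.
    """
    # Average background color across all background regions.
    bg_mean = np.mean(
        [r.reshape(-1, 3).mean(axis=0) for r in background_regions], axis=0)
    grays = []
    for region in black_regions:
        mean_rgb = region.reshape(-1, 3).mean(axis=0)
        normalized = mean_rgb / bg_mean      # normalize out overall lighting
        grays.append(normalized.mean())      # grayscale with equal RGB weights
    return float(max(grays))                 # max over black regions = glare score

# Usage sketch: reject a palette whose score exceeds a predefined threshold,
# or pick the palette with the lower score when both sides are extractable.
```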
[0053] In some alternative example embodiments, a suitably configured system or method uses a detection algorithm, such as Faster-RCNN, SSD, CenterNet, DETR, or the like, to detect the line palette, the black separator bars, or both. The black separator bars may be easier to detect, as some of the lines in the palette may be extremely faint. In further alternative example embodiments, a suitably configured system or method uses a segmentation net, such as UNet, to segment the black separator bars, the calibration lines, or both. In still further alternative example embodiments, the system or method trains a neural network to regress the corners of the calibration lines, the corners of the black bars, or both. In yet further alternative example embodiments, the system or method uses template matching to detect the black separator bars, the calibration lines, or both. In even further example embodiments, the system or method computes a one-dimensional (1D) color profile along a line from one QR code to another QR code, and the peaks and troughs are used to locate the calibration lines, the black separator bars, or both. In still additional example embodiments, the system or method uses a homography transform to directly locate the palette of calibration lines based on locations of the corners of the QR codes, although such example embodiments may be less robust to bending of the scan card.

[0054] According to some example embodiments, a suitably configured system or method implements the edge density algorithm described below (for detecting glare in images of LFA test strips) as an alternative way to compute or otherwise generate a glare score for each black separator bar. In such example embodiments, the maximum glare score may be the glare score of the entire palette of calibration lines. In certain example embodiments, the system or method trains a neural network to classify glare.
Smartphone-Independent Semi-Quantitative or Quantitative Analysis
[0055] To make a smartphone-based reader of LFA test strips more phone-agnostic, it is beneficial to account for device-to-device (e.g., phone-to-phone) differences in the acquired images of LFA test strips, for example, due to variations in white-balancing, gamma-correction, resolution, noise, focus, or any suitable combination thereof, in different models of smartphones. The impact of phone-to-phone differences may be significantly more apparent for quantitative or semi-quantitative analysis and results, compared to qualitative analysis and results. A goal of quantitative analysis of an image of an LFA test strip may be to predict the concentration of analyte, whereas for semi-quantitative analysis, a goal may be to classify the strength of the analyte (e.g., the SARS-CoV-2 virus), given an image of the LFA test strip. In many situations, relying on whatever line strength value was recorded by different smartphones may not be an effective methodology, for example, due to the presence of significant differences in imaging by various smartphone models.
[0056] FIG. 16 is a bar graph illustrating a distribution of line strength measurements across different smartphone models, in which each row represents a smartphone model and each colored box-plot within that row corresponds to one of four specified (e.g., desired) levels of an LFA test strip reader, namely: (i) negative (in red), (ii) weak positive (in green), (iii) mild positive (in blue), and (iv) strong positive (in purple). As shown in FIG. 16, although the line strength measurements for a specific smartphone model are separable for each of these semi-quantitative levels, the line strength measurements across different smartphone models are not separable. Accordingly, suitably configured systems and methods perform semi-quantitative or quantitative reading and analysis of images depicting LFA test strips in a manner that can be generalized across several different makes and models of smartphones.
[0057] To perform quantitative or semi-quantitative reading of images of LFA test strips in a smartphone-independent manner, a suitably configured system or method may use any trainable (e.g., learnable) artificial intelligence (AI) model, such as a neural network, that takes as input the image of the LFA test strip and an identifying (e.g., uniquely identifying among a set of smartphones) vector that represents the specific smartphone (e.g., a smartphone ID). Having the smartphone ID as input would be helpful for any AI model to account for device-to-device (e.g., phone-to-phone) differences in measurement of line strength (e.g., as highlighted in FIG. 16). For example, a neural network may be trained to perform one or more regression or classification tasks to directly predict the semi-quantitative or quantitative output corresponding to the input image of the LFA test strip. Such an approach may be considered an early fusion approach, in which the phone ID vector is directly fed as an input to a trainable AI model. There are several ways to obtain a smartphone ID vector. One way is to use the parameters of the calibration curve obtained as part of an offline calibration (e.g., as described above) of each smartphone as an input to the neural network. Another way is to directly use the calibration line strength vector obtained as part of an online calibration (e.g., as described above) as a smartphone ID vector. The online approach may provide benefits in being able to generalize to new or otherwise unseen smartphone models that are not available during the training phase of the neural network. Another way to obtain a smartphone ID vector is to use a one-hot encoding vector and stack an embedding layer of the neural network as the first stage to process the one-hot smartphone ID vector to obtain smartphone embeddings. Such an approach would be able to learn any non-linear dependency between smartphone IDs and the way test lines or control lines appear in captured images of LFA test strips. In various example embodiments, a suitably configured system or method combines any two or more of these approaches to obtain a smartphone ID vector, while training a neural network.
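A minimal PyTorch sketch of such an early fusion model follows, using a smartphone ID passed through an embedding layer; the backbone, layer sizes, and four-level output are illustrative assumptions, not the specific architecture described above.

```python
import torch
import torch.nn as nn

class EarlyFusionReader(nn.Module):
    """Early fusion: a smartphone ID is embedded and fed to the model
    alongside the LFA test strip image. Layer sizes are illustrative."""

    def __init__(self, num_phone_models: int, num_classes: int = 4):
        super().__init__()
        self.phone_embedding = nn.Embedding(num_phone_models, 16)
        self.backbone = nn.Sequential(  # toy CNN backbone for the strip crop
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Sequential(
            nn.Linear(32 + 16, 64), nn.ReLU(),
            nn.Linear(64, num_classes),  # e.g., negative/weak/mild/strong
        )

    def forward(self, strip_image: torch.Tensor, phone_id: torch.Tensor):
        img_feat = self.backbone(strip_image)      # (B, 32)
        id_feat = self.phone_embedding(phone_id)   # (B, 16)
        return self.head(torch.cat([img_feat, id_feat], dim=1))
```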
[0058] In certain example embodiments, a suitably configured system or method uses a backbone neural network model (e.g., with a backbone architecture) that is common across all smartphone models to process an image of an LFA test strip. The output of the backbone model may be a multidimensional vector representing the test strip image. The system or method may add (e.g., concatenate) the smartphone ID vector of any type to the output vector of the backbone model, and then train a separate classification or regression model to combine both the image-level features and the smartphone ID to obtain a phone-agnostic semi-quantitative or quantitative readout (e.g., prediction) from the image of the LFA test strip. Such an approach may be considered a late fusion approach. In various variants of this late fusion approach, the backbone neural network may be or include a full-fledged neural network designed to predict the test line strength, control line strength, line locations, or any suitable combination thereof. In some variants, the system or method then stacks a second-stage classification or regression model to directly operate on the line strength prediction, the line presence logits, the smartphone ID vector, or any suitable combination thereof, for example, to perform a quantitative or semi-quantitative prediction of readout results from an image of an LFA test strip. One or more other suitable variants of neural network architecture (e.g., combining convolutional neural networks, embedding layers, attention layers, fully connected layers, and the like) may be employed to obtain a neural network architecture that optimally combines the smartphone ID information with the image of the LFA test strip to obtain a phone-agnostic model.
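For comparison, a minimal PyTorch sketch of the late fusion second stage follows, assuming the backbone image features have already been computed; dimensions and layer sizes are illustrative.

```python
import torch
import torch.nn as nn

class LateFusionHead(nn.Module):
    """Late fusion: a second-stage classifier combines the backbone's image
    features with any smartphone ID vector (e.g., the online calibration
    line strength vector). Dimensions are illustrative."""

    def __init__(self, backbone_dim: int, id_dim: int, num_classes: int = 4):
        super().__init__()
        self.classifier = nn.Sequential(
            nn.Linear(backbone_dim + id_dim, 32), nn.ReLU(),
            nn.Linear(32, num_classes),
        )

    def forward(self, image_features: torch.Tensor,
                smartphone_id_vector: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([image_features, smartphone_id_vector], dim=1)
        return self.classifier(fused)
```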
Reducing Image Corruption in Different Smartphone Models
[0059] Another aspect of making LFA test strip readers work on a large variety of smartphone models is handling various image quality issues present in different smartphone models. Apart from varying radiometric and colorimetric parameters, smartphones generally also vary in terms of their placement of photographic flash (e.g., from the smartphone’s flashlight) with respect to camera position, the way their hardware focuses the image of the target LFA test strip, and issues that may arise due to noisy camera sensors. According to various example embodiments, a suitably configured system or method as described herein solves one or more challenges that one or more specific models of smartphone may face due to their individual specific designs, and such solutions may involve one or more special algorithms to deal with poor-quality images, one or more special procedures to capture a good-quality image in spite of limiting hardware, or any suitable combination thereof.
[0060] In some situations, flash and camera placement for some smartphone models is such that there is a high probability that a captured image of an LFA test strip may be glary, for example, due to the flash or any other bright light source in the vicinity. Glare generally happens when the light falling on the LFA test strip reflects off the wet surface of the test strip and is recorded by the camera, such as where surface reflection is greater than body reflection. In such situations, there is a chance that a faint test line or a faint control line may be occluded or overpowered by the glare and thus become unusable (e.g., invisible or otherwise unable to be accurately detected in strength, color, or size) in the image of the LFA test strip. This glare may have the effect of reducing analysis sensitivity, which may result in false negative predictions.
[0061] Furthermore, some camera hardware in some models of smartphone may face difficulty in focusing on an LFA test strip (e.g., within an LFA test cassette) that is imaged at a close distance. In such situations, a blurry image of the test cassette may result, due to a phenomenon known as focus-blur. Blur may also happen if, during the image capture process, the end user accidentally moves the smartphone, resulting in what is known as motion-blur. A blurry image of an LFA test strip may cause faint test lines, faint control lines, or both, to be unusable (e.g., invisible or otherwise unable to be accurately detected in strength, color, or size). This blur may result in loss of sensitivity in the analysis of the image, which may result in false negative predictions. Blurry images may also pose challenges regarding detection of LFA test strips (e.g., within LFA test cassettes), other algorithmic computer vision tasks, or both.
Glare Removal or Avoidance by Proper Angling
[0062] Since the glare on the test strip depends on the exact angle at which the smartphone is held during imaging, glare may be avoided by restricting imaging of the LFA test strip to exclude camera angles known to cause a higher incidence of glare on the LFA test strip.
[0063] Glare in the form of direct surface reflection happens predominantly when the imaging camera angle between the smartphone (e.g., configured and operating as an LFA test strip reader) and the LFA test strip is within a 5-degree deviation (e.g., tilt) from directly head-on.
[0064] FIG. 17 is a spatial graph that illustrates example two-dimensional (2D) data points indicating glare versus non-glare, where the x-axis is the tilting angle left or right away from head-on with respect to horizontal displacement, also known as the yaw angle, and the y-axis is the tilting angle up or down away from head-on with respect to vertical displacement, also known as the pitch angle. Each data point is a combination of yaw angle and pitch angle; red data points represent angle combinations where glare was observed, while blue data points represent angle combinations where glare was not observed. As shown in FIG. 17, glare predominantly happens within a +/- 5-degree window in both pitch and yaw angles.
[0065] By setting a threshold on the minimum yaw and pitch angles permitted for imaging LFA test strips, a suitably configured system or method may effectively reduce the chance of glare. In various example embodiments, a 40% reduction in the likelihood of glare may be obtained by implementing a threshold of 5 degrees of tilt. In other example embodiments, an 80% reduction may be obtained by implementing a threshold of 10 degrees of tilt.
[0066] FIGS. 18 and 19 are bar graphs illustrating such reductions in the likelihood of encountering glare. Therefore, in various example embodiments, a suitably configured system or method implements a general methodology for reading an image of an LFA test strip using a smartphone (a minimal sketch of the resulting check appears after this list):
(1) determine the minimum tilting angle threshold based on internal device testing that would reduce the incidence of glare on the test strip; and
(2) set an angle-based glare threshold on the smartphone (e.g., within a corresponding LFA test strip scanning app).
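A minimal sketch of the resulting angle check, assuming the smartphone reports its pitch and yaw in degrees, might look as follows; the default threshold mirrors the 5-degree figure above but would in practice come from internal device testing.

```python
def tilt_allows_capture(pitch_deg: float, yaw_deg: float,
                        min_tilt_deg: float = 5.0) -> bool:
    """Angle-based glare guardrail.

    Glare is observed predominantly when both pitch and yaw are within a
    few degrees of head-on, so capture is allowed only when the phone is
    tilted beyond `min_tilt_deg` in at least one axis.
    """
    return abs(pitch_deg) >= min_tilt_deg or abs(yaw_deg) >= min_tilt_deg
```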
Capturing Multiple Images of an LFA Test Strip to Improve Sensitivity
[0067] Generally, a typical (e.g., standard) time to obtain a usable readout from an average LFA test strip (e.g., in an LFA test cassette within an LFA test kit) begins around 15 minutes after applying the biological sample to the sample well and may extend up to 20-30 minutes after the sample application. For many LFA test strips, at exactly 15 minutes after application of the sample, the LFA test strip may be insufficiently (e.g., not fully) developed. Additionally, if a test strip that was wetted by application of the sample did not have enough time to dry, there may be an increased chance of glare (e.g., one or more specular light reflections) adversely impacting the resulting readout from the image of the LFA test strip and thus reducing the sensitivity of the analysis.
Over time, the wet LFA test strips dry up, and the test lines on the LFA test strip fully develop in strength. However, test lines may reduce in strength over time thereafter, and there may be situations, after a period of time, when the concentration of analyte is significantly higher than indicated by the strength of the test line. Therefore, it may be helpful to combine multiple images, captured over time, of an LFA test strip to improve the sensitivity of the reading and analysis of the combined images, to reduce the chances of glare impacting the reading and analysis, or both.
[0068] According to various example embodiments, a suitably configured system or method may implement one or more of the following procedural operations, which allow monitoring of a test line (e.g., as a test line signal) over time within a test strip reading window (e.g., an optimal LFA test strip readout window). Such a reading window may be defined as a period of time, for example, from 15 minutes after applying the sample to 30 minutes after applying the sample. Example operations include:
(1) acquiring multiple images during the test strip reading window with a camera;
(2) analyzing each image individually and groups of images collectively with signal processing, learning-based AI algorithms, or any suitable combination thereof, to detect test lines, control lines, or both, with higher sensitivity;
(3) rejecting any images in which glare is detected or otherwise indicated; and
(4) performing reading and analysis of the test lines, control lines, or both, depicted in only those images, among the multiple acquired images, that are well-developed and clear (e.g., clean) in depicting the LFA test strip.
Systems and methods that implement one or more of these procedural operations may optimize the readout from a smartphone-based LFA test strip reader, as the LFA test strip develops over time after application of the sample and exhibits different strengths of one or more control lines or test lines over time.
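By way of illustration, such a multi-image procedure might be sketched as follows, assuming hypothetical glare detection and line strength callables (e.g., as sketched elsewhere herein); the median aggregation is an assumed, outlier-robust choice.

```python
import numpy as np

def aggregate_window_readout(frames, glare_detected, line_strength):
    """Combine multiple frames captured during the reading window.

    `glare_detected` and `line_strength` are hypothetical callables (e.g.,
    a glare guardrail and an Eq. 1 computation). Glare-flagged frames are
    rejected before the remaining per-frame strengths are combined.
    """
    strengths = [line_strength(f) for f in frames if not glare_detected(f)]
    if not strengths:
        return None  # no clean frame; prompt the end user to recapture
    return float(np.median(strengths))
```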
Glare Guardrail
[0069] Another approach to avoid glare in images (e.g., images to be processed by the smartphone-based LFA test strip reader) is to develop or otherwise implement a guardrail algorithm that would automatically detect glare in an image of an LFA test strip, reject the image, and prompt the end user to take remedial action, such as recapturing the image or adjusting the lighting conditions and then recapturing the image. Accordingly, a suitably configured system or method may implement all or part of the following guardrail algorithm to detect glare in a captured image that depicts the LFA test strip of an LFA test cassette (a minimal sketch of these operations appears after this list):
(1) converting the captured image to grayscale;
(2) computing a binary edge map from the image (e.g., by using a Canny edge detector);
(3) isolating (e.g., cropping out) a region of the image, where the region is expected to contain a depiction of the test line of the LFA test strip;
(4) computing the edge density (e.g., the percentage of pixels that are edge pixels);
(5) comparing this edge density to a threshold density; and
(6) based on this comparison, determining whether glare is present or not.
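A minimal Python/OpenCV sketch of operations (1)-(6), assuming an (x, y, w, h) region of interest around the expected test line, follows; the Canny thresholds and the density threshold are illustrative assumptions.

```python
import cv2
import numpy as np

def glare_in_test_line_region(image_bgr: np.ndarray,
                              line_roi: tuple,
                              density_threshold: float = 0.05) -> bool:
    """Edge-density glare guardrail for the test line region."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)    # operation (1)
    edges = cv2.Canny(gray, 50, 150)                      # operation (2)
    x, y, w, h = line_roi
    crop = edges[y:y + h, x:x + w]                        # operation (3)
    edge_density = np.count_nonzero(crop) / crop.size     # operation (4)
    return edge_density > density_threshold               # operations (5)-(6)
```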
[0070] A significant downside of glare is that glare can cause a false negative reading. Under certain conditions, glare may cause a false positive reading, but glare is unlikely to cause a strong false positive reading. Thus, in situations where the system or method has determined a strong positive prediction from reading and analyzing the image of the LFA test strip (e.g., where the prediction is strongly positive, and the test line’s strength score is above a predetermined threshold strength score), a suitably configured system or method may omit (e.g., skip) the glare guardrail and thus reduce or avoid the risk of a false alarm.

[0071] In some example embodiments, a suitably configured system or method implements an edge detection algorithm other than Canny, such as Sobel or Laplace with a threshold. The system or method may also apply blurring before edge detection. Instead of computing the average of a binary mask, the system or method may aggregate the edge strengths, for example, by an average, an L2 average, an L3 average, etc., by inputting (e.g., plugging) individual edge strengths into a learnable function (e.g., a sigmoid) before aggregating the edge strengths, or any suitable combination thereof. One or more of such techniques may be helpful in situations where the sigmoid has an infinite slope. In certain example embodiments, the system or method may cause a spatial weighting map to be learned (e.g., by any one or more of the AI models discussed above), instead of taking a uniform average across a predetermined region.
[0072] In certain example embodiments, a suitably configured system or method computes one or more features of an image. Examples of such computed image features include: edge density, edge histogram, color histogram, color variance, color entropy, local-binary-pattern histogram, or any suitable combination thereof. The system or method may then feed one or more of these computed features into a classifier (e.g., an SVM, a logistic regression, or a neural network).
[0073] In various example embodiments, a suitably configured system or method trains a neural network, such as a convolutional neural network or a vision transformer, to predict the presence of glare in the result well of the LFA test cassette or in a cropped region of the image in which the result well appears. Real data, synthetic data, or both, may be used by the system or method to train the neural network. Suitable synthetic data may be synthesized (e.g., by the suitably configured system or method) by using salt-and-pepper noise, Perlin noise, data generated by a Markov network (e.g., fitted to actual glare images), or any suitable combination thereof. In many situations, very local dependencies exist between or among the pixels of an image that depicts glare.
Blur Guardrail
[0074] A guardrail approach may similarly be used to directly reject blurry images that may be acquired by a smartphone camera unable to focus on the LFA test strip. Such a guardrail approach may include accordingly instructing the end user to place the smartphone’s camera a bit further away and then to recapture the image of the LFA test strip.
[0075] To guard against blurriness due to an out-of-focus camera or due to camera motion, a suitably configured system or method may implement one or more of the following procedural operations. First, the system or method automatically selects (e.g., chooses) a region of the image, where the region is expected to have sharp edges. This region may depict text on the surface of the LFA test cassette, such as text that is nearest to the test line of the LFA test strip. The region may additionally or alternatively depict one or more of the QR codes on the scan card. The system or method may locate the former by taking a homography transform from a region detected and labelled as “inner-testkit” (e.g., in implementing a glare guardrail). Once the system or method has located the edge region, the system or method converts the edge region to grayscale and normalizes its lighting, for example, by setting the average grayscale value to a predefined constant. This normalization technique may work especially well because blurring is a linear operation and therefore should have no influence on the average intensity. Next, the suitably configured system or method computes an edge strength map, for example, using a Sobel filter, a Laplace filter, some other filter, or any suitable combination thereof. The system or method may also apply a smoothing filter (e.g., a box blur, a Gaussian blur, a median filter, or any suitable combination thereof) to reduce or eliminate false edges due to noise. The system or method may then aggregate the edge strengths, for example, by taking a predefined percentile (e.g., the 90th percentile), the mean, the median, the standard deviation, the L2 average, the L3 average, or any suitable combination thereof. In many situations, taking the predefined percentile provides good results, because the edge strengths tend to follow a bimodal distribution. If the predefined percentile is below a certain threshold, then the system or method may flag the image as blurry and ask the user to reimage (e.g., recapture the image). A minimal sketch of these operations appears below.
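The sketch below illustrates these operations in Python/OpenCV under simplifying assumptions: lighting normalization to an assumed target mean, a Gaussian smoothing filter, a Sobel edge strength map, and the 90th-percentile aggregation; the numeric thresholds are placeholders.

```python
import cv2
import numpy as np

def is_blurry(edge_region_bgr: np.ndarray,
              target_mean: float = 128.0,
              percentile: float = 90.0,
              sharpness_threshold: float = 40.0) -> bool:
    """Blur guardrail over a crop expected to contain sharp edges."""
    gray = cv2.cvtColor(edge_region_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gray *= target_mean / max(float(gray.mean()), 1e-6)  # lighting normalization
    gray = cv2.GaussianBlur(gray, (3, 3), 0)             # suppress sensor noise
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    edge_strength = np.sqrt(gx ** 2 + gy ** 2)
    # Low high-percentile edge strength suggests an out-of-focus or moved camera.
    return float(np.percentile(edge_strength, percentile)) < sharpness_threshold
```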
[0076] In some example embodiments, a suitably configured system or method takes the grayscale histogram, instead of the edge strength histogram, and looks at the height of one or more bins that correspond to a gray color (e.g., somewhere between black and white). Since light normalization ensures that any text will be black, and the background will be white, the only gray pixels should come from blurring. Thus, if the height of the one or more gray bins is above a predetermined threshold height, then the system or method may flag the image as blurry.
[0077] In certain example embodiments, a suitably configured system or method extracts one or more image features, such as the Sobel 90th percentile, the gray bin height, a gray histogram, an edge strength histogram, or any suitable combination thereof, and feeds one or more of these image features into a classifier (e.g., an SVM, a logistic regression, or a neural network) to predict blurriness. Additionally, or alternatively, the system or method uses a convolutional neural network, a vision transformer, or both, to predict blurriness. Any one or more of these methodologies may be trained by the system or method based on real data, synthetic data, or both. The system or method may simulate focus blur with a Gaussian kernel, simulate motion blur with a line kernel, or both. In some situations, the sigma of the Gaussian kernel and the width of the line kernel are used by the system or method to quantify the amount of blur, to provide extra supervision during training, or both.
Control Line Insufficient Fluid (CLIF) Guardrail
[0078] If not enough buffer fluid is in the LFA test strip, a characteristic two-level color pattern may result (e.g., light on top, dark on bottom), and this two-level color pattern may cause a false detection of a control line. Even if no false detection of a control line occurs, it is generally unsafe to try to interpret an LFA test strip that has insufficient buffer fluid. Therefore, in various example embodiments, a suitably configured system or method implements a CLIF guardrail, which attempts to detect this two-level color pattern.
[0079] In some example embodiments, a suitably configured system or method counts the number of light-to-dark horizontal edges and the number of dark-to-light horizontal edges and then compares these counts to determine whether the light-to-dark horizontal edges outnumber the dark-to-light horizontal edges. The characteristic two-level CLIF color pattern has one light-to-dark horizontal edge, while a sufficiently strong control line has both a light-to-dark horizontal edge and a dark-to-light horizontal edge. Accordingly, a system or method that implements this edge-counting rule is able to detect and flag the presence of a CLIF condition depicted in an image of an LFA test strip.
[0080] As an example of an operational procedure for CLIF detection, a suitably configured system or method starts by converting the RGB values to one channel, for example, by taking only the green channel. Alternatively, the system or method may take a gray channel or perform any other weighted sum or transform to turn three color component (e.g., RGB) channels into one color component channel (e.g., green only). Next, the system or method may calculate an average across the rows to obtain a 1D profile. The system or method may then use a smoothing filter (e.g., uniform or Gaussian) to remove noise from this 1D profile and thus obtain a reliable 1D gradient based on this 1D profile.
[0081] Once the 1D gradient is obtained (e.g., using a Sobel filter), a suitably configured system or method may use the following state-machine rule to identify one or more low triggers, one or more high triggers, or any suitable combination thereof:
Whenever the derivative crosses above a predetermined high threshold, identify that point as a high trigger; and whenever the derivative crosses below a predetermined low threshold (e.g., a negative threshold), identify that point as a low trigger. (Rule 1)
The system or method may safeguard against duplicate triggers by merging low triggers with nearby low triggers and merging high triggers with nearby high triggers. Additionally, or alternatively, the system or method may further safeguard against duplicate triggers from fluctuations near the predetermined thresholds by having a buffer zone around each threshold and checking that the derivative fully crossed both ends of the buffer zone before identifying the corresponding point as a low trigger or as a high trigger.
[0082] According to certain example embodiments, a suitably configured system or method guards against two potential failures. First, the edges of a control line might be unclear or otherwise difficult to ascertain (e.g., “on the fence”), such that the control line has only a high trigger or only a low trigger. A suitably configured system or method may use the following logic to ensure that the two corresponding triggers are paired (e.g., “latched”) together: make the low trigger threshold greater in magnitude than the high trigger threshold, and remove any high trigger that does not appear shortly after a low trigger. (Rule 2)
This logic ensures that the system or method will detect both edges of the control line or neither edge, and will not impact the parity of the triggers.
[0083] The second potential failure is that the characteristic two-level CLIF color pattern may not always be monotonic, since surface tension can cause the color pattern to become dark and then become slightly lighter. To guard against this situation, the system or method may find the peak derivative magnitudes of the triggers and apply the following rule: only allow a high trigger if its magnitude is at least some multiplier (e.g., 0.333) times the magnitude of the low trigger that it follows. (Rule 3)
This rule works because gradient magnitudes of the CLIF pattern are highly asymmetric.
[0084] Once the suitably configured system or method has the final triggers, the system or method may check whether the low triggers outnumber the high triggers. If yes, then the system or method may flag the image as exhibiting a CLIF condition. A minimal sketch of this CLIF detection procedure appears below.
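A minimal Python sketch of this CLIF detection procedure, applying Rule 1 and Rule 3 (with trigger merging and buffer zones omitted for brevity, and all thresholds as illustrative placeholders), follows:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def clif_detected(strip_rgb: np.ndarray,
                  high_thresh: float = 2.0,
                  low_thresh: float = -3.0,
                  ratio: float = 0.333) -> bool:
    """Flag the two-level CLIF pattern in a strip crop (thresholds are
    placeholders; per Rule 2, |low_thresh| exceeds high_thresh)."""
    profile = strip_rgb[:, :, 1].mean(axis=1)        # green channel, row means
    profile = gaussian_filter1d(profile, sigma=3.0)  # smooth the 1D profile
    grad = np.gradient(profile)
    # Rule 1: threshold crossings of the derivative.
    lows = [i for i in range(1, len(grad))
            if grad[i] < low_thresh <= grad[i - 1]]
    highs = [i for i in range(1, len(grad))
             if grad[i] > high_thresh >= grad[i - 1]]
    # Rule 3: keep a high trigger only if it is strong enough relative to a
    # preceding low trigger (CLIF gradient magnitudes are highly asymmetric).
    valid_highs = [h for h in highs
                   if any(l < h and abs(grad[h]) >= ratio * abs(grad[l])
                          for l in lows)]
    # CLIF condition: light-to-dark edges (lows) outnumber dark-to-light edges.
    return len(lows) > len(valid_highs)
```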
[0085] In some example embodiments, a suitably configured system or method uses a “level set” algorithm, instead of looking at derivatives. Such an algorithm finds the y-coordinate that maximizes the difference between the average intensity of the pixels above y and the average intensity of the pixels below y. Linear detrending could be used by the system or method to keep light gradients from influencing this difference. Dilations may be used by the system or method to minimize the impact of the control line, the test line, or both.
[0086] In certain example embodiments, a suitably configured system or method trains a convolutional neural network, a vision transformer, or some other neural network to predict the presence or absence of a two-level CLIF color pattern, for example, from the image of the LFA test strip or from the 1D row-average profile. This neural network may be trained by the system or method based on real examples, synthetic examples (e.g., when real examples are difficult to reproduce), or both. Additionally, or alternatively, the neural network may be pretrained on synthetic examples and then fine-tuned on real examples. Synthetic examples may be generated (e.g., by the system or the method) by creating a two-level color pattern and then adding some noise (e.g., Perlin noise), blurring, or both, and then synthesizing one or more control lines, test lines, or both. Additionally, or alternatively, the system or method may take real data and create synthetic examples by stretching and contracting, repeating and cropping, or both, the known light and dark sections of the CLIF pattern. The location of the CLIF pattern may be used by the system or method as extra supervision in the training of the neural network. Further additionally or alternatively, certain combinations of image features, such as the color profile and the histogram, the gradient profile and the histogram, or gradient peaks and their locations, may be fed by the system or method into an SVM, a logistic regression, a neural network, or some other classifier to learn to detect and flag CLIF patterns.
Lightbox (Darkbox) Design
[0087] An end-goal of a smartphone-based LFA test strip reader is to ensure robust operation (e.g., reading and analyzing images of LFA test strips) under ambient light settings and under varied imaging conditions, such as different imaging distances and imaging angles and across different smartphone models. However, for creating a repeatable setup for collecting experimental data (e.g., to obtain per-phone calibration), or for performing QC testing for lots of LFA test cassettes, it may be desirable to control for any covariate that may negatively impact the readout from a smartphone-based reader for LFA test strips. Examples of these covariates include: (i) imaging distance, (ii) imaging angle, (iii) ambient lighting, (iv) location of the smartphone camera and flash relative to the LFA test strip, and (v) various forms of blur (e.g., motion blur). To reduce or eliminate variation in the imaging of LFA test strips, a special lightbox may be utilized for image capture. The special lightbox provides improved (e.g., optimal) imaging conditions that may avoid problems, such as glare on the test strip, a blurry image of the test strip, shadows or directional lighting falling on the test strip, or any combination thereof. Any one or more of the systems and methods discussed herein may incorporate use of this special lightbox, for example, to facilitate improved (e.g., optimal) and repeatable imaging of an LFA test strip (e.g., within an LFA test cassette). Dimensions, angles, and other physical parameters of the special lightbox may provide improved (e.g., optimal) results when an LFA test strip is imaged by any mobile camera device, such as a smartphone.
[0088] The lightbox may be constructed using off-the-shelf cardboard material to provide an optical enclosure for smartphone-based LFA test strip readers. According to various example embodiments, one or more features present in the lightbox may include:
(1) an optically isolated enclosure made of low-cost cardboard material;
(2) an optimal and generalized imaging window for smartphone cameras (e.g., an imaging window 3.81 cm (1.5 inches) wide x 4.445 cm (1.75 inches) long), such that the imaging window allows most smartphone cameras and their flash to fit within the imaging window and therefore allows a single lightbox to be usable across widely varying smartphone hardware;
(3) an imaging platform on the top surface, inclined at a 7.5 degree pitch angle, which may tilt the smartphone (e.g., running an LFA test strip reader app) up for optimal imaging, such as to avoid glare from the smartphone flash falling on the test strip region of an LFA test cassette;
(4) an interior light strip affixed (e.g., placed) along the top periphery of the inside of the lightbox in all directions to create a uniform and diffused ambient light and thus mimic optimal external ambient lighting as closely as possible (e.g., using light emitting diodes (LEDs) that produce light at 4000K and 1200 lumens, in a light strip of length 35 feet, with 320 LEDs/m) or reproduce one or more other optical parameters suitable for simulating any other desired imaging environment; or
(5) a side hatch for changing the LFA test cassette easily, an open bottom that allows the end user to place the lightbox over the LFA test cassette, or both.

[0089] FIG. 20 is a schematic diagram illustrating design concepts that facilitate achieving high-quality imaging of an LFA test strip (e.g., within an LFA test cassette), according to some example embodiments. The left half of FIG. 20 contains two top views of the special lightbox. The leftmost top view is a top view of the exterior of the lightbox, with the imaging window in the center of the top surface of the lightbox. The rightmost top view is a top view of the interior of the lightbox (e.g., with the top surface removed), with the LFA test cassette visible in the center of the top view. For example, the LFA test cassette may be placed at a central location within a designated and marked region (e.g., on the bottom surface of the lightbox). The right half of FIG. 20 contains a side elevation view of the lightbox and illustrates the angled design of the lightbox, such that any smartphone placed on the top surface for imaging an LFA test strip underneath is always positioned at a certain pitch angle for avoiding or minimizing glare in the resulting captured image of the LFA test strip.
[0090] FIGS. 21-25 and FIGS. 26-31 are sets of dimensioned views of the lightbox, according to some example embodiments. The lightbox blocks ambient light from reaching an LFA test cassette placed inside the lightbox, and the lightbox provides a standardized lighting environment for capturing an image of the LFA test cassette. In the example embodiments shown in FIGS. 21-25 and FIGS. 26-31, the lightbox holds smartphones placed on the top surface at a consistent distance and angle relative to the LFA test cassette. The angle (e.g., pitch angle) may be 7.5 degrees. The height from the smartphone’s camera lens to the LFA test cassette may be 12.7 cm (5 inches).
[0091] In some example embodiments, the lightbox has an imaging window (e.g., a cutout in the top surface) for the lenses and flashes of various smartphone models to facilitate capture of images without obstructions to the imaging hardware. The dimensions of the imaging window may be 4.445 cm (1.75 inches) long x 3.81 cm (1.5 inches) wide. The imaging window may be centrally located on the top surface (e.g., top plane) of the lightbox.
[0092] In certain example embodiments, the lightbox is constructed using Uline S-15058 cardboard. The base dimensions of the lightbox may be 35.2425 cm (13.875 inches) long x 25.0825 cm (9.875 inches) wide. The top dimensions of the lightbox may be 34.925 cm (13.75 inches) long x 27.6225 cm (10.875 inches) wide. The front surface (e.g., front plane) dimensions of the lightbox may be 22.5425 cm (8.875 inches) long x 10.795 cm (4.25 inches) wide. The back surface (e.g., backplane) dimensions of the lightbox may be 22.5425 cm (8.875 inches) long x 14.605 cm (5.75 inches) wide. The side surfaces (e.g., side planes) of the lightbox may each be 34.925 cm (13.75 inches) long.
[0093] FIG. 32 is a block diagram illustrating components of a machine 1100, according to some example embodiments, able to read instructions 1124 from a machine-readable medium 1122 (e.g., a non-transitory machine-readable medium, a machine-readable storage medium, a computer-readable storage medium, or any suitable combination thereof) and perform any one or more of the methodologies discussed herein, in whole or in part. Specifically, FIG. 32 shows the machine 1100 in the example form of a computer system (e.g., a computer) within which the instructions 1124 (e.g., software, a program, an application, an applet, an app, or other executable code) for causing the machine 1100 to perform any one or more of the methodologies discussed herein may be executed, in whole or in part.
[0094] In alternative embodiments, the machine 1100 operates as a standalone device or may be communicatively coupled (e.g., networked) to other machines. In a networked deployment, the machine 1100 may operate in the capacity of a server machine or a client machine in a server-client network environment, or as a peer machine in a distributed (e.g., peer-to-peer) network environment. The machine 1100 may be a server computer, a client computer, a personal computer (PC), a tablet computer, a laptop computer, a netbook, a cellular telephone, a smart phone, a set-top box (STB), a personal digital assistant (PDA), a web appliance, a network router, a network switch, a network bridge, or any machine capable of executing the instructions 1124, sequentially or otherwise, that specify actions to be taken by that machine. Further, while only a single machine is illustrated, the term “machine” shall also be taken to include any collection of machines that individually or jointly execute the instructions 1124 to perform all or part of any one or more of the methodologies discussed herein.
[0095] The machine 1100 includes a processor 1102 (e.g., one or more central processing units (CPUs), one or more graphics processing units (GPUs), one or more digital signal processors (DSPs), one or more application specific integrated circuits (ASICs), one or more radio-frequency integrated circuits (RFICs), or any suitable combination thereof), a main memory 1104, and a static memory 1106, which are configured to communicate with each other via a bus 1108. The processor 1102 contains solid-state digital microcircuits (e.g., electronic, optical, or both) that are configurable, temporarily or permanently, by some or all of the instructions 1124 such that the processor 1102 is configurable to perform any one or more of the methodologies described herein, in whole or in part. For example, a set of one or more microcircuits of the processor 1102 may be configurable to execute one or more modules (e.g., software modules) described herein. In some example embodiments, the processor 1102 is a multicore CPU (e.g., a dual-core CPU, a quad-core CPU, an 8-core CPU, or a 128-core CPU) within which each of multiple cores behaves as a separate processor that is able to perform any one or more of the methodologies discussed herein, in whole or in part. Although the beneficial effects described herein may be provided by the machine 1100 with at least the processor 1102, these same beneficial effects may be provided by a different kind of machine that contains no processors (e.g., a purely mechanical system, a purely hydraulic system, or a hybrid mechanical-hydraulic system), if such a processor-less machine is configured to perform one or more of the methodologies described herein.
[0096] The machine 1100 may further include a graphics display 1110 (e.g., a plasma display panel (PDP), a light emitting diode (LED) display, a liquid crystal display (LCD), a projector, a cathode ray tube (CRT), or any other display capable of displaying graphics or video). The machine 1100 may also include an alphanumeric input device 1112 (e.g., a keyboard or keypad), a pointer input device 1114 (e.g., a mouse, a touchpad, a touchscreen, a trackball, a joystick, a stylus, a motion sensor, an eye tracking device, a data glove, or other pointing instrument), a data storage 1116, an audio generation device 1118 (e.g., a sound card, an amplifier, a speaker, a headphone jack, or any suitable combination thereof), and a network interface device 1120.
[0097] The data storage 1116 (e.g., a data storage device) includes the machine-readable medium 1122 (e.g., a tangible and non-transitory machine- readable storage medium) on which are stored the instructions 1124 embodying any one or more of the methodologies or functions described herein. The instructions 1124 may also reside, completely or at least partially, within the main memory 1104, within the static memory 1106, within the processor 1102 (e.g., within the processor’s cache memory), or any suitable combination thereof, before or during execution thereof by the machine 1100. Accordingly, the main memory 1104, the static memory 1106, and the processor 1102 may be considered machine-readable media (e.g., tangible and non-transitory machine- readable media). The instructions 1124 may be transmitted or received over a network 190 via the network interface device 1120. For example, the network interface device 1120 may communicate the instructions 1124 using any one or more transfer protocols (e.g., hypertext transfer protocol (HTTP)).
[0098] In some example embodiments, the machine 1100 may be a portable computing device (e.g., a smart phone, a tablet computer, or a wearable device) and may have one or more additional input components 1130 (e.g., sensors or gauges). Examples of such input components 1130 include an image input component (e.g., one or more cameras), an audio input component (e.g., one or more microphones), a direction input component (e.g., a compass), a location input component (e.g., a global positioning system (GPS) receiver), an orientation component (e.g., a gyroscope), a motion detection component (e.g., one or more accelerometers), an altitude detection component (e.g., an altimeter), a temperature input component (e.g., a thermometer), and a gas detection component (e.g., a gas sensor). Input data gathered by any one or more of these input components 1130 may be accessible and available for use by any of the modules described herein (e.g., with suitable privacy notifications and protections, such as opt-in consent or opt-out consent, implemented in accordance with user preference, applicable regulations, or any suitable combination thereof).
[0099] As used herein, the term “memory” refers to a machine-readable medium able to store data temporarily or permanently and may be taken to include, but not be limited to, random-access memory (RAM), read-only memory (ROM), buffer memory, flash memory, and cache memory. While the machine-readable medium 1122 is shown in an example embodiment to be a single medium, the term “machine-readable medium” should be taken to include a single medium or multiple media (e.g., a centralized or distributed database, or associated caches and servers) able to store instructions. The term “machine- readable medium” shall also be taken to include any medium, or combination of multiple media, that is capable of carrying (e.g., storing or communicating) the instructions 1124 for execution by the machine 1100, such that the instructions 1124, when executed by one or more processors of the machine 1100 (e.g., processor 1102), cause the machine 1100 to perform any one or more of the methodologies described herein, in whole or in part. Accordingly, a “machine- readable medium” refers to a single storage apparatus or device, as well as cloud-based storage systems or storage networks that include multiple storage apparatus or devices. The term “machine-readable medium” shall accordingly be taken to include, but not be limited to, one or more tangible and non- transitory data repositories (e.g., data volumes) in the example form of a solid- state memory chip, an optical disc, a magnetic disc, or any suitable combination thereof.
[0100] A “non-transitory” machine-readable medium, as used herein, specifically excludes propagating signals per se. According to various example embodiments, the instructions 1124 for execution by the machine 1100 can be communicated via a carrier medium (e.g., a machine-readable carrier medium). Examples of such a carrier medium include a non-transient carrier medium (e.g., a non-transitory machine-readable storage medium, such as a solid-state memory that is physically movable from one place to another place) and a transient carrier medium (e.g., a carrier wave or other propagating signal that communicates the instructions 1124).
[0101] Certain example embodiments are described herein as including modules. Modules may constitute software modules (e.g., code stored or otherwise embodied in a machine-readable medium or in a transmission medium), hardware modules, or any suitable combination thereof. A “hardware module” is a tangible (e.g., non-transitory) physical component (e.g., a set of one or more processors) capable of performing certain operations and may be configured or arranged in a certain physical manner. In various example embodiments, one or more computer systems or one or more hardware modules thereof may be configured by software (e.g., an application or portion thereof) as a hardware module that operates to perform operations described herein for that module.
[0102] In some example embodiments, a hardware module may be implemented mechanically, electronically, hydraulically, or any suitable combination thereof. For example, a hardware module may include dedicated circuitry or logic that is permanently configured to perform certain operations. A hardware module may be or include a special-purpose processor, such as a field programmable gate array (FPGA) or an ASIC. A hardware module may also include programmable logic or circuitry that is temporarily configured by software to perform certain operations. As an example, a hardware module may include software encompassed within a CPU or other programmable processor. It will be appreciated that the decision to implement a hardware module mechanically, hydraulically, in dedicated and permanently configured circuitry, or in temporarily configured circuitry (e.g., configured by software) may be driven by cost and time considerations.
[0103] Accordingly, the phrase “hardware module” should be understood to encompass a tangible entity that may be physically constructed, permanently configured (e.g., hardwired), or temporarily configured (e.g., programmed) to operate in a certain manner or to perform certain operations described herein. Furthermore, as used herein, the phrase “hardware-implemented module” refers to a hardware module. Considering example embodiments in which hardware modules are temporarily configured (e.g., programmed), each of the hardware modules need not be configured or instantiated at any one instance in time. For example, where a hardware module includes a CPU configured by software to become a special-purpose processor, the CPU may be configured as respectively different special-purpose processors (e.g., each included in a different hardware module) at different times. Software (e.g., a software module) may accordingly configure one or more processors, for example, to become or otherwise constitute a particular hardware module at one instance of time and to become or otherwise constitute a different hardware module at a different instance of time.
[0104] Hardware modules can provide information to, and receive information from, other hardware modules. Accordingly, the described hardware modules may be regarded as being communicatively coupled. Where multiple hardware modules exist contemporaneously, communications may be achieved through signal transmission (e.g., over circuits and buses) between or among two or more of the hardware modules. In embodiments in which multiple hardware modules are configured or instantiated at different times, communications between such hardware modules may be achieved, for example, through the storage and retrieval of information in memory structures to which the multiple hardware modules have access. For example, one hardware module may perform an operation and store the output of that operation in a memory (e.g., a memory device) to which it is communicatively coupled. A further hardware module may then, at a later time, access the memory to retrieve and process the stored output. Hardware modules may also initiate communications with input or output devices, and can operate on a resource (e.g., a collection of information from a computing resource).
[0105] The various operations of example methods described herein may be performed, at least partially, by one or more processors that are temporarily configured (e.g., by software) or permanently configured to perform the relevant operations. Whether temporarily or permanently configured, such processors may constitute processor-implemented modules that operate to perform one or more operations or functions described herein. As used herein, “processor-implemented module” refers to a hardware module in which the hardware includes one or more processors. Accordingly, the operations described herein may be at least partially processor-implemented, hardware-implemented, or both, since a processor is an example of hardware, and at least some operations within any one or more of the methods discussed herein may be performed by one or more processor-implemented modules, hardware-implemented modules, or any suitable combination thereof.
[0106] Moreover, such one or more processors may perform operations in a “cloud computing” environment or as a service (e.g., within a “software as a service” (SaaS) implementation). For example, at least some operations within any one or more of the methods discussed herein may be performed by a group of computers (e.g., as examples of machines that include processors), with these operations being accessible via a network (e.g., the Internet) and via one or more appropriate interfaces (e.g., an application program interface (API)). The performance of certain operations may be distributed among the one or more processors, whether residing only within a single machine or deployed across a number of machines. In some example embodiments, the one or more processors or hardware modules (e.g., processor-implemented modules) may be located in a single geographic location (e.g., within a home environment, an office environment, or a server farm). In other example embodiments, the one or more processors or hardware modules may be distributed across a number of geographic locations.
[0107] FIG. 33 is a flowchart illustrating operations in a method 3300 of imaging an LFA test kit, according to some example embodiments. The method 3300 may be performed partly or fully by one or more machines (e.g., computer systems, smartphones, or other devices), such as the machine 1100 discussed with respect to FIG. 32 (e.g., implementing one or more operations discussed above with respect to FIG. 16). As shown in FIG. 33, the method 3300 includes one or more of operations 3310, 3320, 3330, 3340, 3350, or 3360. For example, operations 3310, 3320, and 3330 may be performed by one machine (e.g., a computer system), and operations 3340, 3350, and 3360 may be performed by another machine (e.g., a smartphone).
[0108] In operation 3310, a machine (e.g., a computer system) accesses training data that includes reference values of analyte strength with corresponding reference smartphone identifiers and corresponding reference images. Each of the reference images may depict a corresponding reference test strip (e.g., a reference LFA test strip) photographed by a corresponding reference smartphone identified by a corresponding reference smartphone identifier.
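For illustration, one plausible shape for a single training record follows, sketched in Python; the field names and container types are assumptions, since this description specifies only that each record pairs a reference value of analyte strength with a reference smartphone identifier and a reference image.

from dataclasses import dataclass

import numpy as np


@dataclass
class ReferenceRecord:
    # One training example for operation 3310 (field names are hypothetical).
    analyte_strength: float            # reference value (e.g., a concentration)
    smartphone_identifier: np.ndarray  # reference smartphone identifier vector
    image: np.ndarray                  # photo of the reference LFA test strip


def load_training_data(raw_records):
    """Wrap raw records (assumed dict layout) into typed training examples."""
    return [
        ReferenceRecord(
            analyte_strength=record["strength"],
            smartphone_identifier=np.asarray(record["phone_id"]),
            image=np.asarray(record["image"]),
        )
        for record in raw_records
    ]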
[0109] In operation 3320, the machine (e.g., the computer system) trains an artificial intelligence (AI) model, based on the training data accessed in operation 3310. The machine trains the AI model to output a predicted value of analyte strength based on a candidate smartphone identifier and a candidate image that depicts a candidate test strip (e.g., a candidate LFA test strip) photographed by a candidate smartphone identified by the candidate smartphone identifier.
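As a non-authoritative sketch of operation 3320, the PyTorch code below trains a model that fuses a small image branch with the smartphone-identifier vector; the disclosure fixes no architecture, so every layer, dimension, and the mean-squared-error loss here is an assumption.

import torch
import torch.nn as nn


class AnalyteStrengthModel(nn.Module):
    """Illustrative model: image features concatenated with the identifier."""

    def __init__(self, id_dim=8):
        super().__init__()
        # Image branch: a small CNN over the photographed test strip.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        # Fusion head: image features plus the smartphone identifier vector.
        self.head = nn.Sequential(
            nn.Linear(16 * 4 * 4 + id_dim, 64), nn.ReLU(),
            nn.Linear(64, 1),  # predicted value of analyte strength
        )

    def forward(self, image, phone_id):
        return self.head(torch.cat([self.cnn(image), phone_id], dim=1))


def train_step(model, optimizer, image, phone_id, strength):
    """One gradient step on a batch of reference records."""
    optimizer.zero_grad()
    loss = nn.functional.mse_loss(model(image, phone_id).squeeze(1), strength)
    loss.backward()
    optimizer.step()
    return loss.item()

Conditioning the head on the identifier vector is one way a single trained model could account for device-to-device imaging differences across the reference smartphones.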
[0110] In operation 3330, the machine (e.g., the computer system) provides the trained AI model to the candidate smartphone (e.g., to enable the candidate smartphone to perform operations 3340, 3350, and 3360 of the method 3300).
[0111] In operation 3340, a machine (e.g., a smartphone, such as the candidate smartphone discussed above with respect to operations 3320 and 3330) obtains an artificial intelligence (AI) model (e.g., from another machine that performed operation 3330). The obtained AI model is trained to output a predicted value of analyte strength based on a candidate smartphone identifier and a candidate image that depicts a candidate test strip (e.g., a candidate LFA test strip) photographed by a candidate smartphone identified by the candidate smartphone identifier. The AI model may be trained based on training data that includes reference values of analyte strength with corresponding reference smartphone identifiers and corresponding reference images, and each of the reference images may depict a corresponding reference test strip (e.g., a reference LFA test strip) photographed by a corresponding reference smartphone identified by a corresponding reference smartphone identifier.
[0112] In operation 3350, the machine (e.g., the smartphone) generates the predicted value of analyte strength by inputting the candidate smartphone identifier and the candidate image into the AI model obtained in operation 3340. As a result, the AI model outputs the predicted value of analyte strength.
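The following is a minimal sketch of operations 3340 through 3360 as they might run on the candidate smartphone, assuming the model class from the training sketch above and a hypothetical file path; the steps are marked in comments to mirror the three operations.

import torch


def predict_analyte_strength(model_path, candidate_image, candidate_phone_id):
    # Operation 3340: obtain the trained model (here, from a local file).
    model = torch.load(model_path, weights_only=False)
    model.eval()
    # Operation 3350: generate the predicted value of analyte strength.
    with torch.no_grad():
        predicted = model(candidate_image.unsqueeze(0),
                          candidate_phone_id.unsqueeze(0))
    # Operation 3360: the caller presents (or forwards) this value.
    return predicted.item()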
[0113] In operation 3360, the machine (e.g., the smartphone) causes presentation of the predicted value of analyte strength, as generated in operation 3350. For example, the machine may itself present the generated predicted value of analyte strength. As another example, the machine may send the generated predicted value of analyte strength to a different machine (e.g., a smartwatch communicatively coupled to the smartphone) and cause that different machine to present the generated predicted value of analyte strength.
[0114] Throughout this specification, plural instances may implement components, operations, or structures described as a single instance. Although individual operations of one or more methods are illustrated and described as separate operations, one or more of the individual operations may be performed concurrently, and nothing requires that the operations be performed in the order illustrated. Structures and their functionality presented as separate components and functions in example configurations may be implemented as a combined structure or component with combined functions. Similarly, structures and functionality presented as a single component may be implemented as separate components and functions. These and other variations, modifications, additions, and improvements fall within the scope of the subject matter herein.
[0115] Some portions of the subject matter discussed herein may be presented in terms of algorithms or symbolic representations of operations on data stored as bits or binary digital signals within a memory (e.g., a computer memory or other machine memory). Such algorithms or symbolic representations are examples of techniques used by those of ordinary skill in the data processing arts to convey the substance of their work to others skilled in the art. As used herein, an “algorithm” is a self-consistent sequence of operations or similar processing leading to a desired result. In this context, algorithms and operations involve physical manipulation of physical quantities. Typically, but not necessarily, such quantities may take the form of electrical, magnetic, or optical signals capable of being stored, accessed, transferred, combined, compared, or otherwise manipulated by a machine. It is convenient at times, principally for reasons of common usage, to refer to such signals using words such as “data,” “content,” “bits,” “values,” “elements,” “symbols,” “characters,” “terms,” “numbers,” “numerals,” or the like. These words, however, are merely convenient labels and are to be associated with appropriate physical quantities.
[0116] Unless specifically stated otherwise, discussions herein using words such as “accessing,” “processing,” “detecting,” “computing,” “calculating,” “determining,” “generating,” “presenting,” “displaying,” or the like refer to actions or processes performable by a machine (e.g., a computer) that manipulates or transforms data represented as physical (e.g., electronic, magnetic, or optical) quantities within one or more memories (e.g., volatile memory, non-volatile memory, or any suitable combination thereof), registers, or other machine components that receive, store, transmit, or display information. Furthermore, unless specifically stated otherwise, the terms “a” or “an” are herein used, as is common in patent documents, to include one or more than one instance. Finally, as used herein, the conjunction “or” refers to a nonexclusive “or,” unless specifically stated otherwise.

[0117] The following enumerated descriptions describe various examples of methods, machine-readable media, and systems (e.g., machines, devices, or other apparatus) discussed herein. Any one or more features of an example, taken in isolation or combination, should be considered as being within the disclosure of this application.
[0118] A first example provides a method comprising: accessing, by one or more processors of a machine, training data that includes reference values of analyte strength with corresponding reference smartphone identifiers and corresponding reference images, each of the reference images depicting a corresponding reference test strip (e.g., a reference LFA test strip) photographed by a corresponding reference smartphone identified by a corresponding reference smartphone identifier; training, by the one or more processors of the machine and based on the training data, an artificial intelligence (AI) model to output a predicted value of analyte strength based on a candidate smartphone identifier and a candidate image that depicts a candidate test strip (e.g., a candidate LFA test strip) photographed by a candidate smartphone identified by the candidate smartphone identifier; and providing, by one or more processors of the machine, the trained AI model to the candidate smartphone.
[0119] A second example provides a method according to the first example, wherein: a reference smartphone identifier among the reference smartphone identifiers includes a vector of parameters that define a camera calibration curve of a smartphone model of the reference smartphone identified by the reference smartphone identifier.
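To make the second example concrete, the sketch below fits an assumed gamma-style camera response curve for a smartphone model and uses the fitted parameters as the identifier vector; the parametric form, helper names, and calibration data are all illustrative, since the example requires only a vector of parameters that define a camera calibration curve.

import numpy as np
from scipy.optimize import curve_fit


def response_curve(x, gain, gamma, offset):
    # Assumed parametric form relating true intensity to measured intensity.
    return gain * np.power(x, gamma) + offset


def calibration_identifier(true_intensities, measured_intensities):
    """Fit the assumed curve and return its parameters as the identifier."""
    params, _ = curve_fit(response_curve, true_intensities,
                          measured_intensities, p0=[1.0, 1.0, 0.0])
    return params  # e.g., [gain, gamma, offset]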
[0120] A third example provides a method according to the first example or the second example, wherein: a reference smartphone identifier among the reference smartphone identifiers includes a vector of embeddings derived from a one-hot vector that encodes a smartphone model of the reference smartphone identified by the reference smartphone identifier.

[0121] A fourth example provides a method according to any of the first through third examples, wherein: the candidate smartphone identifier includes a vector of parameters that define a camera calibration curve of a smartphone model of the candidate smartphone identified by the candidate smartphone identifier.
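The third example's encoding can be sketched as follows: a one-hot vector over known smartphone models is mapped through an embedding table to a dense identifier vector. The vocabulary size and embedding width below are assumptions.

import torch
import torch.nn as nn

NUM_MODELS = 500  # assumed count of distinct smartphone models
EMBED_DIM = 8     # assumed width of the dense identifier vector

embedding_table = nn.Embedding(NUM_MODELS, EMBED_DIM)


def identifier_from_one_hot(one_hot):
    # Derive the dense smartphone identifier from a one-hot model encoding.
    model_index = one_hot.argmax(dim=-1)
    return embedding_table(model_index)

In practice, such an embedding table would typically be learned jointly with the rest of the model during training.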
[0122] A fifth example provides a method according to any of the first through fourth examples, wherein: the candidate smartphone identifier includes a vector of parameters that define a camera calibration curve of the candidate smartphone identified by the candidate smartphone identifier.
[0123] A sixth example provides a method according to any of the first through fifth examples, wherein: the reference values of analyte strength indicate reference concentrations of an analyte; and the predicted value of analyte strength indicates a predicted concentration of the analyte.
[0124] A seventh example provides a method according to any of the first through fifth examples, wherein: the reference values of analyte strength indicate reference classifications of reference concentrations of an analyte; and the predicted value of analyte strength indicates a predicted classification of a candidate concentration of the analyte.
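The sixth and seventh examples differ only in the form of the output: the same fused features could feed a regression head (a predicted concentration) or a classification head (a predicted class of concentration). A sketch, with the feature width and class labels assumed:

import torch.nn as nn

FEATURE_DIM = 64  # assumed width of the fused image-plus-identifier features

# Sixth example: predict a concentration directly (regression).
regression_head = nn.Linear(FEATURE_DIM, 1)

# Seventh example: predict a concentration class (classification).
classification_head = nn.Sequential(
    nn.Linear(FEATURE_DIM, 3),  # e.g., negative / low / high (assumed labels)
    nn.Softmax(dim=-1),
)

During training, one would typically omit the softmax and apply a cross-entropy loss to the raw logits.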
[0125] An eighth example provides a machine-readable medium (e.g., a non-transitory machine-readable storage medium) comprising instructions that, when executed by one or more processors of a machine, cause the machine to perform operations comprising: accessing training data that includes reference values of analyte strength with corresponding reference smartphone identifiers and corresponding reference images, each of the reference images depicting a corresponding reference test strip (e.g., a reference LFA test strip) photographed by a corresponding reference smartphone identified by a corresponding reference smartphone identifier; based on the training data, training an artificial intelligence (AI) model to output a predicted value of analyte strength based on a candidate smartphone identifier and a candidate image that depicts a candidate test strip (e.g., a candidate LFA test strip) photographed by a candidate smartphone identified by the candidate smartphone identifier; and providing the trained AI model to the candidate smartphone.
[0126] A ninth example provides a machine-readable medium according to the eighth example, wherein: a reference smartphone identifier among the reference smartphone identifiers includes a vector of parameters that define a camera calibration curve of a smartphone model of the reference smartphone identified by the reference smartphone identifier.
[0127] A tenth example provides a machine-readable medium according to the eighth example or the ninth example, wherein: a reference smartphone identifier among the reference smartphone identifiers includes a vector of embeddings derived from a one-hot vector that encodes a smartphone model of the reference smartphone identified by the reference smartphone identifier.
[0128] An eleventh example provides a machine-readable medium according to any of the eighth through tenth examples, wherein: the candidate smartphone identifier includes a vector of parameters that define a camera calibration curve of a smartphone model of the candidate smartphone identified by the candidate smartphone identifier.
[0129] A twelfth example provides a machine-readable medium according to any of the eighth through eleventh examples, wherein: the candidate smartphone identifier includes a vector of parameters that define a camera calibration curve of the candidate smartphone identified by the candidate smartphone identifier.
[0130] A thirteenth example provides a machine-readable medium according to any of the eighth through twelfth examples, wherein: the reference values of analyte strength indicate reference concentrations of an analyte; and the predicted value of analyte strength indicates a predicted concentration of the analyte.
[0131] A fourteenth example provides a machine-readable medium according to any of the eighth through twelfth examples, wherein: the reference values of analyte strength indicate reference classifications of reference concentrations of an analyte; and the predicted value of analyte strength indicates a predicted classification of a candidate concentration of the analyte.
[0132] A fifteenth example provides a system (e.g., a server system or other computer system) comprising: one or more processors; and a memory storing instructions that, when executed by at least one processor among the one or more processors, cause the system to perform operations comprising: accessing training data that includes reference values of analyte strength with corresponding reference smartphone identifiers and corresponding reference images, each of the reference images depicting a corresponding reference test strip (e.g., a reference LFA test strip) photographed by a corresponding reference smartphone identified by a corresponding reference smartphone identifier; based on the training data, training an artificial intelligence (AI) model to output a predicted value of analyte strength based on a candidate smartphone identifier and a candidate image that depicts a candidate test strip (e.g., a candidate LFA test strip) photographed by a candidate smartphone identified by the candidate smartphone identifier; and providing the trained AI model to the candidate smartphone.
[0133] A sixteenth example provides a system according to the fifteenth example, wherein: a reference smartphone identifier among the reference smartphone identifiers includes a vector of parameters that define a camera calibration curve of a smartphone model of the reference smartphone identified by the reference smartphone identifier.
[0134] A seventeenth example provides a system according to the fifteenth example or the sixteenth example, wherein: a reference smartphone identifier among the reference smartphone identifiers includes a vector of embeddings derived from a one-hot vector that encodes a smartphone model of the reference smartphone identified by the reference smartphone identifier.
[0135] An eighteenth example provides a system according to any of the fifteenth through seventeenth examples, wherein: the candidate smartphone identifier includes a vector of parameters that define a camera calibration curve of a smartphone model of the candidate smartphone identified by the candidate smartphone identifier.
[0136] A nineteenth example provides a system according to any of the fifteenth through eighteenth examples, wherein: the candidate smartphone identifier includes a vector of parameters that define a camera calibration curve of the candidate smartphone identified by the candidate smartphone identifier.
[0137] A twentieth example provides a system according to any of the fifteenth through nineteenth examples, wherein: the reference values of analyte strength indicate reference concentrations of an analyte; and the predicted value of analyte strength indicates a predicted concentration of the analyte.
[0138] A twenty-first example provides a method comprising: obtaining, by one or more processors of a smartphone, an artificial intelligence (AI) model trained to output a predicted value of analyte strength based on a candidate smartphone identifier and a candidate image that depicts a candidate test strip (e.g., a candidate LFA test strip) photographed by a candidate smartphone identified by the candidate smartphone identifier, the AI model being trained based on training data that includes reference values of analyte strength with corresponding reference smartphone identifiers and corresponding reference images, each of the reference images depicting a corresponding reference test strip (e.g., a reference LFA test strip) photographed by a corresponding reference smartphone identified by a corresponding reference smartphone identifier; generating, by the one or more processors of the smartphone, the predicted value of analyte strength by inputting the candidate smartphone identifier and the candidate image to the AI model, the AI model outputting the predicted value of analyte strength; and presenting, by the one or more processors of the smartphone, the generated predicted value of analyte strength.
[0139] A twenty-second example provides a machine-readable medium (e.g., a non-transitory machine-readable storage medium) comprising instructions that, when executed by one or more processors of a machine, cause the machine to perform operations comprising: obtaining an artificial intelligence (AI) model trained to output a predicted value of analyte strength based on a candidate smartphone identifier and a candidate image that depicts a candidate test strip (e.g., a candidate LFA test strip) photographed by a candidate smartphone identified by the candidate smartphone identifier, the AI model being trained based on training data that includes reference values of analyte strength with corresponding reference smartphone identifiers and corresponding reference images, each of the reference images depicting a corresponding reference test strip (e.g., a reference LFA test strip) photographed by a corresponding reference smartphone identified by a corresponding reference smartphone identifier; generating the predicted value of analyte strength by inputting the candidate smartphone identifier and the candidate image to the AI model, the AI model outputting the predicted value of analyte strength; and presenting the generated predicted value of analyte strength.

[0140] A twenty-third example provides a system (e.g., a smartphone or other computer system) comprising: one or more processors; and a memory storing instructions that, when executed by at least one processor among the one or more processors, cause the system to perform operations comprising: obtaining an artificial intelligence (AI) model trained to output a predicted value of analyte strength based on a candidate smartphone identifier and a candidate image that depicts a candidate test strip (e.g., a candidate LFA test strip) photographed by a candidate smartphone identified by the candidate smartphone identifier, the AI model being trained based on training data that includes reference values of analyte strength with corresponding reference smartphone identifiers and corresponding reference images, each of the reference images depicting a corresponding reference test strip (e.g., a reference LFA test strip) photographed by a corresponding reference smartphone identified by a corresponding reference smartphone identifier; generating the predicted value of analyte strength by inputting the candidate smartphone identifier and the candidate image to the AI model, the AI model outputting the predicted value of analyte strength; and presenting the generated predicted value of analyte strength.
[0141] A twenty-fourth example provides a carrier medium carrying machine-readable instructions for controlling a machine to carry out the operations (e.g., method operations) performed in any one of the previously described examples.

Claims

What is claimed is:
1. A method comprising: accessing, by one or more processors of a machine, training data that includes reference values of analyte strength with corresponding reference smartphone identifiers and corresponding reference images, each of the reference images depicting a corresponding reference test strip photographed by a corresponding reference smartphone identified by a corresponding reference smartphone identifier; training, by the one or more processors of the machine and based on the training data, an artificial intelligence (AI) model to output a predicted value of analyte strength based on a candidate smartphone identifier and a candidate image that depicts a candidate test strip photographed by a candidate smartphone identified by the candidate smartphone identifier; and providing, by one or more processors of the machine, the trained AI model to the candidate smartphone.
2. The method of claim 1, wherein: a reference smartphone identifier among the reference smartphone identifiers includes a vector of parameters that define a camera calibration curve of a smartphone model of the reference smartphone identified by the reference smartphone identifier.
3. The method of claim 1, wherein: a reference smartphone identifier among the reference smartphone identifiers includes a vector of embeddings derived from a one-hot vector that encodes a smartphone model of the reference smartphone identified by the reference smartphone identifier.
4. The method of claim 1, wherein: the candidate smartphone identifier includes a vector of parameters that define a camera calibration curve of a smartphone model of the candidate smartphone identified by the candidate smartphone identifier.
5. The method of claim 1, wherein: the candidate smartphone identifier includes a vector of parameters that define a camera calibration curve of the candidate smartphone identified by the candidate smartphone identifier.
6. The method of claim 1, wherein: the reference values of analyte strength indicate reference concentrations of an analyte; and the predicted value of analyte strength indicates a predicted concentration of the analyte.
7. The method of claim 1, wherein: the reference values of analyte strength indicate reference classifications of reference concentrations of an analyte; and the predicted value of analyte strength indicates a predicted classification of a candidate concentration of the analyte.
8. A machine-readable medium comprising instructions that, when executed by one or more processors of a machine, cause the machine to perform operations comprising: accessing training data that includes reference values of analyte strength with corresponding reference smartphone identifiers and corresponding reference images, each of the reference images depicting a corresponding reference test strip photographed by a corresponding reference smartphone identified by a corresponding reference smartphone identifier; based on the training data, training an artificial intelligence (AI) model to output a predicted value of analyte strength based on a candidate smartphone identifier and a candidate image that depicts a candidate test strip photographed by a candidate smartphone identified by the candidate smartphone identifier; and providing the trained AI model to the candidate smartphone.
9. The machine-readable medium of claim 8, wherein: a reference smartphone identifier among the reference smartphone identifiers includes a vector of parameters that define a camera calibration curve of a smartphone model of the reference smartphone identified by the reference smartphone identifier.
10. The machine-readable medium of claim 8, wherein: a reference smartphone identifier among the reference smartphone identifiers includes a vector of embeddings derived from a one-hot vector that encodes a smartphone model of the reference smartphone identified by the reference smartphone identifier.
11. The machine-readable medium of claim 8, wherein: the candidate smartphone identifier includes a vector of parameters that define a camera calibration curve of a smartphone model of the candidate smartphone identified by the candidate smartphone identifier.
12. The machine-readable medium of claim 8, wherein: the candidate smartphone identifier includes a vector of parameters that define a camera calibration curve of the candidate smartphone identified by the candidate smartphone identifier.
13. The machine-readable medium of claim 8, wherein: the reference values of analyte strength indicate reference concentrations of an analyte; and the predicted value of analyte strength indicates a predicted concentration of the analyte.
14. The machine-readable medium of claim 8, wherein: the reference values of analyte strength indicate reference classifications of reference concentrations of an analyte; and the predicted value of analyte strength indicates a predicted classification of a candidate concentration of the analyte.
15. A system comprising: one or more processors; and a memory storing instructions that, when executed by at least one processor among the one or more processors, cause the system to perform operations comprising: accessing training data that includes reference values of analyte strength with corresponding reference smartphone identifiers and corresponding reference images, each of the reference images depicting a corresponding reference test strip photographed by a corresponding reference smartphone identified by a corresponding reference smartphone identifier; based on the training data, training an artificial intelligence (AI) model to output a predicted value of analyte strength based on a candidate smartphone identifier and a candidate image that depicts a candidate test strip photographed by a candidate smartphone identified by the candidate smartphone identifier; and providing the trained AI model to the candidate smartphone.
16. The system of claim 15, wherein: a reference smartphone identifier among the reference smartphone identifiers includes a vector of parameters that define a camera calibration curve of a smartphone model of the reference smartphone identified by the reference smartphone identifier.
17. The system of claim 15, wherein: a reference smartphone identifier among the reference smartphone identifiers includes a vector of embeddings derived from a one-hot vector that encodes a smartphone model of the reference smartphone identified by the reference smartphone identifier.
18. The system of claim 15, wherein: the candidate smartphone identifier includes a vector of parameters that define a camera calibration curve of a smartphone model of the candidate smartphone identified by the candidate smartphone identifier.
19. The system of claim 15, wherein: the candidate smartphone identifier includes a vector of parameters that define a camera calibration curve of the candidate smartphone identified by the candidate smartphone identifier.
20. The system of claim 15, wherein: the reference values of analyte strength indicate reference concentrations of an analyte; and the predicted value of analyte strength indicates a predicted concentration of the analyte.
21. A method comprising: obtaining, by one or more processors of a smartphone, an artificial intelligence (AI) model trained to output a predicted value of analyte strength based on a candidate smartphone identifier and a candidate image that depicts a candidate test strip photographed by a candidate smartphone identified by the candidate smartphone identifier, the AI model being trained based on training data that includes reference values of analyte strength with corresponding reference smartphone identifiers and corresponding reference images, each of the reference images depicting a corresponding reference test strip photographed by a corresponding reference smartphone identified by a corresponding reference smartphone identifier; generating, by the one or more processors of the smartphone, the predicted value of analyte strength by inputting the candidate smartphone identifier and the candidate image to the AI model, the AI model outputting the predicted value of analyte strength; and presenting, by the one or more processors of the smartphone, the generated predicted value of analyte strength.
22. A machine-readable medium comprising instructions that, when executed by one or more processors of a machine, cause the machine to perform operations comprising: obtaining an artificial intelligence (AI) model trained to output a predicted value of analyte strength based on a candidate smartphone identifier and a candidate image that depicts a candidate test strip photographed by a candidate smartphone identified by the candidate smartphone identifier, the AI model being trained based on training data that includes reference values of analyte strength with corresponding reference smartphone identifiers and corresponding reference images, each of the reference images depicting a corresponding reference test strip photographed by a corresponding reference smartphone identified by a corresponding reference smartphone identifier; generating the predicted value of analyte strength by inputting the candidate smartphone identifier and the candidate image to the AI model, the AI model outputting the predicted value of analyte strength; and presenting the generated predicted value of analyte strength.
23. A system comprising: one or more processors; and a memory storing instructions that, when executed by at least one processor among the one or more processors, cause the system to perform operations comprising: obtaining an artificial intelligence (AI) model trained to output a predicted value of analyte strength based on a candidate smartphone identifier and a candidate image that depicts a candidate test strip photographed by a candidate smartphone identified by the candidate smartphone identifier, the AI model being trained based on training data that includes reference values of analyte strength with corresponding reference smartphone identifiers and corresponding reference images, each of the reference images depicting a corresponding reference test strip photographed by a corresponding reference smartphone identified by a corresponding reference smartphone identifier; generating the predicted value of analyte strength by inputting the candidate smartphone identifier and the candidate image to the AI model, the AI model outputting the predicted value of analyte strength; and presenting the generated predicted value of analyte strength.
PCT/US2022/042243 2021-09-01 2022-08-31 Imaging test strips WO2023034441A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202163239537P 2021-09-01 2021-09-01
US63/239,537 2021-09-01

Publications (1)

Publication Number Publication Date
WO2023034441A1 2023-03-09

Family ID=85411610

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2022/042243 WO2023034441A1 (en) 2021-09-01 2022-08-31 Imaging test strips

Country Status (1)

Country Link
WO (1) WO2023034441A1 (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20150055134A1 (en) * 2012-02-03 2015-02-26 University Of Cincinnati Method and system for analyzing a colorimetric assay
WO2018194525A1 (en) * 2017-04-18 2018-10-25 Yeditepe Universitesi Biochemical analyser based on a machine learning algorithm using test strips and a smartdevice
WO2020128146A1 (en) * 2018-12-19 2020-06-25 Actim Oy System and method for analysing the image of a point-of-care test result
WO2021118604A1 (en) * 2019-12-13 2021-06-17 Google Llc Training speech synthesis to generate distinct speech sounds
US10956810B1 (en) * 2020-11-23 2021-03-23 Audere Artificial intelligence analysis of test strip method, apparatus, and system

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116109815A (en) * 2023-04-11 2023-05-12 深圳市易瑞生物技术股份有限公司 Positioning method and device for test card calculation area and terminal equipment
CN116109815B (en) * 2023-04-11 2023-07-28 深圳市易瑞生物技术股份有限公司 Positioning method and device for test card calculation area and terminal equipment

Similar Documents

Publication Publication Date Title
Mutlu et al. Smartphone-based colorimetric detection via machine learning
CN107209935B (en) For measuring the system and method for mobile document image quality
TWI756365B (en) Image analysis systems and related methods
US11674883B2 (en) Image-based assay performance improvement
US11333658B2 (en) Urine test strip comprising timer, and method for detecting and analyzing urine test strip
JP5198476B2 (en) Method for determining focus position and vision inspection system
JP2021518025A (en) Focus-weighted machine learning classifier error prediction for microscope slide images
CN107424160A (en) The system and method that image center line is searched by vision system
CN111325717B (en) Mobile phone defect position identification method and equipment
CN107328776A (en) A kind of quick determination method of immune chromatography test card
US20130170756A1 (en) Edge detection apparatus, program and method for edge detection
CN104812288A (en) Image processing device, image processing method, and image processing program
US20230177680A1 (en) Assay reading method
US20150332120A1 (en) Detecting and processing small text in digital media
US20230146924A1 (en) Neural network analysis of lfa test strips
US20230274538A1 (en) Adaptable Automated Interpretation of Rapid Diagnostic Tests Using Self-Supervised Learning and Few-Shot Learning
JP2023502766A (en) Methods for Determining Concentrations of Analytes in Body Fluids
US20220414827A1 (en) Training apparatus, training method, and medium
WO2023034441A1 (en) Imaging test strips
Khalili Moghaddam et al. Smartphone-based quantitative measurements on holographic sensors
Razzell Hollis et al. Quantitative photography for rapid, reliable measurement of marine macro‐plastic pollution
CN117173154A (en) Online image detection system and method for glass bottle
CN111213156A (en) Character recognition sharpness determination
WO2022016022A1 (en) System and method for automated test result diagnostics
KR20150009842A (en) System for testing camera module centering and method for testing camera module centering using the same

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
Ref document number: 22865518; Country of ref document: EP; Kind code of ref document: A1

NENP Non-entry into the national phase
Ref country code: DE

122 Ep: pct application non-entry in european phase
Ref document number: 22865518; Country of ref document: EP; Kind code of ref document: A1