WO2024036405A1 - Method for sensing an analyte using machine learning - Google Patents

Method for sensing an analyte using machine learning

Info

Publication number
WO2024036405A1
WO2024036405A1 (PCT/CA2023/051089)
Authority
WO
WIPO (PCT)
Prior art keywords
sample
analyte
thc
machine learning
learning algorithm
Prior art date
Application number
PCT/CA2023/051089
Other languages
English (en)
Inventor
Seyyedeh Hoda MOZAFFARI
Greter Amelia ORTEGA RODRIGUEZ
Herlys VILTRES COBAS
Syed Rahin AHMED
Seshasai SRINIVASAN
Amin Reza Rajabzadeh
Original Assignee
Eye3Concepts Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Eye3Concepts Inc.
Publication of WO2024036405A1

Links

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N27/00 Investigating or analysing materials by the use of electric, electrochemical, or magnetic means
    • G01N27/26 Investigating or analysing materials by the use of electric, electrochemical, or magnetic means by investigating electrochemical variables; by using electrolysis or electrophoresis
    • G01N27/416 Systems
    • G01N27/49 Systems involving the determination of the current at a single specific value, or small range of values, of applied voltage for producing selective measurement of one or more particular ionic species
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N33/00 Investigating or analysing materials by specific methods not covered by groups G01N1/00 - G01N31/00
    • G01N33/48 Biological material, e.g. blood, urine; Haemocytometers
    • G01N33/483 Physical analysis of biological material
    • G01N33/487 Physical analysis of biological material of liquid biological material
    • G01N33/48707 Physical analysis of biological material of liquid biological material by electrical means
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N20/00 Machine learning

Definitions

  • This disclosure relates generally to methods of sensing analytes using machine learning.
  • electrochemical sensors have been developed to detect different analytes in biological samples or biologically derived samples.
  • Examples of electrochemical sensors include microfluidic chips and test strips.
  • the advantages of microfluidic chips include fast assay time, reduced volume of reagents and samples, increased accuracy, ease of use and portability.
  • Most biological samples, such as body fluids (e.g. oral fluids), are complex matrices that make the detection of analytes difficult, due to the many potential sources of interference in the sample and the natural variance between the different subjects from whom the biological sample is obtained.
  • the viscosity of human saliva is approximately 1.30 times higher than that of water, thereby affecting the analyte’s diffusion and the reaction rates on the electrodes of electrochemical sensors.
  • saliva has various natural or adulterant electroactive components that may interfere with the analyte electrochemical performance.
  • the pH, conductivity, and the protein-chemical-solid compositions of saliva change over time and vary from subject to subject.
  • a method of sensing an analyte in a sample by electrochemical detection, comprising: receiving the sample on a sample receiving region of an electrochemical sensor, the sample receiving region being in fluid communication with a sensing electrode of the sensor; applying an electric potential scan in a target range of electric potentials to the sensing electrode to induce an electrochemical reaction with the analyte; measuring an electrical signal from the sensing electrode while the electric potential is applied; inputting the electrical signal into a processing device having at least one machine learning algorithm operating therein; and executing, by the processing device, the at least one machine learning algorithm to determine from the electrical signal a presence or an absence of the analyte in the sample.
  • the sensing electrode has a plurality of sensor analytes associated therewith.
  • the electrochemical reaction is an oxidation, a reduction, or an enzymatic reaction.
  • the electrical signal is an electric current.
  • the sample comprises a body fluid, cells from a subject or a biomolecule from the subject.
  • the sample comprises one or more of oral fluid, sputum, urine, tears, blood, plasma, nasal fluid, sweat, cerebral spinal fluid, suspended cells, and feces.
  • the electric potential is applied using a voltammetric technique.
  • the voltammetric technique is selected from square wave voltammetry, cyclic voltammetry, linear sweep voltammetry, and differential pulse voltammetry. In some embodiments, the voltammetric technique is square wave voltammetry. In some embodiments, the target range of electric potential is from 0 to 5 V.
  • executing the at least one machine learning algorithm to determine the presence or the absence of the analyte in the sample comprises determining a range of values of a concentration of the analyte in the sample.
  • executing the at least one machine learning algorithm to determine the presence or the absence of the analyte in the sample comprises determining a single value of a concentration of the analyte in the sample.
  • executing the at least one machine learning algorithm to determine the presence or the absence of the analyte in the sample comprises determining whether the concentration of the analyte in the sample is above or below a predetermined concentration threshold.
  • the at least one machine learning algorithm is configured to decrease noise present in the electrical signal for determining the presence or the absence of the analyte in the sample, the noise resulting from at least one of subject-to-subject variations in the sample, discrepancies between batches of the sensing electrode, and analog compound interference in the sample.
  • the at least one machine learning algorithm is trained with at least one statistical feature of the electrical signal, the at least one statistical feature comprising at least one of a maximum, a minimum, a distance between the maximum and the minimum, a mean, a variance, a skewness, and a kurtosis.
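The statistical features listed above can be computed directly from the recorded current trace. A minimal sketch follows; the function name and the moment-based definitions of skewness and kurtosis are illustrative choices, not the patent's specified implementation:

```python
import statistics

def signal_features(signal):
    """Compute the statistical features named above for a voltammetric
    current trace given as a list of floats (illustrative sketch)."""
    mean = statistics.fmean(signal)
    var = statistics.pvariance(signal, mu=mean)
    std = var ** 0.5
    n = len(signal)
    # Skewness and kurtosis as standardized third and fourth central moments.
    skew = sum(((x - mean) / std) ** 3 for x in signal) / n if std else 0.0
    kurt = sum(((x - mean) / std) ** 4 for x in signal) / n if std else 0.0
    return {
        "max": max(signal),
        "min": min(signal),
        "range": max(signal) - min(signal),  # distance between max and min
        "mean": mean,
        "variance": var,
        "skewness": skew,
        "kurtosis": kurt,
    }
```

Training on a handful of such features instead of the full trace keeps the input dimensionality small, which the surrounding passages suggest matters for the alternative of feeding in the entire signal.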
  • the at least one machine learning algorithm is trained with an entirety of the electrical signal.
  • the method further comprising reducing a dimensionality of the electrical signal prior to executing the at least one machine learning algorithm.
  • the dimensionality of the electrical signal is reduced using one of principal component analysis (PCA), locally linear embedding (LLE), multidimensional scaling (MDS), t-distributed stochastic neighbor embedding (t-SNE), and linear discriminant analysis (LDA).
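For the PCA option mentioned above, a minimal SVD-based sketch is shown below (assuming each voltammogram is one row of a matrix); this is an illustrative reduction step, not the patent's specific pipeline, and the component count is arbitrary:

```python
import numpy as np

def pca_reduce(X, n_components=2):
    """Project each row of X (one voltammogram) onto its top principal
    components via SVD -- a minimal PCA sketch."""
    Xc = X - X.mean(axis=0)              # center each potential-step column
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T      # scores in the reduced space

# e.g. 6 voltammograms sampled at 100 potential steps -> a 6 x 2 score matrix
rng = np.random.default_rng(0)
scores = pca_reduce(rng.normal(size=(6, 100)), n_components=2)
```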
  • the at least one machine learning algorithm is a supervised machine learning algorithm or an unsupervised machine learning algorithm.
  • the at least one machine learning algorithm is configured to perform at least one of a regression analysis and a classification task to determine the concentration of the analyte from the electrical signal.
  • the at least one machine learning algorithm is configured to perform the classification task using one of logistic regression, softmax regression, decision tree, random forest (RF), and an artificial neural network (ANN).
  • the at least one machine learning algorithm is configured to perform the regression analysis using one of linear regression, gradient descent, polynomial regression, regularized linear model, ridge regression, lasso regression, and support vector machine (SVM).
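One of the regression options listed above, linear regression fitted by gradient descent, can be sketched in a few lines. The calibration data and learning-rate settings below are toy values for illustration only:

```python
def fit_linear_gd(xs, ys, lr=0.01, epochs=2000):
    """Fit y = w*x + b by batch gradient descent on the mean squared error --
    an illustrative sketch of one regression option, not the patent's model."""
    w, b, n = 0.0, 0.0, len(xs)
    for _ in range(epochs):
        grad_w = sum(2 * (w * x + b - y) * x for x, y in zip(xs, ys)) / n
        grad_b = sum(2 * (w * x + b - y) for x, y in zip(xs, ys)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b

# toy calibration: peak current (a.u.) vs. concentration follows y = 2x + 1
w, b = fit_linear_gd([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0])
```

In practice the fitted line would map a signal feature (e.g. peak current) to an estimated analyte concentration.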
  • the at least one machine learning algorithm uses XGBoost (extreme Gradient Boosting).
  • the at least one machine learning algorithm comprises a plurality of different machine learning algorithms combined into an ensemble machine learning model.
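Majority voting is one common way to combine several classifiers into an ensemble; the patent does not specify the combination rule, so the sketch below (with stand-in threshold "models" on a peak-current feature) is purely illustrative:

```python
from collections import Counter

def ensemble_predict(models, x):
    """Majority vote over several trained classifiers -- a minimal sketch of
    the ensemble idea; each 'model' is any callable returning a class label."""
    votes = [model(x) for model in models]
    return Counter(votes).most_common(1)[0][0]

# three hypothetical per-algorithm decision rules on a peak-current feature
models = [lambda x: x > 0.5, lambda x: x > 0.7, lambda x: x > 0.9]
label = ensemble_predict(models, 0.8)  # two of three vote True
```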
  • the analyte is a metabolite, a drug of abuse, or a hormone.
  • Fig. 1 is a flow chart of an example method for sensing an analyte in a sample by electrochemical detection.
  • Fig. 2 is a schematic graph of a support vector machine (SVM) regression, which can also be referred to as a support vector regression (SVR).
  • Fig. 3 is a schematic representation of a neuron of an artificial neural network (ANN).
  • Fig. 4 is a block diagram of a detection system for detecting the sample analyte.
  • Fig. 5A is a graph showing square wave voltammetry (SWV) signals during the THC deposition of different modified electrodes (m-Zensor, m-Z) with 0, 2, or 5 ng/mL of tetrahydrocannabinol (THC).
  • Fig. 5B is a graph showing the raw data of three m-Zensor (m-Z) electrodes and one pristine (P-Z) electrode per THC concentration (0, 2, and 5 ng/mL).
  • Fig. 5C is a graph showing an example of the subtraction of the signals for the samples (THC 0, 2, and 5 ng/mL) recovered with m-Z-THC minus the signal obtained with pristine Zensor (Fig. 5A and Fig. 5B).
  • Fig. 6A is a bar graph showing the sensor electrochemical performance for three saliva samples (S1, S2, S3) with THC collected with OFCD-100 swab and with no filtration.
  • Fig. 6B is a bar graph showing the sensor electrochemical performance for three saliva samples (S1, S2, S3) with THC collected with OFCD-100 swab and after being filtered with glass wool.
  • the initial THC deposition amount (THCi) was 100 ng.
  • Fig. 7 is a bar graph showing the sensor electrochemical performance using different THC initial deposition amounts (Dep) of 100, 130, and 150 ng and testing synthetic saliva (SS) and five biological saliva samples (S5-S8) with THC 0, 2, and 5 ng/mL (respectively left to right for each of 100, 130, and 150 ng) collected and filtered with OFCD-100 swab/glass wool.
  • Fig. 8A is a bar graph showing sensor electrochemical performance using a different batch labeled “Batch 2” of Zensor electrodes and a THC initial deposition amount of 130 ng.
  • Real saliva samples (S9-S12) with THC 0, 2, and 5 ng/mL collected and filtered with OFCD-100 swab/glass wool were tested.
  • Fig. 8B is a bar graph showing sensor electrochemical performance using a different batch labeled “Batch 3” of Zensor electrodes and a THC initial deposition amount of 130 ng.
  • Real saliva samples (S9-S12) with THC 0, 2, and 5 ng/mL collected and filtered with OFCD-100 swab/glass wool were tested.
  • Fig. 9 is a bar graph showing sensor electrochemical performance using different potentiostats (mono-potentiostat (P) and multichannel potentiostat (MP)) and a THC initial deposition amount of 130 ng.
  • Fig. 10A is a schematic representation of the oxidation of the THC and cannabidiol (CBD) molecules.
  • Fig. 10B is a square wave voltammetric (SWV) response of THC-based (m-Z-THC, 130 ng) and CBD-based (m-Z-CBD, 100 ng) sensors in phosphate buffered saline (PBS).
  • Fig. 10C is a graph showing the current as a function of potential for the pristine Zensor (P-Z), m-Z-THC (130 ng) and m-Z-CBD (100 ng) electrodes.
  • Fig. 11A is a graph showing the SWV response of electrochemical sensors modified with THC (m-Z-THC) for 0, 2, and 5 ng/mL THC detection in the presence of interferences (CBD 0, 10, and 50 ng/mL) in human saliva.
  • Fig. 11B is a graph showing the SWV response of electrochemical sensors modified with CBD (m-Z-CBD) for 0, 2, and 5 ng/mL CBD detection in the presence of interferences (THC 0, 10, and 50 ng/mL) in human saliva.
  • Fig. 11C is a bar graph showing the SWV response of electrochemical sensors modified with THC (m-Z-THC) for 0, 2, and 5 ng/mL THC detection in the presence of interferences (CBD 0, 10, and 50 ng/mL) in six human saliva samples (S1-S6).
  • Fig. 11D is a bar graph showing the SWV response of electrochemical sensors modified with CBD (m-Z-CBD) for 0, 2, and 5 ng/mL CBD detection in the presence of interferences (THC 0, 10, and 50 ng/mL) in six human saliva samples (S1-S6).
  • Fig. 12A is a histogram showing the training of saliva samples containing THC 0 ng/mL in the presence of CBD, and using m-Z-THC sensor.
  • Fig. 12B is a histogram showing the results of saliva samples containing THC 0 ng/mL in the presence of CBD, and using m-Z-THC sensor, following the training in Fig. 12A.
  • Fig. 12C is a histogram showing the training of saliva samples containing THC 2 ng/mL in the presence of CBD, and using m-Z-THC sensor.
  • Fig. 12D is a histogram showing the results of saliva samples containing THC 2 ng/mL in the presence of CBD, and using m-Z-THC sensor, following the training in Fig. 12C.
  • Fig. 12E is a histogram showing the training of saliva samples containing THC 5 ng/mL in the presence of CBD, and using m-Z-THC sensor.
  • Fig. 12F is a histogram showing the results of saliva samples containing THC 5 ng/mL in the presence of CBD, and using m-Z-THC sensor, following the training in Fig. 12E.
  • Fig. 13A is a graph showing the feature importance as determined by the mean decrease in impurity (MDI).
  • Fig. 13B is a graph showing the feature importance as determined by the mean decrease in accuracy (MDA).
  • Fig. 14A is a graph showing the accuracy as a function of the maximum depth for the training set for a RF model evaluated with Gini impurity (for 5, 10, 20, 40, 80, and 160 trees).
  • Fig. 14B is a graph showing the accuracy as a function of the maximum depth for the testing set for a RF model evaluated with Gini impurity (for 5, 10, 20, 40, 80, and 160 trees).
  • Fig. 15A is a graph showing the accuracy as a function of the maximum depth for the training set for a RF model evaluated with entropy (for 5, 10, 20, 40, 80, and 160 trees).
  • Fig. 15B is a graph showing the accuracy as a function of the maximum depth for the testing set for a RF model evaluated with entropy (for 5, 10, 20, 40, 80, and 160 trees).
  • Fig. 16 is a graph showing the accuracy as a function of the minimum number of samples for the RF model.
  • Fig. 17A is a graph showing the accuracy as a function of the number of principal components (PC) in the training set for different kernel functions using the SVM model.
  • Fig. 17B is a graph showing the accuracy as a function of the number of principal components (PC) in the testing set for different kernel functions using the SVM model.
  • Fig. 18A is a graph showing the training accuracy as a function of the batch size for the ANN design 1 training set.
  • Fig. 18B is a graph showing the training accuracy as a function of the batch size for the ANN design 1 testing set.
  • Fig. 18C is a graph showing the training accuracy as a function of the batch size for the ANN design 2 training set.
  • Fig. 18D is a graph showing the training accuracy as a function of the batch size for the ANN design 2 testing set.
  • Fig. 18E is a graph showing the training accuracy as a function of the batch size for the ANN design 3 training set.
  • Fig. 18F is a graph showing the training accuracy as a function of the batch size for the ANN design 3 testing set.
  • Fig. 18G is a graph showing the training accuracy as a function of the batch size for the ANN design 4 training set.
  • Fig. 18H is a graph showing the training accuracy as a function of the batch size for the ANN design 4 testing set.
  • Fig. 18I is a graph showing the training accuracy as a function of the batch size for the ANN design 5 training set.
  • Fig. 18J is a graph showing the training accuracy as a function of the batch size for the ANN design 5 testing set.
  • Fig. 19 is a graph showing the computation time as a function of the batch size for ANN design 3.
  • Machine learning (ML) techniques are at the intersection of statistics and computer science where computers can learn from past data without explicit programming.
  • the major applications of ML algorithms are classification, regression, and anomaly detection tasks.
  • a method of sensing the presence or absence of an analyte in a sample by electrochemical detection using an electrochemical sensor and at least one machine learning (ML) algorithm has the advantage of reducing, limiting or preferably removing the possible interferences during the sensing assay when determining the presence or absence of the analyte, or when determining a concentration range or concentration value of the analyte.
  • the interference can be caused by sample-to-sample variation between different subjects. When the interference is not accounted for when determining the presence or absence of the analyte, and more particularly when determining a concentration range or a concentration value of the analyte, a significant loss of accuracy in the determination can occur.
  • additional sources of interference, error or noise include batch-to-batch variation of commercial electrodes used in electrochemical sensors, the type of electric reader, the saliva collection method and pre-treatment (if any).
  • One of the objectives of the present disclosure is to account for substances that can be an interference in the electrical signal. These interfering substances can generate similar electrical signals, which can result in the wrong analyte detection or in the masking of the electrical signal of the analyte.
  • ML algorithms have been successfully demonstrated herein to be able to fulfill this objective. Additional ways to reduce signal interference include performing a sample pre-treatment procedure, such as a solid-phase extraction, a chromatography, or another separation method, to separate out sources of interference (e.g. molecules or cells). Another example is the use of molecularly imprinted nanoparticles (nanoMIPs) as sequestering (masking) agents, which can help suppress the interfering signal.
  • electrode modifications with macrocyclic compounds can reduce interference due to their anti-interference capacity against coexisting ions or molecules.
  • the sample may contain electroactive agents that can interfere with the detection of the sample analyte (i.e. act as interference).
  • the electroactive agents can be, without limitation, an organic component (such as, for example, a polymer, an acid, a base, charged molecules and the like), an inorganic component (such as, for example, a salt which can be, without limitation, NaCl, NH4Cl, NaH2PO4, KCl, Na3Cit, MgCl2, Na2CO3, CaCl2 or a combination thereof) and/or a biological component (such as, for example, a protein such as an enzyme; the protein can be, without limitation, albumin, lysozyme, mucin or a combination thereof).
  • ML algorithms offer solutions to a complex and large-size data system involving problems that traditionally required tedious hand-tuning rules and tasks with fluctuating environments.
  • ML refers to computational techniques that are learned from past experiences (i.e. data) to create logical and precise prediction algorithms. The data used in these learning algorithms influence the success of ML models; hence ML is the intersection of data analysis and statistics with computer programming.
  • the ML algorithm of the present disclosure can be supervised or unsupervised.
  • a method 100 of sensing an analyte in a sample by electrochemical detection comprises receiving a sample on the sensing electrode (by providing, for example, the sample to the sample receiving region of the sensor in fluid communication with the sensing electrode).
  • an electric potential scan is applied 104 in a target range of potentials to the sensing electrode to induce an electrochemical reaction with the analyte.
  • the electric signal is measured 106 while the electric potential is applied 104.
  • the electric signal is inputted 108 into a processing device having at least one ML algorithm operating therein.
  • the electric signal is optionally preprocessed 110.
  • the processing device executes 112 the at least one ML algorithm to determine from the electric signal the presence or absence of the analyte in the sample.
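Steps 108 to 112 above can be sketched as a minimal processing pipeline. The helper names, the single-feature extractor, and the threshold "model" below are hypothetical stand-ins; the patent leaves the feature extraction and trained model unspecified:

```python
def sense_analyte(signal, extract_features, model):
    """Feed the measured electric signal to a processing device running a
    trained ML model; returns True if the analyte is deemed present.
    (Illustrative sketch -- the patent does not specify this API.)"""
    features = extract_features(signal)
    return model(features)

present = sense_analyte(
    [0.1, 0.4, 0.9, 0.4, 0.1],          # toy current trace
    extract_features=lambda s: max(s),   # e.g. peak current as the feature
    model=lambda peak: peak > 0.5,       # stand-in for a trained classifier
)
```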
  • the sample is a biological sample (such as a bodily fluid, cells from a subject or a biomolecule from the subject) that is obtainable non- invasively (e.g., an ex vivo bodily fluid).
  • the sample can be an oral fluid sample such as saliva or sputum, a lavage, or an epithelial swab of a subject’s tissue (e.g. nasal or oral swab).
  • non-invasive bodily fluids include, but are not limited to, urine, sweat, tears, nasal fluid, suspended cells, and feces.
  • the sample may be a bodily fluid that is obtainable invasively such as blood, plasma, suspended cells or cerebral spinal fluid.
  • the sample has been obtained from an animal, such as a mammalian animal (a human or horse for example), a plant or a microorganism.
  • the method of detection described herein can include a step of obtaining a sample from a subject (which can be an animal, a plant or a microbe).
  • the sample can be used without prior treatment and just be received on the sample receiving region of the sensor after collection.
  • the sample can be treated before being received on the sample receiving region of the sensor. In such embodiments, a treated sample will be received on the sample receiving region of the sensor.
  • a treated sample comprises a component of a sample and refers to a sample that has been treated before the detection process.
  • the treatment can include, without limitation, the removal of at least one component of the sample (such as solid residues, proteins, polynucleotides, charged entities, and the like), the dilution of the sample, the freezing of the sample or the heat-treatment of the sample.
  • the treatment is performed in a way so as to preserve, as much as possible, the integrity (and especially the electrochemical state) of the sample analyte.
  • the sample comprises saliva or a saliva component suspected of comprising the sample analyte.
  • the saliva sample can be provided without any treatment steps and received on the sample receiving region of the sensor.
  • the saliva sample can optionally be treated before being received on the sample receiving region.
  • the saliva sample can be submitted to a dilution, a filtration, a centrifugation, a precipitation, a pH adjustment or a combination thereof.
  • the saliva sample is filtered prior to being received in the sample receiving region of the sensor.
  • a partial filtration can be performed during the collection of saliva with different material, such as, for example swabs made of cotton, cellulose, or synthetic fibers.
  • the filtration can be performed with a filter having a pore size of between about 0.1 to 0.5 µm or between about 0.1 to 3 µm, including filters having diameters between 10 to 30 mm.
  • the filtering membrane can be, but is not limited to, GHP (hydrophilic polypropylene), hydrophobic PTFE (polytetrafluoroethylene), hydrophilic PTFE, PES (polyethersulfone), hydrophobic PVDF (polyvinylidene fluoride), nylon, glass wool (treated or untreated), or any combination thereof.
  • the pH of the saliva sample can be adjusted by adding a base, an acid or a buffer.
  • the saliva sample can be dissolved in an alcoholic solvent (such as methanol or ethanol), a buffer (such as phosphate buffer saline (PBS)), or a combination of both.
  • the ratio of dilution of the saliva sample is between 1:10 to 10:1, between 1:5 to 5:1, between 1.2:1 to 1:1.2, or about 1:1.
  • when the sample intended to be used is a solid material (e.g. cells, viruses or a cellular component), it can be dissolved or suspended in a solvent (such as water or a buffer), and the obtained solution or suspension can be subjected to any of the sample treatment steps described herein.
  • the sensor can be placed in the oral cavity of the subject to facilitate the contact of the subject’s bodily fluid (saliva in this embodiment) with the sample receiving region of the sensor.
  • the sensor can be placed in the vicinity of the tongue or mandible of the subject and can even, in some further embodiments, be placed in contact with the subject’s tongue or mandible (to gather, for example, submandibular and/or sublingual saliva).
  • the sample can be obtained from a collecting means and then received on the sample receiving region of the sensor.
  • the collecting means may include collecting saliva by placing a porous filter media into the subject’s oral cavity, which absorbs saliva found in the oral cavity, and subsequently expressing the saliva onto the sensor.
  • saliva can be collected by a subject expectorating into a container and subsequently transferring a sufficient volume of saliva onto the sensor.
  • a sufficient volume of the sample is such that the sample covers the surface of the sensing electrode and optionally the baseline electrode of the electrochemical sensor.
  • the volume of the sample received in the sample-receiving region is between about 50 µL to about 1 mL. These values may vary depending on practical implementations.
  • the method 100 provides applying an electric potential scan in a target range of electric potentials to the sensing electrode to induce an electrochemical reaction with the analyte.
  • the electrochemical reaction can be an oxidation reaction, a reduction reaction or an enzymatic reaction that has an electrochemical component (e.g. transfer of electrons or H+).
  • applying the electric potential scan also causes a portion of the reacted sample analyte (e.g. oxidized or reduced sample) to associate with the surface of the sensing electrode.
  • the target range of electric potentials can vary with the different analytes.
  • the target range of electric potentials is from 0 to 5 V, from 0 to 4 V, from 0 to 3 V, or from 0 to 2 V.
  • the electric potential scan is applied using a voltammetric technique such as square wave voltammetry (SWV), cyclic voltammetry (CV), linear sweep voltammetry (LSV), and differential pulse voltammetry (DPV).
  • the applying 104 of an electric potential scan is preferably applying SWV.
  • Voltammetry techniques are electroanalytical techniques that can detect and/or quantify an analyte, by measuring a current as an applied electric potential is varied (i.e. electric potential scan).
  • the voltammetry techniques can be, but are not limited to, cyclic voltammetry (CV), linear sweep voltammetry (LSV), differential pulse voltammetry (DPV), or square wave voltammetry (SWV).
  • CV is performed by cycling the potential of a working electrode (e.g., the sensing and the baseline electrodes) ramped linearly versus time, and measuring the resulting current.
  • LSV measures the current at the working electrodes (e.g., sensing and baseline electrodes) while the potential between the working electrode and a reference electrode is swept linearly over time.
  • in DPV, the potential scan is recovered by imposing potential pulses with a constant amplitude. The differences between the currents registered just before and at the end of the pulse are plotted versus the potential.
  • SWV is a large-amplitude differential technique in which a waveform composed of a symmetrical square wave, superimposed on a base staircase potential, is applied to the working electrodes (e.g., the sensing and the baseline electrodes).
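The SWV waveform described above (a symmetric square wave superimposed on a staircase) can be sketched as follows; the start/stop potentials, step size, and pulse amplitude are illustrative values, not parameters from the patent:

```python
def swv_waveform(start, stop, step, amplitude):
    """Generate the applied-potential sequence for square wave voltammetry:
    a staircase from start to stop with a symmetric square pulse on each
    tread (forward half-cycle +amplitude, reverse half-cycle -amplitude)."""
    potentials = []
    n_steps = int(round((stop - start) / step))
    for i in range(n_steps + 1):
        base = start + i * step              # staircase base potential
        potentials.append(base + amplitude)  # forward pulse
        potentials.append(base - amplitude)  # reverse pulse
    return potentials

# e.g. scan 0 to 1 V in 0.1 V steps with a 25 mV pulse amplitude
wave = swv_waveform(start=0.0, stop=1.0, step=0.1, amplitude=0.025)
```

In SWV the net current is typically taken as the difference between the forward and reverse half-cycle currents at each tread, which suppresses background (capacitive) current.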
  • the analyte of the present disclosure can be any analyte detectable by electrochemistry.
  • the analyte can have an oxidizable group, a reducible group or can undergo an electrochemical reaction with a suitable electrochemical enzyme such as hepatic cytochrome P450 or CYP2C9 for THC and alcohol oxidase (AOX) for alcohol detection (e.g. methanol, ethanol).
  • the analyte may react and associate with the surface of the sensing electrode.
  • the sensing electrode comprises sensor analytes associated therewith that promote the interactions between the analyte in the sample (i.e. the sample analyte) and the surface of the sensing electrode. Such embodiments can provide an improved sensitivity of detection.
  • when the analyte undergoes an electrochemical reaction, it may associate or bind with the sensing electrode surface, thereby inducing a change in the resistance and/or conductivity and modifying the current based on the concentration or presence of the analyte in the sample.
  • a portion of the oxidized/reduced sample analyte associates with the sensing electrode.
  • the oxidized/reduced sample analytes can be, at least in part, directly associated with the sensing electrode by directly interacting with the surface of the sensing electrode.
  • the oxidized/reduced sample analyte is integrated in a dimer, an oligomer or a polymer of one or more species of the sensor sample analytes (which can be oxidized or reduced) in which at least one monomeric unit is directly associated with the surface of the sensing electrode.
  • the majority or the totality of the sample analyte is oxidized/reduced during the detection.
  • the analyte is a drug of abuse, a metabolite or a hormone.
  • Drugs of abuse include but are not limited to cannabinoids (e.g. THC), benzodiazepines, opiates including natural opioids (morphine, heroin), narcotics (cocaine), semi-synthetic opioids (oxycodone, hydrocodone, oxymorphone, hydromorphone) and synthetic opioids (fentanyl, methadone, tramadol), steroids, alcohols, amphetamines, barbiturates, buprenorphine, methamphetamines, cotinine, phencyclidine (PCP), 3,4-methylenedioxymethamphetamine (MDMA), hallucinogens (lysergic acid diethylamide (LSD), kratom, psilocybin), ketamine, gamma hydroxybutyrate (GHB), and synthetic cannabinoids (K2/Spice).
  • the opiate can be, for example, morphine, hydromorphone and buprenorphine as well as metabolites thereof.
  • the neurotransmitter can be, for example, dopamine, serotonin, or metabolites thereof.
  • the hormone can be, without limitation, a steroid hormone such as, for example, estradiol, estrogen, testosterone, or metabolites thereof.
  • the analyte is a drug of abuse having a chemical group that can be oxidized and/or a chemical group that can be reduced.
  • the analyte may be a tetrahydrocannabinol or cocaine having an oxidizable group.
  • the analyte is an alcohol (e.g. methanol or ethanol).
  • the electrochemical reaction is an enzymatic reaction.
  • the analyte is glucose in which case a glucose oxidase enzyme can be used for the enzymatic reaction.
  • the electrochemical reaction or the enzymatic reaction is assisted or mediated by antibodies, metal nanoparticles, multiwalled carbon nanotubes (MWCNT), aptamers, or molecularly imprinted polymers (MIPs).
  • the analyte is a cannabinoid.
  • the cannabinoid can be, for example, Δ9-tetrahydrocannabinol (THC), 11-hydroxy-Δ9-tetrahydrocannabinol (11-hydroxy-THC), delta-8-tetrahydrocannabinol (Δ8-THC), 11-nor-9-carboxy-tetrahydrocannabinol (11-nor-9-carboxy-THC), cannabidiol (CBD), cannabinol (CBN), glucuronic acid conjugated COOH-THC (gluc-COOH-THC), tetrahydrocannabinolic acid (THCA), or metabolites thereof.
  • the method 100 provides measuring 106 an electrical signal from the sensing electrode while the electric potential is applied 104.
  • the electrical signal is measured by an electric device.
  • the electrical signal is current and the electric device is an ammeter, a multimeter or a resistor.
  • the current may be measured continuously during the electric potential scan or during a portion of the electric potential scan.
  • the electric device can be coupled or connected to a processing device to provide the electric signal as an input to the processing device (step 108).
  • the electric device is connected to the processing device in the same electric circuit, for example with electric wires.
  • the electric device can be coupled to the processing device and can provide the electric signal through an electromagnetic wave (Bluetooth, WI-FI and the like).
  • the step 106 of measuring the electrical signal may further comprise measuring the electrical signal of a baseline electrode and/or a reference electrode.
  • the baseline electrode is particularly useful in embodiments where the sensing electrode has a plurality of sensor analytes associated therewith.
  • the current measured at the baseline electrode can be used by the ML algorithm to account for the current contribution in the electric signal of the sensor analytes.
  • Reference electrodes can be used in some embodiments to help account for interferences in the sample or may contribute to the electrical signal based on the voltammetry technique selected.
  • the processing device and ML algorithm can receive an electric signal from the sensing electrode and in addition an electric signal from a baseline electrode and/or a reference electrode.
  • the ML algorithm receives a signal from the working electrode before any sample is deposited thereon.
  • the electrical signal is preprocessed.
  • the preprocessing 110 includes reducing the dimensionality of the electrical signal using one of principal component analysis (PCA), locally linear embedding (LLE), multidimensional scaling (MDS), t-distributed stochastic neighbor embedding (t-SNE), and linear discriminant analysis (LDA).
  • the preprocessing may include selecting a statistical feature of the electrical signal and/or performing a statistical analysis/processing on the electrical signal. Preprocessing of data of electric signals is particularly relevant during the training of the ML algorithm. In some embodiments, the same preprocessing step is performed on the electrical signal measured as that performed on the training set of the ML algorithm.
  • feature rescaling is performed as part of the preprocessing 110.
  • Feature rescaling can eliminate the sensitivity of some ML techniques to different scales in the features. Rescaling can be a linear or non-linear standardization or normalization.
  • the signal can be rescaled using one of the Standard Scaler, Robust Scaler, Min-Max Scaler, and Power Transformer.
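By way of a non-limiting illustration, the Standard Scaler and Min-Max Scaler transformations mentioned above can be sketched in a few lines of Python; the signal values below are hypothetical, and NumPy is assumed to be available:

```python
import numpy as np

def standard_scale(x):
    # Standard Scaler analogue: zero mean, unit variance
    return (x - x.mean()) / x.std()

def min_max_scale(x):
    # Min-Max Scaler analogue: rescale linearly into [0, 1]
    return (x - x.min()) / (x.max() - x.min())

signal = np.array([1.2, 3.4, 2.2, 5.0, 4.1])  # hypothetical current readings
z = standard_scale(signal)
m = min_max_scale(signal)
```

Either transformation leaves the shape of the signal intact while removing the sensitivity of scale-dependent ML techniques to the raw current magnitudes.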
  • preprocessing can help mitigate poor quality or insufficient quantity of data which are often considered significant challenges of ML techniques.
  • the performance of ML methods may drop with outliers and noise in the training datasets (increased complexity and computational time). Accordingly, preprocessing allows one to obtain a higher-quality dataset.
  • Data cleaning and feature scaling are two main parts of the data preparation phase. Managing missing data is the purpose of data cleaning and can be performed by either eliminating the entire feature or the data points with missing values, or by estimating the missing values via reasonable techniques.
  • most ML techniques are sensitive to the scale of numerical attributes. Different rescaling methods can be carried out based on the nature of the ML technique and datasets.
  • the at least one ML algorithm is trained with at least one statistical feature of the electrical signal selected from one or more of a maximum, a minimum, a distance between the maximum and the minimum, a mean, a variance, a skewness, and a kurtosis.
  • the at least one ML algorithm is trained with an entirety of the electrical signal. In general, training with the entirety of the electrical signal can provide an increased accuracy compared to selecting a statistical feature of the electrical signal. However, there is a trade-off in training time and in some cases detection assay time when using an entire electrical signal compared to one or more statistical features. The processing and assay time can be faster when only done with one or more statistical features of the electric signal as opposed to the entirety of the electric signal.
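As an illustrative sketch of the statistical features listed above (maximum, minimum, distance between them, mean, variance, skewness, and kurtosis), the following Python function is a hypothetical helper, not part of the claimed method; NumPy is assumed:

```python
import numpy as np

def signal_features(x):
    # Statistical features of an electrical signal (hypothetical helper)
    mu = x.mean()
    sigma = x.std()
    return {
        "max": x.max(),
        "min": x.min(),
        "range": x.max() - x.min(),                    # distance between max and min
        "mean": mu,
        "variance": x.var(),
        "skewness": ((x - mu) ** 3).mean() / sigma ** 3,
        "kurtosis": ((x - mu) ** 4).mean() / sigma ** 4,
    }

feats = signal_features(np.array([0.1, 0.4, 0.35, 0.8, 0.2]))
```

Training on such a compact feature vector, instead of the full signal, trades some accuracy for faster training and assay times, as discussed above.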
  • the processing device executes at least one ML algorithm to determine from the electrical signal the presence or absence of the analyte in the sample.
  • determining the presence or the absence of the analyte in the sample comprises determining a range of values of a concentration of the analyte in the sample.
  • the output for such embodiments can be a concentration range or a value with a percentage-based error (e.g. ±3 %, ±5 %, ±7 %, or ±10 %) of the sample analyte.
  • determining the presence or the absence of the analyte in the sample comprises determining a single value of a concentration of the analyte in the sample. In other embodiments, determining the presence or absence of the analyte includes determining whether the concentration of the analyte in the sample is above or below a predetermined concentration threshold.
  • the predetermined concentration threshold may for example be a legal limit for a drug of abuse.
  • the output of the device may be a positive result or a negative result (i.e. a concentration of the analyte above or below the threshold respectively).
  • the at least one ML algorithm is configured to decrease noise present in the electrical signal for determining the presence or the absence of the analyte in the sample, the noise resulting from at least one of subject-to-subject variations in the sample, discrepancies between batches of the sensing electrode, and analog compound interference in the sample.
  • analog compound refers to compounds that generate an electrochemical signal similar to that of the analyte.
  • the analog compound when compared to the analyte can be a compound having a similar chemical structure, a similar three dimensional conformation, a similar oxidizable or reducible group, the same oxidizable or reducible group, a similar chemical formula or an isomer.
  • THC Tetrahydrocannabinol
  • CBD cannabidiol
  • THC presents a phenol group, oxidizable at potentials near 0.4 V
  • CBD has two aromatic meta-hydroxyl groups with the same oxidizable capability, at almost the same potential as THC. Accordingly, in one example, CBD acts as an interference (i.e. analog compound) during THC electrochemical detection.
  • the at least one ML algorithm is configured to perform at least one of a regression analysis and a classification task to determine the concentration or the presence of the analyte from the electrical signal.
  • the classification task can be performed using one of logistic regression, soft regression, decision tree, random forest (RF), support vector machine (SVM) and an artificial neural network (ANN).
  • the regression analysis can be performed using one of linear regression, gradient descent, polynomial regression, regularized linear model, ridge regression, lasso regression, RF and SVM. SVM and RF can be used for both the regression analysis and the classification task.
  • the at least one machine learning algorithm uses XGBoost.
  • the at least one ML algorithm comprises a plurality of different ML algorithms combined into an ensemble ML model.
  • Any suitable ML algorithm or combination of ML algorithms can be selected for the present sensing methods.
  • Non-limiting examples of ML algorithms and data processing methods (e.g. dimension reduction) encompassed by the present disclosure are described herein below.
  • Feature scaling techniques include either normalization or standardization algorithms. Normalization consists of restraining values between a range of two specific numbers, for example [0,1] or [−1,1]. On the other hand, the standardization process transforms data to create a new dataset with specific mean and variance values.
  • the most common feature scaling algorithms are the Min-Max Scaler, Standard Scaler, MaxAbs Scaler, Robust Scaler, Quantile Transformer Scaler, and Power Transformer Scaler. The Quantile Transformer Scaler and Power Transformer Scaler are non-linear transformers; the others are linear. The Standard Scaler, Robust Scaler, and Power Transformer are the most commonly used scaling techniques.
  • Linear feature scaling techniques transform a data point (xᵢ) to a new data point (xᵢ′) by using the following generalized formula: xᵢ′ = (xᵢ − shift)/scale (Eq. 1), where the shift and scale terms for each scaler are given in Table 1.
  • μ, σ, and N in Table 1 represent the mean, the standard deviation, and the number of data points respectively.
  • Interquartile range (IQR) is the range between the 1st and 3rd quartile.
  • M(X) represents the median value of the old dataset.
  • X is a vector consisting of all old data points and xᵢ is a single data point in X.
  • the Min-Max Scaler is an appropriate feature scaling strategy for a non-Gaussian data distribution.
  • the Standard Scaler is suitable for mainly normally distributed datasets and adjusts the data's mean and variance to desirable values.
  • the Max Abs Scaler is similar in performance to the Min-Max Scaler for a dataset comprised of all positive values.
  • the Robust Scaler removes the median value and transforms data according to the IQR; hence it is the least sensitive to marginal outliers amongst all mentioned linear transformers.
  • monotonic transformations are the principal concept of non-linear scalers like the Quantile Transformer Scaler and the Power Transformer.
  • the former is a non-parametric transformer that uses quantile information and maps the data to a uniform distribution between 0 and 1 or to a normal distribution.
  • the latter transformer consists of a family of parametric transformations that map data to a more Gaussian-like distribution by minimizing the skewness and stabilizing variance.
  • the Quantile Transformer Scaler performs rank transformation and usually eliminates anomalies, thus robust to outliers. Nevertheless, non-linear transformations often distort linear correlations and distances in the old datasets.
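The rank transformation underlying the Quantile Transformer Scaler can be illustrated with a minimal sketch (uniform output only; the data values are hypothetical):

```python
import numpy as np

def rank_to_uniform(x):
    # Core idea of a quantile transform: map values to [0, 1] by rank,
    # so linear distances are distorted but outliers no longer dominate
    ranks = np.argsort(np.argsort(x))
    return ranks / (len(x) - 1)

u = rank_to_uniform(np.array([10.0, -3.0, 5.0, 120.0]))  # 120 is an outlier
```

Note that the outlier 120 receives the rank value 1.0 rather than stretching the scale, which is the robustness to anomalies described above.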
  • Predicting a numerical value for one or more target outputs based on past experience (a dataset) is a typical task in ML called regression. Both regression and classification tasks require training processes with labeled output data; thus, these algorithms are supervised. Linear and Polynomial Regression, Gradient Descent (GD), Regularized Regression, and Support Vector Machine (SVM) are examples of ML regression algorithms.
  • Linear Regression is a simple ML regression algorithm suitable for medium-sized datasets.
  • Eq. 2 is the generalized form of linear regression, Y = βᵀ·X + ε, where Y, β, X, and ε represent the target (dependent variable), the matrix of regression parameters, the independent variable vector, and the error vector respectively.
  • the T superscript is a transpose sign.
  • Regression parameters include regression coefficients and intercept.
  • the objective of linear regression is to calculate the best fit for the regression parameters by optimizing a cost function.
  • a cost function is a measurement of the quality of an algorithm and can be defined based on an evaluation metric.
  • the most common evaluation metric in ML is the mean square error (MSE), which can be defined as MSE = (1/N) Σᵢ (yᵢ − ŷᵢ)² (Eq. 3).
  • the optimized regression parameter for minimizing the MSE can be calculated based on the Normal equation, as per Eq. 4.
  • β̂ = (Xᵀ·X)⁻¹·Xᵀ·Y Eq. 4
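A minimal numerical check of the Normal equation (Eq. 4) in Python, assuming NumPy; the data are synthetic and chosen only for illustration:

```python
import numpy as np

# Recover y = 1 + 2x with the Normal equation: beta = (X^T X)^(-1) X^T y
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 10.0, 50)
y = 1.0 + 2.0 * x + rng.normal(0.0, 0.01, 50)  # small synthetic noise

X = np.column_stack([np.ones_like(x), x])      # column of 1s for the intercept
beta = np.linalg.inv(X.T @ X) @ X.T @ y        # Eq. 4
```

The recovered parameters are close to the generating values (intercept 1, slope 2), up to the injected noise.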
  • BGD: Batch Gradient Descent; SGD: Stochastic Gradient Descent.
  • the SGD approach will decrease the required memory and computational time. Nonetheless, the random nature of SGD leads to a close but not exact solution. Hence, it can bounce around the optimum constantly without settling on any solution.
  • randomness can be beneficial for irregular and complex cost functions by discarding local optima.
  • Introducing an adjustable learning rate that decreases gradually during the iteration process can resolve the divergence problem in the GD.
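The decaying-learning-rate strategy described above can be sketched for batch gradient descent on the MSE cost; the schedule eta0/(1 + decay·t) is one common choice and is an assumption of this illustration:

```python
import numpy as np

# Batch gradient descent on the MSE with a decaying learning rate
rng = np.random.default_rng(1)
x = rng.uniform(-1.0, 1.0, 100)
y = 3.0 * x - 0.5                                   # noise-free synthetic target
X = np.column_stack([np.ones_like(x), x])
beta = np.zeros(2)

eta0, decay = 0.5, 0.01
for t in range(2000):
    eta = eta0 / (1.0 + decay * t)                  # learning rate shrinks over iterations
    grad = (2.0 / len(y)) * (X.T @ (X @ beta - y))  # gradient of the MSE cost
    beta -= eta * grad
```

The shrinking step size lets the iterates settle on the optimum instead of bouncing around it.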
  • MBGD: Mini-Batch Gradient Descent.
  • Polynomial Regression is a family of linear regression models and is used when the relationship between any independent variable (xᵢ) and the target variable can be defined as per Eq. 7, as a polynomial equation of pth degree. It should be noted that Polynomial Regression also accounts for the variables' linear relationship.
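As a sketch, Polynomial Regression reduces to linear least squares on polynomial features; here a degree-2 fit in Python on synthetic, noise-free data:

```python
import numpy as np

# Degree-2 polynomial regression as linear least squares on polynomial features
x = np.linspace(-2.0, 2.0, 40)
y = 1.0 - 2.0 * x + 0.5 * x**2                 # noise-free quadratic target

X = np.column_stack([x**0, x, x**2])           # [1, x, x^2] feature matrix
coef, *_ = np.linalg.lstsq(X, y, rcond=None)   # recovers the generating coefficients
```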
  • ML techniques often encounter two major challenges, underfitting and overfitting. These conditions indicate either oversimplification (underfitting) or high complexity (overfitting).
  • in the case of underfitting, the evaluation functions show poor quality for both the training and testing sets, and this cannot be fixed by increasing the training size.
  • overfitting indicates the model's high sensitivity to small variations and unwanted noise in the training set. Consequently, overfitting models often perform very well on training sets and underperform on testing sets. This problem can be resolved by increasing the size of the training data set or regularizing the model. Ridge and Lasso Regression are widely used regularization techniques.
  • in Ridge Regression, γ is a hyperparameter that controls the algorithm. If it is zero, then the Ridge technique will be equal to normal Linear Regression. A large γ will reduce the complexity of the model and may result in an underfitting model.
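The closed-form Ridge solution can be sketched as follows; for simplicity the intercept is not treated separately, which is an assumption of this illustration:

```python
import numpy as np

def ridge_fit(X, y, gamma):
    # Closed-form Ridge solution: beta = (X^T X + gamma * I)^(-1) X^T y
    return np.linalg.inv(X.T @ X + gamma * np.eye(X.shape[1])) @ X.T @ y

rng = np.random.default_rng(2)
X = rng.normal(size=(30, 3))
y = X @ np.array([1.0, -1.0, 2.0])             # exact linear target

beta_ols = ridge_fit(X, y, gamma=0.0)          # gamma = 0 reduces to ordinary least squares
beta_reg = ridge_fit(X, y, gamma=10.0)         # a larger gamma shrinks the coefficients
```

Increasing γ shrinks the coefficient norm, which is the complexity reduction described above.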
  • Support Vector Machine is an ML approach particularly suitable for complex and small-to-medium size datasets, with both classification and regression applications.
  • Linear, Non-Linear, and Kernel SVM are the three main types of the SVM family for both classification and regression tasks.
  • the main concept of all previously discussed regression techniques was to find a line that fits the training set by minimizing cost functions (least square error).
  • the objective of Linear SVM regression is to find an acceptable margin of error to fit the training set along an appropriate line (hyperplane).
  • Fig. 2 demonstrates a simple univariate regression.
  • the solid black line is the fitting line, and the two red dashed lines are at a vertical distance of ε from it. The dashed lines determine the margin of error, where the error is ignored for data points in the region between the two lines.
  • C and ξ denote the hyperparameter and the slack variable respectively.
  • the slack variable measures the distance of an instance (data point) from its hyperplane. Two conflicting goals are to be satisfied, finding the smallest slack value to decrease the error of the model and the highest acceptable margin of error.
  • the hyperparameter C creates a tradeoff between these incompatible objectives.
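The margin-of-error idea can be made concrete with the ε-insensitive loss used in SVM regression: errors smaller than ε are ignored, while larger errors are penalized linearly. A minimal sketch with hypothetical values:

```python
import numpy as np

def eps_insensitive_loss(y_true, y_pred, eps):
    # Errors inside the margin of half-width eps are ignored;
    # larger errors are penalized linearly
    return np.maximum(0.0, np.abs(y_true - y_pred) - eps)

y_true = np.array([1.0, 2.0, 3.0])
y_pred = np.array([1.05, 2.5, 2.0])
loss = eps_insensitive_loss(y_true, y_pred, eps=0.1)  # [0.0, 0.4, 0.9]
```

The first point falls inside the margin and contributes no loss, mirroring the region between the dashed lines in Fig. 2.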
  • Non-linear SVM regression models deal with datasets having a non-linear relationship between the dependent and independent variables. Accordingly, in some embodiments, finding a suitable hyperplane to fit the training set requires mapping data to a higher-dimensional space. Kernel methods can be used to define non-linear decision boundaries (hyperplanes). A Kernel function implicitly determines the inner products of transformation functions in a high-dimensional space based on the original vectors if it satisfies Mercer's condition, as per Eq. 12. According to Mercer's theorem, the Kernel must be continuous and symmetrical. In other words, the Kernel function calculates the dot product of the transformation functions of two vectors in the original space by circumventing the calculation of the transformation function, based only on the vectors themselves: K(xᵢ, xⱼ) = Φ(xᵢ)·Φ(xⱼ) (Eq. 12).
  • K, Φ, and (·) represent the Kernel, the transformation function, and the dot product operation respectively.
  • xᵢ and xⱼ are the two vectors in the original space.
  • the most commonly used Kernel functions are summarized in Table 2, where γ and r represent constant coefficients.
  • the Gaussian RBF Kernel can transform an instance to an infinite-dimensional space, since exponential functions can be expanded by the famous Taylor series. The γ parameter of the RBF represents the impact of a single instance on the training set.
  • although the Sigmoid Kernel does not meet all the criteria of Mercer's conditions, it provides satisfactory results in practice.
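The common Kernel functions of Table 2 can be written directly in Python; the default coefficient values below are illustrative assumptions:

```python
import numpy as np

def linear_kernel(a, b):
    # Linear kernel: plain dot product
    return a @ b

def poly_kernel(a, b, gamma=1.0, r=1.0, d=3):
    # Polynomial kernel: (gamma * a.b + r)^d
    return (gamma * (a @ b) + r) ** d

def rbf_kernel(a, b, gamma=1.0):
    # Gaussian RBF kernel: exp(-gamma * ||a - b||^2)
    return np.exp(-gamma * np.sum((a - b) ** 2))

a = np.array([1.0, 0.0])
b = np.array([0.0, 1.0])
```

Each function returns the inner product of the implicit transformation functions using only the original vectors, which is the "kernel trick" described above.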
  • Classification is another major task that can be resolved by applying various supervised ML techniques, including Logistic Regression, Soft Regression, Decision Tree, Artificial Neural Network (ANN), and SVM.
  • Logistic Regression is a probabilistic ML approach that can be applied for binary classification tasks.
  • the probability of an instance to belong to a positive class can be determined via the following equation.
  • Eq. 14 describes the proposed cost function for a single data point. This cost function should be applied to all instances in the training set, and the average of these cost functions should be optimized, as per Eq. 15. This cost function does not have any exact solution; nevertheless, it is a convex function. Hence, GD approaches can be utilized to find the approximate optimal parameters.
  • Softmax Regression is an extension of Logistic Regression for multiclass classification with K classes. The basic idea is to determine the score of an instance for each class and then compute the probability of each class by applying the softmax function, as per Eq. 16. This algorithm predicts only one class at a time.
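The softmax step of Eq. 16 can be sketched as follows; the class scores are hypothetical:

```python
import numpy as np

def softmax(scores):
    # Numerically stable softmax: subtract the max score first
    z = np.exp(scores - scores.max())
    return z / z.sum()

probs = softmax(np.array([2.0, 1.0, 0.1]))   # class scores -> class probabilities
predicted_class = int(np.argmax(probs))      # the single predicted class
```

The scores are converted to probabilities that sum to one, and the class with the highest probability is selected.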
  • Decision Tree has dual applications and can perform both regression and classification tasks.
  • the foundation of this approach consists of a series of rules and is one of the least sensitive ML techniques to feature scaling. The process starts with dividing the entire training set into two subsets based on a threshold criterion for a single feature. The feature with the purest subsets is the selected feature. Each subset is split in the same manner until the tree reaches its predefined depth or cannot be split anymore.
  • the cost function for this algorithm can be computed as follows:
  • k and tₖ denote the k-th feature and its threshold.
  • G represents the Gini function which is a measurement of impurity.
  • subscripts l and r symbolize left and right.
  • the Gini function for the i-th node can be determined as follows:
  • pᵢ,ₖ is the probability of class k among the training instances in the i-th node.
  • S (entropy) is another widely used measurement for impurity, as per Eq. 19.
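The two impurity measures (Gini, Eq. 18, and entropy, Eq. 19) can be computed directly from class counts; a brief sketch:

```python
import numpy as np

def gini(counts):
    # Gini impurity: 1 - sum_k p_k^2 (0 for a pure node)
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    return 1.0 - np.sum(p ** 2)

def entropy(counts):
    # Entropy impurity: -sum_k p_k log2(p_k), with 0 log 0 taken as 0
    p = np.asarray(counts, dtype=float)
    p = p / p.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))
```

A pure node (all instances in one class) scores 0 on both measures, while a 50/50 split maximizes them; the Decision Tree selects the split whose subsets are purest.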
  • As shown in Fig. 3, the output of a neuron without an activation function (f) will be a linear combination of the weighted inputs (wᵢxᵢ) and an added bias (b).
  • the softmax activation function is often used when a neuron has more than one output.
  • SELU: Scaled Exponential Linear Unit; MLP: Multi-Layer Perceptron; CNN: Convolutional Neural Network; RNN: Recurrent Neural Network.
  • An MLP model consists of an input layer, one or more hidden layers, and an output layer, where each layer has one or more nodes.
  • An MLP model with two or more hidden layers is often called a Deep Neural Network (DNN). Every layer except the output layer has a bias, and all neurons are fully connected in an MLP model.
  • the backpropagation algorithm is the main training algorithm for MLP, where the weights and bias iteratively are adjusted via a gradient descent approach to minimize the cost function. In other words, at every step, the backpropagation algorithm first predicts the output in a forward pass, then computes the error, and then calculates the error contribution from each connection through a backward pass. Finally, it applies a gradient descent approach to adjust the weights and biases.
  • x, μ, σ, and z represent the input, the mean, the standard deviation of the inputs, and the output respectively. Additionally, ω, δ, and ε represent the scaling parameter, the offset, and the smoothing term. The smoothing term is a very small positive number to tackle a zero standard deviation.
  • Introducing a threshold for gradients can be an alternative solution to overcome the gradient exploding challenge. Stopping the training at an early stage and applying regularization techniques are the two major strategies to tackle overfitting. Dropout is a common regularization method that dedicates a probability (dropout rate) to every node. In other words, each node at each iteration has a probability to be ignored. The dropped-out nodes can reactivate in the next iteration. It should be noted that dropping out only occurs during training and not during testing.
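The dropout mechanism described above can be sketched for a single hidden layer; the "inverted dropout" rescaling by 1/(1 − rate) is a common convention assumed here, and the weights are random placeholders:

```python
import numpy as np

def forward(x, W1, b1, W2, b2, dropout_rate=0.0, rng=None):
    # One hidden layer: h = relu(W1 x + b1), output = W2 h + b2
    h = np.maximum(0.0, W1 @ x + b1)
    if dropout_rate > 0.0:
        keep = rng.random(h.shape) >= dropout_rate   # each node may be dropped this pass
        h = h * keep / (1.0 - dropout_rate)          # inverted-dropout rescaling
    return W2 @ h + b2

rng = np.random.default_rng(3)
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)
x = np.array([0.5, -1.0])

y_test = forward(x, W1, b1, W2, b2)                  # inference: dropout disabled
y_train = forward(x, W1, b1, W2, b2, 0.5, rng)       # training: nodes randomly ignored
```

Dropout is active only in the training call; at inference the forward pass is deterministic, matching the statement that dropping out occurs during training and not during testing.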
  • When an ensemble algorithm aggregates various weak learning techniques and trains them sequentially, it is referred to as the boosting method.
  • Two main boosting algorithms are adaptive boosting and gradient boosting.
  • In adaptive boosting, the focus is on improving the underfitted instances of training. It applies a weighting and updating strategy to instances with low accuracy when adding a new predictor to the ensemble. Therefore, the boosting technique enhances the accuracy of the ensemble model gradually, but the training process cannot be parallelized.
  • the basic concept of the gradient boosting approach is similar to adaptive boosting, apart from the type of parameters that need to be updated.
  • in gradient boosting, each subsequently added predictor is fit to the residual errors of the previous predictors.
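A minimal gradient-boosting sketch for the squared-error case, using one-split regression stumps as weak learners; the stump learner, round count, and learning rate are illustrative choices, not the patent's implementation:

```python
import numpy as np

def fit_stump(x, r):
    # One-split regression stump: predicts the residual mean on each side
    best = None
    for t in np.unique(x)[:-1]:                 # largest value would leave an empty side
        pred = np.where(x <= t, r[x <= t].mean(), r[x > t].mean())
        sse = np.sum((r - pred) ** 2)
        if best is None or sse < best[0]:
            best = (sse, t, r[x <= t].mean(), r[x > t].mean())
    _, t, lm, rm = best
    return lambda q: np.where(q <= t, lm, rm)

def gradient_boost(x, y, n_rounds=20, lr=0.5):
    # Each new stump is fit to the current residuals (squared-error boosting)
    pred = np.zeros_like(y)
    for _ in range(n_rounds):
        stump = fit_stump(x, y - pred)
        pred = pred + lr * stump(x)
    return pred

x = np.linspace(0.0, 1.0, 30)
y = np.sin(2.0 * np.pi * x)
pred = gradient_boost(x, y)
mse = np.mean((y - pred) ** 2)                  # shrinks as rounds are added
```

Because each stump targets only what the previous ensemble got wrong, the training error decreases round by round, but the rounds cannot be parallelized.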
  • the at least one machine learning algorithm described herein uses the XGBoost technique.
  • Random Forest is a popular ensemble technique that combines a group of Decision Tree models.
  • the model consists of various individual trees where each tree is trained on a sample of the training set, often drawn with replacement.
  • Random Forest incorporates the hyperparameters of a Decision Tree combined with additional randomness and additional hyperparameters. The splitting process at each node occurs among random subsets of the features. This extra randomness creates a higher diversity and a better overall performance.
  • a Random Forest model can calculate the relative importance of each feature by measuring the contribution of a feature on impurity reduction.
  • C_d denotes the matrix of the first d principal components.
  • the optimal first d components are equal to the minimum number of dimensions that maintain 95% of the original dataset's variance.
  • Implementing the SVD technique in the original version of PCA to calculate principal component vectors requires the storage of the entire training set.
  • Incremental, Randomized, and Kernel PCA are alternative versions of PCA that may either free memory or expedite the training process.
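The 95%-variance criterion of PCA can be sketched with the SVD directly, on a synthetic feature matrix whose intrinsic dimensionality is two (an illustrative construction, not patent data):

```python
import numpy as np

# Synthetic 50 x 6 feature matrix with intrinsic dimensionality 2
rng = np.random.default_rng(4)
base = rng.normal(size=(50, 2))
X = np.column_stack([base, base @ rng.normal(size=(2, 4)) + 0.01 * rng.normal(size=(50, 4))])

Xc = X - X.mean(axis=0)                        # PCA operates on centered data
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_ratio = s ** 2 / np.sum(s ** 2)            # variance explained per component
d = int(np.searchsorted(np.cumsum(var_ratio), 0.95)) + 1  # smallest d keeping >= 95%
X_reduced = Xc @ Vt[:d].T                      # projection onto the first d components
```

Because the six columns are built from two underlying variables plus small noise, two components suffice to retain 95% of the variance, and the data are projected from six dimensions down to d.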
  • the first step is to identify local neighborhoods consisting of k closest instances.
  • wⱼ represents the weight of a neighboring instance and is determined by minimizing the reconstruction error Σᵢ ‖xᵢ − Σⱼ wᵢ,ⱼ xⱼ‖².
  • the objective is to find the weights while the instances are fixed. Finding the optimal position of the instances in a d-dimensional subspace while the weights are fixed is the final step of this technique.
  • Multidimensional Scaling is another dimensionality reduction technique that finds a lower-dimensional subspace while maintaining the distances between instances in the new subspace.
  • the ML algorithms are executed on the processing device.
  • the electrochemical sensor 204 may be used with a processing device 203 to form a detection system
  • the processing device 203 comprises at least one processor
  • the processing device 203 components may be connected in various ways including, but not limited to, directly coupled, indirectly coupled via a network, and distributed over a wide geographic area and connected via a network (which may be referred to as “cloud computing”). It will be understood that the processing device 203 comprises all analog circuitry necessary to interface with the sensor 204.
  • the processing device 203 may be a server, network appliance, set-top box, embedded device, computer expansion module, personal computer, laptop, personal data assistant, cellular telephone, smartphone device, UMPC tablet, video display terminal, gaming console, electronic reading device, wireless hypermedia device, or any other computing device capable of being configured to carry out the methods described herein.
  • Each processor 201 may be, for example, any type of general-purpose microprocessor or microcontroller, a digital signal processing (DSP) processor, an integrated circuit, a field programmable gate array (FPGA), a reconfigurable processor, a programmable read-only memory (PROM), or any combination thereof.
  • Memory 202 may include a suitable combination of any type of computer memory that is located either internally or externally such as, for example, random-access memory (RAM), read-only memory (ROM), compact disc read-only memory (CDROM), electro-optical memory, magneto-optical memory, erasable programmable read-only memory (EPROM), electrically-erasable programmable read-only memory (EEPROM), Ferroelectric RAM (FRAM) or the like.
  • the memory 202 can have the at least one ML algorithm stored therein.
  • the memory 202 can store the training data set and may continuously update the training data set.
  • the memory 202 may have various concentration thresholds stored therein and can store the readings that are performed.
  • the processing device 203 is coupled to a voltage generator, and the electric potential scan is programmed on the processing device 203, for example stored in the memory 202. In such embodiments, the processing device 203 can run the electric potential scan autonomously and automatically using the processor 201.
  • Each communication interface 205 enables the processing device 203 to interconnect with one or more input/output devices 207, such as a keyboard, mouse, camera, touch screen, microphone, display screen, and speaker.
  • a display screen may display a symbol or sign that is indicative of the presence or absence of the sample analyte in the sample.
  • the display screen displays the value of the concentration of the sample analyte in the sample.
  • the display screen may display the estimated value of the concentration of the sample analyte in the sample of the subject who provided the sample.
  • the display screen can be a simple display in black and white or a more modern touch screen able to receive commands.
  • a user interface may contain a button or other physical means for the user to signal to the device to begin the analysis of a sample.
  • a network interface enables the processing device 203 to communicate with other components, to exchange data with other components, to access and connect to network resources, to serve applications, and perform other computing applications by connecting to a network (or multiple networks) capable of carrying data including the Internet, Ethernet, plain old telephone service (POTS) line, public switch telephone network (PSTN), integrated services digital network (ISDN), digital subscriber line (DSL), coaxial cable, fiber optics, satellite, mobile, wireless (e.g. Wi-Fi, WiMAX), SS7 signaling network, fixed line, local area network, wide area network, and others, including any combination of these.
  • the processing device 203 can be operable to register and authenticate users (using a login, unique identifier, and password for example) prior to providing access to applications, a local network, network resources, other networks and network security devices.
  • the processing device 203 may serve one user or multiple users.
  • electrochemical sensors that benefit from the method described herein include, but are not limited to, test strips and microfluidic chips.
  • an alcohol test strip or chip, a tetrahydrocannabinol strip or chip, an opioid strip or chip, a narcotics strip or chip, a steroid strip or chip, an amphetamine strip or chip, a barbiturate strip or chip, a buprenorphine strip or chip, a methamphetamine strip or chip, a cotinine strip or chip, a PCP strip or chip, an MDMA strip or chip, an LSD strip or chip, etc.
  • the electrochemical sensor and/or the sensing electrode can be disposable.
  • the electrochemical sensor has a sensing electrode having sensing analyte associated therewith.
  • the sensing analyte is disposed on the sensing electrode, i.e. the working electrode (WE).
  • the sensing analyte can be coated via electrodeposition of the same analyte that is expected to be detected later in the sample, although other appropriate techniques are contemplated.
  • the choice of deposition method may vary depending on the analyte.
  • the deposition method should limit or avoid altering the ability of the sensor analytes, once associated with the surface of the sensing electrode, to facilitate the electrochemical reaction of the sample analyte.
  • the deposition method should limit or avoid altering the electrochemical features (such as the conductivity) of the sensing electrode.
  • the sensing electrode is associated or operatively coupled with a plurality of sensor analytes.
  • the sensing electrode is a working electrode designed to facilitate an electrochemical reaction of the sample analyte.
  • a sensor analyte is a chemical species or a mixture of chemical species which is/are associated (directly or indirectly) with the sensing electrode prior to the detection of the sample analyte.
  • the association between the sensor analyte and the sensing electrode can be caused by any physical or chemical interaction (or a combination thereof), including, but not limited to ionic interactions, covalent interactions, hydrogen interactions, van der Waals interactions, and/or electrostatic interactions.
  • the sensor analytes protrude, at least partially, from the surface of the sensing electrode.
  • the sensor analytes can be adsorbed, at least in part, on the surface of the sensing electrode.
  • the sensor analytes can be immobilized, at least in part, on the surface of the sensing electrode.
  • the sensor analytes can be embedded, at least in part, in the sensing electrode.
  • a portion of the plurality of sensor analytes can be directly associated with the surface of the sensing electrode and/or interact directly with the surface of the sensing electrode. In some embodiments, a portion of the plurality of sensor analytes can be indirectly associated with the surface of the sensing electrode. In such embodiments, the sensor analytes can be associated with one or more sensor analytes which is directly associated with the surface of the sensing electrode. In some specific embodiments, the sensor analytes can be integrated into a dimer, an oligomer, or a polymer of one or more species of the sensor analytes in which at least one monomeric unit is directly associated with the surface of the sensing electrode.
  • the plurality of sensor analytes cover at least in part the surface of the sensing electrode. In an embodiment, the plurality of sensor analytes covers at least 10, 20, 30, 40, 50, 60, 70, 80, 90% or more of the surface of the sensing electrode. In an embodiment, the plurality of sensor analytes cover a majority or a totality of the sensing electrode’s surface. In one embodiment, the entire surface of the sensing electrode is covered by the plurality of sensor analytes. In a specific embodiment, at least about 90, 95, 96, 97, 98, 99% or more of the surface of the sensing electrode is covered by the plurality of sensor analytes.
  • the electrochemical sensor comprises a baseline electrode.
  • the baseline electrode is a working electrode designed to detect and optionally quantify the contribution of electroactive agents present in the sample which can interfere with the detection of the sample analyte.
  • the baseline electrode corresponds to the sensing electrode prior to its association with the plurality of sensor analytes.
  • the baseline electrode is a bare working electrode.
  • the baseline electrode can include any suitable conductive material and can be made of the same material as the sensing electrode (without the sensor analytes).
  • the baseline electrode comprises a carbon-based material, a nanomaterial, a metal-based material, or a combination thereof.
  • the baseline electrode comprises carbon, gold, platinum, palladium, ruthenium, rhodium, or a combination thereof.
  • the baseline electrode may be a screen-printed electrode (SPE).
  • the baseline electrode may be of any shape or size.
  • Known SPEs include, but are not limited to, a Zensor electrode, a Dropsens electrode, a Zimmer Peacock electrode, a Flex Medical electrode, or a Kanichi electrode.
  • the baseline electrode is a Zensor carbon-based electrode.
  • the electrochemical sensor includes one or more reference electrodes.
  • each working electrode, i.e. the sensing electrode and the baseline electrode, can be associated with its own reference electrode.
  • two or more working electrodes can be associated with the same reference electrode.
  • the reference electrode is an electrode with a stable and well-defined electrochemical potential against which the potential of other electrodes like the sensing electrode or baseline electrode can be controlled and measured.
  • the reference electrode comprises or consists of silver.
  • if the reference electrode is screen printed, it can be prepared with Ag/AgCl ink or Ag ink.
  • the sensor includes one or more counter electrodes.
  • each working electrode can be associated with one counter electrode.
  • two or more working electrodes can be associated with the same counter electrode.
  • the counter electrode completes the circuit of a three-electrode cell, as it allows the passage of current. After the sample is placed on a sample receiving region, a potential is applied between the sensing electrode and the reference electrode, and the current induced is measured. At the same time, a potential between the counter electrode and the reference electrode is induced which will generate the same amount of current (reverse current). Therefore the sensing electrode, baseline electrode, reference electrode, and counter electrode are all intended to be in fluid communication with the sample.
  • the counter electrode can be made of the same materials as the sensing electrode and/or the baseline electrode and/or the reference electrode. In one example, the counter electrode comprises or consists of carbon ink or platinum.
  • the Zensor electrodes were thoroughly washed with Milli-Q water and dried with hot airflow.
  • stock solutions of THC were prepared by adding THC (1 mg/mL in methanol) in a mix of solvent methanol/water (3:1 ratio in volume).
  • 1 µL of the previous stock solutions was dispensed on the working area of the Zensor electrodes to obtain different amounts of 100, 130, and 150 ng of the sensing analyte.
  • the electrodes were dried at room temperature (RT) airflow for 30 seconds and warm airflow for 5 seconds.
  • the electrodes with the analyte deposited were submerged in phosphate buffered saline (PBS) 0.01 M to perform an electrochemical treatment using square wave voltammetry (SWV) with the following conditions: precondition potential of 0.05 V for 30 s, equilibration time of 3 s, voltammetric potential scan from 0 to 0.8 V with a frequency of 15 Hz, an amplitude of 25 mV, and a step potential of 5 mV, to obtain modified Zensor (m-Z-THC) electrodes. After each recording, the m-Z-THC electrodes were thoroughly washed with Milli-Q water and stored at 4 °C in an N2-enriched package until they were ready to be used.
  • Saliva from human donors was spat into 15-50 mL tubes, adequately sealed with parafilm, and labeled with the donor's name and date.
  • the saliva samples were frozen at -20 °C for long-term storage or cooled at 4 °C to be tested within a period of 24 hours.
  • the saliva samples were dispensed in 1.5 mL Eppendorf vials. Next, the samples were spiked with an adequate amount of THC in 0.01 mL of methanol to obtain final concentrations of 0, 2, and 5 ng/mL of THC. After that, an absorbent material swab was introduced inside the vial to collect the sample.
  • the collected THC samples were prepared by adding methanol. Then, 100 µL of each sample was added onto the electrodes and, immediately after, SWV was recorded with the following conditions: precondition potential of 0.05 V for 30 s, equilibration time of 3 s, voltammetric potential scan from 0 to 0.8 V with a frequency of 15 Hz, an amplitude of 25 mV, and a step potential of 5 mV.
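The SWV conditions above (0 to 0.8 V scan, 5 mV step, 25 mV amplitude, 15 Hz) fully define the potential program. As an illustration only — this is a reconstruction of what those numbers imply, not instrument firmware — the waveform can be sketched as:

```python
import numpy as np

def swv_waveform(e_start=0.0, e_end=0.8, step=0.005, amplitude=0.025, frequency=15.0):
    """Reconstruct the SWV potential program from the stated conditions.

    Each staircase step lasts 1/frequency seconds and carries a forward
    (+amplitude) and a reverse (-amplitude) pulse; the difference between
    the currents sampled on the two pulses forms the recorded voltammogram.
    """
    n_steps = int(round((e_end - e_start) / step))   # 0.8 V / 5 mV = 160 steps
    staircase = e_start + step * np.arange(n_steps)  # base staircase potentials
    forward = staircase + amplitude                  # forward-pulse potentials
    reverse = staircase - amplitude                  # reverse-pulse potentials
    duration = n_steps / frequency                   # total scan time, seconds
    return forward, reverse, duration
```

At these settings the scan comprises 160 steps and lasts roughly 10.7 s.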
  • Different situations, such as using or not using pre-filtration, testing different batches of electrodes, and reading with mono-potentiostat and multichannel-potentiostat equipment, among others, were evaluated for the samples of the different saliva donors.
  • the concentration of the analyte is determined from the current values by subtracting the current signal obtained with the pristine baseline electrode (pristine Zensor, p-Z) from the intensity of the current peaks for the samples recorded with the sensing electrode (m-Z-THC).
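A minimal sketch of this subtraction step (the function name and the peak-picking choice are illustrative assumptions, not the patented procedure):

```python
import numpy as np

def net_peak_current(sensing_current, baseline_current):
    """Subtract the pristine-electrode (p-Z) voltammogram from the
    modified-electrode (m-Z-THC) voltammogram, point by point, and
    return the peak of the difference. Both inputs are assumed to be
    current arrays sampled on the same potential axis."""
    diff = np.asarray(sensing_current, float) - np.asarray(baseline_current, float)
    return float(diff.max())
```

The resulting net peak intensity is what is then correlated with the analyte concentration.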
  • Table 5 summarizes all the experimental conditions during the sensor data collection. It was also possible to eliminate methanol from the production process.
  • the training and testing accuracies were 100% and 71 %, respectively.
  • THC and cannabidiol (CBD) electrochemical sensor fabrication.
  • the working electrode was modified with THC molecules (as described above for m-Z-THC) and in the case of a CBD-based sensor, the working electrode was modified with CBD molecules (m-Z-CBD).
  • THC- and CBD-based sensors were prepared following the same methodology detailed above. Briefly, before THC or CBD deposition, the Zensor electrodes were washed with Milli-Q water and dried with hot airflow. Following drying, stock solutions of THC or CBD (50-150 µg/mL) were prepared by adding THC or CBD solution (1 mg/mL in methanol) into a methanol/water solvent mix. Subsequently, 1 µL of the stock solution was dropped onto the WE surface of the Zensor electrodes and left to dry under room-temperature airflow for 30 seconds and warm airflow for 5 seconds.
  • the obtained modified electrodes have an initial THC (m-Z-THC) or CBD (m-Z-CBD) deposition of 130 and 100 ng, respectively.
  • the modified electrodes were submitted to electrochemical treatment using square wave voltammetry (SWV) with 0.01 M PBS solution.
  • the following conditions were employed to record the electrochemical measurement: precondition potential of 0.05 V for 30 s, equilibration time of 3 s, voltammetric potential scan from 0 to 0.8 V with a frequency of 15 Hz, an amplitude of 25 mV, and step potential of 5 mV.
  • the intensity of the current was proportional to the amount of THC deposited on the WE.
  • Fresh human saliva was provided by healthy donors, collected by spitting into a sterilized container. The samples were kept at 4 °C in a refrigerator when not in use. The saliva sample from each donor was vortexed for 5 min before use. THC or CBD was spiked at low concentration levels in methanol (0, 2, and 5 ng/mL) into the saliva samples. The saliva sample collection and preparation were done following the same protocol as above.
  • Fig. 5A shows SWV signals during the THC deposition of different modified electrodes (m-Zensor).
  • Fig. 5B shows raw data of 3 m-Zensor and one pristine (P-Z) per THC concentration 0, 2, and 5 ng/mL.
  • Fig. 5C is an example of the subtraction of the signals for the samples (THC 0, 2, and 5 ng/mL) recovered with m-Z-THC minus the signal obtained with pristine Zensor.
  • the intensity of the current (I) relative to the baseline is correlated with the THC concentration in the sample.
  • the biomolecule-free electrochemical approach detected THC in PBS (1.1 ng/mL), simulated saliva (1.6 ng/mL), and real saliva (1.6 ng/mL).
  • Figs. 6A and 6B show the difference in the electrochemical performance of the sensor with THC-saliva samples of 0, 2, and 5 ng/mL collected with the swab OFCD-100, filtered and unfiltered with glass wool. After the additional filtration, the intensities corresponding to each concentration showed better differentiation and fewer interference contributions.
  • Fig. 7 summarizes the values of THC 0, 2, and 5 ng/mL in different saliva samples, including one synthetic saliva (SS) and five saliva samples from donors (S4-S8) and THC depositions of 100, 130, and 150 ng in each sample.
  • the presence of oxidized THC or CBD on the final working electrode in the sensors facilitates further oxidation of other THC or CBD molecules present in the sample due to possible peer interactions between the analyte in the sample (THC or CBD) and the modified working electrodes in the proposed sensors (m-Z-THC and m-Z-CBD).
  • Both sensors, the m-Z-THC and the m-Z-CBD developed herein, were designed based on the oxidation of the hydroxyl group present in THC and CBD molecules under an applied potential to form C=O moieties, followed by the formation of quinones, adducts, or more complex structures (Fig. 10A).
  • the presence of THC or CBD species in the sensors (m-Z-THC and m-Z-CBD) enhanced further physical and chemical interactions of the working electrodes with the THC or CBD molecules present in the sample, hence the oxidation process.
  • the SWV was performed in PBS solution.
  • Fig. 11A illustrates an example of the raw data obtained in detecting THC (2 ng/mL) in the presence of different amounts of CBD (0, 10, and 50 ng/mL) using the m-Z-THC sensor. After the analyses, three well-defined peaks appeared between 0.4 and 0.6 V with an intensity higher than 2 µA. A shift to higher potential values was observed when 50 ng/mL of CBD was employed as an interference.
  • Fig. 11C shows the results of THC detection in the presence of CBD using the m-Z-THC sensor.
  • the intensity of the signals remained between 2-2.5 µA after subtraction. However, the signals could not be differentiated when the sample was analyzed with different amounts of the target analyte and the interfering molecule (Fig. 11C). Similar behavior was observed when the m-Z-CBD sensor was employed for CBD detection in the presence of THC concentrations. In this case, the intensity of the signals after subtraction was 1-2 µA lower than the signal obtained when the m-Z-THC sensor was employed for THC detection.
  • RF: Random Forest; SVM: Support Vector Machine; ANN: Artificial Neural Network
  • Table 8 Description of different datasets used for training for m-Z-THC sensors. [0169] Moreover, proper selection of signal features can play a critical role in the success of an ML model. As a result, the ML techniques were trained either with only statistical features of the signals, including the maximum, the minimum, the distance between the maximum and the minimum, the mean, variance, skewness, and kurtosis, or with the entire signal (Fig. 5C). Different dimensionality reduction techniques were used on the whole signal. Furthermore, the effect of feature scaling on the ML techniques was studied. The datasets for all techniques were split into training and testing sets. The results for instances with only statistical features on different datasets are summarized in Table 9. The results indicate that RF performed considerably better than SVM and ANN when trained with only statistical features. Nevertheless, it appeared to suffer from overfitting, since the differences between the training and testing accuracies were significant.
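The seven statistical features named above can be computed directly from a signal. A sketch follows; the exact estimator settings used in the study (e.g. bias correction for skewness and kurtosis) are not stated, so library defaults are assumed:

```python
import numpy as np
from scipy.stats import kurtosis, skew

def statistical_features(signal):
    """Summary features named in the text: maximum, minimum, the distance
    between them, mean, variance, skewness, and kurtosis."""
    s = np.asarray(signal, dtype=float)
    return {
        "max": s.max(),
        "min": s.min(),
        "max_min_distance": s.max() - s.min(),
        "mean": s.mean(),
        "variance": s.var(),
        "skewness": skew(s),
        "kurtosis": kurtosis(s),
    }
```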
  • Table 10 summarizes applying RF, SVM, and ANN techniques on the entire signals.
  • the results demonstrated significant improvements in the accuracy of ML techniques trained over the entire signal features with dimensionality reduction and preprocessing.
  • all ML techniques perform remarkably better on a portion of datasets with the least experimental variation, i.e., df5.
  • the variations in signal shapes for df5 datasets represent mainly saliva variation.
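The text does not specify which dimensionality reduction technique or output dimension was used on the entire-signal features; as one plausible sketch, feature scaling followed by PCA could look like this (`n_components=10` is an illustrative choice, not a value from the study):

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def reduce_signals(signals, n_components=10):
    """Scale each feature, then project the full voltammograms onto the
    leading principal components before handing them to an ML model."""
    pipeline = make_pipeline(StandardScaler(), PCA(n_components=n_components))
    return pipeline.fit_transform(np.asarray(signals, dtype=float))
```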
  • Design 1 consisted of one hidden layer with a different number of neurons, ranging between 16 to 256.
  • Design 2 consisted of two hidden layers with an equal number of neurons, ranging from 16 to 256.
  • Design 3 consisted of two hidden layers with the number of neurons in the second layer ranging from 32 to 256 while the first layer has half the number of neurons in the second layer.
  • Design 4 consisted of three hidden layers with an equal number of neurons in each layer, again varied from 16 to 256 in multiples of 2.
  • Design 5 consisted of three hidden layers with the number of neurons in each consecutive layer twice that of the previous one. The number of neurons in the last layer for Design 5 ranges from 64 to 256. The results are shown in Figs. 18A-18J.
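The five architecture families above can be enumerated programmatically. In the sketch below, layer sizes are assumed to run over powers of two from 16 to 256 (the text states this explicitly only for Design 4, so it is an assumption for the others):

```python
def hidden_layer_designs():
    """Enumerate the five hidden-layer families described in the text."""
    sizes = [16, 32, 64, 128, 256]
    return {
        1: [[n] for n in sizes],                             # one hidden layer
        2: [[n, n] for n in sizes],                          # two equal layers
        3: [[n // 2, n] for n in sizes if n >= 32],          # second layer = 2x first
        4: [[n, n, n] for n in sizes],                       # three equal layers
        5: [[n // 4, n // 2, n] for n in sizes if n >= 64],  # each layer doubles
    }
```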
  • Finding the best architecture for state-of-the-art ANN models involves trial and error.
  • a more complex structure improves the performance of the ML technique on training datasets but may lead to overfitting and high computational time. It is recommended to start with a simple structure and then increase the complexity of the model, i.e. increasing the number of hidden layers and gradually increasing the number of neurons in consecutive hidden layers.
  • Table 15 The effect of batch-to-batch variation.
  • the RF models used the Gini impurity criterion.
  • the SVM models used the RBF kernel.
  • the fourth design architecture was used for the ANN models.
  • Table 16a Accuracy of ML techniques on different datasets trained with the entire signal for m-Z-CBD sensors.
  • Support Vector Machine, Decision Tree, and Logistic regression were used to classify signals with and without interference for m-Z-THC and m-Z-CBD sensors.
  • Table 16b summarizes the accuracy of each model on training and testing sets for m-Z-CBD sensor. The results demonstrated the superiority of the SVM method over other classification techniques. The entire signal features were used for training and preprocessing, and dimensionality reduction was applied on datasets before training for all methods except Decision Tree. Similar results can be observed for m-Z-THC sensor, as per Table 17.
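The classifier comparison described above can be sketched with scikit-learn. Hyperparameters are library defaults and the data here is synthetic; the study's actual settings and datasets are not reproduced:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

def compare_classifiers(x_train, y_train, x_test, y_test):
    """Fit the three classifier families named above and score each on
    the held-out set, returning {model name: test accuracy}."""
    models = {
        "SVM": SVC(kernel="rbf"),
        "DecisionTree": DecisionTreeClassifier(random_state=0),
        "LogisticRegression": LogisticRegression(max_iter=1000),
    }
    return {name: model.fit(x_train, y_train).score(x_test, y_test)
            for name, model in models.items()}
```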
  • the SVM model was used to classify the class of concentration of target sensor in the presence of THC/CBD.
  • Table 18 summarizes the accuracy of the results on the training and testing datasets for both sensors. The results demonstrated the capability of the SVM method to identify the class in the presence of interference.
  • SVM regression model was deployed to predict the concentration of THC in the presence of CBD.
  • Figs. 12A-F illustrate the histogram of predicted results per class for the training and testing sets. The results were promising despite the SVM being trained on discrete rather than continuous concentration values.
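A sketch of that regression step on synthetic data (the feature construction and hyperparameters here are illustrative assumptions; as in the text, the model sees only the discrete spiked levels 0, 2, and 5 ng/mL yet outputs continuous values):

```python
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
levels = np.repeat([0.0, 2.0, 5.0], 20)                          # discrete training labels
signals = levels[:, None] + 0.05 * rng.standard_normal((60, 4))  # toy signal features
model = SVR(kernel="rbf").fit(signals, levels)
predicted = model.predict(signals)                               # continuous-valued output
```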
  • Electrodes were rinsed with ultrapure Milli-Q water, and solutions were prepared using phosphate buffered saline (PBS) purchased from Sigma-Aldrich as tablets. A 0.01 M PBS solution with a pH of 7.4 was used as the supporting electrolyte. Cocaine hydrochloride standard solution in methanol (1 mg/mL) was purchased from Sigma-Aldrich (Oakville, Canada). The electrochemical experiments were performed using a PalmSens™ 4 Potentiostat / Galvanostat / Impedance Analyzer connected to a computer running the PalmSens™ PSTrace software. SPEs with carbon-based working (3 mm / 0.071 cm²) and counter electrodes and a silver reference electrode were purchased from Zensor R&D (Taichung, Taiwan). Data analysis and image configuration were performed using the Origin 8.5 software.
  • the carbon electrodes used were thoroughly rinsed with Milli-Q water and allowed to air dry before proceeding.
  • 100 µL of PBS was pipetted onto the electrode and interrogated under the following square-wave voltammetry (SWV) parameters: equilibration time of 3 s, voltammetric potential scan from 0 to 1.5 V with a frequency of 15 Hz, an amplitude of 25 mV, and a step potential of 5 mV. This step was repeated three times per electrode.
  • This solution was then used to obtain an initial COCi deposition of 100, 150, or 200 ng, depending on how much was dispensed onto the working electrode.
  • Once the COCi solution had been prepared, depending on the deposition required, 1, 1.5, or 2 µL of the COCi solution was pipetted onto the working electrode. This solution was then allowed to air dry for approximately 6 minutes. Once the electrodes were dry and the solvent had been adsorbed, they were subjected to cyclic voltammetry (CV) interrogation with different concentrations of cocaine hydrochloride in saliva/PBS ranging from 0 to 100 ng/mL.
  • the samples were prepared using a serial dilution method to ensure the difference in concentration was as accurate as possible.
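For reference, serial dilution divides the concentration by a fixed factor at each transfer. The sketch below uses hypothetical values (the text states only that the final range spans 0 to 100 ng/mL, not the stock or the factor used):

```python
def serial_dilution(stock_ng_ml, dilution_factor, n_points):
    """Concentrations produced by repeatedly diluting a stock by a fixed
    factor, e.g. 1:2 transfers of equal sample and diluent volume."""
    return [stock_ng_ml / dilution_factor ** i for i in range(n_points)]
```

For example, `serial_dilution(100.0, 2, 4)` yields 100, 50, 25, and 12.5 ng/mL.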
  • the objective of SVM models is to separate the classes with the largest possible margin around a hyperplane.
  • Support vectors are the points in each class nearest to the hyperplane's margin. These points determine the position and orientation of the hyperplane.
  • Finding the hyperplane often requires transforming data from its original dimension into a higher-dimension space. Kernel functions facilitate these transformations based on the similarity and distances between two data points in their original dimension.
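As a concrete instance, the radial basis function (RBF) kernel — the kernel reported for the SVM models above — scores the similarity of two points from their distance in the original dimension (`gamma=1.0` below is an illustrative default, not a value from the study):

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """k(x, y) = exp(-gamma * ||x - y||^2): close points score near 1,
    distant points near 0."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    return float(np.exp(-gamma * np.sum((x - y) ** 2)))
```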
  • the present Example accordingly demonstrated the use of ML in the detection of THC, CBD, and cocaine with an electrochemical sensor in saliva samples.
  • inaccuracies due to person-to-person saliva variations, electrode batch discrepancies, and cannabidiol interference were observed after the analysis of the traditional concentration vs. current responses.
  • ML algorithms were successfully introduced to analyze the datasets to overcome these setbacks.
  • the classification of THC samples with 0, 2, and 5 ng/mL presented testing accuracies between 85% and 92%.
  • the results showed the capability of ML techniques to classify and predict THC concentration in the presence of CBD interference.

Abstract

A method of detecting an analyte in a sample by electrochemical sensing is provided. The sample is received on a sample receiving region of an electrochemical sensor, the sample receiving region being in fluid communication with a sensing electrode of the sensor. An electric potential scan is applied within a target range of electric potentials to the sensing electrode to induce an electrochemical reaction with the analyte. An electrical signal is measured from the sensing electrode while the electric potential is applied. The electrical signal is input into a processing device having at least one machine learning algorithm operating therein. The processing device executes the at least one machine learning algorithm to determine, from the electrical signal, a presence or an absence of the analyte in the sample.
PCT/CA2023/051089 2022-08-18 2023-08-17 Procédé de détection d'un analyte à l'aide d'un apprentissage automatique WO2024036405A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US202263399156P 2022-08-18 2022-08-18
US63/399,156 2022-08-18

Publications (1)

Publication Number Publication Date
WO2024036405A1 true WO2024036405A1 (fr) 2024-02-22

Family

ID=89940262

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2023/051089 WO2024036405A1 (fr) 2022-08-18 2023-08-17 Procédé de détection d'un analyte à l'aide d'un apprentissage automatique

Country Status (1)

Country Link
WO (1) WO2024036405A1 (fr)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CA2072311A1 (fr) * 1991-06-26 1992-12-27 Ronald E. Betts Capteur hermetique a circuit integre, substrat a conducteurs electriques, dispositif de stockage a capteur electrochimique, collecteur d'echantillons d'analyte liquide, dispositif d'etalonnage et module multi-usage
CA2373246A1 (fr) * 1999-05-05 2000-11-23 Intec Science, Inc. Systeme d'analyse electrochimique quantitative d'analytes en phase solide
CA2328535A1 (fr) * 1999-12-16 2001-06-16 Roche Diagnostics Corporation Appareil biocapteur
CA2413625A1 (fr) * 2001-12-10 2003-06-10 Lifescan, Inc. Detection passive d'echantillons servant a declencher la synchronisation d'un dosage
US9903831B2 (en) * 2011-12-29 2018-02-27 Lifescan Scotland Limited Accurate analyte measurements for electrochemical test strip based on sensed physical characteristic(s) of the sample containing the analyte and derived biosensor parameters
WO2022174348A1 (fr) * 2021-02-19 2022-08-25 Eye3Concepts Inc. Capteur électrochimique destiné à des analytes phénoliques



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23853785

Country of ref document: EP

Kind code of ref document: A1