WO2022197754A1 - Neural network parameter quantization for base calling - Google Patents
Neural network parameter quantization for base calling
- Publication number: WO2022197754A1 (PCT application PCT/US2022/020462)
- Authority: WIPO (PCT)
- Prior art keywords: quantization, format, parameters, group, neural network
Classifications
- G16B30/00 — ICT specially adapted for sequence analysis involving nucleotides or amino acids
- G16B40/20 — ICT specially adapted for biostatistics or bioinformatics-related machine learning or data mining: supervised data analysis
- G06N3/045 — Computing arrangements based on biological models: neural networks: combinations of networks
- G06N3/048 — Computing arrangements based on biological models: neural networks: activation functions
- G06N3/063 — Computing arrangements based on biological models: neural networks: physical realisation (i.e., hardware implementation) using electronic means
Definitions
- The technology disclosed relates to artificial intelligence type computers and digital data processing systems and corresponding data processing methods and products for emulation of intelligence (i.e., knowledge based systems, reasoning systems, and knowledge acquisition systems); and including systems for reasoning with uncertainty (e.g., fuzzy logic systems), adaptive systems, machine learning systems, and artificial neural networks.
- The technology disclosed relates to using deep neural networks, such as deep convolutional neural networks, for analyzing data.
- Convolution involves multiply-and-accumulate (MAC) operations with four levels of loops that slide along the kernels and feature maps.
- The first loop level computes the MAC of pixels within a kernel window.
- The second loop level accumulates the sums of products of the MAC across different input feature maps; after this accumulation, a final output element in the output feature map is obtained by adding the bias.
- The third loop level slides the kernel window within an input feature map.
- The fourth loop level generates the different output feature maps, as sketched in the code below.
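As a concrete illustration of the four loop levels above, the following sketch (illustrative Python, not code from the patent; all names and shapes are assumptions) computes one convolutional layer with explicit loops:

```python
# Illustrative sketch of the four convolution loop levels (assumed shapes:
# inputs[in_ch][H][W], kernels[out_ch][in_ch][K][K], one bias per output
# feature map). Plain Python lists, valid padding, stride 1.

def conv2d_mac(inputs, kernels, biases):
    in_ch, height, width = len(inputs), len(inputs[0]), len(inputs[0][0])
    out_ch, k = len(kernels), len(kernels[0][0])
    out_h, out_w = height - k + 1, width - k + 1
    outputs = [[[0.0] * out_w for _ in range(out_h)] for _ in range(out_ch)]
    for oc in range(out_ch):            # loop level 4: each output feature map
        for oy in range(out_h):         # loop level 3: slide the kernel window
            for ox in range(out_w):
                acc = 0.0
                for ic in range(in_ch):         # loop level 2: across input maps
                    for ky in range(k):         # loop level 1: MAC within window
                        for kx in range(k):
                            acc += (inputs[ic][oy + ky][ox + kx]
                                    * kernels[oc][ic][ky][kx])
                outputs[oc][oy][ox] = acc + biases[oc]  # add bias at the end
    return outputs
```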
- FPGAs have gained increasing interest and popularity, in particular for accelerating inference tasks, due to their (1) high degree of reconfigurability, (2) faster development time compared to Application Specific Integrated Circuits (ASICs), which helps designs keep pace with rapidly evolving CNNs, (3) good performance, and (4) superior energy efficiency compared to GPUs.
- The high performance and efficiency of an FPGA can be realized by synthesizing a circuit that is customized for a specific computation to directly process billions of operations with customized memory systems. For instance, hundreds to thousands of digital signal processing (DSP) blocks on modern FPGAs support the core convolution operations, e.g., multiplication and addition, with high parallelism.
- Dedicated data buffers between external off-chip memory and on-chip processing engines can be designed to realize the preferred dataflow by configuring tens of megabytes of on-chip block random access memory (BRAM) on the FPGA chip.
- Fig. 1 is a simplified diagram of a base calling computation system that comprises a configurable processor.
- Fig. 2 is a simplified data flow diagram of a process which can be executed by a system like that of Fig. 1.
- Fig. 3 illustrates a configuration architecture for components of a configurable or a reconfigurable array supporting base calling operations.
- Fig. 4 is a diagram of a neural network architecture which can be executed using a configurable or a reconfigurable array configured as described herein.
- Fig. 5 is a simplified illustration of an organization of tiles of sensor data used by a neural network architecture like that of Fig. 4.
- Fig. 6 is a simplified illustration of patches of tiles of sensor data used by a neural network architecture like that of Fig. 4.
- Fig. 7 illustrates a configuration of patches of an input tile used by a neural network architecture like that of Fig. 4.
- Fig. 8 illustrates part of a configuration for a neural network like that of Fig. 4 on a configurable or a reconfigurable array, such as a field programmable gate array (FPGA).
- Fig. 9 is a diagram of another alternative neural network architecture which can be executed using a configurable or a reconfigurable array configured as described herein.
- Fig. 10 illustrates one implementation of a specialized architecture of the neural network-based base caller that is used to segregate processing of data for different sequencing cycles.
- Fig. 11 depicts one implementation of segregated layers, each of which can include convolutions.
- Fig. 12A depicts one implementation of combinatory layers, each of which can include convolutions.
- Fig. 12B depicts another implementation of the combinatory layers, each of which can include convolutions.
- Fig. 13 illustrates layers of a neural network, and corresponding kernels and weights to configure the neural network for base calling operation.
- Fig. 14 illustrates a table depicting example 8-bit binary fixed-point formats, at least some of which can be used to represent parameters (e.g., weights and biases) of a neural network for base calling.
- Fig. 15A illustrates a Look-Up Table (LUT) usable to select a quantization format for a corresponding neural network parameter.
- Fig. 15B illustrates a scale that represents at least some information of the LUT of Fig. 15A.
- Fig. 15C illustrates another LUT usable to select a quantization format for a corresponding absolute value of a neural network parameter.
- Fig. 15D illustrates yet another LUT usable to select a quantization format for a corresponding absolute value of a neural network parameter.
- Fig. 16 illustrates a flowchart depicting a method for grouping of neural network parameters, selecting appropriate quantization formats for individual groups, quantizing the neural network parameters of each group in accordance with the corresponding selected quantization format, and using the quantized neural network parameters to configure a neural network topology for base calling (a code sketch of this scheme follows this list of figures).
- Fig. 17 illustrates an example layer-specific grouping of neural network parameters.
- Fig. 18 illustrates an example filter-specific grouping of neural network parameters.
- Fig. 19 illustrates an example kernel-specific grouping of neural network parameters.
- Fig. 20 illustrates a kernel-specific grouping of neural network parameters, where a parameter that has a maximum absolute value among all parameters within a corresponding group is identified.
- Fig. 21 illustrates a multiplication operation and an accumulation operation for an example input data quantization format and an example weight quantization format.
- Fig. 22 illustrates a multiplication operation and an accumulation operation for another example input data quantization format and another example weight quantization format.
- Fig. 23 is a block diagram of a base calling system in accordance with one implementation.
- Fig. 24 is a block diagram of a system controller that can be used in the system of Fig. 23.
- Fig. 25 is a simplified block diagram of a computer system that can be used to implement the technology disclosed.
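To make the scheme summarized in Figs. 14-16 and 20-22 concrete before the detailed description: parameters are grouped (per layer, filter, or kernel, per Figs. 17-19), a fixed-point format is selected for each group from the group's maximum absolute value, and the group is quantized in that format. The format convention, selection rule, and names below are illustrative assumptions, not the patent's actual table or LUT contents:

```python
# A sketch of per-group 8-bit fixed-point quantization (assumed signed
# format: 1 sign bit, (7 - f) integer bits, f fractional bits).

def select_fraction_bits(group):
    """Pick the largest fractional-bit count f whose range still covers
    the group's maximum absolute parameter value (cf. Figs. 15A-15D, 20)."""
    max_abs = max(abs(p) for p in group)
    for f in range(7, -1, -1):
        if max_abs < 2 ** (7 - f):     # representable magnitude < 2**(7-f)
            return f
    return 0                           # out-of-range values will saturate

def quantize(value, f):
    """Round to the nearest step of 2**-f and saturate to signed 8 bits."""
    q = round(value * (1 << f))
    return max(-128, min(127, q))

kernel_group = [0.75, -1.5, 0.03125, 2.25]         # one illustrative kernel
f = select_fraction_bits(kernel_group)              # -> 5 fractional bits here
quantized = [quantize(w, f) for w in kernel_group]  # -> [24, -48, 1, 72]
```

In a fixed-point multiply-accumulate of the kind depicted in Figs. 21 and 22, the product of an input with f_x fractional bits and a weight with f_w fractional bits carries f_x + f_w fractional bits, so accumulation is typically performed at that wider format before the output is re-quantized.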
- Embodiments described herein may be used in various biological or chemical processes and systems for academic or commercial analysis. More specifically, embodiments described herein may be used in various processes and systems where it is desired to detect an event, property, quality, or characteristic that is indicative of a desired reaction.
- Embodiments described herein include cartridges, biosensors, and their components, as well as bioassay systems that operate with cartridges and biosensors.
- The cartridges and biosensors include a flow cell and one or more sensors, pixels, light detectors, or photodiodes that are coupled together in a substantially unitary structure.
- A “desired reaction” includes a change in at least one of a chemical, electrical, physical, or optical property (or quality) of an analyte-of-interest.
- In particular embodiments, the desired reaction is a positive binding event (e.g., incorporation of a fluorescently labeled biomolecule with the analyte-of-interest).
- The desired reaction may be a chemical transformation, chemical change, or chemical interaction.
- The desired reaction may also be a change in electrical properties.
- For example, the desired reaction may be a change in ion concentration within a solution.
- Exemplary reactions include, but are not limited to, chemical reactions such as reduction, oxidation, addition, elimination, rearrangement, esterification, amidation, etherification, cyclization, or substitution; binding interactions in which a first chemical binds to a second chemical; dissociation reactions in which two or more chemicals detach from each other; fluorescence; luminescence; bioluminescence; chemiluminescence; and biological reactions, such as nucleic acid replication, nucleic acid amplification, nucleic acid hybridization, nucleic acid ligation, phosphorylation, enzymatic catalysis, receptor binding, or ligand binding.
- The desired reaction can also be an addition or elimination of a proton, for example, detectable as a change in pH of a surrounding solution or environment.
- An additional desired reaction can be detecting the flow of ions across a membrane (e.g., a natural or synthetic bilayer membrane); for example, as ions flow through a membrane, the current is disrupted, and the disruption can be detected.
- In some embodiments, the desired reaction includes the incorporation of a fluorescently-labeled molecule to an analyte.
- The analyte may be an oligonucleotide and the fluorescently-labeled molecule may be a nucleotide.
- The desired reaction may be detected when an excitation light is directed toward the oligonucleotide having the labeled nucleotide, and the fluorophore emits a detectable fluorescent signal.
- In alternative embodiments, the detected fluorescence is a result of chemiluminescence or bioluminescence.
- A desired reaction may also increase fluorescence (or Förster) resonance energy transfer (FRET), for example, by bringing a donor fluorophore in proximity to an acceptor fluorophore; decrease FRET by separating donor and acceptor fluorophores; increase fluorescence by separating a quencher from a fluorophore; or decrease fluorescence by co-locating a quencher and fluorophore.
- A “reaction component” or “reactant” includes any substance that may be used to obtain a desired reaction.
- Reaction components include reagents, enzymes, samples, other biomolecules, and buffer solutions.
- The reaction components are typically delivered to a reaction site in a solution and/or immobilized at a reaction site.
- The reaction components may interact directly or indirectly with another substance, such as the analyte-of-interest.
- A “reaction site” is a localized region where a desired reaction may occur.
- A reaction site may include support surfaces of a substrate where a substance may be immobilized thereon.
- For example, a reaction site may include a substantially planar surface in a channel of a flow cell that has a colony of nucleic acids thereon.
- The nucleic acids in the colony have the same sequence, being, for example, clonal copies of a single stranded or double stranded template.
- Alternatively, a reaction site may contain only a single nucleic acid molecule, for example, in a single stranded or double stranded form.
- A plurality of reaction sites may be unevenly distributed along the support surface or arranged in a predetermined manner (e.g., side-by-side in a matrix, such as in microarrays).
- A reaction site can also include a reaction chamber (or well) that at least partially defines a spatial region or volume configured to compartmentalize the desired reaction.
- This application uses the terms “reaction chamber” and “well” interchangeably.
- A “reaction chamber” or “well” includes a spatial region that is in fluid communication with a flow channel.
- The reaction chamber may be at least partially separated from the surrounding environment or other spatial regions.
- A plurality of reaction chambers may be separated from each other by shared walls.
- The reaction chamber may include a cavity defined by interior surfaces of a well and have an opening or aperture so that the cavity may be in fluid communication with a flow channel. Biosensors including such reaction chambers are described in greater detail in international application no. PCT/US2011/057111, filed on October 20, 2011, which is incorporated herein by reference in its entirety.
- The reaction chambers are sized and shaped relative to solids (including semi-solids) so that the solids may be inserted, fully or partially, therein.
- For example, the reaction chamber may be sized and shaped to accommodate only one capture bead.
- The capture bead may have clonally amplified DNA or other substances thereon.
- Alternatively, the reaction chamber may be sized and shaped to receive an approximate number of beads or solid substrates.
- The reaction chambers may also be filled with a porous gel or substance that is configured to control diffusion or filter fluids that may flow into the reaction chamber.
- Sensors are associated with corresponding pixel areas of a sample surface of a biosensor.
- A pixel area is a geometrical construct that represents an area on the biosensor’s sample surface for one sensor (or pixel).
- A sensor that is associated with a pixel area detects light emissions gathered from the associated pixel area when a desired reaction has occurred at a reaction site or a reaction chamber overlying the associated pixel area.
- The pixel areas can overlap.
- A plurality of sensors may be associated with a single reaction site or a single reaction chamber.
- A single sensor may be associated with a group of reaction sites or a group of reaction chambers.
- A “biosensor” includes a structure having a plurality of reaction sites and/or reaction chambers (or wells).
- A biosensor may include a solid-state imaging device (e.g., CCD or CMOS imager) and, optionally, a flow cell mounted thereto.
- The flow cell may include at least one flow channel that is in fluid communication with the reaction sites and/or the reaction chambers.
- The biosensor is configured to fluidically and electrically couple to a bioassay system.
- The bioassay system may deliver reactants to the reaction sites and/or the reaction chambers according to a predetermined protocol (e.g., sequencing-by-synthesis) and perform a plurality of imaging events.
- The bioassay system may direct solutions to flow along the reaction sites and/or the reaction chambers. At least one of the solutions may include four types of nucleotides having the same or different fluorescent labels.
- The nucleotides may bind to corresponding oligonucleotides located at the reaction sites and/or the reaction chambers.
- The bioassay system may then illuminate the reaction sites and/or the reaction chambers using an excitation light source (e.g., solid-state light sources, such as light-emitting diodes or LEDs).
- The excitation light may have a predetermined wavelength or wavelengths, including a range of wavelengths.
- The excited fluorescent labels provide emission signals that may be captured by the sensors.
- The biosensor may include electrodes or other types of sensors configured to detect other identifiable properties.
- For example, the sensors may be configured to detect a change in ion concentration.
- The sensors may be configured to detect the ion current flow across a membrane.
- A “cluster” is a colony of similar or identical molecules or nucleotide sequences or DNA strands.
- A cluster can be an amplified oligonucleotide or any other group of a polynucleotide or polypeptide with a same or similar sequence.
- A cluster can be any element or group of elements that occupy a physical area on a sample surface.
- Clusters are immobilized to a reaction site and/or a reaction chamber during a base calling cycle.
- The term “immobilized,” when used with respect to a biomolecule or biological or chemical substance, includes substantially attaching the biomolecule or biological or chemical substance at a molecular level to a surface.
- A biomolecule or biological or chemical substance may be immobilized to a surface of the substrate material using adsorption techniques, including non-covalent interactions (e.g., electrostatic forces, van der Waals, and dehydration of hydrophobic interfaces) and covalent binding techniques where functional groups or linkers facilitate attaching the biomolecules to the surface.
- Immobilizing biomolecules or biological or chemical substances to a surface of a substrate material may be based upon the properties of the substrate surface, the liquid medium carrying the biomolecule or biological or chemical substance, and the properties of the biomolecules or biological or chemical substances themselves.
- A substrate surface may be functionalized (e.g., chemically or physically modified) to facilitate immobilizing the biomolecules (or biological or chemical substances) to the substrate surface.
- The substrate surface may be first modified to have functional groups bound to the surface. The functional groups may then bind to biomolecules or biological or chemical substances to immobilize them thereon.
- A substance can be immobilized to a surface via a gel, for example, as described in US Patent Publ. No. US 2011/0059865 A1, which is incorporated herein by reference.
- Nucleic acids can be attached to a surface and amplified using bridge amplification.
- Useful bridge amplification methods are described, for example, in U.S. Patent No. 5,641,658; WO 2007/010251; U.S. Pat. No. 6,090,592; U.S. Patent Publ. No. 2002/0055100 A1; U.S. Patent No. 7,115,400; U.S. Patent Publ. No. 2004/0096853 A1; U.S. Patent Publ. No. 2004/0002090 A1; U.S. Patent Publ. No. 2007/0128624 A1; and U.S. Patent Publ. No. 2008/0009420 A1, each of which is incorporated herein in its entirety.
- Alternatively, the nucleic acids can be attached to a surface and amplified using one or more primer pairs.
- For example, one of the primers can be in solution and the other primer can be immobilized on the surface (e.g., 5'-attached).
- A nucleic acid molecule can hybridize to one of the primers on the surface, followed by extension of the immobilized primer to produce a first copy of the nucleic acid.
- The primer in solution then hybridizes to the first copy of the nucleic acid, which can be extended using the first copy of the nucleic acid as a template.
- Optionally, the original nucleic acid molecule can hybridize to a second immobilized primer on the surface and can be extended at the same time or after the primer in solution is extended.
- Repeated rounds of extension (e.g., amplification) using the immobilized primer and the primer in solution provide multiple copies of the nucleic acid.
- The assay protocols executed by the systems and methods described herein include the use of natural nucleotides and also enzymes that are configured to interact with the natural nucleotides.
- Natural nucleotides include, for example, ribonucleotides (RNA) or deoxyribonucleotides (DNA).
- Natural nucleotides can be in the mono-, di-, or tri-phosphate form and can have a base selected from adenine (A), thymine (T), uracil (U), guanine (G) or cytosine (C). It will be understood, however, that non-natural nucleotides, modified nucleotides or analogs of the aforementioned nucleotides can be used.
- Some examples of useful non-natural nucleotides are set forth below in regard to reversible terminator-based sequencing by synthesis methods.
- In some embodiments, items or solid substances may be disposed within the reaction chambers.
- The item or solid may be physically held or immobilized within the reaction chamber through an interference fit, adhesion, or entrapment.
- Exemplary items or solids that may be disposed within the reaction chambers include polymer beads, pellets, agarose gel, powders, quantum dots, or other solids that may be compressed and/or held within the reaction chamber.
- A nucleic acid superstructure, such as a DNA ball, can be disposed in or at a reaction chamber, for example, by attachment to an interior surface of the reaction chamber or by residence in a liquid within the reaction chamber.
- A DNA ball or other nucleic acid superstructure can be preformed and then disposed in or at the reaction chamber.
- Alternatively, a DNA ball can be synthesized in situ at the reaction chamber.
- A DNA ball can be synthesized by rolling circle amplification to produce a concatemer of a particular nucleic acid sequence, and the concatemer can be treated with conditions that form a relatively compact ball.
- DNA balls and methods for their synthesis are described, for example, in U.S. Patent Publication Nos. 2008/0242560 A1 or 2008/0234136 A1, each of which is incorporated herein in its entirety.
- A substance that is held or disposed in a reaction chamber can be in a solid, liquid, or gaseous state.
- Base calling identifies a nucleotide base in a nucleic acid sequence.
- Base calling refers to the process of determining a base call (A, C, G, T) for every cluster at a specific cycle.
- Base calling can be performed utilizing four-channel, two-channel or one-channel methods and systems described in the incorporated materials of U.S. Patent Application Publication No. 2013/0079232.
- A base calling cycle is referred to as a “sampling event.”
- In one implementation, a sampling event comprises two illumination stages in time sequence, such that a pixel signal is generated at each stage. The first illumination stage induces illumination from a given cluster indicating nucleotide bases A and T in an AT pixel signal, and the second illumination stage induces illumination from a given cluster indicating nucleotide bases C and T in a CT pixel signal.
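Under this two-stage scheme, T contributes to both signals, A only to the AT signal, C only to the CT signal, and G to neither, so a base can be decoded from the pair of pixel signals. A minimal illustrative decoder (the thresholding and names are assumptions, not taken from the incorporated publication):

```python
# Hypothetical decoder for the two-stage (AT, CT) pixel-signal scheme above.
def call_base(at_signal, ct_signal, threshold=0.5):
    at_on = at_signal > threshold   # lights for bases A and T
    ct_on = ct_signal > threshold   # lights for bases C and T
    return {(True, False): "A",
            (True, True): "T",
            (False, True): "C",
            (False, False): "G"}[(at_on, ct_on)]

assert call_base(0.9, 0.1) == "A"
assert call_base(0.9, 0.8) == "T"
assert call_base(0.1, 0.9) == "C"
assert call_base(0.1, 0.1) == "G"
```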
- The technology disclosed can be implemented on processors like Central Processing Units (CPUs), Graphics Processing Units (GPUs), Field Programmable Gate Arrays (FPGAs), Coarse-Grained Reconfigurable Architectures (CGRAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Instruction-set Processors (ASIPs), and Digital Signal Processors (DSPs).
- Fig. 1 is a simplified block diagram of the system for analysis of sensor data from a sequencing system, such as base call sensor outputs.
- The system includes a sequencing machine 100 and a configurable processor 150.
- The configurable processor 150 can execute a neural network-based base caller in coordination with a runtime program executed by the Central Processing Unit (CPU) 102.
- The sequencing machine 100 comprises base call sensors and flow cells 101.
- The flow cells can comprise one or more tiles in which clusters of genetic material are exposed to a sequence of analyte flows used to cause reactions in the clusters to identify the bases in the genetic material.
- The sensors sense the reactions for each cycle of the sequence in each tile of the flow cell to provide tile data. Examples of this technology are described in more detail below. Genetic sequencing is a data intensive operation, which translates base call sensor data into sequences of base calls for each cluster of genetic material sensed during a base call operation.
- The system in this example includes the CPU 102, which executes a runtime program to coordinate the base call operations, and memory 103 to store sequences of arrays of tile data, base call reads produced by the base calling operation, and other information used in the base call operations. Also, in this illustration, the system includes memory 104 to store a configuration file (or files), such as FPGA bit files, the topology of the neural network, and model parameters for the neural network used to configure and reconfigure the configurable processor 150 and execute the neural network. Examples of such model parameters include weight coefficients (also referred to as weights) and/or biases that are to be used to configure the topology of the neural network.
- The sequencing machine 100 can include a program for configuring a configurable processor, and in some embodiments a reconfigurable processor, to execute the neural network.
- The sequencing machine 100 is coupled by a bus 105 to the configurable processor 150.
- The bus 105 can be implemented using a high throughput technology, such as, in one example, bus technology compatible with the PCIe standards (Peripheral Component Interconnect Express) currently maintained and developed by the PCI-SIG (PCI Special Interest Group).
- A memory 160 is coupled to the configurable processor 150 by bus 161.
- The memory 160 can be on-board memory, disposed on a circuit board with the configurable processor 150.
- The memory 160 is used for high speed access by the configurable processor 150 to working data used in the base call operation.
- The bus 161 can also be implemented using a high throughput technology, such as bus technology compatible with the PCIe standards.
- Configurable processors, including field programmable gate arrays (FPGAs), coarse grained reconfigurable arrays (CGRAs), and other configurable and reconfigurable devices, can be configured to implement a variety of functions more efficiently or faster than might be achieved using a general purpose processor executing a computer program.
- Configuration of configurable processors involves compiling a functional description to produce a configuration file, referred to sometimes as a bitstream or bit file, and distributing the configuration file to the configurable elements on the processor.
- The configuration file defines the logic functions to be executed by the configurable processor, by configuring the circuit to set data flow patterns, use of distributed memory and other on-chip memory resources, lookup table contents, operations of configurable logic blocks and configurable execution units like multiply-and-accumulate units, configurable interconnects, and other elements of the configurable array.
- A configurable processor is reconfigurable if the configuration file may be changed in the field, by changing the loaded configuration file.
- The configuration file may be stored in volatile SRAM elements, in non-volatile read-write memory elements, and in combinations of the same, distributed among the array of configurable elements on the configurable or reconfigurable processor.
- A variety of commercially available configurable processors are suitable for use in a base calling operation as described herein.
- Examples include commercially available products such as the Xilinx Alveo™ U200, Xilinx Alveo™ U250, Xilinx Alveo™ U280, Intel/Altera Stratix™ GX2800, and Intel Stratix™ GX10M.
- A host CPU can be implemented on the same integrated circuit as the configurable processor.
- Embodiments described herein implement the multi-cycle neural network using a configurable processor 150.
- The configuration file for a configurable processor can be implemented by specifying the logic functions to be executed using a high-level description language (HDL) or a register transfer level (RTL) language specification.
- The specification can be compiled using the resources designed for the selected configurable processor to generate the configuration file.
- The same or similar specification can be compiled for the purposes of generating a design for an application-specific integrated circuit which may not be a configurable processor.
- Alternatives to the configurable processor, in all embodiments described herein, therefore include a configured processor comprising an application-specific integrated circuit (ASIC) or special purpose integrated circuit or set of integrated circuits, or a system-on-a-chip (SoC) device, configured to execute a neural network-based base call operation as described herein.
- In general, configurable processors and configured processors described herein, as configured to execute runs of a neural network, are referred to herein as neural network processors.
- The configurable processor 150 is configured in this example by a configuration file loaded using a program executed by the CPU 102, or by other sources, which configures the array of configurable elements on the configurable processor 150 to execute the base call function.
- The configuration includes data flow logic 151 which is coupled to the buses 105 and 161 and executes functions for distributing data and control parameters among the elements used in the base call operation.
- The configurable processor 150 is configured with base call execution logic 152 to execute a multi-cycle neural network.
- The logic 152 comprises a plurality of multi-cycle execution clusters (e.g., 153) which, in this example, includes multi-cycle cluster 1 through multi-cycle cluster X.
- The number of multi-cycle clusters can be selected according to a trade-off involving the desired throughput of the operation and the available resources on the configurable processor.
- The multi-cycle clusters are coupled to the data flow logic 151 by data flow paths 154 implemented using configurable interconnect and memory resources on the configurable processor. Also, the multi-cycle clusters are coupled to the data flow logic 151 by control paths 155, implemented, for example, using configurable interconnect and memory resources on the configurable processor, which provide control signals indicating available clusters, readiness to provide input units for execution of a run of the neural network to the available clusters, readiness to provide trained parameters for the neural network, readiness to provide output patches of base call classification data, and other control data used for execution of the neural network.
- The configurable processor is configured to execute runs of a multi-cycle neural network using trained parameters to produce classification data for sensing cycles of the base call operation.
- A run of the neural network is executed to produce classification data for a subject sensing cycle of the base call operation.
- A run of the neural network operates on a sequence including a number N of arrays of tile data from respective sensing cycles of N sensing cycles, where the N sensing cycles provide sensor data for different base call operations for one base position per operation in time sequence in the examples described herein.
- Optionally, some of the N sensing cycles can be out of sequence if needed, according to a particular neural network model being executed.
- The number N can be any number greater than one.
- Sensing cycles of the N sensing cycles represent a set of sensing cycles for at least one sensing cycle preceding the subject sensing cycle and at least one sensing cycle following the subject cycle in time sequence. Examples are described herein in which the number N is an integer equal to or greater than five.
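As an illustration, with N = 5 and the subject cycle centered in the window, the tile data for one run could be gathered as follows (a sketch assuming per-cycle NumPy arrays; names, indexing, and boundary handling are assumptions, not the patent's data layout):

```python
import numpy as np

def n_cycle_window(tile_data, subject_cycle, n=5):
    """Collect the N arrays of tile data for the window centered on the
    subject sensing cycle (n assumed odd; edge cycles not handled)."""
    half = n // 2
    return np.stack([tile_data[c]
                     for c in range(subject_cycle - half,
                                    subject_cycle + half + 1)])
```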
- The data flow logic is configured to move tile data and at least some trained parameters of the model parameters from the memory 160 to the configurable processor for runs of the neural network, using input units for a given run including tile data for spatially aligned patches of the N arrays.
- The input units can be moved by direct memory access operations in one DMA operation, or in smaller units moved during available time slots in coordination with the execution of the neural network deployed.
- Tile data for a sensing cycle as described herein can comprise an array of sensor data having one or more features.
- The sensor data can comprise two images which are analyzed to identify one of four bases at a base position in a genetic sequence of DNA, RNA, or other genetic material.
- The tile data can also include metadata about the images and the sensors.
- For example, the tile data can comprise information about alignment of the images with the clusters, such as distance-from-center information indicating the distance of each pixel in the array of sensor data from the center of a cluster of genetic material on the tile.
- Tile data can also include data produced during execution of the multi-cycle neural network, referred to as intermediate data, which can be reused rather than recomputed during a run of the multi-cycle neural network.
- The data flow logic can write intermediate data to the memory 160 in place of the sensor data for a given patch of an array of tile data. Embodiments like this are described in more detail below.
- Described herein is a system for analysis of base call sensor output, comprising memory (e.g., 160), accessible by the runtime program, storing tile data including sensor data for a tile from sensing cycles of a base calling operation.
- The system includes a neural network processor, such as configurable processor 150, having access to the memory.
- The neural network processor is configured to execute runs of a neural network using trained parameters to produce classification data for sensing cycles.
- A run of the neural network operates on a sequence of N arrays of tile data from respective sensing cycles of N sensing cycles, including a subject cycle, to produce the classification data for the subject cycle.
- The data flow logic 151 is provided to move tile data and the trained parameters from the memory to the neural network processor for runs of the neural network, using input units including data for spatially aligned patches of the N arrays from respective sensing cycles of N sensing cycles.
- The neural network processor has access to the memory and includes a plurality of execution clusters, the execution clusters in the plurality of execution clusters configured to execute a neural network.
- The data flow logic has access to the memory and to execution clusters in the plurality of execution clusters, to provide input units of tile data to available execution clusters in the plurality of execution clusters, the input units including a number N of spatially aligned patches of arrays of tile data from respective sensing cycles, including a subject sensing cycle, and to cause the execution clusters to apply the N spatially aligned patches to the neural network to produce output patches of classification data for the spatially aligned patch of the subject sensing cycle, where N is greater than 1.
- Fig. 2 is a simplified diagram showing aspects of the base calling operation, including functions of a runtime program executed by a host processor.
- The outputs of image sensors from a flow cell are provided on lines 200 to image processing threads 201, which can perform processes on images, such as resampling, alignment, and arrangement in an array of sensor data for the individual tiles, and can be used by processes which calculate a tile cluster mask for each tile in the flow cell, identifying pixels in the array of sensor data that correspond to clusters of genetic material on the corresponding tile of the flow cell.
- One example algorithm is based on a process to detect clusters which are unreliable in the early sequencing cycles using a metric derived from the softmax output; the data from those wells/clusters is then discarded, and no output data is produced for those clusters.
- For example, a process can identify clusters with high reliability during the first N (e.g., 25) base-calls, and reject the others.
- Rejected clusters might be polyclonal, of very weak intensity, or obscured by fiducials. This procedure can be performed on the host CPU.
- This information would potentially be used to identify the necessary clusters of interest to be passed back to the CPU, thereby limiting the storage required for intermediate data (i.e., the “dehydration” step described below could look at all pixels with wells, or it could be implemented more efficiently to only process pixels with wells/clusters that pass the filter).
- The outputs of the image processing threads 201 are provided on lines 202 to a dispatch logic 210 in the CPU, which routes the arrays of tile data to a data cache 204 on a high speed bus 203, or on high-speed bus 205 to the multi-cluster neural network processor hardware 220, such as the configurable processor of Fig. 1, according to the state of the base calling operation.
- The hardware 220 returns classification data output by the neural network to the dispatch logic 210, which passes the information to the data cache 204, or on lines 211 to threads 202 that perform base call and quality score computations using the classification data, and can arrange the data in standard formats for base call reads.
- The outputs of the threads 202 that perform base calling and quality score computations are provided on lines 212 to threads 203 that aggregate the base call reads, perform other operations such as data compression, and write the resulting base call outputs to specified destinations for utilization by the customers.
- The host can include threads (not shown) that perform final processing of the output of the hardware 220 in support of the neural network.
- The hardware 220 can provide outputs of classification data from a final layer of the multi-cluster neural network.
- The host processor can execute an output activation function, such as a softmax function, over the classification data to configure the data for use by the base call and quality score threads 202.
- The host processor can also execute input operations (not shown), such as resampling, batch normalization or other adjustments of the tile data prior to input to the hardware 220.
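The output activation mentioned above can be a standard numerically stable softmax applied over the per-base classification scores; a generic formulation (not the patent's code) follows:

```python
import numpy as np

def softmax(scores):
    """Stable softmax over the last axis of the classification scores."""
    shifted = scores - np.max(scores, axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / np.sum(exp, axis=-1, keepdims=True)

# e.g., raw scores for (A, C, G, T) at one cluster:
probs = softmax(np.array([2.0, -1.0, 0.5, 0.1]))   # probabilities sum to 1.0
```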
- Fig. 3 is a simplified diagram of a configuration of a configurable processor such as that of Fig. 1.
- The configurable processor comprises an FPGA with a plurality of high speed PCIe interfaces.
- The FPGA is configured with a wrapper 300 which comprises the data flow logic described with reference to Fig. 1.
- The wrapper 300 manages the interface and coordination with a runtime program in the CPU across the CPU communication link 309 and manages communication with the on-board DRAM 302 (e.g., memory 160) via DRAM communication link 310.
- the data flow logic in the wrapper 300 provides patch data retrieved by traversing the arrays of tile data on the on-board DRAM 302 for the number N cycles to a cluster 301 and retrieves process data 315 from the cluster 301 for delivery back to the on-board DRAM 302.
- the wrapper 300 also manages transfer of data between the on-board DRAM 302 and host memory, for both the input arrays of tile data, and for the output patches of classification data.
- the wrapper transfers patch data on line 313 to the allocated cluster 301.
- the wrapper provides trained parameters, such as weights and biases on line 312 to the cluster 301 retrieved from the on-board DRAM 302.
- the wrapper provides configuration and control data on line 311 to the cluster 301 provided from, or generated in response to, the runtime program on the host via the CPU communication link 309.
- the cluster can also provide status signals on line 316 to the wrapper 300, which are used in cooperation with control signals from the host to manage traversal of the arrays of tile data to provide spatially aligned patch data, and to execute the multi-cycle neural network over the patch data using the resources of the cluster 301.
- Each cluster can be configured to provide classification data for base calls in a subject sensing cycle using the tile data of multiple sensing cycles described herein.
- model data including kernel data like filter weights and biases can be sent from the host CPU to the configurable processor, so that the model can be updated as a function of cycle number.
- a base calling operation can comprise, for a representative example, on the order of hundreds of sensing cycles.
- a base calling operation can include paired-end reads in some embodiments.
- the model trained parameters may be updated once every 20 cycles (or other number of cycles), or according to update patterns implemented for particular systems and neural network models.
- a sequence for a given string in a genetic cluster on a tile includes a first part extending from a first end down (or up) the string, and a second part extending from a second end up (or down) the string.
- the trained parameters can be updated on the transition from the first part to the second part.
- image data for multiple cycles of sensing data for a tile can be sent from the CPU to the wrapper 300.
- the wrapper 300 can optionally do some pre-processing and transformation of the sensing data and write the information to the on-board DRAM 302.
- the input tile data for each sensing cycle can include arrays of sensor data including on the order of 4000 x 3000 pixels per sensing cycle per tile or more, with two features representing colors of two images of the tile, and one or two bytes per feature per pixel.
- the array of tile data for each run of the multi-cycle neural network can consume on the order of hundreds of megabytes per tile.
- the tile data also includes an array of DFC data, stored once per tile, or other type of metadata about the sensor data and the tiles.
- the wrapper allocates a patch to the cluster.
- the wrapper fetches a next patch of tile data in the traversal of the tile and sends it to the allocated cluster along with appropriate control and configuration information.
- the cluster can be configured with enough memory on the configurable processor to hold a patch of data (including, in some systems, patches from multiple cycles) that is being worked on in place, and a patch of data that is to be worked on when the current patch of processing is finished, using a ping-pong buffer technique or raster scanning technique in various embodiments.
- When an allocated cluster completes its run of the neural network for the current patch and produces an output patch, it will signal the wrapper.
- the wrapper will read the output patch from the allocated cluster, or alternatively the allocated cluster will push the data out to the wrapper. Then the wrapper will assemble output patches for the processed tile in the DRAM 302.
- the wrapper sends the processed output array for the tile back to the host/CPU in a specified format.
- the on-board DRAM 302 is managed by memory management logic in the wrapper 300.
- the runtime program can control the sequencing operations to complete analysis of all the arrays of tile data for all the cycles in the run in a continuous flow to provide real time analysis.
- Fig. 4 is a diagram of a multi-cycle neural network model which can be executed using the system described herein.
- the example shown in Fig. 4 can be referred to as a five-cycle input, one-cycle output neural network.
- the inputs to the multi-cycle neural network model include five spatially aligned patches (e.g., 400) from the tile data arrays of five sensing cycles of a given tile. Spatially aligned patches have the same aligned row and column dimensions (x,y) as other patches in the set, so that the information relates to the same clusters of genetic material on the tile in sequence cycles.
- a subject patch is a patch from the array of tile data for cycle N.
- the set of five spatially aligned patches includes a patch from cycle N-2 preceding the subject patch by two cycles, a patch from cycle N-1 preceding the subject patch by one cycle, a patch from cycle N+1 following the patch from the subject cycle by one cycle, and a patch from cycle N+2 following the patch from the subject cycle by two cycles.
- the model includes a segregated stack 401 of layers of the neural network for each of the input patches.
- stack 401 receives as input, tile data for the patch from cycle N+2, and is segregated from the stacks 402, 403, 404, and 405 so they do not share input data or intermediate data.
- all of the stacks 401-405 can have identical models, and identical trained parameters (such as trained weights and biases).
- the models and trained parameters may be different in the different stacks.
- Stack 402 receives as input, tile data for the patch from cycle N+1.
- Stack 403 receives as input, tile data for the patch from cycle N.
- Stack 404 receives as input, tile data for the patch from cycle N-1.
- Stack 405 receives as input, tile data for the patch from cycle N-2.
- the layers of the segregated stacks each execute a convolution operation of a kernel including a plurality of filters over the input data for the layer.
- the patch 400 may include three features.
- the output of the layer 410 may include many more features, such as 10 to 20 features.
- the outputs of each of layers 411 to 416 can include any number of features suitable for a particular implementation.
- the parameters of the filters are trained parameters for the neural network, such as weights and biases.
- the output feature set (intermediate data) from each of the stacks 401-405 is provided as input to an inverse hierarchy 420 of temporal combinatorial layers, in which the intermediate data from the multiple cycles is combined.
- the inverse hierarchy 420 includes a first layer including three combinatorial layers 421, 422, 423, each receiving intermediate data from three of the segregated stacks, and a final layer including one combinatorial layer 430 receiving intermediate data from the three temporal layers 421, 422, and 423.
- the output of the final combinatorial layer 430 is an output patch of classification data for clusters located in the corresponding patch of the tile from cycle N.
- the output patches can be assembled into an output array classification data for the tile for cycle N.
- the output patch may have sizes and dimensions different from the input patches.
- the output patch may include pixel-by-pixel data that can be filtered by the host to select cluster data.
- the output classification data can then be applied to a softmax function 440 (or other output activation function) optionally executed by the host, or on the configurable processor, depending on the particular implementation.
- An output function different from softmax could be used (e.g., making a base call output parameter according to the largest output, then using a learned nonlinear mapping over context/network outputs to give base quality).
- the output of the softmax function 440 can be provided as base call probabilities for cycle N (450) and stored in host memory to be used in subsequent processing.
- Other systems may use another function for output probability calculation, e.g., another nonlinear model.
- the neural network can be implemented using a configurable processor with a plurality of execution clusters so as to complete evaluation of one tile cycle within the duration of the time interval, or close to the duration of the time interval, of one sensing cycle, effectively providing the output data in real time.
- Data flow logic can be configured to distribute input units of tile data and trained parameters to the execution clusters, and to distribute output patches for aggregation in memory.
- Input units of data for a five-cycle input, one-cycle output neural network like that of Fig. 4 are described with reference to Fig. 5 and Fig. 6 for a base call operation using two-channel sensor data.
- the base call operation can execute two flows of analyte and two reactions that generate two channels of signals, such as images, which can be processed to identify which one of four bases is located at a current position in the genetic sequence for each cluster of genetic material.
- a different number of channels of sensing data may be utilized.
- Fig. 5 shows arrays of tile data for five cycles for a given tile, tile M, used for the purposes of executing a five-cycle input, one-cycle output neural network.
- the five-cycle input tile data in this example can be written to the on-board DRAM, or other memory in the system which can be accessed by the data flow logic, and includes: for cycle N-2, an array 501 for channel 1 and an array 511 for channel 2; for cycle N-1, an array 502 for channel 1 and an array 512 for channel 2; for cycle N, an array 503 for channel 1 and an array 513 for channel 2; for cycle N+1, an array 504 for channel 1 and an array 514 for channel 2; and for cycle N+2, an array 505 for channel 1 and an array 515 for channel 2.
- an array 520 of metadata for the tile can be written once in the memory, in this case a DFC file, included for use as input to the neural network along with each cycle.
- the data flow logic composes input units, which can be understood with reference to Fig. 6, of tile data that includes spatially aligned patches of the arrays of tile data for each execution cluster configured to execute a run of the neural network over an input patch.
- An input unit for an allocated execution cluster is composed by the data flow logic by reading spatially aligned patches (e.g., 601, 602, 611, 612, 620) from each of the arrays 501-505, 511-515, and 520 of tile data for the five input cycles, and delivering them via data paths (schematically 600) to memory on the configurable processor configured for use by the allocated execution cluster.
- the allocated execution cluster executes a run of the five-cycle input/one-cycle output neural network, and delivers an output patch for the subject cycle N of classification data for the same patch of the tile in the subject cycle N.
- Fig. 7 illustrates a mapping of patches over an array of tile data for a given tile.
- input array 700 of tile data has a width of X pixels and a height of Y pixels.
- the output tile 701 can be reduced by two rows and two columns per layer of the neural network. The reduction by two rows/columns is caused, in this example, by the kernel size of 3x3 and the type of (edge) padding in use, and can be different in different configurations.
- the output tile 701 of classification data will have a width of X-L pixels, for a neural network comprising L/2 layers of convolutions of this type.
- the output tile of classification data will have a height of Y-L pixels for a neural network comprising L/2 layers.
- L can be 12 pixels.
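- as a quick check of this arithmetic (an illustrative sketch, assuming each 3x3 layer trims two rows and two columns as described):

```python
def output_tile_size(x_pixels, y_pixels, num_layers, kernel=3):
    """Each kernel x kernel convolution without padding trims (kernel - 1)
    rows and columns; with 3x3 kernels that is 2 rows and 2 columns per
    layer, so L/2 layers trim L = 2 * num_layers rows and columns total."""
    trim = (kernel - 1) * num_layers
    return x_pixels - trim, y_pixels - trim

# L = 12 as in the example, i.e. six 3x3 convolution layers:
print(output_tile_size(76, 76, num_layers=6))  # -> (64, 64)
```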
- the patch areas are not drawn to scale.
- the input patches are formed in an overlapping manner to account for lost pixels that result from the convolutions over the patch dimensions.
- the sizes of the input patches can be chosen according to the particular implementation.
- an input patch may have a dimension of 76 x 76 pixels, with three channels of one or more bytes each.
- An output patch may have a dimension of 64 x 64 pixels.
- the base call operation outputs classifications for A/C/T/G base calls, and the output patch may include four channels of one or more bytes for each pixel representing confidence scores for the classifications.
- the outputs on line 435 are unnormalized confidence scores for four base calls.
- the data flow logic can address the array of tile data to patches in a raster scan fashion, or other scanning fashion, to provide input patches (e.g., 705). For example, for the first available cluster, patch P0,0 can be provided. For a next available cluster, patch P0,1 can be provided. This sequence can be continued in a raster pattern until all of the patches of the tile are delivered to available clusters for processing.
- Output patches (e.g., 706) can be written back in the same address space aligned with their subject input patches in some embodiments, accounting for any differences in the number of bytes per pixel used to encode the data.
- the output patches have an area (number of pixels) reduced relative to the input patches according to the number of convolution layers, and the nature of the convolutions executed.
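- the overlapping patch mapping described above can be sketched as follows, using the example 76x76 input and 64x64 output patch sizes; the function name, stride choice, and edge handling are illustrative assumptions:

```python
def raster_patches(width, height, in_patch=76, out_patch=64):
    """Yield (row, col, x0, y0) for overlapping input patches in raster
    order. The input stride equals the output patch size, so the shrunken
    64x64 output patches tile the output array without gaps while the
    input patches overlap by in_patch - out_patch = 12 pixels."""
    for row, y0 in enumerate(range(0, height - in_patch + 1, out_patch)):
        for col, x0 in enumerate(range(0, width - in_patch + 1, out_patch)):
            yield row, col, x0, y0

# First few patches of a small illustrative tile:
for row, col, x0, y0 in list(raster_patches(204, 204))[:3]:
    print(f"patch P{row},{col} reads input at x={x0}, y={y0}")
```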
- Fig. 8 is a simplified representation of a stack of a neural network usable in a system like that of Fig. 4 (e.g., 401 and 420).
- some functions of the neural network are executed on the host (e.g., 800, 802) and other portions of the neural network are executed on the configurable processor (801).
- a first function can be batch normalization (layer 810) performed on the CPU.
- batch normalization as a function may be fused into one or more layers, and no separate batch normalization layer may be present.
- a number of spatial, segregated convolution layers are executed as a first set of convolution layers of the neural network, as discussed above on the configurable processor.
- the first set of convolution layers applies 2D convolutions spatially.
- a first spatial convolution 821 is executed, followed by a second spatial convolution 822, followed by a third spatial convolution 823, and so on for a number L/2 of spatially segregated neural network layers in each stack (L is described with reference to Figure 7).
- the number of spatial layers can be any practical number, which for context may range from a few to more than 20 in different embodiments.
- kernel weights are stored for example in a (1,6,6,3,L) structure since there are 3 input channels to this layer.
- the “6” in this structure is due to storing coefficients in the transformed Winograd domain (the kernel size is 3x3 in the spatial domain but expands in the transform domain).
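- the 3x3-to-6x6 expansion mentioned here is consistent with the widely used Winograd F(4x4, 3x3) kernel transform U = G g G^T; the source does not name the exact variant, so the following sketch is an assumption chosen because its transformed kernels are 6x6:

```python
import numpy as np

# Kernel-transform matrix G for Winograd F(4x4, 3x3) (Lavin & Gray, 2016).
# Assumption: the source only says 3x3 kernels expand in the transform
# domain to 6x6 coefficients, which matches this common variant.
G = np.array([
    [ 1 / 4,       0,     0],
    [-1 / 6,  -1 / 6, -1 / 6],
    [-1 / 6,   1 / 6, -1 / 6],
    [1 / 24,  1 / 12,  1 / 6],
    [1 / 24, -1 / 12,  1 / 6],
    [     0,       0,     1],
])

def winograd_kernel(g):
    """Transform a 3x3 spatial kernel g into its 6x6 Winograd-domain form
    U = G g G^T, matching the '6's in the (1,6,6,3,L) weight structure."""
    return G @ g @ G.T

g = np.arange(9, dtype=float).reshape(3, 3)  # toy 3x3 kernel
print(winograd_kernel(g).shape)              # -> (6, 6)
```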
- the outputs of the stack of spatial layers are provided to temporal layers, including convolution layers 824, 825 executed on the FPGA.
- Layers 824 and 825 can be convolution layers applying ID convolutions across cycles.
- the number of temporal layers can be any practical number, which for context may range from a few to more than 20 in different embodiments.
- the first temporal layer, the TEMP_CONV_0 layer 824, reduces the number of cycle channels from 5 to 3, as illustrated in Figure 4.
- the second temporal layer, layer 825 reduces the number of cycle channels from 3 to 1 as illustrated in Fig. 4, and reduces the number of feature maps to four outputs for each pixel, representing confidence in each base call.
- the output of the temporal layers is accumulated in output patches and delivered to the host CPU to apply for example, a softmax function 830, or other function to normalize the base call probabilities.
- Fig. 9 illustrates an alternative implementation showing a 10-input, six-output neural network which can be executed for a base calling operation.
- tile data for spatially aligned input patches from cycles 0 to 9 are applied to segregated stacks of spatial layers, such as stack 901 for cycle 9.
- the outputs of the segregated stacks are applied to an inverse hierarchical arrangement of temporal stacks 920, having outputs 935(2) through 935(7) providing base call classification data for subject cycles 2 through 7.
- Fig. 10 illustrates one implementation of the specialized architecture of the neural network-based base caller (e.g., Fig. 4 and Fig. 9) that is used to segregate processing of data for different sequencing cycles. The motivation for using the specialized architecture is described first.
- the neural network-based base caller processes data for a current sequencing cycle, one or more preceding sequencing cycles, and one or more successive sequencing cycles. Data for additional sequencing cycles provides sequence-specific context. The neural network-based base caller learns the sequence-specific context during training and uses it when base calling. Furthermore, data for pre and post sequencing cycles provides second order contribution of pre-phasing and phasing signals to the current sequencing cycle.
- the specialized architecture comprises spatial convolution layers that do not mix information between sequencing cycles and only mix information within a sequencing cycle.
- Spatial convolution layers use so-called “segregated convolutions” that operationalize the segregation by independently processing data for each of a plurality of sequencing cycles through a “dedicated, non-shared” sequence of convolutions. The segregated convolutions convolve over data and resulting feature maps of only a given sequencing cycle, i.e., intra-cycle, without convolving over data and resulting feature maps of any other sequencing cycle.
- the input data comprises (i) current data for a current (time t) sequencing cycle to be base called, (ii) previous data for a previous (time t-1) sequencing cycle, and (iii) next data for a next (time t+1) sequencing cycle.
- the specialized architecture then initiates three separate data processing pipelines (or convolution pipelines), namely, a current data processing pipeline, a previous data processing pipeline, and a next data processing pipeline.
- the current data processing pipeline receives as input the current data for the current (time t) sequencing cycle and independently processes it through a plurality of spatial convolution layers to produce a so-called “current spatially convolved representation” as the output of a final spatial convolution layer.
- the previous data processing pipeline receives as input the previous data for the previous (time t-1) sequencing cycle and independently processes it through the plurality of spatial convolution layers to produce a so-called “previous spatially convolved representation” as the output of the final spatial convolution layer.
- the next data processing pipeline receives as input the next data for the next (time t+1) sequencing cycle and independently processes it through the plurality of spatial convolution layers to produce a so- called “next spatially convolved representation” as the output of the final spatial convolution layer.
- the current, previous, and next processing pipelines are executed in parallel.
- the spatial convolution layers are part of a spatial convolutional network (or subnetwork) within the specialized architecture.
- the neural network-based base caller further comprises temporal convolution layers that mix information between sequencing cycles, i.e., inter-cycles.
- the temporal convolution layers receive their inputs from the spatial convolutional network and operate on the spatially convolved representations produced by the final spatial convolution layer for the respective data processing pipelines.
- the inter-cycle operability freedom of the temporal convolution layers emanates from the fact that the misalignment property, which exists in the image data fed as input to the spatial convolutional network, is purged out from the spatially convolved representations by the stack, or cascade, of segregated convolutions performed by the sequence of spatial convolution layers.
- Temporal convolution layers use so-called “combinatory convolutions” that groupwise convolve over input channels in successive inputs on a sliding window basis.
- the successive inputs are successive outputs produced by a previous spatial convolution layer or a previous temporal convolution layer.
- the temporal convolution layers are part of a temporal convolutional network (or subnetwork) within the specialized architecture.
- the temporal convolutional network receives its inputs from the spatial convolutional network.
- a first temporal convolution layer of the temporal convolutional network groupwise combines the spatially convolved representations between the sequencing cycles.
- subsequent temporal convolution layers of the temporal convolutional network combine successive outputs of previous temporal convolution layers.
- the output of the final temporal convolution layer is fed to an output layer that produces an output.
- the output is used to base call one or more clusters at one or more sequencing cycles.
- the specialized architecture processes information from a plurality of inputs in two stages.
- segregated convolutions are used to prevent mixing of information between the inputs.
- combinatory convolutions are used to mix information between the inputs.
- the results from the second stage are used to make a single inference for the plurality of inputs.
- the specialized architecture maps the plurality of inputs to the single inference.
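- a minimal, single-channel sketch of this two-stage pattern (segregated spatial convolutions per cycle, then a combinatory mix across cycles); the layer count, kernel values, and a temporal window spanning all three cycles at once are illustrative assumptions rather than the exact architecture:

```python
import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def conv2d_valid(img, k):
    """Plain 'valid' 2D filtering; each 3x3 pass trims 2 rows and 2 columns."""
    windows = sliding_window_view(img, k.shape)  # (H-2, W-2, 3, 3)
    return np.einsum("ijkl,kl->ij", windows, k)

def segregated_stack(cycle_image, kernels):
    """Stage 1: segregated convolutions - each cycle's data runs through the
    spatial layers independently; no information from other cycles mixes in."""
    out = cycle_image
    for k in kernels:
        out = conv2d_valid(out, k)
    return out

def combinatory_mix(per_cycle_features, temporal_weights):
    """Stage 2: combinatory convolution - corresponding pixels of the
    successive per-cycle outputs are combined across the cycle axis."""
    stacked = np.stack(per_cycle_features)                  # (cycles, H, W)
    return np.tensordot(temporal_weights, stacked, axes=1)  # (H, W)

rng = np.random.default_rng(1)
cycles = [rng.standard_normal((10, 10)) for _ in range(3)]  # t-1, t, t+1
kernels = [rng.standard_normal((3, 3)) for _ in range(2)]   # two spatial layers

# The same spatial kernels are applied to every cycle, but never across cycles.
features = [segregated_stack(img, kernels) for img in cycles]
output = combinatory_mix(features, np.array([0.25, 0.5, 0.25]))
print(output.shape)  # -> (6, 6): 10 px minus 2 px for each of two 3x3 layers
```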
- the single inference can comprise more than one prediction, such as a classification score (e.g., softmax or pre-softmax base-wise classification scores or base-wise regression scores) for each of the four bases (A, C, T, and G).
- the inputs have temporal ordering such that each input is generated at a different time step and has a plurality of input channels.
- the plurality of inputs can include the following three inputs: a current input generated by a current sequencing cycle at time step (t), a previous input generated by a previous sequencing cycle at time step (t-1), and a next input generated by a next sequencing cycle at time step (t+1).
- each input is respectively derived from the current, previous, and next inputs by one or more previous convolution layers and includes k feature maps.
- each input can include the following five input channels: a red image channel (in red), a red distance channel (in yellow), a green image channel (in green), a green distance channel (in purple), and a scaling channel (in blue).
- each input can include k feature maps produced by a previous convolution layer and each feature map is treated as an input channel.
- Fig. 11 depicts one implementation of segregated layers, each of which can include convolutions.
- Segregated convolutions process the plurality of inputs at once by applying a convolution filter to each input in parallel.
- the convolution filter combines input channels in a same input and does not combine input channels in different inputs.
- a same convolution filter is applied to each input in parallel.
- a different convolution filter is applied to each input in parallel.
- each spatial convolution layer comprises a bank of k convolution filters, each of which applies to each input in parallel.
- Fig. 12A depicts one implementation of combinatory layers, each of which can include convolutions.
- Fig. 12B depicts another implementation of the combinatory layers, each of which can include convolutions.
- Combinatory convolutions mix information between different inputs by grouping corresponding input channels of the different inputs and applying a convolution filter to each group. The grouping of the corresponding input channels and application of the convolution filter occurs on a sliding window basis.
- a window spans two or more successive input channels representing, for instance, outputs for two successive sequencing cycles. Since the window is a sliding window, most input channels are used in two or more windows.
- the different inputs originate from an output sequence produced by a preceding spatial or temporal convolution layer.
- the different inputs are arranged as successive outputs and therefore viewed by a next temporal convolution layer as successive inputs.
- the combinatory convolutions apply the convolution filter to groups of corresponding input channels in the successive inputs.
- the successive inputs have temporal ordering such that a current input is generated by a current sequencing cycle at time step (t), a previous input is generated by a previous sequencing cycle at time step (t-1), and a next input is generated by a next sequencing cycle at time step (t+1).
- each successive input is respectively derived from the current, previous, and next inputs by one or more previous convolution layers and includes k feature maps.
- each input can include the following five input channels: a red image channel (in red), a red distance channel (in yellow), a green image channel (in green), a green distance channel (in purple), and a scaling channel (in blue).
- each input can include k feature maps produced by a previous convolution layer and each feature map is treated as an input channel.
- each temporal convolution layer comprises a bank of k convolution filters, each of which applies to the successive inputs on a sliding window basis.
- Fig. 13 illustrates layers of a neural network, and corresponding kernels and weights to configure the neural network for a base calling operation. For example, L number of layers 1302a, 1302b, ..., 1302L are illustrated in Fig. 13. Each layer 1302 of Fig. 13 corresponds to a layer of the neural network, such as any of the neural networks illustrated in Figs. 4, 8, and/or 9.
- layer 1302b of Fig. 13 may correspond to a layer implementing the second spatial convolution 822 of Fig. 8, and so on, and layer 1302L may correspond to a layer implementing the temporal convolution layer 825 of Fig. 8.
- each layer 1302 of Fig. 13 comprises a plurality of filters.
- filters 1304_1a, 1304_2a, ..., 1304_Na of the layer 1302a are illustrated in Fig. 13.
- the filters 1304 of Fig. 13 correspond to the various filters illustrated in Fig. 11, for example.
- various other layers illustrated in Fig. 13 also include corresponding filters.
- each filter 1304 comprises a corresponding plurality of kernels 1306 (e.g., see Fig. 11 for example kernels in a filter).
- For example, kernels 1306_1a1, 1306_1a2, and so on, of the filter 1304_1a are illustrated in Fig. 13.
- a kernel can be used in a convolution operation, or another appropriate operation of the neural network that is to be used for base calling.
- Each kernel comprises one or more matrices (such as a square matrix), such as a 3x3 matrix or a 4x4 matrix (or a matrix of another appropriate dimension) comprising weight coefficients or weights.
- weights W1, W2, ..., W9 of kernels 1306_1a1 and 1306_NL1 are illustrated in Fig. 13.
- although Fig. 13 does not illustrate biases, one or more kernels, one or more filters, and/or one or more layers can also be associated with corresponding biases.
- the weights and biases are loaded in the reconfigurable processor 150 (see Fig. 1) along with a topology of the neural network (e.g., from the memory 104 and/or 160), and the neural network topology loaded within the reconfigurable processor 150 is configured with the weights and biases.
- the configured neural network is used to process cluster data from flow cells, to generate base call classification, as will be discussed in further detail herein.
- numbers are represented using floating point arithmetic.
- in floating point arithmetic, a floating-point number is represented with a fixed number of significant digits (the significand) and scaled using an exponent in some fixed base, where the base for the scaling is normally two, ten, sixteen, or the like.
- a number that can be represented using floating point arithmetic is of the following form: significand × base^exponent, where the significand is an integer, the base is an integer greater than or equal to two, and the exponent is also an integer. For example, 1.2345 is represented as 12345 × 10^-4, where 12345 is the significand, 10 is the base, and (-4) is the exponent.
- floating point refers to the fact that a number’s radix point (i.e., the decimal point, or, more commonly in computers, the binary point) can “float” - that is, it can be placed anywhere relative to the significant digits of the number. This position is indicated as the exponent component.
- the weights and biases, which are to be uploaded in the configurable processor 150 to configure the neural network for base calling, are not represented using floating-point arithmetic.
- the configurable processor 150 may be better equipped to handle weights and biases using fixed point arithmetic, and not floating-point arithmetic.
- a fixed-point number representation is a real data type for a number that has a fixed number of digits after (and sometimes also before) the radix point, where the radix point (also sometimes referred to as the decimal point in English decimal notation) is the symbol used in numerical representations to separate the integer part of a number (to the left of the radix point) from its fractional part (to the right of the radix point).
- various parameters of the neural network are, thus, represented using fixed point arithmetic, prior to being uploaded to the configurable processor 150.
- 8 binary bits are used to represent individual weights.
- in an example, an MSB (Most Significant Bit) of the 8 bits can be a sign bit (e.g., if the number represents a signed value), and the remaining 7 bits can include zero or more integer bits and zero or more fractional bits, where a sum of the integer and fractional bits is equal to 7.
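- what quantizing a value to such a format involves can be sketched as follows (an illustrative helper, not the patented procedure): round to a multiple of 2^-b, then saturate to the representable two's-complement range. The saturated S0.7 result for 1.5 matches the 0.9921875 example discussed below:

```python
def quantize_sab(value, a, b):
    """Quantize `value` to the signed 8-bit fixed-point format Sa.b
    (1 sign bit, a integer bits, b fractional bits, with a + b = 7).
    Returns (twos_complement_code, dequantized_value)."""
    assert a + b == 7, "8-bit format: sign bit plus 7 integer/fractional bits"
    scale = 1 << b                                    # resolution is 2**-b
    lo, hi = -(2 ** a) * scale, (2 ** a) * scale - 1  # representable code range
    code = max(lo, min(hi, round(value * scale)))     # round, then saturate
    return code, code / scale

print(quantize_sab(0.7, a=0, b=7))  # S0.7 -> (90, 0.703125)
print(quantize_sab(1.5, a=1, b=6))  # S1.6 -> (96, 1.5)
print(quantize_sab(1.5, a=0, b=7))  # S0.7 saturates -> (127, 0.9921875)
```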
- Fig. 14 illustrates a table 1400 depicting example 8-bit binary fixed-point formats, at least some of which can be used to represent parameters (e.g., weights and biases) of a neural network for base calling.
- the table 1400 includes eight rows, each representing a corresponding signed binary fixed-point format.
- the fixed-point formats depicted in table 1400 are assumed to be signed fixed-point formats.
- a MSB is reserved for a sign bit. For example, an MSB of 1 indicates that the number is negative and an MSB of 0 indicates that the number is positive, while in another example an MSB of 1 indicates that the number is positive and an MSB of 0 indicates that the number is negative.
- although the table 1400 and various other embodiments, examples, and figures of this disclosure assume signed binary fixed-point formats (e.g., where the MSB is reserved for the sign bit), this is not meant to limit the scope of this disclosure, and the teachings of this disclosure are also applicable to unsigned binary fixed-point formats (e.g., where the MSB is not reserved for the sign bit).
- one or more unsigned binary fixed-point formats may also be used for representing various parameters (such as weights and biases) of a neural network for base calling.
- the seven bits include (i) zero or more integer bits and (ii) zero or more fractional bits.
- the sign bit in the various formats is represented using a checkered box, the zero or more integer bits are represented using diagonal boxes, and the zero or more fractional bits are represented using dotted boxes.
- a black circle represents the radix point or decimal separator between the integer and fractional bits.
- the first column of the table 1400 graphically illustrates the various fixed-point formats, and the second column of the table 1400 indicates the names of the fixed-point formats.
- the format name is generally indicated as Sa.b, in which “S” indicates that this is a signed number format where the MSB is reserved for sign bit.
- the term “a” in a Sa.b format indicates a number of integer bits in the format (where “a” can range from 0 to 7), and the term “b” in a Sa.b format indicates a number of fractional bits in the format (where “b” can correspondingly range from 7 to 0). In the example where the total number of bits is assumed to be 8, the sum of “a” and “b” is always equal to 7 (as the sign bit consumes one bit, the remaining seven bits are shared between the integer bits and the fractional bits).
- a S0.7 fixed-point format in which “S” indicates that this is a signed number format where the MSB is reserved for sign bit.
- the number “0” in the S0.7 format indicates that there are zero integer bits, and the number “7” in the S0.7 format indicates that there are seven fractional bits in this format.
- the radix point is immediately after the sign bit, thereby indicating that there are no integer bits in this format.
- a S1.6 fixed-point format in which “S” indicates that this is a signed number format where the MSB is reserved for sign bit. Furthermore, there is a single integer bit, and six fractional bits in this format. Thus, there are a total of 7 integer and fractional bits, and one sign bit in this 8-bit fixed point format.
- the radix point is after the sign bit and after the one integer bit.
- a S2.5 fixed-point format in which “S” indicates that this is a signed number format where the MSB is reserved for sign bit. Furthermore, there are two integer bits, and five fractional bits in this format. Thus, there are a total of 7 integer and fractional bits, and one sign bit in this 8-bit fixed point format.
- the radix point is after the sign bit and after two integer bits.
- a S3.4 fixed-point format, in which “S” indicates that this is a signed number format where the MSB is reserved for sign bit. Furthermore, there are three integer bits, and four fractional bits in this format. Thus, there are a total of 7 integer and fractional bits, and one sign bit in this 8-bit fixed point format.
- the radix point is after the sign bit and after three integer bits.
- any given fixed-point format can be stored in two’s complement format.
- for a given signed binary fixed-point format Sa.b (e.g., where there are a number of integer bits and b number of fractional bits), a range of the format is given by: [-2^a, 2^a - 2^-b] (Equation 1)
- that is, a maximum value that can be represented by the format Sa.b is (2^a - 2^-b), and a minimum value that can be represented by the format Sa.b is (-2^a).
- a resolution of a given signed binary fixed-point format Sa.b (where the resolution is a smallest step or difference between two consecutive numbers, and is represented by a value of the LSB) is given by: 2^-b (Equation 2)
- Equations 1 and 2 are derived from “Fixed Point Representation & Fractional Math,” by Oberstar, Erick L. (August 30, 2007), retrieved from http://www.superkits.net/whitepapers/Fixed%20Point%20Representation%20&%20Fractional%20Math.pdf on March 8, 2021, which is incorporated by reference, as if fully set forth herein.
- Third and fourth columns of the table 1400 provide a range (e.g, a lower and an upper range, respectively) of the various formats, and a fifth column of the table 1400 provides a resolution of the various formats.
- the corresponding resolution represents a maximum possible quantization error, when a parameter is quantized in accordance with the format (e.g., assuming that the parameter fits within the range of the format).
- for example, for the S0.7 format, the maximum possible quantization error is 0.0078125.
- the range of the S1.6 format is (-2, 1.984375), which is wider than the range (-1, 0.9921875) of the S0.7 format.
- the resolution of the S1.6 format is 0.015625, which is coarser (i.e., numerically larger) than the resolution 0.0078125 of the S0.7 format.
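- the range and resolution columns of table 1400 follow directly from Equations 1 and 2, as this short sketch reproduces:

```python
def sab_range_and_resolution(a, b):
    """Equation 1: range [-2**a, 2**a - 2**-b]; Equation 2: resolution 2**-b."""
    return -(2.0 ** a), 2.0 ** a - 2.0 ** -b, 2.0 ** -b

for a in range(8):  # the eight signed 8-bit formats S0.7 .. S7.0
    lo, hi, res = sab_range_and_resolution(a, 7 - a)
    print(f"S{a}.{7 - a}: range [{lo}, {hi}], resolution {res}")
# S0.7: range [-1.0, 0.9921875], resolution 0.0078125
# S1.6: range [-2.0, 1.984375], resolution 0.015625
# ...
```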
- consider each of two neural network parameters, such as each of two example weights Wex1 and Wex2, where each weight is to be quantized using a corresponding appropriate fixed-point format illustrated in table 1400.
- the formats illustrated in Fig. 14 are also referred to as quantization formats.
- the weight Wex1 of the neural network for base calling has a decimal value of 0.7
- the weight Wex2 of the neural network for base calling has a decimal value of 1.5.
- the weight Wex1 falls within the range of each of the eight quantization formats illustrated in Fig. 14, and accordingly, can be quantized in accordance with any of the formats. However, if the weight Wex1 is quantized with a format that has a relatively higher number of integer bits, a corresponding quantization error may be large. Merely as an example, if the weight Wex1 having the value of 0.7 is quantized using the S7.0 format, the value of the weight in this format would be rounded off to 1, with a relatively large quantization error of 0.3. In contrast, quantizing the weight Wex1 in the S0.7 format may result in the least quantization error (e.g., as the resolution for this format is the lowest). Accordingly, the weight Wex1 having the example value of 0.7 can be optimally quantized in accordance with the quantization format S0.7.
- the weight Wex2 having the example value of 1.5 falls within the range of formats S1.6, S2.5, S3.4, and so on, but is outside the range of the format S0.7 (note that the allowed range of S0.7 is -1 to 0.9921875).
- if the weight Wex2 having the example value of 1.5 is quantized in accordance with format S0.7, the weight Wex2 would have to be saturated to 0.9921875. Accordingly, a quantization noise for quantizing the weight Wex2 to format S0.7 would be (1.5 - 0.9921875), or 0.5078125.
- if the weight Wex2 is quantized in accordance with the format S1.6, the weight Wex2 would fit within the allowed range for this format and the maximum quantization error would be 0.015625. Furthermore, if the weight Wex2 is quantized in accordance with the format S2.5, the weight Wex2 would fit within the allowed range for this format, but the quantization error might be as high as 0.03125. Accordingly, in an example, the weight Wex2 having the example value of 1.5 can be optimally quantized in accordance with the quantization format S1.6 (e.g., the weight Wex2 fits within the range of format S1.6, and the format S1.6 has the minimum possible quantization error among all formats having ranges that fit this weight).
- an appropriate (e.g., optimal) quantization format to quantize a weight Wex can be selected based on the ranges and the resolutions depicted within the table 1400. For example, assume the example weight Wex has an absolute value of 1.5.
- a single integer bit is sufficient to cover this value (as discussed below, 2^1 is the power of 2 that is higher than and nearest to 1.5), so the quantization format selected for this value of Wex is S1.(8-1-1), or S1.6.
- note that 2^2, or 4, is also a power of 2 that is higher than 1.5 - but this is not the power of 2 that is higher than and nearest to 1.5. Rather, 2^1, or 2, is the power of 2 that is higher than and nearest to 1.5.
- equation 3 can be used to select an appropriate quantization format for a neural network parameter (such as a weight or a bias).
- a lookup table can also be used to select an appropriate quantization format for the neural network parameter, where the lookup table can be generated from the table 1400 of Fig. 14.
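- equation 3 itself is not reproduced in this excerpt; the sketch below reconstructs the selection rule from the worked example (the nearest power of two at or above the absolute value gives the integer-bit count, and the remaining bits go to the fraction). The function name is illustrative; note that this rule reproduces the absolute-value LUT 1550 of Fig. 15C (e.g., 7.99 maps to S3.4) rather than the exact-range LUT 1500:

```python
import math

def select_format(param, total_bits=8):
    """Select Sa.b for a parameter: a is the number of integer bits such
    that 2**a is the power of two nearest to and at least |param|, and
    b = total_bits - 1 - a (one bit is the sign bit)."""
    a = max(0, math.ceil(math.log2(abs(param)))) if param else 0
    return a, total_bits - 1 - a

for w in (0.7, 1.5, -1.29, 7.99):
    a, b = select_format(w)
    print(f"{w} -> S{a}.{b}")
# 0.7 -> S0.7, 1.5 -> S1.6, -1.29 -> S1.6, 7.99 -> S3.4
```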
- Fig. 15A illustrates a Look-Up Table (LUT) 1500 usable to select a quantization format for a corresponding neural network parameter
- Fig. 15B illustrates a scale that represents at least some information of the LUT 1500 of Fig. 15A.
- the neural network parameter can be a weight or a bias that is to be used to configure the neural network for base calling.
- the sixth column of the LUT 1500 of Fig. 15A represents ranges corresponding to the formats depicted in the first and second columns of the LUT 1500.
- LUT 1500 is generated from the table 1400 of Fig. 14, by adding the sixth column to the table 1400.
- the format S0.7 is to be selected if the neural network parameter is within the range -1 to 0.9921875.
- the format S1.6 is to be selected if the neural network parameter is within either the range -2 to -1, or the range 0.9921875 to 1.984375, as also depicted in Fig. 15B.
- the format S2.5 is to be selected if the neural network parameter is within either of the range -4 to -2, or within the range 1.984375 to 3.96875, as also depicted in Fig. 15B.
- Other ranges for other formats are also illustrated in Figs. 15A and 15B. Note that Fig. 15B is not drawn to scale, and only a few ranges are illustrated in Fig. 15B for purposes of illustrative clarity.
- Fig. 15C illustrates another LUT 1550 usable to select a quantization format for a corresponding absolute value of a neural network parameter.
- while the sixth column of the LUT 1500 of Fig. 15A included both positive and negative ranges, the sixth column of the LUT 1550 of Fig. 15C includes only positive ranges, and an absolute value of the neural network parameter is assumed when using the LUT 1550.
- the sixth column of LUT 1550 is an approximation of the sixth column of the LUT 1500.
- for example, the actual optimal range for the format S3.4 is from -8 to -4 and from 3.96875 to 7.9375, as depicted in the LUT 1500 of Fig. 15A, which is approximated to an absolute value range of 4 to 8 in the LUT 1550 of Fig. 15C.
- thus, a neural network parameter having a value of 7.99 would be assigned the format S4.3 in accordance with LUT 1500 of Fig. 15A, but would be assigned the format S3.4 in accordance with LUT 1550 of Fig. 15C.
- the choice of whether to use the LUT 1500 or 1550 is implementation specific. For example, the slight loss in accuracy in using LUT 1550 over LUT 1500 is compensated by the ease of use of the LUT 1550. In an example, where high accuracy is desired, the LUT 1500 of Fig. 15A may be used; and where ease of use is desired, the LUT 1550 of Fig. 15C may be used.
- the sixth column of the LUT 1550 of Fig. 15C can also be derived using the above discussed equation 3.
- Fig. 15D illustrates yet another LUT 1570 usable to select a quantization format for a corresponding absolute value of a neural network parameter.
- as with the LUT 1550, while the sixth column of the LUT 1500 of Fig. 15A included both positive and negative ranges, the sixth column of the LUT 1570 of Fig. 15D includes only positive ranges, and an absolute value of the neural network parameter is assumed when using the LUT 1570.
- a range in the sixth column for a specific format is based on an actual upper range of the format depicted in the fourth column of the LUT 1570.
- equation 3 and/or any of the LUTs 1500, 1550, or 1570 are to be used to select an appropriate (e.g., optimal) quantization format for a given neural network parameter (e.g., a weight or a bias to be used to configure the neural network), and the choice of equation 3 or any of the LUTs is implementation specific.
- Fig. 16 illustrates a flowchart depicting a method 1600 for grouping of neural network parameters, selecting appropriate (e.g., optimal) quantization formats for individual groups, quantizing the neural network parameters of each group in accordance with the corresponding selected quantization format, and using the quantized neural network parameters to configure a neural network topology for base calling.
- Operations 1604 to 1620 of the method 1600 can be performed by the CPU 102 (see Fig. 1) and/or by a remote computing machine (e.g., hosted by a deep learning cloud platform) that is remotely located relative to the sequencing machine 100.
- solely the CPU 102 can perform the operations 1604-1620.
- solely the remote computing machine can perform the operations 1604-1620.
- the CPU 102 and the remote computing machine can perform the operations 1604-1620 (e.g., some of the operations are performed by the remote computing machine, while some other operations are performed by the CPU 102).
- in an example, operations 1604-1616 (e.g., where quantization formats are selected, and corresponding control information is generated) can be performed by the remote computing machine, while operation 1620 (e.g., where the actual quantization is performed) can be performed by the CPU 102 in the local machine.
- in another example, operations 1604-1608 (e.g., grouping of parameters) can be performed by the remote computing machine, while operations 1612-1620 can be performed by the CPU 102 in the local machine. Any other combination of division of operations between the remote and the local machine is also possible.
- one or more of the operations 1604-1620 are performed by quantization logic being executed by the CPU 102 or by the remote computing machine.
- a plurality of neural network parameters are received, where the parameters are usable to configure a neural network topology for base calling.
- the parameters include weights and biases, as discussed.
- the parameters can be generated and tuned by training a neural network for base calling. Training of a neural network for base calling, to generate neural network parameters such as weights and biases, is discussed in further detail in U.S. Nonprovisional Patent Application No. 16/825,987, titled “TRAINING DATA GENERATION FOR ARTIFICIAL INTELLIGENCE-BASED SEQUENCING,” filed 20 March 2020 (Attorney Docket No. ILLM 1008-16/IP-1693-US), which is incorporated by reference as if fully set forth herein.
- the parameters can be either received by the CPU 102 within the sequencing machine 100, or by the remote computing machine.
- the CPU 102 can access a memory storing the parameters, such as the memory 104 (or memory 160).
- the parameters can be received by the CPU 102 from the remote computing machine over a network, such as the Internet.
- the method 1600 then proceeds from 1604 to 1608, where the plurality of parameters is grouped in a plurality of groups. Any appropriate criteria may be used to group the parameters.
- Fig. 17 illustrates an example layer-specific grouping of neural network parameters
- Fig. 18 illustrates an example filter- specific grouping of neural network parameters
- Fig. 19 illustrates an example kernel- specific grouping of neural network parameters.
- although Figs. 17-19 illustrate grouping of weights (i.e., specifically discuss weights as an example of neural network parameters), the teachings of these figures can be applied to other types of neural network parameters as well, such as biases.
- Referring to each of Figs. 17-19, the topology of the neural network for base calling comprises a plurality of layers, such as layers 1302a, 1302b, ..., 1302L, as also discussed in further detail with respect to Fig. 13.
- Each layer 1302 comprises a plurality of filters 1304, and each filter 1304 comprises a plurality of kernels 1306, and each kernel comprises a plurality of weights (e.g., the figures illustrate an example kernel 1306_1a1 comprising weights W1, ..., W9).
- each dotted oval shape encompasses components including a corresponding group of weights.
- weights of the neural network topology are grouped in accordance with corresponding layers in which the weights are included.
- weights of layer 1302a are grouped in a first weight group 1710a
- weights of layer 1302b are grouped in a second weight group 1710b
- weights of layer 1302L are grouped in a Lth weight group 1710L, and so on.
- weights of each kernel included in each filter of the layer 1302a are grouped in the first weight group 1710a
- weights of each kernel included in each filter of the layer 1302b are grouped in the second weight group 1710b, and so on.
- weights of the neural network topology are grouped in accordance with corresponding filters in which the weights are included.
- weights of various kernels included in a filter 1304_1a of the layer 1302a are grouped in a corresponding weight group 1810_1a
- weights of various kernels included in a filter 1304_2a of the layer 1302a are grouped in a corresponding weight group 1810_2a
- weights of various kernels included in a filter 1304_Na of the layer 1302a are grouped in a corresponding weight group 1810_Na
- weights of various kernels included in a filter 1304_1L of the layer 1302L are grouped in a corresponding weight group 1810_1L
- weights of kernels of each filter 1304 are grouped in a corresponding specific weight group.
- weights of the neural network topology are grouped in accordance with corresponding kernels in which the weights are included.
- weights of a kernel 1306_1a1 included in a filter 1304_1a of the layer 1302a are grouped in a corresponding weight group 1910_1a1
- weights of a kernel 1306_1ak included in the filter 1304_1a of the layer 1302a are grouped in a corresponding weight group 1910_1ak
- weights of a kernel 1306_NL1 included in a filter 1304_NL of the layer 1302L are grouped in a corresponding weight group 1910_NL1, and so on.
- weights of each kernel are grouped in a corresponding weight group.
- Figs. 17-19 illustrate example ways to group the weights. There may be other criteria for grouping the weights. For example, weights to process data from a first channel can be grouped in a first group, weights to process data from a second channel can be grouped in a second group, and so on.
- in Fig. 17, where a layer-specific weight grouping is done, each weight group includes a relatively higher number of weights. In contrast, in Fig. 19, where a kernel-specific weight grouping is done, each weight group includes a relatively smaller number of weights. In Fig. 18, where a filter-specific weight grouping is done, each weight group includes a relatively moderate number of weights. Thus, referring again to 1608 of Fig. 16, the manner in which the grouping is done can dictate a number of parameters included in each group, and the grouping criteria can be implementation specific. Further discussion on grouping criteria will be presented herein later in turn.
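- the three grouping granularities can be sketched as follows; the nested data layout and names are illustrative assumptions, and the printed counts reflect the trade-off noted above (coarser groups need less control information, finer groups give lower quantization error):

```python
def group_weights(network, granularity):
    """Group kernel weights by 'layer', 'filter', or 'kernel'. The nested
    dict layout (layer -> filter -> kernel -> weights) is an assumption."""
    groups = {}
    for layer, filters in network.items():
        for filt, kernels in filters.items():
            for kern, weights in kernels.items():
                key = {"layer": layer,
                       "filter": (layer, filt),
                       "kernel": (layer, filt, kern)}[granularity]
                groups.setdefault(key, []).extend(weights)
    return groups

net = {"layer_a": {"filter_1a": {"kernel_1a1": [0.89, -0.36, 0.24],
                                 "kernel_1a2": [0.25, 0.29, -0.29]}}}
print(len(group_weights(net, "layer")))   # 1 group: coarse, least control data
print(len(group_weights(net, "kernel")))  # 2 groups: fine, lower quant error
```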
- Fig. 20 illustrates a kernel-specific grouping of neural network parameters (e.g., similar to Fig. 19), where a parameter that has a maximum absolute value among all parameters within a corresponding group is identified.
- example weights of weight groups 1910_1a1 and 1910_NL1 are illustrated.
- the weight group 1910_1a1 comprises weights of the kernel 1306_1a1, and the weight group 1910_NL1 comprises weights of the kernel 1306_NL1.
- the example weights of the example weight group 1910_1a1 are 0.89, -0.36, 0.24, 0.25, 0.29, -0.29, 0.01, and so on.
- the example weights of the example weight group 1910_NL1 are 0.93, 0.86, and so on.
- weights of the two weight groups are generally between -1 and +1. However, there is one weight in the weight group 1910_NL1 having value -1.29, which may be considered an outlier.
- a maximum absolute value among absolute values of all parameters within the group 1910_1a1 is identified as 0.97, and a maximum absolute value among absolute values of all parameters within the group 1910_NL1 is identified as 1.29.
- the method 1600 of Fig. 16 then proceeds from 1612 to 1616, where for each group, (i) a corresponding quantization format is selected (e.g., using equation 3 and/or any of the LUTs 1500, 1550, or 1570), based on the maximum absolute value for the group, and (ii) corresponding control information identifying the selected quantization format is generated.
- for example, a plurality of available quantization formats (e.g., which may be stored in a memory that is either coupled to the CPU 102 or the remote computing machine) are accessible, and the quantization logic (e.g., being executed either by the CPU 102 or by the remote computing machine) selects, for each group, a corresponding quantization format from the plurality of available quantization formats, based on the maximum absolute value for the group.
- for the group 1910_1a1, the quantization format is selected based on the maximum absolute value for the group, which is 0.97.
- any of equation 3, or LUTs 1500, 1550, or 1570 can be used to select a quantization format for the parameters of the group 1910_1a1.
- the quantization format corresponding to the parameter value 0.97 is the S0.7 format, and accordingly, quantization format S0.7 is selected for all the parameters of the group 1910_1a1.
- similarly, for the group 1910_NL1, the quantization format is selected based on the maximum absolute value for the group, which is 1.29.
- any of equation 3, or LUTs 1500, 1550, or 1570 can be used to select a quantization format for the parameters of the group 1910_NL1.
- the quantization format corresponding to the parameter value 1.29 is the S1.6 format, and accordingly, quantization format S1.6 is selected for all the parameters of the group 1910_NL1.
- control information identifying the selected quantization format is generated. For example, control bits 1920_1a1 are generated that identify the selected quantization format S0.7 for the group 1910_1a1. Similarly, control bits 1920_NL1 are generated that identify the selected quantization format S1.6 for the group 1910_NL1.
- quantization formats are selected for various other parameter groups, and corresponding control information identifying the selected quantization formats are also generated.
- the method 1600 of Fig. 16 then proceeds from 1616 to 1620, where for each group, individual parameters within the group are quantized in accordance with the selected quantization format.
- the weights in the weight group 1910_1a1 may be initially (e.g., prior to operation 1620 of Fig. 16) in a prequantized number format, such as a floating-point format or another appropriate number format. That is, prior to operation 1620, the weights are prequantized weights (i.e., the weights have not been quantized yet, and the prequantized weights are in an appropriate prequantized number format).
- the weights of the weight group 1910_1a1 are quantized in accordance with the quantization format S0.7 selected for that group.
- the weights in the weight group 1910_NL1 may be initially in a floating point format (or another appropriate number format).
- the weights of the weight group 1910_NL1 are quantized in accordance with the quantization format S1.6 selected for that group.
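- as a sketch of operations 1612-1620 end to end, using the example weights listed above (the helper name and the reconstructed selection rule are illustrative assumptions):

```python
import math

def quantize_group(weights, total_bits=8):
    """Operations 1612-1620 for one group, sketched: take the group's max
    absolute value, select one shared Sa.b format from it, then quantize
    every weight with that format. Returns (control_a, codes), where
    control_a identifies the selected format for the whole group."""
    peak = max(abs(w) for w in weights)
    a = max(0, math.ceil(math.log2(peak))) if peak else 0  # integer bits
    b = total_bits - 1 - a                                 # fractional bits
    scale = 1 << b
    lo, hi = -(2 ** a) * scale, (2 ** a) * scale - 1
    codes = [max(lo, min(hi, round(w * scale))) for w in weights]
    return a, codes

group_1a1 = [0.89, -0.36, 0.24, 0.25, 0.29, -0.29, 0.01, 0.97]
group_NL1 = [0.93, 0.86, -1.29]
print(quantize_group(group_1a1))  # a=0 -> S0.7 shared by the whole group
print(quantize_group(group_NL1))  # a=1 -> S1.6; the outlier -1.29 drives it
```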
- although Fig. 20 is used as an example to illustrate various operations of the method 1600 (such as operations 1616 and 1620), the specific examples illustrated in Fig. 20 are not intended to limit the scope of this disclosure.
- although Fig. 20 illustrates a kernel-specific grouping, another appropriate grouping may be used, such as a layer-specific grouping (see Fig. 17), a filter-specific grouping (see Fig. 18), or grouping based on another appropriate criterion (e.g., channel-specific grouping).
- similarly, although Fig. 20 illustrates weights, the grouping and quantization are not limited to neural network weights, and the teachings of this disclosure are applicable to other types of neural network parameters, such as biases.
- in an example, one or more configuration files are generated (e.g., either by the CPU 102, or by the remote computing machine) after operation 1620, where a configuration file includes the quantized parameters and/or the corresponding control information of one or more groups.
- a runtime logic or runtime program (discussed herein earlier) being executed in the CPU 102 generates the configuration file(s) that includes the quantized parameters of individual groups, as well as the control information for the individual groups.
- a quantization format selected for a group is applicable to all parameters of the group. For example, for the group 1910_NL1, ideally, the format S1.6 is optimal for the weight -1.29, and the format S0.7 is optimal for the remaining weights. However, if individual weights of the group 1910_NL1 were to be assigned corresponding different quantization formats, then each weight of the group would have to have corresponding control bits, which would increase the control signal overhead. Accordingly, instead of selecting individual formats for individual weights in a group, the entire group is assigned a common quantization format, thereby decreasing the control signal overhead. All weights in the group 1910_NL1 are to be quantized in accordance with the selected quantization format for the group 1910_NL1.
- a decision on a size of the group may be based on a desired level of control signal overhead and/or desired level of quantization error, and may be implementation specific.
- the method 1600 proceeds to 1640. It may be noted that in case the operations 1604-1620 are performed remotely by a remote computing machine, the quantized weights at the end of operation 1620 are transmitted from the remote computing machine to the sequencing machine 100, and stored within the memory 103 and/or memory 160.
- Operation 1640 is performed by the CPU 102 and/or the data flow logic 151 (see Fig. 1).
- at 1640, the neural network topology (e.g., that is to be used for base calling) and the quantized parameters are loaded to the configurable processor 150, along with the control information.
- loading the quantized parameters and the control information involves loading the configuration file(s) that include the quantized parameters and the associated control information.
- the method 1600 then proceeds from 1640 to 1644, where each loaded parameter is interpreted in accordance with the corresponding control information (e.g., within the configurable processor 150), and the neural network is configured using the interpreted parameters.
- each quantized parameter is a corresponding 8-bit binary number (e.g., assuming that 8-bit quantization formats are used), and without the control information, the 8-bit binary numbers are meaningless.
- 11111111 quantized with an S0.7 format is different from 11111111 quantized with an S1.6 format (e.g., the radix or decimal points for these two different formats are placed at different corresponding positions).
- a mere 11111111, without accompanying control information indicating the format used for the associated quantization, would not convey any meaningful information.
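- as a worked illustration of this point (a sketch, not the disclosure's own code; two's-complement encoding is assumed, consistent with the S0.7 range of -1 to +0.9921875 in table 1400), the same stored byte decodes to different values depending on the format indicated by the control information:

```python
def as_fixed(raw_byte, frac_bits):
    """Interpret an 8-bit two's-complement byte with the radix point placed
    frac_bits positions from the right."""
    signed = raw_byte - 256 if raw_byte >= 128 else raw_byte
    return signed * 2.0 ** -frac_bits

raw = 0b11111111
print(as_fixed(raw, 7))   # read as S0.7 -> -0.0078125
print(as_fixed(raw, 6))   # read as S1.6 -> -0.015625
```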
- the corresponding control information identifying the corresponding selected quantization format is also loaded in the configurable processor 150; and at 1644, the parameters are interpreted using the corresponding control information.
- the various 8-bit numbers representing the weights of the weight group 1910_1a1 were quantized using the format S0.7, and the corresponding control bits 1920_1a1 identify the format S0.7. Accordingly, at 1644, the 8-bit weights of the weight group 1910_1a1 are interpreted in accordance with the format S0.7. For example, when processing the 8-bit weights of the weight group 1910_1a1, the configurable processor 150 knows that for each 8-bit weight in this group, the MSB is the sign bit, the radix or decimal point is immediately after the sign bit, and the remaining 7 bits are fractional bits (e.g., see first row of Fig. 15A) - the configurable processor 150 interprets and processes the 8-bit weights of this group accordingly.
- similarly, the 8-bit weights of the weight group 1910_NL1 were quantized using the format S1.6, and the configurable processor 150 knows that for each 8-bit weight in this group, the MSB is the sign bit, followed by an integer bit, then the radix or decimal point, and the remaining 6 bits are fractional bits (e.g., see second row of Fig. 15A) - the configurable processor 150 interprets and processes the 8-bit weights of this group accordingly.
- the configurable processor 150 shifts the radix point of the weights, based on the associated control information. For example, when processing weights of the group 1910_1a1 that is quantized in accordance with the format S0.7, the configurable processor 150 shifts the radix point to immediately after the sign bit (which is the MSB). When processing weights of the group 1910_NL1 that is quantized in accordance with the format S1.6, the configurable processor 150 shifts the radix point between the second and third bits (e.g., the radix point is now after the MSB or sign bit, and a single integer bit). Thus, the configurable processor 150 shifts the radix point within a weight, depending on the quantization format used for the weight.
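- this radix-point shifting can be sketched as a de-quantization step driven by the control information; the encoding assumed here (control information directly supplying the number of integer bits) is hypothetical, and is only one way the control bits such as 1920_1a1 could be interpreted:

```python
import numpy as np

def interpret_group(raw_int8, int_bits, total_bits=8):
    """Recover real-valued weights from a group's raw bytes plus its control
    information; the radix point sits immediately after the integer bits."""
    frac_bits = total_bits - 1 - int_bits
    return np.asarray(raw_int8, dtype=np.int32) * 2.0 ** -frac_bits

# S0.7 group (int_bits=0): radix point immediately after the sign bit
print(interpret_group(np.int8([127, -127, 64]), int_bits=0))  # [ 0.9921875 -0.9921875  0.5 ]
# S1.6 group (int_bits=1): radix point after the sign bit plus one integer bit
print(interpret_group(np.int8([-83, 64, -1]), int_bits=1))    # [-1.296875  1.  -0.015625]
```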
- the interpreted parameters (such as the interpreted weights and biases) are used by the configurable processor 150 to configure the neural network topology (e.g., that was also loaded at 1640).
- the method 1600 then proceeds from 1644 to 1648, where the configured neural network is applied on sensor data generated by various sensors of the sequencing machine 100, to produce base call classification data.
- Application of a configured neural network on sensor data generated by various sensors of the sequencing machine 100, to produce base call classification data is discussed in further detail in co-pending U.S. Nonprovisional Patent Application No. 16/826,126, titled “ARTIFICIAL INTELLIGENCE-BASED BASE CALLING,” filed 20 March 2020 (Attorney Docket No. ILLM 1008-18/IP- 1744-US), which is incorporated by reference as if fully set forth herein.
- in some examples, a parameter, such as an outlier, may result in selection of a next higher quantization format (e.g., having a higher number of integer bits). For example, because of the single outlier weight -1.29, the weight group 1910_NL1 is assigned the quantization format S1.6. If that "-1.29" outlier weight was not considered or was ignored when selecting the quantization format, then the group could have been assigned the quantization format S0.7.
- the entire weight group is assigned the quantization format S1.6, which increases quantization noise for individual weights of that weight group.
- it may be preferable to disregard such an outlier when selecting the quantization format (e.g., to reduce the overall quantization noise for all weights of the group).
- in such a case, the quantization format S0.7 may be selected for the weight group 1910_NL1.
- the weight -1.29 is then saturated to -1, which is the minimum allowed value for this quantization format. This results in higher quantization noise for the outlier weight, but overall higher quantization resolution (or lower quantization noise) for all other weights of that weight group.
- the method 1600 may be modified to apply any appropriate criteria to select a quantization format. For example, if there is only one outlier outside a quantization format range, the outlier is ignored when selecting the quantization format. For example, as there is only one weight (e.g., -1.29) that is outside the range of the quantization format S0.7, the quantization format S0.7 is selected for the weight group 1910_NL1 (the weight -1.29 is then saturated to -1). However, in an example, if there is more than one outlier outside the quantization format range, the outliers are not ignored, and the next higher quantization format (e.g., with a higher number of integer bits) is selected.
- in another example, whether to select the next higher quantization format (e.g., with a higher number of integer bits) may be based on a threshold value that is pre-specified (e.g., each quantization format can have a corresponding threshold value). For example, the threshold value for the quantization format S0.7 may be 0.2. In such an example, if an outlier is beyond the range of the quantization format S0.7 by more than the threshold value, the next higher quantization format (e.g., the quantization format S1.6) is selected.
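- one possible combination of these policies is sketched below (an illustration only: the disclosure leaves the exact criteria implementation specific, and the routine name and the way the single-outlier and threshold rules are combined here are assumptions):

```python
import numpy as np

def select_format_outlier_aware(weights, total_bits=8, threshold=0.2):
    """Try formats from the smallest range (S0.7) upward; accept a format if
    at most one weight falls outside its range and that weight exceeds the
    range by no more than `threshold` (it will saturate during quantization)."""
    mags = np.abs(np.asarray(weights, dtype=np.float64))
    for int_bits in range(total_bits):
        frac_bits = total_bits - 1 - int_bits
        bound = 2.0 ** int_bits                  # approximate range magnitude
        outliers = mags[mags > bound]
        if outliers.size == 0:
            return int_bits, frac_bits           # everything fits
        if outliers.size == 1 and outliers[0] - bound <= threshold:
            return int_bits, frac_bits           # one mild outlier: ignore it
    return total_bits - 1, 0

print(select_format_outlier_aware([0.5, -1.15, 0.3]))  # (0, 7): -1.15 saturates to -1
print(select_format_outlier_aware([0.5, -1.29, 0.3]))  # (1, 6): beyond the 0.2 threshold
```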
- Fig. 21 illustrates a multiplication and an accumulation operation for an example input data quantization format and an example weight quantization format.
- input data 2102 is in a format S0.7.
- the input data 2102 may be a tensor in a matrix form, merely as an example, and individual entries of the input tensor are quantized in the S0.7 format.
- a kernel comprising weights 2104 is to be convolved with the input 2102, and individual ones of the weights 2104 are also in the S0.7 format. Assume a convolution operation that involves multiplication of individual weights with individual entries of the input matrix.
- values in S0.7 format can take values in the range from -1 to +0.9921875, in steps of 2⁻⁷ or 0.0078125, as illustrated in table 1400.
- the quantization is performed to symmetrically limit the S0.7 format range, so that it is in the range from -0.9921875 to +0.9921875 in steps of 0.0078125 (e.g., see LUT 1570 in Fig. 15D), and -1 is excluded from this format (e.g., as discussed with respect to Fig. 15D), so that only 255 values are possible within this format.
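- the symmetric limiting can be sketched as a clamp on the raw two's-complement codes (an assumption about the effect of LUT 1570, not its actual contents): the code for -1.0 is never produced, leaving 255 usable codes:

```python
def symmetric_clip_s0_7(raw_code):
    """Exclude the raw code -128 (which would represent -1.0 in S0.7), so the
    representable range is symmetric: -0.9921875 .. +0.9921875 (255 codes)."""
    return max(-127, min(127, raw_code))

print(symmetric_clip_s0_7(-128))  # -127, i.e. -0.9921875 rather than -1.0
```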
- output 2108 is a multiplication (e.g., block 2106) of the input 2102 in S0.7 format and the weight 2104 in S0.7 format; as the multiplicands each have a specific range, the output 2108 also has a specific range (e.g., the output 2108 is in the S0.14 format).
- an accumulator 2120 accumulates outputs 2108 of multiple such multiplications. Accordingly, as the accumulator output 2122 is a sum of multiple such outputs, the accumulator output 2122 can be outside the range of the S0.14 format. Merely as an example, if there are four consecutive outputs 2108 having values of 0.98, 0.88, 0.85, and 0.87, the accumulator output from accumulating the four outputs 2108 would be 3.58, which would be outside the range allowed by the S0.14 format, and the accumulator output has to saturate to the maximum value allowed by the S0.14 format.
- accordingly, to prevent saturation, the accumulator output has a format that has a higher number of integer bits (e.g., higher than the zero integer bits present in the S0.14 format of the output 2108).
- the accumulator output is in accordance with the S4.14 format.
- with an S4.14 format, at least 16 outputs 2108 (e.g., each having the maximum possible value in the S0.14 format) can be summed and accumulated, because 2⁴, or 16, is approximately the maximum value that can be represented by the S4.14 format.
- in practice, more than 16 outputs 2108 can be accumulated by the accumulator 2120 without saturating, as many of the outputs 2108 may not have the maximum possible value in the S0.14 format, and some of the outputs 2108 are positive while some are negative, thereby averaging out and decreasing the value of the accumulator output.
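- the multiply-accumulate path of Fig. 21 can be sketched with plain integer arithmetic (two's-complement assumed; the helper name is hypothetical): each S0.7 x S0.7 product is a raw S0.14 integer, and the accumulator carries four extra integer bits (S4.14) of headroom before saturating:

```python
def mac_s0_7(inputs_q, weights_q):
    """Accumulate raw S0.7 x S0.7 products in an S4.14-style accumulator."""
    acc = 0
    for x, w in zip(inputs_q, weights_q):
        acc += int(x) * int(w)                   # raw S0.14 product (14 frac bits)
    lo, hi = -(1 << 18), (1 << 18) - 1           # raw S4.14 range: 1 + 4 + 14 bits
    return max(lo, min(hi, acc))                 # saturate like the hardware would

big = [127] * 17                                 # maximum-magnitude S0.7 codes
print(mac_s0_7(big[:16], big[:16]))              # 258064: sixteen products still fit
print(mac_s0_7(big, big) == (1 << 18) - 1)       # True: the seventeenth saturates
```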
- Fig. 22 illustrates a multiplication and an accumulation operation for an example input data quantization format and an example weight quantization format.
- input data 2202 is in a format S0.7, similar to Fig. 21.
- a kernel comprising weights 2204 is to be convolved with the input 2202.
- however, individual ones of the weights 2204 are in the S1.6 format (note that in Fig. 21, the weights were stored in a different format).
- assume a convolution operation that involves multiplication of individual weights with individual entries of the input matrix.
- input 2202 in S0.7 format can take values in the range from -1 to +0.9921875, in steps of 2⁻⁷ or 0.0078125, as illustrated in table 1400.
- the quantization is performed to symmetrically limit the S0.7 format range, so that it is in the range from -0.9921875 to +0.9921875 in steps of 0.0078125, and -1 is excluded from this format (e.g., as discussed with respect to Fig. 15D), so that only 255 values are possible within this format.
- Weight values 2204 in S1.6 format can take values in the range from -2 to +1.984375, in steps of 2⁻⁶, as illustrated in table 1400. Note that the resolution of the weights 2204 is 2⁻⁶, whereas the resolution of the weights 2104 of Fig. 21 is 2⁻⁷ - accordingly, the quantization noise for the weights 2204 is larger than the quantization noise for the weights 2104.
- output 2208 is a multiplication (e.g., block 2206) of the input 2202 in S0.7 format and the weight 2204 in S1.6 format; the output 2208 also has a specific range (e.g., the output 2208 is in the S1.13 format).
- compared to the output 2108 of Fig. 21, the output 2208 of Fig. 22 has a higher range, but a lower precision or resolution.
- an accumulator 2220 accumulates outputs 2208 of multiple such multiplications. Accordingly, as the accumulator output 2222 is a sum of multiple such outputs 2208, the accumulator output can be outside the range of the S1.13 format, as also discussed with respect to Fig. 21. Accordingly, to prevent saturation, the accumulator output has a format that has a higher number of integer bits (e.g., higher than the one integer bit present in the S1.13 format of the output 2208). Merely as an example, the accumulator output 2222 is in accordance with the S5.13 format.
- the S5.13 format output 2222 of the accumulator 2220 is rounded off by a rounding operator 2240 to the S0.7 format of the final output 2244.
- the 8 most significant fractional bits are rounded off to the 7 fractional bits of the S0.7 format.
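- the rounding stage can be sketched as a shift-with-rounding on the raw accumulator integer (a simplified assumption: round-half-up on the discarded bits, then saturation to the 8-bit S0.7 raw range):

```python
def round_s5_13_to_s0_7(acc_raw):
    """Drop 6 of the 13 fractional bits (13 -> 7) with rounding, then
    saturate the result to the raw range of an 8-bit S0.7 value."""
    rounded = (acc_raw + (1 << 5)) >> 6          # add half an output LSB, shift
    return max(-128, min(127, rounded))

print(round_s5_13_to_s0_7(8192))   # 1.0 in S5.13 -> 127 (saturates near +1)
print(round_s5_13_to_s0_7(-4096))  # -0.5 in S5.13 -> -64 (exactly -0.5 in S0.7)
```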
- Fig. 23 is a block diagram of a base calling system 2300 in accordance with one implementation.
- the base calling system 2300 may operate to obtain any information or data that relates to at least one of a biological or chemical substance.
- the base calling system 2300 is a workstation that may be similar to a bench-top device or desktop computer. For example, a majority (or all) of the systems and components for conducting the desired reactions can be within a common housing 2316.
- the base calling system 2300 is a nucleic acid sequencing system (or sequencer) configured for various applications, including but not limited to de novo sequencing, resequencing of whole genomes or target genomic regions, and metagenomics. The sequencer may also be used for DNA or RNA analysis.
- the base calling system 2300 may also be configured to generate reaction sites in a biosensor.
- the base calling system 2300 may be configured to receive a sample and generate surface attached clusters of clonally amplified nucleic acids derived from the sample. Each cluster may constitute or be part of a reaction site in the biosensor.
- the exemplary base calling system 2300 may include a system receptacle or interface 2312 that is configured to interact with a biosensor 2302 to perform desired reactions within the biosensor 2302.
- the biosensor 2302 is loaded into the system receptacle 2312.
- a cartridge that includes the biosensor 2302 may be inserted into the system receptacle 2312, and in some cases the cartridge can be removed temporarily or permanently.
- the cartridge may include, among other things, fluidic control and fluidic storage components.
- the base calling system 2300 is configured to perform a large number of parallel reactions within the biosensor 2302.
- the biosensor 2302 includes one or more reaction sites where desired reactions can occur.
- the reaction sites may be, for example, immobilized to a solid surface of the biosensor or immobilized to beads (or other movable substrates) that are located within corresponding reaction chambers of the biosensor.
- the reaction sites can include, for example, clusters of clonally amplified nucleic acids.
- the biosensor 2302 may include a solid-state imaging device (e.g., CCD or CMOS imager) and a flow cell mounted thereto.
- the flow cell may include one or more flow channels that receive a solution from the base calling system 2300 and direct the solution toward the reaction sites.
- the biosensor 2302 can be configured to engage a thermal element for transferring thermal energy into or out of the flow channel.
- the base calling system 2300 may include various components, assemblies, and systems (or sub-systems) that interact with each other to perform a predetermined method or assay protocol for biological or chemical analysis.
- the base calling system 2300 includes a system controller 2304 that may communicate with the various components, assemblies, and sub-systems of the base calling system 2300 and also the biosensor 2302.
- the base calling system 2300 may also include a fluidic control system 2306 to control the flow of fluid throughout a fluid network of the base calling system 2300 and the biosensor 2302; a fluidic storage system 2308 that is configured to hold all fluids (e.g., gas or liquids) that may be used by the bioassay system; a temperature control system 2310 that may regulate the temperature of the fluid in the fluid network, the fluidic storage system 2308, and/or the biosensor 2302; and an illumination system 2309 that is configured to illuminate the biosensor 2302.
- the cartridge may also include fluidic control and fluidic storage components.
- the base calling system 2300 may include a user interface 2314 that interacts with the user.
- the user interface 2314 may include a display 2313 to display or request information from a user and a user input device 2315 to receive user inputs.
- the display 2313 and the user input device 2315 are the same device.
- the user interface 2314 may include a touch-sensitive display configured to detect the presence of an individual's touch and also identify a location of the touch on the display.
- other user input devices 2315 may be used, such as a mouse, touchpad, keyboard, keypad, handheld scanner, voice-recognition system, motion-recognition system, and the like.
- the base calling system 2300 may communicate with various components, including the biosensor 2302 (e.g., in the form of a cartridge), to perform the desired reactions.
- the base calling system 2300 may also be configured to analyze data obtained from the biosensor to provide a user with desired information.
- the system controller 2304 may include any processor-based or microprocessor-based system, including systems using microcontrollers, reduced instruction set computers (RISC), application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), logic circuits, and any other circuit or processor capable of executing functions described herein.
- the system controller 2304 executes a set of instructions that are stored in one or more storage elements, memories, or modules in order to at least one of obtain and analyze detection data.
- Detection data can include a plurality of sequences of pixel signals, such that a sequence of pixel signals from each of the millions of sensors (or pixels) can be detected over many base calling cycles.
- Storage elements may be in the form of information sources or physical memory elements within the base calling system 2300.
- the set of instructions may include various commands that instruct the base calling system 2300 or biosensor 2302 to perform specific operations such as the methods and processes of the various implementations described herein.
- the set of instructions may be in the form of a software program, which may form part of a tangible, non-transitory computer readable medium or media.
- the terms “software” and “firmware” are interchangeable and include any computer program stored in memory for execution by a computer, including RAM memory, ROM memory, EPROM memory, EEPROM memory, and non-volatile RAM (NVRAM) memory.
- the software may be in various forms such as system software or application software. Further, the software may be in the form of a collection of separate programs, or a program module within a larger program or a portion of a program module. The software also may include modular programming in the form of object-oriented programming. After obtaining the detection data, the detection data may be automatically processed by the base calling system 2300, processed in response to user inputs, or processed in response to a request made by another processing machine (e.g., a remote request through a communication link).
- the system controller 2304 includes an analysis module 2438 (see Fig. 24). In other implementations, system controller 2304 does not include the analysis module 2438 and instead has access to the analysis module 2438 (e.g., the analysis module 2438 may be separately hosted on the cloud).
- the system controller 2304 may be connected to the biosensor 2302 and the other components of the base calling system 2300 via communication links.
- the system controller 2304 may also be communicatively connected to off-site systems or servers.
- the communication links may be hardwired, corded, or wireless.
- the system controller 2304 may receive user inputs or commands, from the user interface 2314 and the user input device 2315.
- the fluidic control system 2306 includes a fluid network and is configured to direct and regulate the flow of one or more fluids through the fluid network.
- the fluid network may be in fluid communication with the biosensor 2302 and the fluidic storage system 2308.
- select fluids may be drawn from the fluidic storage system 2308 and directed to the biosensor 2302 in a controlled manner, or the fluids may be drawn from the biosensor 2302 and directed toward, for example, a waste reservoir in the fluidic storage system 2308.
- the fluidic control system 2306 may include flow sensors that detect a flow rate or pressure of the fluids within the fluid network. The sensors may communicate with the system controller 2304.
- the temperature control system 2310 is configured to regulate the temperature of fluids at different regions of the fluid network, the fluidic storage system 2308, and/or the biosensor 2302.
- the temperature control system 2310 may include a thermocycler that interfaces with the biosensor 2302 and controls the temperature of the fluid that flows along the reaction sites in the biosensor 2302.
- the temperature control system 2310 may also regulate the temperature of solid elements or components of the base calling system 2300 or the biosensor 2302.
- the temperature control system 2310 may include sensors to detect the temperature of the fluid or other components. The sensors may communicate with the system controller 2304.
- the fluidic storage system 2308 is in fluid communication with the biosensor 2302 and may store various reaction components or reactants that are used to conduct the desired reactions therein.
- the fluidic storage system 2308 may also store fluids for washing or cleaning the fluid network and biosensor 2302 and for diluting the reactants.
- the fluidic storage system 2308 may include various reservoirs to store samples, reagents, enzymes, other biomolecules, buffer solutions, aqueous, and non-polar solutions, and the like.
- the fluidic storage system 2308 may also include waste reservoirs for receiving waste products from the biosensor 2302.
- the cartridge may include one or more of a fluidic storage system, fluidic control system or temperature control system.
- a cartridge can have various reservoirs to store samples, reagents, enzymes, other biomolecules, buffer solutions, aqueous, and non-polar solutions, waste, and the like.
- a fluidic storage system, fluidic control system or temperature control system can be removably engaged with a bioassay system via a cartridge or other biosensor.
- the illumination system 2309 may include a light source (e.g., one or more LEDs) and a plurality of optical components to illuminate the biosensor.
- light sources may include lasers, arc lamps, LEDs, or laser diodes.
- the optical components may be, for example, reflectors, dichroics, beam splitters, collimators, lenses, filters, wedges, prisms, mirrors, detectors, and the like.
- the illumination system 2309 may be configured to direct an excitation light to reaction sites.
- fluorophores may be excited by green wavelengths of light; as such, the wavelength of the excitation light may be approximately 532 nm.
- the illumination system 2309 is configured to produce illumination that is parallel to a surface normal of a surface of the biosensor 2302. In another implementation, the illumination system 2309 is configured to produce illumination that is off-angle relative to the surface normal of the surface of the biosensor 2302. In yet another implementation, the illumination system 2309 is configured to produce illumination that has plural angles, including some parallel illumination and some off- angle illumination.
- the system receptacle or interface 2312 is configured to engage the biosensor 2302 in at least one of a mechanical, electrical, and fluidic manner.
- the system receptacle 2312 may hold the biosensor 2302 in a desired orientation to facilitate the flow of fluid through the biosensor 2302.
- the system receptacle 2312 may also include electrical contacts that are configured to engage the biosensor 2302 so that the base calling system 2300 may communicate with the biosensor 2302 and/or provide power to the biosensor 2302.
- the system receptacle 2312 may include fluidic ports (e.g., nozzles) that are configured to engage the biosensor 2302.
- the biosensor 2302 is removably coupled to the system receptacle 2312 in a mechanical manner, in an electrical manner, and also in a fluidic manner.
- the base calling system 2300 may communicate remotely with other systems or networks or with other bioassay systems 2300. Detection data obtained by the bioassay system(s) 2300 may be stored in a remote database.
- Fig. 24 is a block diagram of the system controller 2304 that can be used in the system of Fig. 23.
- the system controller 2304 includes one or more processors or modules that can communicate with one another.
- Each of the processors or modules may include an algorithm (e.g, instructions stored on a tangible and/or non-transitory computer readable storage medium) or sub-algorithms to perform particular processes.
- the system controller 2304 is illustrated conceptually as a collection of modules, but may be implemented utilizing any combination of dedicated hardware boards, DSPs, processors, etc. Alternatively, the system controller 2304 may be implemented utilizing an off-the-shelf PC with a single processor or multiple processors, with the functional operations distributed between the processors.
- a communication port 2420 may transmit information (e.g., commands) to or receive information (e.g., data) from the biosensor 2302 (Fig. 23) and/or the sub-systems 2306, 2308, 2310 (Fig. 23). In implementations, the communication port 2420 may output a plurality of sequences of pixel signals. A communication port 2420 may receive user input from the user interface 2314 (Fig. 23).
- Data from the biosensor 2302 or sub-systems 2306, 2308, 2310 may be processed by the system controller 2304 in real-time during a bioassay session. Additionally, or alternatively, data may be stored temporarily in a system memory during a bioassay session and processed in slower than real-time or off-line operation.
- the system controller 2304 may include a plurality of modules 2431-2439 that communicate with a main control module 2430.
- the main control module 2430 may communicate with the user interface 2314 (Fig. 23).
- although the modules 2431-2439 are shown as communicating directly with the main control module 2430, the modules 2431-2439 may also communicate directly with each other, the user interface 2314, and the biosensor 2302. Also, the modules 2431-2439 may communicate with the main control module 2430 through the other modules.
- the plurality of modules 2431-2439 include system modules 2431-2433, 2439 that communicate with the sub-systems 2306, 2308, 2310, and 2309, respectively.
- the fluidic control module 2431 may communicate with the fluidic control system 2306 to control the valves and flow sensors of the fluid network for controlling the flow of one or more fluids through the fluid network.
- the fluidic storage module 2432 may notify the user when fluids are low or when the waste reservoir is at or near capacity.
- the fluidic storage module 2432 may also communicate with the temperature control module 2433 so that the fluids may be stored at a desired temperature.
- the illumination module 2439 may communicate with the illumination system 2309 to illuminate the reaction sites at designated times during a protocol, such as after the desired reactions (e.g., binding events) have occurred. In some implementations, the illumination module 2439 may communicate with the illumination system 2309 to illuminate the reaction sites at designated angles.
- the plurality of modules 2431-2439 may also include a device module 2434 that communicates with the biosensor 2302 and an identification module 2435 that determines identification information relating to the biosensor 2302.
- the device module 2434 may, for example, communicate with the system receptacle 2312 to confirm that the biosensor has established an electrical and fluidic connection with the base calling system 2300.
- the identification module 2435 may receive signals that identify the biosensor 2302.
- the identification module 2435 may use the identity of the biosensor 2302 to provide other information to the user. For example, the identification module 2435 may determine and then display a lot number, a date of manufacture, or a protocol that is recommended to be run with the biosensor 2302.
- the plurality of modules 2431-2439 also includes an analysis module 2438 (also called signal processing module or signal processor) that receives and analyzes the signal data (e.g., image data) from the biosensor 2302.
- Analysis module 2438 includes memory (e.g., RAM or Flash) to store detection data.
- Detection data can include a plurality of sequences of pixel signals, such that a sequence of pixel signals from each of the millions of sensors (or pixels) can be detected over many base calling cycles.
- the signal data may be stored for subsequent analysis or may be transmitted to the user interface 2314 to display desired information to the user.
- the signal data may be processed by the solid-state imager (e.g., CMOS image sensor) before the analysis module 2438 receives the signal data.
- the analysis module 2438 is configured to obtain image data from the light detectors at each of a plurality of sequencing cycles.
- the image data is derived from the emission signals detected by the light detectors; the analysis module 2438 processes the image data for each of the plurality of sequencing cycles through a neural network (e.g., a neural network-based template generator 2448, a neural network-based base caller 2458 (e.g., Fig. 4 and Fig. 10), and/or a neural network-based quality scorer 2468) and produces a base call for at least some of the analytes at each of the plurality of sequencing cycles.
- Protocol modules 2436 and 2437 communicate with the main control module 2430 to control the operation of the sub-systems 2306, 2308, and 2310 when conducting predetermined assay protocols.
- the protocol modules 2436 and 2437 may include sets of instructions for instructing the base calling system 2300 to perform specific operations pursuant to predetermined protocols.
- the protocol module may be a sequencing-by-synthesis (SBS) module 2436 that is configured to issue various commands for performing sequencing-by-synthesis processes.
- extension of a nucleic acid primer along a nucleic acid template is monitored to determine the sequence of nucleotides in the template.
- the underlying chemical process can be polymerization (e.g., as catalyzed by a polymerase enzyme).
- fluorescently labeled nucleotides are added to a primer (thereby extending the primer) in a template dependent fashion such that detection of the order and type of nucleotides added to the primer can be used to determine the sequence of the template.
- commands can be given to deliver one or more labeled nucleotides, DNA polymerase, etc., into/through a flow cell that houses an array of nucleic acid templates.
- the nucleic acid templates may be located at corresponding reaction sites.
- the nucleotides can further include a reversible termination property that terminates further primer extension once a nucleotide has been added to a primer.
- a nucleotide analog having a reversible terminator moiety can be added to a primer such that subsequent extension cannot occur until a deblocking agent is delivered to remove the moiety.
- a command can be given to deliver a deblocking reagent to the flow cell (before or after detection occurs).
- One or more commands can be given to effect wash(es) between the various delivery steps.
- the cycle can then be repeated n times to extend the primer by n nucleotides, thereby detecting a sequence of length n.
- Exemplary sequencing techniques are described, for example, in Bentley et al., Nature 456:53-59 (2008); WO 04/018497; US 7,057,026; WO 91/06678; WO 07/123744; US 7,329,492; US 7,211,414; US 7,315,019; US 7,405,281; and US 2008/0108082, each of which is incorporated herein by reference.
- in the nucleotide delivery step of an SBS cycle, either a single type of nucleotide can be delivered at a time, or multiple different nucleotide types (e.g., A, C, T and G together) can be delivered.
- for a nucleotide delivery configuration where only a single type of nucleotide is present at a time, the different nucleotides need not have distinct labels, since they can be distinguished based on the temporal separation inherent in the individualized delivery. Accordingly, a sequencing method or apparatus can use single color detection. For example, an excitation source need only provide excitation at a single wavelength or in a single range of wavelengths.
- for a configuration in which multiple different nucleotides are present at one time, sites that incorporate different nucleotide types can be distinguished based on different fluorescent labels that are attached to respective nucleotide types in the mixture.
- four different nucleotides can be used, each having one of four different fluorophores.
- the four different fluorophores can be distinguished using excitation in four different regions of the spectrum.
- four different excitation radiation sources can be used.
- fewer than four different excitation sources can be used, but optical filtration of the excitation radiation from a single source can be used to produce different ranges of excitation radiation at the flow cell.
- fewer than four different colors can be detected in a mixture having four different nucleotides.
- pairs of nucleotides can be detected at the same wavelength, but distinguished based on a difference in intensity for one member of the pair compared to the other, or based on a change to one member of the pair (e.g., via chemical modification, photochemical modification or physical modification) that causes apparent signal to appear or disappear compared to the signal detected for the other member of the pair.
- Exemplary apparatus and methods for distinguishing four different nucleotides using detection of fewer than four colors are described for example in US Pat. App. Ser. Nos. 61/538,294 and 61/619,878, which are incorporated herein by reference in their entireties.
- U.S. Application No. 13/624,200 which was filed on September 21, 2012, is also incorporated by reference in its entirety.
- the plurality of protocol modules may also include a sample-preparation (or generation) module 2437 that is configured to issue commands to the fluidic control system 2306 and the temperature control system 2310 for amplifying a product within the biosensor 2302.
- a sample-preparation (or generation) module 2437 may issue instructions to the fluidic control system 2306 to deliver necessary amplification components to reaction chambers within the biosensor 2302.
- the reaction sites may already contain some components for amplification, such as the template DNA and/or primers.
- the amplification module 2437 may instruct the temperature control system 2310 to cycle through different temperature stages according to known amplification protocols.
- the amplification and/or nucleotide incorporation is performed isothermally.
- the SBS module 2436 may issue commands to perform bridge PCR where clusters of clonal amplicons are formed on localized areas within a channel of a flow cell. After generating the amplicons through bridge PCR, the amplicons may be “linearized” to make single stranded template DNA, or sstDNA, and a sequencing primer may be hybridized to a universal sequence that flanks a region of interest. For example, a reversible terminator-based sequencing by synthesis method can be used as set forth above or as follows.
- Each base calling or sequencing cycle can extend an sstDNA by a single base which can be accomplished for example by using a modified DNA polymerase and a mixture of four types of nucleotides.
- the different types of nucleotides can have unique fluorescent labels, and each nucleotide can further have a reversible terminator that allows only a single-base incorporation to occur in each cycle. After a single base is added to the sstDNA, excitation light may be incident upon the reaction sites and fluorescent emissions may be detected. After detection, the fluorescent label and the terminator may be chemically cleaved from the sstDNA. Another similar base calling or sequencing cycle may follow.
- the SBS module 2436 may instruct the fluidic control system 2306 to direct a flow of reagent and enzyme solutions through the biosensor 2302.
- Exemplary reversible terminator-based SBS methods which can be utilized with the apparatus and methods set forth herein are described in US Patent Application Publication No. 2007/0166705 Al, US Patent Application Publication No. 2006/0188901 Al, US Patent No. 7,057,026, US Patent Application Publication No. 2006/0240439 Al, PCT Publication No. WO 05/065814, and PCT Publication No. WO 06/064199, each of which is incorporated herein by reference in its entirety.
- the amplification and SBS modules may operate in a single assay protocol where, for example, template nucleic acid is amplified and subsequently sequenced within the same cartridge.
- the base calling system 2300 may also allow the user to reconfigure an assay protocol.
- the base calling system 2300 may offer options to the user through the user interface 2314 for modifying the determined protocol. For example, if it is determined that the biosensor 2302 is to be used for amplification, the base calling system 2300 may request a temperature for the annealing cycle. Furthermore, the base calling system 2300 may issue warnings to a user if a user has provided user inputs that are generally not acceptable for the selected assay protocol.
- the biosensor 2302 includes millions of sensors (or pixels), each of which generates a plurality of sequences of pixel signals over successive base calling cycles.
- the analysis module 2438 detects the plurality of sequences of pixel signals and attributes them to corresponding sensors (or pixels) in accordance with the row-wise and/or column-wise location of the sensors on an array of sensors.
- Each sensor in the array of sensors can produce sensor data for a tile of the flow cell, where a tile is an area on the flow cell at which clusters of genetic material are disposed during the base calling operation.
- the sensor data can comprise image data in an array of pixels.
- the sensor data can include more than one image, producing multiple features per pixel as the tile data.
- Logic (e.g., data flow logic) can be implemented in the form of a computer product including a non-transitory computer readable storage medium with computer usable program code for performing the method steps described herein.
- the “logic” can be implemented in the form of an apparatus including a memory and at least one processor that is coupled to the memory and operative to perform exemplary method steps.
- the “logic” can be implemented in the form of means for carrying out one or more of the method steps described herein; the means can include (i) hardware module(s), (ii) software module(s) executing on one or more hardware processors, or (iii) a combination of hardware and software modules; any of (i)-(iii) implement the specific techniques set forth herein, and the software modules are stored in a computer readable storage medium (or multiple such media).
- the logic implements a data processing function.
- Fig. 25 is a simplified block diagram of a computer system 2500 that can be used to implement the technology disclosed.
- Computer system 2500 includes at least one central processing unit (CPU) 2572 that communicates with a number of peripheral devices via bus subsystem 2555.
- peripheral devices can include a storage subsystem 2510 including, for example, memory devices and a file storage subsystem 2536, user interface input devices 2538, user interface output devices 2576, and a network interface subsystem 2574.
- the input and output devices allow user interaction with computer system 2500.
- Network interface subsystem 2574 provides an interface to outside networks, including an interface to corresponding interface devices in other computer systems.
- User interface input devices 2538 can include a keyboard; pointing devices such as a mouse, trackball, touchpad, or graphics tablet; a scanner; a touch screen incorporated into the display; audio input devices such as voice recognition systems and microphones; and other types of input devices.
- use of the term “input device” is intended to include all possible types of devices and ways to input information into computer system 2500.
- User interface output devices 2576 can include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices.
- the display subsystem can include an LED display, a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image.
- the display subsystem can also provide a non-visual display such as audio output devices.
- output device is intended to include all possible types of devices and ways to output information from computer system 2500 to the user or to another machine or computer system.
- Storage subsystem 2510 stores programming and data constructs that provide the functionality of some or all of the modules and methods described herein. These software modules are generally executed by deep learning processors 2578.
- in one implementation, the neural networks are implemented using deep learning processors 2578, which can be configurable and reconfigurable processors, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), coarse-grained reconfigurable architectures (CGRAs), graphics processing units (GPUs), and/or other suitably configured devices.
- Deep learning processors 2578 can be hosted by a deep learning cloud platform such as Google Cloud Platform™, Xilinx™, and Cirrascale™.
- Examples of deep learning processors 2578 include Google's Tensor Processing Unit (TPU)™, rackmount solutions like GX4 Rackmount Series™, GX149 Rackmount Series™, NVIDIA DGX-1™, Microsoft's Stratix V FPGA™, Graphcore's Intelligent Processor Unit (IPU)™, Qualcomm's Zeroth Platform™ with Snapdragon processors™, NVIDIA's Volta™, NVIDIA's DRIVE PX™, NVIDIA's JETSON TX1/TX2 MODULE™, Intel's Nirvana™, Movidius VPU™, Fujitsu DPI™, ARM's DynamicIQ™, IBM TrueNorth™, and others.
- Memory subsystem 2522 used in the storage subsystem 2510 can include a number of memories including a main random access memory (RAM) 2534 for storage of instructions and data during program execution and a read only memory (ROM) 2532 in which fixed instructions are stored.
- a file storage subsystem 2536 can provide persistent storage for program and data files, and can include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges.
- the modules implementing the functionality of certain implementations can be stored by file storage subsystem 2536 in the storage subsystem 2510, or in other machines accessible by the processor.
- Bus subsystem 2555 provides a mechanism for letting the various components and subsystems of computer system 2500 communicate with each other as intended. Although bus subsystem 2555 is shown schematically as a single bus, alternative implementations of the bus subsystem can use multiple busses.
- Computer system 2500 itself can be of varying types including a personal computer, a portable computer, a workstation, a computer terminal, a network computer, a television, a mainframe, a server farm, a widely-distributed set of loosely networked computers, or any other data processing system or user device. Due to the ever-changing nature of computers and networks, the description of computer system 2500 depicted in Fig. 25 is intended only as a specific example for purposes of illustrating the preferred implementations of the present invention. Many other configurations of computer system 2500 are possible having more or fewer components than the computer system depicted in Fig. 25.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Theoretical Computer Science (AREA)
- Biophysics (AREA)
- Data Mining & Analysis (AREA)
- General Health & Medical Sciences (AREA)
- Software Systems (AREA)
- Biomedical Technology (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Artificial Intelligence (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Biotechnology (AREA)
- Mathematical Physics (AREA)
- General Engineering & Computer Science (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- General Physics & Mathematics (AREA)
- Molecular Biology (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Chemical & Material Sciences (AREA)
- Proteomics, Peptides & Aminoacids (AREA)
- Analytical Chemistry (AREA)
- Neurology (AREA)
- Bioethics (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Databases & Information Systems (AREA)
- Epidemiology (AREA)
- Public Health (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Measuring Or Testing Involving Enzymes Or Micro-Organisms (AREA)
Abstract
Description
Claims
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP22714690.9A EP4309080A1 (en) | 2021-03-16 | 2022-03-15 | Neural network parameter quantization for base calling |
CN202280005057.3A CN115699019A (en) | 2021-03-16 | 2022-03-15 | Neural network parameter quantification for base detection |
CA3183567A CA3183567A1 (en) | 2021-03-16 | 2022-03-15 | Neural network parameter quantization for base calling |
AU2022237501A AU2022237501A1 (en) | 2021-03-16 | 2022-03-15 | Neural network parameter quantization for base calling |
Applications Claiming Priority (8)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202163161880P | 2021-03-16 | 2021-03-16 | |
US202163161896P | 2021-03-16 | 2021-03-16 | |
US63/161,896 | 2021-03-16 | ||
US63/161,880 | 2021-03-16 | ||
US17/687,551 US20220301657A1 (en) | 2021-03-16 | 2022-03-04 | Tile location and/or cycle based weight set selection for base calling |
US17/687,583 US20220300811A1 (en) | 2021-03-16 | 2022-03-04 | Neural network parameter quantization for base calling |
US17/687,583 | 2022-03-04 | ||
US17/687,551 | 2022-03-04 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022197754A1 true WO2022197754A1 (en) | 2022-09-22 |
Family
ID=81325254
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2022/020462 WO2022197754A1 (en) | 2021-03-16 | 2022-03-15 | Neural network parameter quantization for base calling |
PCT/US2022/020460 WO2022197752A1 (en) | 2021-03-16 | 2022-03-15 | Tile location and/or cycle based weight set selection for base calling |
Family Applications After (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2022/020460 WO2022197752A1 (en) | 2021-03-16 | 2022-03-15 | Tile location and/or cycle based weight set selection for base calling |
Country Status (1)
Country | Link |
---|---|
WO (2) | WO2022197754A1 (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10491239B1 (en) * | 2017-02-02 | 2019-11-26 | Habana Labs Ltd. | Large-scale computations using an adaptive numerical format |
US20200125947A1 (en) * | 2018-10-17 | 2020-04-23 | Samsung Electronics Co., Ltd. | Method and apparatus for quantizing parameters of neural network |
Family Cites Families (29)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CA2044616A1 (en) | 1989-10-26 | 1991-04-27 | Roger Y. Tsien | Dna sequencing |
US6090592A (en) | 1994-08-03 | 2000-07-18 | Mosaic Technologies, Inc. | Method for performing amplification of nucleic acid on supports |
US5641658A (en) | 1994-08-03 | 1997-06-24 | Mosaic Technologies, Inc. | Method for performing amplification of nucleic acid with two primers bound to a single solid support |
JP2001517948A (en) | 1997-04-01 | 2001-10-09 | グラクソ、グループ、リミテッド | Nucleic acid sequencing |
AR021833A1 (en) | 1998-09-30 | 2002-08-07 | Applied Research Systems | METHODS OF AMPLIFICATION AND SEQUENCING OF NUCLEIC ACID |
DE60131194T2 (en) | 2000-07-07 | 2008-08-07 | Visigen Biotechnologies, Inc., Bellaire | SEQUENCE PROVISION IN REAL TIME |
AU2002227156A1 (en) | 2000-12-01 | 2002-06-11 | Visigen Biotechnologies, Inc. | Enzymatic nucleic acid synthesis: compositions and methods for altering monomer incorporation fidelity |
AR031640A1 (en) | 2000-12-08 | 2003-09-24 | Applied Research Systems | ISOTHERMAL AMPLIFICATION OF NUCLEIC ACIDS IN A SOLID SUPPORT |
US7057026B2 (en) | 2001-12-04 | 2006-06-06 | Solexa Limited | Labelled nucleotides |
US20040002090A1 (en) | 2002-03-05 | 2004-01-01 | Pascal Mayer | Methods for detecting genome-wide sequence variations associated with a phenotype |
EP3002289B1 (en) | 2002-08-23 | 2018-02-28 | Illumina Cambridge Limited | Modified nucleotides for polynucleotide sequencing |
GB0321306D0 (en) | 2003-09-11 | 2003-10-15 | Solexa Ltd | Modified polymerases for improved incorporation of nucleotide analogues |
ES2949821T3 (en) | 2004-01-07 | 2023-10-03 | Illumina Cambridge Ltd | Molecular arrays |
AU2005296200B2 (en) | 2004-09-17 | 2011-07-14 | Pacific Biosciences Of California, Inc. | Apparatus and method for analysis of molecules |
EP1828412B2 (en) | 2004-12-13 | 2019-01-09 | Illumina Cambridge Limited | Improved method of nucleotide detection |
US8045998B2 (en) | 2005-06-08 | 2011-10-25 | Cisco Technology, Inc. | Method and system for communicating using position information |
DK2463386T3 (en) | 2005-06-15 | 2017-07-31 | Complete Genomics Inc | Nucleic acid analysis using random mixtures of non-overlapping fragments |
GB0514936D0 (en) | 2005-07-20 | 2005-08-24 | Solexa Ltd | Preparation of templates for nucleic acid sequencing |
GB0517097D0 (en) | 2005-08-19 | 2005-09-28 | Solexa Ltd | Modified nucleosides and nucleotides and uses thereof |
US7405281B2 (en) | 2005-09-29 | 2008-07-29 | Pacific Biosciences Of California, Inc. | Fluorescent nucleotide analogs and uses therefor |
GB0522310D0 (en) | 2005-11-01 | 2005-12-07 | Solexa Ltd | Methods of preparing libraries of template polynucleotides |
US20080009420A1 (en) | 2006-03-17 | 2008-01-10 | Schroth Gary P | Isothermal methods for creating clonal single molecule arrays |
CN101460953B (en) | 2006-03-31 | 2012-05-30 | 索雷克萨公司 | Systems and devices for sequence by synthesis analysis |
US20080242560A1 (en) | 2006-11-21 | 2008-10-02 | Gunderson Kevin L | Methods for generating amplified nucleic acid arrays |
US7595882B1 (en) | 2008-04-14 | 2009-09-29 | Geneal Electric Company | Hollow-core waveguide-based raman systems and methods |
HRP20211523T1 (en) | 2011-09-23 | 2021-12-24 | Illumina, Inc. | Compositions for nucleic acid sequencing |
NL2018852B1 (en) * | 2017-05-05 | 2018-11-14 | Illumina Inc | Optical distortion correction for imaged samples |
US11783917B2 (en) * | 2019-03-21 | 2023-10-10 | Illumina, Inc. | Artificial intelligence-based base calling |
EP3969884B1 (en) * | 2019-05-16 | 2024-04-17 | Illumina, Inc. | Systems and methods for characterization and performance analysis of pixel-based sequencing |
-
2022
- 2022-03-15 WO PCT/US2022/020462 patent/WO2022197754A1/en active Application Filing
- 2022-03-15 WO PCT/US2022/020460 patent/WO2022197752A1/en active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10491239B1 (en) * | 2017-02-02 | 2019-11-26 | Habana Labs Ltd. | Large-scale computations using an adaptive numerical format |
US20200125947A1 (en) * | 2018-10-17 | 2020-04-23 | Samsung Electronics Co., Ltd. | Method and apparatus for quantizing parameters of neural network |
Also Published As
Publication number | Publication date |
---|---|
WO2022197752A1 (en) | 2022-09-22 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220300811A1 (en) | Neural network parameter quantization for base calling | |
US20210265015A1 (en) | Hardware Execution and Acceleration of Artificial Intelligence-Based Base Caller | |
US20210265016A1 (en) | Data Compression for Artificial Intelligence-Based Base Calling | |
US20230041989A1 (en) | Base calling using multiple base caller models | |
WO2022197754A1 (en) | Neural network parameter quantization for base calling | |
EP4309080A1 (en) | Neural network parameter quantization for base calling | |
US20230029970A1 (en) | Quality score calibration of basecalling systems | |
WO2023009758A1 (en) | Quality score calibration of basecalling systems | |
US20230026084A1 (en) | Self-learned base caller, trained using organism sequences | |
US20220415445A1 (en) | Self-learned base caller, trained using oligo sequences | |
KR20240035413A (en) | Base calling using the multi-base caller model | |
EP4381514A1 (en) | Base calling using multiple base caller models | |
CA3224387A1 (en) | Self-learned base caller, trained using organism sequences | |
CN117529780A (en) | Mass fraction calibration of base detection systems | |
CN117546248A (en) | Base detection using multiple base detector model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 22714690 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 3183567 Country of ref document: CA |
|
ENP | Entry into the national phase |
Ref document number: 2022237501 Country of ref document: AU Date of ref document: 20220315 Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2022714690 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2022714690 Country of ref document: EP Effective date: 20231016 |