IL299077A - Tile location and/or cycle based weight set selection for base calling - Google Patents
- Publication number
- IL299077A
- Authority
- IL
- Israel
- Prior art keywords
- weight set
- sensor data
- weights
- neural network
- sensing cycles
- Prior art date
Links
- 238000013528 artificial neural network Methods 0.000 claims 32
- 230000002123 temporal effect Effects 0.000 claims 25
- 238000000034 method Methods 0.000 claims 14
- 238000012163 sequencing technique Methods 0.000 claims 3
- 238000003556 assay Methods 0.000 claims 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/048—Activation functions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16B—BIOINFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR GENETIC OR PROTEIN-RELATED DATA PROCESSING IN COMPUTATIONAL MOLECULAR BIOLOGY
- G16B30/00—ICT specially adapted for sequence analysis involving nucleotides or amino acids
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16B—BIOINFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR GENETIC OR PROTEIN-RELATED DATA PROCESSING IN COMPUTATIONAL MOLECULAR BIOLOGY
- G16B40/00—ICT specially adapted for biostatistics; ICT specially adapted for bioinformatics-related machine learning or data mining, e.g. knowledge discovery or pattern finding
- G16B40/20—Supervised data analysis
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Theoretical Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Biophysics (AREA)
- General Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- Biomedical Technology (AREA)
- Software Systems (AREA)
- Evolutionary Computation (AREA)
- Medical Informatics (AREA)
- Molecular Biology (AREA)
- Computing Systems (AREA)
- Computational Linguistics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Mathematical Physics (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Biotechnology (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Public Health (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Neurology (AREA)
- Analytical Chemistry (AREA)
- Chemical & Material Sciences (AREA)
- Epidemiology (AREA)
- Databases & Information Systems (AREA)
- Proteomics, Peptides & Aminoacids (AREA)
- Bioethics (AREA)
- Measuring Or Testing Involving Enzymes Or Micro-Organisms (AREA)
- Compression, Expansion, Code Conversion, And Decoders (AREA)
- Road Signs Or Road Markings (AREA)
- Road Paving Structures (AREA)
- Pens And Brushes (AREA)
- Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
Claims (20)
1. A system, comprising: a host processor; memory accessible by the host processor storing a topology of a neural network, first, second, and third weight sets for configuring the topology to execute a base calling operation, the first, second, and third weight sets respectively corresponding to first, second, and third subseries of sensing cycles divided from a series of sensing cycles, wherein the first, second, and third weight sets are determined using respective training data generated during the first, second, and third subseries of sensing cycles, and first, second, and third sensor data respectively corresponding to the first, second, and third subseries of sensing cycles; and a configurable processor having access to the memory and configured with data flow logic to load the topology on processing elements of the configurable processor, load the first sensor data on the processing elements, load the first weight set on the processing elements to configure the topology with weights in the first weight set, and cause the neural network to apply the weights in the first weight set on the first sensor data to produce first base call classification data for sensing cycles in the first subseries of sensing cycles, load the second sensor data on the processing elements, load the second weight set on the processing elements to configure the topology with weights in the second weight set, and cause the neural network to apply the weights in the second weight set on the second sensor data to produce second base call classification data for sensing cycles in the second subseries of sensing cycles, and load the third sensor data on the processing elements, load the third weight set on the processing elements to configure the topology with weights in the third weight set, and cause the neural network to apply the weights in the third weight set on the third sensor data to produce third base call classification data for sensing cycles in the third subseries of sensing cycles.
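The per-subseries weight-set switching recited in claim 1 can be illustrated with a minimal Python sketch. The cycle count, subseries boundaries, and helper names below are hypothetical assumptions for illustration only; the claim does not prescribe them.

```python
# Hypothetical sketch of cycle-based weight-set selection (claim 1):
# the run's series of sensing cycles is divided into three subseries,
# and a different weight set configures the same network topology for
# each subseries. All constants and names here are illustrative.

NUM_CYCLES = 150
# Boundaries dividing the series into first, second, third subseries.
SUBSERIES_BOUNDARIES = [0, 50, 100, NUM_CYCLES]

def subseries_index(cycle: int) -> int:
    """Return which subseries (0, 1, or 2) a sensing cycle falls in."""
    for i in range(len(SUBSERIES_BOUNDARIES) - 1):
        if SUBSERIES_BOUNDARIES[i] <= cycle < SUBSERIES_BOUNDARIES[i + 1]:
            return i
    raise ValueError(f"cycle {cycle} is outside the series of sensing cycles")

def base_call(cycle, sensor_data, weight_sets, configure, infer):
    """Load the subseries' weight set onto the topology, then infer."""
    configure(weight_sets[subseries_index(cycle)])
    return infer(sensor_data)
```

In this sketch, `configure` stands in for the data flow logic that loads a weight set onto the processing elements, and `infer` for the configured network's forward pass.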
2. The system of claim 1, wherein the memory further stores: fourth, fifth, and subsequent weight sets for configuring the topology to execute a base calling operation, the fourth, fifth, and subsequent weight sets respectively corresponding to fourth, fifth, and subsequent subseries of sensing cycles in the series of sensing cycles; and fourth, fifth, and subsequent sensor data for the fourth, fifth, and subsequent subseries of sensing cycles.
3. The system of claim 2, wherein the configurable processor is configured with data flow logic to: load the fourth sensor data on the processing elements, load the fourth weight set on the processing elements to configure the topology with weights in the fourth weight set, and cause the neural network to apply the weights in the fourth weight set on the fourth sensor data to produce fourth base call classification data for sensing cycles in the fourth subseries of sensing cycles; load the fifth sensor data on the processing elements, load the fifth weight set on the processing elements to configure the topology with weights in the fifth weight set, and cause the neural network to apply the weights in the fifth weight set on the fifth sensor data to produce fifth base call classification data for sensing cycles in the fifth subseries of sensing cycles; and load the subsequent sensor data and the subsequent weight set on the processing elements to configure the topology with weights in the subsequent weight set, and cause the neural network to apply the weights in the subsequent weight set on the subsequent sensor data to produce subsequent base call classification data for sensing cycles in the subsequent subseries of sensing cycles.
4. The system of any of claims 1-3, wherein the topology takes, as input, sensor data from successive sensing cycles, and the topology includes spatial layers that do not combine the sensor data and resulting feature maps between the successive sensing cycles, and temporal layers that combine resulting feature maps between the successive sensing cycles.
5. The system of any of claims 1-4, wherein the first weight set includes first spatial weights for the spatial layers and first temporal weights for the temporal layers, the second weight set includes second spatial weights for the spatial layers and second temporal weights for the temporal layers, and the third weight set includes third spatial weights for the spatial layers and third temporal weights for the temporal layers.
6. The system of any of claims 1-5, wherein the first weight set includes spatial weights for the spatial layers and first temporal weights for the temporal layers, the second weight set includes second temporal weights for the temporal layers, and the third weight set includes third temporal weights for the temporal layers, and wherein the configurable processor is configured with data flow logic to: load the first sensor data on the processing elements, load the spatial weights and the first temporal weights on the processing elements to configure the spatial layers with the spatial weights and the temporal layers with the first temporal weights, and cause the neural network to apply the configured spatial and temporal layers on the first sensor data to produce first base call classification data for sensing cycles in the first subseries of sensing cycles; load the second sensor data on the processing elements, load the second temporal weights on the processing elements to reconfigure the temporal layers with the second temporal weights, without reconfiguring the spatial layers, and cause the neural network to apply the reconfigured temporal layers and the previously configured spatial layers on the second sensor data to produce second base call classification data for sensing cycles in the second subseries of sensing cycles; and load the third sensor data on the processing elements, load the third temporal weights on the processing elements to reconfigure the temporal layers with the third temporal weights, without reconfiguring the spatial layers, and cause the neural network to apply the reconfigured temporal layers and the previously configured spatial layers on the third sensor data to produce third base call classification data for sensing cycles in the third subseries of sensing cycles.
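The partial reconfiguration described in claim 6 (spatial weights loaded once, temporal weights swapped per subseries) can be sketched as follows. The class, its attributes, and the toy element-wise "forward pass" are illustrative assumptions, not the patent's implementation.

```python
# Sketch of claim 6's partial reconfiguration: spatial-layer weights
# are loaded once and shared across subseries, while only the
# temporal-layer weights are swapped between subseries.

class SplitWeightNetwork:
    def __init__(self, spatial_weights):
        self.spatial_weights = spatial_weights  # loaded once, never swapped
        self.temporal_weights = None

    def reconfigure_temporal(self, temporal_weights):
        # Only the temporal layers are reconfigured; the spatial layers
        # keep the weights loaded at construction time.
        self.temporal_weights = temporal_weights

    def __call__(self, sensor_data):
        # Toy forward pass: scale by the spatial weight, then by the
        # currently loaded temporal weight.
        spatial_out = [x * self.spatial_weights for x in sensor_data]
        return [x * self.temporal_weights for x in spatial_out]

net = SplitWeightNetwork(spatial_weights=2.0)
net.reconfigure_temporal(0.5)   # first subseries: load first temporal weights
first_calls = net([1.0, 2.0])
net.reconfigure_temporal(1.5)   # second subseries: spatial layers untouched
second_calls = net([3.0])
```

Swapping only the temporal weights reduces the data moved onto the processing elements at each subseries boundary, which is the practical payoff of this claim's split.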
7. The system of any of claims 1-6, wherein weights in the first, second, and third weight sets are quantized using different scaling factors.
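Claim 7's per-weight-set scaling factors can be illustrated with a minimal int8 quantization sketch. The scale-selection rule and the example weight values are assumptions for illustration; the claim only requires that the sets use different scaling factors.

```python
# Sketch of per-weight-set quantization with different scaling factors
# (claim 7): each weight set is quantized to int8 with its own scale,
# chosen here from that set's dynamic range. Purely illustrative.

def scale_for(weights):
    """Pick a scale so the largest-magnitude weight maps near int8 max."""
    return max(abs(w) for w in weights) / 127.0

def quantize(weights, scale):
    """Quantize floats to int8 range using the set's scaling factor."""
    return [max(-128, min(127, round(w / scale))) for w in weights]

def dequantize(q_weights, scale):
    return [q * scale for q in q_weights]

first_set = [0.02, -0.05, 0.01]   # small dynamic range -> small scale
second_set = [1.2, -3.0, 0.7]     # larger range -> larger scale
s1, s2 = scale_for(first_set), scale_for(second_set)
q1, q2 = quantize(first_set, s1), quantize(second_set, s2)
```

Because each set gets a scale matched to its own range, small-magnitude sets are not crushed into a handful of integer levels by a scale chosen for a larger set.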
8. The system of any of claims 1-7, wherein weights in the first, second, and third weight sets respectively correspond to first, second, and third sequencing chemistries.
9. The system of any of claims 1-8, wherein weights in the first, second, and third weight sets respectively correspond to first, second, and third sequencing assays.
10. The system of any of claims 1-9, wherein weights in the first, second, and third weight sets respectively correspond to first, second, and third sequencing configurations.
11. A computer-implemented method for generating base call classification data, comprising: loading a topology of a neural network on processing elements of a processor, the processor to execute base call operations; storing (i) first sensor data from clusters corresponding to first one or more tile locations of a flow cell, (ii) second sensor data from clusters corresponding to second one or more tile locations of the flow cell, (iii) a first weight set comprising first one or more weights, and (iv) a second weight set comprising second one or more weights, wherein the first sensor data and the second sensor data are generated during a subset of sensing cycles divided from a series of sensing cycles, and wherein the first weight set and the second weight set are determined using respective training data corresponding to the first one or more tile locations and the second one or more tile locations of the flow cell; configuring the topology of the neural network with the first weight set, and causing the neural network configured with the first weight set to process the first sensor data and to produce first base call classification data for the first one or more tiles and for the subset of sensing cycles; and configuring the topology of the neural network with the second weight set, and causing the neural network configured with the second weight set to process the second sensor data and to produce second base call classification data for the second one or more tiles and for the subset of sensing cycles.
12. The method of claim 11, wherein the subset of sensing cycles is a first subset of sensing cycles, and wherein the method further comprises: storing (i) third sensor data from clusters within the first one or more tiles, (ii) fourth sensor data from clusters within the second one or more tiles, (iii) a third weight set, and (iv) a fourth weight set, wherein the third sensor data and the fourth sensor data are generated during a second subset of sensing cycles in the series of sensing cycles, the second subset of sensing cycles subsequent to the first subset of sensing cycles in the series of sensing cycles; configuring the topology of the neural network with the third weight set, and causing the neural network configured with the third weight set to process the third sensor data and to produce third base call classification data for the first one or more tiles and for the second subset of sensing cycles; and configuring the topology of the neural network with the fourth weight set, and causing the neural network configured with the fourth weight set to process the fourth sensor data and to produce fourth base call classification data for the second one or more tiles and for the second subset of sensing cycles.
13. The method of claim 11 or 12, wherein: the first one or more tiles are within a first area of the flow cell; and the second one or more tiles are within a second area of the flow cell.
14. The method of any of claims 11-13, wherein: the first one or more tiles are edge tiles of the flow cell; and the second one or more tiles are non-edge tiles of the flow cell.
15. The method of any of claims 11-14, further comprising: generating the first weight set by training the neural network on sensor data generated solely from edge tiles; and generating the second weight set by training the neural network on sensor data generated solely from non-edge tiles.
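The tile-location-based selection in claims 11-15 can be sketched as a lookup from a tile's position on the flow cell to a weight set, here using the edge versus non-edge distinction of claims 14-15. The tile-grid dimensions and helper names are hypothetical assumptions.

```python
# Sketch of tile-location-based weight-set selection (claims 11-15):
# clusters on edge tiles of the flow cell are base-called with a
# weight set trained on edge-tile data, interior tiles with another.

TILE_COLS, TILE_ROWS = 6, 4   # hypothetical flow-cell tile grid

def is_edge_tile(col: int, row: int) -> bool:
    """A tile is an edge tile if it touches the flow-cell boundary."""
    return col in (0, TILE_COLS - 1) or row in (0, TILE_ROWS - 1)

def select_weight_set(col, row, edge_weights, interior_weights):
    """Pick the weight set matching the tile's location on the flow cell."""
    return edge_weights if is_edge_tile(col, row) else interior_weights
```

The rationale suggested by claims 14-15 is that optical and chemical conditions differ at the flow-cell boundary, so a weight set trained solely on edge-tile data can fit those tiles better than one trained on interior tiles.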
16. A non-transitory computer readable storage medium storing instructions that, when executed by at least one processor, cause a system to: load a topology of a neural network on processing elements of a processor, the processor to execute base call operations; store (i) first sensor data from clusters corresponding to first one or more tile locations of a flow cell, (ii) second sensor data from clusters corresponding to second one or more tile locations of the flow cell, (iii) a first weight set comprising first one or more weights, and (iv) a second weight set comprising second one or more weights, wherein the first sensor data and the second sensor data are generated during a subset of sensing cycles divided from a series of sensing cycles, and wherein the first weight set and the second weight set are determined using respective training data corresponding to the first one or more tile locations and the second one or more tile locations of the flow cell; configure the topology of the neural network with the first weight set, and cause the neural network configured with the first weight set to process the first sensor data and to produce first base call classification data for the first one or more tiles and for the subset of sensing cycles; and configure the topology of the neural network with the second weight set, and cause the neural network configured with the second weight set to process the second sensor data and to produce second base call classification data for the second one or more tiles and for the subset of sensing cycles.
17. The non-transitory computer readable storage medium recited in claim 16, wherein the subset of sensing cycles is a first subset of sensing cycles, and further comprising instructions that, when executed by the at least one processor, cause the system to: store (i) third sensor data from clusters within the first one or more tiles, (ii) fourth sensor data from clusters within the second one or more tiles, (iii) a third weight set, and (iv) a fourth weight set, wherein the third sensor data and the fourth sensor data are generated during a second subset of sensing cycles in the series of sensing cycles, the second subset of sensing cycles subsequent to the first subset of sensing cycles in the series of sensing cycles; configure the topology of the neural network with the third weight set, and cause the neural network configured with the third weight set to process the third sensor data and to produce third base call classification data for the first one or more tiles and for the second subset of sensing cycles; and configure the topology of the neural network with the fourth weight set, and cause the neural network configured with the fourth weight set to process the fourth sensor data and to produce fourth base call classification data for the second one or more tiles and for the second subset of sensing cycles.
18. The non-transitory computer readable storage medium recited in claim 16 or 17, wherein: the first one or more tiles are within a first area of the flow cell; and the second one or more tiles are within a second area of the flow cell.
19. The non-transitory computer readable storage medium recited in any of claims 16-18, wherein: the first one or more tiles are edge tiles of the flow cell; and the second one or more tiles are non-edge tiles of the flow cell.
20. The non-transitory computer readable storage medium recited in any of claims 16-19, further comprising instructions that, when executed by the at least one processor, cause the system to: generate the first weight set by training the neural network on sensor data generated solely from edge tiles; and generate the second weight set by training the neural network on sensor data generated solely from non-edge tiles.
Applications Claiming Priority (5)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US202163161896P | 2021-03-16 | 2021-03-16 | |
US202163161880P | 2021-03-16 | 2021-03-16 | |
US17/687,583 US20220300811A1 (en) | 2021-03-16 | 2022-03-04 | Neural network parameter quantization for base calling |
US17/687,551 US20220301657A1 (en) | 2021-03-16 | 2022-03-04 | Tile location and/or cycle based weight set selection for base calling |
PCT/US2022/020460 WO2022197752A1 (en) | 2021-03-16 | 2022-03-15 | Tile location and/or cycle based weight set selection for base calling |
Publications (1)
Publication Number | Publication Date |
---|---|
IL299077A (en) | 2023-02-01 |
Family
ID=85057463
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
IL299077A (en) | Tile location and/or cycle based weight set selection for base calling | 2021-03-16 | 2022-03-15 |
Country Status (7)
Country | Link |
---|---|
EP (2) | EP4309080A1 (en) |
JP (1) | JP2024510539A (en) |
KR (1) | KR20230157230A (en) |
CN (2) | CN115699019A (en) |
AU (2) | AU2022238841A1 (en) |
CA (2) | CA3183567A1 (en) |
IL (1) | IL299077A (en) |
-
2022
- 2022-03-15 CN CN202280005057.3A patent/CN115699019A/en active Pending
- 2022-03-15 EP EP22714690.9A patent/EP4309080A1/en active Pending
- 2022-03-15 AU AU2022238841A patent/AU2022238841A1/en active Pending
- 2022-03-15 JP JP2022580969A patent/JP2024510539A/en active Pending
- 2022-03-15 KR KR1020227045560A patent/KR20230157230A/en unknown
- 2022-03-15 AU AU2022237501A patent/AU2022237501A1/en active Pending
- 2022-03-15 CN CN202280005111.4A patent/CN115803815A/en active Pending
- 2022-03-15 CA CA3183567A patent/CA3183567A1/en active Pending
- 2022-03-15 EP EP22714689.1A patent/EP4309179A1/en active Pending
- 2022-03-15 CA CA3183581A patent/CA3183581A1/en active Pending
- 2022-03-15 IL IL299077A patent/IL299077A/en unknown
Also Published As
Publication number | Publication date |
---|---|
JP2024510539A (en) | 2024-03-08 |
KR20230157230A (en) | 2023-11-16 |
CN115699019A (en) | 2023-02-03 |
AU2022237501A1 (en) | 2023-02-02 |
EP4309080A1 (en) | 2024-01-24 |
AU2022238841A1 (en) | 2023-02-02 |
CA3183581A1 (en) | 2022-09-22 |
CN115803815A (en) | 2023-03-14 |
CA3183567A1 (en) | 2022-09-22 |
EP4309179A1 (en) | 2024-01-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220083480A1 (en) | Exploiting input data sparsity in neural network compute units | |
US11769042B2 (en) | Reconfigurable systolic neural network engine | |
US11157794B2 (en) | Scheduling neural network processing | |
US11934669B2 (en) | Scaling out architecture for DRAM-based processing unit (DPU) | |
CN106847335B (en) | Convolutional calculation storage integration apparatus and method based on resistance-change memory array | |
CN110097174A (en) | Preferential convolutional neural networks implementation method, system and device are exported based on FPGA and row | |
CN110674936A (en) | Neural network processing method and device, computer equipment and storage medium | |
CN109102065A (en) | A kind of convolutional neural networks accelerator based on PSoC | |
CN109165728B (en) | Basic computing unit and computing method of convolutional neural network | |
CN103559093B (en) | The collocation method of a kind of server resource and device | |
KR20200090089A (en) | Method of enabling sparse neural networks on memresistive accelerators | |
US20170083469A1 (en) | Inter-Cluster Data Communication Network for a Dynamic Shared Communication Platform | |
WO2023184835A1 (en) | Three-class vertex degree aware-based 1.5-dimensional graph division method and application | |
CN108021441A (en) | A kind of resources of virtual machine collocation method and device based on cloud computing | |
IL299077A (en) | Tile location and/or cycle based weight set selection for base calling | |
CN112799598B (en) | Data processing method, processor and electronic equipment | |
Wang et al. | SPCIM: Sparsity-Balanced Practical CIM Accelerator With Optimized Spatial-Temporal Multi-Macro Utilization | |
CN110837419B (en) | Reasoning engine system and method based on elastic batch processing and electronic equipment | |
CN111831356A (en) | Weight precision configuration method, device, equipment and storage medium | |
Wei et al. | Reconfigurability, Why It Matters in AI Tasks Processing: A Survey of Reconfigurable AI Chips | |
CN110415162B (en) | Adaptive graph partitioning method facing heterogeneous fusion processor in big data | |
Wang et al. | Reboc: Accelerating block-circulant neural networks in reram | |
Heiss et al. | Partitioning and mapping of parallel programs by selfâorganization | |
Li et al. | Reducing fragmentation on 3d torus-based hpc systems using packing-based job scheduling and job placement reconfiguration | |
CN113095476A (en) | Hardware acceleration device and method for universal tensor calculation |