CN117688971A - Neural network hint modulation - Google Patents
- Publication number
- CN117688971A (application CN202311169931.XA)
- Authority
- CN
- China
- Prior art keywords
- processor
- neural networks
- neural network
- vehicle
- data
- Prior art date
- Legal status
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/0409—Adaptive resonance theory [ART] networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
Abstract
Neural network hint adjustment is disclosed, and in particular apparatuses, systems, and techniques for executing a neural network. In at least one embodiment, a most consistent output of one or more pre-trained neural networks is selected. In at least one embodiment, the most consistent output of the one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks.
Description
Cross Reference to Related Applications
The present application claims the benefit of U.S. Provisional Patent Application No. 63/405,355, entitled "VISION-LANGUAGE MODELLING TECHNIQUES," filed 9/2022, the entire contents of which are incorporated herein by reference.
Technical Field
At least one embodiment relates to processing resources for learning adaptive cues for a neural network using a single test image. For example, in accordance with the techniques described herein, at least one embodiment relates to hint adjustment using a single test image.
Background
Text and image processing using neural networks can consume a significant amount of memory, time, or computational resources, and the amount of memory, time, or computing resources used can be improved. For example, training a neural network to recognize images and/or text may use a large amount of information and processing, and processing and storing that information may use additional memory, time, and/or other computing resources. Updating a neural network after training may use even more memory, time, and/or other computing resources, and may interrupt operation of the trained neural network. Such updating of trained neural networks can therefore be challenging.
Drawings
FIG. 1 is a block diagram illustrating a computing system in which neural network cues are modulated in accordance with at least one embodiment;
FIG. 2 is a block diagram illustrating a neural network hint adjustment method that learns adaptive hints on the fly using a single test sample, in accordance with at least one embodiment;
FIG. 3 is a flow chart illustrating a process of using a hint adjustment method that learns adaptive hints on the fly using a single test sample in accordance with at least one embodiment;
FIG. 4 is a block diagram illustrating a computing system in which a neural network hint adjustment method that learns adaptive hints on the fly using a single test sample is performed in accordance with at least one embodiment;
FIG. 5 is a process illustrating the adjustment of neural network cues in accordance with at least one embodiment;
FIG. 6 is a block diagram illustrating a computing system in which hints used by a trained visual language model are adjusted in accordance with at least one embodiment;
FIG. 7 is a block diagram illustrating a processor and modules in accordance with at least one embodiment;
FIG. 8 is a block diagram illustrating a driver and/or runtime including one or more libraries for providing one or more Application Programming Interfaces (APIs) in accordance with at least one embodiment;
FIG. 9A illustrates logic in accordance with at least one embodiment;
FIG. 9B illustrates logic in accordance with at least one embodiment;
FIG. 10 illustrates training and deployment of a neural network in accordance with at least one embodiment;
FIG. 11 illustrates an example data center system in accordance with at least one embodiment;
FIG. 12A illustrates an example of an autonomous vehicle in accordance with at least one embodiment;
FIG. 12B illustrates an example of camera position and field of view of the autonomous vehicle of FIG. 12A in accordance with at least one embodiment;
FIG. 12C is a block diagram illustrating an example system architecture of the autonomous vehicle of FIG. 12A in accordance with at least one embodiment;
FIG. 12D is a diagram illustrating a system for communication between one or more cloud-based servers and the autonomous vehicle of FIG. 12A in accordance with at least one embodiment;
FIG. 13 is a block diagram illustrating a computer system in accordance with at least one embodiment;
FIG. 14 is a block diagram illustrating a computer system in accordance with at least one embodiment;
FIG. 15 illustrates a computer system in accordance with at least one embodiment;
FIG. 16 illustrates a computer system in accordance with at least one embodiment;
FIG. 17A illustrates a computer system in accordance with at least one embodiment;
FIG. 17B illustrates a computer system in accordance with at least one embodiment;
FIG. 17C illustrates a computer system in accordance with at least one embodiment;
FIG. 17D illustrates a computer system in accordance with at least one embodiment;
FIGS. 17E and 17F illustrate a shared programming model in accordance with at least one embodiment;
FIG. 18 illustrates an exemplary integrated circuit and associated graphics processor in accordance with at least one embodiment;
FIGS. 19A and 19B illustrate an exemplary integrated circuit and associated graphics processor in accordance with at least one embodiment;
FIGS. 20A and 20B illustrate additional example graphics processor logic in accordance with at least one embodiment;
FIG. 21 illustrates a computer system in accordance with at least one embodiment;
FIG. 22A illustrates a parallel processor in accordance with at least one embodiment;
FIG. 22B illustrates a partition unit in accordance with at least one embodiment;
FIG. 22C illustrates a processing cluster in accordance with at least one embodiment;
FIG. 22D illustrates a graphics multiprocessor in accordance with at least one embodiment;
FIG. 23 illustrates a multiple Graphics Processing Unit (GPU) system in accordance with at least one embodiment;
FIG. 24 illustrates a graphics processor in accordance with at least one embodiment;
FIG. 25 is a block diagram illustrating a processor microarchitecture for a processor in accordance with at least one embodiment;
FIG. 26 illustrates a deep learning application processor in accordance with at least one embodiment;
FIG. 27 is a block diagram illustrating an example neuromorphic processor, in accordance with at least one embodiment;
FIG. 28 illustrates at least a portion of a graphics processor in accordance with at least one embodiment;
FIG. 29 illustrates at least a portion of a graphics processor in accordance with at least one embodiment;
FIG. 30 illustrates at least a portion of a graphics processor in accordance with at least one embodiment;
FIG. 31 is a block diagram of a graphics processing engine of a graphics processor in accordance with at least one embodiment;
FIG. 32 is a block diagram of at least a portion of a graphics processor core in accordance with at least one embodiment;
FIGS. 33A and 33B illustrate thread execution logic including an array of processing elements of a graphics processor core in accordance with at least one embodiment;
FIG. 34 illustrates a parallel processing unit ("PPU") in accordance with at least one embodiment;
FIG. 35 illustrates a general processing cluster ("GPC") in accordance with at least one embodiment;
FIG. 36 illustrates a memory partition unit of a parallel processing unit ("PPU") in accordance with at least one embodiment;
FIG. 37 illustrates a streaming multiprocessor in accordance with at least one embodiment;
FIG. 38 is an example data flow diagram of a high-level computational pipeline in accordance with at least one embodiment;
FIG. 39 is a system diagram of an example system for training, adapting, instantiating, and deploying a machine learning model in a high-level computing pipeline in accordance with at least one embodiment;
FIG. 40 includes an example illustration of a high-level computational pipeline for processing imaging data in accordance with at least one embodiment;
FIG. 41A includes an example data flow diagram of a virtual instrument supporting an ultrasound device in accordance with at least one embodiment;
FIG. 41B includes an example data flow diagram of a virtual instrument supporting a CT scanner in accordance with at least one embodiment;
FIG. 42A illustrates a data flow diagram of a process for training a machine learning model in accordance with at least one embodiment;
FIG. 42B is an example illustration of a client-server architecture utilizing a pre-trained annotation model to enhance annotation tools, according to at least one embodiment; and
FIG. 43 illustrates various components of a system for accessing a large language model in accordance with at least one embodiment.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a more thorough understanding of at least one embodiment. It will be apparent, however, to one skilled in the art that the present inventive concept may be practiced without one or more of these specific details.
In at least one embodiment, an apparatus, system, and/or processor comprising one or more circuits uses a neural network to adjust cues provided as input to the neural network, as described herein. In at least one embodiment, these cues are referred to as neural network cues. In at least one embodiment, an apparatus, system, and/or processor comprising one or more circuits identifies elements of an image using the adjusted neural network cues. In at least one embodiment, the neural network infers how to identify elements of an image based on input data including the image and the cues. In at least one embodiment, one or more cues of the neural network are adjusted during inference. In at least one embodiment, the adjusting includes updating, modifying, or otherwise changing one or more of the one or more cues. In at least one embodiment, the neural network processes the images and/or text during inference to adjust the cues and thereby improve recognition of elements of the images.
FIG. 1 is a block diagram illustrating a computing system in which neural network cues are modulated in accordance with at least one embodiment. In at least one embodiment, the processor 102 of the computing system shown in block 100 receives the image 104. In at least one embodiment, the processor 102 of the computing system shown in block 100 receives the hint 108. In at least one embodiment, the processor 102 of the computing system shown in block 100 receives a pre-trained visual language model 106.
In at least one embodiment, the processor 102 is a host processor. In at least one embodiment, the host code is code executed by a host processor, where host refers to a CPU and its memory, and the device code is code executed by a second processor (not shown in FIG. 1), where device refers to the GPU and its memory. In at least one embodiment, the processor 102 is a Central Processing Unit (CPU). In at least one embodiment, the second processor (not shown in FIG. 1) is a device processor. In at least one embodiment, the second processor is a GPU, a parallel processing unit, an FPGA, an ASIC, and/or other processor that may accelerate the execution of computations or operations. In at least one embodiment, the second processor includes a plurality of GPUs, such as GPUs 1710(1)-1710(N), and is communicatively coupled to a plurality of multi-core processors 1705(1)-1705(M) through high-speed links 1740(1)-1740(N), all as described herein at least in connection with FIGS. 17A-17F. In at least one embodiment, the processor 102 and one or more additional processors form a system on a chip (SoC) that includes one or more circuits for adjusting neural network cues by executing software disclosed in connection with the computing system shown in FIG. 1.
In at least one embodiment, the image 104 is sent, indicated, or otherwise provided to the processor 102 using systems, methods, operations, and/or techniques such as those described herein. In at least one embodiment, not shown in fig. 1, the image 104 is sent, indicated, or otherwise provided to the processor 102 using one or more APIs (such as those described herein in connection with at least fig. 8). In at least one embodiment, the image 104 is an image that includes one or more categories of objects (e.g., mug, plate, spoon, dog, person, car, etc.). In at least one embodiment, the image 104 is a test image, such as the test image 202 described herein in connection with at least fig. 2. In at least one embodiment, the image 104 is a single image (e.g., as described herein). In at least one embodiment, the image 104 is referred to as a "test sample" or "single test sample" (e.g., as described herein at least in connection with fig. 3).
In at least one embodiment, the cues 108 are sent, indicated, or otherwise provided to the processor 102 using systems, methods, operations, and/or techniques such as those described herein. In at least one embodiment, not shown in FIG. 1, the hint 108 is sent, indicated, or otherwise provided to the processor 102 using one or more APIs, such as those described herein at least in connection with FIG. 8. In at least one embodiment, the hint 108 is a hint of one or more tags that include one or more elements of the image 104. In at least one embodiment, the prompt 108 is a prompt such as the prompt 204 described herein in connection with at least FIG. 2. In at least one embodiment, the cues 108 are adjusted cues (e.g., cues that are adjusted using one or more systems, methods, operations, techniques, or procedures such as those described herein). In at least one embodiment, the cues 108 are adjusted using one or more steps of at least the process 300 described herein in connection with FIG. 3. In at least one embodiment, the cue 108 is an unadjusted cue (e.g., a cue that has not been adjusted as described herein). In at least one embodiment, the cues 108 are referred to as "text cues", "adjusted cues", or "unadjusted cues" (e.g., as described herein in connection with at least fig. 2 and 3).
In at least one embodiment, the pre-trained visual language model 106 is sent, indicated, or otherwise provided to the processor 102 using systems, methods, operations, and/or techniques such as those described herein. In at least one embodiment, not shown in FIG. 1, the pre-trained visual language model 106 is sent, indicated, or otherwise provided to the processor 102 using one or more APIs, such as those described herein in connection with at least FIG. 8. In at least one embodiment, the pre-trained visual language model 106 is a neural network model (e.g., a learning model), such as those described herein in connection with at least fig. 2-7. In at least one embodiment, the pre-trained visual language model 106 is referred to as a "visual-language model," "trained neural network," or "visual language model" (e.g., as described herein).
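As an illustration of how such a frozen pre-trained visual language model might be obtained in practice, the following sketch assumes the open-source OpenAI CLIP package; the model name, device, and the step of freezing every parameter are illustrative assumptions rather than requirements of the embodiments described herein.

```python
import clip   # assumes the open-source OpenAI "clip" package is installed
import torch

def load_frozen_clip(name="ViT-B/32", device="cpu"):
    """Load a pre-trained CLIP model and freeze all of its parameters.

    Only a prompt will later be optimized at test time; the encoders stay fixed,
    mirroring the static pre-trained visual language model described above.
    """
    model, preprocess = clip.load(name, device=device)
    model.eval()
    for param in model.parameters():
        param.requires_grad_(False)
    return model, preprocess
```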
In at least one embodiment, the processor 102 performs or otherwise implements one or more operations to enhance the image 110 (e.g., enhance the image 104) by generating one or more randomly enhanced views of the image 104, as described herein at least in connection with fig. 2 and 3. In at least one embodiment, for example, the processor 102 performs or otherwise implements one or more operations to enhance the image 110 by changing at least a portion of the image 104 (e.g., by cropping, zooming, scaling, translating, rotating, and/or otherwise adjusting the content of the image 104).
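A minimal sketch of generating randomly enhanced views of a single test image follows; it assumes torchvision transforms, and the particular crop/flip family, view count, and image size are illustrative choices rather than the specific enhancements required by the operations described above.

```python
import torch
from torchvision import transforms

def make_augmented_views(image, n_views=63, image_size=224):
    """Return a batch containing the original test image plus n_views random augmentations.

    `image` is a PIL image; random resized crops and horizontal flips stand in
    for whatever family of random enhancements an implementation might choose.
    """
    base = transforms.Compose([
        transforms.Resize(image_size),
        transforms.CenterCrop(image_size),
        transforms.ToTensor(),
    ])
    augment = transforms.Compose([
        transforms.RandomResizedCrop(image_size, scale=(0.5, 1.0)),
        transforms.RandomHorizontalFlip(),
        transforms.ToTensor(),
    ])
    views = [base(image)] + [augment(image) for _ in range(n_views)]
    return torch.stack(views)  # shape: (n_views + 1, 3, image_size, image_size)
```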
In at least one embodiment, processor 102 uses the results of enhanced image 110 and/or image 104 to encode text and image 112 as described herein in connection with at least fig. 2 and 3. In at least one embodiment, the processor 102 encodes text and images 112 using the cues 108, as described herein in connection with at least fig. 2 and 3. In at least one embodiment, not shown in FIG. 1, processor 102 processes hint 108 (e.g., subdivides hint 108 into smaller text elements) before encoding text and image 112 using hint 108.
In at least one embodiment, processor 102 filters noisy input 114 using results of encoding text and image 112, as described herein with respect to at least fig. 2 and 3, and updates prompt 116 using results of filtering noisy input 114, as also described herein with respect to at least fig. 2 and 3. In at least one embodiment, the hint 108 (e.g., the result of updating the hint 116) is then sent, indicated, or otherwise provided to the processor 102 using systems, methods, operations, and/or techniques such as those described herein.
In at least one embodiment, the one or more processors (e.g., processor 102 and/or other processors such as those described herein) include one or more circuits for performing the operations or instructions described herein, such as one or more circuits for causing a most consistent output of one or more pre-trained neural networks to be selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, a variation includes a change, modification, or other alteration of an input. In at least one embodiment, the variations relate to neural network cues. In at least one embodiment, the variations are made so that the output of the pre-trained neural network is more consistent. For example, in at least one embodiment, the variations are made so that inferences about images made by the pre-trained neural network are more consistent. In at least one embodiment, the one or more processors include one or more circuits for performing the operations or instructions described herein, such as one or more circuits for causing the one or more neural networks to select one or more variations of the characteristics of one or more text prompts based at least in part on the performance of the one or more neural networks using one or more variations of one or more input images. In at least one embodiment, not shown in FIG. 1, a non-transitory machine-readable medium has stored thereon a set of instructions that, if executed by one or more processors, perform the operations described herein at least in connection with FIGS. 1-8, such as operations for causing a most consistent output of one or more pre-trained neural networks to be selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks.
FIG. 2 is a block diagram 200 illustrating a neural network hint adjustment method that learns adaptive hints on the fly using a single test sample, in accordance with at least one embodiment. In at least one embodiment, block diagram 200 illustrates a neural network hint adjustment method that is performed using hardware and/or a set of software computing resources having instructions that, when executed, perform one or more neural network processes such as those described herein. In at least one embodiment, block diagram 200 illustrates a neural network hint adjustment method performed using one or more software programs executing on computer hardware (e.g., processor 102 described herein with respect to at least FIG. 1), one or more applications executing on computer hardware (e.g., processor 102 described herein with respect to at least FIG. 1), and/or variations thereof. In at least one embodiment, one or more of the processes shown in block diagram 200 are performed by any suitable processing system or unit (e.g., a graphics processing unit (GPU), general-purpose GPU (GPGPU), parallel processing unit (PPU), or central processing unit (CPU), such as those described herein) in any suitable manner, including sequentially, in parallel, and/or variations thereof. In at least one embodiment, block diagram 200 illustrates a neural network hint adjustment method that is performed using a machine learning training framework (such as PyTorch, TensorFlow, Boost, Caffe, Microsoft Cognitive Toolkit/CNTK, MXNet, Chainer, Keras, Deeplearning4j, and/or other training frameworks) to implement and perform the operations described herein such that adaptive hints are learned on the fly using a single test sample.
In at least one embodiment, the components shown in block diagram 200 may include a processor such as those described herein in connection with fig. 1-43. In at least one embodiment, the components shown in block diagram 200 refer to any combination of software logic, firmware logic, hardware logic, and/or circuitry configured to provide the functionality described herein. In at least one embodiment, the software is embodied as a software package, code, and/or instruction set or instructions, and the hardware used in any implementation as described herein includes, for example, hardwired circuitry, programmable circuitry, state machine circuitry, fixed-function circuitry, execution unit circuitry, and/or firmware that stores instructions executed by the programmable circuitry, alone or in any combination. In at least one embodiment, a module (e.g., as described herein at least in connection with fig. 7) may be embodied collectively or separately as a circuit forming part of a larger system (e.g., an integrated circuit ("IC"), a system on a chip ("SoC"), etc.). In at least one embodiment, the components shown in block diagram 200 are implemented via dedicated hardware (such as fixed function circuitry, etc.). In at least one embodiment, the fixed function circuitry comprises a set of fixed function entry points or circuits that provide dedicated logic mapped to fixed purposes or functions. In at least one embodiment, the following description sets forth numerous specific details, such as logic implementations, types and interrelationships of system components, logic partitioning/integration choices, and the like, including but not limited to those described herein. In at least one embodiment, the components shown in block diagram 200 use systems, methods, operations, and techniques such as those described herein to classify images. In at least one embodiment, the components shown in block 200 classify at least a portion of an image. In at least one embodiment, the components shown in block 200 are not limited to image classification and may be applied to other different downstream tasks.
In at least one embodiment, the components shown in block diagram 200 are collectively referred to as a test-time prompt tuning (TPT) system. In at least one embodiment, the components shown in block diagram 200 include a test image 202, a hint 204, one or more categories 208, one or more enhanced test images 206 (also referred to herein as "enhanced views"), an image encoder 210, a text encoder 212, a confidence selection 214, an average value 216, and a minimization 218 (also referred to herein as "minH"). In at least one embodiment, the components shown in block diagram 200 use a test image 202, as described herein in connection with at least FIG. 2, to adjust cues 204 on the fly. In at least one embodiment, the adjusted hints (e.g., adjusted hints 204 as described herein) are adapted to the task so that they can be used for zero-shot generalization without requiring task-specific training data or annotations. In at least one embodiment, zero-shot generalization is a type of neural network processing whereby a pre-trained deep learning (DL) model is used to generate inference results based at least in part on the class or classification of samples (e.g., as described herein at least in connection with FIGS. 2 and 3). In at least one embodiment, the components shown in block diagram 200 preserve the zero-shot generalization setting because no additional training data or annotations are used to adjust cues 204.
In at least one embodiment, the components shown in block diagram 200 use pre-training techniques such as contrastive language-image pre-training (CLIP) and large-scale image and noisy-text embedding (ALIGN). In at least one embodiment, CLIP is a neural network trained on one or more (image, text) pairs. In at least one embodiment, natural language processing is used to instruct CLIP to predict the most relevant text segments given an image, without directly optimizing the processing to perform a particular task. In at least one embodiment, CLIP includes two parallel encoders, one encoder mapping a text input to a first feature vector and the other encoder mapping an image input to a second feature vector. In at least one embodiment, the CLIP model is trained with a contrastive loss that encourages similarity between the two feature vectors, such that text and images are aligned in feature space. In at least one embodiment, visual and language representations are trained jointly from noisy image alt-text data using ALIGN. In at least one embodiment, ALIGN learns to align the visual and language representations of image and text pairs.
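To make the dual-encoder arrangement concrete, the sketch below computes class logits from cosine similarity between image features and per-class text features; the feature shapes and the temperature value are assumptions, and any CLIP-style encoders could supply the inputs.

```python
import torch
import torch.nn.functional as F

def clip_logits(image_features, text_features, temperature=0.01):
    """Cosine-similarity logits between augmented image views and class text embeddings.

    image_features: (N_views, D) tensor from the image encoder.
    text_features:  (N_classes, D) tensor from the text encoder, one row per
                    "<prompt> + <class name>" input.
    """
    image_features = F.normalize(image_features, dim=-1)
    text_features = F.normalize(text_features, dim=-1)
    return image_features @ text_features.t() / temperature  # (N_views, N_classes)
```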
In at least one embodiment, the components shown in block diagram 200 utilize CLIP to enhance generalization in a zero-shot manner. In at least one embodiment, hint adjustment (e.g., as described herein) is used as a method of achieving this enhancement. In at least one embodiment, in the inference phase (e.g., as described herein), the available information includes the test image 202 without label information. In at least one embodiment, the prompt 204 includes one or more text elements. In at least one embodiment, the components shown in block diagram 200 optimize hint 204 at test time (e.g., during inference) based at least in part on the test image 202. In at least one embodiment, the optimization of hint 204 is formulated as p* = arg min_p L(F, p, X_test), where F denotes the pre-trained model, p denotes the hint, X_test denotes the test image 202, and L denotes an unsupervised loss evaluated at test time. In at least one embodiment, because labels are not available at test time, the loss chosen for hint adjustment is not a supervised loss, as described herein. In at least one embodiment, TPT is designed to promote the consistency of model predictions across the different enhanced views of a given test image (e.g., test image 202). In at least one embodiment, the components shown in block diagram 200 use a family of random enhancements A to generate N randomly enhanced views of the test image, average the resulting predictive probability distributions to produce the average 216, and minimize the entropy of the averaged distribution, as described herein.
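The entropy-of-the-average objective described above can be written in a few lines; the following is a sketch of that unsupervised loss, with the small epsilon added only for numerical stability.

```python
import torch

def marginal_entropy(logits):
    """Entropy of the class distribution averaged over augmented views.

    logits: (N_views, N_classes). Minimizing the returned scalar pushes the
    views toward a single confident, consistent prediction.
    """
    probs = logits.softmax(dim=-1)      # per-view class probabilities
    avg_probs = probs.mean(dim=0)       # averaged predictive distribution (average 216)
    return -(avg_probs * torch.log(avg_probs + 1e-12)).sum()
```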
In at least one embodiment, the text encoder 212 and the image encoder 210 generate predictions whose confidence levels are used in the confidence selection 214. In at least one embodiment, to reduce noise from the random enhancements, confidence selection 214 is used to filter out views that generate high-entropy (i.e., low-confidence) predictions (e.g., the third prediction, marked with an "X" in confidence selection 214) and to retain views that generate low-entropy (i.e., high-confidence) predictions (e.g., the first two predictions, marked with check marks in confidence selection 214). In at least one embodiment, the filtered-out views of an image lack important information required to properly classify the image (e.g., a random enhancement of the image may remove important image content). In at least one embodiment, the components shown in block diagram 200 select confident samples having a prediction entropy below a threshold τ. In at least one embodiment, the components shown in block diagram 200 adapt τ for each test sample (e.g., test image 202) by taking the entropy value at the ρ-th percentile of the self-entropies of the N enhanced views, ranked from low to high entropy (i.e., from high to low confidence). In at least one embodiment, the high-confidence predictions retained by confidence selection 214 are averaged to produce the average 216. In at least one embodiment, the entropy of the averaged predictive probability distribution (e.g., the entropy of the average 216) is used to generate the minimization 218, as described herein.
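A sketch of the percentile-based confidence selection follows; the fraction of views kept corresponds to the ρ percentile mentioned above, and the particular value is an assumption.

```python
import torch

def confidence_select(logits, keep_fraction=0.1):
    """Keep only the lowest-entropy (highest-confidence) fraction of augmented views.

    logits: (N_views, N_classes). Returns the logits of the retained views.
    """
    probs = logits.softmax(dim=-1)
    entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=-1)  # self-entropy per view
    n_keep = max(1, int(keep_fraction * logits.shape[0]))
    keep_idx = entropy.argsort()[:n_keep]                      # lowest entropy first
    return logits[keep_idx]
```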
In at least one embodiment, CLIP is a vision-language foundation model. In at least one embodiment, the components shown in block diagram 200 perform hint adjustment by generating a plurality of randomly enhanced test images 206 given a single sample (e.g., test image 202) at test time, and optimize hint 204 such that the components shown in block diagram 200 produce consistent predictions across the different enhanced views. In at least one embodiment, producing consistent predictions across the different enhanced views is performed by minimizing the marginal entropy of the outputs over the enhanced test images 206 (e.g., the different enhanced views), as described herein. In at least one embodiment, the confidence selection 214 filters out noisy enhanced views, as described herein, since some enhancements may result in misleading model predictions. In at least one embodiment, enhanced views (e.g., enhanced test images 206) with high prediction entropy (e.g., low confidence) are discarded such that only high-confidence views are used in the consistency optimization.
In at least one embodiment, hint 204 is updated using back propagation 220 as a result of the consistency optimization. In at least one embodiment, the back propagation 220 is backpropagation such as that described herein in connection with at least FIGS. 9A, 9B, and 10.
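Since only the hint is learnable, the back propagation step touches a single tensor; below is a hedged sketch of one manual gradient step, where the learning rate and the plain gradient-descent update are assumptions rather than a prescribed optimizer.

```python
import torch

def backprop_update_prompt(prompt_embeddings, loss, lr=5e-3):
    """One backpropagation step that adjusts only the prompt embeddings.

    prompt_embeddings must be a leaf tensor created with requires_grad=True;
    the encoder weights stay frozen, so the gradient of `loss` reaches the
    prompt alone.
    """
    loss.backward()                                    # back propagation 220
    with torch.no_grad():
        prompt_embeddings -= lr * prompt_embeddings.grad
        prompt_embeddings.grad = None                  # clear for the next step
    return prompt_embeddings
```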
FIG. 3 illustrates a process 300 of a prompt adjustment method for learning adaptive prompts on the fly using a single test sample, in accordance with at least one embodiment. In at least one embodiment, some or all of process 300 (or any other process described herein, or variations and/or combinations thereof) is performed using components of one or more computer systems, such as those described in FIGS. 1-43 (e.g., processor 102 described herein in connection with at least FIG. 1). In at least one embodiment, some or all of process 300 is performed using components of one or more computer systems configured with computer-executable instructions implemented as code (e.g., computer-executable instructions, one or more computer programs, or one or more application programs) that is executed collectively on one or more processors by hardware, software, or a combination thereof. In at least one embodiment, the code is stored on a computer-readable storage medium in the form of a computer program comprising a plurality of computer-readable instructions executable by one or more processors. In at least one embodiment, the computer-readable storage medium is a non-transitory computer-readable medium. In at least one embodiment, at least some of the computer-readable instructions usable to perform process 300 are not stored using only transitory signals (e.g., propagated transient electrical or electromagnetic transmissions). In at least one embodiment, a non-transitory computer-readable medium does not necessarily include non-transitory data storage circuitry (e.g., buffers, caches, and queues) within transceivers of transitory signals.
In at least one embodiment, process 300 includes one or more processes for causing a neural network to learn adaptive cues on the fly using a single test sample. In at least one embodiment, process 300 is performed by one or more systems, such as those described in this disclosure. In at least one embodiment, process 300 is performed by a system such as those described in connection with FIG. 2. In at least one embodiment, one or more of the processes 300 are performed in any suitable order (including sequentially, in parallel, and/or variations thereof) and using any suitable processing units (e.g., CPU, GPGPU, GPU, PPU and/or variations thereof). In at least one embodiment, process 300 is performed concurrently on one or more neural networks.
In at least one embodiment, the system executing at least a portion of process 300 includes executable code for obtaining 302 at least a hint. In at least one embodiment, the cues described in FIG. 3 are cues such as those described in connection with FIG. 2. In at least one embodiment, the system executing at least a portion of process 300 includes executable code for retrieving 304 at least the category. In at least one embodiment, the categories described in FIG. 3 are categories such as those described in connection with FIG. 2.
In at least one embodiment, the system performing at least a portion of process 300 includes executable code for encoding 306 the text input (e.g., prompts and categories) via at least a text encoder. In at least one embodiment, the text inputs described in FIG. 3 are text inputs such as those described in connection with FIG. 2. In at least one embodiment, the system performing at least a portion of process 300 includes executable code for obtaining 308 at least a single test sample. In at least one embodiment, the single test sample depicted in FIG. 3 is a single test sample such as that described in connection with FIG. 2.
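Step 306 pairs the shared hint with every candidate category to form one text input per class; a minimal sketch follows, assuming a literal text prompt such as "a photo of a" (in practice the hint may instead be a learnable embedding prepended to the class tokens).

```python
def build_text_inputs(prompt, class_names):
    """Pair a shared prompt with every candidate category name.

    Each returned string is tokenized and passed through the text encoder to
    produce one text feature vector per class.
    """
    return [f"{prompt} {name}." for name in class_names]

# Example (hypothetical prompt and classes):
# build_text_inputs("a photo of a", ["dog", "mug", "spoon"])
# -> ["a photo of a dog.", "a photo of a mug.", "a photo of a spoon."]
```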
In at least one embodiment, the system executing at least a portion of process 300 includes executable code for generating 310 at least an enhanced view of a single test sample. In at least one embodiment, the enhanced view depicted in fig. 3 is an enhanced view such as that described in connection with fig. 2. In at least one embodiment, the system performing at least a portion of process 300 includes executable code for encoding 312 the image input (e.g., enhanced view and single test sample) at least via an image encoder. In at least one embodiment, the image inputs described in FIG. 3 are image inputs such as those described in connection with FIG. 2. In at least one embodiment, the system executing at least a portion of process 300 includes executable code for generating 314 at least a confidence selection. In at least one embodiment, the confidence selections described in fig. 3 are confidence selections such as those described in connection with fig. 2.
In at least one embodiment, the system executing at least a portion of process 300 includes executable code for generating 316 at least an average of the accepted confidence selections. In at least one embodiment, the average of the accepted confidence selections described in FIG. 3 is an average of accepted confidence selections such as that described in connection with FIG. 2. In at least one embodiment, the system performing at least a portion of process 300 includes executable code for obtaining 318 at least a minimum value. In at least one embodiment, the minimum value described in FIG. 3 is a minimum value such as that described in connection with FIG. 2.
In at least one embodiment, the system executing at least a portion of process 300 includes executable code for updating 320 at least the hint. In at least one embodiment, the update to a hint is an update to a hint such as described in connection with FIG. 2. In at least one embodiment, the system executing at least a portion of process 300 includes executable code for determining 322 whether to execute a next iteration of process 300 (e.g., to update a hint and re-encode via text encoding). In at least one embodiment, if it is determined to perform the next iteration of process 300 (e.g., the "yes" branch), process 300 continues at the retrieve 304 category, as described above. In at least one embodiment, if it is determined that the next iteration of process 300 is not to be performed (e.g., the "no" branch), process 300 uses 324 the updated hints with a visual language model as described herein (e.g., pre-trained visual language model 106 as described herein at least in connection with fig. 1).
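Tying the steps of process 300 together, the sketch below runs one or more test-time adaptation iterations for a single test image, reusing the hypothetical helpers sketched earlier (make_augmented_views, clip_logits, confidence_select, marginal_entropy); encode_text and encode_image stand in for frozen CLIP-style encoders, and the optimizer, learning rate, and step count are assumptions.

```python
import torch

def test_time_prompt_tuning(image, class_names, prompt_embeddings,
                            encode_text, encode_image, steps=1, lr=5e-3):
    """Sketch of one TPT-style adaptation loop for a single test image.

    encode_text(prompt_embeddings, class_names) -> (N_classes, D) features
    encode_image(views)                         -> (N_views, D) features
    Both encoders are frozen; only prompt_embeddings is optimized.
    """
    optimizer = torch.optim.AdamW([prompt_embeddings], lr=lr)
    views = make_augmented_views(image)                    # step 310: enhanced views
    with torch.no_grad():
        image_features = encode_image(views)               # step 312: image encoding
    for _ in range(steps):                                 # step 322: optional iterations
        text_features = encode_text(prompt_embeddings, class_names)  # step 306
        logits = clip_logits(image_features, text_features)
        kept = confidence_select(logits)                   # step 314: drop noisy views
        loss = marginal_entropy(kept)                      # steps 316-318: average + entropy
        optimizer.zero_grad()
        loss.backward()                                    # step 320: update the hint
        optimizer.step()
    with torch.no_grad():                                  # step 324: classify with adapted hint
        text_features = encode_text(prompt_embeddings, class_names)
        final_logits = clip_logits(encode_image(views[:1]), text_features)
    return final_logits.argmax(dim=-1)
```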
FIG. 4 is a block diagram 400 illustrating a computing system in which a neural network hint adjustment method that learns adaptive hints on the fly using a single test sample is performed, in accordance with at least one embodiment. In at least one embodiment, the processor 402 receives the image 404, cues 408, and/or a pre-trained visual language model 406. In at least one embodiment, processor 402 is a processor such as processor 102 described herein in connection with at least FIG. 1. In at least one embodiment, image 404 is an image such as image 104 described herein in connection with at least FIG. 1. In at least one embodiment, the prompt 408 is a prompt such as the prompt 108 described herein in connection with at least FIG. 1. In at least one embodiment, the pre-trained visual language model 406 is a pre-trained visual language model such as the pre-trained visual language model 106 described herein in connection with at least FIG. 1. In at least one embodiment, the pre-trained visual language model 406 is static (e.g., as indicated by the lock icon), where a static pre-trained visual language model is not updated or retrained during inference (e.g., during inference processes such as those described herein).
In at least one embodiment, the processor 402 performs one or more operations to enhance the image 410 (e.g., as described herein at least in connection with fig. 1-3). In at least one embodiment, the processor 402 performs one or more operations that encode text and images using the pre-trained image model 412 (e.g., using the static pre-trained visual language model 406). In at least one embodiment, the processor 402 performs one or more operations for encoding text and images using an image encoder (such as the image encoder 210 described herein in connection with at least fig. 2) using the pre-trained image model 412. In at least one embodiment, the processor 402 performs one or more operations for encoding text and images using an image encoder module (such as the image encoder module 706 described herein in connection with at least fig. 7) using the pre-trained image model 412. In at least one embodiment, the processor 402 performs one or more operations for encoding text and images using a text encoder (such as the text encoder 212 described herein in connection with at least fig. 2) using the pre-trained image model 412. In at least one embodiment, the processor 402 performs one or more operations for encoding text and images using a text encoder module (such as the text encoder module 704 described herein in connection with at least fig. 7) using the pre-trained image model 412.
In at least one embodiment, the processor 402 performs one or more operations of classifying and filtering the enhanced view 414 (e.g., as described herein at least in connection with fig. 1-3). In at least one embodiment, the processor 402 performs one or more operations of classifying and filtering the enhanced view 414 using the results of the enhanced image 410. In at least one embodiment, the processor 402 performs one or more operations of classifying and filtering the enhanced view 414 using a confidence selection (such as the confidence selection 214 described herein in connection with at least fig. 2). In at least one embodiment, the processor 402 performs one or more operations of classifying and filtering the enhanced view 414 using a confidence selection module (such as the confidence selection module 708 described herein in connection with at least fig. 7). In at least one embodiment, the processor 402 performs one or more operations of classifying and filtering the enhanced view 414 using an average value (such as the average value 216 described herein in connection with at least fig. 2). In at least one embodiment, the processor 402 performs one or more operations of classifying and filtering the enhanced view 414 using an averaging module (such as the averaging module 710 described herein in connection with at least fig. 7). In at least one embodiment, the processor 402 performs one or more operations of the update-hint 416 (e.g., as described herein at least in connection with fig. 1-3).
FIG. 5 illustrates a process 500 of adjusting neural network cues in accordance with at least one embodiment. In at least one embodiment, some or all of process 500 (or any other process described herein, or variations and/or combinations thereof) is performed under control of one or more computer systems configured with computer-executable instructions (such as those described in fig. 9A-43) and is implemented as code (e.g., computer-executable instructions, one or more computer programs, or one or more application programs) that is jointly executed on one or more processors by hardware, software, or a combination thereof. In at least one embodiment, the code is stored on a computer-readable storage medium in the form of a computer program comprising a plurality of computer-readable instructions executable by one or more processors, such as those described herein. In at least one embodiment, the computer-readable storage medium is a non-transitory computer-readable medium. In at least one embodiment, a processor (such as processor 102 described herein in connection with at least fig. 1) performs one or more steps of a process 500 of adjusting neural network cues. In at least one embodiment, one or more other processors (such as those described herein) perform one or more steps of process 500 of adjusting neural network cues.
In at least one embodiment, at step 502 of process 500 of adjusting neural network cues, a processor performs one or more operations that at least obtain a pre-trained model. In at least one embodiment, at step 502, the obtained pre-training model is a pre-training visual language model, such as pre-training visual language model 406 described herein in connection with at least fig. 4. In at least one embodiment, after step 502, process 500 continues at step 504.
In at least one embodiment, at step 504 of process 500 of adjusting a neural network hint, a processor performs one or more operations that at least obtain a hint. In at least one embodiment, at step 504, the obtained hint is a hint such as hint 408 described herein in connection with at least FIG. 4. In at least one embodiment, after step 504, process 500 continues at step 506.
In at least one embodiment, at step 506 of process 500 of adjusting neural network cues, the processor performs one or more operations that obtain at least a single image. In at least one embodiment, at step 506, the single image obtained is an image such as image 404 described herein in connection with at least FIG. 4. In at least one embodiment, after step 506, the process 500 continues at step 508.
In at least one embodiment, at step 508 of the process 500 of adjusting neural network cues, the processor performs one or more operations that generate at least a plurality of randomly enhanced views of a single image (e.g., the single image obtained at step 506). In at least one embodiment, at step 508, the one or more operations to generate the plurality of randomly enhanced views of the single image include operations such as those described herein in connection with enhancing image 410 (as described herein at least in connection with fig. 4). In at least one embodiment, after step 508, the process 500 continues at step 510.
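Purely as a sketch of step 508, the random views may be produced with standard image augmentations; the torchvision transforms below are one possible policy chosen for illustration, not a required one.

```python
import torch
from torchvision import transforms
import torchvision.transforms.functional as TF

# One possible random-augmentation policy; the exact transforms are a design choice.
augment = transforms.Compose([
    transforms.RandomResizedCrop(224, scale=(0.5, 1.0)),
    transforms.RandomHorizontalFlip(),
])

def make_views(image_tensor, num_views=64):
    """Return the original test image plus randomly augmented copies as one batch.

    image_tensor: (3, H, W) tensor holding the single test image.
    """
    views = [TF.resize(image_tensor, [224, 224])]            # keep an unaugmented view
    views += [augment(image_tensor) for _ in range(num_views - 1)]
    return torch.stack(views)                                # (num_views, 3, 224, 224)
```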
In at least one embodiment, at step 510 of the process 500 of adjusting neural network cues, the processor performs one or more operations that predict at least the confidence level across views (e.g., the plurality of randomly enhanced views generated at step 508). In at least one embodiment, at step 510, the one or more operations to predict confidence across views include operations such as those described herein in connection with at least fig. 1-4. In at least one embodiment, after step 510, process 500 continues at step 512.
In at least one embodiment, at step 512 of the process 500 of adjusting neural network cues, the processor performs one or more operations that classify the enhanced view using at least confidence selection (e.g., using the confidence of the cross-view predicted at step 510), as described herein at least in connection with fig. 1-4. In at least one embodiment, after step 512, process 500 continues at step 514.
In at least one embodiment, at step 514 of process 500 of adjusting neural network cues, the processor performs one or more operations that filter out at least noisy views (e.g., high entropy or low confidence views classified at step 512), as described herein at least in connection with fig. 1-4. In at least one embodiment, after step 514, process 500 continues at step 516.
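As a sketch of steps 510-514 only, confidence can be measured by the entropy of each view's predicted class distribution, and high-entropy (noisy) views are filtered out; the keep fraction below is an illustrative hyperparameter, not a value prescribed by the embodiments.

```python
import torch

def select_confident_views(view_logits, keep_fraction=0.1):
    """Keep only the lowest-entropy (most confident) augmented views.

    view_logits: (num_views, num_classes) per-view classification logits.
    Returns the logits of the retained views.
    """
    probs = view_logits.softmax(dim=-1)
    entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)   # (num_views,)
    num_keep = max(1, int(view_logits.shape[0] * keep_fraction))
    keep_idx = entropy.argsort()[:num_keep]                         # lowest entropy first
    return view_logits[keep_idx]
```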
In at least one embodiment, at step 516 of process 500 of adjusting neural network cues, the processor performs one or more operations of calculating at least an average of filtered views (e.g., views remaining after filtering at step 514), as described herein in connection with at least fig. 1-4. In at least one embodiment, after step 516, process 500 continues at step 518.
In at least one embodiment, at step 518 of the process 500 of adjusting neural network cues, the processor performs one or more operations that use at least the average of the filtered views (e.g., the average calculated at step 516) to minimize entropy, as described herein at least in connection with fig. 1-4. In at least one embodiment, after step 518, process 500 continues at step 520.
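A compact sketch of steps 516-518, under the same assumptions as the snippets above: the retained views' probabilities are averaged, and the entropy of that average is the quantity to be minimized.

```python
import torch

def marginal_entropy_loss(selected_logits):
    """Entropy of the prediction averaged over the filtered (confident) views."""
    probs = selected_logits.softmax(dim=-1)                 # (kept_views, num_classes)
    avg_probs = probs.mean(dim=0)                           # average across views
    return -(avg_probs * avg_probs.clamp_min(1e-12).log()).sum()
```

Once the cue has been updated, the index of the largest entry of avg_probs can be read out as the most consistent output across the variations of the input.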
In at least one embodiment, at step 520 of process 500 of adjusting neural network cues, the processor performs one or more operations of at least updating the cues (e.g., adjusting cues, such as the cue obtained at step 504, using systems, methods, operations, and techniques as described herein). In at least one embodiment, not shown in fig. 5, at step 520, a determination is made as to whether to continue at the next iteration (e.g., as described herein in connection with at least executing executable code to determine 322 whether to execute the next iteration (as described herein in connection with at least fig. 3)). In at least one embodiment, after step 520, process 500 terminates. In at least one embodiment, not shown in FIG. 5, after step 520, process 500 returns to step 502 so that the process is performed as a loop.
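As an illustrative sketch of step 520, a single optimizer step updates only the learnable cue while the pre-trained model remains frozen; the tensor shape, optimizer choice, and learning rate below are assumptions made for the example.

```python
import torch

# prompt_embeddings is assumed to be the only tensor with requires_grad=True,
# e.g. a (num_context_tokens, embed_dim) tensor spliced into each class prompt.
prompt_embeddings = torch.zeros(4, 512, requires_grad=True)
optimizer = torch.optim.AdamW([prompt_embeddings], lr=5e-3)

def prompt_update_step(loss):
    """One gradient step on the cue; the frozen encoders receive no updates."""
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```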
In at least one embodiment, the operations of process 500 of adjusting neural network cues are performed in a different order than that shown in FIG. 5. In at least one embodiment, the operations of process 500 of adjusting neural network cues are performed simultaneously or in parallel. In at least one embodiment, the operations of process 500 of adjusting neural network cues are performed simultaneously or in parallel, independent of each other (e.g., order independent). In at least one embodiment, the operations of process 500 of adjusting neural network cues are performed by multiple threads executing on a processor (such as those described herein).
FIG. 6 is a block diagram 600 illustrating a computing system in which hints used by a trained visual language model are adjusted in accordance with at least one embodiment. In at least one embodiment, the processor 602 receives the image 604, the cues 608, and the pre-trained visual language model 606. In at least one embodiment, the processor 602 is a processor such as the processor 102 described herein in connection with at least FIG. 1. In at least one embodiment, the processor 602 is a processor such as the processor 402 described herein in connection with at least fig. 4. In at least one embodiment, image 604 is an image such as image 104 described herein in connection with at least FIG. 1. In at least one embodiment, image 604 is an image such as image 404 described herein in connection with at least FIG. 4. In at least one embodiment, the prompt 608 is a prompt such as the prompt 108 described herein in connection with at least FIG. 1. In at least one embodiment, the prompt 608 is a prompt such as the prompt 408 described herein in connection with at least FIG. 4. In at least one embodiment, the pre-trained visual language model 606 is a pre-trained visual language model such as the pre-trained visual language model 106 described herein in connection with at least FIG. 1. In at least one embodiment, the pre-trained visual language model 606 is a pre-trained visual language model such as the pre-trained visual language model 406 described herein in connection with at least FIG. 4. In at least one embodiment, the pre-trained visual language model 606 is static (e.g., as indicated by the lock icon), where the static pre-trained visual language model is not updated or retrained during reasoning (e.g., during reasoning processes such as those described herein).
In at least one embodiment, processor 602 performs one or more operations of processing images and cues and updating cues 612 using trained visual language model 610 (e.g., using pre-trained visual language model 606), as described herein at least in connection with fig. 1-5. In at least one embodiment, training data 614 is used to train pre-trained visual language model 606, as described herein at least in connection with fig. 10, before being used to process images and cues using trained visual language model 610. In at least one embodiment, updating 616 the pre-trained visual language model 606 as a result of training (e.g., as indicated by the check mark) is allowed. In at least one embodiment, updating of the pre-trained visual language model 606 as a result of reasoning (e.g., by the processor 602) is not allowed 618 (e.g., as indicated by the "X" label).
Fig. 7 is a block diagram 700 illustrating a processor and modules in accordance with at least one embodiment. In at least one embodiment, the processor 702 includes one or more processors, such as those described in connection with FIGS. 9A-43. In at least one embodiment, the processor 702 is a processor such as the processor 102 described herein in connection with at least FIG. 1. In at least one embodiment, the processor 702 is any suitable processing unit and/or combination of processing units, such as one or more CPU, GPU, GPGPU, PPU and/or variations thereof. In at least one embodiment, the processor 702 includes or has access to one or more of a text encoder module 704, an image encoder module 706, a confidence selection module 708, an averaging module 710, a neural network reasoning module 712, and a neural network training module 714. In at least one embodiment, the text encoder module 704, the image encoder module 706, the confidence selection module 708, the averaging module 710, the neural network reasoning module 712, and the neural network training module 714 are part of the processor 702 and/or one or more other processors such as those described herein. In at least one embodiment, the text encoder module 704, the image encoder module 706, the confidence selection module 708, the averaging module 710, the neural network reasoning module 712, and the neural network training module 714 are distributed among a plurality of processors that communicate via a bus, a network, writing to a shared memory, and/or any suitable communication process such as those described herein.
In at least one embodiment, a module as used in any implementation described herein refers to any combination of software logic, firmware logic, hardware logic, and/or circuitry configured to provide the functionality described herein unless the context clearly dictates otherwise or explicitly stated to the contrary. In at least one embodiment, the software may be embodied as a software package, code, and/or instruction set or instructions, and "hardware" as used by a processor in any implementation described herein may include, for example, hardwired circuitry, programmable circuitry, state machine circuitry, fixed-function circuitry, execution unit circuitry, and/or firmware that stores instructions executed by the programmable circuitry, either alone or in any combination. In at least one embodiment, the modules may be collectively or individually embodied as circuitry forming part of a larger system (e.g., an Integrated Circuit (IC), a system on a chip (SoC), etc.). In at least one embodiment, the modules perform one or more processes in connection with any suitable processing unit and/or combination of processing units (such as one or more CPU, GPU, GPGPU, PPU and/or variants thereof).
In at least one embodiment, the processor 702 uses the text encoder module 704 to encode text using a text encoder (such as the text encoder 212 as described herein in connection with at least fig. 2). In at least one embodiment, the processor 702 executes the text encoder module 704 and processes such as those described herein by including or otherwise encoding at least instructions that cause execution of the one or more processes or are otherwise available to execute the one or more processes (e.g., by the processor 702). In at least one embodiment, the processor 702 executing the text encoder module 704 obtains or otherwise has one or more APIs such as those described herein. In at least one embodiment, the processor 702 uses the text encoder module 704 to encode text using a text encoder using the systems, methods, operations, and techniques described herein in connection with at least fig. 1-6. In at least one embodiment, the processor 702 uses the text encoder module 704 in conjunction with one or more of the image encoder module 706, the confidence selection module 708, the averaging module 710, the neural network reasoning module 712, and the neural network training module 714 to perform a neural network hint adjustment method that learns adaptive hints in operation using the single test sample using the systems, methods, operations, and techniques described herein in conjunction with at least fig. 1-6.
In at least one embodiment, the processor 702 uses the image encoder module 706 to encode an image using an image encoder (such as the image encoder 210 as described herein in connection with at least fig. 2). In at least one embodiment, the processor 702 executes the image encoder module 706 and processes (such as those described herein) by including or otherwise encoding at least instructions that cause execution of the one or more processes or that are otherwise available for execution of the one or more processes (e.g., by the processor 702). In at least one embodiment, the processor 702 executing the image encoder module 706 obtains or otherwise has one or more APIs, such as those described herein. In at least one embodiment, the processor 702 uses the image encoder module 706 to encode images using an image encoder using the systems, methods, operations, and techniques described herein in connection with at least fig. 1-6. In at least one embodiment, the processor 702 uses the image encoder module 706 in conjunction with one or more of the text encoder module 704, the confidence selection module 708, the averaging module 710, the neural network reasoning module 712, and the neural network training module 714 to perform a neural network hint adjustment method that learns adaptive hints in operation using the single test sample using the systems, methods, operations, and techniques described herein in conjunction with at least fig. 1-6.
In at least one embodiment, the processor 702 uses the confidence selection module 708 to perform confidence selection, such as the confidence selection 214 as described herein in connection with at least fig. 2. In at least one embodiment, the processor 702 executes the confidence selection module 708 and processes such as those described herein by including or otherwise encoding at least instructions that cause execution of the one or more processes or that are otherwise available for execution of the one or more processes (e.g., by the processor 702). In at least one embodiment, the processor 702 executing the confidence selection module 708 obtains or otherwise has one or more APIs (such as those described herein). In at least one embodiment, the processor 702 uses the confidence selection module 708 to perform confidence selection using the systems, methods, operations, and techniques described herein in connection with at least fig. 1-6. In at least one embodiment, the processor 702 uses the confidence selection module 708 in conjunction with one or more of the text encoder module 704, the image encoder module 706, the averaging module 710, the neural network reasoning module 712, and the neural network training module 714 to perform a neural network hint adjustment method that learns adaptive hints in operation using the systems, methods, operations, and techniques described herein in conjunction with at least fig. 1-6 using a single test sample.
In at least one embodiment, the processor 702 uses the averaging module 710 to average the selected confidence (e.g., generated by the confidence selection module 708) using an average (such as the average 216 as described herein in connection with at least fig. 2). In at least one embodiment, the processor 702 executes the averaging module 710 and processes such as those described herein by including or otherwise encoding at least instructions that cause or may otherwise be used to execute the one or more processes (e.g., by the processor 702). In at least one embodiment, the processor 702 executing the averaging module 710 obtains or otherwise has one or more APIs such as those described herein. In at least one embodiment, the processor 702 uses the averaging module 710 to average the selected confidence using the systems, methods, operations, and techniques described herein in connection with at least fig. 1-6. In at least one embodiment, the processor 702 uses the averaging module 710 in conjunction with one or more of the text encoder module 704, the image encoder module 706, the confidence selection module 708, the neural network reasoning module 712, and the neural network training module 714 to perform a neural network hint adjustment method that learns adaptive hints in operation using the single test sample using the systems, methods, operations, and techniques described herein in conjunction with at least fig. 1-6.
In at least one embodiment, the processor 702 uses a neural network reasoning module 712 to perform reasoning using neural networks such as those described herein. In at least one embodiment, the processor 702 executes the neural network inference module 712 and processes such as those described herein by including or otherwise encoding at least instructions that cause execution of the one or more processes or that are otherwise available for execution of the one or more processes (e.g., by the processor 702). In at least one embodiment, the processor 702 executing the neural network reasoning module 712 obtains or otherwise has one or more APIs, such as those described herein. In at least one embodiment, the processor 702 uses the neural network reasoning module 712 to perform reasoning using the neural network with the systems, methods, operations, and techniques described herein in connection with at least FIGS. 1-6. In at least one embodiment, the processor 702 uses the neural network reasoning module 712 in conjunction with one or more of the text encoder module 704, the image encoder module 706, the confidence selection module 708, the averaging module 710, and the neural network training module 714 to perform a neural network hint adjustment method that learns adaptive hints in operation using the systems, methods, operations, and techniques described herein in conjunction with at least fig. 1-6 using a single test sample.
In at least one embodiment, the processor 702 uses the neural network training module 714 to perform training of a neural network, such as those described herein. In at least one embodiment, the processor 702 executes the neural network training module 714 and processes such as those described herein by including or otherwise encoding at least instructions that cause execution of the one or more processes or that are otherwise available for execution of the one or more processes (e.g., by the processor 702). In at least one embodiment, the processor 702 executing the neural network training module 714 obtains or otherwise has one or more APIs such as those described herein. In at least one embodiment, the processor 702 uses the neural network training module 714 to perform training of the neural network using the systems, methods, operations, and techniques described herein in connection with at least fig. 1-6. In at least one embodiment, the processor 702 uses the neural network training module 714 in conjunction with one or more of the text encoder module 704, the image encoder module 706, the confidence selection module 708, the averaging module 710, and the neural network reasoning module 712 to perform a neural network hint adjustment method that learns adaptive hints in operation using the systems, methods, operations, and techniques described herein in conjunction with at least fig. 1-6 using a single test sample.
In at least one embodiment, the processor 702 includes circuitry (circuitry) for causing one or more circuits of the processor 702 to select a most consistent output of one or more pre-trained neural networks using one or more of the text encoder module 704, the image encoder module 706, the confidence selection module 708, the averaging module 710, the neural network reasoning module 712, and/or the neural network training module 714, based at least in part on a plurality of variations of one or more inputs of the one or more neural networks, using the systems, methods, operations, and/or techniques described herein in connection with at least fig. 1-6.
FIG. 8 is a block diagram 800 illustrating a driver and/or runtime including one or more libraries for providing one or more Application Programming Interfaces (APIs) in accordance with at least one embodiment. In at least one embodiment, software program 802 is a software module. In at least one embodiment, software program 802 includes one or more software modules, including but not limited to those described herein in connection with at least FIG. 7. In at least one embodiment, the software modules are further described non-exclusively in FIG. 7. In at least one embodiment, the one or more APIs 810 are a set of software instructions that, if executed, cause the one or more processors to perform one or more computing operations.
In at least one embodiment, the one or more APIs 810 are sets of software instructions that, if executed, cause the one or more processors to perform one or more computing operations such that a most consistent output of the one or more pretrained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks.
In at least one embodiment, one or more APIs 810 are distributed or otherwise provided as part of one or more libraries 806, drivers and/or runtime 804, and/or any other groupings of software and/or executable code described further herein. In at least one embodiment, one or more APIs 810 perform one or more computing operations in response to a call by software program 802. In at least one embodiment, software program 802 is a collection of software code, commands, instructions, or other text sequences for instructing a computing device to perform one or more computing operations and/or to invoke one or more other sets of instructions to be executed, such as API 810 or API function 812. In at least one embodiment, the functionality provided by the one or more APIs 810 includes software functions 812, such as those that are operable to accelerate one or more portions of software program 802 using one or more Parallel Processing Units (PPUs), such as Graphics Processing Units (GPUs).
In at least one embodiment, API 810 is a hardware interface for one or more circuits that perform one or more computing operations. In at least one embodiment, one or more software APIs 810 described herein are implemented as one or more circuits for performing one or more techniques described herein in connection with FIGS. 1-7. In at least one embodiment, one or more software programs 802 include instructions that, if executed, cause one or more hardware devices and/or circuits to perform one or more of the techniques described herein in connection with fig. 1-7.
In at least one embodiment, a software program 802, such as a user-implemented software program, utilizes one or more Application Programming Interfaces (APIs) 810 to perform various computing operations, such as memory reservation, matrix multiplication, arithmetic operations, or any computing operation performed by a Parallel Processing Unit (PPU), such as a Graphics Processing Unit (GPU), as further described herein. In at least one embodiment, one or more APIs 810 provide a set of callable functions 812, referred to herein as APIs, API functions, and/or functions, that perform one or more computing operations, respectively (such as computing operations related to parallel computing). For example, in an embodiment, one or more APIs 810 provide a function 812 for starting a workload, monitoring a workload, and/or terminating a workload, as described herein.
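To make the role of such callable functions concrete, the sketch below uses purely hypothetical stand-ins for starting, monitoring, and terminating a workload; none of these names belong to an actual driver, runtime, or library, and a real API such as API 810 would be backed by a PPU rather than these stubs.

```python
import time

# Hypothetical stand-ins for callable functions such as functions 812.
def launch_workload(kernel_name, args):
    """Pretend to hand a workload to a PPU and return an opaque handle."""
    return {"name": kernel_name, "args": args, "done": True}

def workload_status(handle):
    """Report whether the workload has finished."""
    return "complete" if handle["done"] else "running"

def terminate_workload(handle):
    """Request early termination of the workload."""
    handle["done"] = True

# A software program (e.g., software program 802) might drive these functions as follows:
handle = launch_workload("vector_add", args=[1024])
while workload_status(handle) != "complete":
    time.sleep(0.01)
```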
In at least one embodiment, one or more software programs 802 interact with or otherwise communicate with one or more APIs 810 to perform one or more computing operations using one or more PPUs (such as a GPU). In at least one embodiment, one or more computing operations using one or more PPUs include at least one or more sets of computing operations to be accelerated by execution at least in part by the one or more PPUs. In at least one embodiment, one or more software programs 802 interact with one or more APIs 810 to facilitate parallel computing using a remote interface or a local interface.
In at least one embodiment, the interface is software instructions that, if executed, provide access to one or more functions 812 provided by one or more APIs 810. In at least one embodiment, the software program 802 uses a local interface when a software developer compiles one or more software programs 802 in conjunction with one or more libraries 806, the one or more libraries 806 including or otherwise providing access to one or more APIs 810. In at least one embodiment, one or more software programs 802 are statically compiled in connection with a precompiled library 806 or uncompiled source code that includes instructions for executing one or more APIs 810. In at least one embodiment, one or more software programs 802 are dynamically compiled and linked to one or more precompiled libraries 806 comprising one or more APIs 810 using a linker.
In at least one embodiment, the software program 802 uses a remote interface when the software developer executes the software program that utilizes or otherwise communicates with a library 806 that includes one or more APIs 810 over a network or other remote communication medium. In at least one embodiment, one or more libraries 806 including one or more APIs 810 will be executed by a remote computing service (such as a computing resource service provider). In another embodiment, one or more libraries 806 that include one or more APIs 810 are to be executed by any other computing host that provides the one or more APIs 810 to the one or more software programs 802.
In at least one embodiment, a processor executing or using one or more software programs 802 invokes, uses, executes, or otherwise implements one or more APIs 810 to allocate and otherwise manage memory to be used by the software programs 802. In at least one embodiment, one or more software programs 802 utilize one or more APIs 810 to allocate and otherwise manage memory to be used by one or more portions of the software program 802 to accelerate using one or more PPUs (such as a GPU or any other accelerator or processor described further herein). Those software programs 802 request that the processor start, monitor, and/or terminate the workload using functions 812, which in embodiments are provided by one or more APIs 810.
In at least one embodiment, API 810 is an API that facilitates parallel computing. In at least one embodiment, API 810 is any other API described further herein. In at least one embodiment, the API 810 is provided by a driver and/or runtime 804. In at least one embodiment, API 810 is provided by a CUDA user mode driver. In at least one embodiment, API 810 is provided by the CUDA runtime. In at least one embodiment, the driver and/or runtime 804 are data values and software instructions that, if executed, perform or otherwise facilitate the operation of one or more functions 812 of the API 810 during the loading and execution of one or more portions of the software program 802. In at least one embodiment, the driver and/or runtime 804 is a data value and software instructions that, if executed, perform or otherwise facilitate the operation of one or more functions 812 of the API 810 during execution of the software program 802. In at least one embodiment, one or more software programs 802 utilize one or more APIs 810 implemented or otherwise provided by drivers and/or runtime 804 to perform combined arithmetic operations by the one or more software programs 802 during execution by one or more PPUs (such as GPUs).
In at least one embodiment, one or more software programs 802 utilize one or more APIs 810 provided by a driver and/or runtime 804 to perform the combined arithmetic operations of one or more PPUs (such as GPUs). In at least one embodiment, one or more APIs 810 provide combined arithmetic operations through a driver and/or runtime 804, as described above. In at least one embodiment, one or more software programs 802 utilize one or more APIs 810 provided by the driver and/or runtime 804 to allocate or otherwise reserve one or more blocks of memory 814 of one or more PPUs (such as GPUs). In at least one embodiment, one or more software programs 802 allocate or otherwise reserve blocks of memory using one or more APIs 810 provided by the driver and/or runtime 804. In at least one embodiment, one or more APIs 810 are used to perform the combined arithmetic operations, as described herein in connection with FIGS. 1-7.
To improve the availability of the software program 802 to be accelerated by one or more PPUs (e.g., GPUs) and/or the optimization of one or more portions of the software program 802, in one embodiment, one or more APIs 810 provide one or more API functions 812 to initiate, monitor, and/or terminate workloads that are available or used by one or more computing devices, as described above and further described herein in connection with fig. 1-7. In at least one embodiment, block diagram 800 depicts a processor that includes one or more circuits for executing one or more software programs to combine two or more Application Programming Interfaces (APIs) into a single API. In at least one embodiment, block diagram 800 depicts a system comprising one or more processors to execute one or more software programs to combine two or more Application Programming Interfaces (APIs) into a single API. In at least one embodiment, the processor uses the API to perform hint adjustment 816 (e.g., perform a neural network hint adjustment method that learns adaptive hints on the fly with a single test sample), as described herein.
In at least one embodiment, the processor uses the API to perform hint adjustment 816, wherein the processor is configured to perform hint adjustment 816 by causing the one or more circuits to select a most consistent output of the one or more pre-trained neural networks based at least in part on a plurality of variations of one or more inputs of the one or more neural networks.
Logic
Fig. 9A illustrates logic 915, as described elsewhere herein, that may be used in one or more devices to perform operations such as those discussed herein, in accordance with at least one embodiment. In at least one embodiment, logic 915 is to perform inference and/or training operations associated with one or more embodiments. In at least one embodiment, logic 915 is inference and/or training logic. Details regarding logic 915 are provided below in connection with fig. 9A and/or 9B. In at least one embodiment, logic refers to any combination of software logic, hardware logic, and/or firmware logic for providing the functions or operations described herein, where the logic may be embodied jointly or separately as circuitry forming part of a larger system (e.g., an Integrated Circuit (IC), a system-on-chip (SoC), or one or more processors (e.g., CPU, GPU)).
In at least one embodiment, logic 915 may include, but is not limited to, code and/or data storage 901 for storing forward and/or output weights and/or input/output data and/or other parameters for configuring neurons or layers of a neural network that are trained and/or used to infer in aspects of one or more embodiments. In at least one embodiment, the logic 915 may include or be coupled to a code and/or data store 901 for storing graph code or other software to control timing and/or sequence, wherein weights and/or other parameter information are loaded to configure logic including integer and/or floating point units (collectively referred to as Arithmetic Logic Units (ALUs)). In at least one embodiment, code (such as graph code) loads weight or other parameter information into the processor ALU based on the architecture of the neural network to which the code corresponds. In at least one embodiment, code and/or data store 901 stores weight parameters and/or input/output data for each layer of a neural network trained or used in connection with one or more embodiments during forward propagation of the input/output data and/or weight parameters during training and/or reasoning using aspects of the one or more embodiments. In at least one embodiment, any portion of code and/or data store 901 may be included in other on-chip or off-chip data stores, including the processor's L1, L2, or L3 cache or system memory.
In at least one embodiment, any portion of code and/or data storage 901 may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, the code and/or data storage 901 may be cache memory, dynamic random access memory ("DRAM"), static random access memory ("SRAM"), non-volatile memory (e.g., flash memory), or other storage. In at least one embodiment, the choice of whether code and/or data store 901 is internal or external to the processor, or, e.g., includes DRAM, SRAM, flash, or some other storage type, may depend on the available storage on-chip versus off-chip, the latency requirements of the training and/or reasoning functions being performed, the batch size of the data used in the reasoning and/or training of the neural network, or some combination of these factors.
In at least one embodiment, logic 915 may include, but is not limited to, code and/or data storage 905 for storing inverted and/or output weights and/or input/output data corresponding to neurons or layers of a neural network trained and/or used to infer in aspects of one or more embodiments. In at least one embodiment, during training and/or reasoning about aspects of one or more embodiments, code and/or data store 905 stores weight parameters and/or input/output data for each layer of a neural network trained or used in connection with one or more embodiments during back-propagation of the input/output data and/or weight parameters. In at least one embodiment, the logic 915 may include or be coupled to a code and/or data store 905 for storing graph code or other software to control timing and/or sequence, wherein weights and/or other parameter information are loaded to configure logic including integer and/or floating point units (collectively referred to as Arithmetic Logic Units (ALUs)).
In at least one embodiment, the code (such as graph code) causes the loading of weights or other parameter information into the processor ALU based on the architecture of the neural network to which the code corresponds. In at least one embodiment, any portion of code and/or data store 905 may be included with other on-chip or off-chip data stores, including the processor's L1, L2, or L3 cache or system memory. In at least one embodiment, any portion of code and/or data storage 905 may be internal or external to one or more processors or other hardware logic devices or circuitry. In at least one embodiment, the code and/or data storage 905 may be cache memory, DRAM, SRAM, nonvolatile memory (e.g., flash memory), or other storage. In at least one embodiment, the choice of whether code and/or data store 905 is internal or external to the processor, including, for example, DRAM, SRAM, flash, or some other type of storage, may depend on the available storage off-chip, the latency requirements of the training and/or reasoning function being performed, the batch size of the data used in the reasoning and/or training of the neural network, or some combination of these factors.
In at least one embodiment, code and/or data store 901 and code and/or data store 905 may be separate storage structures. In at least one embodiment, code and/or data store 901 and code and/or data store 905 may be the same storage structure. In at least one embodiment, code and/or data store 901 and code and/or data store 905 may be partially combined and partially separated. In at least one embodiment, code and/or data store 901 and any portion of code and/or data store 905 may be included with other on-chip or off-chip data stores, including the processor's L1, L2, or L3 cache or system memory.
In at least one embodiment, the logic 915 may include, but is not limited to, one or more arithmetic logic units ("ALUs") 910 (including integer and/or floating point units) for performing logic and/or mathematical operations based at least in part on or indicated by training and/or reasoning codes (e.g., graph codes), the results of which may result in activations (e.g., output values from layers or neurons within a neural network) stored in an activation store 920 as a function of input/output and/or weight parameter data stored in the code and/or data store 901 and/or the code and/or data store 905. In at least one embodiment, the activations stored in the activation store 920 are generated according to linear algebra and/or matrix-based mathematics performed by the ALU 910 in response to executing instructions or other code, where the weight values stored in the code and/or data store 905 and/or in the code and/or data store 901 are used as operand values as well as other values, such as bias values, gradient information, momentum values, or other parameters or superparameters, any or all of which may be stored in the code and/or data store 905 or the code and/or data store 901 or other on-chip or off-chip storage.
In at least one embodiment, one or more ALUs 910 are included in one or more processors or other hardware logic devices or circuits, while in another embodiment, one or more ALUs 910 may be external to the processors or other hardware logic devices or circuits in which they are used (e.g., coprocessors). In at least one embodiment, the ALU 910 may be included within an execution unit of a processor, or otherwise included in an ALU bank (bank) that is accessible by an execution unit of a processor, which may be within the same processor or distributed among different processors of different types (e.g., central processing unit, graphics processing unit, fixed function unit, etc.). In at least one embodiment, the code and/or data store 901, the code and/or data store 905, and the activation store 920 may share a processor or other hardware logic device or circuitry, while in another embodiment they may be in different processors or other hardware logic devices or circuitry, or some combination of the same and different processors or other hardware logic devices or circuitry. In at least one embodiment, any portion of the activation store 920 may be included with other on-chip or off-chip data stores, including the processor's L1, L2, or L3 cache or system memory. In addition, the inference and/or training code can be stored with other code accessible to a processor or other hardware logic or circuitry, and can be extracted and/or processed using extraction, decoding, scheduling, execution, exit, and/or other logic circuitry of the processor.
In at least one embodiment, the activation storage 920 may be cache memory, DRAM, SRAM, nonvolatile memory (e.g., flash memory), or other storage. In at least one embodiment, the activation store 920 may be wholly or partially within or external to one or more processors or other logic circuits. In at least one embodiment, the choice of whether the activation store 920 is internal or external to the processor, e.g., or includes DRAM, SRAM, flash, or some other storage type, may depend on the available storage on-chip, the latency requirements for performing training and/or reasoning functions, the batch size of data used in reasoning and/or training the neural network, or some combination of these factors.
In at least one embodiment, the logic 915 shown in FIG. 9A may be used in conjunction with an application specific integrated circuit ("ASIC"), such as a TensorFlow® processing unit from Google, a reasoning processing unit (IPU) from Graphcore™, or a Nervana® (e.g., "Lake Crest") processor from Intel Corp. In at least one embodiment, the logic 915 shown in FIG. 9A may be used in combination with central processing unit ("CPU") hardware, graphics processing unit ("GPU") hardware, or other hardware (e.g., a field programmable gate array ("FPGA")).
In at least one embodiment, at least one component shown or described with respect to fig. 9A is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 9A is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 9A is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 9A is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein. In at least one embodiment, the inference and/or training logic 915 is configured to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
Fig. 9B illustrates logic 915 in accordance with at least one embodiment. In at least one embodiment, logic 915 is inference and/or training logic. In at least one embodiment, the logic 915 may include, but is not limited to, hardware logic in which computing resources are dedicated or otherwise used exclusively in conjunction with weight values or other information corresponding to one or more layers of neurons within a neural network. In at least one embodiment, the logic 915 shown in FIG. 9B may be used in conjunction with an Application Specific Integrated Circuit (ASIC), such as a TensorFlow® processing unit from Google, a reasoning processing unit (IPU) from Graphcore™, or a Nervana® (e.g., "Lake Crest") processor from Intel Corp. In at least one embodiment, the logic 915 shown in FIG. 9B may be used in combination with Central Processing Unit (CPU) hardware, Graphics Processing Unit (GPU) hardware, or other hardware, such as a Field Programmable Gate Array (FPGA). In at least one embodiment, logic 915 includes, but is not limited to, code and/or data storage 901 and code and/or data storage 905, which may be used to store code (e.g., graph code), weight values, and/or other information, including bias values, gradient information, momentum values, and/or other parameter or hyper-parameter information. In at least one embodiment shown in fig. 9B, each of code and/or data store 901 and code and/or data store 905 is associated with dedicated computing resources (e.g., computing hardware 902 and computing hardware 906), respectively. In at least one embodiment, each of the computing hardware 902 and 906 includes one or more ALUs that perform mathematical functions (e.g., linear algebraic functions) only on information stored in the code and/or data store 901 and the code and/or data store 905, respectively, the results of which are stored in the activation store 920.
In at least one embodiment, each of the code and/or data stores 901 and 905 and the respective computing hardware 902 and 906 correspond to a different layer of the neural network, respectively, such that an activation resulting from one storage/computing pair 901/902 of the code and/or data store 901 and computing hardware 902 is provided as an input to the next storage/computing pair 905/906 of the code and/or data store 905 and computing hardware 906 in order to reflect the conceptual organization of the neural network. In at least one embodiment, each storage/computation pair 901/902 and 905/906 may correspond to more than one neural network layer. In at least one embodiment, additional storage/computation pairs (not shown) may be included in logic 915 after or in parallel with storage/computation pairs 901/902 and 905/906.
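Conceptually, and only as a toy numerical illustration rather than a hardware description, each storage/compute pair can be viewed as a weight store plus a compute stage whose activation feeds the next pair:

```python
import numpy as np

def layer(x, weights, bias):
    """One storage/compute pair: stored weights are applied by the compute
    hardware and the resulting activation is written to activation storage."""
    return np.maximum(weights @ x + bias, 0.0)               # linear algebra + ReLU

rng = np.random.default_rng(0)
x = rng.standard_normal(8)                                   # input to the first layer

w1, b1 = rng.standard_normal((16, 8)), np.zeros(16)          # analog of code/data store 901
w2, b2 = rng.standard_normal((4, 16)), np.zeros(4)           # analog of code/data store 905
activation = layer(layer(x, w1, b1), w2, b2)                 # analog of activation store 920
```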
In at least one embodiment, at least one component shown or described with respect to fig. 9B is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 9B is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 9B is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 9B is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
Neural network training and deployment
FIG. 10 illustrates training and deployment of deep neural networks in accordance with at least one embodiment. In at least one embodiment, the training data set 1002 is used to train the untrained neural network 1006. In at least one embodiment, the training framework 1004 is a PyTorch framework, while in other embodiments, the training framework 1004 is a TensorFlow, Boost, Caffe, Microsoft Cognitive Toolkit/CNTK, MXNet, Chainer, Keras, Deeplearning4j, or other training framework. In at least one embodiment, the training framework 1004 trains the untrained neural network 1006 and enables it to be trained using the processing resources described herein to generate a trained neural network 1008. In at least one embodiment, the weights may be selected randomly or by pre-training using a deep belief network. In at least one embodiment, training may be performed in a supervised, partially supervised, or unsupervised manner.
In at least one embodiment, supervised learning is used to train the untrained neural network 1006, wherein the training data set 1002 includes inputs paired with desired outputs for the inputs, or wherein the training data set 1002 includes inputs having known outputs and the output of the untrained neural network 1006 is manually graded. In at least one embodiment, the untrained neural network 1006 is trained in a supervised manner and processes inputs from the training data set 1002 and compares the resulting outputs to a set of expected or desired outputs. In at least one embodiment, the error is then propagated back through the untrained neural network 1006. In at least one embodiment, the training framework 1004 adjusts weights that control the untrained neural network 1006. In at least one embodiment, the training framework 1004 includes a tool for monitoring the extent to which the untrained neural network 1006 converges to a model (such as the trained neural network 1008) suitable for generating a correct answer (such as the result 1014) based on input data (such as the new data set 1012). In at least one embodiment, the training framework 1004 iteratively trains the untrained neural network 1006 while adjusting weights to refine (refine) the output of the untrained neural network 1006 using a loss function and an adjustment algorithm, such as stochastic gradient descent. In at least one embodiment, the training framework 1004 trains the untrained neural network 1006 until the untrained neural network 1006 reaches a desired accuracy. In at least one embodiment, the trained neural network 1008 can then be deployed to implement any number of machine learning operations.
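A minimal PyTorch sketch of such a supervised loop follows; the toy network, data, and hyperparameters are placeholders chosen only to show the compare-propagate-adjust cycle described above.

```python
import torch
from torch import nn

model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 3))   # untrained network
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)                 # stochastic gradient descent

inputs = torch.randn(64, 16)                 # stand-in for inputs in a training data set
targets = torch.randint(0, 3, (64,))         # stand-in for the paired desired outputs

for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)   # compare outputs with desired outputs
    loss.backward()                          # propagate the error back through the network
    optimizer.step()                         # adjust the weights that control the network
```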
In at least one embodiment, the untrained neural network 1006 is trained using unsupervised learning, wherein the untrained neural network 1006 attempts to train itself using unlabeled data. In at least one embodiment, the training data set 1002 for unsupervised learning will include input data without any associated output data or "ground truth" data. In at least one embodiment, the untrained neural network 1006 can learn groupings within the training data set 1002 and can determine how the various inputs relate to the untrained data set 1002. In at least one embodiment, unsupervised training may be used to generate a self-organizing map in the trained neural network 1008 that is capable of performing operations useful for reducing the dimensions of the new data set 1012. In at least one embodiment, unsupervised training may also be used to perform anomaly detection, which allows identification of data points in new data set 1012 that deviate from the normal pattern of new data set 1012.
In at least one embodiment, semi-supervised learning, a technique in which a mix of labeled and unlabeled data is included in the training dataset 1002, may be used. In at least one embodiment, training framework 1004 may be used to perform incremental learning, such as through a transfer learning technique. In at least one embodiment, incremental learning enables the trained neural network 1008 to adapt to the new data set 1012 without forgetting knowledge injected into the trained neural network 1008 during initial training.
In at least one embodiment, training framework 1004 is a framework that is processed in connection with a software development kit, such as the OpenVINO (Open Visual Inference and Neural Network Optimization) toolkit. In at least one embodiment, the OpenVINO toolkit is a toolkit such as that developed by Intel Corporation of Santa Clara, California. In at least one embodiment, OpenVINO includes logic 915 or uses logic 915 to perform the operations described herein. In at least one embodiment, the SoC, integrated circuit, or processor uses OpenVINO to perform the operations described herein.
In at least one embodiment, openVINO is a tool package for facilitating development of applications (particularly neural network applications) for various tasks and operations, such as human visual simulation, speech recognition, natural language processing, recommendation systems, and/or variants thereof. In at least one embodiment, openVINO supports neural networks, such as Convolutional Neural Networks (CNNs), recurrent neural networks, and/or attention-based neural networks, and/or various other neural network models. In at least one embodiment, openVINO supports various software libraries, such as OpenCV, openCL and/or variants thereof.
In at least one embodiment, openVINO supports neural network models for various tasks and operations, such as classification, segmentation, object detection, face recognition, speech recognition, pose estimation (e.g., human and/or object), monocular depth estimation, image restoration, style conversion, motion recognition, coloring, and/or variants thereof.
In at least one embodiment, openVINO includes one or more software tools and/or modules for model optimization, also referred to as a model optimizer. In at least one embodiment, the model optimizer is a command line tool that facilitates the transition between training and deployment of neural network models. In at least one embodiment, the model optimizer optimizes the neural network model for execution on various devices and/or processing units such as GPU, CPU, PPU, GPGPU and/or variants thereof. In at least one embodiment, a model optimizer generates an internal representation of a model and optimizes the model to generate an intermediate representation. In at least one embodiment, the model optimizer reduces the number of layers of the model. In at least one embodiment, the model optimizer removes layers of the model used for training. In at least one embodiment, the model optimizer performs various neural network operations, such as modifying an input of the model (e.g., adjusting a size of the input of the model), modifying a size of the input of the model (e.g., modifying a batch size of the model), modifying a model structure (e.g., modifying a layer of the model), normalizing, quantifying (e.g., converting a weight of the model from a first representation, such as floating point, to a second representation, such as an integer), and/or variants thereof.
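As one generic illustration of the float-to-integer conversion mentioned above (and not the OpenVINO model optimizer's actual interface), symmetric int8 quantization of a weight tensor can be sketched as follows:

```python
import numpy as np

def quantize_to_int8(weights):
    """Map float weights to int8 values plus a single scale factor."""
    scale = float(np.abs(weights).max()) / 127.0 or 1.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

w = np.random.default_rng(0).standard_normal((4, 4)).astype(np.float32)
w_int8, w_scale = quantize_to_int8(w)
w_restored = w_int8.astype(np.float32) * w_scale      # dequantized approximation of w
```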
In at least one embodiment, OpenVINO includes one or more software libraries for reasoning, also referred to as a reasoning engine. In at least one embodiment, the inference engine is a C++ library or any suitable programming language library. In at least one embodiment, an inference engine is used to infer input data. In at least one embodiment, the inference engine implements various classes to infer input data and generate one or more results. In at least one embodiment, the inference engine implements one or more API functions to process intermediate representations, set input and/or output formats, and/or execute models on one or more devices.
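A hedged sketch of driving such an inference engine from Python is shown below; it follows the commonly documented openvino Core workflow, but exact module paths and method names vary across OpenVINO releases, and the model file name is a placeholder.

```python
import numpy as np
from openvino.runtime import Core   # module path may differ in other OpenVINO releases

core = Core()
model = core.read_model("model.xml")                       # intermediate representation
compiled = core.compile_model(model, device_name="CPU")    # pick a target device

input_tensor = np.zeros((1, 3, 224, 224), dtype=np.float32)
results = compiled([input_tensor])                         # run inference on one input
```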
In at least one embodiment, openVINO provides various capabilities for heterogeneous execution of one or more neural network models. In at least one embodiment, heterogeneous execution or heterogeneous computing refers to one or more computing processes and/or systems that utilize one or more types of processors and/or cores (cores). In at least one embodiment, openVINO provides various software functions to execute programs on one or more devices. In at least one embodiment, openVINO provides various software functions to execute programs and/or portions of programs on different devices. In at least one embodiment, openVINO provides various software functions, for example, to run a first code portion on a CPU and a second code portion on a GPU and/or FPGA. In at least one embodiment, openVINO provides various software functions to execute one or more layers of a neural network on one or more devices (e.g., a first set of layers on a first device (e.g., GPU) and a second set of layers on a second device (e.g., CPU).
In at least one embodiment, OpenVINO includes various functions similar to those associated with a CUDA programming model, such as various neural network model operations associated with frameworks such as TensorFlow, PyTorch, and/or variants thereof. In at least one embodiment, one or more CUDA programming model operations are performed using OpenVINO. In at least one embodiment, various systems, methods, and/or techniques described herein are implemented using OpenVINO.
In at least one embodiment, at least one component shown or described with respect to fig. 10 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 10 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 10 is used to cause one or more neural networks to select one or more variations in a feature of one or more text prompts based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 10 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
Data center
FIG. 11 illustrates an example data center 1100 in which at least one embodiment may be used. In at least one embodiment, the data center 1100 includes a data center infrastructure layer 1110, a framework layer 1120, a software layer 1130, and an application layer 1140.
In at least one embodiment, as shown in FIG. 11, the data center infrastructure layer 1110 can include a resource coordinator 1112, grouped computing resources 1114, and node computing resources ("node C.R.s") 1116(1)-1116(N), where "N" represents a positive integer (which can be an integer "N" that is different from the integers used in the other figures). In at least one embodiment, the nodes C.R. 1116(1)-1116(N) may include, but are not limited to, any number of central processing units ("CPUs") or other processors (including accelerators, field programmable gate arrays (FPGAs), graphics processors, etc.), memory storage devices 1118(1)-1118(N) (e.g., dynamic read-only memory, solid state storage, or disk drives), network input/output ("NW I/O") devices, network switches, virtual machines ("VMs"), power modules and cooling modules, and the like. In at least one embodiment, one or more of the nodes C.R. 1116(1)-1116(N) may be a server having one or more of the above-described computing resources.
In at least one embodiment, the grouped computing resources 1114 may include separate groupings of node C.R.s housed within one or more racks (not shown), or many racks housed in data centers at various geographic locations (also not shown). In at least one embodiment, separate groupings of node C.R.s within the grouped computing resources 1114 may include grouped compute, network, memory, or storage resources that may be configured or allocated to support one or more workloads. In at least one embodiment, several node C.R.s including CPUs or processors may be grouped within one or more racks to provide compute resources to support one or more workloads. In at least one embodiment, one or more racks may also include any number of power modules, cooling modules, and network switches in any combination.
In at least one embodiment, resource coordinator 1112 may configure or otherwise control one or more nodes c.r.1116 (1) -1116 (N) and/or grouped computing resources 1114. In at least one embodiment, the resource coordinator 1112 may include a software design infrastructure ("SDI") management entity for the data center 1100. In at least one embodiment, resource coordinator 1112 may include hardware, software, or some combination thereof.
In at least one embodiment, as shown in FIG. 11, the framework layer 1120 includes a job scheduler 1122, a configuration manager 1124, a resource manager 1126, and a distributed file system 1128. In at least one embodiment, the framework layer 1120 can include a framework supporting the software 1132 of the software layer 1130 and/or the one or more applications 1142 of the application layer 1140. In at least one embodiment, the software 1132 or the applications 1142 may include web-based service software or applications, respectively, such as those provided by Amazon Web Services, Google Cloud, and Microsoft Azure. In at least one embodiment, the framework layer 1120 may be, but is not limited to, a type of free and open-source software web application framework, such as Apache Spark™ (hereinafter referred to as "Spark"), that may utilize the distributed file system 1128 for large-scale data processing (e.g., "big data"). In at least one embodiment, the job scheduler 1122 may include a Spark driver to facilitate scheduling of workloads supported by the various layers of the data center 1100. In at least one embodiment, the configuration manager 1124 may be capable of configuring different layers, such as the software layer 1130 and the framework layer 1120 including Spark and the distributed file system 1128 for supporting large-scale data processing. In at least one embodiment, the resource manager 1126 may be capable of managing clustered or grouped computing resources mapped to or allocated for supporting the distributed file system 1128 and the job scheduler 1122. In at least one embodiment, the clustered or grouped computing resources can include the grouped computing resources 1114 at the data center infrastructure layer 1110. In at least one embodiment, the resource manager 1126 may coordinate with the resource coordinator 1112 to manage these mapped or allocated computing resources.
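As a non-limiting illustration of a workload scheduled on such a framework layer, the following PySpark sketch reads records from a distributed file system and runs a simple aggregation; the file path, column name, and application name are hypothetical placeholders, not configuration of the data center 1100.

```python
from pyspark.sql import SparkSession

# Build a Spark session; in a data center deployment the master URL and the
# distributed file system path below would come from the cluster configuration.
spark = (
    SparkSession.builder
    .appName("sensor-log-aggregation")  # hypothetical application name
    .getOrCreate()
)

# Read records from the distributed file system and run a simple aggregation;
# the Spark driver (cf. job scheduler 1122) schedules the work across the cluster.
logs = spark.read.json("hdfs:///datacenter/sensor_logs/*.json")  # hypothetical path
summary = logs.groupBy("node_id").count()
summary.show()

spark.stop()
```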
In at least one embodiment, the software 1132 included in the software layer 1130 can include software used by at least portions of the nodes c.r.1116 (1) -1116 (N), the grouped computing resources 1114, and/or the distributed file system 1128 of the framework layer 1120. In at least one embodiment, the one or more types of software may include, but are not limited to, internet web search software, email virus scanning software, database software, and streaming video content software.
In at least one embodiment, the one or more applications 1142 included in the application layer 1140 may include one or more types of applications used by at least portions of the nodes C.R. 1116(1)-1116(N), the grouped computing resources 1114, and/or the distributed file system 1128 of the framework layer 1120. In at least one embodiment, the one or more types of applications may include, but are not limited to, any number of genomics applications, cognitive computing applications, and machine learning applications, including training or inference software, machine learning framework software (e.g., PyTorch, TensorFlow, Caffe, etc.), or other machine learning applications used in connection with one or more embodiments.
In at least one embodiment, any of the configuration manager 1124, the resource manager 1126, and the resource coordinator 1112 may implement any number and type of self-modifying actions based on any amount and type of data acquired in any technically feasible manner. In at least one embodiment, the self-modifying actions may relieve a data center operator of the data center 1100 from making potentially poor configuration decisions and may help avoid underutilized and/or poorly performing portions of the data center.
In at least one embodiment, the data center 1100 may include tools, services, software, or other resources for training one or more machine learning models or predicting or reasoning about information using one or more machine learning models in accordance with one or more embodiments described herein. For example, in at least one embodiment, the machine learning model may be trained by computing weight parameters from a neural network architecture using the software and computing resources described above with respect to the data center 1100. In at least one embodiment, by using the weight parameters calculated by one or more training techniques described herein, information can be inferred or predicted using the resources described above with respect to the data center 1100 using a trained machine learning model corresponding to one or more neural networks.
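The following minimal PyTorch sketch illustrates training that computes weight parameters which can later be used for inference; the architecture, data, and hyperparameters are illustrative placeholders and do not represent a specific model trained in the data center 1100.

```python
import torch
from torch import nn

# Toy model and data stand in for whatever neural network architecture is trained
# on the data center's resources; dimensions and hyperparameters are illustrative.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

inputs = torch.randn(256, 16)
targets = torch.randn(256, 1)

# Training computes the weight parameters; the resulting state_dict can later be
# loaded elsewhere (e.g., on inference resources) to predict or infer information.
for epoch in range(10):
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

torch.save(model.state_dict(), "trained_weights.pt")
```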
In at least one embodiment, the data center may use the above-described resources to perform training and/or inference using CPUs, application-specific integrated circuits (ASICs), GPUs, FPGAs, or other hardware. Furthermore, one or more of the software and/or hardware resources described above may be configured as a service that allows a user to train or perform inference on information, such as image recognition, speech recognition, or other artificial intelligence services.
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, logic 915 may be used in data center 1100 for performing inference or predictive operations based at least in part on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 11 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 11 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 11 is used to cause one or more neural networks to select one or more variations in a feature of one or more text prompts based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 11 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
Autonomous vehicle
Fig. 12A illustrates an example of an autonomous vehicle 1200 in accordance with at least one embodiment. In at least one embodiment, autonomous vehicle 1200 (alternatively referred to herein as "vehicle 1200") may be, but is not limited to, a passenger vehicle, such as a car, truck, bus, and/or another type of vehicle that accommodates one or more passengers. In at least one embodiment, the vehicle 1200 may be a semi-tractor-trailer truck for hauling cargo. In at least one embodiment, the vehicle 1200 may be an aircraft, robotic vehicle, or other type of vehicle.
Autonomous vehicles may be described in terms of automation levels defined in the "Taxonomy and Definitions for Terms Related to Driving Automation Systems for On-Road Motor Vehicles" (e.g., Standard No. J3016-201806 published on June 15, 2018, Standard No. J3016-201609 published on September 30, 2016, and previous and future versions of that standard) by the National Highway Traffic Safety Administration ("NHTSA") and the Society of Automotive Engineers ("SAE"). In at least one embodiment, the vehicle 1200 may be capable of functionality in accordance with one or more of level 1 through level 5 of the autonomous driving levels. For example, in at least one embodiment, the vehicle 1200 may be capable of conditional automation (level 3), high automation (level 4), and/or full automation (level 5), depending on the embodiment.
In at least one embodiment, the vehicle 1200 may include, but is not limited to, components such as chassis, body, wheels (e.g., 2, 4, 6, 8, 18, etc.), tires, axles, and other components of the vehicle. In at least one embodiment, the vehicle 1200 may include, but is not limited to, a propulsion system 1250, such as an internal combustion engine, a hybrid device, an all-electric engine, and/or another propulsion system type. In at least one embodiment, propulsion system 1250 may be connected to a driveline of vehicle 1200, which may include, but is not limited to, a transmission, for enabling propulsion of vehicle 1200. In at least one embodiment, propulsion system 1250 may be controlled in response to receiving a signal from throttle/accelerator 1252.
In at least one embodiment, a steering system 1254 (which may include, but is not limited to, a steering wheel) is used to steer (e.g., along a desired path or route) the vehicle 1200 when the propulsion system 1250 is running (e.g., when the vehicle 1200 is in motion). In at least one embodiment, the steering system 1254 can receive signals from the steering actuators 1256. In at least one embodiment, the steering wheel may be optional for fully automated (level 5) functions. In at least one embodiment, brake sensor system 1246 can be used to operate vehicle brakes in response to receiving signals from brake actuators 1248 and/or brake sensors.
In at least one embodiment, one or more controllers 1236, which may include, but are not limited to, one or more systems on a chip ("SoCs") (not shown in fig. 12A) and/or one or more graphics processing units ("GPUs"), provide signals (e.g., representative of commands) to one or more components and/or systems of the vehicle 1200. For example, in at least one embodiment, the one or more controllers 1236 can send signals to operate the vehicle brakes via the brake actuators 1248, to operate the steering system 1254 via the one or more steering actuators 1256, and to operate the propulsion system 1250 via the one or more throttle/accelerators 1252. In at least one embodiment, the one or more controllers 1236 may include one or more on-board (e.g., integrated) computing devices that process sensor signals and output operational commands (e.g., signals representing commands) to enable autonomous driving and/or to assist a human driver in driving the vehicle 1200. In at least one embodiment, the one or more controllers 1236 can include a first controller for autonomous driving functions, a second controller for functional safety functions, a third controller for artificial intelligence functions (e.g., computer vision), a fourth controller for infotainment functions, a fifth controller for redundancy in emergency situations, and/or other controllers. In at least one embodiment, a single controller may handle two or more of the above-described functions, two or more controllers may handle a single function, and/or any combination thereof.
In at least one embodiment, the one or more controllers 1236 provide signals to control one or more components and/or systems of the vehicle 1200 in response to sensor data received from one or more sensors (e.g., sensor inputs). In at least one embodiment, the sensor data may be received from, for example but not limited to, the following sensors: one or more global navigation satellite system ("GNSS") sensors 1258 (e.g., one or more global positioning system sensors), one or more RADAR sensors 1260, one or more ultrasonic sensors 1262, one or more LIDAR sensors 1264, one or more inertial measurement unit ("IMU") sensors 1266 (e.g., one or more accelerometers, one or more gyroscopes, one or more magnetic compasses, one or more magnetometers, etc.), one or more microphones 1296, one or more stereo cameras 1268, one or more wide-angle cameras 1270 (e.g., fisheye cameras), one or more infrared cameras 1272, one or more surround cameras 1274 (e.g., 360-degree cameras), remote cameras (not shown in fig. 12A), mid-range cameras (not shown in fig. 12A), one or more speed sensors 1244 (e.g., for measuring a speed of the vehicle 1200), one or more steering sensors 1240, one or more brake sensors (e.g., as part of the brake sensor system 1246), and/or other types of sensors.
In at least one embodiment, the one or more controllers 1236 can receive input (e.g., represented by input data) from an instrument panel 1232 of the vehicle 1200 and provide output (e.g., represented by output data, display data, etc.) via a human-machine interface ("HMI") display 1234, an audible annunciator, a speaker, and/or via other components of the vehicle 1200. In at least one embodiment, the output can include information such as vehicle speed, time, map data (e.g., a high definition map (not shown in fig. 12A)), location data (e.g., a location of the vehicle 1200, such as on a map), directions, locations of other vehicles (e.g., an occupancy grid), information about objects and statuses of objects as perceived by the one or more controllers 1236, and the like. For example, in at least one embodiment, the HMI display 1234 can display information about the presence of one or more objects (e.g., a street sign, a warning sign, a traffic light changing, etc.) and/or information about driving maneuvers that the vehicle has made, is making, or is about to make (e.g., changing lanes now, taking exit 34B in two miles, etc.).
In at least one embodiment, the vehicle 1200 further includes a network interface 1224 that may communicate over one or more networks using one or more wireless antennas 1226 and/or one or more modems. For example, in at least one embodiment, the network interface 1224 may be capable of communicating over long term evolution ("LTE"), wideband code division multiple access ("WCDMA"), universal mobile telecommunications system ("UMTS"), global system for mobile communications ("GSM"), IMT-CDMA multi-carrier ("CDMA2000") networks, and the like. In at least one embodiment, the one or more wireless antennas 1226 may also enable communication between objects (e.g., vehicles, mobile devices, etc.) in the environment using one or more local area networks (such as Bluetooth, Bluetooth Low Energy ("LE"), Z-Wave, ZigBee, etc.) and/or one or more low power wide area networks ("LPWANs") (such as LoRaWAN, SigFox, etc. protocols).
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, logic 915 may be used in vehicle 1200 for performing inference or predictive operations based at least in part on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 12A is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 12A is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 12A is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 12A is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
Fig. 12B illustrates an example of camera position and field of view of the autonomous vehicle 1200 of fig. 12A in accordance with at least one embodiment. In at least one embodiment, the camera and respective field of view are one example embodiment and are not intended to be limiting. For example, in at least one embodiment, additional and/or alternative cameras may be included and/or the cameras may be located at different locations on the vehicle 1200.
In at least one embodiment, the camera types used may include, but are not limited to, digital cameras that may be suitable for use with the components and/or systems of the vehicle 1200. In at least one embodiment, one or more cameras may operate at automotive safety integrity level ("ASIL") B and/or at another ASIL. In at least one embodiment, the camera types may be capable of any image capture rate, such as 60 frames per second (fps), 120 fps, 240 fps, etc., depending on the embodiment. In at least one embodiment, the cameras may be capable of using rolling shutters, global shutters, other types of shutters, or a combination thereof. In at least one embodiment, the color filter array may include a red clear clear clear ("RCCC") color filter array, a red clear clear blue ("RCCB") color filter array, a red blue green clear ("RBGC") color filter array, a Foveon X3 color filter array, a Bayer sensor ("RGGB") color filter array, a monochrome sensor color filter array, and/or another type of color filter array. In at least one embodiment, clear pixel cameras, such as cameras with an RCCC, RCCB, and/or RBGC color filter array, may be used in an effort to increase light sensitivity.
In at least one embodiment, one or more cameras may be used to perform advanced driver assistance system ("ADAS") functions (e.g., as part of a redundant or fail-safe design). For example, in at least one embodiment, a multifunctional monocular camera may be installed to provide functions including lane departure warning, traffic sign assistance, and intelligent headlight control. In at least one embodiment, one or more cameras (e.g., all cameras) may record and provide image data (e.g., video) simultaneously.
In at least one embodiment, one or more cameras may be mounted in a mounting assembly, such as a custom designed (three-dimensional ("3D") printed) assembly, in order to cut out stray light and reflections from within the vehicle 1200 (e.g., reflections from the instrument panel reflected off the windshield mirror), which may interfere with the camera's image data capture capabilities. With respect to rearview mirror mounting assemblies, in at least one embodiment, the rearview mirror assembly can be custom 3D printed such that the camera mounting plate matches the shape of the rearview mirror. In at least one embodiment, one or more cameras may be integrated into the rearview mirror. In at least one embodiment, for side-view cameras, one or more cameras may also be integrated within the four pillars at each corner of the cabin.
In at least one embodiment, a camera (e.g., a forward facing camera) having a field of view that includes portions of the environment in front of the vehicle 1200 may be used for looking around to help identify forward paths and obstacles, as well as to help provide information critical to generating an occupancy grid and/or determining a preferred vehicle path with the aid of one or more controllers 1236 and/or control socs. In at least one embodiment, the forward facing camera may be used to perform many ADAS functions similar to LIDAR, including but not limited to emergency braking, pedestrian detection, and collision avoidance. In at least one embodiment, the forward facing camera may also be used for ADAS functions and systems, including, but not limited to, lane departure warning ("LDW"), automatic cruise control ("ACC"), and/or other functions (such as traffic sign recognition).
In at least one embodiment, a wide variety of cameras may be used in a forward-facing configuration, including, for example, a monocular camera platform that includes a CMOS ("complementary metal oxide semiconductor") color imager. In at least one embodiment, the wide-angle camera 1270 may be used to perceive objects (e.g., pedestrians, intersection traffic, or bicycles) entering the view from the periphery. Although only one wide-angle camera 1270 is shown in fig. 12B, in other embodiments there may be any number (including zero) of wide-angle cameras on the vehicle 1200. In at least one embodiment, any number of remote cameras 1298 (e.g., a long-range stereo camera pair) may be used for depth-based object detection, particularly for objects for which a neural network has not yet been trained. In at least one embodiment, one or more remote cameras 1298 may also be used for object detection and classification as well as basic object tracking.
In at least one embodiment, any number of stereo cameras 1268 may also be included in the forward configuration. In at least one embodiment, one or more stereo cameras 1268 may include an integrated control unit including an extensible processing unit that may provide programmable logic ("FPGA") and a multi-core microprocessor with controller area network ("CAN") or ethernet interfaces integrated on a single chip. In at least one embodiment, such a unit may be used to generate a 3D map of the environment of the vehicle 1200, including distance estimates for all points in the image. In at least one embodiment, the one or more stereo cameras 1268 may include, but are not limited to, compact stereo vision sensors, which may include, but are not limited to, two camera lenses (one on each of the left and right) and an image processing chip, which may measure the distance from the vehicle 1200 to the target object and use the generated information (e.g., metadata) to activate autonomous emergency braking and lane departure warning functions. In at least one embodiment, other types of stereo cameras 1268 may be used in addition to or instead of those described herein.
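As a non-limiting illustration of the distance estimation such a stereo unit can perform, the following sketch applies the textbook pinhole-stereo relation depth = focal_length x baseline / disparity; the focal length and baseline values are hypothetical and this is not the integrated control unit's actual algorithm.

```python
import numpy as np

def disparity_to_depth(disparity_px: np.ndarray,
                       focal_length_px: float,
                       baseline_m: float) -> np.ndarray:
    """Textbook pinhole-stereo relation: depth = focal_length * baseline / disparity."""
    disparity = np.where(disparity_px > 0, disparity_px, np.nan)  # mask invalid matches
    return focal_length_px * baseline_m / disparity

# Illustrative numbers only: 1000-pixel focal length, 12 cm baseline.
disparities = np.array([[40.0, 20.0], [10.0, 0.0]])
print(disparity_to_depth(disparities, focal_length_px=1000.0, baseline_m=0.12))
```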
In at least one embodiment, cameras having a field of view that includes portions of the environment to the sides of the vehicle 1200 (e.g., side-view cameras) may be used for looking around, providing information used to create and update occupancy grids, as well as to generate side impact collision warnings. For example, in at least one embodiment, surround cameras 1274 (e.g., four surround cameras as shown in fig. 12B) may be positioned on the vehicle 1200. In at least one embodiment, the one or more surround cameras 1274 may include, but are not limited to, any number and combination of wide-angle cameras, one or more fisheye cameras, one or more 360-degree cameras, and/or the like. For example, in at least one embodiment, four fisheye cameras may be positioned at the front, rear, and sides of the vehicle 1200. In at least one embodiment, the vehicle 1200 may use three surround cameras 1274 (e.g., left, right, and rear), and may utilize one or more other cameras (e.g., a forward-facing camera) as a fourth look-around camera.
In at least one embodiment, a camera (e.g., a rear-view camera) having a field of view that includes portions of the environment behind the vehicle 1200 may be used for parking assistance, looking around, rear collision warning, and creating and updating occupancy grids. In at least one embodiment, a wide variety of cameras may be used, including, but not limited to, cameras that are also suitable as one or more forward facing cameras (e.g., remote camera 1298 and/or one or more mid-range cameras 1276, one or more stereo cameras 1268, one or more infrared cameras 1272, etc.), as described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 12B is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 12B is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 12B is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 12B is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
Fig. 12C is a block diagram illustrating an example system architecture of the autonomous vehicle 1200 of fig. 12A in accordance with at least one embodiment. In at least one embodiment, each of the components, features, and systems of the vehicle 1200 in fig. 12C is shown connected via a bus 1202. In at least one embodiment, the bus 1202 may include, but is not limited to, a CAN data interface (alternatively referred to herein as a "CAN bus"). In at least one embodiment, the CAN may be a network internal to the vehicle 1200 used to assist in controlling various features and functions of the vehicle 1200, such as brake actuation, acceleration, braking, steering, windshield wipers, and the like. In at least one embodiment, the bus 1202 can be configured to have tens or even hundreds of nodes, each node having its own unique identifier (e.g., a CAN ID). In at least one embodiment, the bus 1202 may be read to find steering wheel angle, ground speed, engine revolutions per minute ("RPM"), button positions, and/or other vehicle status indicators. In at least one embodiment, the bus 1202 may be a CAN bus that is ASIL B compliant.
In at least one embodiment, FlexRay and/or Ethernet protocols may be used in addition to or instead of CAN. In at least one embodiment, there may be any number of buses forming the bus 1202, which may include, but are not limited to, zero or more CAN buses, zero or more FlexRay buses, zero or more Ethernet buses, and/or zero or more other types of buses using different protocols. In at least one embodiment, two or more buses may be used to perform different functions and/or may be used for redundancy. For example, a first bus may be used for collision avoidance functionality and a second bus may be used for actuation control. In at least one embodiment, each bus of the bus 1202 may communicate with any of the components of the vehicle 1200, and two or more buses of the bus 1202 may communicate with corresponding components. In at least one embodiment, each of any number of systems on a chip ("SoCs") 1204 (e.g., SoC 1204(A) and SoC 1204(B)), each of the one or more controllers 1236, and/or each computer within the vehicle can have access to the same input data (e.g., inputs from sensors of the vehicle 1200), and can be connected to a common bus, such as a CAN bus.
In at least one embodiment, the vehicle 1200 may include one or more controllers 1236, such as those described herein with respect to fig. 12A. In at least one embodiment, the controller 1236 may be used for a variety of functions. In at least one embodiment, the controller 1236 may be coupled to any of a variety of other components and systems of the vehicle 1200 and may be used to control the vehicle 1200, the artificial intelligence of the vehicle 1200, the infotainment of the vehicle 1200, and/or other functions.
In at least one embodiment, the vehicle 1200 may include any number of socs 1204. In at least one embodiment, each of the socs 1204 may include, but is not limited to, a central processing unit ("one or more CPUs") 1206, a graphics processing unit ("one or more GPUs") 1208, one or more processors 1210, one or more caches 1212, one or more accelerators 1214, one or more data stores 1216, and/or other components and features not shown. In at least one embodiment, one or more socs 1204 may be used to control vehicle 1200 in a wide variety of platforms and systems. For example, in at least one embodiment, one or more socs 1204 may be combined in a system (e.g., a system of vehicle 1200) with a high definition ("HD") map 1222, which high definition map 1222 may obtain map refreshes and/or updates from one or more servers (not shown in fig. 12C) via a network interface 1224.
In at least one embodiment, the one or more CPUs 1206 may include a CPU cluster or CPU complex (alternatively referred to herein as a "CCPLEX"). In at least one embodiment, the one or more CPUs 1206 may include multiple cores and/or level two ("L2") caches. For example, in at least one embodiment, the one or more CPUs 1206 may include eight cores in a coherent multiprocessor configuration. In at least one embodiment, the one or more CPUs 1206 may include four dual-core clusters, where each cluster has a dedicated L2 cache (e.g., a 2 megabyte (MB) L2 cache). In at least one embodiment, the one or more CPUs 1206 (e.g., CCPLEX) may be configured to support simultaneous cluster operation, enabling any combination of clusters of the one or more CPUs 1206 to be active at any given time.
In at least one embodiment, one or more CPUs 1206 may implement power management functions including, but not limited to, one or more of the following features: when idle, each hardware block can be automatically clock-gated to save dynamic power; each core clock may be gated when the core is not actively executing instructions due to executing wait interrupt ("WFI")/wait event ("WFE") instructions; each core may be independently power gated; when all cores are clock-or power-gated, each core cluster may be clock-gated independently; and/or each core cluster may be power gated independently when all cores are power gated. In at least one embodiment, one or more CPUs 1206 may further implement an enhanced algorithm for managing power states, in which allowed power states and expected wake-up times are specified, and the hardware/microcode determines the optimal power states to be entered for the cores, clusters, and CCPLEX. In at least one embodiment, the processing core may support a simplified power state entry sequence in software, where work is offloaded to microcode.
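The following sketch illustrates, at a purely conceptual level, the kind of power-state selection described above: given the allowed power states and an expected idle time before the next wake-up, pick the lowest-power state whose resume latency still fits. The state names, latencies, and power figures are hypothetical and do not represent the actual hardware/microcode behavior.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class PowerState:
    name: str
    resume_latency_us: float  # time needed to wake back up from this state
    power_mw: float           # power drawn while in this state

def pick_power_state(states: List[PowerState],
                     allowed: List[str],
                     expected_idle_us: float) -> Optional[PowerState]:
    """Among the allowed states, pick the lowest-power state whose resume latency
    still fits within the expected idle time before the next wake-up."""
    candidates = [s for s in states
                  if s.name in allowed and s.resume_latency_us <= expected_idle_us]
    return min(candidates, key=lambda s: s.power_mw, default=None)

# Hypothetical states: clock gating saves less power but wakes quickly;
# power gating saves more power but takes longer to resume.
states = [PowerState("clock-gated", 5.0, 120.0), PowerState("power-gated", 150.0, 10.0)]
print(pick_power_state(states, allowed=["clock-gated", "power-gated"], expected_idle_us=50.0))
```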
In at least one embodiment, the one or more GPUs 1208 can include an integrated GPU (alternatively referred to herein as an "iGPU"). In at least one embodiment, one or more GPUs 1208 may be programmable and may be efficient for parallel workloads. In at least one embodiment, one or more GPUs 1208 may use an enhanced tensor instruction set. In at least one embodiment, the one or more GPUs 1208 may include one or more streaming microprocessors, wherein each streaming microprocessor may include a level one ("L1") cache (e.g., an L1 cache having a storage capacity of at least 96 KB), and two or more streaming microprocessors may share an L2 cache (e.g., an L2 cache having a storage capacity of 512 KB). In at least one embodiment, the one or more GPUs 1208 can comprise at least eight streaming microprocessors. In at least one embodiment, one or more GPUs 1208 can use one or more computer Application Programming Interfaces (APIs). In at least one embodiment, one or more GPUs 1208 can use one or more parallel computing platforms and/or programming models (e.g., CUDA model of NVIDIA).
In at least one embodiment, one or more GPUs 1208 may be power optimized to achieve optimal performance in automotive and embedded applications. For example, in at least one embodiment, one or more GPUs 1208 may be fabricated on fin field effect transistor ("FinFET") circuitry. In at least one embodiment, each streaming microprocessor may contain multiple hybrid precision processing cores partitioned into multiple blocks. For example, but not limited to, 64 FP32 cores and 32 FP64 cores may be partitioned into four processing blocks. In at least one embodiment, each processing block may be allocated 16 FP32 cores, 8 FP64 cores, 16 INT32 cores, two hybrid precision NVIDIA tensor cores for deep learning matrix arithmetic, a zero level ("L0") instruction cache, a scheduler (e.g., thread bundle scheduler) or sequencer, a dispatch unit, and/or a 64KB register file. In at least one embodiment, a streaming microprocessor may include independent parallel integer and floating point data paths for employing a mix of computation and addressing operations to provide efficient execution of workloads. In at least one embodiment, the streaming microprocessor may include independent thread scheduling capabilities to enable finer granularity synchronization and collaboration between parallel threads. In at least one embodiment, a streaming microprocessor may include a combined L1 data cache and shared memory unit to improve performance while simplifying programming.
In at least one embodiment, one or more GPUs 1208 may include high bandwidth memory ("HBM") and/or 16GB HBM2 memory subsystem, in some examples to provide a peak memory bandwidth of about 900 GB/sec. In at least one embodiment, a synchronous graphics random access memory ("SGRAM") such as a fifth generation graphics double data rate type synchronous random access memory ("GDDR 5") may be used in addition to or in place of HBM memory.
In at least one embodiment, the one or more GPUs 1208 can include unified memory technology. In at least one embodiment, address translation services ("ATS") support may be used to allow the one or more GPUs 1208 to directly access page tables of the one or more CPUs 1206. In at least one embodiment, when a memory management unit ("MMU") of a GPU of the one or more GPUs 1208 experiences a miss, an address translation request may be sent to the one or more CPUs 1206. In response, in at least one embodiment, a CPU of the one or more CPUs 1206 may look in its page tables for the virtual-to-physical mapping of the address and send the translation back to the one or more GPUs 1208. In at least one embodiment, unified memory technology may allow a single unified virtual address space for memory of both the one or more CPUs 1206 and the one or more GPUs 1208, thereby simplifying programming of the one or more GPUs 1208 and porting of applications to the one or more GPUs 1208.
In at least one embodiment, the one or more GPUs 1208 can include any number of access counters that can track the frequency of accesses by the one or more GPUs 1208 to memory of other processors. In at least one embodiment, one or more access counters may help ensure that memory pages are moved to the physical memory of the processor that accesses those pages most frequently, thereby improving the efficiency of memory ranges shared among processors.
In at least one embodiment, one or more socs 1204 may include any number of caches 1212, including those described herein. For example, in at least one embodiment, the one or more caches 1212 may include a three-level ("L3") cache that is available to both the one or more CPUs 1206 and the one or more GPUs 1208 (e.g., connected to the one or more CPUs 1206 and the one or more GPUs 1208). In at least one embodiment, the one or more caches 1212 may include a write-back cache that may track the state of lines, such as by using a cache coherency protocol (e.g., MEI, MESI, MSI, etc.). In at least one embodiment, the L3 cache may include 4MB of memory or more, although smaller cache sizes may be used, depending on the embodiment.
In at least one embodiment, the one or more socs 1204 can include one or more accelerators 1214 (e.g., hardware accelerators, software accelerators, or a combination thereof). In at least one embodiment, one or more socs 1204 may include a hardware acceleration cluster, which may include optimized hardware accelerators and/or large on-chip memory. In at least one embodiment, large on-chip memory (e.g., 4MB of SRAM) may enable the hardware acceleration cluster to accelerate neural networks and other computations. In at least one embodiment, a hardware acceleration cluster may be used to supplement one or more GPUs 1208 and offload some tasks of the one or more GPUs 1208 (e.g., to free up more cycles of the one or more GPUs 1208 to perform other tasks). In at least one embodiment, one or more accelerators 1214 can be used for target workloads (e.g., perception, convolutional neural network ("CNN"), recurrent neural network ("RNN"), etc.) that are stable enough to withstand acceleration challenges. In at least one embodiment, the CNNs may include area or area convolutional neural networks ("RCNNs") and fast RCNNs (e.g., as used for object detection) or other types of CNNs.
In at least one embodiment, the one or more accelerators 1214 (e.g., hardware acceleration clusters) may include one or more deep learning accelerators ("DLAs"). In at least one embodiment, the one or more DLAs may include, but are not limited to, one or more tensor processing units ("TPUs"), which may be configured to provide an additional ten trillion operations per second for deep learning applications and inference. In at least one embodiment, the TPUs may be accelerators configured and optimized for performing image processing functions (e.g., for CNNs, RCNNs, etc.). In at least one embodiment, the one or more DLAs may be further optimized for a specific set of neural network types and floating point operations and inference. In at least one embodiment, the design of the one or more DLAs may provide higher performance per millimeter than a typical general-purpose GPU and typically greatly exceeds the performance of a CPU. In at least one embodiment, the one or more TPUs may perform several functions, including a single-instance convolution function supporting, for example, INT8, INT16, and FP16 data types for both features and weights, as well as post-processor functions. In at least one embodiment, the one or more DLAs may quickly and efficiently execute neural networks, particularly CNNs, on processed or unprocessed data for any of a variety of functions, including, for example and without limitation: a CNN for object identification and detection using data from camera sensors; a CNN for distance estimation using data from camera sensors; a CNN for emergency vehicle detection and identification using data from the microphones; a CNN for facial recognition and vehicle owner identification using data from camera sensors; and/or a CNN for safety and/or security related events.
In at least one embodiment, one or more DLAs may perform any of the functions of one or more GPUs 1208, and by using inference accelerators, for example, a designer may target one or more DLAs or one or more GPUs 1208 for any of the functions. For example, in at least one embodiment, the designer may focus the processing and floating point operations of the CNN on one or more DLAs and leave other functionality to one or more GPUs 1208 and/or one or more accelerators 1214.
In at least one embodiment, the one or more accelerators 1214 may include a programmable visual accelerator ("PVA"), which may alternatively be referred to herein as a computer visual accelerator. In at least one embodiment, the PVA may be designed and configured to accelerate computer vision algorithms for advanced driver assistance systems ("ADAS") 1238, autonomous driving, augmented reality ("AR") applications, and/or virtual reality ("VR") applications. In at least one embodiment, PVA may provide a balance between performance and flexibility. For example, in at least one embodiment, each PVA may include, for example, but not limited to, any number of reduced instruction set computer ("RISC") cores, direct memory access ("DMA"), and/or any number of vector processors.
In at least one embodiment, the RISC core may interact with an image sensor (e.g., an image sensor of any of the cameras described herein), an image signal processor, or the like. In at least one embodiment, each RISC core may include any amount of memory. In at least one embodiment, the RISC core may use any of a variety of protocols, depending on the embodiment. In at least one embodiment, the RISC core may execute a real-time operating system ("RTOS"). In at least one embodiment, the RISC core may be implemented using one or more integrated circuit devices, application specific integrated circuits ("ASICs"), and/or memory devices. For example, in at least one embodiment, the RISC core may include an instruction cache and/or tightly coupled RAM.
In at least one embodiment, the DMA may enable components of the PVA to access system memory independently of the one or more CPUs 1206. In at least one embodiment, the DMA may support any number of features for providing optimization to the PVA, including, but not limited to, supporting multidimensional addressing and/or cyclic addressing. In at least one embodiment, the DMA may support up to six or more addressed dimensions, which may include, but are not limited to, block width, block height, block depth, horizontal block stepping, vertical block stepping, and/or depth stepping.
In at least one embodiment, the vector processor may be a programmable processor that may be designed to efficiently and flexibly perform programming for computer vision algorithms and provide signal processing capabilities. In at least one embodiment, the PVA may include a PVA core and two vector processing subsystem partitions. In at least one embodiment, the PVA core may include a processor subsystem, one or more DMA engines (e.g., two DMA engines), and/or other peripherals. In at least one embodiment, the vector processing subsystem may operate as a main processing engine of the PVA, and may include a vector processing unit ("VPU"), an instruction cache, and/or a vector memory (e.g., "VMEM"). In at least one embodiment, the VPU core can include a digital signal processor, such as, for example, a single instruction multiple data ("SIMD"), very long instruction word ("VLIW") digital signal processor. In at least one embodiment, a combination of SIMD and VLIW may improve throughput and speed.
In at least one embodiment, each vector processor may include an instruction cache and may be coupled to a dedicated memory. Thus, in at least one embodiment, each vector processor may be configured to execute independently of the other vector processors. In at least one embodiment, the vector processor included in a particular PVA may be configured to employ data parallelism. For example, in at least one embodiment, multiple vector processors included in a single PVA may perform general purpose computer vision algorithms, but on different areas of the image. In at least one embodiment, the vector processor included in a particular PVA may perform different computer vision algorithms on one image at the same time, or even on sequential images or portions of images. In at least one embodiment, any number of PVAs may be included in a hardware acceleration cluster, and any number of vector processors may be included in each PVA. In at least one embodiment, the PVA may include additional error correction code ("ECC") memory for enhancing overall system security.
In at least one embodiment, the one or more accelerators 1214 may include an on-chip computer vision network and static random access memory ("SRAM") for providing high-bandwidth, low-latency SRAM for the one or more accelerators 1214. In at least one embodiment, the on-chip memory may include at least 4 MB of SRAM consisting of, for example and without limitation, eight field-configurable memory blocks that both the PVA and the DLA can access. In at least one embodiment, each pair of memory blocks may include an advanced peripheral bus ("APB") interface, configuration circuitry, a controller, and a multiplexer. In at least one embodiment, any type of memory may be used. In at least one embodiment, the PVA and the DLA may access the memory via a backbone that provides the PVA and the DLA with high-speed access to the memory. In at least one embodiment, the backbone may include an on-chip computer vision network that interconnects the PVA and the DLA to the memory (e.g., using the APB).
In at least one embodiment, the on-chip computer vision network may include an interface that determines that both PVA and DLA provide ready and valid signals before transmitting any control signals/addresses/data. In at least one embodiment, the interface may provide separate phases and separate channels for transmitting control signals/addresses/data, as well as burst-type communications for continuous data transmission. In at least one embodiment, the interface may conform to International organization for standardization ("ISO") 26262 or International electrotechnical Commission ("IEC") 61508 standards, although other standards and protocols may be used.
In at least one embodiment, one or more of the socs 1204 may include a real-time ray tracing hardware accelerator. In at least one embodiment, a real-time ray tracing hardware accelerator may be used to quickly and efficiently determine the location and range of objects (e.g., within a world model) to generate real-time visualization simulations for RADAR signal interpretation, for sound propagation synthesis and/or analysis, for simulation of a sonor system, for general wave propagation simulation, for comparison with LIDAR data for positioning and/or other functions, and/or for other uses.
In at least one embodiment, one or more accelerators 1214 may have broad utility for autonomous driving. In at least one embodiment, PVA can be used for critical processing stages in ADAS and autonomous vehicles. In at least one embodiment, the ability of PVA at low power consumption and low latency is well matched to the domain of algorithms that require predictable processing. In other words, PVA performs excellently in semi-dense or dense conventional computing, even on small data sets, which may require predictable run times with low latency and low power consumption. In at least one embodiment, such as in vehicle 1200, PVA may be designed to run classical computer vision algorithms, as they may be efficient in object detection and integer mathematical operations.
For example, in accordance with at least one embodiment of the technology, the PVA is used to perform computer stereo vision. In at least one embodiment, a semi-global matching based algorithm may be used in some examples, although this is not intended to be limiting. In at least one embodiment, applications for level 3-5 autonomous driving use motion estimation/stereo matching on the fly (e.g., structure from motion, pedestrian recognition, lane detection, etc.). In at least one embodiment, the PVA may perform computer stereo vision functions on inputs from two monocular cameras.
In at least one embodiment, the PVA may be used to perform dense optical flow. For example, in at least one embodiment, the PVA may process raw RADAR data (e.g., using a 4D fast Fourier transform) to provide processed RADAR data. In at least one embodiment, the PVA is used to perform time-of-flight depth processing, for example, by processing raw time-of-flight data to provide processed time-of-flight data.
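A conceptual NumPy sketch of FFT-based processing of a RADAR data cube follows; the array dimensions are hypothetical, and the sketch only illustrates the idea of applying Fourier transforms over the sample and chirp axes, not the PVA's actual processing.

```python
import numpy as np

# Conceptual stand-in for raw RADAR samples: (frames, antennas, chirps, samples).
# Real PVA processing would differ; this only illustrates the FFT-based idea.
raw = np.random.randn(4, 8, 64, 256).astype(np.float32)

# A 4D FFT over the full data cube.
spectrum = np.fft.fftn(raw, axes=(0, 1, 2, 3))

# In range-Doppler processing, transforming over the sample axis yields range bins
# and transforming over the chirp axis yields velocity (Doppler) bins.
range_doppler = np.abs(np.fft.fft(np.fft.fft(raw, axis=3), axis=2))
print(spectrum.shape, range_doppler.shape)
```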
In at least one embodiment, the DLA may be used to run any type of network to enhance control and driving safety, including, for example but not limited to, a neural network that outputs a measure of confidence for each object detection. In at least one embodiment, the confidence may be represented or interpreted as a probability, or as providing a relative "weight" of each detection compared to other detections. In at least one embodiment, a confidence measure enables the system to make further decisions regarding which detections should be considered true positive detections rather than false positive detections. In at least one embodiment, the system may set a threshold value for the confidence and consider only detections exceeding the threshold value as true positive detections. In an embodiment using an automatic emergency braking ("AEB") system, false positive detections would cause the vehicle to automatically perform emergency braking, which is obviously undesirable. In at least one embodiment, high-confidence detections may be considered triggers for AEB. In at least one embodiment, the DLA may run a neural network for regressing the confidence value. In at least one embodiment, the neural network may have as its input at least some subset of parameters, such as bounding box dimensions, a ground plane estimate obtained (e.g., from another subsystem), output from the one or more IMU sensors 1266 that correlates with the orientation of the vehicle 1200, distance, and 3D location estimates of the object obtained from the neural network and/or from other sensors (e.g., one or more LIDAR sensors 1264 or one or more RADAR sensors 1260), among others.
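As a non-limiting illustration of the confidence thresholding described above, the following sketch keeps only detections whose confidence exceeds a threshold, so that only high-confidence detections would be treated as triggers (e.g., for AEB); the class, threshold value, and labels are hypothetical.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    label: str
    confidence: float  # interpreted as a probability in [0, 1]

def true_positive_detections(detections: List[Detection],
                             threshold: float = 0.9) -> List[Detection]:
    """Keep only detections whose confidence exceeds the threshold, so that only
    high-confidence detections are treated as true positives (e.g., AEB triggers)."""
    return [d for d in detections if d.confidence > threshold]

detections = [Detection("pedestrian", 0.97), Detection("pedestrian", 0.42)]
print(true_positive_detections(detections))  # only the 0.97 detection remains
```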
In at least one embodiment, one or more socs 1204 may include one or more data stores (e.g., memories) 1216. In at least one embodiment, the one or more data stores 1216 may be on-chip memory of the one or more socs 1204, which may store a neural network to be executed on the one or more GPUs 1208 and/or DLAs. In at least one embodiment, one or more data stores 1216 can have a capacity large enough to store multiple instances of the neural network for redundancy and security. In at least one embodiment, the one or more data stores 1216 may include one or more L2 or L3 caches.
In at least one embodiment, the one or more SoCs 1204 may include any number of processors 1210 (e.g., embedded processors). In at least one embodiment, the one or more processors 1210 can include a boot and power management processor, which can be a dedicated processor and subsystem for handling boot power and management functions and related secure execution. In at least one embodiment, the boot and power management processor may be part of a boot sequence of the one or more SoCs 1204 and may provide runtime power management services. In at least one embodiment, the boot and power management processor may provide clock and voltage programming, assistance with system low-power state transitions, management of thermal and temperature sensors of the one or more SoCs 1204, and/or management of power states of the one or more SoCs 1204. In at least one embodiment, each temperature sensor may be implemented as a ring oscillator whose output frequency is proportional to temperature, and the one or more SoCs 1204 may use the ring oscillators to detect temperatures of the one or more CPUs 1206, the one or more GPUs 1208, and/or the one or more accelerators 1214. In at least one embodiment, if it is determined that a temperature exceeds a threshold, the boot and power management processor may enter a temperature fault routine and place the one or more SoCs 1204 into a lower power state and/or place the vehicle 1200 into a safe stop mode (e.g., bring the vehicle 1200 to a safe stop).
In at least one embodiment, the one or more processors 1210 may further comprise a set of embedded processors that may function as an audio processing engine, which may be an audio subsystem that implements all hardware support for multi-channel audio through multiple interfaces and a wide and flexible range of audio I/O interfaces. In at least one embodiment, the audio processing engine is a special purpose processor core having a digital signal processor with special purpose RAM.
In at least one embodiment, the one or more processors 1210 may further include an always-on (always-on) processor engine that may provide the necessary hardware features to support low power sensor management and wake-up use cases. In at least one embodiment, the always-on processor engine may include, but is not limited to, a processor core, tightly coupled RAM, supporting peripherals (e.g., timers and interrupt controllers), various I/O controller peripherals, and routing logic.
In at least one embodiment, the one or more processors 1210 may further include a security cluster engine including, but not limited to, a dedicated processor subsystem for handling security management of automotive applications. In at least one embodiment, the security cluster engine may include, but is not limited to, two or more processor cores, tightly coupled RAM, supporting peripherals (e.g., timers, interrupt controllers, etc.), and/or routing logic. In the secure mode, in at least one embodiment, two or more cores may operate in lockstep mode and may function as a single core with comparison logic for detecting any differences between their operations. In at least one embodiment, the one or more processors 1210 may further include a real-time camera engine, which may include, but is not limited to, a dedicated processor subsystem for processing real-time camera management. In at least one embodiment, the one or more processors 1210 may further include a high dynamic range signal processor, which may include, but is not limited to, an image signal processor that is a hardware engine that is part of a camera processing pipeline.
In at least one embodiment, the one or more processors 1210 can include a video image compositor, which can be a processing block (e.g., implemented on a microprocessor) that implements the video post-processing functions required by a video playback application to generate final images for a player window. In at least one embodiment, the video image compositor may perform lens distortion correction on one or more wide angle cameras 1270, one or more surround cameras 1274, and/or one or more in-cabin monitoring camera sensors. In at least one embodiment, the in-cabin monitoring camera sensors are preferably monitored by a neural network running on another instance of the SoC 1204, the neural network being configured to identify in-cabin events and respond accordingly. In at least one embodiment, the in-cabin system may perform, but is not limited to, lip reading to activate cellular service and place a call, dictate emails, change the vehicle's destination, activate or change the vehicle's infotainment system and settings, or provide voice-activated web surfing. In at least one embodiment, certain functions are available to the driver when the vehicle is operating in an autonomous mode and are otherwise disabled.
In at least one embodiment, the video image compositor may include enhanced temporal noise reduction for both spatial and temporal noise reduction. For example, in at least one embodiment, in the event of motion in the video, the noise reduction appropriately weights the spatial information, thereby reducing the weight of the information provided by adjacent frames. In at least one embodiment, where an image or portion of an image does not include motion, the temporal noise reduction performed by the video image compositor may use information from a previous image to reduce noise in a current image.
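The following is an illustrative sketch of the motion-adaptive weighting described above: where little motion is detected the previous frame is weighted more heavily, and where motion is detected current-frame (spatial) information dominates. The motion metric, blend weights, and array shapes are assumptions.

```python
# Hedged sketch of motion-adaptive temporal noise reduction.

import numpy as np

def temporal_denoise(current: np.ndarray, previous: np.ndarray,
                     motion_threshold: float = 10.0,
                     max_temporal_weight: float = 0.6) -> np.ndarray:
    """Blend the previous frame into the current one where motion is low."""
    motion = np.abs(current.astype(np.float32) - previous.astype(np.float32))
    # Per-pixel temporal weight: high where the scene is static, low where it moves.
    temporal_weight = np.where(motion < motion_threshold, max_temporal_weight, 0.1)
    blended = temporal_weight * previous + (1.0 - temporal_weight) * current
    return blended.astype(current.dtype)

if __name__ == "__main__":
    prev = np.full((4, 4), 100, dtype=np.uint8)
    curr = prev.copy()
    curr[0, 0] = 180   # a moving/bright pixel keeps mostly current-frame data
    print(temporal_denoise(curr, prev))
```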
In at least one embodiment, the video image compositor may be further configured to perform stereo rectification on input stereo lens frames. In at least one embodiment, the video image compositor may also be used for user interface composition while the operating system desktop is in use and the one or more GPUs 1208 are not required to continuously render new surfaces. In at least one embodiment, when the one or more GPUs 1208 are powered on and actively performing 3D rendering, the video image compositor may be used to offload the one or more GPUs 1208 to improve performance and responsiveness.
In at least one embodiment, one or more of the SoCs 1204 may further include a mobile industry processor interface ("MIPI") camera serial interface for receiving video and input from cameras, a high-speed interface, and/or a video input block that may be used for camera and related pixel input functions. In at least one embodiment, one or more of the SoCs 1204 may further include an input/output controller, which may be controlled by software and may be used to receive I/O signals that are uncommitted to a specific role.
In at least one embodiment, one or more of the SoCs 1204 may further include a broad range of peripheral interfaces for enabling communication with peripheral devices, audio encoders/decoders ("codecs"), power management, and/or other devices. In at least one embodiment, the one or more SoCs 1204 may be used to process data from cameras (e.g., connected via gigabit multimedia serial links and ethernet channels), data from sensors (e.g., one or more LIDAR sensors 1264, one or more RADAR sensors 1260, etc., which may be connected via ethernet channels), data from the bus 1202 (e.g., speed of the vehicle 1200, steering wheel position, etc.), data from one or more GNSS sensors 1258 (e.g., connected via an ethernet or CAN bus), and so forth. In at least one embodiment, one or more of the SoCs 1204 may further include dedicated high-performance mass storage controllers, which may include their own DMA engines, and which may be used to free the one or more CPUs 1206 from routine data management tasks.
In at least one embodiment, one or more of the SoCs 1204 can be an end-to-end platform with a flexible architecture that spans automation Levels 3-5, leveraging and making efficient use of computer vision and ADAS technology for diversity and redundancy, and providing an integrated functional safety architecture together with a platform for a flexible, reliable driving software stack and deep learning tools. In at least one embodiment, the one or more SoCs 1204 may be faster, more reliable, and even more energy-efficient and space-efficient than conventional systems. For example, in at least one embodiment, the one or more accelerators 1214, when combined with the one or more CPUs 1206, the one or more GPUs 1208, and the one or more data stores 1216, may provide a fast, efficient platform for Level 3-5 autonomous vehicles.
In at least one embodiment, computer vision algorithms may be executed on CPUs, which may be configured using a high-level programming language (e.g., C) to execute a wide variety of processing algorithms on a wide variety of visual data. However, in at least one embodiment, CPUs are often unable to meet the performance requirements of many computer vision applications, such as those related to execution time and power consumption. In at least one embodiment, many CPUs are not capable of executing, in real time, the complex object detection algorithms that are used in on-board ADAS applications and in practical Level 3-5 autonomous vehicles.
The embodiments described herein allow multiple neural networks to be executed simultaneously and/or sequentially, and allow the results to be combined together to achieve 3-5 level autonomous driving functionality. For example, in at least one embodiment, a CNN executing on a DLA or discrete GPU (e.g., one or more GPUs 1220) may include text and word recognition, allowing traffic signs, including signs for which the neural network has not been trained specifically, to be read and understood. In at least one embodiment, the DLA may further include a neural network capable of identifying, interpreting, and providing a semantic understanding of the markers and communicating the semantic understanding to a path planning module running on the CPU complex.
In at least one embodiment, multiple neural networks may be operated simultaneously for Level 3, 4, or 5 driving. For example, in at least one embodiment, a warning sign stating "Caution: flashing lights indicate icy conditions", together with an electric light, may be interpreted by several neural networks, either independently or collectively. In at least one embodiment, the warning sign itself may be identified as a traffic sign by a first deployed neural network (e.g., a trained neural network), and the text "flashing lights indicate icy conditions" may be interpreted by a second deployed neural network, which informs the vehicle's path planning software (preferably executing on the CPU complex) that, when flashing lights are detected, icy conditions may exist. In at least one embodiment, the flashing lights may be identified by operating a third deployed neural network over multiple frames, informing the vehicle's path planning software of the presence (or absence) of the flashing lights. In at least one embodiment, all three neural networks may run simultaneously, e.g., within a DLA and/or on one or more GPUs 1208.
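A minimal, illustrative sketch of combining the three deployed networks described above: one classifies the sign, one interprets its text, and one watches several frames for flashing lights, with the fused result passed toward path planning. The network stubs and the message format are assumptions, not the disclosed implementation.

```python
# Hedged sketch: fusing outputs of three deployed networks (or stubs).

from typing import Callable, Sequence

def interpret_scene(frames: Sequence[object],
                    sign_classifier: Callable[[object], str],
                    text_reader: Callable[[object], str],
                    flash_detector: Callable[[Sequence[object]], bool]) -> dict:
    """Run three networks (here, stubs) and fuse their outputs."""
    latest = frames[-1]
    sign_type = sign_classifier(latest)          # first deployed network
    sign_text = text_reader(latest)              # second deployed network
    lights_flashing = flash_detector(frames)     # third network, multi-frame
    advisory = None
    if sign_type == "warning" and "icy" in sign_text.lower() and lights_flashing:
        advisory = "icy conditions likely; reduce speed"
    return {"sign_type": sign_type, "sign_text": sign_text,
            "lights_flashing": lights_flashing, "advisory": advisory}

if __name__ == "__main__":
    frames = ["frame0", "frame1", "frame2"]
    result = interpret_scene(
        frames,
        sign_classifier=lambda f: "warning",
        text_reader=lambda f: "Flashing lights indicate icy conditions",
        flash_detector=lambda fs: True,
    )
    print(result)
```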
In at least one embodiment, a CNN for face recognition and vehicle owner identification may use data from camera sensors to identify the presence of an authorized driver and/or owner of the vehicle 1200. In at least one embodiment, an always-on sensor processing engine may be used to unlock the vehicle when the owner approaches the driver door and turn on the lights, and, in a security mode, to disable the vehicle when the owner leaves the vehicle. In this way, the one or more SoCs 1204 provide protection against theft and/or carjacking.
In at least one embodiment, a CNN for emergency vehicle detection and identification may use data from one or more microphones 1296 to detect and identify emergency vehicle sirens. In at least one embodiment, the one or more SoCs 1204 use the CNN to classify environmental and urban sounds, as well as to classify visual data. In at least one embodiment, the CNN running on the DLA is trained to identify the relative closing speed of an emergency vehicle (e.g., by using the Doppler effect). In at least one embodiment, the CNN may also be trained to identify emergency vehicles specific to the local area in which the vehicle is operating, as identified by the one or more GNSS sensors 1258. In at least one embodiment, when operating in Europe, the CNN will seek to detect European sirens, and when operating in North America, the CNN will seek to identify only North American sirens. In at least one embodiment, once an emergency vehicle is detected, a control program may be used, with the assistance of one or more ultrasonic sensors 1262, to execute an emergency vehicle safety routine, slowing the vehicle, pulling over to the side of the road, parking the vehicle, and/or idling the vehicle until the emergency vehicle passes.
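An illustrative sketch of region-aware siren handling with a Doppler-based approach estimate. The siren classes, assumed emitted frequencies, and classification-by-tone stub are assumptions; a deployed system would use a trained CNN over audio features rather than a single frequency match.

```python
# Hedged sketch: region-specific siren matching plus a Doppler estimate.

SIREN_CLASSES_BY_REGION = {
    "europe": {"hi-lo siren": 435.0},        # assumed emitted tone in Hz
    "north_america": {"wail siren": 700.0},  # assumed emitted tone in Hz
}
SPEED_OF_SOUND_MS = 343.0

def approach_speed_ms(observed_hz: float, emitted_hz: float) -> float:
    """Doppler estimate of relative approach speed (moving-listener approximation)."""
    return SPEED_OF_SOUND_MS * (observed_hz / emitted_hz - 1.0)

def detect_emergency_vehicle(observed_hz: float, region: str,
                             min_approach_speed_ms: float = 1.0):
    """Match an observed tone against region-specific siren classes."""
    for label, emitted_hz in SIREN_CLASSES_BY_REGION.get(region, {}).items():
        speed = approach_speed_ms(observed_hz, emitted_hz)
        if speed > min_approach_speed_ms:
            return {"siren": label, "approach_speed_ms": round(speed, 1)}
    return None

if __name__ == "__main__":
    # A tone observed at 720 Hz in North America against an assumed 700 Hz wail.
    print(detect_emergency_vehicle(720.0, "north_america"))
```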
In at least one embodiment, the vehicle 1200 may include one or more CPUs 1218 (e.g., one or more discrete CPUs or one or more dCPUs), which may be coupled to the one or more SoCs 1204 via a high-speed interconnect (e.g., PCIe). In at least one embodiment, the one or more CPUs 1218 can include, for example, an X86 processor. In at least one embodiment, the one or more CPUs 1218 can be used to perform any of a variety of functions, including, for example, arbitrating potentially inconsistent results between the ADAS sensors and the one or more SoCs 1204, and/or monitoring the status and health of the one or more controllers 1236 and/or the infotainment system on a chip ("infotainment SoC") 1230. In at least one embodiment, the SoC 1204 includes one or more interconnects, and an interconnect may include peripheral component interconnect express (PCIe).
In at least one embodiment, the vehicle 1200 may include one or more GPUs 1220 (e.g., one or more discrete GPUs or one or more dGPUs) that may be coupled to the one or more SoCs 1204 via a high-speed interconnect (e.g., NVIDIA's NVLINK channel). In at least one embodiment, the one or more GPUs 1220 can provide additional artificial intelligence functionality, such as by executing redundant and/or different neural networks, and can be used to train and/or update neural networks based at least in part on inputs (e.g., sensor data) from sensors of the vehicle 1200.
In at least one embodiment, the vehicle 1200 may further include a network interface 1224, which may include, but is not limited to, one or more wireless antennas 1226 (e.g., one or more wireless antennas for different communication protocols, such as a cellular antenna, a Bluetooth antenna, etc.). In at least one embodiment, the network interface 1224 can be used to enable wireless connectivity over the Internet to the cloud (e.g., to servers and/or other network devices), to other vehicles, and/or to computing devices (e.g., passengers' client devices). In at least one embodiment, a direct link may be established between the vehicle 1200 and another vehicle, and/or an indirect link may be established (e.g., over networks and the Internet) for communicating with other vehicles. In at least one embodiment, the direct link may be provided using a vehicle-to-vehicle communication link. In at least one embodiment, the vehicle-to-vehicle communication link may provide the vehicle 1200 with information about vehicles in its vicinity (e.g., vehicles in front of, to the side of, and/or behind the vehicle 1200). In at least one embodiment, the aforementioned functionality may be part of a cooperative adaptive cruise control function of the vehicle 1200.
In at least one embodiment, the network interface 1224 may include a SoC that provides modulation and demodulation functionality and enables the one or more controllers 1236 to communicate over wireless networks. In at least one embodiment, the network interface 1224 may include a radio frequency front end for up-conversion from baseband to radio frequency and down-conversion from radio frequency to baseband. In at least one embodiment, the frequency conversions may be performed in any technically feasible manner. For example, frequency conversions may be performed by well-known processes and/or using super-heterodyne processes. In at least one embodiment, the radio frequency front end functionality may be provided by a separate chip. In at least one embodiment, the network interface may include wireless functionality for communicating via LTE, WCDMA, UMTS, GSM, CDMA2000, Bluetooth LE, Wi-Fi, Z-Wave, ZigBee, LoRaWAN, and/or other wireless protocols.
In at least one embodiment, the vehicle 1200 may further include one or more data stores 1228, which may include, but are not limited to, off-chip storage (e.g., storage that is off of the one or more SoCs 1204). In at least one embodiment, the one or more data stores 1228 may include, but are not limited to, one or more storage elements including RAM, SRAM, dynamic random access memory ("DRAM"), video random access memory ("VRAM"), flash memory, hard disks, and/or other components and/or devices that may store at least one bit of data.
In at least one embodiment, the vehicle 1200 may further include one or more GNSS sensors 1258 (e.g., GPS and/or assisted GPS sensors) to assist in mapping, sensing, occupancy grid generation, and/or path planning functions. In at least one embodiment, any number of GNSS sensors 1258 may be used, including for example, but not limited to, GPS using a USB connector with an Ethernet-to-serial interface (e.g., RS-232) bridge.
In at least one embodiment, the vehicle 1200 may further include one or more RADAR sensors 1260. In at least one embodiment, the one or more RADAR sensors 1260 may be used by the vehicle 1200 for long-range vehicle detection, even in darkness and/or severe weather conditions. In at least one embodiment, the RADAR functional safety level may be ASIL B. In at least one embodiment, the one or more RADAR sensors 1260 may use a CAN bus and/or the bus 1202 (e.g., to transmit data generated by the one or more RADAR sensors 1260) for control and to access object tracking data, with access to an ethernet channel in some examples for accessing raw data. In at least one embodiment, a wide variety of RADAR sensor types may be used. For example, and without limitation, the one or more RADAR sensors 1260 may be suitable for front, rear, and side RADAR use. In at least one embodiment, one or more of the one or more RADAR sensors 1260 are pulsed Doppler RADAR sensors.
In at least one embodiment, the one or more RADAR sensors 1260 can include different configurations, such as long range with a narrow field of view, short range with a wide field of view, short-range side coverage, and so forth. In at least one embodiment, long-range RADAR may be used for adaptive cruise control functionality. In at least one embodiment, long-range RADAR systems may provide a broad field of view achieved by two or more independent scans (e.g., within 250 m (meters)). In at least one embodiment, the one or more RADAR sensors 1260 can help distinguish between static and moving objects, and can be used by the ADAS system 1238 for emergency braking assistance and forward collision warning. In at least one embodiment, the one or more sensors 1260 included in a long-range RADAR system may include, but are not limited to, monostatic multimodal RADAR with multiple (e.g., six or more) fixed RADAR antennas and high-speed CAN and FlexRay interfaces. In at least one embodiment, with six antennas, the central four antennas may create a focused beam pattern designed to record the surroundings of the vehicle 1200 at higher speeds with minimal interference from traffic in adjacent lanes. In at least one embodiment, the other two antennas may expand the field of view, making it possible to quickly detect vehicles entering or leaving the lane of the vehicle 1200.
In at least one embodiment, as an example, a medium range RADAR system may include a range of up to 160m (front) or 80m (rear), and a field of view of up to 42 degrees (front) or 150 degrees (rear). In at least one embodiment, the short range RADAR system may include, but is not limited to, any number of RADAR sensors 1260 designed to be mounted on both ends of the rear bumper. When mounted at both ends of the rear bumper, in at least one embodiment, the RADAR sensor system may generate two beams that continuously monitor the vehicle rear direction and nearby blind spots. In at least one embodiment, the short range RADAR system can be used in the ADAS system 1238 for blind spot detection and/or lane change assistance.
In at least one embodiment, the vehicle 1200 may further include one or more ultrasonic sensors 1262. In at least one embodiment, one or more ultrasonic sensors 1262, which may be positioned in front, rear, and/or lateral positions of the vehicle 1200, may be used for parking assistance and/or creating and updating occupancy grids. In at least one embodiment, a wide variety of ultrasonic sensors 1262 may be used, and different ultrasonic sensors 1262 may be used for different detection ranges (e.g., 2.5m, 4 m). In at least one embodiment, the ultrasonic sensor 1262 may operate at a functional safety level of ASIL B.
In at least one embodiment, the vehicle 1200 may include one or more LIDAR sensors 1264. In at least one embodiment, the one or more LIDAR sensors 1264 may be used for object and pedestrian detection, emergency braking, collision avoidance, and/or other functions. In at least one embodiment, the one or more LIDAR sensors 1264 may operate at functional safety level ASIL B. In at least one embodiment, the vehicle 1200 may include multiple (e.g., two, four, six, etc.) LIDAR sensors 1264 that may use ethernet channels (e.g., to provide data to a gigabit ethernet switch).
In at least one embodiment, the one or more LIDAR sensors 1264 may be capable of providing a list of objects and their distances over a 360-degree field of view. In at least one embodiment, commercially available LIDAR sensors 1264 may, for example, have an advertised range of approximately 100 m, with an accuracy of 2 cm to 3 cm, and support a 100 Mbps ethernet connection. In at least one embodiment, one or more non-protruding LIDAR sensors may be used. In such embodiments, the one or more LIDAR sensors 1264 may include small devices that may be embedded in front, rear, side, and/or corner locations of the vehicle 1200. In at least one embodiment, the one or more LIDAR sensors 1264, in such embodiments, may provide up to a 120-degree horizontal field of view and a 35-degree vertical field of view, with a range of 200 m, even for low-reflectivity objects. In at least one embodiment, the one or more forward-mounted LIDAR sensors 1264 may be configured for a horizontal field of view of between 45 degrees and 135 degrees.
In at least one embodiment, LIDAR technologies such as 3D flash LIDAR may also be used. In at least one embodiment, 3D flash LIDAR uses a laser flash as a transmission source to illuminate the surroundings of the vehicle 1200 up to approximately 200 m. In at least one embodiment, a flash LIDAR unit includes, but is not limited to, a receiver that records the laser pulse travel time and the reflected light on each pixel, which in turn corresponds to the range from the vehicle 1200 to an object. In at least one embodiment, flash LIDAR may allow highly accurate and distortion-free images of the surroundings to be generated with every laser flash. In at least one embodiment, four flash LIDAR sensors may be deployed, one on each side of the vehicle 1200. In at least one embodiment, 3D flash LIDAR systems include, but are not limited to, a solid-state 3D staring-array LIDAR camera with no moving parts other than a fan (e.g., a non-scanning LIDAR device). In at least one embodiment, a flash LIDAR device may use a 5 nanosecond Class I (eye-safe) laser pulse per frame and may capture the reflected laser light as a 3D range point cloud and co-registered intensity data.
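A small illustrative computation of the per-pixel range relationship described above, where the recorded round-trip pulse time maps to a distance from the sensor; the example timing value is an assumption.

```python
# Hedged sketch: range from recorded round-trip laser pulse time.

SPEED_OF_LIGHT_MS = 299_792_458.0

def range_from_travel_time(round_trip_s: float) -> float:
    """Range in meters for a recorded round-trip pulse time."""
    return SPEED_OF_LIGHT_MS * round_trip_s / 2.0

if __name__ == "__main__":
    # A round trip of roughly 1.33 microseconds corresponds to about 200 m.
    print(f"{range_from_travel_time(1.334e-6):.1f} m")
```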
In at least one embodiment, the vehicle 1200 may also include one or more IMU sensors 1266. In at least one embodiment, one or more IMU sensors 1266 may be located at a rear axle center of the vehicle 1200. In at least one embodiment, the one or more IMU sensors 1266 may include, for example, but are not limited to, one or more accelerometers, one or more magnetometers, one or more gyroscopes, one or more magnetic compasses, and/or other sensor types. In at least one embodiment, for example in a six axis application, the one or more IMU sensors 1266 may include, but are not limited to, accelerometers and gyroscopes. In at least one embodiment, such as in a nine-axis application, the one or more IMU sensors 1266 may include, but are not limited to, accelerometers, gyroscopes, and magnetometers.
In at least one embodiment, one or more of the IMU sensors 1266 may be implemented as a miniature high-performance GPS-assisted inertial navigation system ("GPS/INS") that combines microelectromechanical system ("MEMS") inertial sensors, a high-sensitivity GPS receiver, and advanced Kalman filtering algorithms to provide estimates of position, velocity, and attitude. In at least one embodiment, the one or more IMU sensors 1266 may enable the vehicle 1200 to estimate its heading, without requiring input from a magnetic sensor, by directly observing and correlating changes in velocity from the GPS to the one or more IMU sensors 1266. In at least one embodiment, the one or more IMU sensors 1266 and the one or more GNSS sensors 1258 may be combined in a single integrated unit.
In at least one embodiment, the vehicle 1200 may include one or more microphones 1296 disposed within and/or around the vehicle 1200. In at least one embodiment, one or more microphones 1296 may be used for emergency vehicle detection and identification.
In at least one embodiment, the vehicle 1200 may further include any number of camera types, including one or more stereo cameras 1268, one or more wide-angle cameras 1270, one or more infrared cameras 1272, one or more surround cameras 1274, one or more long-range cameras 1298, one or more mid-range cameras 1276, and/or other camera types. In at least one embodiment, cameras may be used to capture image data around an entire periphery of the vehicle 1200. In at least one embodiment, the types of cameras used depend on the vehicle 1200. In at least one embodiment, any combination of camera types may be used to provide the necessary coverage around the vehicle 1200. In at least one embodiment, the number of cameras deployed may vary from embodiment to embodiment. For example, in at least one embodiment, the vehicle 1200 may include six cameras, seven cameras, ten cameras, twelve cameras, or another number of cameras. In at least one embodiment, the cameras may support, by way of example and not limitation, gigabit multimedia serial link ("GMSL") and/or gigabit ethernet communication. In at least one embodiment, each camera is described in more detail hereinbefore with reference to fig. 12A and fig. 12B.
In at least one embodiment, the vehicle 1200 may further include one or more vibration sensors 1242. In at least one embodiment, one or more vibration sensors 1242 may measure vibrations of components (e.g., axles) of the vehicle 1200. For example, in at least one embodiment, a change in vibration may be indicative of a change in road surface. In at least one embodiment, when two or more vibration sensors 1242 are used, the difference between the vibrations may be used to determine friction or slip of the road surface (e.g., when there is a vibration difference between the powered drive shaft and the free-wheeling shaft).
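An illustrative sketch of the two-sensor comparison described above: a large vibration difference between a powered axle and a free-rotating axle suggests wheel slip and thus reduced road friction. The normalization and threshold are assumptions.

```python
# Hedged sketch: inferring road condition from two vibration sensors.

def estimate_surface_condition(driven_axle_vibration: float,
                               free_axle_vibration: float,
                               slip_threshold: float = 0.3) -> str:
    """Compare normalized vibration magnitudes from two axles."""
    difference = abs(driven_axle_vibration - free_axle_vibration)
    return "possible slip / low friction" if difference > slip_threshold else "normal grip"

if __name__ == "__main__":
    print(estimate_surface_condition(0.9, 0.4))   # large difference -> possible slip
    print(estimate_surface_condition(0.5, 0.45))  # small difference -> normal grip
```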
In at least one embodiment, the vehicle 1200 can include an ADAS system 1238. In at least one embodiment, the ADAS system 1238 can include, but is not limited to, an SoC in some examples. In at least one embodiment, the ADAS system 1238 can include, but is not limited to, any number and any combination of autonomous/adaptive/auto cruise control ("ACC") systems, collaborative adaptive cruise control ("CACC") systems, forward collision warning ("FCW") systems, automatic emergency braking ("AEB") systems, lane departure warning ("LDW") systems, lane keeping assist ("LKA") systems, blind spot warning ("BSW") systems, rear cross traffic warning ("RCTW") systems, collision warning ("CW") systems, lane centering ("LC") systems, and/or other systems, features, and/or functions.
In at least one embodiment, the ACC system may use one or more RADAR sensors 1260, one or more LIDAR sensors 1264, and/or any number of cameras. In at least one embodiment, the ACC system may include a longitudinal ACC system and/or a lateral ACC system. In at least one embodiment, the longitudinal ACC system monitors and controls the distance to another vehicle immediately ahead of the vehicle 1200 and automatically adjusts the speed of the vehicle 1200 to maintain a safe distance from vehicles ahead. In at least one embodiment, the lateral ACC system performs distance keeping and advises the vehicle 1200 to change lanes when necessary. In at least one embodiment, lateral ACC is related to other ADAS applications, such as LC and CW.
In at least one embodiment, the CACC system uses information from other vehicles, which may be received indirectly from other vehicles via a wireless link or through a network connection (e.g., through the internet) via network interface 1224 and/or one or more wireless antennas 1226. In at least one embodiment, the direct link may be provided by a vehicle-to-vehicle ("V2V") communication link, while the indirect link may be provided by an infrastructure-to-vehicle ("I2V") communication link. Typically, V2V communication provides information about an immediately preceding vehicle (e.g., a vehicle immediately in front of and on the same lane as vehicle 1200), while I2V communication provides information about more forward traffic. In at least one embodiment, the CACC system may include one or both of I2V and V2V information sources. In at least one embodiment, given information of vehicles in front of vehicle 1200, the CACC system may be more reliable and have the potential to improve the smoothness of traffic flow and reduce road congestion.
In at least one embodiment, the FCW system is designed to alert the driver of the danger so that the driver can take corrective action. In at least one embodiment, the FCW system uses a forward facing camera and/or one or more RADAR sensors 1260 coupled to a dedicated processor, DSP, FPGA, and/or ASIC that are electrically coupled to provide driver feedback, such as a display, speaker, and/or vibration component. In at least one embodiment, the FCW system may provide an alert, such as in the form of an audible, visual alert, vibration, and/or rapid braking pulse.
In at least one embodiment, the AEB system detects an impending forward collision with another vehicle or other object and may automatically apply the brakes if the driver does not take corrective action within specified time or distance parameters. In at least one embodiment, the AEB system can use one or more forward-facing cameras and/or one or more RADAR sensors 1260 coupled to a dedicated processor, DSP, FPGA, and/or ASIC. In at least one embodiment, when the AEB system detects a hazard, it typically first alerts the driver to take corrective action to avoid the collision, and if the driver does not take corrective action, the AEB system can automatically apply the brakes in an attempt to prevent, or at least mitigate, the impact of the predicted collision. In at least one embodiment, the AEB system can include techniques such as dynamic brake support and/or imminent crash braking.
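The following is a simplified, illustrative sketch of the warn-then-brake decision described above, based on a time-to-collision estimate; the thresholds are illustrative assumptions rather than calibrated values.

```python
# Hedged sketch: AEB-style decision from a time-to-collision estimate.

def time_to_collision_s(distance_m: float, closing_speed_ms: float) -> float:
    """Seconds until impact at the current closing speed (inf if opening)."""
    return float("inf") if closing_speed_ms <= 0 else distance_m / closing_speed_ms

def aeb_decision(distance_m: float, closing_speed_ms: float,
                 driver_reacted: bool,
                 warn_ttc_s: float = 2.5, brake_ttc_s: float = 1.2) -> str:
    ttc = time_to_collision_s(distance_m, closing_speed_ms)
    if ttc <= brake_ttc_s and not driver_reacted:
        return "apply automatic emergency braking"
    if ttc <= warn_ttc_s:
        return "warn driver to take corrective action"
    return "no action"

if __name__ == "__main__":
    print(aeb_decision(30.0, 15.0, driver_reacted=False))  # ttc = 2.0 s -> warn
    print(aeb_decision(12.0, 15.0, driver_reacted=False))  # ttc = 0.8 s -> brake
```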
In at least one embodiment, the LDW system provides visual, audible, and/or tactile warnings, such as steering wheel or seat vibrations, to alert the driver when the vehicle 1200 crosses the lane markings. In at least one embodiment, the LDW system is not activated when the driver indicates an intentional lane departure, such as by activating a turn signal. In at least one embodiment, the LDW system may use a front facing camera coupled to a dedicated processor, DSP, FPGA, and/or ASIC that is electrically coupled to provide driver feedback such as a display, speaker, and/or vibration component. In at least one embodiment, the LKA system is a variation of the LDW system. In at least one embodiment, if the vehicle 1200 begins to leave its lane, the LKA system provides steering input or braking to correct the vehicle 1200.
In at least one embodiment, the BSW system detects and warns the driver of vehicles in the automobile's blind spot. In at least one embodiment, the BSW system may provide visual, audible, and/or tactile alerts to indicate that merging or changing lanes is unsafe. In at least one embodiment, the BSW system may provide an additional warning when the driver uses a turn signal. In at least one embodiment, the BSW system may use one or more rear-facing cameras and/or one or more RADAR sensors 1260 coupled to a dedicated processor, DSP, FPGA, and/or ASIC, which are electrically coupled to driver feedback, such as a display, speaker, and/or vibration component.
In at least one embodiment, the RCTW system can provide visual, audible, and/or tactile notification when the vehicle 1200 detects an object outside the rear camera range when reversing. In at least one embodiment, the RCTW system includes an AEB system to ensure that vehicle brakes are applied to avoid collisions. In at least one embodiment, the RCTW system can use one or more rear-facing RADAR sensors 1260 coupled to a dedicated processor, DSP, FPGA, and/or ASIC, which are electrically coupled to provide driver feedback such as a display, speaker, and/or vibration component.
In at least one embodiment, conventional ADAS systems may be prone to false positive results, which may annoy and distract the driver, but are generally not catastrophic because conventional ADAS systems may alert the driver and allow the driver to decide whether a safety condition actually exists and act accordingly. In at least one embodiment, in the event of conflicting results, the vehicle 1200 itself decides whether to heed the result of a primary computer or a secondary computer (e.g., the first or second of the controllers 1236). For example, in at least one embodiment, the ADAS system 1238 can be a backup and/or auxiliary computer for providing perception information to a backup computer rationality module. In at least one embodiment, a backup computer rationality monitor may run redundant diverse software on hardware components to detect faults in perception and dynamic driving tasks. In at least one embodiment, the output from the ADAS system 1238 can be provided to a supervising MCU. In at least one embodiment, if the output from the primary computer and the output from the secondary computer conflict, the supervising MCU decides how to reconcile the conflict to ensure safe operation.
In at least one embodiment, the host computer may be configured to provide a confidence score to the supervising MCU that indicates the host computer's confidence in the selected result. In at least one embodiment, if the confidence score exceeds a threshold, the supervising MCU may follow the direction of the primary computer, regardless of whether the secondary computer provides conflicting or inconsistent results. In at least one embodiment, where the confidence score does not meet a threshold, and where the primary and secondary computers indicate different results (e.g., conflicts), the supervising MCU may arbitrate between the computers to determine the appropriate result.
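An illustrative sketch of the arbitration just described: the supervising MCU follows the primary computer when its confidence clears a threshold and arbitrates otherwise. The result representation, the threshold, and the fallback behavior are assumptions.

```python
# Hedged sketch: confidence-threshold arbitration in a supervising MCU.

def supervising_mcu_arbitrate(primary_result: str, primary_confidence: float,
                              secondary_result: str,
                              confidence_threshold: float = 0.8) -> str:
    """Pick an output when the primary and secondary computers may disagree."""
    if primary_confidence >= confidence_threshold:
        return primary_result               # trust the primary regardless of conflict
    if primary_result == secondary_result:
        return primary_result               # low confidence but no conflict
    # Low confidence and conflicting outputs: fall back to a safer behavior.
    return "degrade to safe behavior and re-evaluate"

if __name__ == "__main__":
    print(supervising_mcu_arbitrate("continue in lane", 0.93, "brake"))
    print(supervising_mcu_arbitrate("continue in lane", 0.55, "brake"))
```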
In at least one embodiment, the supervising MCU may be configured to run one or more neural networks trained and configured to determine, based at least in part on outputs from the primary computer and the auxiliary computer, the conditions under which the auxiliary computer provides false alarms. In at least one embodiment, the one or more neural networks in the supervising MCU may learn when the output of the auxiliary computer can be trusted and when it cannot. For example, in at least one embodiment, when the auxiliary computer is a RADAR-based FCW system, one or more neural networks in the supervising MCU may learn when the FCW system is identifying metallic objects that are not, in fact, hazards, such as drainage grates or manhole covers that trigger alarms. In at least one embodiment, when the auxiliary computer is a camera-based LDW system, the neural network in the supervising MCU may learn to override the LDW when a cyclist or pedestrian is present and a lane departure is, in fact, the safest maneuver. In at least one embodiment, the supervising MCU may include at least one of a DLA or a GPU suitable for running the one or more neural networks, with associated memory. In at least one embodiment, the supervising MCU may comprise, and/or be included as, a component of the one or more SoCs 1204.
In at least one embodiment, the ADAS system 1238 can include an auxiliary computer that performs ADAS functionality using conventional computer vision rules. In at least one embodiment, the auxiliary computer may use classical computer vision rules (if-then), and the presence of one or more neural networks in the supervising MCU may improve reliability, safety, and performance. For example, in at least one embodiment, the diverse implementation and intentional non-identity make the overall system more fault-tolerant, especially to faults caused by software (or software-hardware interface) functionality. For example, in at least one embodiment, if there is a software bug or error in the software running on the host computer and the non-identical software code running on the auxiliary computer provides a consistent overall result, the supervising MCU may have greater confidence that the overall result is correct and that the bug in the software or hardware on the host computer is not causing a material error.
In at least one embodiment, the output of the ADAS system 1238 can be fed into a perception block of a host computer and/or a dynamic driving task block of the host computer. For example, in at least one embodiment, if the ADAS system 1238 indicates a forward collision warning due to an object directly in front, the perception block can use this information when identifying the object. In at least one embodiment, the auxiliary computer may have its own neural network trained, as described herein, to reduce the risk of false positives.
In at least one embodiment, the vehicle 1200 may further include an infotainment SoC 1230 (e.g., an in-vehicle infotainment system (IVI)). Although illustrated and described as a SoC, in at least one embodiment, the infotainment system SoC 1230 may not be a SoC and may include, but is not limited to, two or more discrete components. In at least one embodiment, the infotainment SoC 1230 may include, but is not limited to, a combination of hardware and software that may be used to provide audio (e.g., music, a personal digital assistant, navigation instructions, news, radio broadcasts, etc.), video (e.g., television, movies, streaming media, etc.), telephony (e.g., hands-free calling), network connectivity (e.g., LTE, WiFi, etc.), and/or information services (e.g., navigation systems, rear parking assistance, a radio data system, vehicle-related information such as fuel level, total distance traveled, brake fluid level, door open/close status, air filter information, etc.) to the vehicle 1200. For example, the infotainment SoC 1230 may include radios, disk players, navigation systems, video players, USB and Bluetooth connectivity, a vehicle computer, vehicle entertainment systems, WiFi, steering wheel audio controls, hands-free voice control, a heads-up display ("HUD"), an HMI display 1234, a telematics device, a control panel (e.g., for controlling and/or interacting with various components, features, and/or systems), and/or other components. In at least one embodiment, the infotainment SoC 1230 can be further configured to provide information (e.g., visual and/or audible information) to one or more users of the vehicle 1200, such as information from the ADAS system 1238, autonomous driving information (such as planned vehicle maneuvers), trajectories, surrounding environment information (e.g., intersection information, vehicle information, road information, etc.), and/or other information.
In at least one embodiment, the infotainment SoC 1230 can include any number and type of GPU functionality. In at least one embodiment, the infotainment SoC 1230 can communicate with other devices, systems, and/or components of the vehicle 1200 over the bus 1202. In at least one embodiment, the infotainment SoC 1230 can be coupled to a supervising MCU such that the GPU of the infotainment system can perform some self-driving functions in the event that the one or more primary controllers 1236 (e.g., the primary and/or backup computers of the vehicle 1200) fail. In at least one embodiment, the infotainment SoC 1230 can place the vehicle 1200 into a safe-stop mode, as described herein.
In at least one embodiment, the vehicle 1200 may further include an instrument panel 1232 (e.g., a digital instrument panel, an electronic instrument cluster, etc.). In at least one embodiment, the instrument panel 1232 may include, but is not limited to, a controller and/or a supercomputer (e.g., a discrete controller or supercomputer). In at least one embodiment, the instrument panel 1232 may include, but is not limited to, any number and combination of a set of gauges, such as a speedometer, fuel level, oil pressure, tachometer, odometer, turn indicators, gear shift position indicator, one or more seat belt warning lights, one or more parking brake warning lights, one or more engine malfunction lights, supplemental restraint system (e.g., airbag) information, lighting controls, safety system controls, navigation information, and the like. In some examples, information may be displayed and/or shared between the infotainment SoC 1230 and the instrument panel 1232. In at least one embodiment, the instrument panel 1232 may be included as part of the infotainment SoC 1230, and vice versa.
In at least one embodiment, at least one component shown or described with respect to fig. 12C is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 12C is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 12C is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 12C is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
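The following is a minimal, illustrative sketch of the selection step referenced above: one or more pre-trained networks are run over several variations of an input, and the output that recurs most consistently across those variations (here, the modal prediction) is kept. The variation generator and inference stubs are assumptions, not the disclosed implementation.

```python
# Hedged sketch: selecting the most consistent output across input variations.

from collections import Counter
from typing import Callable, Sequence

def most_consistent_output(inputs: Sequence[object],
                           vary: Callable[[object], Sequence[object]],
                           infer: Callable[[object], str]) -> str:
    """Select the prediction that recurs most often across input variations."""
    predictions = []
    for item in inputs:
        for variant in vary(item):
            predictions.append(infer(variant))
    winner, _count = Counter(predictions).most_common(1)[0]
    return winner

if __name__ == "__main__":
    vary = lambda text: [text, text.lower(), f"a photo of {text}"]
    infer = lambda prompt: "stop sign" if "stop" in prompt.lower() else "unknown"
    print(most_consistent_output(["STOP sign ahead"], vary, infer))
```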
Fig. 12D is a diagram of a system for communicating between one or more cloud-based servers and the autonomous vehicle 1200 of fig. 12A in accordance with at least one embodiment. In at least one embodiment, the system may include, but is not limited to, one or more servers 1278, one or more networks 1290, and any number and type of vehicles, including the vehicle 1200. In at least one embodiment, the one or more servers 1278 can include, but are not limited to, a plurality of GPUs 1284 (A)-1284 (H) (collectively referred to herein as GPUs 1284), PCIe switches 1282 (A)-1282 (D) (collectively referred to herein as PCIe switches 1282), and/or CPUs 1280 (A)-1280 (B) (collectively referred to herein as CPUs 1280). In at least one embodiment, the GPUs 1284, CPUs 1280, and PCIe switches 1282 may be interconnected with high-speed interconnects such as, for example, but not limited to, the NVLink interface 1288 developed by NVIDIA and/or PCIe connections 1286. In at least one embodiment, the GPUs 1284 are connected via an NVLink and/or NVSwitch SoC, and the GPUs 1284 and PCIe switches 1282 are connected via PCIe interconnects. Although eight GPUs 1284, two CPUs 1280, and four PCIe switches 1282 are shown, this is not intended to be limiting. In at least one embodiment, each of the one or more servers 1278 may include, but is not limited to, any number of GPUs 1284, CPUs 1280, and/or PCIe switches 1282 in any combination. For example, in at least one embodiment, the one or more servers 1278 may each include eight, sixteen, thirty-two, and/or more GPUs 1284.
In at least one embodiment, the one or more servers 1278 can receive, over the one or more networks 1290, image data representing images from vehicles, the images showing unexpected or changed road conditions, such as recently commenced roadwork. In at least one embodiment, the one or more servers 1278 can send updated neural networks 1292 and/or map information 1294, including, but not limited to, information about traffic and road conditions, to the vehicles via the one or more networks 1290. In at least one embodiment, updates to the map information 1294 may include, but are not limited to, updates to the HD map 1222, such as information about construction sites, potholes, detours, flooding, and/or other obstructions. In at least one embodiment, the neural networks 1292 and/or the map information 1294 can have been generated from new training and/or experience represented in data received from any number of vehicles in the environment, and/or based at least on training performed at a data center (e.g., using the one or more servers 1278 and/or other servers).
In at least one embodiment, the one or more servers 1278 can be used to train machine learning models (e.g., neural networks) based at least in part on training data. In at least one embodiment, the training data may be generated by the vehicles and/or may be generated in a simulation (e.g., using a game engine). In at least one embodiment, any amount of the training data is tagged (e.g., where the associated neural network benefits from supervised learning) and/or undergoes other preprocessing. In at least one embodiment, any amount of the training data is not tagged and/or preprocessed (e.g., where the associated neural network does not require supervised learning). In at least one embodiment, once the machine learning models are trained, the machine learning models may be used by the vehicles (e.g., transmitted to the vehicles over the one or more networks 1290), and/or the machine learning models may be used by the one or more servers 1278 to remotely monitor the vehicles.
In at least one embodiment, one or more servers 1278 can receive data from vehicles and apply the data to up-to-date real-time neural networks for real-time intelligent reasoning. In at least one embodiment, one or more servers 1278 can include a deep learning supercomputer powered by one or more GPUs 1284 and/or a dedicated AI computer, such as DGX and DGX Station machines developed by NVIDIA. However, in at least one embodiment, one or more servers 1278 may comprise a deep learning infrastructure of a data center powered using CPUs.
In at least one embodiment, the deep learning infrastructure of the one or more servers 1278 may be capable of fast, real-time inference and may use this capability to evaluate and verify the health of the processors, software, and/or associated hardware in the vehicle 1200. For example, in at least one embodiment, the deep learning infrastructure may receive periodic updates from the vehicle 1200, such as a sequence of images and/or objects that the vehicle 1200 has located in that sequence of images (e.g., via computer vision and/or other machine learning object classification techniques). In at least one embodiment, the deep learning infrastructure can run its own neural networks to identify objects and compare them to the objects identified by the vehicle 1200, and if the results do not match and the deep learning infrastructure concludes that the AI in the vehicle 1200 is malfunctioning, the one or more servers 1278 can send a signal to the vehicle 1200 instructing a fail-safe computer of the vehicle 1200 to assume control, notify the passengers, and complete a safe parking maneuver.
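An illustrative sketch of the health check just described: the server runs its own inference on images reported by the vehicle, compares the resulting object sets against what the vehicle reported, and signals the fail-safe computer if mismatches persist. The data formats, mismatch threshold, and stubs are assumptions.

```python
# Hedged sketch: server-side verification of a vehicle's perception results.

from typing import Callable, Sequence

def verify_vehicle_ai(images: Sequence[object],
                      vehicle_reported: Sequence[set],
                      server_infer: Callable[[object], set],
                      max_mismatches: int = 1) -> str:
    """Compare server-side detections with the vehicle's reported objects."""
    mismatches = sum(1 for img, reported in zip(images, vehicle_reported)
                     if server_infer(img) != reported)
    if mismatches > max_mismatches:
        return "signal fail-safe computer to take control and perform safe stop"
    return "vehicle AI healthy"

if __name__ == "__main__":
    images = ["img0", "img1", "img2", "img3"]
    reported = [{"car"}, {"car", "pedestrian"}, set(), {"truck"}]
    server = lambda img: {"car"} if img == "img0" else {"car", "pedestrian"}
    print(verify_vehicle_ai(images, reported, server))
```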
In at least one embodiment, one or more servers 1278 can include one or more GPUs 1284 and one or more programmable inference accelerators (e.g., the TensorRT 3 device of NVIDIA). In at least one embodiment, a combination of GPU-driven servers and inference acceleration may enable real-time responses. In at least one embodiment, servers driven by CPUs, FPGAs and other processors can be used for reasoning, such as where performance is less critical. In at least one embodiment, one or more hardware structures 915 are used to perform one or more embodiments. Details regarding the hardware structure 915 are provided herein in connection with fig. 9A and/or 9B.
In at least one embodiment, at least one component shown or described with respect to fig. 12D is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 12D is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 12D is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 12D is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
Computer system
FIG. 13 is a block diagram illustrating an exemplary computer system, which may be a system with interconnected devices and components, a system on a chip (SOC), or some combination thereof formed with a processor, which may include execution units for executing instructions, in accordance with at least one embodiment. In at least one embodiment, in accordance with the present disclosure, such as in the embodiments described herein, computer system 1300 may include, but is not limited to, components, such as a processor 1302, for employing execution units (including logic) to execute algorithms for processing data. In at least one embodiment, computer system 1300 may include processors, such as processors from the Xeon™, XScale™ and/or StrongARM™, Core™, or Nervana™ processor families available from Intel Corporation of Santa Clara, California, although other systems (including PCs with other microprocessors, engineering workstations, set-top boxes, etc.) may also be used. In at least one embodiment, computer system 1300 may execute a version of the WINDOWS operating system available from Microsoft Corporation of Redmond, Washington, although other operating systems (e.g., UNIX and Linux), embedded software, and/or graphical user interfaces may also be used.
Embodiments may be used in other devices, such as handheld devices and embedded applications. Some examples of handheld devices include cellular telephones, internet protocol (Internet Protocol) devices, digital cameras, personal digital assistants ("PDAs"), and handheld PCs. In at least one embodiment, the embedded application may include a microcontroller, a digital signal processor ("DSP"), a system on a chip, a network computer ("NetPC"), a set-top box, a network hub, a wide area network ("WAN") switch, or any other system that may execute one or more instructions in accordance with at least one embodiment.
In at least one embodiment, computer system 1300 may include, but is not limited to, a processor 1302, which processor 1302 may include, but is not limited to, one or more execution units 1308 for performing machine learning model training and/or reasoning in accordance with the techniques described herein. In at least one embodiment, computer system 1300 is a single processor desktop or server system, but in another embodiment computer system 1300 may be a multiprocessor system. In at least one embodiment, processor 1302 may include, but is not limited to, for example, a complex instruction set computer ("CISC") microprocessor, a reduced instruction set computing ("RISC") microprocessor, a very long instruction word ("VLIW") microprocessor, a processor implementing a combination of instruction sets, or any other processor device, such as a digital signal processor. In at least one embodiment, processor 1302 can be coupled to a processor bus 1310, which processor bus 1310 can transmit data signals between processor 1302 and other components in computer system 1300.
In at least one embodiment, the processor 1302 may include, but is not limited to, a level 1 ("L1") internal cache memory ("cache") 1304. In at least one embodiment, the processor 1302 may have a single internal cache or multiple levels of internal caches. In at least one embodiment, the cache memory may reside external to the processor 1302. Other embodiments may also include a combination of internal and external caches, depending on the particular implementation and requirements. In at least one embodiment, register file 1306 may store different types of data in various registers, including, but not limited to, integer registers, floating point registers, status registers, and instruction pointer registers.
In at least one embodiment, an execution unit 1308, including but not limited to logic to perform integer and floating point operations, is also located in the processor 1302. In at least one embodiment, the processor 1302 may also include a microcode ("ucode") read-only memory ("ROM") that stores microcode for certain macro-instructions. In at least one embodiment, the execution unit 1308 may include logic to process the packed instruction set 1309. In at least one embodiment, the packed data in the processor 1302 may be used to perform operations used by many multimedia applications by including a packed instruction set 1309 in the instruction set of a general purpose processor and associated circuitry to execute instructions. In at least one embodiment, many multimedia applications may be more efficiently accelerated and executed by performing operations on packed data using the full width of a processor's data bus, which may eliminate the need to transmit smaller data units on the processor's data bus to perform one or more operations on one data element at a time.
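A conceptual illustration of the packed-data idea described above: a single operation over many narrow elements packed together, contrasted with processing one element at a time. NumPy here stands in for SIMD-style packed execution and is an assumption for illustration, not the processor's instruction set.

```python
# Hedged conceptual illustration of operating on packed data.

import numpy as np

a = np.arange(8, dtype=np.int16)          # eight 16-bit elements "packed" together
b = np.full(8, 3, dtype=np.int16)

packed_sum = a + b                        # one operation over all packed elements

scalar_sum = np.empty_like(a)
for i in range(a.size):                   # element-at-a-time equivalent
    scalar_sum[i] = a[i] + b[i]

assert np.array_equal(packed_sum, scalar_sum)
print(packed_sum)
```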
In at least one embodiment, execution unit 1308 may also be used in microcontrollers, embedded processors, graphics devices, DSPs, and other types of logic circuits. In at least one embodiment, computer system 1300 can include, but is not limited to, memory 1320. In at least one embodiment, memory 1320 may be a dynamic random access memory ("DRAM") device, a static random access memory ("SRAM") device, a flash memory device, or other memory device. In at least one embodiment, the memory 1320 may store one or more instructions 1319 and/or data 1321 represented by data signals that the processor 1302 may execute.
In at least one embodiment, a system logic chip may be coupled to processor bus 1310 and memory 1320. In at least one embodiment, the system logic chip may include, but is not limited to, a memory controller hub ("MCH") 1316 and the processor 1302 may communicate with the MCH 1316 via a processor bus 1310. In at least one embodiment, the MCH 1316 may provide a high bandwidth memory path 1318 to memory 1320 for instruction and data storage as well as for storage of graphics commands, data, and textures. In at least one embodiment, the MCH 1316 may direct data signals between the processor 1302, memory 1320, and other components in the computer system 1300, and bridge data signals between the processor bus 1310, memory 1320, and the system I/O interface 1322. In at least one embodiment, the system logic chip may provide a graphics port for coupling to a graphics controller. In at least one embodiment, the MCH 1316 may be coupled to memory 1320 through a high bandwidth memory path 1318 and the graphics/video card 1312 may be coupled to the MCH 1316 through an accelerated graphics port ("AGP") interconnect 1314.
In at least one embodiment, the computer system 1300 may use the system I/O interface 1322 as a proprietary hub interface bus to couple the MCH 1316 to an I/O controller hub ("ICH") 1330. In at least one embodiment, the ICH 1330 may provide direct connections to some I/O devices via a local I/O bus. In at least one embodiment, the local I/O bus may include, but is not limited to, a high-speed I/O bus for connecting peripheral devices to the memory 1320, the chipset, and the processor 1302. Examples may include, but are not limited to, an audio controller 1329, a firmware hub ("flash BIOS") 1328, a wireless transceiver 1326, a data store 1324, a legacy I/O controller 1323 containing user input and keyboard interfaces 1325, a serial expansion port 1327 (such as a universal serial bus ("USB") port), and a network controller 1334. In at least one embodiment, the data store 1324 can include a hard disk drive, a floppy disk drive, a CD-ROM device, a flash memory device, or other mass storage device.
In at least one embodiment, fig. 13 shows a system including interconnected hardware devices or "chips," while in other embodiments, fig. 13 may show an exemplary SoC. In at least one embodiment, the devices shown in fig. 13 may be interconnected using proprietary interconnects, standardized interconnects (e.g., PCIe), or some combination thereof. In at least one embodiment, one or more components of computer system 1300 are interconnected using a compute express link (CXL) interconnect.
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, logic 915 may be used in computer system 1300 for performing inference or predictive operations based at least in part on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 13 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 13 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 13 is used to cause one or more neural networks to select one or more variations in a feature of one or more text prompts based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 13 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
Fig. 14 is a block diagram illustrating an electronic device 1400 for utilizing a processor 1410 in accordance with at least one embodiment. In at least one embodiment, the electronic device 1400 may be, for example, but not limited to, a notebook computer, a tower server, a rack server, a blade server, a laptop computer, a desktop computer, a tablet computer, a mobile device, a telephone, an embedded computer, or any other suitable electronic device.
In at least one embodiment, electronic device 1400 may include, but is not limited to, a processor 1410 communicatively coupled to any suitable number or variety of components, peripheral devices, modules, or devices. In at least one embodiment, processor 1410 is coupled using a bus or interface, such as an I²C bus, a system management bus ("SMBus"), a Low Pin Count (LPC) bus, a serial peripheral interface ("SPI"), a high definition audio ("HDA") bus, a serial advanced technology attachment ("SATA") bus, a universal serial bus ("USB") (versions 1, 2, 3, etc.), or a universal asynchronous receiver/transmitter ("UART") bus. In at least one embodiment, fig. 14 shows a system comprising interconnected hardware devices or "chips", while in other embodiments, fig. 14 may show an exemplary SoC. In at least one embodiment, the devices shown in FIG. 14 may be interconnected using proprietary interconnects, standardized interconnects (e.g., PCIe), or some combination thereof. In at least one embodiment, one or more components of fig. 14 are interconnected using a Compute Express Link (CXL) interconnect.
In at least one embodiment, fig. 14 may include a display 1424, a touch screen 1425, a touch pad 1430, a near field communication unit ("NFC") 1445, a sensor hub 1440, a thermal sensor 1446, an embedded controller ("EC") 1435, a trusted platform module ("TPM") 1438, a BIOS/firmware/Flash ("BIOS, FW Flash") 1422, a DSP 1460, a drive 1420 (such as a solid state disk ("SSD") or hard disk drive ("HDD")), a wireless local area network unit ("WLAN") 1450, a Bluetooth unit 1452, a wireless wide area network unit ("WWAN") 1456, a Global Positioning System (GPS) unit 1455, a camera ("USB 3.0 camera") 1454 (such as a USB 3.0 camera), and/or a low power double data rate ("LPDDR") memory unit ("LPDDR3") 1415 implemented, for example, according to the LPDDR3 standard. These components may each be implemented in any suitable manner.
In at least one embodiment, other components may be communicatively coupled to the processor 1410 through the components described herein. In at least one embodiment, an accelerometer 1441, an ambient light sensor ("ALS") 1442, a compass 1443, and a gyroscope 1444 can be communicatively coupled to the sensor hub 1440. In at least one embodiment, a thermal sensor 1439, a fan 1437, a keyboard 1436, and the touch pad 1430 can be communicatively coupled to the EC 1435. In at least one embodiment, speakers 1463, headphones 1464, and a microphone ("mic") 1465 can be communicatively coupled to an audio unit ("audio codec and class D amplifier") 1462, which in turn can be communicatively coupled to the DSP 1460. In at least one embodiment, the audio unit 1462 may include, for example, but not limited to, an audio encoder/decoder ("codec") and a class D amplifier. In at least one embodiment, a SIM card ("SIM") 1457 can be communicatively coupled to the WWAN unit 1456. In at least one embodiment, components such as the WLAN unit 1450, the Bluetooth unit 1452, and the WWAN unit 1456 may be implemented as a next generation form factor ("NGFF").
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, logic 915 may be used in electronic device 1400 to perform inference or predictive operations based at least in part on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 14 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 14 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 14 is used to cause one or more neural networks to select one or more variations in a feature of one or more text prompts based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 14 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
FIG. 15 illustrates a computer system 1500 in accordance with at least one embodiment. In at least one embodiment, computer system 1500 is configured to implement the various processes and methods described throughout this disclosure.
In at least one embodiment, computer system 1500 includes, but is not limited to, at least one central processing unit ("CPU") 1502 connected to a communication bus 1510 implemented using any suitable protocol, such as PCI ("peripheral component interconnect"), peripheral component interconnect Express ("PCI-Express"), AGP ("accelerated graphics port"), HyperTransport, or any other bus or point-to-point communication protocol. In at least one embodiment, computer system 1500 includes, but is not limited to, a main memory 1504 and control logic (e.g., implemented as hardware, software, or a combination thereof), with data stored in main memory 1504, which may take the form of random access memory ("RAM"). In at least one embodiment, a network interface subsystem ("network interface") 1522 provides an interface to other computing devices and networks for receiving data from and transmitting data to other systems using computer system 1500.
In at least one embodiment, computer system 1500 includes, but is not limited to, an input device 1508, a parallel processing system 1512, and a display device 1506, which can be implemented using conventional cathode ray tubes ("CRTs"), liquid crystal displays ("LCDs"), light emitting diode ("LED") displays, plasma displays, or other suitable display technologies. In at least one embodiment, user input is received from an input device 1508 (such as a keyboard, mouse, touchpad, microphone, etc.). In at least one embodiment, each module described herein may be located on a single semiconductor platform to form a processing system.
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, logic 915 may be used in computer system 1500 to perform inference or predictive operations based at least in part on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 15 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 15 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 15 is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 15 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
FIG. 16 illustrates a computer system 1600 in accordance with at least one embodiment. In at least one embodiment, computer system 1600 includes, but is not limited to, a computer 1610 and a USB disk 1620. In at least one embodiment, computer 1610 may include, but is not limited to, any number and type of processors (not shown) and memory (not shown). In at least one embodiment, computers 1610 include, but are not limited to, servers, cloud instances, laptop computers, and desktop computers.
In at least one embodiment, USB disk 1620 includes, but is not limited to, a processing unit 1630, a USB interface 1640, and USB interface logic 1650. In at least one embodiment, processing unit 1630 may be any instruction execution system, apparatus, or device capable of executing instructions. In at least one embodiment, processing unit 1630 may include, but is not limited to, any number and type of processing cores (not shown). In at least one embodiment, processing unit 1630 includes an application specific integrated circuit ("ASIC") that is optimized to perform any number and type of operations associated with machine learning. For example, in at least one embodiment, processing unit 1630 is a tensor processing unit ("TPU") that is optimized to perform machine learning inference operations. In at least one embodiment, processing unit 1630 is a vision processing unit ("VPU") that is optimized to perform machine vision and machine learning inference operations.
In at least one embodiment, USB interface 1640 may be any type of USB connector or USB receptacle. For example, in at least one embodiment, USB interface 1640 is a USB 3.0 Type-C receptacle for data and power. In at least one embodiment, USB interface 1640 is a USB 3.0 Type-A connector. In at least one embodiment, USB interface logic 1650 may include any number and type of logic that enables processing unit 1630 to interface with a device (e.g., computer 1610) via USB interface 1640.
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, logic 915 may be used in computer system 1600 for performing inference or predictive operations based at least in part on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 16 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 16 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 16 is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 16 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
FIG. 17A illustrates an exemplary architecture in which multiple GPUs 1710 (1) -1710 (N) are communicatively coupled to multiple multi-core processors 1705 (1) -1705 (M) through high speed links 1740 (1) -1740 (N) (e.g., buses, point-to-point interconnects, etc.). In at least one embodiment, high speed links 1740 (1) -1740 (N) support 4GB/s, 30GB/s, 80GB/s, or higher communication throughput. In at least one embodiment, various interconnect protocols may be used, including but not limited to PCIe 4.0 or 5.0 and NVLink 2.0. In the respective figures, "N" and "M" represent positive integers, and the values thereof may vary from one figure to another. In at least one embodiment, one or more of the plurality of GPUs 1710 (1) -1710 (N) include one or more graphics cores (also simply referred to as "cores") 2000 as disclosed in fig. 20A and 20B. In at least one embodiment, one or more graphics cores 2000 may be referred to as a streaming multiprocessor ("SM"), a streaming processor ("SP"), a streaming processing unit ("SPU"), a computing unit ("CU"), an execution unit ("EU"), and/or a slice, where in this context a slice may refer to a portion of processing resources in a processing unit (e.g., 16 cores, ray tracing units, thread directors, or schedulers).
Further, in at least one embodiment, two or more GPUs 1710 are interconnected via high-speed links 1729 (1) -1729 (2), which may be implemented using protocols/links that are similar or different than those used for high-speed links 1740 (1) -1740 (N). Similarly, two or more multi-core processors 1705 may be connected by a high-speed link 1728, which may be a Symmetric Multiprocessor (SMP) bus running at 20GB/s, 30GB/s, 120GB/s, or higher. Alternatively, all communications between the various system components shown in FIG. 17A may be accomplished using similar protocols/links (e.g., through a common interconnect structure).
In at least one embodiment, each multi-core processor 1705 is communicatively coupled to processor memories 1701 (1) -1701 (M) via memory interconnects 1726 (1) -1726 (M), respectively, and each GPU 1710 (1) -1710 (N) is communicatively coupled to GPU memories 1720 (1) -1720 (N) via GPU memory interconnects 1750 (1) -1750 (N), respectively. In at least one embodiment, memory interconnects 1726 and 1750 may utilize similar or different memory access technologies. By way of example, and not limitation, the processor memories 1701 (1) -1701 (M) and the GPU memory 1720 may be volatile memory, such as Dynamic Random Access Memory (DRAM) (including stacked DRAM), graphics DDR SDRAM (GDDR) (e.g., GDDR5, GDDR 6), or High Bandwidth Memory (HBM), and/or may be non-volatile memory, such as 3D XPoint or Nano-Ram. In at least one embodiment, some portion of the processor memory 1701 may be volatile memory while another portion may be non-volatile memory (e.g., using a two-level memory (2 LM) hierarchy).
As described herein, although the respective multi-core processors 1705 and GPUs 1710 may be physically coupled to particular memories 1701, 1720, respectively, a unified memory architecture may be implemented in which a virtual system address space (also referred to as an "effective address" space) is distributed among the respective physical memories. For example, the processor memories 1701 (1) -1701 (M) may each include 64GB of system memory address space, and the GPU memories 1720 (1) -1720 (N) may each include 32GB of system memory address space, resulting in a total of 256GB of addressable memory when M=2 and N=4. Other values of N and M are possible.
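To make the address-space arithmetic above concrete, the following is a minimal sketch of laying a single effective address space over M=2 processor memories of 64GB and N=4 GPU memories of 32GB, and of routing an effective address to the physical memory that backs it. The function names (build_address_map, route) are illustrative assumptions and do not correspond to elements in the figures.

```python
# A minimal sketch (not the patented implementation) of laying out a single
# effective address space over M processor memories and N GPU memories,
# using the example sizes from the text: M=2 x 64 GB and N=4 x 32 GB.

GB = 1 << 30

def build_address_map(cpu_sizes, gpu_sizes):
    """Return a list of (start, end, label) regions covering the effective space."""
    regions, cursor = [], 0
    for i, size in enumerate(cpu_sizes):
        regions.append((cursor, cursor + size, f"processor_memory[{i}]"))
        cursor += size
    for i, size in enumerate(gpu_sizes):
        regions.append((cursor, cursor + size, f"gpu_memory[{i}]"))
        cursor += size
    return regions

def route(regions, effective_address):
    """Map an effective address to the physical memory that backs it."""
    for start, end, label in regions:
        if start <= effective_address < end:
            return label, effective_address - start
    raise ValueError("address outside the unified space")

regions = build_address_map([64 * GB] * 2, [32 * GB] * 4)
total = sum(end - start for start, end, _ in regions)
assert total == 256 * GB                       # 2*64 GB + 4*32 GB
print(route(regions, 130 * GB))                # lands in the first GPU memory
```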
In at least one embodiment, at least one component shown or described with respect to fig. 17A is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 17A is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 17A is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 17A is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
FIG. 17B illustrates additional details for an interconnection between the multi-core processor 1707 and the graphics acceleration module 1746 according to at least one embodiment. In at least one embodiment, the graphics acceleration module 1746 may include one or more GPU chips integrated on a line card that is coupled to the processor 1707 via a high speed link 1740 (e.g., PCIe bus, NVLink, etc.). In at least one embodiment, the graphics acceleration module 1746 may alternatively be integrated on a package or chip with the processor 1707.
In at least one embodiment, the processor 1707 includes a plurality of cores 1760A-1760D (which may be referred to as "execution units"), each having a translation lookaside buffer ("TLB") 1761A-1761D and one or more caches 1762A-1762D. In at least one embodiment, cores 1760A-1760D may include various other components not shown for executing instructions and processing data. In at least one embodiment, caches 1762A-1762D may include level 1 (L1) and level 2 (L2) caches. Further, one or more shared caches 1756 may be included in caches 1762A-1762D and shared by the various sets of cores 1760A-1760D. For example, one embodiment of the processor 1707 includes 24 cores, each core having its own L1 cache, 12 shared L2 caches, and 12 shared L3 caches. In this embodiment, two adjacent cores share one or more L2 and L3 caches. In at least one embodiment, the processor 1707 and the graphics acceleration module 1746 are connected to a system memory 1714, which system memory 1714 may include the processor memories 1701 (1) -1701 (M) of FIG. 17A.
In at least one embodiment, coherency is maintained for data and instructions stored in the respective caches 1762A-1762D, 1756 and system memory 1714 via inter-core communication over a coherency bus 1764. In at least one embodiment, for example, each cache may have cache coherency logic/circuitry associated therewith to communicate over coherency bus 1764 in response to detecting a read or write to a particular cache line. In at least one embodiment, a cache snoop protocol is implemented over coherency bus 1764 to snoop (snoop) cache accesses.
In at least one embodiment, the proxy circuit 1725 communicatively couples the graphics acceleration module 1746 to the coherency bus 1764, allowing the graphics acceleration module 1746 to participate in the cache coherency protocol as a peer of cores 1760A-1760D. In particular, in at least one embodiment, interface 1735 provides a connection to proxy circuitry 1725 through high speed link 1740 and interface 1737 connects graphics acceleration module 1746 to high speed link 1740.
In at least one embodiment, accelerator integrated circuit 1736 provides cache management, memory access, context management, and interrupt management services on behalf of multiple graphics processing engines 1731 (1) -1731 (N) of graphics acceleration module 1746. In at least one embodiment, graphics processing engines 1731 (1) -1731 (N) may each include a separate Graphics Processing Unit (GPU). In at least one embodiment, the plurality of graphics processing engines 1731 (1) -1731 (N) of the graphics acceleration module 1746 includes one or more graphics cores 2000 as discussed in connection with FIGS. 20A and 20B. In at least one embodiment, graphics processing engines 1731 (1) -1731 (N) may alternatively include different types of graphics processing engines within GPUs, such as graphics execution units, media processing engines (e.g., video encoders/decoders), samplers, and blit (block handling) engines. In at least one embodiment, the graphics acceleration module 1746 may be a GPU with multiple graphics processing engines 1731 (1) -1731 (N), or the graphics processing engines 1731 (1) -1731 (N) may be individual GPUs integrated on a common package, line card, or chip.
In at least one embodiment, the accelerator integrated circuit 1736 includes a Memory Management Unit (MMU) 1739 to perform various memory management functions, such as virtual-to-physical memory translation (also referred to as effective-to-real memory translation), and also includes memory access protocols for accessing the system memory 1714. In at least one embodiment, the MMU 1739 may also include a translation lookaside buffer ("TLB") (not shown) for caching virtual/effective to physical/real address translations. In at least one embodiment, caches 1738 may store commands and data for efficient access by graphics processing engines 1731 (1) -1731 (N). In at least one embodiment, the data stored in the caches 1738 and the graphics memories 1733 (1) -1733 (M) may be kept coherent with the core caches 1762A-1762D, 1756 and the system memory 1714, possibly using the fetch unit 1744. As previously described, this may be implemented on behalf of caches 1738 and memories 1733 (1) -1733 (M) via proxy circuit 1725 (e.g., to send updates to cache 1738 regarding modification/access of cache lines on processor caches 1762A-1762D, 1756 and to receive updates from cache 1738).
In at least one embodiment, a set of registers 1745 stores context data for threads executed by graphics processing engines 1731 (1) -1731 (N), and context management circuitry 1748 manages thread contexts. For example, the context management circuitry 1748 may perform save and restore operations to save and restore the contexts of the various threads during a context switch (e.g., where a first thread's state is saved and a second thread's state is restored so that the second thread may be executed by a graphics processing engine). For example, the context management circuitry 1748 may store current register values to a designated region in memory (e.g., identified by a context pointer) upon a context switch. The register values may then be restored when returning to that context. In at least one embodiment, the interrupt management circuitry 1747 receives and processes interrupts received from system devices.
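The save/restore behavior described above can be illustrated with a small, hedged sketch; the Context and ContextManager names are assumptions for illustration only and do not correspond to elements in the figures.

```python
# A hedged sketch of the save/restore behaviour described above: on a context
# switch the current register values are written to a region identified by a
# context pointer, and restored when that context is scheduled again.

from dataclasses import dataclass, field

@dataclass
class Context:
    context_pointer: int                       # designated save region in memory
    saved_registers: dict = field(default_factory=dict)

class ContextManager:
    def __init__(self):
        self.registers = {}                    # live register file of the engine

    def switch(self, outgoing: Context, incoming: Context):
        # Save the outgoing thread's register state to its save region ...
        outgoing.saved_registers = dict(self.registers)
        # ... then restore the incoming thread's state so it can execute.
        self.registers = dict(incoming.saved_registers)

mgr = ContextManager()
a, b = Context(0x1000), Context(0x2000)
mgr.registers = {"r0": 42}
mgr.switch(a, b)        # thread A saved, thread B (empty state) restored
mgr.switch(b, a)        # thread A's registers, including r0=42, come back
assert mgr.registers["r0"] == 42
```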
In at least one embodiment, MMU 1739 translates virtual/effective addresses from graphics processing engine 1731 to real/physical addresses in system memory 1714. In at least one embodiment, accelerator integrated circuit 1736 supports multiple (e.g., 4, 8, 16) graphics accelerator modules 1746 and/or other accelerator devices. In at least one embodiment, the graphics accelerator module 1746 may be dedicated to a single application executing on the processor 1707 or may be shared among multiple applications. In at least one embodiment, a virtualized graphics execution environment is presented in which the resources of graphics processing engines 1731 (1) -1731 (N) are shared with multiple applications or Virtual Machines (VMs). In at least one embodiment, resources may be subdivided into "slices" that are assigned to different VMs and/or applications based on processing requirements and priorities associated with the VMs and/or applications.
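As one illustration of how resources might be subdivided into "slices" according to priorities, the following sketch allocates a fixed number of slices in proportion to per-VM weights; the proportional policy, the remainder handling, and the names are assumptions, not a description of accelerator integrated circuit 1736 itself.

```python
# An illustrative sketch, not from the source, of dividing a fixed number of
# processing "slices" among VMs/applications in proportion to their priorities.

def allocate_slices(total_slices, priorities):
    """priorities: mapping of VM/app name -> positive weight."""
    total_weight = sum(priorities.values())
    allocation, assigned = {}, 0
    for name, weight in sorted(priorities.items()):
        share = int(total_slices * weight / total_weight)
        allocation[name] = share
        assigned += share
    # Hand any rounding remainder to the highest-priority requester.
    if assigned < total_slices:
        top = max(priorities, key=priorities.get)
        allocation[top] += total_slices - assigned
    return allocation

alloc = allocate_slices(16, {"vm_a": 3, "vm_b": 1})
assert sum(alloc.values()) == 16 and alloc["vm_a"] > alloc["vm_b"]
```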
In at least one embodiment, the accelerator integrated circuit 1736 acts as a bridge to the system for the graphics acceleration module 1746 and provides address translation and system memory caching services. In addition, in at least one embodiment, accelerator integrated circuit 1736 may provide a virtualization facility for a host processor to manage virtualization, interrupts, and memory management for graphics processing engines 1731 (1) -1731 (N).
In at least one embodiment, since the hardware resources of graphics processing engines 1731 (1) -1731 (N) are explicitly mapped to the real address space seen by host processor 1707, any host processor may directly address these resources using the effective address values. In at least one embodiment, one function of accelerator integrated circuit 1736 is the physical separation of graphics processing engines 1731 (1) -1731 (N) so that they appear to the system as separate units.
In at least one embodiment, one or more graphics memories 1733 (1) -1733 (M) are coupled to each of the graphics processing engines 1731 (1) -1731 (N), respectively, where N=M. In at least one embodiment, graphics memories 1733 (1) -1733 (M) store instructions and data being processed by each of the graphics processing engines 1731 (1) -1731 (N). In at least one embodiment, graphics memories 1733 (1) -1733 (M) may be volatile memory, such as DRAM (including stacked DRAM), GDDR memory (e.g., GDDR5, GDDR6), or HBM, and/or may be non-volatile memory, such as 3D XPoint or Nano-Ram.
In at least one embodiment, to reduce data traffic on high-speed link 1740, biasing techniques may be used to ensure that the data stored in graphics memories 1733 (1) -1733 (M) is the most commonly used by graphics processing engines 1731 (1) -1731 (N), and preferably the data that is not used (at least not frequently used) by cores 1760A-1760D. Similarly, in at least one embodiment, the biasing mechanism attempts to keep core-needed (and preferably, not needed by graphics processing engines 1731 (1) -1731 (N)) data in caches 1762A-1762D, 1756 and system memory 1714.
In at least one embodiment, at least one component shown or described with respect to fig. 17B is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 17B is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 17B is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 17B is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
Fig. 17C illustrates another exemplary embodiment in which accelerator integrated circuit 1736 is integrated within processor 1707. In this embodiment, graphics processing engines 1731 (1) -1731 (N) communicate directly with accelerator integrated circuit 1736 over high speed link 1740 via interface 1737 and interface 1735 (which, again, may be any form of bus or interface protocol). In at least one embodiment, accelerator integrated circuit 1736 may perform operations similar to those described with respect to FIG. 17B, but potentially with higher throughput due to its close proximity to coherency bus 1764 and caches 1762A-1762D, 1756. In at least one embodiment, the accelerator integrated circuit supports different programming models, including process-specific programming models (no graphics acceleration module virtualization) and shared programming models (with virtualization), which may include programming models controlled by the accelerator integrated circuit 1736 and programming models controlled by the graphics acceleration module 1746.
In at least one embodiment, graphics processing engines 1731 (1) -1731 (N) are dedicated to a single application or process under a single operating system. In at least one embodiment, a single application may funnel other application requests to graphics processing engines 1731 (1) -1731 (N), thereby providing virtualization within a VM/partition.
In at least one embodiment, graphics processing engines 1731 (1) -1731 (N) may be shared by multiple VM/application partitions. In at least one embodiment, the shared model may use a hypervisor to virtualize graphics processing engines 1731 (1) -1731 (N) to allow access by each operating system. In at least one embodiment, for a single-partition system without a hypervisor, the operating system owns graphics processing engines 1731 (1) -1731 (N). In at least one embodiment, the operating system may virtualize graphics processing engines 1731 (1) -1731 (N) to provide access to each process or application.
In at least one embodiment, the graphics acceleration module 1746 or the individual graphics processing engines 1731 (1) -1731 (N) use a process handle (handle) to select a process element. In at least one embodiment, the process elements are stored in the system memory 1714 and are addressable using the effective address to real address translation techniques described herein. In at least one embodiment, the process handle may be an implementation-specific value that is provided to the host process (i.e., system software is invoked to add a process element to the process element linked list) when registering its context with graphics processing engines 1731 (1) -1731 (N). In at least one embodiment, the lower 16 bits of the process handle may be the offset of the process element in the process element linked list.
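The process-handle convention described above (lower 16 bits as the offset of the process element) can be sketched as follows; the list-based registry and the implementation-specific upper bits are assumptions used only for illustration.

```python
# A minimal sketch of the process-handle convention described above, where the
# lower 16 bits of the handle are treated as the offset of the process element
# in the process-element list. The surrounding structure is an assumption.

HANDLE_OFFSET_MASK = (1 << 16) - 1

def register_context(process_elements, element, implementation_bits=0xABCD):
    """Append a process element and return a handle whose low 16 bits encode its offset."""
    offset = len(process_elements)
    process_elements.append(element)
    return (implementation_bits << 16) | (offset & HANDLE_OFFSET_MASK)

def lookup(process_elements, handle):
    return process_elements[handle & HANDLE_OFFSET_MASK]

elements = []
h = register_context(elements, {"wd": 0xDEAD0000, "state": "ready"})
assert lookup(elements, h)["state"] == "ready"
```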
In at least one embodiment, at least one component shown or described with respect to fig. 17C is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 17C is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 17C is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 17C is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
Fig. 17D illustrates an exemplary accelerator integrated slice 1790. In at least one embodiment, a "slice" includes a specified portion of the processing resources of accelerator integrated circuit 1736. In at least one embodiment, an application's effective address space 1782 within system memory 1714 stores process elements 1783. In at least one embodiment, a process element 1783 is stored in response to a GPU call 1781 from an application 1780 executing on the processor 1707. In at least one embodiment, a process element 1783 contains the process state of the corresponding application 1780. In at least one embodiment, the Work Descriptor (WD) 1784 contained in the process element 1783 may be a single job requested by the application or may contain a pointer to a queue of jobs. In at least one embodiment, WD 1784 is a pointer to a job request queue in the application's effective address space 1782.
In at least one embodiment, the graphics acceleration module 1746 and/or the various graphics processing engines 1731 (1) -1731 (N) may be shared by all or a subset of the processes in the system. In at least one embodiment, an infrastructure may be included for setting the process state and sending WD 1784 to the graphics acceleration module 1746 to begin a job in the virtualized environment.
In at least one embodiment, the process-specific programming model is implementation specific. In at least one embodiment, in this model, a single process owns the graphics acceleration module 1746 or an individual graphics processing engine 1731. In at least one embodiment, when the graphics acceleration module 1746 is owned by a single process, the hypervisor initializes the accelerator integrated circuit 1736 for the owning partition, and the operating system initializes the accelerator integrated circuit 1736 for the owning process when the graphics acceleration module 1746 is assigned.
In at least one embodiment, in operation, the WD fetch unit 1791 in the accelerator integrated slice 1790 fetches the next WD 1784, which includes an indication of the work to be done by one or more graphics processing engines of the graphics acceleration module 1746. In at least one embodiment, data from WD 1784 may be stored in registers 1745 and used by MMU 1739, interrupt management circuitry 1747, and/or context management circuitry 1748 as shown. For example, one embodiment of MMU 1739 includes segment/page table walk circuitry for accessing segment/page tables 1786 within the OS virtual address space 1785. In at least one embodiment, the interrupt management circuitry 1747 can process interrupt events 1792 received from the graphics acceleration module 1746. In at least one embodiment, when performing graphics operations, effective addresses 1793 generated by graphics processing engines 1731 (1) -1731 (N) are translated into real addresses by MMU 1739.
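A hedged sketch of the effective-to-real translation step is shown below; the flat page table and 4KB page size are assumptions standing in for the segment/page tables 1786 walked by MMU 1739.

```python
# A hedged sketch of the effective-to-real translation performed when a work
# descriptor is processed: a table mapping effective pages to real page frames
# is consulted to turn an effective address into a real address.
# Page size and table layout here are assumptions for illustration only.

PAGE_SIZE = 4096

def translate(page_table, effective_address):
    """Walk a flat page table mapping effective page numbers to real page frames."""
    page_number, offset = divmod(effective_address, PAGE_SIZE)
    try:
        real_frame = page_table[page_number]
    except KeyError:
        raise RuntimeError("translation fault: page not mapped") from None
    return real_frame * PAGE_SIZE + offset

page_table = {0x42: 0x7}                       # effective page 0x42 -> real frame 0x7
assert translate(page_table, 0x42 * PAGE_SIZE + 0x10) == 0x7 * PAGE_SIZE + 0x10
```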
In at least one embodiment, registers 1745 are replicated for each graphics processing engine 1731 (1) -1731 (N) and/or graphics acceleration module 1746, and the registers 1745 may be initialized by a hypervisor or operating system. In at least one embodiment, each of these replicated registers may be included in accelerator integrated slice 1790. Exemplary registers that may be initialized by the hypervisor are shown in table 1.
TABLE 1 registers for hypervisor initialization
An exemplary register that may be initialized by the operating system is shown in Table 2.
TABLE 2 registers for operating system initialization
In at least one embodiment, each WD 1784 is specific to a particular graphics acceleration module 1746 and/or graphics processing engines 1731 (1) -1731 (N). In at least one embodiment, it contains all the information needed by graphics processing engines 1731 (1) -1731 (N) to complete the work, or it may be a pointer to a memory location where the application has set a command queue for the work to complete.
In at least one embodiment, at least one component shown or described with respect to fig. 17D is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 17D is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 17D is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 17D is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
FIG. 17E illustrates additional details of one exemplary embodiment of a sharing model. This embodiment includes a hypervisor real address space 1798 in which a list of process elements 1799 is stored. In at least one embodiment, the hypervisor real address space 1798 can be accessed via a hypervisor 1796, the hypervisor 1796 virtualizing the graphics acceleration module engine for the operating system 1795.
In at least one embodiment, the shared programming model allows all processes or a subset of processes from all partitions or a subset of partitions in the system to use the graphics acceleration module 1746. In at least one embodiment, there are two programming models in which the graphics acceleration module 1746 is shared by multiple processes and partitions: time-slice shared and graphics-directed shared.
In at least one embodiment, in this model, the hypervisor 1796 owns the graphics acceleration module 1746 and makes its functions available to all operating systems 1795. In at least one embodiment, for the graphics acceleration module 1746 to support virtualization by the hypervisor 1796, the graphics acceleration module 1746 may adhere to certain requirements, such as: (1) an application's job requests must be autonomous (i.e., no state needs to be maintained between jobs), or the graphics acceleration module 1746 must provide a context save and restore mechanism; (2) the graphics acceleration module 1746 guarantees that an application's job request completes within a specified amount of time, including any translation faults, or the graphics acceleration module 1746 provides the ability to preempt job processing; and (3) when operating in the directed shared programming model, the graphics acceleration module 1746 must ensure fairness between processes.
In at least one embodiment, application 1780 is required to make an operating system 1795 system call with a graphics acceleration module type, a work descriptor (WD), an authority mask register (AMR) value, and a context save/restore area pointer (CSRP). In at least one embodiment, the graphics acceleration module type describes the targeted acceleration function for the system call. In at least one embodiment, the graphics acceleration module type may be a system-specific value. In at least one embodiment, WD is formatted specifically for the graphics acceleration module 1746 and may take the form of a graphics acceleration module 1746 command, an effective address pointer to a user-defined structure, an effective address pointer to a command queue, or any other data structure describing the work to be done by the graphics acceleration module 1746.
In at least one embodiment, the AMR value is the AMR state to be used for the current process. In at least one embodiment, the value passed to the operating system is similar to an application setting the AMR. In at least one embodiment, if the accelerator integrated circuit 1736 (not shown) and graphics acceleration module 1746 implementations do not support a user authority mask override register (UAMOR), the operating system may apply the current UAMOR value to the AMR value before passing the AMR in the hypervisor call. In at least one embodiment, the hypervisor 1796 can optionally apply the current authority mask override register (AMOR) value before placing the AMR into the process element 1783. In at least one embodiment, CSRP is one of the registers 1745 that contains the effective address of a region in the application's effective address space 1782 for the graphics acceleration module 1746 to save and restore context state. In at least one embodiment, this pointer is optional if no state needs to be saved between jobs or when a job is preempted. In at least one embodiment, the context save/restore area may be pinned system memory.
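One way to picture how the override registers could be applied along this call path is sketched below; treating the UAMOR and AMOR overrides as bitwise ANDs of the AMR value is an assumption made only for this example, as the exact bit semantics are implementation specific.

```python
# An illustrative sketch of how authority-mask values could be combined along
# the call path described above: the operating system applies UAMOR before the
# hypervisor call, and the hypervisor applies AMOR before storing the AMR in
# the process element. The bitwise-AND semantics are an assumption.

def os_prepare_amr(application_amr: int, uamor: int) -> int:
    # Operating system applies the user authority-mask override before the
    # hypervisor call.
    return application_amr & uamor

def hypervisor_place_amr(amr_from_os: int, amor: int) -> int:
    # Hypervisor applies its own authority-mask override before storing the
    # value in the process element.
    return amr_from_os & amor

amr = os_prepare_amr(0b1111, uamor=0b1101)
assert hypervisor_place_amr(amr, amor=0b0111) == 0b0101
```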
Upon receiving a system call, the operating system 1795 can verify that the application 1780 has been registered and granted permission to use the graphics acceleration module 1746. Then, in at least one embodiment, operating system 1795 uses the information shown in Table 3 to invoke hypervisor 1796.
TABLE 3 operating System to hypervisor call parameters
In at least one embodiment, upon receiving the hypervisor call, the hypervisor 1796 verifies that the operating system 1795 is registered and granted permission to use the graphics acceleration module 1746. Then, in at least one embodiment, the hypervisor 1796 places the process element 1783 into a linked list of process elements of the corresponding graphics acceleration module 1746 type. In at least one embodiment, the process elements may include the information shown in Table 4.
TABLE 4 Process element information
In at least one embodiment, the hypervisor initializes a plurality of registers 1745 of the accelerator integrated slice 1790.
In at least one embodiment, at least one component shown or described with respect to fig. 17E is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 17E is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 17E is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 17E is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
As shown in fig. 17F, in at least one embodiment, a unified memory is used that is addressable via a common virtual memory address space used to access physical processor memories 1701 (1) -1701 (M) and GPU memories 1720 (1) -1720 (N). In this implementation, operations performed on GPUs 1710 (1) -1710 (N) utilize the same virtual/effective memory address space to access processor memories 1701 (1) -1701 (M), and vice versa, thereby simplifying programmability. In at least one embodiment, a first portion of the virtual/effective address space is allocated to processor memory 1701 (1), a second portion to second processor memory 1701 (2), a third portion to GPU memory 1720 (1), and so on. In at least one embodiment, the entire virtual/effective memory space (sometimes referred to as the effective address space) is thereby distributed across each of the processor memories 1701 and GPU memories 1720, allowing any processor or GPU to access any physical memory using a virtual address mapped to that memory.
In at least one embodiment, the bias/coherency management circuitry 1794A-1794E within one or more MMUs 1739A-1739E ensures cache coherency between one or more host processors (e.g., 1705) and the caches of GPU 1710 and implements a bias technique that indicates physical memory in which certain types of data should be stored. In at least one embodiment, although multiple instances of the bias/coherency management circuitry 1794A-1794E are shown in FIG. 17F, the bias/coherency circuitry may be implemented within the MMU of the one or more host processors 1705 and/or within the accelerator integrated circuit 1736.
One embodiment allows GPU memory 1720 to be mapped as part of system memory and accessed using shared virtual memory (SVM) technology, but without suffering the performance drawbacks associated with full system cache coherency. In at least one embodiment, the ability of GPU memory 1720 to be accessed as system memory without onerous cache coherency overhead provides an advantageous operating environment for GPU offload. In at least one embodiment, this arrangement allows software on the host processor 1705 to set up operands and access computation results without the overhead of traditional I/O DMA data copies. In at least one embodiment, such traditional copies involve driver calls, interrupts, and memory-mapped I/O (MMIO) accesses, which are all inefficient relative to simple memory accesses. In at least one embodiment, the ability to access GPU memory 1720 without cache coherency overhead may be critical to the execution time of an offloaded computation. In at least one embodiment, for example, with substantial streaming write memory traffic, cache coherency overhead can significantly reduce the effective write bandwidth seen by GPU 1710. In at least one embodiment, the efficiency of operand setup, the efficiency of result access, and the efficiency of GPU computation may all play a role in determining the effectiveness of GPU offload.
In at least one embodiment, selection between GPU bias and host processor bias is driven by a bias tracker data structure. In at least one embodiment, for example, a bias table may be used, which may be a page-granular structure (e.g., controlled at the granularity of a memory page) that includes 1 or 2 bits per GPU-attached memory page. In at least one embodiment, the bias table may be implemented in a stolen memory range of one or more GPU memories 1720, with or without a bias cache in the GPU 1710 (e.g., for caching frequently/recently used entries of the bias table). Alternatively, in at least one embodiment, the entire bias table may be maintained within the GPU.
In at least one embodiment, the bias table entry associated with each access to GPU-attached memory 1720 is accessed prior to the actual access to the GPU memory, causing the following operations. In at least one embodiment, local requests from GPU 1710 that find their page in GPU bias are forwarded directly to the corresponding GPU memory 1720. In at least one embodiment, local requests from the GPU that find their page in host bias are forwarded to the processor 1705 (e.g., over a high-speed link as described herein). In at least one embodiment, requests from the processor 1705 that find the requested page in host processor bias complete like a normal memory read. Alternatively, requests directed to a GPU-biased page may be forwarded to GPU 1710. In at least one embodiment, if the GPU is not currently using the page, the GPU may then migrate the page to host processor bias. In at least one embodiment, the bias state of a page may be changed by a software-based mechanism, a hardware-assisted software-based mechanism, or, for a limited set of cases, by a purely hardware-based mechanism.
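The routing decisions listed above can be summarized in a short sketch; the BiasTable class and string results are illustrative assumptions rather than the hardware mechanism itself.

```python
# A minimal sketch of the bias-table routing described above: each GPU-attached
# page has a small bias entry consulted before the access, and the request is
# forwarded either to local GPU memory or to the host. Names are illustrative.

GPU_BIAS, HOST_BIAS = "gpu", "host"

class BiasTable:
    def __init__(self):
        self.entries = {}                      # page number -> bias state

    def bias_of(self, page):
        return self.entries.get(page, HOST_BIAS)

def gpu_access(bias_table, page):
    """Route a local GPU request according to the page's bias."""
    if bias_table.bias_of(page) == GPU_BIAS:
        return "forward to local GPU memory"
    return "forward to host processor over the high-speed link"

table = BiasTable()
table.entries[7] = GPU_BIAS
assert gpu_access(table, 7).startswith("forward to local GPU")
assert gpu_access(table, 9).startswith("forward to host")
```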
In at least one embodiment, one mechanism for changing the bias state employs an API call (e.g., OpenCL), which in turn invokes the GPU's device driver, which in turn sends a message (or enqueues a command descriptor) to the GPU directing it to change the bias state and, for some transitions, performs a cache flush operation in the host. In at least one embodiment, the cache flush operation is used for the transition from host processor 1705 bias to GPU bias, but not for the opposite transition.
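A self-contained sketch of this bias transition flow follows; the dict-based bias table and the flush_host_cache callback are assumptions, and the only behavior taken from the text is that a host cache flush accompanies the host-to-GPU transition but not the reverse.

```python
# A hedged sketch of the bias-transition flow described above: the driver asks
# the GPU to change a page's bias, and a host cache flush is performed only
# when migrating from host bias to GPU bias.

GPU_BIAS, HOST_BIAS = "gpu", "host"

def change_bias(bias_table, page, new_bias, flush_host_cache):
    current = bias_table.get(page, HOST_BIAS)
    if current == HOST_BIAS and new_bias == GPU_BIAS:
        flush_host_cache(page)                 # needed when migrating host -> GPU
    # No flush is performed for the opposite (GPU -> host) migration.
    bias_table[page] = new_bias

flushed = []
bias_table = {}
change_bias(bias_table, page=9, new_bias=GPU_BIAS, flush_host_cache=flushed.append)
assert bias_table[9] == GPU_BIAS and flushed == [9]
```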
In at least one embodiment, cache coherency is maintained by temporarily rendering GPU-biased pages uncacheable by the host processor 1705. In at least one embodiment, to access these pages, processor 1705 may request access from GPU 1710, which may or may not grant access immediately. Thus, in at least one embodiment, to reduce communication between the processor 1705 and the GPU 1710, it is beneficial to ensure that GPU-biased pages are those required by the GPU but not the host processor 1705, and vice versa.
One or more hardware structures 915 are used to perform one or more embodiments. Details regarding one or more hardware structures 915 may be provided herein in connection with fig. 9A and/or 9B.
In at least one embodiment, at least one component shown or described with respect to fig. 17F is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 17F is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 17F is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 17F is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
FIG. 18 illustrates an exemplary integrated circuit and associated graphics processor that can be fabricated using one or more IP cores in accordance with at least one embodiment. In addition to the illustration, other logic and circuitry may be included in at least one embodiment, including additional graphics processors/cores, peripheral interface controllers, or general purpose processor cores.
Fig. 18 is a block diagram illustrating an exemplary system-on-chip integrated circuit 1800 that may be fabricated using one or more IP cores in accordance with at least one embodiment. In at least one embodiment, the integrated circuit 1800 includes one or more application processors 1805 (e.g., CPUs), at least one graphics processor 1810, and may additionally include an image processor 1815 and/or a video processor 1820, any of which may be a modular IP core. In at least one embodiment, integrated circuit 1800 includes peripheral or bus logic including a USB controller 1825, a UART controller 1830, an SPI/SDIO controller 1835, and an I²S/I²C controller 1840. In at least one embodiment, integrated circuit 1800 can include a display device 1845 coupled to one or more of a high definition multimedia interface (HDMI) controller 1850 and a mobile industry processor interface (MIPI) display interface 1855. In at least one embodiment, storage may be provided by a flash memory subsystem 1860 that includes flash memory and a flash memory controller. In at least one embodiment, a memory interface may be provided via a memory controller 1865 for accessing SDRAM or SRAM memory devices. In at least one embodiment, some integrated circuits further include an embedded security engine 1870.
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, logic 915 may be used in integrated circuit 1800 to infer or predict an operation based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 18 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 18 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 18 is used to cause one or more neural networks to select one or more variations in a feature of one or more text prompts based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 18 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
FIGS. 19A and 19B illustrate an exemplary integrated circuit and associated graphics processor that can be fabricated using one or more IP cores in accordance with at least one embodiment. In addition to the illustration, other logic and circuitry may be included in at least one embodiment, including additional graphics processors/cores, peripheral interface controllers, or general purpose processor cores.
Fig. 19A and 19B are block diagrams illustrating exemplary graphics processors for use within a SoC according to embodiments described herein. Fig. 19A illustrates an exemplary graphics processor 1910 of a system-on-chip integrated circuit that can be fabricated using one or more IP cores in accordance with at least one embodiment. Fig. 19B illustrates an additional exemplary graphics processor 1940 of a system-on-chip integrated circuit, which may be fabricated using one or more IP cores, in accordance with at least one embodiment. In at least one embodiment, the graphics processor 1910 of FIG. 19A is a low power graphics processor core. In at least one embodiment, graphics processor 1940 of FIG. 19B is a higher performance graphics processor core. In at least one embodiment, each of the graphics processors 1910, 1940 may be a variant of the graphics processor 1810 of FIG. 18.
In at least one embodiment, graphics processor 1910 includes vertex processor 1905 and one or more fragment processors 1915A-1915N (e.g., 1915A, 1915B, 1915C, 1915D-1915N-1, and 1915N). In at least one embodiment, the graphics processor 1910 may execute different shader programs via separate logic such that the vertex processor 1905 is optimized to perform operations for the vertex shader program, while one or more fragment processors 1915A-1915N perform fragment (e.g., pixel) shading operations for fragment or pixel shader programs. In at least one embodiment, vertex processor 1905 performs the vertex processing stages of the 3D graphics pipeline and generates primitives and vertex data. In at least one embodiment, one or more fragment processors 1915A-1915N use the primitives and vertex data generated by vertex processor 1905 to generate a frame buffer for display on a display device. In at least one embodiment, one or more fragment processors 1915A-1915N are optimized to execute fragment shader programs as provided in the OpenGL API, which may be used to perform operations similar to pixel shader programs provided in the Direct 3D API.
In at least one embodiment, graphics processor 1910 additionally includes one or more memory management units (MMUs) 1920A-1920B, one or more caches 1925A-1925B, and one or more circuit interconnects 1930A-1930B. In at least one embodiment, the one or more MMUs 1920A-1920B provide virtual-to-physical address mapping for graphics processor 1910 (including for vertex processor 1905 and/or fragment processors 1915A-1915N), which may reference vertex or image/texture data stored in memory in addition to vertex or image/texture data stored in the one or more caches 1925A-1925B. In at least one embodiment, the one or more MMUs 1920A-1920B may be synchronized with other MMUs within the system, including one or more MMUs associated with the one or more application processors 1805, the image processor 1815, and/or the video processor 1820 of FIG. 18, such that each processor 1805-1820 may participate in a shared or unified virtual memory system. In at least one embodiment, the one or more circuit interconnects 1930A-1930B enable graphics processor 1910 to interface with other IP cores within the SoC via an internal bus of the SoC or via direct connections.
In at least one embodiment, graphics processor 1940 includes one or more shader cores 1955A-1955N (e.g., 1955A, 1955B, 1955C, 1955D, 1955E, 1955F through 1955N-1 and 1955N) as shown in FIG. 19B, which provide a unified shader core architecture in which a single core or type of core can execute all types of programmable shader code, including shader program code for implementing vertex shaders, fragment shaders, and/or compute shaders. In at least one embodiment, the number of shader cores may vary. In at least one embodiment, the graphics processor 1940 includes an inter-core task manager 1945 that acts as a thread dispatcher for dispatching execution threads to the one or more shader cores 1955A-1955N, and a tiling unit 1958 to accelerate tiling operations for tile-based rendering, in which rendering operations for a scene are subdivided in image space, for example to exploit local spatial coherence within the scene or to optimize use of internal caches.
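As a rough illustration of tile-based partitioning, the following sketch subdivides a frame in image space and distributes the resulting tiles across shader cores round-robin; the 64x64 tile size and the dispatch policy are assumptions, not the behavior of tiling unit 1958 or inter-core task manager 1945.

```python
# A minimal sketch of the tile-based partitioning idea described above:
# rendering work for a scene is subdivided in image space so that each tile
# can be processed with good locality. Tile size and policy are assumptions.

def partition_into_tiles(width, height, tile_size=64):
    """Yield (x0, y0, x1, y1) image-space tiles covering a width x height frame."""
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            yield x, y, min(x + tile_size, width), min(y + tile_size, height)

def assign_tiles(tiles, num_shader_cores):
    """Round-robin tiles across shader cores, as a simple dispatch policy."""
    buckets = [[] for _ in range(num_shader_cores)]
    for i, tile in enumerate(tiles):
        buckets[i % num_shader_cores].append(tile)
    return buckets

buckets = assign_tiles(partition_into_tiles(1920, 1080), num_shader_cores=8)
assert sum(len(b) for b in buckets) == 30 * 17   # 1920/64 x ceil(1080/64) tiles
```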
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, logic 915 may be used in an integrated circuit of graphics processor 1910 and/or 1940 to perform inference or predictive operations based at least in part on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 19A and 19B is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 19A and 19B is used to cause selection of a most consistent output of one or more pre-trained neural networks based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 19A and 19B is used to cause one or more neural networks to select one or more variations in a feature of one or more text prompts based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 19A and 19B is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
FIGS. 20A and 20B illustrate additional exemplary graphics processor logic in accordance with at least one embodiment. In at least one embodiment, the components shown and described in connection with fig. 20A and 20B are integrated into a single system, such as a Graphics Processing Unit (GPU), SoC, or another type of processor. In at least one embodiment, FIG. 20A illustrates a graphics core 2000 that may be included within graphics processor 1810 of FIG. 18 and, in at least one embodiment, may be one of the unified shader cores 1955A-1955N shown in FIG. 19B. FIG. 20B illustrates a highly parallel general-purpose graphics processing unit ("GPGPU", which may also be referred to as a "graphics processing unit") 2030 suitable for deployment on a multi-chip module in at least one embodiment. In at least one embodiment, graphics processing unit 2030 is a GPGPU comprising a graphics processor. In at least one embodiment, integrated circuit 1800 includes graphics core 2000, e.g., for forming an integrated circuit and/or for forming a SoC, where such integrated circuit and/or such SoC performs the operations described herein.
In at least one embodiment, graphics core 2000 includes a shared instruction cache 2002, a texture unit 2018, and a cache/shared memory 2020 (e.g., including L1, L2, L3, last level cache, or other caches) that are common to execution resources within graphics core 2000. In at least one embodiment, graphics core 2000 may include multiple slices 2001A-2001N, or partitions, for each core, and a graphics processor may include multiple instances of graphics core 2000. In at least one embodiment, each slice 2001A-2001N is a partition of graphics core 2000. In at least one embodiment, each slice 2001A-2001N includes a plurality of sub-slices. In at least one embodiment, slices 2001A-2001N may be independent of, or dependent on, other slices. In at least one embodiment, slices 2001A-2001N may include support logic that includes local instruction caches 2004A-2004N, thread schedulers (sequencers) 2006A-2006N, thread dispatchers 2008A-2008N, and a set of registers 2010A-2010N. In at least one embodiment, slices 2001A-2001N may include a set of additional functional units (AFUs 2012A-2012N), floating point units (FPUs 2014A-2014N), integer arithmetic logic units (ALUs 2016A-2016N), address calculation units (ACUs 2013A-2013N), double-precision floating point units (DPFPUs 2015A-2015N), and matrix processing units (MPUs 2017A-2017N). In at least one embodiment, MPUs 2017A-2017N are referred to as matrix engines.
In at least one embodiment, each slice 2001A-2001N includes one or more engines for floating point and integer vector operations and one or more engines for accelerating convolution and matrix operations in AI, machine learning, or large dataset workloads. In at least one embodiment, one or more slices 2001A-2001N include one or more vector engines for computing vectors (e.g., computing mathematical operations on vectors). In at least one embodiment, a vector engine may compute vector operations in 16-bit floating point (also referred to as "FP16"), 32-bit floating point (also referred to as "FP32"), or 64-bit floating point (also referred to as "FP64"). In at least one embodiment, one or more slices 2001A-2001N include 16 vector engines paired with 16 matrix math units to compute matrix/tensor operations, where the vector engines and matrix math units are exposed via matrix extensions. In at least one embodiment, a slice is a specified portion of the processing resources of a processing unit, e.g., 16 cores and a ray tracing unit, or 8 cores, a thread scheduler, a thread dispatcher, and additional functional units for the processor. In at least one embodiment, graphics core 2000 includes one or more matrix engines for computing matrix operations, for example, when computing tensor operations.
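By way of a hedged example of the FP16 vector operations such an engine might execute, the following CUDA C++ kernel performs a packed half-precision fused multiply-add; the `__half2` type, the kernel name, and the assumption of a GPU with FP16 arithmetic support (compute capability 5.3 or higher) are illustrative and not derived from this disclosure.

```cuda
// Minimal sketch: two FP16 lanes processed per instruction, vector-engine style.
#include <cuda_fp16.h>

__global__ void axpy_fp16(const __half2* x, __half2* y, __half2 alpha, int n2) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n2) {
        // Fused multiply-add on a pair of FP16 values: y = alpha * x + y.
        y[i] = __hfma2(alpha, x[i], y[i]);
    }
}
```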
In at least one embodiment, one or more of the slices 2001A-2001N include one or more ray tracing units for computing ray tracing operations (e.g., 16 ray tracing units per slice 2001A-2001N). In at least one embodiment, the ray tracing unit calculates ray traversals, triangle intersections, bounding box intersections, or other ray tracing operations.
In at least one embodiment, one or more slices 2001A-2001N include media slices that encode, decode, and/or transcode data; scale and/or format-convert data; and/or perform video quality operations on video data.
In at least one embodiment, one or more slices 2001A-2001N are linked to an L2 cache and memory structure, a link connector, a High Bandwidth Memory (HBM) (e.g., HBM2e, HBM3) stack, and a media engine. In at least one embodiment, one or more slices 2001A-2001N include multiple cores (e.g., 16 cores) and multiple ray tracing units (e.g., 16) paired with each core. In at least one embodiment, one or more slices 2001A-2001N have one or more L1 caches. In at least one embodiment, one or more slices 2001A-2001N include one or more vector engines; one or more instruction caches for storing instructions; one or more L1 caches for caching data; one or more Shared Local Memories (SLMs) for storing data, e.g., corresponding to instructions; one or more samplers for sampling data; one or more ray tracing units for performing ray tracing operations; one or more geometry units for performing operations in the geometry pipeline and/or applying geometric transformations to vertices or polygons; one or more rasterizers to describe an image in a vector graphics format (e.g., shapes) and convert it into a raster image (e.g., a series of pixels, points, or lines that, when displayed together, create an image represented by the shapes); one or more hierarchical depth buffers (HiZ) for caching data; and/or one or more pixel backends. In at least one embodiment, slices 2001A-2001N include a memory structure, such as an L2 cache.
In at least one embodiment, the FPUs 2014A-2014N may perform single-precision (32-bit) and half-precision (16-bit) floating-point operations, while the DPFPUs 2015A-2015N perform double-precision (64-bit) floating-point operations. In at least one embodiment, ALUs 2016A-2016N may perform variable precision integer operations with 8-bit, 16-bit, and 32-bit precision, and may be configured for mixed precision operations. In at least one embodiment, MPUs 2017A-2017N may also be configured for mixed precision matrix operations, including half-precision floating point operations and 8-bit integer operations. In at least one embodiment, MPUs 2017A-2017N may perform various matrix operations to accelerate machine learning application frameworks, including enabling support for accelerated generic matrix-to-matrix multiplication (GEMM). In at least one embodiment, AFUs 2012A-2012N may perform additional logical operations not supported by floating point units or integer units, including trigonometric function operations (e.g., sine, cosine, etc.).
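As one possible concrete analogue of the mixed-precision matrix operations and GEMM acceleration described above, the following sketch uses CUDA's WMMA API, in which a warp multiplies 16x16 FP16 tiles into an FP32 accumulator; the API choice, tile shape, and the requirement of a tensor-core-capable GPU (sm_70 or later) are assumptions made for illustration and do not describe MPUs 2017A-2017N themselves.

```cuda
// Minimal sketch: one warp computes a 16x16x16 tile D = A(FP16) * B(FP16) + C(FP32).
#include <cuda_fp16.h>
#include <mma.h>
using namespace nvcuda;

__global__ void wmma_tile(const half* a, const half* b, float* d) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> acc;

    wmma::fill_fragment(acc, 0.0f);              // start from a zero accumulator
    wmma::load_matrix_sync(a_frag, a, 16);       // leading dimension 16
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::mma_sync(acc, a_frag, b_frag, acc);    // mixed-precision multiply-accumulate
    wmma::store_matrix_sync(d, acc, 16, wmma::mem_row_major);
}
```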
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, logic 915 may be used in graphics core 2000 to perform inference or prediction operations based at least in part on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.
In at least one embodiment, graphics core 2000 includes an interconnect and link fabric sublayer attached to a switch and a GPU-GPU bridge, which enables multiple graphics cores 2000 (e.g., 8) to be attached to each other without glue logic, through load/store units (LSUs), data transfer units, and synchronization semantics spanning the multiple graphics cores 2000. In at least one embodiment, the interconnect comprises a standardized interconnect (e.g., PCIe) or some combination thereof.
In at least one embodiment, graphics core 2000 includes multiple tiles. In at least one embodiment, a tile is an individual die or one or more dies, where individual dies may be connected using interconnects (e.g., embedded multi-die interconnect bridges (EMIBs)). In at least one embodiment, graphics core 2000 includes compute tiles, memory tiles (e.g., where a memory tile is exclusively accessible by different tiles or different chipsets (such as a Rambo tile)), substrate tiles, base tiles, HBM tiles, link tiles, and EMIB tiles, where all of the tiles are packaged together in graphics core 2000 as part of a GPU. In at least one embodiment, graphics core 2000 may include multiple tiles in a single package (also referred to as a "multi-tile package"). In at least one embodiment, a compute tile may have 8 graphics cores 2000 and L1 caches; a base tile may have a host interface with PCIe 5.0, HBM2e, MDFI, and EMIB; and a link tile may have 8 links and 8 ports with an embedded switch. In at least one embodiment, tiles are connected with face-to-face (F2F) chip-on-chip bonding using fine-pitch, 36-micron micro bumps (e.g., copper pillars). In at least one embodiment, graphics core 2000 includes a memory structure (which includes memory) and is a tile that is accessible by multiple tiles. In at least one embodiment, graphics core 2000 stores, accesses, or loads its own hardware context into memory, where the hardware context is a set of data loaded from registers prior to process resumption, and where the hardware context may indicate a state of the hardware (e.g., a state of the GPU).
In at least one embodiment, graphics core 2000 includes serializer/deserializer (SERDES) circuitry that converts a serial data stream to a parallel data stream, or vice versa.
In at least one embodiment, graphics core 2000 includes a high-speed unified fabric (GPU-to-GPU), load/store units, bulk data transfer and synchronization semantics, and GPUs connected through an embedded switch, where the GPU-GPU bridge is controlled by a controller.
In at least one embodiment, graphics core 2000 executes an API that abstracts the hardware of graphics core 2000 and accesses libraries with instructions to perform mathematical operations (e.g., a math kernel library), deep neural network operations (e.g., a deep neural network library), vector operations, collective communications, thread building blocks, video processing, data analytics, and/or ray tracing operations.
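To illustrate the library-behind-an-API pattern described above, the following hedged sketch dispatches a single-precision matrix multiply through cuBLAS rather than hand-written kernels; cuBLAS is used here only as a representative math kernel library, and the wrapper function, its parameters, and the omission of error handling are illustrative assumptions.

```cuda
// Minimal sketch: the library chooses the device kernels behind a single API call.
#include <cublas_v2.h>

void gemm_via_library(const float* dA, const float* dB, float* dC,
                      int m, int n, int k) {
    cublasHandle_t handle;
    cublasCreate(&handle);
    const float alpha = 1.0f, beta = 0.0f;
    // Column-major C = alpha * A * B + beta * C on device pointers dA, dB, dC.
    cublasSgemm(handle, CUBLAS_OP_N, CUBLAS_OP_N,
                m, n, k, &alpha, dA, m, dB, k, &beta, dC, m);
    cublasDestroy(handle);
}
```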
In at least one embodiment, at least one component shown or described with respect to fig. 20A is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 20A is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 20A is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 20A is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
FIG. 20B illustrates a GPGPU 2030, according to at least one embodiment, which may be configured to enable highly parallel computing operations to be performed by an array of graphics processing units. In at least one embodiment, the GPGPU 2030 may be directly linked to other instances of the GPGPU 2030 to create multiple GPU clusters to increase the training speed for deep neural networks. In at least one embodiment, the GPGPU 2030 includes a host interface 2032 for enabling a connection with a host processor. In at least one embodiment, host interface 2032 is a PCI Express interface. In at least one embodiment, host interface 2032 may be a vendor-specific communication interface or communication fabric. In at least one embodiment, the GPGPU 2030 receives commands from a host processor and allocates execution threads associated with those commands to a set of computing clusters 2036A-2036H using a global scheduler 2034 (which may be referred to as a thread sequencer and/or an asynchronous compute engine). In at least one embodiment, the computing clusters 2036A-2036H share a cache memory 2038. In at least one embodiment, the cache memory 2038 may serve as a higher level cache for the cache memory within the computing clusters 2036A-2036H. In at least one embodiment, the computing clusters 2036A-2036H comprise slices or are referred to as "slices". In at least one embodiment, the GPGPU 2030 is part of a SoC, such as part of the integrated circuit 1800 (fig. 18).
In at least one embodiment, GPGPU 2030 includes memories 2044A-2044B that are coupled to compute clusters 2036A-2036H via a set of memory controllers 2042A-2042B (e.g., one or more controllers for HBM2e). In at least one embodiment, the memories 2044A-2044B may comprise various types of memory devices, including Dynamic Random Access Memory (DRAM) or graphics random access memory, such as Synchronous Graphics Random Access Memory (SGRAM), which includes Graphics Double Data Rate (GDDR) memory.
In at least one embodiment, the computing clusters 2036A-2036H each include a set of graphics cores, such as the graphics core 2000 of FIG. 20A, which may include multiple types of integer and floating point logic units that may perform computing operations over a range of precision including precision suitable for machine learning computing. For example, in at least one embodiment, at least a subset of the floating point units in each of the computing clusters 2036A-2036H may be configured to perform 16-bit or 32-bit floating point operations, while a different subset of the floating point units may be configured to perform 64-bit floating point operations.
In at least one embodiment, multiple instances of the GPGPU 2030 may be configured to operate as a compute cluster. In at least one embodiment, the communication used by the computing clusters 2036A-2036H for synchronization and data exchange varies from embodiment to embodiment. In at least one embodiment, multiple instances of the GPGPU 2030 communicate through a host interface 2032. In at least one embodiment, the GPGPU 2030 includes an I/O hub 2039 that couples the GPGPU 2030 to a GPU link 2040, which GPU link 2040 enables direct connection to other instances of the GPGPU 2030. In at least one embodiment, GPU link 2040 is coupled to a dedicated GPU-to-GPU bridge that enables communication and synchronization between multiple instances of GPGPU 2030. In at least one embodiment, GPU link 2040 is coupled with a high speed interconnect to send and receive data to other GPGPUs or parallel processors. In at least one embodiment, multiple instances of GPGPU 2030 are located in separate data processing systems and communicate via a network device accessible via host interface 2032. In at least one embodiment, GPU link 2040 may also be configured to enable a connection to a host processor in addition to or in lieu of host interface 2032.
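As a hedged software-level illustration of direct GPU-to-GPU communication over such a link, the following CUDA C++ sketch enables peer access between two devices and copies a buffer directly between them, falling back to a host-routed copy when no peer path is available; the device numbering, function name, and fallback policy are illustrative assumptions.

```cuda
// Minimal sketch: direct peer-to-peer copy between two GPUs when available.
#include <cuda_runtime.h>

void copy_between_gpus(void* dst_on_gpu1, const void* src_on_gpu0, size_t bytes) {
    int can_access = 0;
    cudaDeviceCanAccessPeer(&can_access, /*device=*/1, /*peerDevice=*/0);
    if (can_access) {
        cudaSetDevice(1);
        cudaDeviceEnablePeerAccess(0, 0);                   // enable the GPU-to-GPU path
        cudaMemcpyPeer(dst_on_gpu1, 1, src_on_gpu0, 0, bytes);
    } else {
        // No peer link: let the runtime route the copy (typically through the host).
        cudaMemcpy(dst_on_gpu1, src_on_gpu0, bytes, cudaMemcpyDefault);
    }
}
```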
In at least one embodiment, the GPGPU 2030 may be configured to train a neural network. In at least one embodiment, the GPGPU 2030 may be used within an inference platform. In at least one embodiment, where reasoning is performed using the GPGPU 2030, the GPGPU 2030 may include fewer computing clusters 2036A-2036H relative to when training a neural network using the GPGPU 2030. In at least one embodiment, the memory technology associated with memories 2044A-2044B may differ between the reasoning and training configurations, with higher bandwidth memory technology being dedicated to the training configuration. In at least one embodiment, the reasoning configuration of the GPGPU 2030 may support reasoning specific instructions. For example, in at least one embodiment, the inference configuration may provide support for one or more 8-bit integer dot product instructions, which may be used during inference operations of a deployed neural network.
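As a hedged example of the kind of 8-bit integer dot-product instruction an inference configuration might expose, the following CUDA C++ kernel uses the `__dp4a` intrinsic, which computes the dot product of four packed signed 8-bit values and adds it to a 32-bit accumulator (available on compute capability 6.1 and later); the packing scheme and kernel name are illustrative assumptions.

```cuda
// Minimal sketch: one INT8 dot-product-accumulate per element, as used in quantized inference.
#include <cuda_runtime.h>

__global__ void int8_dot(const int* a_packed, const int* b_packed, int* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // Each int packs four signed 8-bit values; __dp4a produces their dot product
        // plus the 32-bit accumulator (here 0) in a single instruction.
        out[i] = __dp4a(a_packed[i], b_packed[i], 0);
    }
}
```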
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, logic 915 may be used in GPGPU 2030 for performing inference or prediction operations based at least in part on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 20B is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 20B is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 20B is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 20B is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
Fig. 21 is a block diagram illustrating a computing system 2100 in accordance with at least one embodiment. In at least one embodiment, the computing system 2100 includes a processing subsystem 2101 having one or more processors 2102 and a system memory 2104 that communicate via an interconnection path that may include a memory hub 2105. In at least one embodiment, the memory hub 2105 may be a separate component within a chipset component or may be integrated within one or more processors 2102. In at least one embodiment, the memory hub 2105 is coupled to the I/O subsystem 2111 via a communication link 2106. In at least one embodiment, the I/O subsystem 2111 includes an I/O hub 2107, which may enable the computing system 2100 to receive input from one or more input devices 2108. In at least one embodiment, the I/O hub 2107 may enable a display controller, which may be included in the one or more processors 2102, to provide output to the one or more display devices 2110A. In at least one embodiment, the one or more display devices 2110A coupled to the I/O hub 2107 may comprise a local, internal, or embedded display device.
In at least one embodiment, the processing subsystem 2101 includes one or more parallel processors 2112 coupled to a memory hub 2105 via a bus or other communication link 2113. In at least one embodiment, the communication link 2113 may use one of any number of standards based on communication link technology or protocols (such as, but not limited to, PCI Express), or may be a vendor-specific communication interface or communication fabric. In at least one embodiment, one or more parallel processors 2112 form a computationally intensive parallel or vector processing system that may include a large number of processing cores and/or processing clusters, such as integrated many-core (MIC) processors. In at least one embodiment, some or all of the one or more parallel processors 2112 form a graphics processing subsystem that can output pixels to one of the one or more display devices 2110A coupled via the I/O hub 2107. In at least one embodiment, the one or more parallel processors 2112 may also include a display controller and display interface (not shown) for enabling direct connection to one or more display devices 2110B. In at least one embodiment, the one or more parallel processors 2112 include one or more cores, such as graphics core 2000 discussed herein.
In at least one embodiment, the system memory unit 2114 may be connected to the I/O hub 2107 to provide a storage mechanism for the computing system 2100. In at least one embodiment, the I/O switch 2116 can be used to provide an interface mechanism for enabling connection between the I/O hub 2107 and other components, such as a network adapter 2118 and/or a wireless network adapter 2119, which can be integrated into a platform, as well as various other devices that can be added via one or more additional devices 2120. In at least one embodiment, the network adapter 2118 can be an Ethernet adapter or another wired network adapter. In at least one embodiment, the wireless network adapter 2119 may include one or more of Wi-Fi, bluetooth, near Field Communication (NFC), or other network devices including one or more radios.
In at least one embodiment, the computing system 2100 may include other components not explicitly shown, including USB or other port connections, optical storage drives, video capture devices, etc., that may also be connected to the I/O hub 2107. In at least one embodiment, the communication paths interconnecting the various components in FIG. 21 may be implemented using any suitable protocol, such as a PCI (peripheral component interconnect) based protocol (e.g., PCI-Express) or other bus or point-to-point communication interfaces and/or protocols such as the NV-Link high-speed interconnect or interconnect protocol.
In at least one embodiment, the one or more parallel processors 2112 include circuitry optimized for graphics and video processing, including, for example, video output circuitry, and constitute a Graphics Processing Unit (GPU); for example, the one or more parallel processors 2112 include graphics core 2000. In at least one embodiment, one or more of the parallel processors 2112 include circuitry optimized for general purpose processing. In at least one embodiment, components of computing system 2100 may be integrated with one or more other system elements on a single integrated circuit. For example, in at least one embodiment, the one or more parallel processors 2112, the memory hub 2105, the one or more processors 2102, and the I/O hub 2107 may be integrated into a system on a chip (SoC) integrated circuit. In at least one embodiment, the components of the computing system 2100 may be integrated into a single package to form a System In Package (SIP) configuration. In at least one embodiment, at least a portion of the components of computing system 2100 may be integrated into a multi-chip module (MCM), which may be interconnected with other multi-chip modules into a modular computing system.
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, the logic 915 may be used in the computing system 2100 for performing inference or predictive operations based at least in part on weight parameters calculated using the neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 21 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 21 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 21 is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 21 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
Processor
Fig. 22A illustrates a parallel processor 2200 in accordance with at least one embodiment. In at least one embodiment, the various components of the parallel processor 2200 may be implemented using one or more integrated circuit devices, such as a programmable processor, an Application Specific Integrated Circuit (ASIC), or a Field Programmable Gate Array (FPGA). In at least one embodiment, the parallel processor 2200 shown is a variation of one or more of the parallel processors 2112 shown in fig. 21 in accordance with an exemplary embodiment. In at least one embodiment, parallel processor 2200 includes one or more graphics cores 2000.
In at least one embodiment, parallel processor 2200 includes parallel processing unit 2202. In at least one embodiment, parallel processing unit 2202 includes an I/O unit 2204 that enables communication with other devices, including other instances of parallel processing unit 2202. In at least one embodiment, the I/O unit 2204 may be directly connected to other devices. In at least one embodiment, the I/O unit 2204 is connected to other devices via the use of a hub or switch interface (e.g., memory hub 2205). In at least one embodiment, the connection between the memory hub 2205 and the I/O unit 2204 forms a communication link 2213. In at least one embodiment, the I/O unit 2204 is coupled to a host interface 2206 and a memory crossbar 2216, wherein the host interface 2206 receives commands for performing processing operations and the memory crossbar 2216 receives commands for performing memory operations.
In at least one embodiment, when the host interface 2206 receives a command buffer via the I/O unit 2204, the host interface 2206 may direct work operations for executing those commands to the front end 2208. In at least one embodiment, the front end 2208 is coupled to a scheduler 2210 (which may be referred to as a sequencer), the scheduler 2210 being configured to assign commands or other work items to the processing cluster array 2212. In at least one embodiment, the scheduler 2210 ensures that the processing cluster array 2212 is properly configured and in an active state before tasks are assigned to clusters in the processing cluster array 2212. In at least one embodiment, scheduler 2210 is implemented via firmware logic executing on a microcontroller. In at least one embodiment, the microcontroller-implemented scheduler 2210 may be configured to perform complex scheduling and work allocation operations at coarse and fine granularity, thereby enabling fast preemption and context switching of threads executing on the processing cluster array 2212. In at least one embodiment, host software can submit workloads for scheduling on the processing cluster array 2212 via one of a plurality of graphics processing paths. In at least one embodiment, the workload may then be automatically distributed over the processing cluster array 2212 by scheduler 2210 logic within the microcontroller that includes scheduler 2210.
In at least one embodiment, the processing cluster array 2212 may include up to "N" processing clusters (e.g., cluster 2214A, cluster 2214B through cluster 2214N), where "N" represents a positive integer (which may be an integer "N" different from the integers used in the other figures). In at least one embodiment, each cluster 2214A-2214N of the processing cluster array 2212 can execute a large number of concurrent threads. In at least one embodiment, the scheduler 2210 may assign work to the clusters 2214A-2214N in the processing cluster array 2212 using various scheduling and/or work assignment algorithms, which may vary according to the workload generated for each type of program or computation. In at least one embodiment, the scheduling may be dynamically processed by scheduler 2210 or may be aided in part by compiler logic during compilation of program logic configured to be executed by processing cluster array 2212. In at least one embodiment, different clusters 2214A-2214N in the processing cluster array 2212 may be allocated for processing different types of programs or for performing different types of computations.
In at least one embodiment, the processing cluster array 2212 may be configured to perform various types of parallel processing operations. In at least one embodiment, the processing cluster array 2212 is configured to perform general parallel computing operations. For example, in at least one embodiment, the processing cluster array 2212 can include logic for performing processing tasks including filtering video and/or audio data, performing modeling operations, including physical operations, and performing data transformations.
In at least one embodiment, the processing cluster array 2212 is configured to perform parallel graphics processing operations. In at least one embodiment, the processing cluster array 2212 may include additional logic for supporting the execution of such graphics processing operations, including but not limited to texture sampling logic to perform texture operations, as well as tessellation logic and other vertex processing logic. In at least one embodiment, the processing cluster array 2212 can be configured to execute shader programs related to graphics processing, such as, but not limited to, vertex shaders, tessellation shaders, geometry shaders, and pixel shaders. In at least one embodiment, the parallel processing unit 2202 may transfer data from system memory for processing via the I/O unit 2204. In at least one embodiment, during processing, the transferred data may be stored to on-chip memory (e.g., parallel processor memory 2222) during processing and then written back to system memory.
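The following hedged CUDA C++ sketch mirrors the data movement just described: data is transferred from system memory to device memory, processed by a kernel on the parallel processing unit, and written back to system memory; the kernel and the specific operation are illustrative and not taken from this disclosure.

```cuda
// Minimal sketch: system memory -> device memory -> process -> write back.
#include <cuda_runtime.h>
#include <vector>

__global__ void saturate(float* v, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) v[i] = fminf(v[i], 1.0f);   // example processing step
}

void process_on_device(std::vector<float>& host) {
    float* dev = nullptr;
    size_t bytes = host.size() * sizeof(float);
    cudaMalloc(&dev, bytes);
    cudaMemcpy(dev, host.data(), bytes, cudaMemcpyHostToDevice);   // system memory -> device
    saturate<<<(host.size() + 255) / 256, 256>>>(dev, (int)host.size());
    cudaMemcpy(host.data(), dev, bytes, cudaMemcpyDeviceToHost);   // write back to system memory
    cudaFree(dev);
}
```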
In at least one embodiment, when the parallel processing unit 2202 is used to perform graphics processing, the scheduler 2210 may be configured to divide the processing workload into approximately equal sized tasks to better enable allocation of graphics processing operations to multiple clusters 2214A-2214N in the processing cluster array 2212. In at least one embodiment, portions of the processing cluster array 2212 may be configured to perform different types of processing. For example, in at least one embodiment, a first portion may be configured to perform vertex shading and topology generation, a second portion may be configured to perform tessellation and geometry shading, and a third portion may be configured to perform pixel shading or other screen space operations to produce a rendered image for display. In at least one embodiment, intermediate data generated by one or more of the clusters 2214A-2214N may be stored in a buffer to allow the intermediate data to be transferred between the clusters 2214A-2214N for further processing.
In at least one embodiment, the processing cluster array 2212 can receive processing tasks to be performed via a scheduler 2210, the scheduler 2210 receiving commands defining the processing tasks from the front end 2208. In at least one embodiment, the processing tasks may include an index of data to be processed, e.g., surface (patch) data, raw data, vertex data, and/or pixel data, as well as state parameters and commands defining how to process the data (e.g., what program to execute). In at least one embodiment, the scheduler 2210 may be configured to obtain an index corresponding to the task, or may receive the index from the front end 2208. In at least one embodiment, the front end 2208 can be configured to ensure that the processing cluster array 2212 is configured to be in a valid state prior to launching a workload specified by an incoming command buffer (e.g., batch-buffer, push-buffer, etc.).
In at least one embodiment, each of the one or more instances of parallel processing unit 2202 may be coupled to a parallel processor memory 2222. In at least one embodiment, parallel processor memory 2222 may be accessed via memory crossbar 2216, which memory crossbar 2216 may receive memory requests from processing cluster array 2212 and I/O unit 2204. In at least one embodiment, memory crossbar 2216 can access parallel processor memory 2222 via memory interface 2218. In at least one embodiment, the memory interface 2218 can include a plurality of partition units (e.g., partition unit 2220A, partition unit 2220B through partition unit 2220N), which can each be coupled to a portion of the parallel processor memory 2222 (e.g., a memory unit). In at least one embodiment, the number of partition units 2220A-2220N is configured to be equal to the number of memory units such that a first partition unit 2220A has a corresponding first memory unit 2224A, a second partition unit 2220B has a corresponding second memory unit 2224B, and an Nth partition unit 2220N has a corresponding Nth memory unit 2224N. In at least one embodiment, the number of partition units 2220A-2220N may not be equal to the number of memory units.
In at least one embodiment, the memory units 2224A-2224N may include various types of memory devices, including Dynamic Random Access Memory (DRAM) or graphics random access memory, such as Synchronous Graphics Random Access Memory (SGRAM), including Graphics Double Data Rate (GDDR) memory. In at least one embodiment, memory units 2224A-2224N may also include 3D stacked memory, including but not limited to High Bandwidth Memory (HBM, HBM2e, or HBM3). In at least one embodiment, render targets such as frame buffers or texture maps may be stored across memory units 2224A-2224N, allowing partition units 2220A-2220N to write portions of each render target in parallel to efficiently use the available bandwidth of parallel processor memory 2222. In at least one embodiment, a local instance of parallel processor memory 2222 may be eliminated in favor of a unified memory design that utilizes system memory together with local cache memory.
In at least one embodiment, any of the clusters 2214A-2214N in the processing cluster array 2212 can process data to be written to any of the memory cells 2224A-2224N within the parallel processor memory 2222. In at least one embodiment, the memory crossbar 2216 can be configured to transmit the output of each cluster 2214A-2214N to any partition units 2220A-2220N or another cluster 2214A-2214N, and the other cluster 2214A-2214N can perform additional processing operations on the output. In at least one embodiment, each cluster 2214A-2214N can communicate with a memory interface 2218 through a memory crossbar 2216 to read from or write to various external memory devices. In at least one embodiment, memory crossbar 2216 has a connection to memory interface 2218 for communication with I/O unit 2204, and a connection to a local instance of parallel processor memory 2222, which enables processing units within different processing clusters 2214A-2214N to communicate with system memory or other memory that is not local to parallel processing unit 2202. In at least one embodiment, the memory crossbar 2216 can use virtual channels to separate traffic flows between clusters 2214A-2214N and partition units 2220A-2220N.
In at least one embodiment, multiple instances of parallel processing unit 2202 may be provided on a single add-on card, or multiple add-on cards may be interconnected. In at least one embodiment, different instances of parallel processing unit 2202 may be configured to interoperate even though the different instances have different numbers of processing cores, different amounts of local parallel processor memory, and/or other configuration differences. For example, in at least one embodiment, some instances of parallel processing unit 2202 may include a higher precision floating point unit relative to other instances. In at least one embodiment, a system comprising one or more instances of parallel processing unit 2202 or parallel processor 2200 may be implemented in a variety of configurations and form factors, including, but not limited to, a desktop, laptop or handheld personal computer, a server, a workstation, a game console, and/or an embedded system.
In at least one embodiment, at least one component shown or described with respect to fig. 22A is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 22A is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 22A is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 22A is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
Fig. 22B is a block diagram of a partition unit 2220 in accordance with at least one embodiment. In at least one embodiment, the partition unit 2220 is an example of one of the partition units 2220A-2220N of FIG. 22A. In at least one embodiment, partition unit 2220 includes an L2 cache 2221, a frame buffer interface 2225, and a ROP 2226 (raster operations unit). In at least one embodiment, L2 cache 2221 is a read/write cache configured to perform load and store operations received from memory crossbar 2216 and ROP 2226. In at least one embodiment, the L2 cache 2221 outputs read misses and urgent write-back requests to the frame buffer interface 2225 for processing. In at least one embodiment, updates can also be sent to the frame buffer for processing via the frame buffer interface 2225. In at least one embodiment, the frame buffer interface 2225 interfaces with one of the memory units in the parallel processor memory, such as memory units 2224A-2224N of FIG. 22A (e.g., within parallel processor memory 2222).
In at least one embodiment, ROP 2226 is a processing unit that performs raster operations, such as stencil, z-test, blending, and the like. In at least one embodiment, ROP 2226 then outputs processed graphics data, which is stored in graphics memory. In at least one embodiment, ROP 2226 includes compression logic to compress depth or color data written to memory and decompress depth or color data read from memory. In at least one embodiment, the compression logic may be lossless compression logic utilizing one or more of a variety of compression algorithms. In at least one embodiment, the type of compression performed by ROP 2226 may vary based on the statistical properties of the data to be compressed. For example, in at least one embodiment, delta color compression is performed on depth and color data on a per-tile basis.
In at least one embodiment, ROP 2226 is included within each processing cluster (e.g., clusters 2214A-2214N of fig. 22A) instead of partition unit 2220. In at least one embodiment, read and write requests for pixel data, but not pixel fragment data, are communicated through memory crossbar 2216. In at least one embodiment, the processed graphics data may be displayed on a display device (such as one of the one or more display devices 2110 of fig. 21), routed by the processor 2102 for further processing, or routed by one of the processing entities within the parallel processor 2200 of fig. 22A for further processing.
In at least one embodiment, at least one component shown or described with respect to fig. 22B is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 22B is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 22B is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 22B is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
Fig. 22C is a block diagram of a processing cluster 2214 within a parallel processing unit in accordance with at least one embodiment. In at least one embodiment, the processing clusters are examples of one of the processing clusters 2214A-2214N of FIG. 22A. In at least one embodiment, the processing cluster 2214 may be configured to execute a number of threads in parallel, where a "thread" refers to an instance of a particular program executing on a particular set of input data. In at least one embodiment, single Instruction Multiple Data (SIMD) instruction issue techniques are used to support parallel execution of a large number of threads without providing multiple independent instruction units. In at least one embodiment, single Instruction Multithreading (SIMT) techniques are used to support parallel execution of a large number of generally simultaneous threads using a common instruction unit configured to issue instructions to a set of processing engines within each processing cluster.
In at least one embodiment, the operation of the processing cluster 2214 can be controlled via a pipeline manager 2232 that assigns processing tasks to the SIMT parallel processors. In at least one embodiment, the pipeline manager 2232 receives instructions from the scheduler 2210 of FIG. 22A and manages execution of these instructions via the graphics multiprocessor 2234 and/or the texture units 2236. In at least one embodiment, the graphics multiprocessor 2234 is an illustrative example of a SIMT parallel processor. However, in at least one embodiment, various types of SIMT parallel processors of different architectures may be included within processing cluster 2214. In at least one embodiment, one or more instances of the graphics multiprocessor 2234 may be included within the processing cluster 2214. In at least one embodiment, the graphics multiprocessor 2234 may process data, and the data crossbar 2240 may be used to distribute the processed data to one of a plurality of possible destinations (including other shader units). In at least one embodiment, the pipeline manager 2232 may facilitate distribution of processed data by specifying a destination of the processed data to be distributed via the data crossbar 2240.
In at least one embodiment, each graphics multiprocessor 2234 within processing cluster 2214 may include the same set of function execution logic (e.g., arithmetic logic units, load-store units, etc.). In at least one embodiment, the function execution logic may be configured in a pipelined fashion where a new instruction may be issued before a previous instruction completes. In at least one embodiment, the function execution logic supports various operations including integer and floating point arithmetic, comparison operations, boolean operations, bit shifting, and computation of various algebraic functions. In at least one embodiment, some of the functional unit hardware may be utilized to perform different operations, and any combination of functional units may be present.
In at least one embodiment, instructions transferred to the processing cluster 2214 constitute threads. In at least one embodiment, the set of threads executing across a set of parallel processing engines is a thread group. In at least one embodiment, a thread group executes the same program on different input data. In at least one embodiment, each thread within a thread group may be assigned to a different processing engine within the graphics multiprocessor 2234. In at least one embodiment, the thread group may include fewer threads than the number of processing engines within the graphics multiprocessor 2234. In at least one embodiment, when a thread group includes fewer threads than the number of processing engines, one or more of the processing engines may be idle during the cycles in which that thread group is processed. In at least one embodiment, the thread group may also include more threads than the number of processing engines within the graphics multiprocessor 2234. In at least one embodiment, when the thread group includes more threads than the number of processing engines within the graphics multiprocessor 2234, processing may be performed over successive clock cycles. In at least one embodiment, multiple thread groups may be executed concurrently on the graphics multiprocessor 2234.
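As a hedged illustration of a launch covering more work items than there are processing engines, the following CUDA C++ kernel uses a grid-stride loop so that surplus elements are simply handled on later iterations; the kernel name and the operation it performs are illustrative assumptions.

```cuda
// Minimal sketch: more elements than hardware lanes, handled over successive iterations.
#include <cuda_runtime.h>

__global__ void add_one(float* v, int n) {
    for (int i = blockIdx.x * blockDim.x + threadIdx.x;
         i < n;
         i += blockDim.x * gridDim.x) {   // stride by the total number of launched threads
        v[i] += 1.0f;
    }
}
```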
In at least one embodiment, the graphics multiprocessor 2234 includes an internal cache memory for performing load and store operations. In at least one embodiment, the graphics multiprocessor 2234 may relinquish the internal caches and use cache memory (e.g., the L1 cache 2248) within the processing cluster 2214. In at least one embodiment, each graphics multiprocessor 2234 may also access an L2 cache within partition units (e.g., partition units 2220A-2220N of FIG. 22A) that are shared among all processing clusters 2214 and may be used to transfer data between threads. In at least one embodiment, the graphics multiprocessor 2234 may also access off-chip global memory, which may include one or more of local parallel processor memory and/or system memory. In at least one embodiment, any memory external to parallel processing unit 2202 may be used as global memory. In at least one embodiment, the processing cluster 2214 includes multiple instances of the graphics multiprocessor 2234, which may share common instructions and data that may be stored in the L1 cache 2248.
In at least one embodiment, each processing cluster 2214 may include a memory management unit ("MMU") 2245 configured to map virtual addresses to physical addresses. In at least one embodiment, one or more instances of the MMU 2245 may reside within the memory interface 2218 of fig. 22A. In at least one embodiment, the MMU 2245 includes a set of Page Table Entries (PTEs) for mapping virtual addresses to physical addresses of tiles and, optionally, to cache line indexes. In at least one embodiment, the MMU 2245 may include address Translation Lookaside Buffers (TLBs) or caches, which may reside in the graphics multiprocessor 2234 or the L1 cache 2248 or within the processing cluster 2214. In at least one embodiment, physical addresses are processed to distribute surface data access locality for efficient request interleaving among partition units. In at least one embodiment, the cache line index may be used to determine whether a request for a cache line is a hit or miss.
In at least one embodiment, the processing clusters 2214 may be configured such that each graphics multiprocessor 2234 is coupled to a texture unit 2236 to perform texture mapping operations that determine texture sample locations, read texture data, and filter texture data. In at least one embodiment, texture data is read from an internal texture L1 cache (not shown) or from an L1 cache within the graphics multiprocessor 2234, and fetched from an L2 cache, local parallel processor memory, or system memory, as desired. In at least one embodiment, each graphics multiprocessor 2234 outputs processed tasks to data crossbar 2240 to provide the processed tasks to another processing cluster 2214 for further processing, or to store the processed tasks in an L2 cache, local parallel processor memory, or in system memory via memory crossbar 2216. In at least one embodiment, preROP 2242 (pre-raster operations unit) is configured to receive data from graphics multiprocessor 2234, and direct the data to ROP units, which may be located with the partition units described herein (e.g., partition units 2220A-2220N of FIG. 22A). In at least one embodiment, the PreROP 2242 unit may perform optimizations for color blending, organizing pixel color data, and performing address translation.
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, the logic 915 may be used in the graphics processing cluster 2214 for performing inference or predictive operations based at least in part on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 22C is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 22C is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 22C is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 22C is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
Fig. 22D illustrates a graphics multiprocessor 2234 in accordance with at least one embodiment. In at least one embodiment, the graphics multiprocessor 2234 is coupled with a pipeline manager 2232 of the processing cluster 2214. In at least one embodiment, the graphics multiprocessor 2234 has an execution pipeline that includes, but is not limited to, an instruction cache 2252, an instruction unit 2254, an address mapping unit 2256, a register file 2258, one or more General Purpose Graphics Processing Unit (GPGPU) cores 2262, and one or more load/store units 2266, wherein the one or more load/store units 2266 can execute load/store operations to load/store instructions corresponding to the execution operations. In at least one embodiment, the GPGPU core 2262 and the load/store unit 2266 are coupled with a cache memory 2272 and a shared memory 2270 via a memory and cache interconnect 2268. In at least one embodiment, the GPGPU core 2262 is part of a SoC, such as part of the integrated circuit 1800 in fig. 18.
In at least one embodiment, the instruction cache 2252 receives a stream of instructions to be executed from the pipeline manager 2232. In at least one embodiment, instructions are cached in the instruction cache 2252 and dispatched for execution by the instruction unit 2254. In at least one embodiment, the instruction unit 2254 may dispatch instructions as a thread group (e.g., thread bundle, wave front, wave), where each thread in the thread group is assigned to a different execution unit within the GPGPU core 2262. In at least one embodiment, an instruction may access any local, shared, or global address space by specifying an address within a unified address space. In at least one embodiment, address mapping unit 2256 may be used to translate addresses in the unified address space into different memory addresses that may be accessed by load/store unit 2266.
In at least one embodiment, register file 2258 provides a set of registers for the functional units of graphics multiprocessor 2234. In at least one embodiment, register file 2258 provides temporary storage for operands of the data path connected to the functional units of graphics multiprocessor 2234 (e.g., GPGPU core 2262, load/store unit 2266). In at least one embodiment, the register file 2258 is divided among each functional unit such that a dedicated portion of the register file 2258 is allocated for each functional unit. In at least one embodiment, the register file 2258 is divided among the different thread bundles (which may be referred to as wave fronts and/or waves) that the graphics multiprocessor 2234 is executing.
In at least one embodiment, the GPGPU cores 2262 may each include a Floating Point Unit (FPU) and/or an integer Arithmetic Logic Unit (ALU) for executing instructions of the graphics multiprocessor 2234. In at least one embodiment, the architectures of the GPGPU cores 2262 may be similar, or the architectures may differ. In at least one embodiment, a first portion of the GPGPU cores 2262 includes a single-precision FPU and an integer ALU, while a second portion of the GPGPU cores includes a double-precision FPU. In at least one embodiment, the FPUs may implement the IEEE 754-2008 standard for floating point arithmetic or enable variable-precision floating point arithmetic. In at least one embodiment, the graphics multiprocessor 2234 may additionally include one or more fixed-function or special-function units for performing specific functions, such as copy-rectangle or pixel-blending operations. In at least one embodiment, one or more of the GPGPU cores 2262 may also include fixed or special function logic.
In at least one embodiment, GPGPU core 2262 includes SIMD logic capable of executing a single instruction on multiple sets of data. In at least one embodiment, GPGPU core 2262 may physically execute SIMD4, SIMD8, and SIMD16 instructions and logically execute SIMD1, SIMD2, and SIMD32 instructions. In at least one embodiment, SIMD instructions for a GPGPU core may be generated by a shader compiler at compile time, or automatically when executing programs written and compiled for Single Program Multiple Data (SPMD) or SIMT architectures. In at least one embodiment, multiple threads of a program configured for the SIMT execution model may be executed via a single SIMD instruction. For example, in at least one embodiment, eight SIMT threads performing the same or similar operations may be executed in parallel via a single SIMD8 logic unit.
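As a hedged illustration of SIMT threads whose lanes the hardware maps onto SIMD datapaths, the following CUDA C++ kernel reduces 32 values across one warp using warp shuffles; it assumes the block contains exactly 32 threads, and the kernel and variable names are illustrative.

```cuda
// Minimal sketch: 32 SIMT lanes of one warp cooperate via register-to-register shuffles.
#include <cuda_runtime.h>

__global__ void warp_sum(const float* in, float* out) {
    float v = in[threadIdx.x];                        // one value per lane (assumes blockDim.x == 32)
    for (int offset = 16; offset > 0; offset >>= 1)
        v += __shfl_down_sync(0xffffffff, v, offset); // exchange partial sums within the warp
    if (threadIdx.x == 0) *out = v;                   // lane 0 holds the warp-wide sum
}
```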
In at least one embodiment, memory and cache interconnect 2268 is an interconnection network that connects each functional unit of graphics multiprocessor 2234 to register file 2258 and shared memory 2270. In at least one embodiment, memory and cache interconnect 2268 is a crossbar interconnect that allows load/store unit 2266 to implement load and store operations between shared memory 2270 and register file 2258. In at least one embodiment, the register file 2258 may operate at the same frequency as the GPGPU core 2262, such that the latency of data transfer between the GPGPU core 2262 and the register file 2258 is very low. In at least one embodiment, shared memory 2270 may be used to enable communication between threads executing on functional units within the graphics multiprocessor 2234. In at least one embodiment, cache memory 2272 may be used, for example, as a data cache for caching texture data communicated between functional units and texture unit 2236. In at least one embodiment, shared memory 2270 may also be used as a program managed cache. In at least one embodiment, threads executing on GPGPU core 2262 may also programmatically store data in shared memory in addition to automatically cached data stored in cache memory 2272.
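The use of shared memory both for inter-thread communication and as a program-managed cache can be illustrated with a short CUDA C++ sketch (illustrative only; buffer sizes are assumptions): threads of a block stage data in shared memory, synchronize, and then read values written by other threads.

    #include <cstdio>

    // Threads of a block stage data in programmer-managed shared memory,
    // synchronize, and then read each other's values, which is the kind of
    // inter-thread communication shared memory 2270 is described as serving.
    __global__ void block_average(const float* in, float* out, int n) {
        extern __shared__ float smem[];
        int t = threadIdx.x;
        smem[t] = (t < n) ? in[t] : 0.0f;   // each thread writes its slot
        __syncthreads();                     // make writes visible block-wide

        // Tree reduction in shared memory: threads read values written by others.
        for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
            if (t < stride) smem[t] += smem[t + stride];
            __syncthreads();
        }
        if (t == 0) *out = smem[0] / n;
    }

    int main() {
        const int n = 256;
        float h[n]; for (int i = 0; i < n; ++i) h[i] = float(i);
        float *din, *dout, avg;
        cudaMalloc(&din, sizeof(h)); cudaMalloc(&dout, sizeof(float));
        cudaMemcpy(din, h, sizeof(h), cudaMemcpyHostToDevice);
        block_average<<<1, n, n * sizeof(float)>>>(din, dout, n);
        cudaMemcpy(&avg, dout, sizeof(float), cudaMemcpyDeviceToHost);
        printf("average = %f\n", avg);  // expect 127.5
        cudaFree(din); cudaFree(dout);
        return 0;
    }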
In at least one embodiment, a parallel processor or GPGPU as described herein is communicatively coupled to a host/processor core to accelerate graphics operations, machine learning operations, pattern analysis operations, and various General Purpose GPU (GPGPU) functions. In at least one embodiment, the GPU may be communicatively coupled to the host processor/core via a bus or other interconnect (e.g., a high speed interconnect such as PCIe or NVLink). In at least one embodiment, a SoC includes a parallel processor or GPGPU as described herein, wherein the parallel processor or GPGPU executes on the SoC. In at least one embodiment, the GPU may be integrated with the core on a package or chip and communicatively coupled to the core through an internal processor bus/interconnect internal to the package or chip. In at least one embodiment, regardless of the manner in which the GPUs are connected, the processor cores may allocate work to the GPUs in the form of command/instruction sequences contained in the work descriptors. In at least one embodiment, the GPU then uses dedicated circuitry/logic to efficiently process these commands/instructions.
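From the software side, the allocation of work from the host to the GPU can be sketched with a minimal CUDA C++ host program (illustrative only; sizes and values are assumptions): commands issued into a stream form an in-order sequence that the driver and GPU scheduling circuitry dispatch, analogous to the command/instruction sequences in the work descriptors described above.

    #include <cstdio>

    __global__ void saxpy(int n, float a, const float* x, float* y) {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) y[i] = a * x[i] + y[i];
    }

    int main() {
        const int n = 1 << 20;
        float *x, *y;
        cudaMallocManaged(&x, n * sizeof(float));
        cudaMallocManaged(&y, n * sizeof(float));
        for (int i = 0; i < n; ++i) { x[i] = 1.0f; y[i] = 2.0f; }

        // Commands issued into a stream form an in-order sequence of work that
        // the driver and hardware scheduler dispatch to the GPU.
        cudaStream_t stream;
        cudaStreamCreate(&stream);
        saxpy<<<(n + 255) / 256, 256, 0, stream>>>(n, 3.0f, x, y);
        cudaStreamSynchronize(stream);   // host waits for the queued work

        printf("y[0] = %f\n", y[0]);     // expect 5.0
        cudaStreamDestroy(stream);
        cudaFree(x); cudaFree(y);
        return 0;
    }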
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, the logic 915 may be used in the graphics multiprocessor 2234 for performing inference or predictive operations based at least in part on weight parameters calculated using the neural network training operations, the neural network functions and/or architectures, or the neural network use cases described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 22D is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 22D is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 22D is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 22D is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
FIG. 23 illustrates a multi-GPU computing system 2300 according to at least one embodiment. In at least one embodiment, a multi-GPU computing system 2300 may include a processor 2302 coupled to a plurality of General Purpose Graphics Processing Units (GPGPUs) 2306A-D via a host interface switch 2304. In at least one embodiment, the host interface switch 2304 is a PCI Express switch device that couples the processor 2302 to a PCI Express bus through which the processor 2302 may communicate with the GPGPUs 2306A-D. In at least one embodiment, GPGPUs 2306A-D may be interconnected via a set of high speed P2P (point-to-point) GPU-to-GPU links 2316. In at least one embodiment, the GPU-to-GPU link 2316 is connected to each of the GPGPUs 2306A-D via a dedicated GPU link. In at least one embodiment, the P2P GPU link 2316 enables direct communication between each GPGPU 2306A-D without requiring communication through the host interface bus 2304 to which the processor 2302 is connected. In at least one embodiment, where GPU-to-GPU traffic is directed to the P2P GPU link 2316, the host interface bus 2304 remains available for system memory access or communication with other instances of the multi-GPU computing system 2300, e.g., via one or more network devices. While in at least one embodiment the GPGPUs 2306A-D are connected to the processor 2302 via a host interface switch 2304, in at least one embodiment the processor 2302 includes direct support for the P2P GPU link 2316 and may be connected directly to the GPGPUs 2306A-D. In at least one embodiment, GPGPUs 2306A-D are part of a SoC, such as part of integrated circuit 1800 in FIG. 18, where GPGPUs 2306A-D perform the operations described herein.
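The GPU-to-GPU communication path can be illustrated with a hypothetical two-GPU host sketch in CUDA C++ (illustrative only; device ordinals 0 and 1 and buffer sizes are assumptions, and this is not a description of the GPGPUs 2306A-D): peer access is enabled so that a buffer is copied GPU-to-GPU without staging through the host.

    #include <cstdio>

    // Hypothetical two-GPU host sketch: if the devices expose peer-to-peer
    // access (e.g., over NVLink or PCIe P2P), a buffer can be copied GPU-to-GPU
    // without staging through host memory.
    int main() {
        int ngpus = 0;
        cudaGetDeviceCount(&ngpus);
        if (ngpus < 2) { printf("need at least 2 GPUs\n"); return 0; }

        int canAccess01 = 0, canAccess10 = 0;
        cudaDeviceCanAccessPeer(&canAccess01, 0, 1);
        cudaDeviceCanAccessPeer(&canAccess10, 1, 0);

        const size_t bytes = 1 << 20;
        float *buf0, *buf1;
        cudaSetDevice(0); cudaMalloc(&buf0, bytes);
        cudaSetDevice(1); cudaMalloc(&buf1, bytes);

        if (canAccess01 && canAccess10) {
            cudaSetDevice(0);
            cudaDeviceEnablePeerAccess(1, 0);   // map GPU1 memory into GPU0
            cudaSetDevice(1);
            cudaDeviceEnablePeerAccess(0, 0);
        }
        // Direct device-to-device copy; uses the P2P path when it is enabled,
        // otherwise falls back to staging through the host.
        cudaMemcpyPeer(buf1, 1, buf0, 0, bytes);
        cudaDeviceSynchronize();

        printf("peer access 0->1: %d, 1->0: %d\n", canAccess01, canAccess10);
        cudaSetDevice(0); cudaFree(buf0);
        cudaSetDevice(1); cudaFree(buf1);
        return 0;
    }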
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, logic 915 may be used in multi-GPU computing system 2300 for performing inference or prediction operations based at least in part on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.
In at least one embodiment, multi-GPU computing system 2300 includes one or more graphics cores 2000.
In at least one embodiment, at least one component shown or described with respect to fig. 23 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 23 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 23 is used to cause one or more neural networks to select one or more variations in a feature of one or more text prompts based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 23 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
Fig. 24 is a block diagram of a graphics processor 2400 in accordance with at least one embodiment. In at least one embodiment, graphics processor 2400 includes a ring interconnect 2402, a pipeline front end 2404, a media engine 2437, and graphics cores 2480A-2480N. In at least one embodiment, the ring interconnect 2402 couples the graphics processor 2400 to other processing units, including other graphics processors or one or more general purpose processor cores. In at least one embodiment, graphics processor 2400 is one of many processors integrated within a multi-core processing system. In at least one embodiment, graphics processor 2400 includes a graphics core 2000.
In at least one embodiment, graphics processor 2400 receives multiple batches of commands via ring interconnect 2402. In at least one embodiment, the incoming commands are interpreted by a command stream transformer (streamer) 2403 in the pipeline front end 2404. In at least one embodiment, graphics processor 2400 includes scalable execution logic to perform 3D geometry processing and media processing via graphics cores 2480A-2480N. In at least one embodiment, for 3D geometry processing commands, command stream converter 2403 provides commands to geometry pipeline 2436. In at least one embodiment, for at least some media processing commands, the command stream transformer 2403 provides commands to a video front end 2434, which is coupled to a media engine 2437. In at least one embodiment, the media engine 2437 includes a Video Quality Engine (VQE) 2430 for video and image post-processing, and a multi-format encoding/decoding (MFX) 2433 engine for providing hardware-accelerated media data encoding and decoding. In at least one embodiment, the geometry pipeline 2436 and the media engine 2437 each generate execution threads for thread execution resources provided by at least one graphics core 2480.
In at least one embodiment, the graphics processor 2400 includes scalable thread execution resources featuring graphics cores 2480A-2480N (which may be modular and are sometimes referred to as core slices), each having a plurality of sub-cores 2450A-2450N, 2460A-2460N (sometimes referred to as core sub-slices). In at least one embodiment, graphics processor 2400 may have any number of graphics cores 2480A-2480N. In at least one embodiment, the graphics processor 2400 includes a graphics core 2480A having at least a first sub-core 2450A and a second sub-core 2460A. In at least one embodiment, graphics processor 2400 is a low power processor with a single sub-core (e.g., 2450A). In at least one embodiment, graphics processor 2400 includes a plurality of graphics cores 2480A-2480N, each including a set of first sub-cores 2450A-2450N and a set of second sub-cores 2460A-2460N. In at least one embodiment, each of the first sub-cores 2450A-2450N includes at least a first set of execution units 2452A-2452N and media/texture samplers 2454A-2454N. In at least one embodiment, each of the second sub-cores 2460A-2460N includes at least a second set of execution units 2462A-2462N and samplers 2464A-2464N. In at least one embodiment, each sub-core 2450A-2450N, 2460A-2460N shares a set of shared resources 2470A-2470N. In at least one embodiment, the shared resources include shared cache memory and pixel operation logic. In at least one embodiment, graphics processor 2400 includes a load/store unit in pipeline front end 2404.
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, logic 915 may be used in graphics processor 2400 for performing inference or prediction operations based at least in part on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 24 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 24 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 24 is used to cause one or more neural networks to select one or more variations in a feature of one or more text prompts based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 24 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
Fig. 25 is a block diagram illustrating a microarchitecture for a processor 2500, which processor 2500 may include logic circuitry to execute instructions, in accordance with at least one embodiment. In at least one embodiment, the processor 2500 may execute instructions, including x86 instructions, ARM instructions, application specific instructions for an Application Specific Integrated Circuit (ASIC), and the like. In at least one embodiment, processor 2500 may include registers for storing packed data, such as 64-bit wide MMX™ registers in microprocessors enabled with MMX technology from Intel Corporation of Santa Clara, California. In at least one embodiment, MMX registers available in both integer and floating point form may operate with packed data elements accompanying single instruction multiple data ("SIMD") and streaming SIMD extension ("SSE") instructions. In at least one embodiment, 128-bit wide XMM registers relating to SSE2, SSE3, SSE4, AVX, or beyond (commonly referred to as "SSEx") technology may hold such packed data operands. In at least one embodiment, the processor 2500 may execute instructions that accelerate machine learning or deep learning algorithms, training, or reasoning.
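The packed data operands described above can be illustrated with a minimal host-side C++ sketch (illustrative only; it assumes an x86 CPU with SSE support and is not a description of processor 2500): four single-precision values held in one 128-bit XMM register are added with a single SIMD instruction.

    #include <cstdio>
    #include <xmmintrin.h>   // SSE intrinsics (128-bit XMM registers)

    // Four single-precision floats packed into one 128-bit XMM register are
    // added with a single SIMD instruction (addps), illustrating packed data
    // operands of the kind the text describes.
    int main() {
        alignas(16) float a[4] = {1.0f, 2.0f, 3.0f, 4.0f};
        alignas(16) float b[4] = {10.0f, 20.0f, 30.0f, 40.0f};
        alignas(16) float c[4];

        __m128 va = _mm_load_ps(a);        // load 4 floats into an XMM register
        __m128 vb = _mm_load_ps(b);
        __m128 vc = _mm_add_ps(va, vb);    // one instruction, four additions
        _mm_store_ps(c, vc);

        printf("%f %f %f %f\n", c[0], c[1], c[2], c[3]);  // 11 22 33 44
        return 0;
    }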
In at least one embodiment, processor 2500 includes an in-order front end ("front end") 2501 for fetching instructions to be executed and preparing the instructions for later use in a processor pipeline. In at least one embodiment, the front end 2501 can comprise several units. In at least one embodiment, the instruction prefetch 2526 fetches instructions from memory and feeds the instructions to the instruction decoder 2528, which in turn decodes or interprets the instructions. For example, in at least one embodiment, the instruction decoder 2528 decodes the received instructions into one or more operations of so-called "micro-operations" or "micro-instructions" (also referred to as "micro ops" or "uops" or "μ -ops") that are machine executable. In at least one embodiment, the instruction decoder 2528 parses the instruction into an opcode and corresponding data and control fields, which may be used by the microarchitecture to perform operations in accordance with at least one embodiment. In at least one embodiment, trace cache 2530 may assemble decoded micro-operations into a program ordered sequence or trace in micro-operation queue 2534 for execution. In at least one embodiment, when the trace cache 2530 encounters a complex instruction, the microcode ROM 2532 provides the micro-operations required to complete the operation.
In at least one embodiment, some instructions may be converted to single micro-operations, while other instructions require several micro-operations to complete the entire operation. In at least one embodiment, if more than four micro-operations are required to complete an instruction, the instruction decoder 2528 may access the microcode ROM 2532 to execute the instruction. In at least one embodiment, instructions may be decoded into a small number of micro-operations for processing at the instruction decoder 2528. In at least one embodiment, if multiple micro-operations are required to accomplish this, the instructions may be stored in the micro-code ROM 2532. In at least one embodiment, the trace cache 2530 references an entry point programmable logic array ("PLA") to determine a correct microinstruction pointer for reading a microcode sequence from the microcode ROM 2532 to complete one or more instructions according to at least one embodiment. In at least one embodiment, after the microcode ROM 2532 has completed serializing the micro-operations of the instructions, the front end 2501 of the machine may resume fetching the micro-operations from the trace cache 2530.
In at least one embodiment, an out-of-order execution engine ("out-of-order engine") 2503 may prepare instructions for execution. In at least one embodiment, the out-of-order execution logic has multiple buffers to smooth and reorder the instruction stream to optimize performance as the instruction stream is pipelined down and scheduled for execution. In at least one embodiment, the out-of-order execution engine 2503 includes, but is not limited to, an allocator/register renamer 2540, a memory micro-operation queue 2542, an integer/floating-point micro-operation queue 2544, a memory scheduler 2546, a fast scheduler 2502, a slow/general floating-point scheduler ("slow/general FP scheduler") 2504, and a simple floating-point scheduler ("simple FP scheduler") 2506. In at least one embodiment, the fast scheduler 2502, the slow/general floating point scheduler 2504, and the simple floating point scheduler 2506 are also collectively referred to herein as "micro-operation schedulers 2502, 2504, 2506". In at least one embodiment, the allocator/register renamer 2540 allocates the machine buffers and resources required for each micro-operation to execute. In at least one embodiment, the allocator/register renamer 2540 renames logical registers to entries in register files. In at least one embodiment, the allocator/register renamer 2540 also allocates an entry for each micro-operation in one of two micro-operation queues, ahead of the memory scheduler 2546 and the micro-operation schedulers 2502, 2504, 2506: the memory micro-operation queue 2542 for memory operations and the integer/floating point micro-operation queue 2544 for non-memory operations. In at least one embodiment, the micro-operation schedulers 2502, 2504, 2506 determine when micro-operations are ready to execute based on the readiness of their dependent input register operand sources and the availability of execution resources required for the micro-operations to complete their operations. In at least one embodiment, the fast scheduler 2502 may schedule on each half of the master clock cycle, while the slow/general floating point scheduler 2504 and the simple floating point scheduler 2506 may schedule once per master processor clock cycle. In at least one embodiment, the micro-operation schedulers 2502, 2504, 2506 arbitrate for dispatch ports to schedule micro-operations for execution.
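The readiness check performed by the micro-operation schedulers can be illustrated with a toy software analogy in C++ (a sketch only, not the hardware of Fig. 25; register numbers and the one-issue-per-cycle limit are assumptions): a micro-operation issues only once both of its source registers are marked ready.

    #include <cstdio>
    #include <vector>

    // Toy software analogy of readiness-based micro-op scheduling: a micro-op
    // issues only once both of its source "registers" are marked ready, which
    // mirrors the condition the schedulers in the text check each cycle.
    struct MicroOp { int dst, src1, src2; bool done = false; };

    int main() {
        std::vector<bool> ready(8, false);
        ready[0] = ready[1] = true;                 // architectural inputs

        // r2 = r0 op r1 ; r3 = r2 op r1 ; r4 = r3 op r0  (a dependency chain)
        std::vector<MicroOp> uops = {{2, 0, 1}, {3, 2, 1}, {4, 3, 0}};

        for (int cycle = 0; cycle < 8; ++cycle) {
            for (auto& u : uops) {
                if (!u.done && ready[u.src1] && ready[u.src2]) {
                    printf("cycle %d: issue r%d <- r%d, r%d\n",
                           cycle, u.dst, u.src1, u.src2);
                    ready[u.dst] = true;            // result forwarded/ready
                    u.done = true;
                    break;                          // one issue per port per cycle
                }
            }
        }
        return 0;
    }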
In at least one embodiment, execution block 2511 includes, but is not limited to, an integer register file/bypass network 2508, a floating point register file/bypass network ("FP register file/bypass network") 2510, address generation units ("AGUs") 2512 and 2514, a fast Arithmetic Logic Unit (ALU) ("fast ALU") 2516 and 2518, a slow arithmetic logic unit ("slow ALU") 2520, a floating point ALU ("FP") 2522, and a floating point move unit ("FP move") 2524. In at least one embodiment, the integer register file/bypass network 2508 and floating point register file/bypass network 2510 are also referred to herein as "register files 2508, 2510". In at least one embodiment, AGUs 2512 and 2514, fast ALUs 2516 and 2518, slow ALU 2520, floating point ALU 2522 and floating point move unit 2524 are also referred to herein as "execution units 2512, 2514, 2516, 2518, 2520, 2522 and 2524". In at least one embodiment, execution block 2511 may include, but is not limited to, any number (including zero) and type of register files, bypass networks, address generation units, and execution units in any combination.
In at least one embodiment, the register networks 2508, 2510 may be disposed between the micro-operation schedulers 2502, 2504, 2506 and the execution units 2512, 2514, 2516, 2518, 2520, 2522 and 2524. In at least one embodiment, the integer register file/bypass network 2508 performs integer operations. In at least one embodiment, the floating point register file/bypass network 2510 performs floating point operations. In at least one embodiment, each of the register networks 2508, 2510 may include, but is not limited to, a bypass network that may bypass or forward just-completed results that have not yet been written to the register file to new dependent micro-operations. In at least one embodiment, the register networks 2508, 2510 can communicate data with each other. In at least one embodiment, the integer register file/bypass network 2508 may include, but is not limited to, two separate register files, one for low order 32-bit data and one for high order 32-bit data. In at least one embodiment, the floating point register file/bypass network 2510 may include, but is not limited to, 128-bit wide entries, as floating point instructions typically have operands from 64 to 128 bits in width.
In at least one embodiment, the execution units 2512, 2514, 2516, 2518, 2520, 2522, 2524 may execute instructions. In at least one embodiment, the register networks 2508, 2510 store integer and floating point data operand values that the microinstructions need to execute. In at least one embodiment, the processor 2500 may include, but is not limited to, any number of execution units 2512, 2514, 2516, 2518, 2520, 2522, 2524, and combinations thereof. In at least one embodiment, floating point ALU 2522 and floating point move unit 2524 may perform floating point, MMX, SIMD, AVX, and SSE or other operations, including specialized machine learning instructions. In at least one embodiment, the floating point ALU 2522 may include, but is not limited to, a 64-bit by 64-bit floating point divider for performing division, square root, and remainder micro-operations. In at least one embodiment, instructions involving floating point values may be processed with floating point hardware. In at least one embodiment, ALU operations may be passed to the fast ALUs 2516, 2518. In at least one embodiment, the fast ALUs 2516, 2518 may perform fast operations with an effective latency of half a clock cycle. In at least one embodiment, most complex integer operations go to the slow ALU 2520, as the slow ALU 2520 may include, but is not limited to, integer execution hardware for long-latency operations, such as multiplies, shifts, flag logic, and branch processing. In at least one embodiment, memory load/store operations may be performed by the AGUs 2512, 2514. In at least one embodiment, the fast ALU 2516, the fast ALU 2518, and the slow ALU 2520 may perform integer operations on 64-bit data operands. In at least one embodiment, the fast ALU 2516, the fast ALU 2518, and the slow ALU 2520 may be implemented to support a variety of data bit sizes, including 16, 32, 128, 256, etc. In at least one embodiment, the floating point ALU 2522 and floating point move unit 2524 may be implemented to support a range of operands having bits of various widths, such as 128-bit wide packed data operands in conjunction with SIMD and multimedia instructions.
In at least one embodiment, the micro-operation schedulers 2502, 2504, 2506 dispatch dependent operations before the parent load has completed execution. In at least one embodiment, processor 2500 may also include logic to handle memory misses, as micro-operations may be speculatively scheduled and executed in processor 2500. In at least one embodiment, if a data load in the data cache misses, there may be in-flight dependent operations in the pipeline that have left the scheduler with temporarily incorrect data. In at least one embodiment, a replay mechanism tracks and re-executes instructions that use incorrect data. In at least one embodiment, dependent operations may need to be replayed, while independent operations are allowed to complete. In at least one embodiment, the schedulers and replay mechanism of at least one embodiment of the processor may also be designed to capture instruction sequences for text string comparison operations.
In at least one embodiment, a "register" may refer to an on-board processor memory location that may be used as part of an instruction that identifies an operand. In at least one embodiment, the registers may be those that may be used externally to the processor (from a programmer's perspective). In at least one embodiment, the registers may not be limited to a particular type of circuit. Rather, in at least one embodiment, registers may store data, provide data, and perform the functions described herein. In at least one embodiment, the registers described herein may be implemented by circuitry within a processor using any number of different techniques, such as dedicated physical registers, dynamically allocated physical registers using register renaming, a combination of dedicated and dynamically allocated physical registers, and so forth. In at least one embodiment, the integer registers store 32-bit integer data. The register file of at least one embodiment also includes eight multimedia SIMD registers for packed data.
In at least one embodiment, the processor 2500 or each core of the processor 2500 includes one or more prefetchers, one or more fetchers, one or more pre-decoders, one or more decoders for decoding data (e.g., instructions), one or more instruction queues for processing instructions (e.g., instructions corresponding to operations or API calls), one or more micro-operation (μop) caches for storing micro-operations, one or more micro-operation (μop) queues, an in-order execution engine, one or more load buffers, one or more store buffers, one or more reorder buffers, one or more fill buffers, an out-of-order execution engine, one or more ports, one or more shift and/or shifter units, one or more fused multiply-accumulate (FMA) units, one or more load and store units for performing loads and stores of data (e.g., data corresponding to instructions for performing operations or API calls), one or more matrix multiply-accumulate (MMA) units, and/or one or more shuffle units, to perform any of the functions further described herein. In at least one embodiment, processor 2500 may access, use, implement, or execute instructions corresponding to calling APIs.
In at least one embodiment, the processor 2500 includes one or more Ultra Path Interconnects (UPIs), which are, for example, point-to-point processor interconnects; one or more PCIe interfaces; one or more accelerators for accelerating computations or operations; and/or one or more memory controllers. In at least one embodiment, processor 2500 includes a shared Last Level Cache (LLC) coupled to one or more memory controllers, which can enable shared memory access across processor cores.
In at least one embodiment, the processor 2500 or cores of the processor 2500 have a grid structure in which processor cores, on-chip caches, memory controllers, and I/O controllers are organized into rows and columns with wires and switches connecting them at each intersection to allow turns. In at least one embodiment, the processor 2500 has one or more high bandwidth memories (HBMs) for storing or caching data in, for example, double data rate 5 synchronous dynamic random access memory (DDR5 SDRAM). In at least one embodiment, one or more components of processor 2500 are interconnected using a Compute Express Link (CXL) interconnect. In at least one embodiment, the memory controller uses a "least recently used" (LRU) method to determine content stored in the cache. In at least one embodiment, processor 2500 includes one or more PCIe interfaces (e.g., PCIe 5.0).
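The "least recently used" policy mentioned above can be sketched in a few lines of C++ (a software illustration only, not the memory controller's implementation; the two-entry capacity and tag values are assumptions): on each access an entry moves to the front, and when the set is full the entry at the back is evicted.

    #include <cstdio>
    #include <list>
    #include <unordered_map>

    // Minimal software sketch of a "least recently used" replacement policy:
    // on each access a line moves to the front; when the set is full, the line
    // at the back (least recently used) is evicted.
    class LruSet {
        size_t capacity_;
        std::list<int> order_;                          // front = most recent
        std::unordered_map<int, std::list<int>::iterator> where_;
    public:
        explicit LruSet(size_t capacity) : capacity_(capacity) {}
        void access(int tag) {
            auto it = where_.find(tag);
            if (it != where_.end()) order_.erase(it->second);   // hit: refresh
            else if (order_.size() == capacity_) {              // miss: evict LRU
                printf("evict tag %d\n", order_.back());
                where_.erase(order_.back());
                order_.pop_back();
            }
            order_.push_front(tag);
            where_[tag] = order_.begin();
        }
    };

    int main() {
        LruSet set(2);
        for (int tag : {1, 2, 1, 3, 2}) set.access(tag);  // evicts 2, then 1
        return 0;
    }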
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, some or all of the logic 915 may be incorporated into the execution block 2511 and other memory or registers, shown or not. For example, in at least one embodiment, the training and/or reasoning techniques described herein may use one or more ALUs shown in execution block 2511. Further, the weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure the ALU executing block 2511 to perform one or more of the machine learning algorithms, neural network architectures, use cases, or training techniques described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 25 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 25 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 25 is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 25 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
Fig. 26 illustrates a deep learning application processor 2600 in accordance with at least one embodiment. In at least one embodiment, the deep learning application processor 2600 uses instructions that, if executed by the deep learning application processor 2600, cause the deep learning application processor 2600 to perform some or all of the processes and techniques described throughout this disclosure. In at least one embodiment, the deep learning application processor 2600 is an Application Specific Integrated Circuit (ASIC). In at least one embodiment, the application processor 2600 performs matrix multiplication operations that are either "hard-wired" into hardware, performed as a result of executing one or more instructions, or both. In at least one embodiment, the deep learning application processor 2600 includes, but is not limited to, processing clusters 2610(1)-2610(12), inter-chip links ("ICL") 2620(1)-2620(12), inter-chip controllers ("ICC") 2630(1)-2630(2), second generation high bandwidth memories ("HBM2") 2640(1)-2640(4), memory controllers ("Mem Ctrlr") 2642(1)-2642(4), high bandwidth memory physical layers ("HBM PHY") 2644(1)-2644(4), a management controller central processing unit ("management controller CPU") 2650, a serial peripheral interface, inter-integrated circuit, and general purpose input/output block ("SPI, I2C, GPIO") 2660, a peripheral component interconnect Express controller and direct memory access block ("PCIe controller and DMA") 2670, and a sixteen-lane peripheral component interconnect Express port ("PCI Express x 16") 2680.
In at least one embodiment, the processing cluster 2610 may perform deep learning operations, including inference or predictive operations based on weight parameters calculated using one or more training techniques, including those described herein. In at least one embodiment, each processing cluster 2610 may include, but is not limited to, any number and type of processors. In at least one embodiment, the deep learning application processor 2600 can include any number and type of processing clusters 2610. In at least one embodiment, the inter-chip link 2620 is bi-directional. In at least one embodiment, the inter-chip link 2620 and the inter-chip controller 2630 enable multiple deep learning application processors 2600 to exchange information, including activation information resulting from execution of one or more machine learning algorithms embodied in one or more neural networks. In at least one embodiment, the deep learning application processor 2600 may include any number (including zero) and type of ICLs 2620 and ICCs 2630.
In at least one embodiment, HBM2 2640 provides a total of 32GB of memory. In at least one embodiment, HBM2 2640(i) is associated with both memory controller 2642(i) and HBM PHY 2644(i), where "i" is any integer. In at least one embodiment, any number of HBM2 2640 may provide any type and amount of high bandwidth memory and may be associated with any number (including zero) and type of memory controllers 2642 and HBM PHYs 2644. In at least one embodiment, any number and type of blocks implementing any number and type of communication standards may replace SPI, I2C, GPIO 2660, PCIe controller and DMA 2670, and/or PCIe 2680 in any technically feasible manner.
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, the deep learning application processor is used to train a machine learning model (such as a neural network) to predict or infer information provided to the deep learning application processor 2600. In at least one embodiment, the deep learning application processor 2600 is used to infer or predict information based on a trained machine learning model (e.g., neural network) that has been trained by another processor or system or by the deep learning application processor 2600. In at least one embodiment, the processor 2600 can be used to perform one or more neural network use cases described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 26 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 26 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 26 is used to cause one or more neural networks to select one or more variations in a feature of one or more text prompts based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 26 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
Fig. 27 is a block diagram of a neuromorphic processor 2700 in accordance with at least one embodiment. In at least one embodiment, the neuromorphic processor 2700 can receive one or more inputs from a source external to the neuromorphic processor 2700. In at least one embodiment, these inputs can be communicated to one or more neurons 2702 within the neuromorphic processor 2700. In at least one embodiment, the neuron 2702 and its components may be implemented using circuitry or logic comprising one or more Arithmetic Logic Units (ALUs). In at least one embodiment, the neuromorphic processor 2700 may include, but is not limited to, an instance of thousands or millions of neurons 2702, but any suitable number of neurons 2702 may be used. In at least one embodiment, each instance of a neuron 2702 may include a neuron input 2704 and a neuron output 2706. In at least one embodiment, the neuron 2702 can generate an output that can be communicated to inputs of other instances of the neuron 2702. For example, in at least one embodiment, the neuron input 2704 and the neuron output 2706 may be interconnected via a synapse 2708.
In at least one embodiment, the neurons 2702 and synapses 2708 can be interconnected such that the neuromorphic processor 2700 operates to process or analyze information received by the neuromorphic processor 2700. In at least one embodiment, the neuron 2702 may send an output pulse (or "fire" or "spike") when an input received through the neuron input 2704 exceeds a threshold. In at least one embodiment, the neuron 2702 may sum or integrate signals received at the neuron input 2704. For example, in at least one embodiment, the neuron 2702 may be implemented as a leaky integrate-and-fire neuron, wherein if the summation (referred to as the "membrane potential") exceeds a threshold, the neuron 2702 may generate an output (or "fire") using a transfer function such as a sigmoid or threshold function. In at least one embodiment, a leaky integrate-and-fire neuron may sum the signals received at neuron input 2704 into the membrane potential, and a decay factor (or leak) may also be applied to reduce the membrane potential. In at least one embodiment, a leaky integrate-and-fire neuron may fire if multiple input signals are received at neuron input 2704 fast enough to exceed the threshold (i.e., before the membrane potential decays too far to fire). In at least one embodiment, the neuron 2702 may be implemented using circuitry or logic that receives inputs, integrates the inputs into a membrane potential, and decays the membrane potential. In at least one embodiment, the inputs may be averaged, or any other suitable transfer function may be used. Further, in at least one embodiment, the neuron 2702 may include, but is not limited to, a comparator circuit or logic that produces an output spike at the neuron output 2706 when the result of applying a transfer function to the neuron input 2704 exceeds a threshold. In at least one embodiment, once the neuron 2702 fires, it can disregard previously received input information by, for example, resetting the membrane potential to 0 or another suitable default value. In at least one embodiment, once the membrane potential is reset to 0, the neuron 2702 may resume normal operation after a suitable period of time (or refractory period).
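The leaky integrate-and-fire behavior described above can be sketched in a few lines of C++ (a software model only, not the circuitry of neuromorphic processor 2700; the leak factor, threshold, and input values are illustrative assumptions): inputs are summed into a membrane potential, a decay factor is applied each step, and the neuron fires and resets when the potential crosses the threshold.

    #include <cstdio>

    // Sketch of leaky integrate-and-fire behavior: integrate inputs into the
    // membrane potential, apply a decay (leak) factor each step, and fire and
    // reset when the potential crosses a threshold.
    int main() {
        const float leak = 0.9f;        // decay factor per time step
        const float threshold = 1.0f;   // firing threshold
        float membrane = 0.0f;

        const float inputs[8] = {0.3f, 0.3f, 0.0f, 0.6f, 0.4f, 0.0f, 0.0f, 0.9f};
        for (int t = 0; t < 8; ++t) {
            membrane = membrane * leak + inputs[t];   // integrate with leak
            if (membrane >= threshold) {
                printf("t=%d: fire (potential %.2f)\n", t, membrane);
                membrane = 0.0f;                      // reset after firing
            } else {
                printf("t=%d: potential %.2f\n", t, membrane);
            }
        }
        return 0;
    }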
In at least one embodiment, neurons 2702 can be interconnected by synapses 2708. In at least one embodiment, the synapse 2708 may operate to send a signal from the output of the first neuron 2702 to the input of the second neuron 2702. In at least one embodiment, the neuron 2702 may communicate information on more than one instance of the synapse 2708. In at least one embodiment, one or more instances of the neuron output 2706 may be connected to an instance of the neuron input 2704 in the same neuron 2702 via an instance of the synapse 2708. In at least one embodiment, an instance of neuron 2702 that produces an output to be transmitted on an instance of synapse 2708 may be referred to as a "pre-synaptic neuron" relative to the instance of synapse 2708. In at least one embodiment, an instance of neuron 2702 that receives input transmitted through an instance of synapse 2708 may be referred to as a "post-synaptic neuron" relative to the instance of synapse 2708. In at least one embodiment, a single instance of neuron 2702 can be both a "pre-synaptic neuron" and a "post-synaptic neuron" because an instance of neuron 2702 can receive input from one or more instances of synapse 2708 and can also transmit output through one or more instances of synapse 2708, relative to various instances of synapse 2708.
In at least one embodiment, neurons 2702 may be organized into one or more layers. In at least one embodiment, each instance of a neuron 2702 can have one neuron output 2706, which neuron output 2706 can fan out to one or more neuron inputs 2704 through one or more synapses 2708. In at least one embodiment, the neuron outputs 2706 of the neurons 2702 in the first layer 2710 can be connected to the neuron inputs 2704 of the neurons 2702 in the second layer 2712. In at least one embodiment, layer 2710 may be referred to as a "feed forward layer." In at least one embodiment, each instance of a neuron 2702 in an instance of the first layer 2710 can fan out to each instance of a neuron 2702 in the second layer 2712. In at least one embodiment, the first layer 2710 may be referred to as a "fully connected feed forward layer." In at least one embodiment, each instance of a neuron 2702 in an instance of the second layer 2712 can fan out to fewer than all instances of a neuron 2702 in a third layer 2714. In at least one embodiment, the second layer 2712 may be referred to as a "sparsely connected feed forward layer." In at least one embodiment, the neurons 2702 in the second layer 2712 can fan out to neurons 2702 in multiple other layers, including to neurons 2702 also in the second layer 2712. In at least one embodiment, the second layer 2712 may be referred to as a "recurrent layer." In at least one embodiment, the neuromorphic processor 2700 may include, but is not limited to, any suitable combination of recurrent layers and feed-forward layers, including, but not limited to, sparsely connected feed-forward layers and fully connected feed-forward layers.
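The fully connected feed-forward fan-out described for the first layer 2710 can be illustrated with a minimal CUDA C++ sketch (illustrative only; the layer sizes, weights, and thresholding transfer function are assumptions, not the neuromorphic processor's implementation): each output neuron accumulates a weighted contribution from every input neuron.

    #include <cstdio>

    // Fully connected feed-forward layer: every output neuron j sums a
    // weighted contribution from each of the n_in input neurons (full fan-in),
    // then applies a simple threshold-like transfer function.
    __global__ void fully_connected(const float* x, const float* W,
                                    float* y, int n_in, int n_out) {
        int j = blockIdx.x * blockDim.x + threadIdx.x;
        if (j >= n_out) return;
        float acc = 0.0f;
        for (int i = 0; i < n_in; ++i)
            acc += W[j * n_in + i] * x[i];       // fan-in from every input neuron
        y[j] = acc > 0.0f ? acc : 0.0f;          // simple transfer function
    }

    int main() {
        const int n_in = 4, n_out = 3;
        float hx[n_in] = {1, 2, 3, 4};
        float hW[n_out * n_in] = {0.1f, 0.2f, 0.3f, 0.4f,
                                  -1.0f, 0.0f, 0.0f, 0.5f,
                                  0.5f, 0.5f, 0.5f, 0.5f};
        float *x, *W, *y;
        cudaMalloc(&x, sizeof(hx)); cudaMalloc(&W, sizeof(hW));
        cudaMalloc(&y, n_out * sizeof(float));
        cudaMemcpy(x, hx, sizeof(hx), cudaMemcpyHostToDevice);
        cudaMemcpy(W, hW, sizeof(hW), cudaMemcpyHostToDevice);
        fully_connected<<<1, 32>>>(x, W, y, n_in, n_out);
        float hy[n_out];
        cudaMemcpy(hy, y, sizeof(hy), cudaMemcpyDeviceToHost);
        printf("%f %f %f\n", hy[0], hy[1], hy[2]);  // expect 3.0 1.0 5.0
        cudaFree(x); cudaFree(W); cudaFree(y);
        return 0;
    }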
In at least one embodiment, neuromorphic processor 2700 may include, but is not limited to, a reconfigurable interconnect architecture or a dedicated hardwired interconnect for connecting synapse 2708 to neuron 2702. In at least one embodiment, the neuromorphic processor 2700 may include, but is not limited to, circuitry or logic that allows synapses to be assigned to different neurons 2702 as needed based on neural network topology and neuron fan-in/fan-out. For example, in at least one embodiment, synapse 2708 may be connected to neuron 2702 using an interconnect structure (such as a network on chip) or with a dedicated connection. In at least one embodiment, the synaptic interconnections and their components may be implemented using circuitry or logic.
In at least one embodiment, at least one component shown or described with respect to fig. 27 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 27 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 27 is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 27 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
FIG. 28 is a processing system in accordance with at least one embodiment. In at least one embodiment, system 2800 includes one or more processors 2802 and one or more graphics processors 2808, and may be a single processor desktop system, a multiprocessor workstation system, or a server system with a large number of processors 2802 or processor cores 2807. In at least one embodiment, system 2800 is a processing platform contained within a system on a chip (SoC) integrated circuit for use in a mobile, handheld, or embedded device. In at least one embodiment, one or more graphics processors 2808 include one or more graphics cores 2000.
In at least one embodiment, system 2800 can be included or incorporated in a server-based gaming platform, a game console (including a game and media console), a mobile gaming console, a handheld gaming console, or an online gaming console. In at least one embodiment, system 2800 is a mobile phone, smart phone, tablet computing device, or mobile internet device. In at least one embodiment, the processing system 2800 can also include, be coupled with, or be integrated within a wearable device, such as a smart watch wearable device, a smart glasses device, an augmented reality device, or a virtual reality device. In at least one embodiment, processing system 2800 is a television or set-top box device having one or more processors 2802 and a graphical interface generated by one or more graphics processors 2808.
In at least one embodiment, one or more processors 2802 each include one or more processor cores 2807 for processing instructions that, when executed, perform operations for system and user software. In at least one embodiment, each of the one or more processor cores 2807 is configured to process a particular sequence of instructions 2809. In at least one embodiment, the instruction sequence 2809 may facilitate Complex Instruction Set Computing (CISC), reduced Instruction Set Computing (RISC), or computing via Very Long Instruction Words (VLIW). In at least one embodiment, the processor cores 2807 may each process a different instruction sequence 2809, which may include instructions that help simulate other instruction sequences. In at least one embodiment, the processor core 2807 may also include other processing devices, such as a Digital Signal Processor (DSP).
In at least one embodiment, processor 2802 includes a cache memory 2804. In at least one embodiment, processor 2802 may have a single internal cache or multiple levels of internal caches. In at least one embodiment, cache memory is shared among the various components of processor 2802. In at least one embodiment, processor 2802 also uses an external cache (e.g., a level three (L3) cache or Last Level Cache (LLC)) (not shown), which may be shared among processor cores 2807 using known cache coherency techniques. In at least one embodiment, additionally included in processor 2802 is a register file 2806 that may include different types of registers (e.g., integer registers, floating point registers, status registers, and instruction pointer registers) for storing different types of data. In at least one embodiment, register file 2806 may include general purpose registers or other registers.
In at least one embodiment, one or more processors 2802 are coupled with one or more interface buses 2810 to communicate communication signals, such as address, data, or control signals, between the processors 2802 and other components in the system 2800. In at least one embodiment, the interface bus 2810 may be a processor bus, such as a version of a Direct Media Interface (DMI) bus. In at least one embodiment, interface bus 2810 is not limited to a DMI bus and may include one or more peripheral component interconnect buses (e.g., PCI Express), memory buses, or other types of interface buses. In at least one embodiment, the one or more processors 2802 include an integrated memory controller 2816 and a platform controller hub 2830. In at least one embodiment, memory controller 2816 facilitates communication between the memory devices and other components of system 2800, while Platform Controller Hub (PCH) 2830 provides connectivity to I/O devices via a local I/O bus.
In at least one embodiment, memory device 2820 may be a Dynamic Random Access Memory (DRAM) device, a Static Random Access Memory (SRAM) device, a flash memory device, a phase change memory device, or some other memory device having suitable capabilities to function as a processor memory. In at least one embodiment, the memory device 2820 may operate as a system memory of the system 2800 for storing data 2822 and instructions 2821 for use when one or more processors 2802 execute applications or processes. In at least one embodiment, the memory controller 2816 is also coupled with an optional external graphics processor 2812, which may communicate with one or more graphics processors 2808 of the processors 2802 to perform graphics and media operations. In at least one embodiment, the display device 2811 can be connected to one or more processors 2802. In at least one embodiment, the display device 2811 can include one or more of internal display devices, such as in a mobile electronic device or a laptop device or an external display device attached via a display interface (e.g., display port (DisplayPort), etc.). In at least one embodiment, the display device 2811 may comprise a Head Mounted Display (HMD), such as a stereoscopic display device used in a Virtual Reality (VR) application or an Augmented Reality (AR) application.
In at least one embodiment, the platform controller hub 2830 enables peripheral devices to connect to the memory device 2820 and the processor 2802 via a high speed I/O bus. In at least one embodiment, the I/O peripherals include, but are not limited to, an audio controller 2846, a network controller 2834, a firmware interface 2828, a wireless transceiver 2826, a touch sensor 2825, a data storage 2824 (e.g., hard disk drive, flash memory, etc.). In at least one embodiment, data storage 2824 may be connected via a storage interface (e.g., SATA) or via a peripheral bus, such as a peripheral component interconnect bus (e.g., PCI, PCIe). In at least one embodiment, touch sensor 2825 may include a touch screen sensor, a pressure sensor, or a fingerprint sensor. In at least one embodiment, the wireless transceiver 2826 may be a Wi-Fi transceiver, a bluetooth transceiver, or a mobile network transceiver, such as a 3G, 4G, or Long Term Evolution (LTE) transceiver. In at least one embodiment, firmware interface 2828 enables communication with system firmware and may be, for example, a Unified Extensible Firmware Interface (UEFI). In at least one embodiment, network controller 2834 may implement network connections to wired networks. In at least one embodiment, a high performance network controller (not shown) is coupled to interface bus 2810. In at least one embodiment, the audio controller 2846 is a multi-channel high definition audio controller. In at least one embodiment, system 2800 includes an optional legacy I/O controller 2840 for coupling legacy (e.g., personal System 2 (PS/2)) devices to system 2800. In at least one embodiment, the platform controller hub 2830 may also be connected to one or more Universal Serial Bus (USB) controllers 2842, which connect input devices, such as a keyboard and mouse 2843 combination, a camera 2844, or other USB input devices.
In at least one embodiment, the memory controller 2816 and an instance of the platform controller hub 2830 may be integrated into a separate external graphics processor, such as external graphics processor 2812. In at least one embodiment, the platform controller hub 2830 and/or the memory controller 2816 may be external to the one or more processors 2802. For example, in at least one embodiment, the system 2800 may include an external memory controller 2816 and a platform controller hub 2830, which may be configured as a memory controller hub and a peripheral controller hub in a system chipset in communication with the one or more processors 2802.
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, some or all of logic 915 may be incorporated into graphics processor 2808. For example, in at least one embodiment, the training and/or reasoning techniques described herein may use one or more ALUs that are embodied in a 3D pipeline. Further, in at least one embodiment, the reasoning and/or training operations described herein may be accomplished using logic other than that shown in FIG. 9A or 9B. In at least one embodiment, the weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure the ALUs of graphics processor 2808 to perform one or more of the machine learning algorithms, neural network architectures, use cases, or training techniques described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 28 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 28 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 28 is used to cause one or more neural networks to select one or more variations in a feature of one or more text prompts based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 28 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
FIG. 29 is a block diagram of a processor 2900 having one or more processor cores 2902A-2902N, an integrated memory controller 2914, and an integrated graphics processor 2908 according to at least one embodiment. In at least one embodiment, the processor 2900 may include additional cores up to and including additional cores 2902N, represented by dashed boxes. In at least one embodiment, each processor core 2902A-2902N includes one or more internal cache units 2904A-2904N. In at least one embodiment, each processor core may also access one or more shared cache units 2906. In at least one embodiment, graphics processor 2908 includes one or more graphics cores 2000.
In at least one embodiment, internal cache units 2904A-2904N and shared cache unit 2906 represent a cache memory hierarchy within processor 2900. In at least one embodiment, cache memory units 2904A-2904N may include at least one level of instruction and data caches within each processor core and one or more levels of shared mid-level caches, such as level 2 (L2), level 3 (L3), level 4 (L4), or other levels of caches, where the highest level of cache preceding external memory is categorized as LLC. In at least one embodiment, the cache coherency logic maintains coherency between the various cache units 2906 and 2904A-2904N.
In at least one embodiment, the processor 2900 may also include a set of one or more bus controller units 2916 and a system agent core 2910. In at least one embodiment, bus controller unit 2916 manages a set of peripheral buses, such as one or more PCI or PCIe buses. In at least one embodiment, the system agent core 2910 provides management functionality for the various processor components. In at least one embodiment, the system agent core 2910 includes one or more integrated memory controllers 2914 for managing access to various external memory devices (not shown).
In at least one embodiment, one or more of the processor cores 2902A-2902N include support for simultaneous multithreading. In at least one embodiment, the system agent core 2910 includes components for coordinating and operating the cores 2902A-2902N during multi-threaded processing. In at least one embodiment, the system agent core 2910 may additionally include a Power Control Unit (PCU) including logic and components for adjusting one or more power states of the processor cores 2902A-2902N and the graphics processor 2908.
In at least one embodiment, the processor 2900 further includes a graphics processor 2908 for performing graphics processing operations. In at least one embodiment, graphics processor 2908 is coupled with shared cache unit 2906 and system agent core 2910, which includes one or more integrated memory controllers 2914. In at least one embodiment, the system agent core 2910 further includes a display controller 2911 for driving the graphics processor output to one or more coupled displays. In at least one embodiment, display controller 2911 may also be a stand-alone module coupled to graphics processor 2908 via at least one interconnect, or may be integrated within graphics processor 2908.
In at least one embodiment, ring-based interconnect unit 2912 is used to couple internal components of processor 2900. In at least one embodiment, alternative interconnect units may be used, such as point-to-point interconnects, switched interconnects, or other technologies. In at least one embodiment, graphics processor 2908 is coupled with ring interconnect 2912 via I/O link 2913.
In at least one embodiment, I/O link 2913 represents at least one of a variety of I/O interconnects, including on-package I/O interconnects that facilitate communication between various processor components and a high performance embedded memory module 2918 (such as an eDRAM module). In at least one embodiment, each of the processor cores 2902A-2902N and graphics processor 2908 uses embedded memory module 2918 as a shared last level cache.
In at least one embodiment, processor cores 2902A-2902N are homogenous cores that execute a common instruction set architecture. In at least one embodiment, processor cores 2902A-2902N are heterogeneous in terms of Instruction Set Architecture (ISA), with one or more processor cores 2902A-2902N executing a common instruction set, and one or more other cores of processor cores 2902A-2902N executing a subset of the common instruction set or a different instruction set. In at least one embodiment, the processor cores 2902A-2902N are heterogeneous in terms of microarchitecture, in that one or more cores with relatively higher power consumption are coupled with one or more power-efficient cores with lower power consumption. In at least one embodiment, the processor 2900 may be implemented on one or more chips or as a SoC integrated circuit.
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, some or all of logic 915 may be incorporated into graphics processor 2908. For example, in at least one embodiment, the training and/or inference techniques described herein may use one or more ALUs that are embodied in the 3D pipeline, graphics cores 2902A-2902N, shared function logic, or other logic in FIG. 29. Further, in at least one embodiment, the inference and/or training operations described herein may be accomplished using logic other than that shown in FIG. 9A or 9B. In at least one embodiment, the weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure the ALUs of the processor 2900 to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein.
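As a non-limiting illustration of weight parameters held in off-chip memory driving ALU computation (and not a description of logic 915 or graphics processor 2908), the following CUDA C++ sketch evaluates a small fully connected layer whose weights reside in device memory; dimensions, values, and names are hypothetical.

```cpp
// Hypothetical CUDA sketch: weights resident in device (off-chip) memory
// drive ALU computation of a small fully connected layer y = ReLU(W x + b).
#include <cuda_runtime.h>
#include <cstdio>

__global__ void dense_relu(const float* W, const float* x, const float* b,
                           float* y, int in_dim, int out_dim) {
    int row = blockIdx.x * blockDim.x + threadIdx.x;   // one output neuron per thread
    if (row >= out_dim) return;
    float acc = b[row];
    for (int k = 0; k < in_dim; ++k)
        acc += W[row * in_dim + k] * x[k];             // weights read from global memory
    y[row] = acc > 0.f ? acc : 0.f;                    // ReLU activation
}

int main() {
    const int in_dim = 256, out_dim = 128;
    float *W, *x, *b, *y;
    cudaMallocManaged(&W, sizeof(float) * in_dim * out_dim);
    cudaMallocManaged(&x, sizeof(float) * in_dim);
    cudaMallocManaged(&b, sizeof(float) * out_dim);
    cudaMallocManaged(&y, sizeof(float) * out_dim);
    for (int i = 0; i < in_dim * out_dim; ++i) W[i] = 0.01f;
    for (int i = 0; i < in_dim; ++i) x[i] = 1.0f;
    for (int i = 0; i < out_dim; ++i) b[i] = 0.5f;
    dense_relu<<<(out_dim + 127) / 128, 128>>>(W, x, b, y, in_dim, out_dim);
    cudaDeviceSynchronize();
    std::printf("y[0] = %f\n", y[0]);                  // expect 256 * 0.01 + 0.5 = 3.06
    cudaFree(W); cudaFree(x); cudaFree(b); cudaFree(y);
}
```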
In at least one embodiment, at least one component shown or described with respect to fig. 29 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 29 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 29 is used to cause one or more neural networks to select one or more variations in a feature of one or more text prompts based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 29 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
Fig. 30 is a block diagram of a graphics processor 3000, which may be a discrete graphics processing unit or may be a graphics processor integrated with multiple processing cores. In at least one embodiment, graphics processor 3000 communicates via a memory-mapped I/O interface with registers on graphics processor 3000 and with commands placed in memory. In at least one embodiment, graphics processor 3000 includes a memory interface 3014 for accessing memory. In at least one embodiment, memory interface 3014 is an interface to local memory, one or more internal caches, one or more shared external caches, and/or to system memory. In at least one embodiment, graphics processor 3000 includes a graphics core 2000.
In at least one embodiment, the graphics processor 3000 further includes a display controller 3002 for driving display output data to the display device 3020. In at least one embodiment, the display controller 3002 includes hardware for one or more overlay planes of the display device 3020 and the composition of multiple layers of video or user interface elements. In at least one embodiment, the display device 3020 may be an internal or external display device. In at least one embodiment, the display device 3020 is a head mounted display device, such as a Virtual Reality (VR) display device or an Augmented Reality (AR) display device. In at least one embodiment, the graphics processor 3000 includes a video codec engine 3006 to encode, decode, or transcode media into, from, or between one or more media encoding formats, including, but not limited to, Moving Picture Experts Group (MPEG) formats such as MPEG-2, Advanced Video Coding (AVC) formats such as H.264/MPEG-4 AVC, Society of Motion Picture and Television Engineers (SMPTE) 421M/VC-1, and Joint Photographic Experts Group (JPEG) formats such as JPEG and Motion JPEG (MJPEG).
In at least one embodiment, graphics processor 3000 includes a block image transfer (BLIT) engine 3004 for performing two-dimensional (2D) rasterizer operations, including, for example, bit boundary block transfers. However, in at least one embodiment, 2D graphics operations are performed using one or more components of Graphics Processing Engine (GPE) 3010. In at least one embodiment, GPE 3010 is a compute engine for performing graphics operations, including three-dimensional (3D) graphics operations and media operations.
In at least one embodiment, the GPE 3010 includes a 3D pipeline 3012 for performing 3D operations, such as rendering three-dimensional images and scenes using processing functions that operate on 3D primitive shapes (e.g., rectangles, triangles, etc.). In at least one embodiment, 3D pipeline 3012 includes programmable and fixed functional elements that perform various tasks and/or spawn threads of execution to 3D/media subsystem 3015. Although the 3D pipeline 3012 may be used to perform media operations, in at least one embodiment, the GPE 3010 also includes a media pipeline 3016 for performing media operations such as video post-processing and image enhancement.
In at least one embodiment, the media pipeline 3016 includes fixed function or programmable logic units for performing one or more specialized media operations, such as video decoding acceleration, video de-interlacing, and video encoding acceleration, in place of or on behalf of the video codec engine 3006. In at least one embodiment, the media pipeline 3016 also includes a thread generation unit to generate threads for execution on the 3D/media subsystem 3015. In at least one embodiment, the spawned threads perform computations for the media operations on one or more graphics execution units included in the 3D/media subsystem 3015.
In at least one embodiment, the 3D/media subsystem 3015 includes logic for executing threads spawned by the 3D pipeline 3012 and the media pipeline 3016. In at least one embodiment, the 3D pipeline 3012 and media pipeline 3016 send thread execution requests to the 3D/media subsystem 3015, which includes thread dispatch logic for arbitrating and dispatching various requests to available thread execution resources. In at least one embodiment, the execution resources include an array of graphics execution units for processing 3D and media threads. In at least one embodiment, the 3D/media subsystem 3015 includes one or more internal caches for thread instructions and data. In at least one embodiment, subsystem 3015 further includes a shared memory including registers and addressable memory for sharing data among threads and storing output data.
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, portions or all of logic 915 may be incorporated into graphics processor 3000. For example, in at least one embodiment, the training and/or reasoning techniques described herein may use one or more ALUs contained in the 3D pipeline 3012. Further, in at least one embodiment, the reasoning and/or training operations described herein may be accomplished using logic other than that shown in FIG. 9A or 9B. In at least one embodiment, the weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure the ALUs of graphics processor 3000 to perform one or more of the machine learning algorithms, neural network architectures, use cases, or training techniques described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 30 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 30 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 30 is used to cause one or more neural networks to select one or more variations in a feature of one or more text prompts based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 30 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
FIG. 31 is a block diagram of a graphics processing engine 3110 of a graphics processor in accordance with at least one embodiment. In at least one embodiment, graphics Processing Engine (GPE) 3110 is a version of GPE 3010 shown in fig. 30. In at least one embodiment, the media pipeline 3116 is optional and may not be explicitly included in the GPE 3110. In at least one embodiment, a separate media and/or image processor is coupled to GPE 3110.
In at least one embodiment, the GPE 3110 is coupled to or includes a command stream translator 3103 that provides command streams to a 3D pipeline 3112 and/or a media pipeline 3116. In at least one embodiment, command stream translator 3103 is coupled to a memory, which may be a system memory, or may be one or more of an internal cache memory and a shared cache memory. In at least one embodiment, the command stream translator 3103 receives commands from memory and sends commands to the 3D pipeline 3112 and/or the media pipeline 3116. In at least one embodiment, the commands are instructions, primitives, or micro-operations fetched from a ring buffer that stores commands for the 3D pipeline 3112 and the media pipeline 3116. In at least one embodiment, the ring buffer may further include a batch command buffer storing a plurality of commands for each batch. In at least one embodiment, the commands for 3D pipeline 3112 may also include references to data stored in memory, such as, but not limited to, vertex and geometry data for 3D pipeline 3112 and/or image data and memory objects for media pipeline 3116. In at least one embodiment, 3D pipeline 3112 and media pipeline 3116 process commands and data by performing operations or by dispatching one or more threads of execution to graphics core array 3114. In at least one embodiment, graphics core array 3114 includes one or more graphics core blocks (e.g., one or more graphics cores 3115A, one or more graphics cores 3115B), each block including one or more graphics cores. In at least one embodiment, one or more graphics cores 3115A, 3115B may be referred to as execution units ("EUs"). In at least one embodiment, each graphics core includes a set of graphics execution resources including general and graphics-specific execution logic for performing graphics and computing operations, as well as fixed-function texture processing and/or machine learning and artificial intelligence acceleration logic, including logic 915 in fig. 9A and 9B.
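As a non-limiting illustration of the ring-buffer style command hand-off described above (and not the actual command stream translator 3103), the following C++ sketch shows a producer appending encoded commands to a fixed-size ring and a consumer draining them in order; the Command layout and opcodes are hypothetical.

```cpp
// Hypothetical sketch of a command ring buffer: a producer appends encoded
// commands and a consumer (analogous to a command stream translator) drains
// them in order. Field layout and command codes are illustrative only.
#include <array>
#include <cstdint>
#include <cstdio>
#include <optional>

struct Command { uint32_t opcode; uint64_t payload; };  // payload could reference vertex data, etc.

class CommandRing {
    std::array<Command, 256> slots_{};
    size_t head_ = 0, tail_ = 0;                        // head: next read, tail: next write
public:
    bool submit(const Command& c) {
        if ((tail_ + 1) % slots_.size() == head_) return false;  // ring full
        slots_[tail_] = c;
        tail_ = (tail_ + 1) % slots_.size();
        return true;
    }
    std::optional<Command> fetch() {
        if (head_ == tail_) return std::nullopt;        // ring empty
        Command c = slots_[head_];
        head_ = (head_ + 1) % slots_.size();
        return c;
    }
};

int main() {
    CommandRing ring;
    ring.submit({0x3D, 0x1000});                        // hypothetical 3D-pipeline command
    ring.submit({0x4D, 0x2000});                        // hypothetical media-pipeline command
    while (auto c = ring.fetch())
        std::printf("dispatch opcode 0x%X payload 0x%llX\n",
                    static_cast<unsigned>(c->opcode),
                    static_cast<unsigned long long>(c->payload));
}
```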
In at least one embodiment, 3D pipeline 3112 includes fixed functionality and programmable logic for processing one or more shader programs, such as vertex shader, geometry shader, pixel shader, fragment shader, compute shader, or other shader programs, by processing instructions and dispatching execution threads to graphics core array 3114. In at least one embodiment, graphics core array 3114 provides uniform execution resource blocks for use in processing shader programs. In at least one embodiment, multipurpose execution logic (e.g., execution units) within one or more graphics cores 3115A-3115B of graphics core array 3114 includes support for various 3D API shader languages, and may execute multiple simultaneous threads of execution associated with multiple shaders.
In at least one embodiment, graphics core array 3114 further includes execution logic for performing media functions, such as video and/or image processing. In at least one embodiment, the execution unit includes general logic that is programmable to perform parallel general purpose computing operations in addition to graphics processing operations.
In at least one embodiment, output data generated by threads executing on graphics core array 3114 may be written to memory in Unified Return Buffer (URB) 3118. In at least one embodiment, the URB 3118 may store data for multiple threads. In at least one embodiment, URB 3118 may be used to send data between different threads executing on graphics core array 3114. In at least one embodiment, URB 3118 can also be used for synchronization between threads on graphics core array 3114 and fixed function logic within shared function logic 3120.
In at least one embodiment, graphics core array 3114 is scalable such that graphics core array 3114 includes a variable number of graphics cores, each having a variable number of execution units based on a target power and performance level of GPE 3110. In at least one embodiment, the execution resources are dynamically scalable such that execution resources may be enabled or disabled as needed.
In at least one embodiment, graphics core array 3114 is coupled to shared functional logic 3120, which includes a plurality of resources shared between graphics cores in graphics core array 3114. In at least one embodiment, the shared functionality performed by shared functionality logic 3120 is embodied in hardware logic that provides dedicated supplemental functionality to graphics core array 3114. In at least one embodiment, shared functional logic 3120 includes, but is not limited to, sampler unit 3121, math unit 3122, and inter-thread communication (ITC) logic 3123. In at least one embodiment, one or more caches 3125 are included in or coupled to shared function logic 3120.
In at least one embodiment, a shared function is used when demand for a given dedicated function is insufficient to justify inclusion within graphics core array 3114. In at least one embodiment, a single instantiation of a dedicated function is used in shared function logic 3120 and shared among other execution resources within graphics core array 3114. In at least one embodiment, specific shared functions within shared function logic 3120 that are widely used by graphics core array 3114 may be included within shared function logic 3126 within graphics core array 3114. In at least one embodiment, shared function logic 3126 within graphics core array 3114 may include some or all of the logic within shared function logic 3120. In at least one embodiment, all logic elements within shared function logic 3120 may be replicated within shared function logic 3126 of graphics core array 3114. In at least one embodiment, shared function logic 3120 may be excluded in favor of shared function logic 3126 within graphics core array 3114.
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, portions or all of logic 915 may be incorporated into graphics processor 3110. For example, in at least one embodiment, the training and/or reasoning techniques described herein may use one or more ALUs that are embodied in the 3D pipeline 3112, one or more graphics cores 3115, shared function logic 3126, shared function logic 3120, or other logic in FIG. 31. Further, in at least one embodiment, the reasoning and/or training operations described herein may be accomplished using logic other than that shown in FIG. 9A or 9B. In at least one embodiment, the weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure the ALUs of the graphics processor 3110 to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 31 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 31 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 31 is used to cause one or more neural networks to select one or more variations in a feature of one or more text prompts based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 31 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
Fig. 32 is a block diagram of hardware logic of a graphics processor core 3200 in accordance with at least one embodiment. In at least one embodiment, graphics processor core 3200 includes graphics core 2000. In at least one embodiment, graphics processor core 3200 is included within a graphics core array. In at least one embodiment, graphics processor core 3200 (sometimes referred to as a core slice) may be one or more graphics cores within a modular graphics processor. In at least one embodiment, graphics processor core 3200 is an example of one graphics core slice, and the graphics processor described herein may include multiple graphics core slices based on a target power and performance envelope. In at least one embodiment, each graphics core 3200 may include a fixed function block 3230 coupled with a plurality of sub-cores 3201A-3201F (also referred to as sub-slices), which include modular blocks of general-purpose and fixed-function logic.
In at least one embodiment, fixed function block 3230 includes a geometry and fixed function pipeline 3236, which may be shared by all sub-cores in graphics processor 3200, for example, in a lower performance and/or lower power graphics processor implementation. In at least one embodiment, geometry and fixed function pipeline 3236 includes a 3D fixed function pipeline, a video front end unit, a thread generator and thread dispatcher, and a unified return buffer manager that manages unified return buffers.
In at least one embodiment, the fixed function block 3230 further comprises a graphics SoC interface 3237, a graphics microcontroller 3238, and a media pipeline 3239. In at least one embodiment, graphics SoC interface 3237 provides an interface between graphics core 3200 and other processor cores in a system-on-chip integrated circuit. In at least one embodiment, graphics microcontroller 3238 is a programmable sub-processor that can be configured to manage various functions of graphics processor 3200, including thread dispatch, scheduling, and preemption. In at least one embodiment, media pipeline 3239 includes logic that facilitates decoding, encoding, preprocessing, and/or post-processing multimedia data, including image and video data. In at least one embodiment, media pipeline 3239 implements media operations via requests to compute or sample logic within sub-cores 3201A-3201F.
In at least one embodiment, SoC interface 3237 enables graphics core 3200 to communicate with a general purpose application processor core (e.g., CPU) and/or other components within the SoC, including memory hierarchy elements such as shared last level cache memory, system RAM, and/or embedded on-chip or on-package DRAM. In at least one embodiment, SoC interface 3237 may also enable communication with fixed function devices within the SoC (e.g., camera imaging pipelines) and enable use and/or implementation of global memory atomics that may be shared between graphics core 3200 and the CPU within the SoC. In at least one embodiment, graphics SoC interface 3237 may also implement power management controls for graphics processor core 3200 and enable an interface between a clock domain of graphics processor core 3200 and other clock domains within the SoC. In at least one embodiment, SoC interface 3237 enables receiving command buffers from a command stream translator and a global thread dispatcher configured to provide commands and instructions to each of one or more graphics cores within a graphics processor. In at least one embodiment, commands and instructions may be dispatched to the media pipeline 3239 when media operations are to be performed, or to the geometry and fixed-function pipeline (e.g., geometry and fixed-function pipeline 3236, and/or geometry and fixed-function pipeline 3214) when graphics processing operations are to be performed.
In at least one embodiment, graphics microcontroller 3238 can be configured to perform various scheduling and management tasks on graphics core 3200. In at least one embodiment, graphics microcontroller 3238 can perform graphics and/or compute workload scheduling on individual graphics parallel engines within Execution Unit (EU) arrays 3202A-3202F, 3204A-3204F in sub-cores 3201A-3201F. In at least one embodiment, host software executing on a CPU core of a SoC comprising graphics core 3200 may submit a workload to one of a plurality of graphics processor paths, which invokes a scheduling operation on the appropriate graphics engine. In at least one embodiment, the scheduling operation includes determining which workload is to be run next, submitting the workload to a command stream translator, preempting existing workloads running on the engine, monitoring the progress of the workload, and notifying the host software when the workload is completed. In at least one embodiment, graphics microcontroller 3238 may also facilitate a low power or idle state of graphics core 3200, providing graphics core 3200 with the ability to save and restore registers within graphics core 3200 across low power state transitions, independent of the operating system and/or graphics driver software on the system.
In at least one embodiment, graphics core 3200 may have more or fewer than the illustrated sub-cores 3201A-3201F, up to N modular sub-cores. For each set of N sub-cores, in at least one embodiment, graphics core 3200 may also include shared function logic 3210, shared and/or cache memory 3212, geometry/fixed function pipeline 3214, and additional fixed function logic 3216 to accelerate various graphics and computing processing operations. In at least one embodiment, shared function logic 3210 may include logic elements (e.g., samplers, mathematical and/or inter-thread communication logic) that may be shared by each of the N sub-cores within graphics core 3200. In at least one embodiment, shared and/or cache memory 3212 may be a last level cache of N sub-cores 3201A-3201F within graphics core 3200 and may also be used as shared memory accessible by multiple sub-cores. In at least one embodiment, a geometry/fixed function pipeline 3214 may be included in place of geometry/fixed function pipeline 3236 within fixed function block 3230 and may include similar logic units.
In at least one embodiment, graphics core 3200 includes additional fixed-function logic 3216, which may include various fixed-function acceleration logic for use by graphics core 3200. In at least one embodiment, the additional fixed-function logic 3216 includes an additional geometry pipeline for use in position-only shading. In position-only shading, at least two geometry pipelines exist: a full geometry pipeline within the geometry and fixed-function pipelines 3214, 3236, and a culling pipeline, which is an additional geometry pipeline that may be included in the additional fixed-function logic 3216. In at least one embodiment, the culling pipeline is a trimmed-down version of the full geometry pipeline. In at least one embodiment, the full pipeline and the culling pipeline may execute different instances of an application, each instance having a separate context. In at least one embodiment, position-only shading can hide long culling runs of discarded triangles, enabling shading to be completed earlier in some cases. For example, in at least one embodiment, the culling pipeline logic in the additional fixed-function logic 3216 may execute position shaders in parallel with the host application and typically generates critical results faster than the full pipeline, because the culling pipeline fetches and shades only the position attributes of vertices, without performing rasterization or rendering pixels to the frame buffer. In at least one embodiment, the culling pipeline may use the generated critical results to compute visibility information for all triangles, regardless of whether those triangles are culled. In at least one embodiment, the full pipeline (which in this case may be referred to as a replay pipeline) may consume the visibility information to skip culled triangles and shade only the visible triangles that are ultimately passed to the rasterization stage.
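As a non-limiting illustration of the two-pass idea behind position-only shading (greatly simplified, and not the pipeline hardware described above), the following C++ sketch first computes per-triangle visibility from position data alone and then "shades" only the triangles marked visible; a screen-space backface test stands in for the real visibility computation, and all geometry is hypothetical.

```cpp
// Hypothetical two-pass sketch of position-only shading: a cull pass computes
// per-triangle visibility from vertex positions alone, and a replay pass
// spends shading work only on triangles marked visible.
#include <array>
#include <cstdio>
#include <vector>

struct Vec2 { float x, y; };
using Tri = std::array<Vec2, 3>;

static float signed_area(const Tri& t) {               // sign encodes winding order
    return (t[1].x - t[0].x) * (t[2].y - t[0].y) -
           (t[2].x - t[0].x) * (t[1].y - t[0].y);
}

int main() {
    Tri front = {{{0, 0}, {1, 0}, {0, 1}}};             // counter-clockwise: front-facing
    Tri back  = {{{0, 0}, {0, 1}, {1, 0}}};             // clockwise: back-facing, will be culled
    std::vector<Tri> tris = {front, back};

    // Cull pass: position attributes only, no rasterization or pixel output.
    std::vector<bool> visible(tris.size());
    for (size_t i = 0; i < tris.size(); ++i)
        visible[i] = signed_area(tris[i]) > 0.f;

    // Replay pass: full shading work is spent only on visible triangles.
    for (size_t i = 0; i < tris.size(); ++i) {
        if (!visible[i]) continue;
        std::printf("shading triangle %zu\n", i);       // stand-in for rasterize + pixel shade
    }
}
```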
In at least one embodiment, the additional fixed-function logic 3216 may also include machine learning acceleration logic, such as fixed-function matrix multiplication logic, for implementations that include optimizations for machine learning training or reasoning.
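As a non-limiting illustration of fixed-function matrix multiplication acceleration for machine learning, the following sketch uses CUDA's warp-level WMMA API for NVIDIA tensor cores, which is merely one concrete analogue and not the logic 3216 described above. It multiplies two 16x16 half-precision tiles with a single warp and assumes a device (and nvcc -arch flag) of compute capability 7.0 or higher.

```cpp
// Hypothetical sketch: one warp performs a 16x16x16 matrix multiply-accumulate
// on tensor cores via the WMMA API. Compile with, e.g., nvcc -arch=sm_70.
#include <cuda_runtime.h>
#include <cuda_fp16.h>
#include <mma.h>
#include <cstdio>
using namespace nvcuda;

__global__ void wmma_gemm_16x16(const half* a, const half* b, float* c) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;
    wmma::fill_fragment(c_frag, 0.0f);
    wmma::load_matrix_sync(a_frag, a, 16);              // leading dimension 16
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);     // fixed-function matrix multiply-accumulate
    wmma::store_matrix_sync(c, c_frag, 16, wmma::mem_row_major);
}

int main() {
    half *a, *b; float *c;
    cudaMallocManaged(&a, 256 * sizeof(half));
    cudaMallocManaged(&b, 256 * sizeof(half));
    cudaMallocManaged(&c, 256 * sizeof(float));
    for (int i = 0; i < 256; ++i) { a[i] = __float2half(1.0f); b[i] = __float2half(1.0f); }
    wmma_gemm_16x16<<<1, 32>>>(a, b, c);                // exactly one warp executes the MMA
    cudaDeviceSynchronize();
    std::printf("c[0] = %f (expected 16)\n", c[0]);
    cudaFree(a); cudaFree(b); cudaFree(c);
}
```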
In at least one embodiment, a set of execution resources are included within each graphics sub-core 3201A-3201F that are operable to perform graphics, media, and computing operations in response to requests by a graphics pipeline, media pipeline, or shader program. In at least one embodiment, the graphics sub-cores 3201A-3201F include a plurality of EU arrays 3202A-3202F, 3204A-3204F, thread dispatch and inter-thread communication (TD/IC) logic 3203A-3203F, 3D (e.g., texture) samplers 3205A-3205F, media samplers 3206A-3206F, shader processors 3207A-3207F, and Shared Local Memory (SLM) 3208A-3208F. In at least one embodiment, the EU arrays 3202A-3202F, 3204A-3204F each include a plurality of execution units, which are general purpose graphics processing units capable of performing floating point and integer/fixed point logical operations, serving graphics, media, or computational operations (including graphics, media, or computational shader programs). In at least one embodiment, the TD/IC logic 3203A-3203F performs local thread dispatch and thread control operations for execution units within the sub-cores and facilitates communication between threads executing on execution units of the sub-cores. In at least one embodiment, 3D samplers 3205A-3205F may read data related to textures or other 3D graphics into memory. In at least one embodiment, the 3D sampler may read texture data differently based on the configured sample state and the texture format associated with a given texture. In at least one embodiment, media samplers 3206A-3206F may perform similar read operations based on the type and format associated with the media data. In at least one embodiment, each graphics sub-core 3201A-3201F may alternatively include a unified 3D and media sampler. In at least one embodiment, threads executing on execution units within each sub-core 3201A-3201F may utilize shared local memory 3208A-3208F within each sub-core to enable threads executing within a thread group to execute using a common pool of on-chip memory.
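As a non-limiting illustration of threads in a thread group sharing a common pool of on-chip memory (analogous in spirit to the shared local memory 3208A-3208F, though this code targets CUDA rather than the hardware of fig. 32), the following sketch reduces a block of values through shared memory; all sizes are hypothetical.

```cpp
// Hypothetical CUDA sketch: threads in a block cooperate through on-chip
// shared memory to compute a per-block partial sum.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void block_sum(const float* in, float* out, int n) {
    __shared__ float tile[256];                         // on-chip pool shared by the thread group
    int gid = blockIdx.x * blockDim.x + threadIdx.x;
    tile[threadIdx.x] = (gid < n) ? in[gid] : 0.f;
    __syncthreads();
    for (int stride = blockDim.x / 2; stride > 0; stride >>= 1) {
        if (threadIdx.x < stride)
            tile[threadIdx.x] += tile[threadIdx.x + stride];
        __syncthreads();                                // all threads see updated partial sums
    }
    if (threadIdx.x == 0) out[blockIdx.x] = tile[0];    // one partial sum per block
}

int main() {
    const int n = 1 << 20, threads = 256, blocks = n / threads;
    float *in, *out;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, blocks * sizeof(float));
    for (int i = 0; i < n; ++i) in[i] = 1.f;
    block_sum<<<blocks, threads>>>(in, out, n);
    cudaDeviceSynchronize();
    double total = 0;
    for (int i = 0; i < blocks; ++i) total += out[i];
    std::printf("sum = %.0f (expected %d)\n", total, n);
    cudaFree(in); cudaFree(out);
}
```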
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, some or all of logic 915 may be incorporated into graphics processor 3200. For example, in at least one embodiment, the training and/or reasoning techniques described herein may use one or more ALUs embodied in a 3D pipeline, a graphics microcontroller 3238, geometric and fixed function pipelines 3214 and 3236, or other logic in FIG. 32. Further, in at least one embodiment, the reasoning and/or training operations described herein may be accomplished using logic other than that shown in FIG. 9A or 9B. In at least one embodiment, the weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure the ALUs of graphics processor 3200 to perform one or more machine learning algorithms, neural network architectures, use cases, or training techniques described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 32 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 32 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 32 is used to cause one or more neural networks to select one or more variations in a feature of one or more text prompts based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 32 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
FIGS. 33A and 33B illustrate thread execution logic 3300 of an array of processing elements including a graphics processor core in accordance with at least one embodiment. FIG. 33A illustrates at least one embodiment in which thread execution logic 3300 is utilized. FIG. 33B illustrates exemplary internal details of a graphics execution unit 3308 in accordance with at least one embodiment.
As shown in fig. 33A, in at least one embodiment, thread execution logic 3300 includes a shader processor 3302, a thread dispatcher 3304, an instruction cache 3306, an array of scalable execution units including a plurality of execution units 3307A-3307N and 3308A-3308N, a sampler 3310, a data cache 3312, and a data port 3314. In at least one embodiment, the array of scalable execution units may be dynamically scaled by enabling or disabling one or more execution units (e.g., any of execution units 3308A-N or 3307A-N), e.g., based on the computational requirements of the workload. In at least one embodiment, the scalable execution units are interconnected via an interconnect structure linked to each execution unit. In at least one embodiment, the thread execution logic 3300 includes one or more connections to memory (such as system memory or cache memory) through one or more of the instruction cache 3306, data port 3314, sampler 3310, and execution units 3307 or 3308. In at least one embodiment, each execution unit (e.g., 3307A) is a separate programmable general purpose computing unit capable of executing multiple simultaneous hardware threads while processing multiple data elements in parallel for each thread. In at least one embodiment, the array of execution units 3307 and/or 3308 can be expanded to include any number of individual execution units.
In at least one embodiment, execution units 3307 and/or 3308 are primarily used to execute shader programs. In at least one embodiment, the shader processor 3302 can process various shader programs and dispatch execution threads associated with the shader programs via a thread dispatcher 3304. In at least one embodiment, the thread dispatcher 3304 includes logic to arbitrate thread initialization requests from the graphics and media pipelines and to instantiate the requested threads on one or more of the execution units 3307 and/or 3308. For example, in at least one embodiment, a geometry pipeline may dispatch vertices, tessellations, or geometry shaders to thread execution logic for processing. In at least one embodiment, the thread dispatcher 3304 may also process runtime thread generation requests from executing shader programs.
In at least one embodiment, execution units 3307 and/or 3308 support an instruction set that includes native support for many standard 3D graphics shader instructions, such that shader programs from graphics libraries (e.g., Direct3D and OpenGL) can be executed with minimal conversion. In at least one embodiment, the execution units support vertex and geometry processing (e.g., vertex programs, geometry programs, and/or vertex shaders), pixel processing (e.g., pixel shaders, fragment shaders), and general purpose processing (e.g., compute and media shaders). In at least one embodiment, each execution unit 3307 and/or 3308, which includes one or more Arithmetic Logic Units (ALUs), is capable of multi-issue Single Instruction Multiple Data (SIMD) execution, and multi-threaded operation enables an efficient execution environment despite higher-latency memory accesses. In at least one embodiment, each hardware thread within each execution unit has a dedicated high bandwidth register file and associated independent thread state. In at least one embodiment, execution is multi-issue per clock to pipelines capable of integer, single and double precision floating point operations, SIMD branch capability, logical operations, transcendental operations, and other miscellaneous operations. In at least one embodiment, while waiting for data from one of the memory or shared functions, dependency logic within execution units 3307 and/or 3308 causes a waiting thread to sleep until the requested data is returned. In at least one embodiment, while the waiting thread is sleeping, hardware resources may be dedicated to processing other threads. For example, in at least one embodiment, during a delay associated with vertex shader operations, an execution unit may perform operations on a pixel shader, a fragment shader, or another type of shader program (including a different vertex shader).
In at least one embodiment, each of execution units 3307 and/or 3308 operates on an array of data elements. In at least one embodiment, the number of data elements is the "execution size" or the number of channels of the instruction. In at least one embodiment, the execution channel is a logical execution unit for data element access, masking, and flow control within an instruction. In at least one embodiment, the number of channels may be independent of the number of physical Arithmetic Logic Units (ALUs) or Floating Point Units (FPUs) of a particular graphics processor. In at least one embodiment, execution units 3307 and/or 3308 support integer and floating point data types.
In at least one embodiment, the execution unit instruction set includes SIMD instructions. In at least one embodiment, individual data elements may be stored in registers as packed data types, and the execution unit will process individual elements based on the data size of those elements. For example, in at least one embodiment, when operating on a 256-bit wide vector, the 256-bit vector is stored in a register, and the execution unit operates on the vector as four separate 64-bit packed data elements (quad-word (QW) size data elements), eight separate 32-bit packed data elements (double-word (DW) size data elements), sixteen separate 16-bit packed data elements (word (W) size data elements), or thirty-two separate 8-bit data elements (byte (B) size data elements). However, in at least one embodiment, different vector widths and register sizes are possible.
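As a non-limiting illustration of how one 256-bit value can be viewed as packed data elements of different widths, the following host-side C++ sketch reinterprets the same 32 bytes as quad-word, double-word, word, and byte lanes; it is an analogy written in software, not execution-unit hardware.

```cpp
// Hypothetical sketch: the same 256 bits viewed as 4 x 64-bit QW, 8 x 32-bit DW,
// 16 x 16-bit W, and 32 x 8-bit B packed data elements.
#include <array>
#include <cstdint>
#include <cstdio>
#include <cstring>

int main() {
    std::array<uint32_t, 8> dw{};                       // write the value as eight 32-bit DW lanes
    for (int i = 0; i < 8; ++i) dw[i] = i + 1;

    std::array<uint64_t, 4> qw{};                       // same 256 bits as four 64-bit QW lanes
    std::array<uint16_t, 16> w{};                       // same 256 bits as sixteen 16-bit W lanes
    std::array<uint8_t, 32> b{};                        // same 256 bits as thirty-two 8-bit B lanes
    std::memcpy(qw.data(), dw.data(), sizeof(dw));
    std::memcpy(w.data(), dw.data(), sizeof(dw));
    std::memcpy(b.data(), dw.data(), sizeof(dw));

    std::printf("DW lanes:");
    for (uint32_t v : dw) std::printf(" %u", v);
    std::printf("\nW lanes :");
    for (uint16_t v : w) std::printf(" %u", static_cast<unsigned>(v));
    std::printf("\nB lanes :");
    for (uint8_t v : b) std::printf(" %u", static_cast<unsigned>(v));
    std::printf("\nQW lanes:");
    for (uint64_t v : qw) std::printf(" %llu", static_cast<unsigned long long>(v));
    std::printf("\n");
}
```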
In at least one embodiment, one or more execution units can be combined into a fused execution unit 3309A-3309N having thread control logic (3311A-3311N) common to the fused EUs, such as fusing execution unit 3307A with execution unit 3308A into fused execution unit 3309A. In at least one embodiment, multiple EUs may be fused into an EU group. In at least one embodiment, each EU in the fused EU group may be configured to execute a separate SIMD hardware thread, wherein the number of EUs in the fused EU group may vary according to the respective embodiment. In at least one embodiment, various SIMD widths may be executed per EU, including but not limited to SIMD8, SIMD16, and SIMD32. In at least one embodiment, each fused graphics execution unit 3309A-3309N includes at least two execution units. For example, in at least one embodiment, the fused execution unit 3309A includes a first EU 3307A, a second EU 3308A, and thread control logic 3311A common to the first EU 3307A and the second EU 3308A. In at least one embodiment, the thread control logic 3311A controls threads executing on the fused graphics execution unit 3309A, allowing each EU within the fused execution units 3309A-3309N to execute using a common instruction pointer register.
In at least one embodiment, one or more internal instruction caches (e.g., 3306) are included in the thread execution logic 3300 to cache thread instructions for execution units. In at least one embodiment, one or more data caches (e.g., 3312) are included to cache thread data during thread execution. In at least one embodiment, a sampler 3310 is included to provide texture samples for 3D operations and media samples for media operations. In at least one embodiment, sampler 3310 includes specialized texture or media sampling functionality to process texture or media data during sampling prior to providing the sampled data to an execution unit.
During execution, in at least one embodiment, the graphics and media pipeline sends a thread initiation request to thread execution logic 3300 via thread generation and dispatch logic. In at least one embodiment, once a set of geometric objects has been processed and rasterized into pixel data, pixel processor logic (e.g., pixel shader logic, fragment shader logic, etc.) within shader processor 3302 is invoked to further calculate output information and cause the results to be written to an output surface (e.g., color buffer, depth buffer, stencil buffer, etc.). In at least one embodiment, the pixel shader or fragment shader calculates the values of individual vertex attributes to be interpolated on the rasterized object. In at least one embodiment, the pixel processor logic within shader processor 3302 then executes the pixel or fragment shader program provided by an Application Programming Interface (API). In at least one embodiment, to execute a shader program, shader processor 3302 dispatches threads to execution units (e.g., 3308A) via thread dispatcher 3304. In at least one embodiment, shader processor 3302 uses texture sampling logic in sampler 3310 to access texture data in a texture map stored in memory. In at least one embodiment, arithmetic operations on texture data and input geometry data calculate pixel color data for each geometry segment, or discard one or more pixels for no further processing.
In at least one embodiment, the data port 3314 provides a memory access mechanism for the thread execution logic 3300 to output processed data to memory for further processing on a graphics processor output pipeline. In at least one embodiment, the data port 3314 includes or is coupled to one or more cache memories (e.g., data cache 3312) for caching data for memory access via the data port.
As shown in FIG. 33B, in at least one embodiment, the graphics execution unit 3308 may include an instruction fetch unit 3337, a general purpose register file array (GRF) 3324, an architectural register file array (ARF) 3326, a thread arbiter 3322, a send unit 3330, a branch unit 3332, a set of SIMD Floating Point Units (FPUs) 3334, and a set of special integer SIMD ALUs 3335. In at least one embodiment, GRF 3324 and ARF 3326 include a set of general purpose register files and architectural register files associated with each simultaneous hardware thread that may be active in graphics execution unit 3308. In at least one embodiment, per-thread architecture state is maintained in the ARF 3326, while data used during thread execution is stored in the GRF 3324. In at least one embodiment, the execution state of each thread, including the instruction pointer of each thread, may be saved in a thread-specific register in ARF 3326.
In at least one embodiment, the graphics execution unit 3308 has an architecture that is a combination of Simultaneous Multithreading (SMT) and fine grain Interleaved Multithreading (IMT). In at least one embodiment, the architecture has a modular configuration that can be fine-tuned at design time based on a target number of simultaneous threads and a number of registers per execution unit, where execution unit resources are logically partitioned for executing multiple simultaneous threads.
In at least one embodiment, the graphics execution unit 3308 may issue multiple instructions together, each of which may be a different instruction. In at least one embodiment, the thread arbiter 3322 of the graphics execution unit 3308 may dispatch instructions to one of the send unit 3330, branch unit 3332, or SIMD FPU 3334 for execution. In at least one embodiment, each thread of execution may access 128 general purpose registers in GRF 3324, where each register may store 32 bytes, accessible as a SIMD 8-element vector of 32-bit data elements. In at least one embodiment, each execution unit thread may access 4KB in GRF 3324, although embodiments are not so limited, and more or fewer register resources may be provided in other embodiments. In at least one embodiment, up to seven threads may be executed simultaneously, although the number of threads per execution unit may also vary depending on the embodiment. In at least one embodiment, in which seven threads may access 4KB, GRF 3324 may store a total of 28KB. In at least one embodiment, a flexible addressing scheme may allow registers to be addressed together to effectively build wider registers or to represent strided rectangular block data structures.
In at least one embodiment, memory operations, sampler operations, and other longer-latency system communications are dispatched via "send" instructions executed by message passing to the send unit 3330. In at least one embodiment, branch instructions are dispatched to the branch unit 3332 to facilitate SIMD divergence and eventual convergence.
In at least one embodiment, the graphics execution unit 3308 includes one or more SIMD Floating Point Units (FPUs) 3334 to perform floating point operations. In at least one embodiment, one or more FPUs 3334 also support integer computations. In at least one embodiment, one or more FPUs 3334 can SIMD-execute up to M 32-bit floating point (or integer) operations, or SIMD-execute up to 2M 16-bit integer or 16-bit floating point operations. In at least one embodiment, at least one FPU provides extended mathematical capabilities to support high throughput transcendental mathematical functions and double precision 64-bit floating point. In at least one embodiment, there is also a set of 8-bit integer SIMD ALUs 3335, which may be specifically optimized to perform operations associated with machine learning computations.
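As a non-limiting illustration of 8-bit integer SIMD arithmetic of the kind used for machine learning computations (expressed here with CUDA's __dp4a intrinsic where available, which is one concrete analogue rather than the ALUs 3335 themselves), the following sketch computes an int8 dot product over packed 32-bit words; values and sizes are hypothetical.

```cpp
// Hypothetical CUDA sketch: four int8 pairs packed into each 32-bit word are
// multiplied and accumulated per operation (via __dp4a on sm_61+ devices,
// with a scalar fallback otherwise).
#include <cuda_runtime.h>
#include <cstdio>

__global__ void int8_dot(const int* a_packed, const int* b_packed, int* out, int n_words) {
    int acc = 0;
    for (int i = 0; i < n_words; ++i) {
#if __CUDA_ARCH__ >= 610
        acc = __dp4a(a_packed[i], b_packed[i], acc);    // 4 int8 multiplies + add in one op
#else
        for (int lane = 0; lane < 4; ++lane) {          // scalar fallback for older devices
            int av = (signed char)((a_packed[i] >> (8 * lane)) & 0xFF);
            int bv = (signed char)((b_packed[i] >> (8 * lane)) & 0xFF);
            acc += av * bv;
        }
#endif
    }
    *out = acc;
}

int main() {
    const int n_words = 8;                              // 32 int8 elements in total
    int *a, *b, *out;
    cudaMallocManaged(&a, n_words * sizeof(int));
    cudaMallocManaged(&b, n_words * sizeof(int));
    cudaMallocManaged(&out, sizeof(int));
    for (int i = 0; i < n_words; ++i) { a[i] = 0x01010101; b[i] = 0x02020202; }  // lanes of 1 and 2
    int8_dot<<<1, 1>>>(a, b, out, n_words);
    cudaDeviceSynchronize();
    std::printf("dot = %d (expected %d)\n", *out, 32 * 1 * 2);
    cudaFree(a); cudaFree(b); cudaFree(out);
}
```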
In at least one embodiment, an array of multiple instances of graphics execution unit 3308 may be instantiated in a graphics sub-core grouping (e.g., sub-slice). In at least one embodiment, execution unit 3308 may execute instructions across multiple execution channels. In at least one embodiment, each thread executing on graphics execution unit 3308 executes on a different channel.
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided below in connection with fig. 9A and/or 9B. In at least one embodiment, some or all of the logic 915 may be incorporated into the thread execution logic 3300. Further, in at least one embodiment, the reasoning and/or training operations described herein may be accomplished using logic other than that shown in FIG. 9A or FIG. 9B. In at least one embodiment, the weight parameters may be stored in on-chip or off-chip memory and/or registers (shown or not shown) that configure the ALU of the thread execution logic 3300 to perform one or more of the machine learning algorithms, neural network architectures, use cases, or training techniques described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 33A and 33B is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 33A and 33B is used to cause selection of a most consistent output of one or more pre-trained neural networks based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 33A and 33B is used to cause one or more neural networks to select one or more variations in a feature of one or more text prompts based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 33A and 33B is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
FIG. 34 illustrates a parallel processing unit ("PPU") 3400 in accordance with at least one embodiment. In at least one embodiment, the PPU 3400 is configured with machine-readable code that, if executed by the PPU 3400, causes the PPU 3400 to perform some or all of the processes and techniques described throughout this disclosure. In at least one embodiment, the PPU 3400 is a multi-threaded processor implemented on one or more integrated circuit devices and utilizes multi-threading as a delay hiding technique designed to process computer-readable instructions (also referred to as machine-readable instructions or simply instructions) in parallel on multiple threads. In at least one embodiment, PPU 3400 includes one or more graphics cores 2000. In at least one embodiment, a thread refers to a thread of execution and is an instantiation of a set of instructions configured to be executed by PPU 3400. In at least one embodiment, PPU 3400 is a graphics processing unit ("GPU") configured to implement a graphics rendering pipeline for processing three-dimensional ("3D") graphics data in order to generate two-dimensional ("2D") image data for display on a display device, such as a liquid crystal display ("LCD") device. In at least one embodiment, the PPU 3400 is used to perform computations, such as linear algebraic operations and machine learning operations. Fig. 34 shows an example parallel processor for illustrative purposes only, and should be construed as a non-limiting example of a processor architecture contemplated within the scope of the present disclosure, and any suitable processor may be employed in addition to and/or in lieu thereof.
In at least one embodiment, one or more PPUs 3400 are configured to accelerate high-performance computing ("HPCs"), data centers, and machine learning applications. In at least one embodiment, PPU 3400 is configured to accelerate deep learning systems and applications, including the following non-limiting examples: autonomous automotive platform, deep learning, high precision speech, image, text recognition system, intelligent video analysis, molecular simulation, drug discovery, disease diagnosis, weather forecast, big data analysis, astronomy, molecular dynamics simulation, financial modeling, robotics, factory automation, real-time language conversion, online search optimization, personalized user recommendation, etc.
In at least one embodiment, PPU 3400 includes, but is not limited to, an input/output ("I/O") unit 3406, a front end unit 3410, a scheduler (sequencer) unit 3412, a work distribution unit 3414, a hub 3416, a crossbar ("Xbar") 3420, one or more general processing clusters ("GPCs") 3418, and one or more partition units ("memory partition units") 3422. In at least one embodiment, the PPU 3400 is connected to a host processor or other PPU 3400 via one or more high-speed GPU interconnects ("GPU interconnects") 3408. In at least one embodiment, the PPU 3400 is connected to a host processor or other peripheral device via a system bus 3402. In at least one embodiment, PPU 3400 is connected to a local memory comprising one or more memory devices ("memories") 3404. In at least one embodiment, memory device 3404 includes, but is not limited to, one or more dynamic random access memory ("DRAM") devices. In at least one embodiment, one or more DRAM devices are configured and/or configurable as a high bandwidth memory ("HBM") subsystem, and multiple DRAM dies are stacked within each device.
In at least one embodiment, the high-speed GPU interconnect 3408 may refer to a line-based multi-channel communication link that systems use to scale, incorporating one or more PPUs 3400 in conjunction with one or more central processing units ("CPUs"), and that supports cache coherency between PPUs 3400 and CPUs, as well as CPU hosting. In at least one embodiment, the high-speed GPU interconnect 3408 communicates data and/or commands to or from other units of the PPU 3400, such as one or more replication engines, video encoders, video decoders, power management units, and/or other components that may not be explicitly shown in fig. 34, through the hub 3416.
In at least one embodiment, the I/O unit 3406 is configured to send and receive communications (e.g., commands, data) from a host processor (not shown in fig. 34) over the system bus 3402. In at least one embodiment, the I/O unit 3406 communicates with the host processor directly via the system bus 3402 or through one or more intermediary devices (such as a memory bridge). In at least one embodiment, the I/O unit 3406 may communicate with one or more other processors (such as one or more PPUs 3400) via a system bus 3402. In at least one embodiment, I/O unit 3406 implements a peripheral component interconnect express ("PCIe") interface for communicating over a PCIe bus. In at least one embodiment, I/O unit 3406 implements an interface for communicating with external devices.
In at least one embodiment, the I/O unit 3406 decodes packets (packets) received via the system bus 3402. In at least one embodiment, at least some of the packets represent commands configured to cause PPU 3400 to perform various operations. In at least one embodiment, I/O unit 3406 communicates the decoded command to various other units of PPU 3400 as specified by the command. In at least one embodiment, the commands are transmitted to the front end unit 3410 and/or to other units of the hub 3416 or PPU 3400, such as one or more replication engines, video encoders, video decoders, power management units, etc. (not explicitly shown in fig. 34). In at least one embodiment, I/O unit 3406 is configured to route communications between and among the various logical units of PPU 3400.
In at least one embodiment, programs executed by the host processor encode a command stream in a buffer that provides the workload to the PPU 3400 for processing. In at least one embodiment, a workload includes instructions and data to be processed by those instructions. In at least one embodiment, the buffer is a region in memory that is accessible (e.g., read/write) by both the host processor and the PPU 3400; a host interface unit may be configured to access the buffer in system memory connected to the system bus 3402 via memory requests transmitted by the I/O unit 3406 over the system bus 3402. In at least one embodiment, the host processor writes the command stream to the buffer and then sends a pointer to the beginning of the command stream to the PPU 3400, such that the front-end unit 3410 receives pointers to one or more command streams, manages the one or more command streams, reads commands from the command streams, and forwards commands to the various units of the PPU 3400.
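As a non-limiting and loosely analogous illustration of a host enqueuing work that a processor consumes asynchronously (using CUDA streams rather than the PPU 3400 command-stream mechanism described above), the following sketch submits two kernels to a stream and synchronizes only at the end; kernel contents and sizes are hypothetical.

```cpp
// Hypothetical CUDA sketch: the host enqueues work into a stream; the device
// drains the enqueued work in order while the host continues independently.
#include <cuda_runtime.h>
#include <cstdio>

__global__ void scale(float* data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 16;
    float* data;
    cudaMallocManaged(&data, n * sizeof(float));
    for (int i = 0; i < n; ++i) data[i] = 1.f;

    cudaStream_t stream;
    cudaStreamCreate(&stream);
    scale<<<(n + 255) / 256, 256, 0, stream>>>(data, 2.f, n);  // enqueued; call returns immediately
    scale<<<(n + 255) / 256, 256, 0, stream>>>(data, 3.f, n);  // runs after the first kernel completes
    cudaStreamSynchronize(stream);                             // host waits for the stream to drain
    std::printf("data[0] = %.0f (expected 6)\n", data[0]);

    cudaStreamDestroy(stream);
    cudaFree(data);
}
```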
In at least one embodiment, the front end units 3410 are coupled to a scheduler unit 3412 (which may be referred to as a sequencer unit, a thread sequencer, and/or an asynchronous compute engine), which scheduler unit 3412 configures each GPC 3418 to process tasks defined by one or more command streams. In at least one embodiment, the scheduler unit 3412 is configured to track status information regarding various tasks managed by the scheduler unit 3412, where the status information may indicate to which GPC 3418 a task is assigned, whether a task is active or inactive, priorities associated with a task, and so forth. In at least one embodiment, the scheduler unit 3412 manages execution of multiple tasks on one or more GPCs 3418.
In at least one embodiment, the scheduler unit 3412 is coupled to a work distribution unit 3414, which work distribution unit 3414 is configured to dispatch tasks for execution on GPCs 3418. In at least one embodiment, the work distribution unit 3414 tracks a plurality of scheduled tasks received from the scheduler unit 3412, and the work distribution unit 3414 manages a pending task pool and an active task pool for each GPC 3418. In at least one embodiment, the pending task pool includes a plurality of time slots (e.g., 32 time slots) containing tasks assigned to be processed by a particular GPC 3418; the active task pool may include multiple time slots (e.g., 4 time slots) for tasks actively processed by GPCs 3418, such that as one of GPCs 3418 completes execution of a task, that task is evicted from the active task pool of the GPC 3418 and another task is selected from the pending task pool and scheduled for execution on the GPC 3418. In at least one embodiment, if an active task is idle on the GPC 3418, such as while waiting for data dependencies to be resolved, the active task is evicted from the GPC 3418 and returned to the pending task pool while another task in the pending task pool is selected and scheduled for execution on the GPC 3418.
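As a non-limiting illustration of the pending/active task-pool bookkeeping described above (hypothetical host-side code, not the work distribution unit 3414), the following C++ sketch promotes tasks from a pending pool into a fixed number of active slots and refills a slot whenever a task completes.

```cpp
// Hypothetical sketch: a processing cluster holds a small number of active
// slots; a completed task frees its slot and a pending task is promoted.
#include <cstdio>
#include <deque>
#include <vector>

struct Task { int id; };

struct Cluster {
    static constexpr int kActiveSlots = 4;              // e.g. 4 active slots per cluster
    std::vector<Task> active;
};

int main() {
    std::deque<Task> pending;                            // pool of tasks awaiting an active slot
    for (int i = 0; i < 10; ++i) pending.push_back({i});

    Cluster gpc;
    auto fill_slots = [&] {
        while (gpc.active.size() < static_cast<size_t>(Cluster::kActiveSlots) && !pending.empty()) {
            gpc.active.push_back(pending.front());       // promote pending -> active
            pending.pop_front();
        }
    };

    fill_slots();
    while (!gpc.active.empty()) {
        Task done = gpc.active.front();                  // pretend the oldest active task finishes
        gpc.active.erase(gpc.active.begin());
        std::printf("task %d completed, %zu still pending\n", done.id, pending.size());
        fill_slots();                                    // freed slot is refilled from the pending pool
    }
}
```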
In at least one embodiment, the work distribution unit 3414 communicates with one or more GPCs 3418 via XBar 3420. In at least one embodiment, XBar 3420 is an interconnection network that couples many of the units of PPU 3400 to other units of PPU 3400 and may be configured to couple work distribution units 3414 to a particular GPC 3418. In at least one embodiment, one or more other units of PPU 3400 may also be connected to XBar 3420 via hub 3416.
In at least one embodiment, tasks are managed by the scheduler unit 3412 and assigned to one of the GPCs 3418 by the work distribution unit 3414. In at least one embodiment, the GPCs 3418 are configured to process tasks and generate results. In at least one embodiment, the results may be consumed by other tasks in the GPCs 3418, routed to a different GPC 3418 via XBar 3420, or stored in memory 3404. In at least one embodiment, the results can be written to memory 3404 via partition unit 3422, which implements a memory interface for writing data to memory 3404 or reading data from memory 3404. In at least one embodiment, the results may be transferred to another PPU or CPU via the high-speed GPU interconnect 3408. In at least one embodiment, the PPU 3400 includes, but is not limited to, a number U partition units 3422 equal to the number of separate and distinct memory devices 3404 coupled to the PPU 3400, as described in more detail herein in connection with fig. 36.
In at least one embodiment, the host processor executes a driver kernel that implements an Application Programming Interface (API) that enables one or more applications executing on the host processor to schedule operations for execution on the PPU 3400. In at least one embodiment, multiple computing applications are executed simultaneously by the PPU 3400, and the PPU 3400 provides isolation, quality of service ("QoS"), and independent address spaces for the multiple computing applications. In at least one embodiment, the application generates instructions (e.g., in the form of API calls) that cause the driver kernel to generate one or more tasks for execution by the PPU 3400, and the driver kernel outputs the tasks to one or more streams being processed by the PPU 3400. In at least one embodiment, each task includes one or more groups of related threads, which may be referred to as thread bundles (warps), wave fronts, and/or waves. In at least one embodiment, a thread bundle, wave front, and/or wave includes multiple related threads (e.g., 32 threads) that may be executed in parallel. In at least one embodiment, cooperative threads may refer to multiple threads, including instructions for performing a task and exchanging data through shared memory. In at least one embodiment, threads and cooperative threads are described in more detail in connection with FIG. 36.
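As a minimal illustration of the stream-based work submission described above, the following CUDA C++ sketch enqueues a kernel on a stream that the driver forwards for processing; the kernel name, array size, and launch configuration are illustrative assumptions rather than part of this disclosure.

```cuda
#include <cuda_runtime.h>

// Simple kernel: each thread scales one element. Threads are grouped by
// hardware into thread bundles (warps) of 32 threads.
__global__ void scale(float *data, float factor, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main() {
    const int n = 1 << 20;
    float *d_data;
    cudaMalloc(&d_data, n * sizeof(float));
    cudaMemset(d_data, 0, n * sizeof(float));

    // Work submitted on a stream is processed asynchronously with respect
    // to the host and to other streams.
    cudaStream_t stream;
    cudaStreamCreate(&stream);

    dim3 block(128);                        // 128 threads = 4 thread bundles of 32
    dim3 grid((n + block.x - 1) / block.x);
    scale<<<grid, block, 0, stream>>>(d_data, 2.0f, n);

    cudaStreamSynchronize(stream);          // wait for the tasks on this stream
    cudaStreamDestroy(stream);
    cudaFree(d_data);
    return 0;
}
```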
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, the deep learning application processor is used to train a machine learning model (such as a neural network) to predict or infer information provided to the PPU 3400. In at least one embodiment, the deep learning application processor is used to infer or predict information based on a trained machine learning model (e.g., neural network) that has been trained by another processor or system or by the PPU 3400. In at least one embodiment, PPU 3400 may be used to perform one or more neural network use cases described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 34 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 34 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 34 is used to cause one or more neural networks to select one or more variations in a feature of one or more text prompts based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 34 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
FIG. 35 illustrates a general processing cluster ("GPC") 3500 in accordance with at least one embodiment. In at least one embodiment, GPC 3500 is GPC 3418 of fig. 34. In at least one embodiment, each GPC 3500 includes, but is not limited to, a plurality of hardware units for processing tasks, and each GPC 3500 includes, but is not limited to, a pipeline manager 3502, a pre-raster operations unit ("preROP") 3504, a raster engine 3508, a work distribution crossbar ("WDX") 3516, a memory management unit ("MMU") 3518, one or more data processing clusters ("DPC") 3506, and any suitable combination of components.
In at least one embodiment, the operation of the GPC 3500 is controlled by a pipeline manager 3502. In at least one embodiment, the pipeline manager 3502 manages the configuration of one or more DPCs 3506 to handle tasks assigned to GPCs 3500. In at least one embodiment, the pipeline manager 3502 configures at least one of the one or more DPCs 3506 to implement at least a portion of the graphics rendering pipeline. In at least one embodiment, DPC 3506 is configured to execute a vertex shader program on programmable streaming multiprocessor ("SM") 3514. In at least one embodiment, the pipeline manager 3502 is configured to route packets received from the work distribution unit to appropriate logic units within the GPC 3500, and in at least one embodiment, some packets may be routed to fixed function hardware units in the preROP 3504 and/or the raster engine 3508, while other packets may be routed to the DPC 3506 for processing by the primitive engine 3512 or SM 3514. In at least one embodiment, the pipeline manager 3502 configures at least one of the DPCs 3506 to implement a neural network model and/or a computational pipeline.
In at least one embodiment, preROP unit 3504 is configured to route data generated by raster engine 3508 and DPC 3506 to a raster operations ("ROP") unit in partition unit 3422, described in more detail above in connection with fig. 34. In at least one embodiment, preROP unit 3504 is configured to perform optimizations for color blending, organize pixel data, perform address translations, and so forth. In at least one embodiment, the raster engine 3508 includes, but is not limited to, a plurality of fixed-function hardware units configured to perform various raster operations, and in at least one embodiment, the raster engine 3508 includes, but is not limited to, a setup engine, a coarse raster engine, a culling engine, a clipping engine, a fine raster engine, a tile coalescing engine, and any suitable combination thereof. In at least one embodiment, the setup engine receives transformed vertices and generates plane equations associated with geometric primitives defined by the vertices; the plane equations are passed to the coarse raster engine to generate coverage information for the primitives (e.g., an x, y coverage mask for a tile); the output of the coarse raster engine is passed to a culling engine, where fragments associated with primitives that fail the z-test are culled, and to a clipping engine, where fragments lying outside the viewing frustum are clipped. In at least one embodiment, the fragments that survive clipping and culling are passed to a fine raster engine to generate attributes of the pixel fragments based on the plane equations produced by the setup engine. In at least one embodiment, the output of the raster engine 3508 includes fragments to be processed by any suitable entity (e.g., by a fragment shader implemented within the DPC 3506).
In at least one embodiment, each DPC 3506 included in GPC 3500 includes, but is not limited to, an M-pipe controller ("MPC") 3510; primitive engine 3512; one or more SM 3514; and any suitable combination thereof. In at least one embodiment, MPC 3510 controls the operation of DPC 3506, routing packets received from pipeline manager 3502 to appropriate units in DPC 3506. In at least one embodiment, the packets associated with the vertex are routed to primitive engine 3512, primitive engine 3512 being configured to retrieve vertex attributes associated with the vertex from memory; instead, packets associated with the shader program may be transmitted to SM 3514.
In at least one embodiment, SM 3514 includes, but is not limited to, a programmable streaming processor configured to process tasks represented by multiple threads. In at least one embodiment, SM 3514 is multithreaded and configured to concurrently execute multiple threads (e.g., 32 threads) from a particular thread group, and implements a single instruction, multiple data ("SIMD") architecture in which each thread of a set of threads (e.g., thread bundles, wave fronts, waves) is configured to process different sets of data based on the same instruction set. In at least one embodiment, all threads in a thread group execute a common instruction set. In at least one embodiment, the SM 3514 implements a single instruction, multi-thread ("SIMT") architecture in which each thread of a thread group is configured to process different sets of data based on a common instruction set, but in which individual threads of the thread group are allowed to diverge during execution. In at least one embodiment, program counters, call stacks, and execution states are maintained for each thread bundle (which may be referred to as wave fronts and/or waves) to achieve concurrency between the thread bundles and serial execution within the thread bundles when threads in the thread bundles diverge. In another embodiment, program counters, call stacks, and execution states are maintained for each individual thread, thereby achieving equal concurrency between all threads within and between thread bundles. In at least one embodiment, execution state is maintained for each individual thread, and threads executing common instructions may be executed in parallel and converged to improve efficiency. At least one embodiment of SM 3514 is described in more detail herein.
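To make the SIMT divergence behavior described above concrete, the following CUDA kernel contains a data-dependent branch; threads of a thread bundle that take different paths are serialized by the hardware and reconverge afterwards. The kernel and its data layout are illustrative assumptions.

```cuda
__global__ void divergent(const int *in, int *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    // Threads of the same thread bundle (32 threads) fetch a common
    // instruction, but this data-dependent branch lets individual threads
    // diverge: the two paths execute serially within the bundle, then
    // execution reconverges at the end of the if/else.
    if (in[i] % 2 == 0) {
        out[i] = in[i] * 2;    // taken by threads whose input is even
    } else {
        out[i] = in[i] + 1;    // taken by threads whose input is odd
    }
}
```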
In at least one embodiment, the MMU 3518 provides an interface between the GPC 3500 and a memory partition unit (e.g., partition unit 3422 of FIG. 34), and the MMU 3518 provides virtual address to physical address translation, memory protection, and arbitration of memory requests. In at least one embodiment, the MMU 3518 provides one or more translation lookaside buffers ("TLB") for performing translations of virtual addresses to physical addresses in memory.
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, the deep learning application processor is used to train a machine learning model (such as a neural network) to predict or infer information provided to the GPCs 3500. In at least one embodiment, the GPC 3500 is used to infer or predict information based on a trained machine learning model (e.g., neural network) that has been trained by another processor or system or GPC 3500. In at least one embodiment, GPC 3500 can be used to perform one or more neural network use cases described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 35 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 35 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 35 is used to cause one or more neural networks to select one or more variations in a feature of one or more text prompts based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 35 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
FIG. 36 illustrates a memory partition unit 3600 of a parallel processing unit ("PPU") in accordance with at least one embodiment. In at least one embodiment, memory partition unit 3600 includes, but is not limited to, a raster operations ("ROP") unit 3602; a level two ("L2") cache 3604; a memory interface 3606; and any suitable combination thereof. In at least one embodiment, the memory interface 3606 is coupled to a memory. In at least one embodiment, the memory interface 3606 may implement 32-, 64-, 128-, or 1024-bit data buses, etc., for high-speed data transfer. In at least one embodiment, the PPU includes U memory interfaces 3606, where U is a positive integer, one memory interface 3606 for each pair of partition units 3600, where each pair of partition units 3600 is connected to a corresponding memory device. For example, in at least one embodiment, the PPU may be connected to up to Y memory devices, such as high bandwidth memory stacks or graphics double data rate version 5 synchronous dynamic random access memory ("GDDR5 SDRAM").
In at least one embodiment, memory interface 3606 implements a second generation high bandwidth memory ("HBM2") memory interface, and Y is equal to half of U. In at least one embodiment, the HBM2 memory stacks are located on the same physical package as the PPU, which may provide substantial power and area savings over conventional GDDR5 SDRAM systems. In at least one embodiment, each HBM2 stack includes, but is not limited to, four memory dies, and Y=4, where each HBM2 stack includes two 128-bit channels per die, for a total of 8 channels and a data bus width of 1024 bits. In at least one embodiment, the memory supports single error correction double error detection ("SECDED") error correction code ("ECC") for protecting data. In at least one embodiment, ECC may provide higher reliability for computing applications that are sensitive to data corruption.
In at least one embodiment, the PPU implements a multi-level memory hierarchy. In at least one embodiment, memory partition unit 3600 supports unified memory for providing a single unified virtual address space for central processing units ("CPUs") and PPU memory, thereby enabling data sharing between virtual memory systems. In at least one embodiment, the frequency of access of the PPU to memory located on other processors is tracked to ensure that memory pages are moved to the physical memory of the PPU that accesses the pages more frequently. In at least one embodiment, the high-speed GPU interconnect 3408 supports an address translation service that allows the PPU to directly access the CPU's page tables and provides the PPU full access to the CPU memory.
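A minimal sketch of the unified memory described above is shown below, using the CUDA managed-memory allocator so that the same pointer is valid on both the host and the device and pages migrate on demand; the kernel and sizes are illustrative assumptions.

```cuda
#include <cuda_runtime.h>
#include <cstdio>

__global__ void increment(int *data, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] += 1;
}

int main() {
    const int n = 1024;
    int *data;
    // One allocation, one unified virtual address space: accessible from CPU and GPU.
    cudaMallocManaged(&data, n * sizeof(int));

    for (int i = 0; i < n; ++i) data[i] = i;       // pages touched on the host first

    increment<<<(n + 255) / 256, 256>>>(data, n);  // pages migrate to GPU memory
    cudaDeviceSynchronize();

    printf("data[0] = %d\n", data[0]);             // pages migrate back on host access
    cudaFree(data);
    return 0;
}
```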
In at least one embodiment, the copy engine transfers data between multiple PPUs or between a PPU and a CPU. In at least one embodiment, the copy engine may generate a page fault for an address that is not mapped into the page tables, and memory partition unit 3600 then services the page fault, mapping the address into the page table, after which the copy engine performs the transfer. In at least one embodiment, memory is conventionally pinned (i.e., made non-pageable) for copy engine operations between multiple processors, which substantially reduces the available memory. In at least one embodiment, with hardware page faulting, addresses may be passed to the copy engine regardless of whether the memory pages are resident, and the copy process is transparent.
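For contrast with the page-faulting path described above, the following CUDA C++ sketch shows the conventional pinned-memory approach, in which non-pageable host memory is allocated so that a copy engine can transfer it asynchronously; the buffer size and stream usage are illustrative assumptions.

```cuda
#include <cuda_runtime.h>

int main() {
    const size_t bytes = 1 << 20;
    float *h_pinned, *d_buf;

    // Pinned (non-pageable) host memory: a copy engine can DMA it directly,
    // at the cost of reducing the memory available for paging.
    cudaHostAlloc(&h_pinned, bytes, cudaHostAllocDefault);
    cudaMalloc(&d_buf, bytes);

    cudaStream_t stream;
    cudaStreamCreate(&stream);

    // Asynchronous copy handled by a copy engine, overlapping with other work.
    cudaMemcpyAsync(d_buf, h_pinned, bytes, cudaMemcpyHostToDevice, stream);
    cudaStreamSynchronize(stream);

    cudaStreamDestroy(stream);
    cudaFree(d_buf);
    cudaFreeHost(h_pinned);
    return 0;
}
```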
In accordance with at least one embodiment, data from memory 3404 or other system memory of FIG. 34 is fetched by memory partition unit 3600 and stored in L2 cache 3604, with the L2 cache 3604 being on-chip and shared among the various GPCs. In at least one embodiment, each memory partition unit 3600 includes, but is not limited to, at least a portion of an L2 cache associated with a corresponding memory device. In at least one embodiment, a lower level cache is implemented in each unit within the GPC. In at least one embodiment, each SM 3514 of fig. 35 can implement a level one ("L1") cache, where the L1 cache is private memory dedicated to a particular SM 3514, and data is fetched from the L2 cache 3604 and stored in each L1 cache for processing in the functional units of the SM 3514. In at least one embodiment, L2 cache 3604 is coupled to memory interface 3606 and XBar 3420 shown in fig. 34.
In at least one embodiment, the ROP unit 3602 performs graphics raster operations related to pixel colors, such as color compression, pixel blending, and the like. In at least one embodiment, ROP unit 3602 implements depth testing in conjunction with raster engine 3508, receives the depth of a sample location associated with a pixel fragment from a culling engine of raster engine 3508. In at least one embodiment, the depth is tested against a corresponding depth in a depth buffer for sample locations associated with the fragment. In at least one embodiment, if the fragment passes the depth test for the sample location, the ROP unit 3602 updates the depth buffer and communicates the result of the depth test to the raster engine 3508. It will be appreciated that the number of partition units 3600 may be different than the number of GPCs, and thus, in at least one embodiment, each ROP unit 3602 may be coupled to each GPC. In at least one embodiment, the ROP unit 3602 tracks packets received from different GPCs and determines whether the results generated by the ROP unit 3602 are to be routed through XBar 3420.
In at least one embodiment, at least one component shown or described with respect to fig. 36 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 36 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 36 is used to cause one or more neural networks to select one or more variations in a feature of one or more text prompts based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 36 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
Fig. 37 illustrates a streaming multiprocessor ("SM") 3700 in accordance with at least one embodiment. In at least one embodiment, SM 3700 is the SM of fig. 35. In at least one embodiment, SM 3700 includes, but is not limited to, instruction cache 3702; one or more scheduler units 3704 (which may be referred to as sequencer units); register file 3708; one or more processing cores ("cores") 3710; one or more special function units ("SFUs") 3712; one or more load/store units ("LSUs") 3714; an interconnection network 3716; a shared memory/level one ("L1") cache 3718; and/or any suitable combination thereof. In at least one embodiment, LSU 3714 performs load or store operations corresponding to load/store data (e.g., instructions) to perform operations (e.g., execute APIs, API calls).
In at least one embodiment, the work distribution unit dispatches tasks for execution on general processing clusters ("GPCs") of parallel processing units ("PPUs"), each task is allocated to a particular data processing cluster ("DPC") within a GPC, and, if the task is associated with a shader program, the task is allocated to an SM 3700 (which may be referred to as a CU and/or a slice). In at least one embodiment, scheduler unit 3704 (which may be referred to as a sequencer and/or asynchronous compute engine) receives tasks from the work distribution unit and manages instruction scheduling for one or more thread blocks assigned to SM 3700. In at least one embodiment, scheduler unit 3704 schedules thread blocks to execute as thread bundles (which may be referred to as wave fronts and/or waves) of parallel threads, where each thread block is assigned at least one thread bundle. In at least one embodiment, each thread bundle executes threads. In at least one embodiment, scheduler unit 3704 manages a plurality of different thread blocks, assigns thread bundles to the different thread blocks, and then dispatches instructions from a plurality of different cooperative groups to the various functional units (e.g., processing cores 3710, SFUs 3712, and LSUs 3714) in each clock cycle.
In at least one embodiment, a cooperative group (which may also be referred to as a wave front and/or wave) may refer to a programming model for organizing groups of communicating threads that allows developers to express the granularity at which threads are communicating, thereby enabling richer expression and more efficient parallel decomposition. In at least one embodiment, the cooperative launch API supports synchronization between thread blocks for executing parallel algorithms. In at least one embodiment, conventional programming models provide a single, simple construct for synchronizing cooperating threads: a barrier across all threads of a thread block (e.g., the syncthreads() function). However, in at least one embodiment, a programmer may define groups of threads at a granularity smaller than a thread block and synchronize within the defined groups to achieve higher performance, design flexibility, and software reuse in the form of group-wide functional interfaces, as in the sketch below. In at least one embodiment, cooperative groups enable a programmer to explicitly define thread groups at sub-block (i.e., as small as a single thread) and multi-block granularity and to perform collective operations, such as synchronizing the threads in a cooperative group. In at least one embodiment, the programming model supports clean composition across software boundaries so that libraries and utility functions can safely synchronize within their local context without having to make assumptions about convergence. In at least one embodiment, cooperative group primitives enable new patterns of cooperative parallelism, including but not limited to producer-consumer parallelism, opportunistic parallelism, and global synchronization across an entire grid of thread blocks.
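The following CUDA C++ sketch illustrates sub-block thread groups using the cooperative groups API, partitioning a thread block into 32-thread tiles and performing a collective reduction within each tile; the reduction kernel itself is an illustrative assumption.

```cuda
#include <cooperative_groups.h>
namespace cg = cooperative_groups;

// Sum a value across a 32-thread tile, i.e., a group smaller than a thread block.
__device__ int tile_sum(cg::thread_block_tile<32> tile, int value) {
    // Each iteration halves the number of contributing lanes; shfl_down
    // exchanges registers between threads of the tile without shared memory.
    for (int offset = tile.size() / 2; offset > 0; offset /= 2)
        value += tile.shfl_down(value, offset);
    return value;  // lane 0 of the tile holds the tile-wide sum
}

__global__ void block_sums(const int *in, int *out) {
    cg::thread_block block = cg::this_thread_block();
    cg::thread_block_tile<32> tile = cg::tiled_partition<32>(block);

    int v = in[block.group_index().x * block.size() + block.thread_rank()];
    int sum = tile_sum(tile, v);

    // out[] is assumed to be zero-initialized before launch.
    if (tile.thread_rank() == 0)
        atomicAdd(&out[block.group_index().x], sum);
}
```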
In at least one embodiment, dispatch unit 3706 is configured to communicate instructions to one or more functional units and scheduler unit 3704 includes, but is not limited to, two dispatch units 3706, the two dispatch units 3706 enabling two different instructions from a common thread bundle to be dispatched within each clock cycle. In at least one embodiment, each scheduler unit 3704 includes a single dispatch unit 3706 or additional dispatch units 3706.
In at least one embodiment, each SM 3700 (which may be referred to as a CU and/or slice) includes, but is not limited to, a register file 3708, the register file 3708 providing a set of registers for the functional units of SM 3700. In at least one embodiment, register file 3708 is divided among each of the functional units such that each functional unit is assigned a dedicated portion of register file 3708. In at least one embodiment, register file 3708 is divided between the different thread bundles being executed by SM 3700, and register file 3708 provides temporary storage for operands connected to the data paths of the functional units. In at least one embodiment, each SM 3700 includes, but is not limited to, a plurality of L processing cores 3710, where L is a positive integer. In at least one embodiment, SM 3700 includes, but is not limited to, a large number (e.g., 128 or more) of distinct processing cores 3710. In at least one embodiment, each processing core 3710 includes, but is not limited to, a fully pipelined, single-precision, double-precision, and/or mixed-precision processing unit including, but not limited to, a floating point arithmetic logic unit and an integer arithmetic logic unit. In at least one embodiment, the floating point arithmetic logic units implement the IEEE 754-2008 standard for floating point arithmetic. In at least one embodiment, processing cores 3710 include, but are not limited to, 64 single-precision (32-bit) floating point cores, 64 integer cores, 32 double-precision (64-bit) floating point cores, and 8 tensor cores.
According to at least one embodiment, the tensor core is configured to perform a matrix operation. In at least one embodiment, one or more tensor cores are included in the processing core 3710. In at least one embodiment, the tensor core is configured to perform deep learning matrix arithmetic, such as convolution operations for neural network training and reasoning. In at least one embodiment, each tensor core operates on a 4×4 matrix and performs a matrix multiply and accumulate operation d=a×b+c, where A, B, C and D are 4×4 matrices.
In at least one embodiment, matrix multiply inputs A and B are 16-bit floating point matrices and accumulation matrices C and D are 16-bit floating point matrices or 32-bit floating point matrices. In at least one embodiment, the tensor cores perform 32-bit floating point accumulation on 16-bit floating point input data. In at least one embodiment, a 16-bit floating-point multiply uses 64 operations and results in a full-precision product, which is then accumulated with other intermediate products using 32-bit floating-point addition to perform a 4x4x4 matrix multiply. In at least one embodiment, tensor cores are used to perform much larger two-dimensional or higher-dimensional matrix operations built up from these smaller elements. In at least one embodiment, an API (such as the CUDA 9 C++ API) exposes specialized matrix load, matrix multiply and accumulate, and matrix store operations to efficiently use tensor cores from a CUDA-C++ program. In at least one embodiment, at the CUDA level, the thread-bundle-level interface assumes 16x16 sized matrices spanning all 32 threads of a thread bundle (which may be referred to as a wave front and/or wave).
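The thread-bundle-level matrix interface described above can be sketched with the CUDA WMMA API, in which one thread bundle cooperatively computes a 16x16 tile of D = A x B + C with half-precision inputs and single-precision accumulation; the single-tile kernel below is an illustrative assumption and requires a tensor-core-capable GPU (compute capability 7.0 or higher).

```cuda
#include <mma.h>
#include <cuda_fp16.h>
using namespace nvcuda;

// One thread bundle (32 threads) computes a 16x16 tile of D = A * B + C on tensor cores.
__global__ void wmma_16x16x16(const half *a, const half *b,
                              const float *c, float *d) {
    wmma::fragment<wmma::matrix_a, 16, 16, 16, half, wmma::row_major> a_frag;
    wmma::fragment<wmma::matrix_b, 16, 16, 16, half, wmma::col_major> b_frag;
    wmma::fragment<wmma::accumulator, 16, 16, 16, float> c_frag;

    wmma::load_matrix_sync(a_frag, a, 16);                    // matrix load
    wmma::load_matrix_sync(b_frag, b, 16);
    wmma::load_matrix_sync(c_frag, c, 16, wmma::mem_row_major);

    wmma::mma_sync(c_frag, a_frag, b_frag, c_frag);           // multiply and accumulate

    wmma::store_matrix_sync(d, c_frag, 16, wmma::mem_row_major);  // matrix store
}

// Example launch: one thread bundle per tile, e.g. wmma_16x16x16<<<1, 32>>>(a, b, c, d);
```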
In at least one embodiment, each SM 3700 includes, but is not limited to, M SFUs 3712 that perform special functions (e.g., attribute evaluation, reciprocal square root, etc.). In at least one embodiment, SFU 3712 includes, but is not limited to, a tree traversal unit configured to traverse a hierarchical tree data structure. In at least one embodiment, SFU 3712 includes, but is not limited to, a texture unit configured to perform texture map filtering operations. In at least one embodiment, the texture unit is configured to load a texture map (e.g., a 2D array of texels) from memory and sample the texture map to produce sampled texture values for use in a shader program executed by SM 3700. In at least one embodiment, the texture map is stored in shared memory/L1 cache 3718. In at least one embodiment, the texture units implement texture operations (such as filtering operations) using mip maps (e.g., texture maps of varying levels of detail). In at least one embodiment, each SM 3700 includes, but is not limited to, two texture units.
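A minimal CUDA C++ sketch of texture sampling is shown below: a texture object is created over a CUDA array and sampled with hardware bilinear filtering in a kernel. The image size, filtering mode, and kernel are illustrative assumptions.

```cuda
#include <cuda_runtime.h>
#include <vector>

// Sample a 2D float texture with hardware bilinear filtering.
__global__ void sample_texture(cudaTextureObject_t tex, float *out,
                               int width, int height) {
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x < width && y < height)
        out[y * width + x] = tex2D<float>(tex, x + 0.5f, y + 0.5f);
}

int main() {
    const int width = 256, height = 256;
    std::vector<float> h_img(width * height, 1.0f);     // placeholder texture data

    // Copy the image into a CUDA array, the storage the texture unit samples from.
    cudaChannelFormatDesc ch = cudaCreateChannelDesc<float>();
    cudaArray_t array;
    cudaMallocArray(&array, &ch, width, height);
    cudaMemcpy2DToArray(array, 0, 0, h_img.data(), width * sizeof(float),
                        width * sizeof(float), height, cudaMemcpyHostToDevice);

    // Describe the resource and the sampling behavior, then create the texture object.
    cudaResourceDesc res = {};
    res.resType = cudaResourceTypeArray;
    res.res.array.array = array;
    cudaTextureDesc desc = {};
    desc.addressMode[0] = cudaAddressModeClamp;
    desc.addressMode[1] = cudaAddressModeClamp;
    desc.filterMode = cudaFilterModeLinear;             // bilinear filtering
    desc.readMode = cudaReadModeElementType;
    cudaTextureObject_t tex = 0;
    cudaCreateTextureObject(&tex, &res, &desc, nullptr);

    float *d_out;
    cudaMalloc(&d_out, width * height * sizeof(float));
    dim3 block(16, 16), grid((width + 15) / 16, (height + 15) / 16);
    sample_texture<<<grid, block>>>(tex, d_out, width, height);
    cudaDeviceSynchronize();

    cudaDestroyTextureObject(tex);
    cudaFreeArray(array);
    cudaFree(d_out);
    return 0;
}
```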
In at least one embodiment, each SM 3700 includes, but is not limited to, N LSUs 3714 that implement load and store operations between shared memory/L1 cache 3718 and register file 3708. In at least one embodiment, an interconnection network 3716 connects each functional unit to register file 3708 and LSU 3714 to register file 3708 and shared memory/L1 cache 3718. In at least one embodiment, interconnection network 3716 is a crossbar that may be configured to connect any functional unit to any register in register file 3708 and to connect LSU 3714 to register file 3708 and to memory locations in shared memory/L1 cache 3718.
In at least one embodiment, shared memory/L1 cache 3718 is an array of on-chip memory that, in at least one embodiment, allows for data storage and communication between SM 3700 and the primitive engine and between threads in SM 3700. In at least one embodiment, shared memory/L1 cache 3718 includes, but is not limited to, 128KB of storage capacity and is in the path from SM 3700 to the partition units. In at least one embodiment, shared memory/L1 cache 3718 is used to cache reads and writes. In at least one embodiment, one or more of shared memory/L1 cache 3718, L2 cache, and memory is a backing store.
In at least one embodiment, combining data cache and shared memory functionality into a single memory block provides improved performance for both types of memory accesses. In at least one embodiment, the capacity is used by or is usable as a cache for programs that do not use shared memory; for example, if shared memory is configured to use half of the capacity, texture and load/store operations may use the remaining capacity. In accordance with at least one embodiment, integration within shared memory/L1 cache 3718 enables shared memory/L1 cache 3718 to function as a high-throughput conduit for streaming data while providing high-bandwidth and low-latency access to frequently reused data. In at least one embodiment, when configured for general-purpose parallel computing, a simpler configuration may be used compared to graphics processing. In at least one embodiment, fixed-function graphics processing units are bypassed, creating a much simpler programming model. In at least one embodiment, in a general-purpose parallel computing configuration, the work distribution unit assigns and distributes blocks of threads directly to the DPCs. In at least one embodiment, threads in a block execute a common program, using a unique thread ID in the computation to ensure that each thread generates unique results, using SM 3700 to execute the program and perform computations, using shared memory/L1 cache 3718 to communicate between threads, and using LSU 3714 to read and write global memory through shared memory/L1 cache 3718 and the memory partition unit. In at least one embodiment, when configured for general-purpose parallel computing, SM 3700 writes commands that scheduler unit 3704 can use to launch new work on the DPCs.
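The block-level cooperation through shared memory described above can be sketched with the following CUDA kernel, which stages a chunk of global memory in shared memory/L1, synchronizes the block, and writes the chunk back reversed; the operation and the requirement that the chunk size equal the block size are illustrative assumptions.

```cuda
// Reverse each block-sized chunk of an array in place using shared memory.
__global__ void reverse_chunks(int *data) {
    extern __shared__ int tile[];              // storage backed by shared memory/L1

    int block_start = blockIdx.x * blockDim.x;
    int tid = threadIdx.x;                     // unique thread ID within the block

    tile[tid] = data[block_start + tid];       // global -> shared via the LSUs
    __syncthreads();                           // all threads see the complete tile

    data[block_start + tid] = tile[blockDim.x - 1 - tid];  // shared -> global
}

// Example launch, with the dynamic shared memory size set to one int per thread:
//   reverse_chunks<<<num_chunks, chunk_size, chunk_size * sizeof(int)>>>(d_data);
```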
In at least one embodiment, the PPU is included in or coupled with a desktop computer, a laptop computer, a tablet computer, a server, a supercomputer, a smart phone (e.g., wireless, handheld device), a personal digital assistant ("PDA"), a digital camera, a vehicle, a head mounted display, a handheld electronic device, and the like. In at least one embodiment, the PPU is implemented on a single semiconductor substrate. In at least one embodiment, the PPU is included in a system on a chip ("SoC") along with one or more other devices (e.g., additional PPU, memory, reduced instruction set computer ("RISC") CPU, memory management unit ("MMU"), digital-to-analog converter ("DAC"), etc.).
In at least one embodiment, the PPU may be included on a graphics card that includes one or more memory devices. In at least one embodiment, the graphics card may be configured to interface with a PCIe slot on a desktop computer motherboard. In at least one embodiment, the PPU may be an integrated graphics processing unit ("iGPU") included in a chipset of a motherboard.
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B. In at least one embodiment, the deep learning application processor is used to train a machine learning model (such as a neural network) to predict or infer information provided to SM 3700. In at least one embodiment, SM 3700 is used to infer or predict information based on a trained machine learning model (e.g., neural network) that has been trained by another processor or system or by SM 3700. In at least one embodiment, SM 3700 can be used to perform one or more neural network use cases described herein.
In at least one embodiment, at least one component shown or described with respect to fig. 37 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 37 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 37 is used to cause one or more neural networks to select one or more variations in a feature of one or more text prompts based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 37 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
Embodiments are disclosed that relate to virtualized computing platforms for advanced computing, such as image reasoning and image processing in medical applications. Embodiments may include, but are not limited to, radiography, magnetic resonance imaging (MRI), nuclear medicine, ultrasonography, elastography, photoacoustic imaging, tomography, echocardiography, functional near-infrared spectroscopy, and magnetic particle imaging, or combinations thereof. In at least one embodiment, the virtualized computing platform and related processes described herein can additionally or alternatively be used for, but are not limited to, forensic science analysis, subsurface exploration and imaging (e.g., petroleum exploration, archaeology, paleontology, etc.), topography, oceanography, geology, osteology, meteorology, intelligent area or object tracking and monitoring, sensor data processing (e.g., radar, sonar, lidar, etc.), and/or genomics and genetic sequencing.
Referring to fig. 38, fig. 38 is an example data flow diagram of a process 3800 for generating and deploying an image processing and reasoning pipeline in accordance with at least one embodiment. In at least one embodiment, the process 3800 can be deployed for use with imaging devices, processing devices, genomic devices, gene sequencing devices, radiological devices, and/or other device types at one or more facilities 3802, such as medical facilities, hospitals, medical institutions, clinics, research or diagnostic laboratories, and the like. In at least one embodiment, process 3800 can be deployed to perform genomic analysis and inference on sequencing data. Examples of genomic analyses that may be performed using the systems and processes described herein include, but are not limited to, identification of variants, mutation detection, and quantification of gene expression.
In at least one embodiment, the process 3800 can be performed within the training system 3804 and/or the deployment system 3806. In at least one embodiment, the training system 3804 can be used to perform training, deployment, and implementation of machine learning models (e.g., neural networks, object detection algorithms, computer vision algorithms, etc.) for the deployment system 3806. In at least one embodiment, the deployment system 3806 can be configured to offload processing and computing resources in a distributed computing environment to reduce infrastructure requirements at the facility 3802. In at least one embodiment, the deployment system 3806 can provide a streamlined platform for selecting, customizing, and implementing virtual instruments for use with imaging devices (e.g., MRI, CT scan, X-ray, ultrasound, etc.) or sequencing devices at the facility 3802. In at least one embodiment, the virtual instrument may include a software-defined application for performing one or more processing operations on imaging data generated by an imaging device, a sequencing device, a radiological device, and/or other device types. In at least one embodiment, one or more applications in the pipeline can use or invoke services (e.g., reasoning, visualization, computing, AI, etc.) of the deployment system 3806 during application execution.
In at least one embodiment, some applications used in advanced processing and reasoning pipelines may use machine learning models or other AI to perform one or more processing steps. In at least one embodiment, the machine learning model may be trained at the facility 3802 using data 3808 (e.g., imaging data) generated at the facility 3802 (and stored on one or more Picture Archiving and Communication System (PACS) servers at the facility 3802), may be trained using imaging or sequencing data 3808 from one or more other facilities (e.g., a different hospital, laboratory, clinic, etc.), or a combination thereof. In at least one embodiment, the training system 3804 can be used to provide applications, services, and/or other resources to generate working, deployable machine learning models for the deployment system 3806.
In at least one embodiment, model registry 3824 can be supported by an object store, which can support version control and object metadata. In at least one embodiment, the object store may be accessed from within the cloud platform through, for example, a cloud storage (e.g., cloud 3926 of fig. 39) compatible Application Programming Interface (API). In at least one embodiment, the machine learning model within model registry 3824 can be uploaded, listed, modified, or deleted by a developer or partner of the system interacting with the API. In at least one embodiment, the API may provide access to a method that allows a user with appropriate credentials to associate a model with an application such that the model may be executed as part of the execution of a containerized instantiation of the application.
In at least one embodiment, training pipeline 3904 (fig. 39) may include a scenario in which the facility 3802 is training its own machine learning model or has an existing machine learning model that needs to be optimized or updated. In at least one embodiment, imaging data 3808 generated by one or more imaging devices, sequencing devices, and/or other types of devices may be received. In at least one embodiment, upon receipt of the imaging data 3808, AI-assisted annotation 3810 can be used to assist in generating annotations corresponding to the imaging data 3808 for use as truth data for a machine learning model. In at least one embodiment, the AI-assisted annotation 3810 can include one or more machine learning models (e.g., convolutional neural networks (CNNs)) that can be trained to generate annotations corresponding to certain types of imaging data 3808 (e.g., from certain devices) and/or certain types of anomalies in the imaging data 3808. In at least one embodiment, the AI-assisted annotations 3810 can then be used directly, or can be adjusted or fine-tuned using an annotation tool (e.g., by a researcher, clinician, doctor, scientist, etc.) to generate truth data. In at least one embodiment, in some examples, the labeled clinical data 3812 (e.g., annotations provided by a clinician, doctor, scientist, technician, etc.) can be used as truth data for training a machine learning model. In at least one embodiment, AI-assisted annotations 3810, labeled clinical data 3812, or a combination thereof can be used as truth data for training a machine learning model. In at least one embodiment, the trained machine learning model can be referred to as an output model 3816 and can be used by the deployment system 3806, as described herein.
In at least one embodiment, training pipeline 3904 (fig. 39) may include a scenario in which the facility 3802 requires a machine learning model for performing one or more processing tasks for one or more applications in the deployment system 3806, but the facility 3802 may not currently have such a machine learning model (or may not have a model that is optimized, efficient, or effective for that purpose). In at least one embodiment, an existing machine learning model may be selected from model registry 3824. In at least one embodiment, the model registry 3824 can include machine learning models trained to perform a variety of different inference tasks on imaging data. In at least one embodiment, the machine learning models in model registry 3824 may have been trained on imaging data from facilities other than facility 3802 (e.g., remotely located facilities). In at least one embodiment, the machine learning model may have been trained on imaging data from one location, two locations, or any number of locations. In at least one embodiment, when training on imaging data from a particular location, training may be performed at that location, or at least in a manner that protects the confidentiality of the imaging data or limits the transmission of the imaging data off-site (e.g., to comply with HIPAA regulations, privacy regulations, etc.). In at least one embodiment, once a model is trained or partially trained at one location, the machine learning model may be added to the model registry 3824. In at least one embodiment, the machine learning model may then be retrained or updated at any number of other facilities, and the retrained or updated model may be made available in model registry 3824. In at least one embodiment, a machine learning model (and referred to as an output model 3816) may then be selected from the model registry 3824 and used in the deployment system 3806 to perform one or more processing tasks for one or more applications of the deployment system.
In at least one embodiment, the training pipeline 3904 (fig. 39) may be used in a scenario that includes the facility 3802 requiring a machine learning model for performing one or more processing tasks for one or more applications in the deployment system 3806, but the facility 3802 may not currently have such a machine learning model (or may not have an optimized, efficient, or effective model). In at least one embodiment, the machine learning model selected from the model registry 3824 may not be fine-tuned or optimized for the imaging data 3808 generated at the facility 3802 due to population differences, genetic variation, robustness of the training data used to train the machine learning model, diversity of training data anomalies, and/or other issues with the training data. In at least one embodiment, AI-assisted annotation 3810 can be used to assist in generating annotations corresponding to imaging data 3808 for use as truth data for retraining or updating a machine learning model. In at least one embodiment, the labeled clinical data 3812 (e.g., annotations provided by a clinician, doctor, scientist, etc.) can be used as truth data for training a machine learning model. In at least one embodiment, retraining or updating the machine learning model may be referred to as model training 3814. In at least one embodiment, model training 3814 (e.g., using AI-assisted annotations 3810, labeled clinical data 3812, or a combination thereof as truth data) may be used to retrain or update the machine learning model.
In at least one embodiment, the deployment system 3806 may include software 3818, services 3820, hardware 3822, and/or other components, features, and functions. In at least one embodiment, the deployment system 3806 can include a software "stack" such that the software 3818 can be built on top of the service 3820 and can use the service 3820 to perform some or all of the processing tasks, and the service 3820 and the software 3818 can be built on top of the hardware 3822 and use the hardware 3822 to perform the processing, storage, and/or other computing tasks of the deployment system 3806.
In at least one embodiment, the software 3818 can include any number of different containers, each of which can perform instantiation of an application. In at least one embodiment, each application may perform one or more processing tasks (e.g., reasoning, object detection, feature detection, segmentation, image enhancement, registration, etc.) in an advanced processing and reasoning pipeline. In at least one embodiment, for each type of imaging device (e.g., CT, MRI, X-ray, ultrasound examination, echocardiography, etc.), sequencing device, radiological device, genomics device, etc., there may be any number of containers that can perform data processing tasks on imaging data 3808 (or other data types, such as those described herein) generated by the device. In at least one embodiment, in addition to containers that receive and configure imaging data for use by each container and/or for use by facility 3802 after processing through the pipeline, advanced processing and reasoning pipelines may be defined based on selection of different containers desired or required to process imaging data 3808 (e.g., to convert output back into usable data types such as digital imaging and communications in medicine (DICOM) data, radiology Information System (RIS) data, clinical Information System (CIS) data, remote Procedure Call (RPC) data, data that substantially conforms to a representational state transfer (REST) interface, data that substantially conforms to a file-based interface, and/or raw data for storage and display at facility 3802). In at least one embodiment, a combination of containers within software 3818 (e.g., which constitute a pipeline) may be referred to as a virtual instrument (as described in more detail herein), and the virtual instrument may utilize services 3820 and hardware 3822 to perform some or all of the processing tasks of applications instantiated in the containers.
In at least one embodiment, the data processing pipeline can receive DICOM, RIS, CIS, compliance REST (REST compliant), RPC, raw, and/or other formats of input data (e.g., imaging data 3808) in response to an inference request (e.g., a request from a user (e.g., clinician, doctor, radiologist, etc.) of the deployment system 3806. In at least one embodiment, the input data may represent one or more image, video, and/or other data representations generated by one or more imaging devices, sequencing devices, radiological devices, genomic devices, and/or other device types. In at least one embodiment, the data may be subjected to preprocessing as part of a data processing pipeline to prepare the data for processing by one or more applications. In at least one embodiment, post-processing may be performed on the output of one or more inference tasks or other processing tasks of the pipeline to prepare output data for a next application, and/or to prepare output data for transmission and/or use by a user (e.g., as a response to an inference request). In at least one embodiment, the inference tasks can be performed by one or more machine learning models (such as a trained or deployed neural network) that can include an output model 3816 of the training system 3804.
In at least one embodiment, the tasks of the data processing pipeline may be packaged in one or more containers, each container representing a separate full-function instantiation of an application and virtualized computing environment capable of referencing a machine learning model. In at least one embodiment, a container or application can be published into a private (e.g., limited access) region of a container registry (described in more detail herein), and a trained or deployed model can be stored in model registry 3824 and associated with one or more applications. In at least one embodiment, an image of an application (e.g., a container image) may be obtained in a container registry, and once the user selects the image from the container registry for deployment in the pipeline, the image may be used to generate a container for instantiation of the application for use by the user's system.
In at least one embodiment, a developer (e.g., software developer, clinician, doctor, etc.) can develop, publish, and store applications (e.g., stored as containers) for performing image processing and/or reasoning on the provided data. In at least one embodiment, development, release, and/or storage may be performed using a Software Development Kit (SDK) associated with the system (e.g., to ensure that the developed applications and/or containers are compliant or compatible with the system). In at least one embodiment, the developed application may be tested locally (e.g., at a first facility, testing data from the first facility) using an SDK that may support at least some of the services 3820 as a system (e.g., system 3900 in fig. 39). In at least one embodiment, since DICOM objects may contain one to hundreds of images or other data types, and due to changes in data, a developer may be responsible for managing (e.g., setup constructs, for building preprocessing into applications, etc.) extraction and preparation of incoming DICOM data. In at least one embodiment, once verified by the system 3900 (e.g., for accuracy, security, patient privacy, etc.), the application may be available in a container registry for selection and/or implementation by a user (e.g., a hospital, clinic, laboratory, healthcare provider, etc.) to perform one or more processing tasks on data at the user's facility (e.g., a second facility).
In at least one embodiment, the developer may then share an application or container over a network for access and use by a user of the system (e.g., system 3900 of FIG. 39). In at least one embodiment, the completed and validated application or container may be stored in a container registry and the associated machine learning model may be stored in model registry 3824. In at least one embodiment, a requesting entity (e.g., a user of a medical facility) that provides reasoning or image processing requests can browse through the container registry and/or model registry 3824 to obtain applications, containers, datasets, machine learning models, etc., select desired combinations of elements for inclusion in the data processing pipeline, and submit image processing requests. In at least one embodiment, the request may include input data (and, in some examples, associated patient data) necessary to execute the request, and/or may include a selection of one or more applications and/or machine learning models to be executed when processing the request. In at least one embodiment, the request may then be passed to one or more components (e.g., clouds) of the deployment system 3806 to perform the processing of the data processing pipeline. In at least one embodiment, the processing by the deployment system 3806 can include referencing elements (e.g., applications, containers, models, etc.) selected from a container registry and/or a model registry 3824. In at least one embodiment, once the pipeline generates the results, the results may be returned to the user for reference (e.g., for viewing in a viewing application suite executing on a local on-site deployment workstation or terminal). In at least one embodiment, the radiologist may receive results from a data processing pipeline including any number of applications and/or containers, where the results may include anomaly detection in X-rays, CT scans, MRI, and the like.
In at least one embodiment, the services 3820 may be utilized to assist in processing or executing applications or containers in a pipeline. In at least one embodiment, the services 3820 may include computing services, artificial intelligence (AI) services, visualization services, and/or other service types. In at least one embodiment, the services 3820 can provide functionality common to one or more applications in the software 3818, and thus can abstract functionality into services that can be invoked or utilized by the applications. In at least one embodiment, the functionality provided by the services 3820 can run dynamically and more efficiently, while also scaling well, by allowing applications to process data in parallel (e.g., using the parallel computing platform 3930 of FIG. 39). In at least one embodiment, rather than requiring each application that shares the same functionality provided by a service 3820 to have its own corresponding instance of the service 3820, the service 3820 may be shared between and among the various applications. In at least one embodiment, the services may include, as non-limiting examples, an inference server or engine that may be used to perform detection or segmentation tasks. In at least one embodiment, a model training service may be included that may provide machine learning model training and/or retraining capabilities. In at least one embodiment, a data augmentation service may further be included that may provide GPU-accelerated data (e.g., DICOM, RIS, CIS, REST-compliant, RPC, raw, etc.) extraction, resizing, scaling, and/or other augmentation. In at least one embodiment, a visualization service may be used that may add image rendering effects (such as ray tracing, rasterization, denoising, sharpening, etc.) to add realism to two-dimensional (2D) and/or three-dimensional (3D) models. In at least one embodiment, virtual instrument services may be included that provide beamforming, segmentation, reasoning, imaging, and/or support for other applications within the pipelines of the virtual instruments.
In at least one embodiment, where the service 3820 includes an AI service (e.g., an inference service), one or more machine learning models associated with an application for anomaly detection (e.g., tumor, growth anomalies, scarring, etc.) can be executed by invoking (e.g., as an API call) the inference service (e.g., an inference server) to execute the one or more machine learning models or processes thereof as part of the application execution. In at least one embodiment, where another application includes one or more machine learning models for a segmentation task, the application may invoke the inference service to execute the machine learning model for performing one or more processing operations associated with the segmentation task. In at least one embodiment, the software 3818 implementing the advanced processing and inference pipeline (which includes the segmentation application and the anomaly detection application) can be streamlined in that each application can invoke the same inference service to perform one or more inference tasks.
In at least one embodiment, hardware 3822 can include a GPU, a CPU, a graphics card, an AI/deep learning system (e.g., an AI supercomputer, a DGX supercomputer system such as NVIDIA), a cloud platform, or a combination thereof. In at least one embodiment, different types of hardware 3822 can be used to provide efficient, specially constructed support for the software 3818 and services 3820 in the deployment system 3806. In at least one embodiment, the use of GPU processing to perform local processing within the AI/deep learning system, in the cloud system, and/or in other processing components of the deployment system 3806 (e.g., at the facility 3802) may be implemented to improve the efficiency, accuracy, and efficacy of image processing, image reconstruction, segmentation, MRI examination, stroke or heart attack detection (e.g., in real-time), rendered image quality, etc. In at least one embodiment, the facility may include an imaging device, a genomic device, a sequencing device, and/or other device types deployed locally, which may generate imaging data representative of the anatomy of the subject using the GPU.
In at least one embodiment, as non-limiting examples, the software 3818 and/or the services 3820 may be optimized for GPU processing with respect to deep learning, machine learning, and/or high performance computing. In at least one embodiment, at least some of the computing environment of the deployment system 3806 and/or the training system 3804 may be executed in a data center, in one or more supercomputers, or in high-performance computer systems with GPU-optimized software (e.g., a combination of hardware and software of the NVIDIA DGX system). In at least one embodiment, the data center may be compliant with HIPAA regulations, such that the receipt, processing, and transmission of imaging data and/or other patient data are securely handled with respect to patient data privacy. In at least one embodiment, hardware 3822 may include any number of GPUs that may be invoked to perform data processing in parallel, as described herein. In at least one embodiment, the cloud platform may also include GPU processing for GPU-optimized execution of deep learning tasks, machine learning tasks, or other computing tasks. In at least one embodiment, the cloud platform (e.g., NVIDIA's NGC) may be executed using one or more AI/deep learning supercomputers and/or GPU-optimized software (e.g., as provided on NVIDIA's DGX systems) as a hardware abstraction and scaling platform. In at least one embodiment, the cloud platform may integrate an application container clustering system or orchestration system (e.g., Kubernetes) on multiple GPUs to enable seamless scaling and load balancing.
In at least one embodiment, at least one component shown or described with respect to fig. 38 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 38 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 38 is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 38 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
FIG. 39 is a system diagram of an example system 3900 for generating and deploying an imaging deployment pipeline in accordance with at least one embodiment. In at least one embodiment, system 3900 can be used to implement process 3800 of fig. 38 and/or other processes, including advanced processing and reasoning pipelines. In at least one embodiment, the system 3900 can include a training system 3804 and a deployment system 3806. In at least one embodiment, the training system 3804 and the deployment system 3806 can be implemented using software 3818, services 3820, and/or hardware 3822, as described herein.
In at least one embodiment, the system 3900 (e.g., the training system 3804 and/or the deployment system 3806) can be implemented in a cloud computing environment (e.g., using the cloud 3926). In at least one embodiment, system 3900 may be implemented locally (with respect to a healthcare facility) or as a combination of cloud computing resources and local computing resources. In at least one embodiment, in embodiments implementing cloud computing, patient data may be separated from, or not processed by, one or more components of system 3900 whose processing would not comply with HIPAA and/or other data handling and privacy regulations or laws. In at least one embodiment, access to the APIs in cloud 3926 may be restricted to authorized users through enacted security measures or protocols. In at least one embodiment, the security protocol may include web tokens, which may be signed by an authentication (e.g., AuthN, AuthZ, Gluecon, etc.) service, and may carry the appropriate authorization. In at least one embodiment, the APIs of virtual instruments (described herein), or other instantiations of system 3900, may be restricted to a set of public IPs that have been vetted or authorized for interaction.
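As a non-limiting illustration of the kind of token-based security protocol described above, the following Python sketch uses the open-source PyJWT library to sign a web token carrying an authorization scope and to verify it before a request is served; the secret key, scope names, and helper functions are hypothetical and are not part of system 3900.

```python
# Illustrative sketch only: token-based API access control, loosely modeled on the security
# protocol described above. SECRET_KEY, the scope strings, issue_token, and authorize_request
# are hypothetical names, not an API of the described system.
import time
import jwt  # PyJWT

SECRET_KEY = "replace-with-a-managed-secret"  # hypothetical; a real deployment would use a key service

def issue_token(user_id: str, scopes: list, ttl_seconds: int = 3600) -> str:
    """Sign a web token carrying the caller's authentication/authorization claims."""
    payload = {"sub": user_id, "scopes": scopes, "exp": int(time.time()) + ttl_seconds}
    return jwt.encode(payload, SECRET_KEY, algorithm="HS256")

def authorize_request(token: str, required_scope: str) -> bool:
    """Verify the token signature and check that it carries the required scope."""
    try:
        claims = jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
    except jwt.InvalidTokenError:
        return False
    return required_scope in claims.get("scopes", [])

if __name__ == "__main__":
    token = issue_token("radiologist-42", ["inference:submit"])
    print(authorize_request(token, "inference:submit"))  # True
    print(authorize_request(token, "training:launch"))   # False
```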
In at least one embodiment, the various components of system 3900 may communicate with each other and among each other using any of a variety of different network types, including, but not limited to, a Local Area Network (LAN) and/or a Wide Area Network (WAN) via wired and/or wireless communication protocols. In at least one embodiment, communications between facilities and components of system 3900 (e.g., for sending inference requests, for receiving results of inference requests, etc.) may be communicated over one or more data buses, wireless data protocol (Wi-Fi), wired data protocol (e.g., ethernet), etc.
In at least one embodiment, the training system 3804 may execute a training pipeline 3904 similar to that described herein with respect to fig. 38. In at least one embodiment, where the deployment system 3806 is to use one or more machine learning models in the deployment pipeline 3910, the training pipeline 3904 can be used to train or retrain one or more (e.g., pre-trained) models, and/or to implement one or more pre-trained models 3906 (e.g., without requiring retraining or updating). In at least one embodiment, one or more output models 3816 may be generated as a result of the training pipeline 3904. In at least one embodiment, the training pipeline 3904 may include any number of processing steps, such as, but not limited to, conversion or adaptation of imaging data (or other input data) (e.g., using the DICOM adapter 3902A to convert DICOM images into another format suitable for processing by a corresponding machine learning model, such as the Neuroimaging Informatics Technology Initiative (NIfTI) format), AI-assisted annotation 3810, labeling or annotation of imaging data 3808 (for generating labeled clinical data 3812), selection of a model from a model registry, model training 3814 (training, retraining, or updating a model), and/or other processing steps. In at least one embodiment, different training pipelines 3904 may be used for different machine learning models used by the deployment system 3806. In at least one embodiment, a training pipeline 3904 similar to the first example described with respect to fig. 38 may be used for a first machine learning model, a training pipeline 3904 similar to the second example described with respect to fig. 38 may be used for a second machine learning model, and a training pipeline 3904 similar to the third example described with respect to fig. 38 may be used for a third machine learning model. In at least one embodiment, any combination of tasks within the training system 3804 may be used according to the requirements of each respective machine learning model. In at least one embodiment, one or more machine learning models may already have been trained and be ready for deployment, in which case the machine learning models may not be subject to any processing by the training system 3804 and may instead be implemented by the deployment system 3806.
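The DICOM-to-NIfTI format adaptation mentioned above can be sketched, under simplifying assumptions, with the open-source pydicom and nibabel libraries; the file paths, the single-series assumption, and the identity affine are illustrative only and do not describe the DICOM adapter 3902A itself.

```python
# Illustrative sketch: converting a DICOM series into a NIfTI volume, similar in spirit to the
# format-adaptation step described for the training pipeline. Paths are hypothetical; a real
# adapter would also handle orientation, spacing, and multi-series studies.
import glob
import numpy as np
import pydicom
import nibabel as nib

def dicom_series_to_nifti(dicom_dir: str, out_path: str) -> None:
    # Read all slices and sort them by their position along the scan axis.
    slices = [pydicom.dcmread(f) for f in glob.glob(f"{dicom_dir}/*.dcm")]
    slices.sort(key=lambda s: float(s.ImagePositionPatient[2]))
    # Stack the slice pixel arrays into a 3D volume (rows x cols x slices).
    volume = np.stack([s.pixel_array for s in slices], axis=-1).astype(np.int16)
    # Write the volume with an identity affine; a production adapter would derive the affine
    # from the DICOM geometry tags instead.
    nib.save(nib.Nifti1Image(volume, affine=np.eye(4)), out_path)

# dicom_series_to_nifti("/data/study_001/ct", "/data/study_001/ct.nii.gz")  # hypothetical paths
```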
In at least one embodiment, the one or more output models 3816 and/or the pre-trained models 3906 may include any type of machine learning model, depending on the implementation or embodiment. In at least one embodiment, and without limitation, the machine learning models used by system 3900 may include one or more machine learning models using linear regression, logistic regression, decision trees, support vector machines (SVMs), naive Bayes, k-nearest neighbors (KNN), k-means clustering, random forests, dimensionality reduction algorithms, gradient boosting algorithms, neural networks (e.g., autoencoder, convolutional, recurrent, perceptron, long/short-term memory (LSTM), Hopfield, Boltzmann, deep belief, deconvolutional, generative adversarial, liquid state machine, etc.), and/or other types of machine learning models.
In at least one embodiment, the training pipeline 3904 may include AI-assisted annotation, as described in more detail herein with respect to at least fig. 42B. In at least one embodiment, the labeled clinical data 3812 (e.g., conventional annotations) may be generated by any number of techniques. In at least one embodiment, in some examples, labels or other annotations may be generated in a drawing program (e.g., an annotation program), a Computer Aided Design (CAD) program, a labeling program, another type of program suitable for generating truth annotations or labels, and/or may be drawn by hand. In at least one embodiment, the truth data may be synthetically produced (e.g., generated from computer models or renderings), real-produced (e.g., designed and produced from real-world data), machine-automated (e.g., using feature analysis and learning to extract features from data and then generate labels), human annotated (e.g., a labeler or annotation expert defines the locations of labels), and/or a combination thereof. In at least one embodiment, for each instance of imaging data 3808 (or other data type used by the machine learning model), there can be corresponding truth data generated by the training system 3804. In at least one embodiment, AI-assisted annotation may be performed as part of the deployment pipeline 3910, in addition to or in lieu of the AI-assisted annotation included in the training pipeline 3904. In at least one embodiment, system 3900 can include a multi-layered platform that can include a software layer (e.g., software 3818) of diagnostic applications (or other application types) that can perform one or more medical imaging and diagnostic functions. In at least one embodiment, the system 3900 may be communicatively coupled (e.g., via encrypted links) to the PACS server networks of one or more facilities. In at least one embodiment, the system 3900 may be configured to access and reference data (e.g., DICOM data, RIS data, CIS data, REST-compliant data, RPC data, raw data, etc.) from the PACS servers (e.g., via a DICOM adapter 3902 or another data type adapter such as RIS, CIS, REST-compliant, RPC, raw, etc.) to perform operations such as training machine learning models, deploying machine learning models, image processing, inferencing, and/or other operations.
In at least one embodiment, the software layer can be implemented as a secure, encrypted, and/or authenticated API through which applications or containers can be invoked (e.g., called) from one or more external environments (e.g., facility 3802). In at least one embodiment, the applications can then invoke or execute one or more services 3820 to perform the computing, AI, or visualization tasks associated with the respective applications, and the software 3818 and/or services 3820 can utilize the hardware 3822 to perform the processing tasks in an effective and efficient manner.
In at least one embodiment, the deployment system 3806 can execute the deployment pipeline 3910. In at least one embodiment, deployment pipeline 3910 may include any number of applications that may be sequential, non-sequential, or otherwise applied to imaging data (and/or other data types) -including AI-assisted annotations-generated by imaging devices, sequencing devices, genomics devices, and the like, as described above. In at least one embodiment, the deployment pipeline 3910 for an individual device may be referred to as a virtual instrument of the device (e.g., virtual ultrasound, virtual CT scanner, virtual sequencer, etc.), as described herein. In at least one embodiment, there may be more than one deployment pipeline 3910 for a single device, depending on the information desired for the data generated by the device. In at least one embodiment, a first deployment pipeline 3910 may be present where an anomaly is desired to be detected from the MRI machine, and a second deployment pipeline 3910 may be present where image enhancement is desired from the output of the MRI machine.
In at least one embodiment, the applications available to the deployment pipeline 3910 may include any application that may be used to perform processing tasks on imaging data or other data from a device. In at least one embodiment, different applications may be responsible for image enhancement, segmentation, reconstruction, anomaly detection, object detection, feature detection, treatment planning, dosimetry, beam planning (or other radiation therapy procedures), and/or other analysis, image processing, or inference tasks. In at least one embodiment, the deployment system 3806 can define a construct for each application such that users of the deployment system 3806 (e.g., medical facilities, laboratories, clinics, etc.) can understand the construct and adapt the application for implementation within their respective facility. In at least one embodiment, an application for image reconstruction may be selected for inclusion in the deployment pipeline 3910, but the data type generated by the imaging device may be different from the data type used within the application. In at least one embodiment, the DICOM adapter 3902B (and/or a DICOM reader) or an adapter or reader for another data type (e.g., RIS, CIS, REST-compliant, RPC, raw, etc.) may be used within the deployment pipeline 3910 to convert the data into a form usable by the applications within the deployment system 3806. In at least one embodiment, data from DICOM, RIS, CIS, REST-compliant, RPC, raw, and/or other data type libraries may be accumulated and preprocessed, including decoding the data, extracting the data, and/or performing any convolution, color correction, sharpening, gamma, and/or other augmentation of the data. In at least one embodiment, DICOM, RIS, CIS, REST-compliant, RPC, and/or raw data may be unordered, and a pre-pass may be executed to organize or sort the collected data. In at least one embodiment, because various applications may share common image operations, in some embodiments, a data augmentation library (e.g., as one of the services 3820) may be used to accelerate these operations. In at least one embodiment, to avoid bottlenecks of conventional processing approaches that rely on CPU processing, the parallel computing platform 3930 may be used for GPU acceleration of these processing tasks.
In at least one embodiment, the image reconstruction application may include processing tasks including the use of machine learning models. In at least one embodiment, users may wish to use their own machine learning model, or select a machine learning model from model registry 3824. In at least one embodiment, users may implement their own machine learning model or select a machine learning model to include in an application executing a processing task. In at least one embodiment, the application may be selectable and customizable, and by defining the configuration of the application, the deployment and implementation of the application for a particular user is rendered as a more seamless user experience. In at least one embodiment, by utilizing other features of system 3900 (such as services 3820 and hardware 3822), deployment pipeline 3910 may be more user friendly, provide easier integration, and produce more accurate, efficient, and timely results.
In at least one embodiment, the deployment system 3806 can include a user interface 3914 (e.g., a graphical user interface, a web interface, etc.) that can be used to select applications to be included in one or more deployment pipelines 3910, to arrange applications, to modify or change applications or parameters or constructs thereof, to use and interact with one or more deployment pipelines 3910 during setup and/or deployment, and/or to otherwise interact with the deployment system 3806. In at least one embodiment, although not shown with respect to training system 3804, user interface 3914 (or a different user interface) may be used to select a model for use in deployment system 3806, to select a model for training or retraining in training system 3804, and/or to otherwise interact with training system 3804.
In at least one embodiment, in addition to the application coordination system 3928, a pipeline manager 3912 may be used to manage interactions between one or more applications or containers deploying the pipeline 3910 and the services 3820 and/or hardware 3822. In at least one embodiment, the pipeline manager 3912 may be configured to facilitate interactions from application to application, from application to service 3820, and/or from application or service to hardware 3822. In at least one embodiment, although illustrated as being included in software 3818, this is not intended to be limiting and in some examples (e.g., as shown in fig. 40), pipeline manager 3912 may be included in service 3820. In at least one embodiment, the application orchestration system 3928 (e.g., kubernetes, DOCKER, etc.) may comprise a container orchestration system that may group applications into containers as logical units for orchestration, management, extension, and deployment. In at least one embodiment, each application may be executed in a self-contained environment (e.g., at the kernel level) by associating applications (e.g., rebuild applications, split applications, etc.) from one or more deployment pipelines 3910 with respective containers to increase speed and efficiency.
In at least one embodiment, each application and/or container (or image thereof) may be individually developed, modified, and deployed (e.g., a first user or developer may develop, modify, and deploy a first application, and a second user or developer may develop, modify, and deploy a second application separately from the first user or developer), which may allow for focus on, and attention to, the tasks of a single application and/or container without being hindered by the tasks of other applications or containers. In at least one embodiment, the pipeline manager 3912 and the application coordination system 3928 may facilitate communication and collaboration between different containers or applications. In at least one embodiment, the application orchestration system 3928 and/or the pipeline manager 3912 may facilitate communication among and between, and sharing of resources among and between, each application or container, so long as the expected inputs and/or outputs of each container or application are known to the system (e.g., based on the configurations of the applications or containers). In at least one embodiment, because one or more applications or containers in the one or more deployment pipelines 3910 may share the same services and resources, the application coordination system 3928 may coordinate, load balance, and determine the sharing of services or resources among and between the various applications or containers. In at least one embodiment, a scheduler may be used to track the resource requirements of applications or containers, the current or projected use of these resources, and the resource availability. Thus, in at least one embodiment, the scheduler may allocate resources to different applications and distribute the resources among and between the applications in view of the requirements and availability of the system. In some examples, the scheduler (and/or other components of the application coordination system 3928, such as a sequencer and/or an asynchronous compute engine) may determine resource availability and distribution based on constraints imposed on the system (e.g., user constraints), such as quality of service (QoS) or urgency of need for the data outputs (e.g., to determine whether to perform real-time processing or deferred processing).
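The scheduler behavior described above (tracking resource requirements and availability, then allocating resources accordingly) can be sketched as follows; the Container class, the priority values, and the single-resource (GPU-count) model are simplifying assumptions rather than an actual interface of the application coordination system 3928.

```python
# Illustrative sketch of a scheduler that tracks resource requirements and availability and
# dispatches containers by priority, as described above. All names and values are hypothetical.
from dataclasses import dataclass, field
import heapq

@dataclass(order=True)
class Container:
    priority: int                                   # lower value = more urgent (e.g., QoS/urgency)
    name: str = field(compare=False)
    gpus_required: int = field(compare=False, default=1)

class Scheduler:
    def __init__(self, total_gpus: int):
        self.available_gpus = total_gpus
        self.pending = []

    def submit(self, container: Container) -> None:
        heapq.heappush(self.pending, container)

    def dispatch(self) -> list:
        """Start every pending container whose requirements fit the current availability."""
        started, deferred = [], []
        while self.pending:
            c = heapq.heappop(self.pending)
            if c.gpus_required <= self.available_gpus:
                self.available_gpus -= c.gpus_required
                started.append(c.name)
            else:
                deferred.append(c)          # delay processing until resources free up
        for c in deferred:
            heapq.heappush(self.pending, c)
        return started

sched = Scheduler(total_gpus=4)
sched.submit(Container(priority=0, name="reconstruction", gpus_required=2))
sched.submit(Container(priority=1, name="segmentation", gpus_required=1))
sched.submit(Container(priority=2, name="visualization", gpus_required=4))
print(sched.dispatch())  # ['reconstruction', 'segmentation']; visualization is deferred
```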
In at least one embodiment, the services 3820 utilized by and shared by applications or containers in the deployment system 3806 can include computing services 3916, AI services 3918, visualization services 3920, and/or other service types. In at least one embodiment, an application can invoke (e.g., execute) one or more of the services 3820 to perform processing operations for the application. In at least one embodiment, the computing services 3916 may be utilized by applications to perform supercomputing or other high-performance computing (HPC) tasks. In at least one embodiment, parallel processing (e.g., using the parallel computing platform 3930) may be performed with one or more computing services 3916 to process data substantially simultaneously through one or more applications and/or one or more tasks of a single application. In at least one embodiment, the parallel computing platform 3930 (e.g., CUDA of NVIDIA) may enable general-purpose computing on GPUs (GPGPU) (e.g., GPUs 3922). In at least one embodiment, the software layer of the parallel computing platform 3930 may provide access to the virtual instruction set and parallel computational elements of the GPUs for executing compute kernels. In at least one embodiment, the parallel computing platform 3930 may include memory, and in some embodiments, memory may be shared among and between multiple containers, and/or among and between different processing tasks within a single container. In at least one embodiment, inter-process communication (IPC) calls may be generated for multiple containers and/or for multiple processes within a container to use the same data from a shared memory segment of the parallel computing platform 3930 (e.g., where multiple different stages of an application or multiple applications are processing the same information). In at least one embodiment, rather than making copies of the data and moving the data to different locations in memory (e.g., read/write operations), the same data in the same location of memory may be used for any number of processing tasks (e.g., at the same time, at different times, etc.). In at least one embodiment, as new data is generated as a result of processing, this information about the new location of the data may be stored and shared among the various applications. In at least one embodiment, the location of data and the location of updated or modified data may be part of the definition of how a payload is understood within a container.
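The copy-avoidance described for the parallel computing platform 3930 can be illustrated with a standard-library Python sketch (not CUDA): two processes use the same data from a shared memory segment instead of copying it between locations in memory. The segment handling and array shapes are assumptions for illustration.

```python
# Illustrative sketch: two processes using the same data from a shared memory segment instead
# of copying it, analogous to the IPC behavior described above. Uses the Python standard
# library, not the parallel computing platform itself; names are hypothetical.
import numpy as np
from multiprocessing import Process, shared_memory

def consumer(segment_name: str, shape, dtype) -> None:
    # Attach to the existing segment; no copy of the underlying buffer is made.
    shm = shared_memory.SharedMemory(name=segment_name)
    data = np.ndarray(shape, dtype=dtype, buffer=shm.buf)
    print("consumer sees mean:", float(data.mean()))
    shm.close()

if __name__ == "__main__":
    # Producer writes an "image" into shared memory once.
    image = np.ones((512, 512), dtype=np.float32)
    shm = shared_memory.SharedMemory(create=True, size=image.nbytes)
    buffer = np.ndarray(image.shape, dtype=image.dtype, buffer=shm.buf)
    buffer[:] = image
    # A second processing stage reads the same memory location rather than a copy.
    p = Process(target=consumer, args=(shm.name, image.shape, image.dtype))
    p.start()
    p.join()
    shm.close()
    shm.unlink()
```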
In at least one embodiment, the AI services 3918 can be utilized to perform inference services for executing one or more machine learning models associated with an application (e.g., as one or more of the processing tasks of executing the application). In at least one embodiment, the AI services 3918 can utilize the AI system 3924 to execute one or more machine learning models (e.g., neural networks such as CNNs) for segmentation, reconstruction, object detection, feature detection, classification, and/or other inference tasks. In at least one embodiment, applications of the one or more deployment pipelines 3910 can use one or more of the output models 3816 from the training system 3804 and/or other models of the applications to perform inference on imaging data (e.g., DICOM data, RIS data, CIS data, REST-compliant data, RPC data, raw data, etc.). In at least one embodiment, two or more categories of inference using the application coordination system 3928 (e.g., scheduler, sequencer, and/or asynchronous compute engine) may be available. In at least one embodiment, a first category may include a high-priority/low-latency path that may achieve higher service level agreements, for example, for performing inference on urgent requests in an emergency, or for a radiologist during a diagnostic procedure. In at least one embodiment, a second category may include a standard-priority path that may be used for requests that may not be urgent or for which analysis may be performed at a later time. In at least one embodiment, the application orchestration system 3928 can allocate resources (e.g., services 3820 and/or hardware 3822) for the different inference tasks of the AI services 3918 based on the priority paths.
In at least one embodiment, shared storage can be mounted to the AI services 3918 within system 3900. In at least one embodiment, the shared storage may operate as a cache (or other storage device type) and may be used to process inference requests from applications. In at least one embodiment, when an inference request is submitted, a set of API instances of the deployment system 3806 can receive the request, and one or more instances can be selected (e.g., for best fit, for load balancing, etc.) to process the request. In at least one embodiment, to process a request, the request may be entered into a database, the machine learning model may be located from the model registry 3824 if it is not already in the cache, a verification step may ensure that the appropriate machine learning model is loaded into the cache (e.g., the shared storage), and/or a copy of the model may be saved to the cache. In at least one embodiment, if an application has not yet run or there are not enough instances of the application, a scheduler (e.g., the scheduler of the pipeline manager 3912) may be used to launch the application referenced in the request. In at least one embodiment, an inference server may be launched if an inference server has not already been launched to execute the model. In at least one embodiment, any number of inference servers can be launched per model. In at least one embodiment, in a pull model in which inference servers are clustered, the models may be cached whenever load balancing is advantageous. In at least one embodiment, inference servers can be statically loaded into the corresponding distributed servers.
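A minimal sketch of the inference-request flow described above, assuming hypothetical stand-in objects for the model registry, the shared cache, and the inference server: the model is located in the cache, copied there from the registry if missing, and an inference server is launched for it only if one is not already running.

```python
# Illustrative sketch of the inference-request handling described above. The registry, cache,
# and server objects are hypothetical stand-ins, not an actual API of the described system.
class InferenceService:
    def __init__(self, model_registry: dict):
        self.model_registry = model_registry      # model name -> model artifact
        self.shared_cache = {}                    # acts as the shared storage / cache
        self.running_servers = {}                 # model name -> running server instance

    def _ensure_cached(self, model_name: str):
        # Verify the appropriate model is available; save a copy into the cache if missing.
        if model_name not in self.shared_cache:
            self.shared_cache[model_name] = self.model_registry[model_name]
        return self.shared_cache[model_name]

    def _ensure_server(self, model_name: str):
        # Launch an inference server for the model only if one is not already running.
        if model_name not in self.running_servers:
            self._ensure_cached(model_name)
            self.running_servers[model_name] = lambda data: {"model": model_name,
                                                             "inputs": len(data)}
        return self.running_servers[model_name]

    def handle_request(self, model_name: str, data: list):
        server = self._ensure_server(model_name)
        return server(data)

service = InferenceService(model_registry={"organ_segmentation": "weights-v1"})
print(service.handle_request("organ_segmentation", data=[0.1, 0.2, 0.3]))
```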
In at least one embodiment, reasoning can be performed using a reasoning server running in the container. In at least one embodiment, an instance of the inference server can be associated with the model (and optionally multiple versions of the model). In at least one embodiment, if an instance of the inference server does not exist at the time the request to perform the inference on the model is received, a new instance may be loaded. In at least one embodiment, when the inference server is started, the models can be passed to the inference server so that the same container can be used to serve different models, as long as the inference server operates as a different instance.
In at least one embodiment, during application execution, an inference request for a given application may be received, and a container (e.g., a container hosting an instance of an inference server) may be loaded (if not already loaded) and a startup procedure may be invoked. In at least one embodiment, preprocessing logic in the container may load, decode, and/or perform any additional preprocessing of the incoming data (e.g., using one or more CPUs and/or GPUs). In at least one embodiment, once the data is prepared for inference, the container can perform inference on the data as needed. In at least one embodiment, this may include a single inference call on one image (e.g., a hand X-ray), or may require inference on hundreds of images (e.g., a chest CT). In at least one embodiment, the application may summarize the results before completing, which may include, but is not limited to, a single confidence score, pixel-level segmentation, voxel-level segmentation, generating a visualization, or generating text to summarize the findings. In at least one embodiment, different models or applications may be assigned different priorities. For example, some models may have a real-time priority (turnaround time (TAT) of less than one minute), while other models may have a lower priority (e.g., TAT of less than ten minutes). In at least one embodiment, model execution times may be measured from the requesting institution or entity, and may include the partner network traversal time as well as the execution time of the inference service.
In at least one embodiment, the transfer of requests between the service 3820 and the reasoning application may be hidden behind a Software Development Kit (SDK) and robust transmission may be provided through a queue. In at least one embodiment, the requests will be placed in a queue via the API for individual application/tenant ID combinations, and the SDK will pull the requests from the queue and provide the requests to the application. In at least one embodiment, the name of the queue may be provided in the context from which the SDK will pick up the queue. In at least one embodiment, asynchronous communication through a queue may be useful because it may allow any instance of an application to pick up work when it is available. In at least one embodiment, the results may be transmitted back through a queue to ensure that no data is lost. In at least one embodiment, the queue may also provide the ability to split work, as work of highest priority may enter the queue connected to most instances of the application, while work of lowest priority may enter the queue connected to a single instance, which processes tasks in the order received. In at least one embodiment, the application may run on GPU-accelerated instances that are generated in cloud 3926, and the inference service may perform inferences on the GPU.
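The queue-based request transfer described above can be sketched with Python's standard queue and threading modules; the queue name (an application/tenant ID combination), the worker logic, and the result format are hypothetical.

```python
# Illustrative sketch of the queue-based request transfer described above: requests are placed
# on a per-application/tenant queue, an SDK-side worker pulls them when an instance is
# available, and results are returned through a result queue so no data is lost.
import queue
import threading

request_queues = {
    "segmentation-app/tenant-a": queue.Queue(),   # highest-priority work could feed more workers
}
result_queue = queue.Queue()

def sdk_worker(queue_name: str) -> None:
    q = request_queues[queue_name]
    while True:
        request = q.get()                 # pick up work when an instance is available
        if request is None:               # sentinel used to stop the worker
            q.task_done()
            break
        result = {"request_id": request["id"], "output": sum(request["payload"])}
        result_queue.put(result)          # transmit results back through a queue
        q.task_done()

worker = threading.Thread(target=sdk_worker, args=("segmentation-app/tenant-a",))
worker.start()
request_queues["segmentation-app/tenant-a"].put({"id": 1, "payload": [1, 2, 3]})
request_queues["segmentation-app/tenant-a"].put(None)
worker.join()
print(result_queue.get())   # {'request_id': 1, 'output': 6}
```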
In at least one embodiment, visualization services 3920 can be utilized to generate visualizations for viewing output of an application and/or one or more deployment pipelines 3910. In at least one embodiment, visualization service 3920 may utilize GPU 3922 to generate visualizations. In at least one embodiment, visualization service 3920 may implement rendering effects such as ray tracing to generate higher quality visualizations. In at least one embodiment, the visualization may include, but is not limited to, 2D image rendering, 3D volume reconstruction, 2D tomosynthesis slices, virtual reality display, augmented reality display, and the like. In at least one embodiment, a virtual interactive display or environment (e.g., a virtual environment) may be generated using a virtualized environment for interaction by a system user (e.g., doctor, nurse, radiologist, etc.). In at least one embodiment, visualization service 3920 may include internal visualizers, movies, and/or other rendering or image processing capabilities or functions (e.g., ray tracing, rasterization, internal optics, etc.).
In at least one embodiment, the hardware 3822 may include GPUs 3922, the AI system 3924, the cloud 3926, and/or any other hardware for executing the training system 3804 and/or the deployment system 3806. In at least one embodiment, the GPUs 3922 (e.g., TESLA and/or QUADRO GPUs of NVIDIA) may include any number of GPUs that may be used to perform processing tasks of any feature or function of the computing services 3916, the AI services 3918, the visualization services 3920, other services, and/or the software 3818. For example, with respect to the AI services 3918, the GPUs 3922 may be used to perform preprocessing on imaging data (or other data types used by machine learning models), post-processing on the outputs of machine learning models, and/or inference (e.g., to execute machine learning models). In at least one embodiment, the cloud 3926, the AI system 3924, and/or other components of system 3900 may use the GPUs 3922. In at least one embodiment, the cloud 3926 may include a GPU-optimized platform for deep learning tasks. In at least one embodiment, the AI system 3924 can use GPUs, and one or more AI systems 3924 can be used to execute the cloud 3926 (or at least some portion of the deep learning or inference tasks). As such, although hardware 3822 is illustrated as discrete components, this is not intended to be limiting, and any components of hardware 3822 may be combined with, or utilized by, any other components of hardware 3822.
In at least one embodiment, the AI system 3924 can include a specially constructed computing system (e.g., a supercomputer or HPC) configured for reasoning, deep learning, machine learning, and/or other artificial intelligence tasks. In at least one embodiment, the AI system 3924 (e.g., DGX of NVIDIA) may include, in addition to CPU, RAM, storage, and/or other components, features, or functions, GPU-optimized software (e.g., a software stack) that may be executed using multiple GPUs 3922. In at least one embodiment, one or more AI systems 3924 may be implemented in the cloud 3926 (e.g., in a data center) to perform some or all of the AI-based processing tasks of the system 3900.
In at least one embodiment, the cloud 3926 may include a GPU-accelerated infrastructure (e.g., the NGC of NVIDIA) that may provide a GPU-optimized platform for executing processing tasks of system 3900. In at least one embodiment, the cloud 3926 can include one or more AI systems 3924 for performing one or more AI-based tasks of system 3900 (e.g., as a hardware abstraction and scaling platform). In at least one embodiment, the cloud 3926 may be integrated with the application coordination system 3928, which utilizes multiple GPUs to enable seamless scaling and load balancing between and among the applications and services 3820. In at least one embodiment, the cloud 3926 may be tasked with executing at least some of the services 3820 of system 3900, including the computing services 3916, the AI services 3918, and/or the visualization services 3920, as described herein. In at least one embodiment, the cloud 3926 may perform inference on small batch sizes (e.g., executing TensorRT of NVIDIA), provide an accelerated parallel computing API and platform 3930 (e.g., CUDA of NVIDIA), execute the application coordination system 3928 (e.g., Kubernetes), provide graphics rendering APIs and platforms (e.g., for ray tracing, 2D graphics, 3D graphics, and/or other rendering techniques to produce higher-quality cinematic effects), and/or may provide other functionality for system 3900.
In at least one embodiment, to preserve patient confidentiality (e.g., where patient data or records are used off-premises), the cloud 3926 may include a registry, such as a deep learning container registry. In at least one embodiment, the registry may store containers for instantiations of applications that may perform preprocessing, post-processing, or other processing tasks on patient data. In at least one embodiment, the cloud 3926 may receive data that includes patient data as well as sensor data in containers, perform the requested processing only on the sensor data in those containers, and then forward the resulting output and/or visualization to the appropriate parties and/or devices (e.g., locally deployed medical devices used for visualization or diagnosis), all without the need to extract, store, or otherwise access the patient data. In at least one embodiment, the confidentiality of the patient data is maintained in compliance with HIPAA and/or other data regulations.
In at least one embodiment, at least one component shown or described with respect to fig. 39 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 39 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 39 is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 39 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
FIG. 40 includes an example illustration of a deployment pipeline 3910A for processing imaging data in accordance with at least one embodiment. In at least one embodiment, system 3900 (and in particular the deployment system 3806) can be used to customize, update, and/or integrate one or more deployment pipelines 3910A into one or more production environments. In at least one embodiment, the deployment pipeline 3910A of fig. 40 includes a non-limiting example of a deployment pipeline 3910A that can be customized by a particular user (or team of users) at a facility (e.g., hospital, clinic, laboratory, research environment, etc.). In at least one embodiment, to define a deployment pipeline 3910A for the CT scanner 4002, a user may select one or more applications, for example from a container registry, that perform particular functions or tasks with respect to the imaging data generated by the CT scanner 4002. In at least one embodiment, the applications can be applied to the deployment pipeline 3910A as containers that can utilize the services 3820 and/or hardware 3822 of system 3900. Further, the deployment pipeline 3910A may include additional processing tasks or applications that may be implemented to prepare the data for use by the applications (e.g., the DICOM adapter 3902B and the DICOM reader 4006 may be used in the deployment pipeline 3910A to prepare the data for CT reconstruction 4008, organ segmentation 4010, etc.). In at least one embodiment, the deployment pipeline 3910A may be customized or selected for continuous use, one-time use, or use at another frequency or interval. In at least one embodiment, a user may desire CT reconstruction 4008 and organ segmentation 4010 for several subjects within a particular interval, and thus may deploy the pipeline 3910A for that period of time. In at least one embodiment, the user may select, for each request from system 3900, the applications with which the user wants to perform processing on the data. In at least one embodiment, the deployment pipeline 3910A may be adjusted at any interval, and this may be a seamless process due to the adaptability and scalability of the container structure within system 3900.
In at least one embodiment, the deployment pipeline 3910A of fig. 40 can include a CT scanner 4002 that generates imaging data of a patient or subject. In at least one embodiment, imaging data from the CT scanner 4002 may be stored on one or more PACS servers 4004 associated with the facility housing the CT scanner 4002. In at least one embodiment, one or more PACS servers 4004 may comprise software and/or hardware components that may directly interface with an imaging modality at the facility (e.g., CT scanner 4002). In at least one embodiment, the DICOM adapter 3902B may enable the sending and receiving of DICOM objects using the DICOM protocol. In at least one embodiment, the DICOM adapter 3902B may facilitate the preparation or configuration of DICOM data from one or more PACS servers 4004 for use by the deployment pipeline 3910A. In at least one embodiment, once DICOM data is processed through DICOM adapter 3902B, pipeline manager 3912 can route the data to deployment pipeline 3910A. In at least one embodiment, the DICOM reader 4006 can extract image files and any associated metadata from DICOM data (e.g., raw sinogram data, as shown in visualization 4016A). In at least one embodiment, the extracted working files may be stored in a cache for faster processing by other applications in the deployment pipeline 3910A. In at least one embodiment, once the DICOM reader 4006 has completed extracting and/or storing data, a completion signal may be communicated to the pipeline manager 3912. In at least one embodiment, the pipeline manager 3912 may then launch or call one or more other applications or containers in the deployment pipeline 3910A.
In at least one embodiment, once the data (e.g., raw sinogram data) is available for processing by the CT reconstruction 4008 application, the CT reconstruction 4008 application and/or container may be executed. In at least one embodiment, the CT reconstruction 4008 may read the raw sinogram data from a cache, reconstruct an image file from the raw sinogram data (e.g., as shown in visualization 4016B), and store the resulting image file in the cache. In at least one embodiment, upon completion of the rebuild, a signal may be sent to pipeline manager 3912 that the rebuild task is complete. In at least one embodiment, once reconstruction is complete, and the reconstructed image file may be stored in a cache (or other storage device), an organ segmentation 4010 application and/or container may be triggered by pipeline manager 3912. In at least one embodiment, the organ segmentation 4010 application and/or container can read the image file from the cache, normalize or convert the image file to a format suitable for reasoning (e.g., convert the image file to an input resolution of a machine learning model), and run reasoning on the normalized image. In at least one embodiment, to run reasoning about the normalized images, organ segmentation 4010 applications and/or containers can rely on service 3820, and pipeline manager 3912 and/or application coordination system 3928 can facilitate use of service 3820 by organ segmentation 4010 applications and/or containers. In at least one embodiment, for example, the organ segmentation 4010 application and/or container can utilize the AI service 3918 to perform reasoning on the normalized images, and the AI service 3918 can utilize hardware 3822 (e.g., AI system 3924) to perform the AI service 3918. In at least one embodiment, the inference results can be a mask file (e.g., as shown in visualization 4016C), which can be stored in a cache (or other storage device).
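A minimal sketch of the normalize-then-infer step described for the organ segmentation 4010 application, assuming a hypothetical cache, model callable, and target input resolution: the reconstructed image is read from a cache, resampled and scaled to the form expected by a model, and the resulting mask is written back to the cache.

```python
# Illustrative sketch of the normalization and inference step described above. The cache,
# the model callable, and the target resolution are hypothetical simplifications.
import numpy as np

def normalize_for_model(image: np.ndarray, target_shape=(256, 256)) -> np.ndarray:
    # Nearest-neighbor resample to the model's input resolution (deliberately simplified).
    rows = np.linspace(0, image.shape[0] - 1, target_shape[0]).astype(int)
    cols = np.linspace(0, image.shape[1] - 1, target_shape[1]).astype(int)
    resized = image[np.ix_(rows, cols)].astype(np.float32)
    # Scale intensities to [0, 1] before inference.
    value_range = float(resized.max() - resized.min())
    return (resized - resized.min()) / max(value_range, 1e-6)

def run_segmentation(cache: dict, key: str, model) -> np.ndarray:
    normalized = normalize_for_model(cache[key])
    mask = model(normalized)              # inference; could be backed by an AI service
    cache[key + "_mask"] = mask           # store the result back in the cache
    return mask

cache = {"ct_recon_001": np.random.rand(512, 512) * 4096.0}
toy_model = lambda x: (x > 0.5).astype(np.uint8)   # stand-in for a segmentation network
print(run_segmentation(cache, "ct_recon_001", toy_model).shape)   # (256, 256)
```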
In at least one embodiment, a signal may be generated for the pipeline manager 3912 once an application processing and/or extracting DICOM data has completed processing. In at least one embodiment, the pipeline manager 3912 may then execute the DICOM writer 4012 to read the results from the cache (or other storage device), package the results into a DICOM format (e.g., as a DICOM output 4014) for use by a user at the facility generating the request. In at least one embodiment, the DICOM output 4014 can then be sent to the DICOM adapter 3902B to prepare the DICOM output 4014 for storage on the one or more PACS servers 4004 (e.g., for viewing by a DICOM viewer at the facility). In at least one embodiment, in response to the request for reconstruction and segmentation, visualizations 4016B and 4016C can be generated and made available to the user for diagnostic, research, and/or other purposes.
Although illustrated as consecutive applications in the deployment pipeline 3910A, in at least one embodiment, the CT reconstruction 4008 and organ segmentation 4010 applications may be processed in parallel. In at least one embodiment, where the applications do not have dependencies on each other and data is available for each application (e.g., after the DICOM reader 4006 extracts the data), the applications may execute at the same time, substantially at the same time, or with some overlap. In at least one embodiment, where two or more applications require similar services 3820, the scheduler of system 3900 can be used for load balancing and for allocating computing or processing resources among and between the various applications. In at least one embodiment, in some embodiments, the parallel computing platform 3930 may be used to perform parallel processing of the applications to reduce the runtime of the deployment pipeline 3910A and to provide real-time results.
In at least one embodiment and referring to fig. 41A and 41B, the deployment system 3806 can be implemented as one or more virtual instruments for performing different functions, such as image processing, segmentation, augmentation, AI, visualization, and reasoning, using imaging devices (e.g., CT scanners, X-ray machines, MRI machines, etc.), sequencing devices, genomic devices, and/or other device types. In at least one embodiment, the system 3900 can allow for creation and provision of virtual instruments, which can include a software defined deployment pipeline 3910, which software defined deployment pipeline 3910 can receive raw/raw input data generated by one or more devices and output processed/reconstructed data. In at least one embodiment, deployment pipeline 3910 (e.g., 3910A and 3910B) representing virtual instruments can implement intelligence in the pipeline (such as by utilizing a machine learning model) to provide containerized reasoning support to the system. In at least one embodiment, the virtual instrument may execute any number of containers, each container including instantiation of an application. In at least one embodiment, such as where real-time processing is desired, the deployment pipeline 3910 representing the virtual instrument may be static (e.g., containers and/or applications may be set), while in other examples containers and/or applications for the virtual instrument may be selected from an application or resource pool (e.g., in a container registry) (e.g., on a per-request basis).
In at least one embodiment, the system 3900 can be instantiated or executed locally as one or more virtual instruments at a facility, such as in a computing system deployed alongside or otherwise in communication with a radiation machine, an imaging device, and/or another device type at the facility. However, in at least one embodiment, the local installation may be instantiated or performed in a computing system of the device itself (e.g., a computing system integrated with the imaging device), in a local data center (e.g., a locally deployed data center), and/or in a cloud environment (e.g., in cloud 3926). In at least one embodiment, in some examples, the deployment system 3806 operating as a virtual instrument can be instantiated by a supercomputer or other HPC system. In at least one embodiment, local installation may allow for high bandwidth use for real-time processing (e.g., via a higher throughput local communication interface, such as RF over ethernet). In at least one embodiment, real-time or near real-time processing may be particularly useful where the virtual instrument supports an ultrasound device or other imaging modality in which visualization on the fly is desired or required for accurate diagnosis and analysis. In at least one embodiment, the cloud computing architecture may be able to dynamically burst (burst) to a cloud computing service provider or other computing cluster when local demand exceeds the capacity or capability of the local deployment. In at least one embodiment, as described herein with respect to training system 3804, the cloud architecture, when implemented, may be adapted for training a neural network or other machine learning model. In at least one embodiment, with the training pipeline in place, the machine learning model may continually learn and improve as additional data from the devices it supports is processed. In at least one embodiment, additional data, new data, existing machine learning models, and/or new or updated machine learning models may be used to continually refine the virtual instrument.
In at least one embodiment, the computing system may include some or all of the hardware 3822 described herein, and the hardware 3822 may be distributed in any of a variety of ways, including: within the device, as part of a computing device coupled to and located in proximity to the device, in a local data center at the facility and/or in cloud 3926. In at least one embodiment, since the deployment system 3806 and associated applications or containers are created in software (e.g., as discrete containerized instantiations of applications), the behavior, operation, and configuration of the virtual instrument, and the output generated by the virtual instrument can be modified or customized as desired without altering or changing the original output of the device supported by the virtual instrument.
In at least one embodiment, at least one component shown or described with respect to fig. 40 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 40 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 40 is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 40 is used to perform at least one aspect described with respect to block 100, block 200, process 300, block 400, process 500, block 600, block 700, block 800, and/or other systems, methods, or operations described herein.
Fig. 41A includes an example data flow diagram of a virtual instrument supporting an ultrasound device in accordance with at least one embodiment. In at least one embodiment, deployment pipeline 3910B may utilize one or more services 3820 of system 3900. In at least one embodiment, deployment pipeline 3910B and service 3820 can utilize hardware 3822 of systems in local or cloud 3926. In at least one embodiment, although not shown, the process 4100 can be facilitated by a pipeline manager 3912, an application coordination system 3928, and/or a parallel computing platform 3930.
In at least one embodiment, the process 4100 can include receiving imaging data from the ultrasound device 4102. In at least one embodiment, the imaging data may be stored on one or more PACS servers in DICOM format (or other formats, e.g., RIS, CIS, REST-compliant, RPC, raw, etc.), and may also be received by system 3900 for processing through a deployment pipeline 3910 that has been selected or customized as a virtual instrument (e.g., a virtual ultrasound) for the ultrasound device 4102. In at least one embodiment, the imaging data may be received directly from an imaging device (e.g., ultrasound device 4102) and processed by the virtual instrument. In at least one embodiment, a transducer or other signal converter communicatively coupled between the imaging device and the virtual instrument may convert signal data generated by the imaging device into image data that may be processed by the virtual instrument. In at least one embodiment, the raw data and/or image data may be applied to the DICOM reader 4006 to extract the data for use by the applications or containers of the deployment pipeline 3910B. In at least one embodiment, the DICOM reader 4006 can utilize a data augmentation library 4114 (e.g., DALI of NVIDIA) as a service 3820 (e.g., as one of the one or more computing services 3916) for extracting, resizing, rescaling, and/or otherwise preparing the data for use by the applications or containers.
In at least one embodiment, once the data is ready, a reconstruction 4106 application and/or container may be executed to reconstruct the data from the ultrasound device 4102 into an image file. In at least one embodiment, after the reconstruction 4106 or concurrently with the reconstruction 4106, detection 4108 applications and/or containers may be executed for anomaly detection, object detection, feature detection, and/or other detection tasks related to data. In at least one embodiment, the image file generated during the reconstruction 4106 may be used during the detection 4108 to identify anomalies, objects, features, etc. In at least one embodiment, the detection 4108 application can utilize an inference engine 4116 (e.g., as one of the one or more AI services 3918) to perform inference on the data to generate a detection. In at least one embodiment, the detection 4108 application can execute or invoke one or more machine learning models (e.g., from the training system 3804).
In at least one embodiment, once the rebuilding 4106 and/or detecting 4108 is complete, the data output from these applications and/or containers can be used to generate a visualization 4110, such as a visualization 4112 (e.g., grayscale output), that is displayed on a workstation or display terminal. In at least one embodiment, the visualization may allow a technician or other user to visualize the results of the deployment pipeline 3910B with respect to the ultrasound device 4102. In at least one embodiment, the visualization 4110 can be performed by utilizing the rendering component 4118 of the system 3900 (e.g., one of the one or more visualization services 3920). In at least one embodiment, the rendering component 4118 may execute 2D, openGL or ray tracing services to generate the visualization 4112.
In at least one embodiment, at least one component shown or described with respect to fig. 41A is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 41A is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 41A is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 41A is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
FIG. 41B includes an example data flow diagram of a virtual instrument supporting a CT scanner in accordance with at least one embodiment. In at least one embodiment, deployment pipeline 3910C may utilize one or more services 3820 of system 3900. In at least one embodiment, deployment pipeline 3910C and service 3820 may utilize hardware 3822 of the system locally or in cloud 3926. In at least one embodiment, although not shown, the pipeline manager 3912, the application coordination system 3928, and/or the parallel computing platform 3930 may facilitate the process 4120.
In at least one embodiment, the process 4120 can include the CT scanner 4122 generating raw data that can be received by the DICOM reader 4006 (e.g., received directly via the PACS server 4004, after processing, etc.). In at least one embodiment, the virtual CT (instantiated by the deployment pipeline 3910C) may include a first real-time pipeline for monitoring the patient (e.g., patient motion detection AI 4126) and/or for adjusting or optimizing the exposure of the CT scanner 4122 (e.g., using exposure control AI 4124). In at least one embodiment, one or more applications (e.g., 4124 and 4126) can utilize a service 3820, such as one or more AI services 3918. In at least one embodiment, the output of the exposure control AI 4124 application (or container) and/or the patient motion detection AI 4126 application (or container) may be used as feedback to the CT scanner 4122 and/or a technician to adjust the exposure (or other settings of the CT scanner 4122) and/or to inform the patient of reduced motion.
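The real-time feedback described above can be sketched, under assumptions, as a loop in which a per-frame motion estimate is fed back to adjust exposure settings; the motion metric, the threshold, and the scanner interface are hypothetical simplifications and are not the actual exposure control AI 4124 or patient motion detection AI 4126.

```python
# Illustrative sketch of a real-time feedback loop: a patient motion estimate computed per
# frame is fed back to adjust scanner exposure. All names and thresholds are hypothetical.
import numpy as np

def motion_score(previous_frame: np.ndarray, current_frame: np.ndarray) -> float:
    """Crude stand-in for a motion-detection model: mean absolute frame difference."""
    return float(np.mean(np.abs(current_frame - previous_frame)))

def adjust_exposure(current_exposure: float, score: float,
                    threshold: float = 0.1, step: float = 0.9) -> float:
    """Reduce exposure (or flag for re-acquisition) when motion is detected."""
    return current_exposure * step if score > threshold else current_exposure

exposure = 1.0
previous = np.zeros((64, 64), dtype=np.float32)
for _ in range(3):
    current = previous + np.random.normal(0.0, 0.2, size=previous.shape).astype(np.float32)
    score = motion_score(previous, current)
    exposure = adjust_exposure(exposure, score)   # feedback to the (hypothetical) scanner settings
    previous = current
print(round(exposure, 3))
```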
In at least one embodiment, the deployment pipeline 3910C may include a non-real-time pipeline for analyzing the data generated by the CT scanner 4122. In at least one embodiment, the second pipeline may include a CT reconstruction 4008 application and/or container, a coarse detection AI 4128 application and/or container, a fine detection AI 4132 application and/or container (e.g., where certain results are detected by the coarse detection AI 4128), a visualization 4130 application and/or container, and a DICOM writer 4012 (and/or a writer for another data type, such as RIS, CIS, REST-compliant, RPC, raw, etc.) application and/or container. In at least one embodiment, the raw data generated by the CT scanner 4122 can be passed through the pipelines of the deployment pipeline 3910C (instantiated as a virtual CT instrument) to generate results. In at least one embodiment, the results from the DICOM writer 4012 can be sent for display and/or can be stored on the one or more PACS servers 4004 for later retrieval, analysis, or display by a technician, practitioner, or other user.
In at least one embodiment, at least one component shown or described with respect to fig. 41B is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 41B is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 41B is used to cause one or more neural networks to select one or more variations in a feature of one or more text cues based at least in part on performance of the one or more neural networks using the one or more variations in one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 41B is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
Fig. 42A illustrates a data flow diagram of a process 4200 for training, retraining, or updating a machine learning model in accordance with at least one embodiment. In at least one embodiment, process 4200 may be performed using system 3900 of FIG. 39, as a non-limiting example. In at least one embodiment, process 4200 can utilize the services 3820 and/or hardware 3822 of system 3900, as described herein. In at least one embodiment, the refined model 4212 generated by process 4200 may be executed by the deployment system 3806 for one or more containerized applications in the deployment pipeline 3910.
In at least one embodiment, model training 3814 can include retraining or updating an initial model 4204 (e.g., a pre-trained model) with new training data (e.g., new input data such as the customer dataset 4206, and/or new truth data associated with the input data). In at least one embodiment, to retrain or update the initial model 4204, one or more output or loss layers of the initial model 4204 may be reset or deleted and/or replaced with updated or new output or loss layers. In at least one embodiment, the initial model 4204 may have previously fine-tuned parameters (e.g., weights and/or biases) that remain from prior training, so training or retraining 3814 may not take as long or require as much processing as training a model from scratch. In at least one embodiment, during model training 3814, by having reset or replaced one or more output or loss layers of the initial model 4204, the parameters may be updated and re-tuned for the new dataset based on loss calculations associated with the accuracy of the one or more output or loss layers as predictions are generated on the new customer dataset 4206 (e.g., image data 3808 of fig. 38).
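As a non-limiting illustration of resetting an output layer and re-tuning previously trained parameters, the following PyTorch-style sketch replaces a classifier head and fine-tunes on new data. The assumption that the model exposes a `.fc` output layer, and the data loader itself, are placeholders made for illustration only.

```python
import torch
import torch.nn as nn

def retrain(initial_model: nn.Module, num_new_classes: int, loader, epochs: int = 3):
    """Replace the output layer of a pre-trained model and fine-tune on new data."""
    # Reset/replace the output layer; earlier fine-tuned weights are retained.
    in_features = initial_model.fc.in_features           # assumes a .fc classifier head
    initial_model.fc = nn.Linear(in_features, num_new_classes)

    optimizer = torch.optim.Adam(initial_model.parameters(), lr=1e-4)
    loss_fn = nn.CrossEntropyLoss()

    for _ in range(epochs):
        for inputs, labels in loader:                     # new customer dataset
            optimizer.zero_grad()
            loss = loss_fn(initial_model(inputs), labels)
            loss.backward()                               # loss drives parameter updates
            optimizer.step()
    return initial_model                                  # refined model
```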
In at least one embodiment, the pre-trained model 3906 can be stored in a data store or registry (e.g., model registry 3824 of fig. 38). In at least one embodiment, the pre-trained model 3906 may have been trained, at least in part, at one or more facilities other than the facility performing process 4200. In at least one embodiment, to protect the privacy and rights of patients, subjects, or clients of different facilities, the pre-trained model 3906 may have been trained locally using locally generated client or patient data. In at least one embodiment, the pre-trained model 3906 may be trained using the cloud 3926 and/or other hardware 3822, but confidential, privacy-protected patient data may not be transferred to, used by, or accessed by any component of the cloud 3926 (or other non-local hardware). In at least one embodiment, where the pre-trained model 3906 is trained using patient data from more than one facility, the pre-trained model 3906 may have been trained separately for each facility before being trained on patient or customer data from another facility. In at least one embodiment, customer or patient data from any number of facilities may be used to train the pre-trained model 3906 locally and/or non-locally, such as in a data center or other cloud computing infrastructure, for example where the customer or patient data has been released from privacy concerns (e.g., by waiver, for experimental use, etc.), or where the customer or patient data is included in a public dataset.
In at least one embodiment, in selecting applications for use in the deployment pipeline 3910, the user may also select machine learning models to be used for particular applications. In at least one embodiment, the user may not have a model to use, so the user may select a pre-trained model 3906 to use with an application. In at least one embodiment, the pre-trained model 3906 may not be optimized for generating accurate results (e.g., based on patient diversity, demographics, type of medical imaging device used, etc.) on the customer dataset 4206 of the user's facility. In at least one embodiment, the pre-trained model 3906 may be updated, retrained, and/or fine-tuned for use at the respective facility prior to deploying the pre-trained model 3906 into the deployment pipeline 3910 for use with one or more applications.
In at least one embodiment, the user can select a pre-trained model 3906 to be updated, retrained, and/or fine-tuned, and the pre-trained model 3906 can be referred to as the initial model 4204 of the training system 3804 in the process 4200. In at least one embodiment, the customer dataset 4206 (e.g., imaging data, genomic data, sequencing data, or other data types generated by devices at the facility) can be used to perform model training 3814 (which can include, but is not limited to, transfer learning) on the initial model 4204 to generate the refined model 4212. In at least one embodiment, truth data corresponding to the customer dataset 4206 can be generated by the training system 3804. In at least one embodiment, the truth data (e.g., labeled clinical data 3812 as in fig. 38) can be generated at the facility, at least in part, by clinicians, scientists, doctors, or other practitioners.
In at least one embodiment, the AI-assisted annotation 3810 can be used in some examples to generate the truth data. In at least one embodiment, the AI-assisted annotation 3810 (e.g., implemented using an AI-assisted annotation SDK) can utilize machine learning models (e.g., neural networks) to generate suggested or predicted truth data for the customer dataset. In at least one embodiment, the user 4210 may use annotation tools within a user interface (e.g., a graphical user interface (GUI)) on the computing device 4208.
In at least one embodiment, the user 4210 can interact with the GUI via the computing device 4208 to edit or fine tune annotations or automatic annotations. In at least one embodiment, a polygon editing feature may be used to move vertices of a polygon to a more precise or fine-tuned position.
In at least one embodiment, once the customer dataset 4206 has associated truth data, the truth data (e.g., from AI-assisted annotation, manual labeling, etc.) can be used during model training 3814 to generate the refined model 4212. In at least one embodiment, the customer dataset 4206 can be applied to the initial model 4204 any number of times, and the truth data can be used to update parameters of the initial model 4204 until an acceptable level of accuracy is reached for the refined model 4212. In at least one embodiment, once the refined model 4212 is generated, the refined model 4212 may be deployed within one or more deployment pipelines 3910 at the facility for performing one or more processing tasks with respect to medical imaging data.
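A minimal sketch of iterating over a customer dataset until an acceptable accuracy is reached is shown below; `train_one_pass` and `evaluate` are assumed helper functions introduced only for illustration, not interfaces defined by this disclosure.

```python
# Sketch of refining a model on a customer dataset until a target accuracy
# is reached; train_one_pass and evaluate are assumed placeholder helpers.

def refine_until_acceptable(model, customer_dataset, truth_data,
                            target_accuracy=0.95, max_passes=50):
    for _ in range(max_passes):
        train_one_pass(model, customer_dataset, truth_data)    # update parameters
        accuracy = evaluate(model, customer_dataset, truth_data)
        if accuracy >= target_accuracy:
            break                                              # refined model is ready
    return model
```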
In at least one embodiment, the refined model 4212 can be uploaded to the pre-trained model 3906 in the model registry 3824 for selection by another facility. In at least one embodiment, its process may be completed at any number of facilities such that the refining model 4212 may be further refined any number of times on the new dataset to generate a more generic model.
In at least one embodiment, at least one component shown or described with respect to fig. 42A is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 42A is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 42A is used to cause one or more neural networks to select one or more variations of a feature of one or more text prompts based at least in part on performance of the one or more neural networks using the one or more variations of one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 42A is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
FIG. 42B is an example illustration of a client-server architecture 4232 for enhancing annotation tools with pre-trained annotation models, in accordance with at least one embodiment. In at least one embodiment, AI-assisted annotation tools 4236 can be instantiated based on the client-server architecture 4232. In at least one embodiment, the annotation tools 4236 in an imaging application can assist a radiologist, for example, in identifying organs and abnormalities. In at least one embodiment, as a non-limiting example, the imaging application may include a software tool that aids the user 4210 in identifying several extreme points on a particular organ of interest in a raw image 4234 (e.g., in a 3D MRI or CT scan) and receiving automatic annotation results for all 2D slices of that particular organ. In at least one embodiment, the results may be stored in a data store as training data 4239 and used (for example, and without limitation) as truth data for training. In at least one embodiment, when the computing device 4208 transmits the extreme points for the AI-assisted annotation 3810, a deep learning model, for example, can receive this data as input and return inference results of a segmented organ or abnormality. In at least one embodiment, a pre-installed annotation tool (such as AI-assisted annotation tool 4236B in fig. 42B) can be enhanced by making an API call (e.g., API call 4244) to a server (such as annotation helper server 4241), and the annotation helper server 4241 can include a set of pre-trained models 4242 stored, for example, in an annotation model registry. In at least one embodiment, the annotation model registry can store pre-trained models 4242 (e.g., machine learning models, such as deep learning models) that are pre-trained to perform AI-assisted annotation of a particular organ or abnormality. In at least one embodiment, these models may be further updated through the use of the training pipeline 3904. In at least one embodiment, the pre-installed annotation tools can be improved over time as new labeled clinical data 3812 is added.
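As a non-limiting illustration of the API call described above, a client could submit user-selected extreme points to an annotation server and receive segmentation results in return. The endpoint path, payload fields, and response format below are assumptions made for illustration and do not describe an actual interface of the annotation helper server.

```python
import requests

# Hypothetical illustration of the annotation API call; URL, payload fields,
# and response keys are assumptions, not an actual server interface.

def request_ai_assisted_annotation(server_url, study_id, extreme_points):
    """Send user-selected extreme points to an annotation server and return a mask."""
    payload = {
        "study_id": study_id,                  # identifies the 3D MRI/CT volume
        "extreme_points": extreme_points,      # e.g., [[x, y, z], ...] chosen by the user
        "model": "organ_segmentation",         # name of a pre-trained annotation model
    }
    response = requests.post(f"{server_url}/v1/annotate", json=payload, timeout=60)
    response.raise_for_status()
    return response.json()["segmentation"]     # per-slice annotation results
```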
Logic 915 is used to perform inference and/or training operations associated with one or more embodiments. Details regarding logic 915 are provided herein in connection with fig. 9A and/or 9B.
In at least one embodiment, at least one component shown or described with respect to fig. 42B is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 42B is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 42B is used to cause one or more neural networks to select one or more variations of a feature of one or more text prompts based at least in part on performance of the one or more neural networks using the one or more variations of one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 42B is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
FIG. 43 illustrates components of a system 4300 for accessing a large language model in accordance with at least one embodiment. In at least one embodiment, the system 4300 is a system for interfacing with the application 4302 to process data. In at least one embodiment, the application 4302 uses a Large Language Model (LLM) 4312 to generate output data 4320 based at least in part on input data 4310. In at least one embodiment, the input data 4310 is a text prompt. In at least one embodiment, the input data 4310 comprises unstructured text. In at least one embodiment, the input data 4310 includes a sequence of tokens. In at least one embodiment, a token is a portion of the input data. In at least one embodiment, a token is a word. In at least one embodiment, a token is a character. In at least one embodiment, a token is a subword. In at least one embodiment, the input data 4310 is formatted in Chat Markup Language (ChatML). In at least one embodiment, the input data 4310 is an image. In at least one embodiment, the input data 4310 is one or more video frames. In at least one embodiment, the input data 4310 is any other medium of expression.
In at least one embodiment, the large language model 4312 comprises a deep neural network. In at least one embodiment, a deep neural network is a neural network having two or more layers. In at least one embodiment, the large language model 4312 comprises a transformer model. In at least one embodiment, the large language model 4312 includes a neural network configured to perform natural language processing. In at least one embodiment, the large language model 4312 is configured to process one or more data sequences. In at least one embodiment, the large language model 4312 is configured to process text. In at least one embodiment, the weights and biases of the large language model 4312 are configured to process text. In at least one embodiment, the large language model 4312 is configured to determine patterns in data to perform one or more natural language processing tasks. In at least one embodiment, the natural language processing task includes text generation. In at least one embodiment, the natural language processing task includes question answering. In at least one embodiment, performing a natural language processing task generates the output data 4320.
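As a non-limiting illustration of generating output data from a text prompt, the following sketch uses the open-source transformers library with a small public checkpoint ("gpt2") as a stand-in for large language model 4312; the choice of library and checkpoint is an assumption made only for illustration.

```python
from transformers import pipeline

# Stand-in example: the disclosure's large language model 4312 is represented
# here by a small public checkpoint ("gpt2") for illustration purposes only.

generator = pipeline("text-generation", model="gpt2")

input_data = "Q: What does a CT exposure-control application do?\nA:"
output_data = generator(input_data, max_new_tokens=50, do_sample=False)
print(output_data[0]["generated_text"])
```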
In at least one embodiment, the processor uses the input data 4310 to query the retrieval database 4314. In at least one embodiment, the retrieval database 4314 is a key-value store. In at least one embodiment, the retrieval database 4314 is a corpus used to train the large language model 4312. In at least one embodiment, the processor uses the retrieval database 4314 to provide the large language model 4312 with updated information. In at least one embodiment, the retrieval database 4314 comprises data from internet sources. In at least one embodiment, the large language model 4312 performs inference without using the retrieval database 4314.
In at least one embodiment, an encoder encodes the input data 4310 into one or more feature vectors. In at least one embodiment, the encoder encodes the input data 4310 into a sentence embedding vector. In at least one embodiment, the processor performs a nearest-neighbor search using the sentence embedding vector to generate one or more neighbors 4316. In at least one embodiment, the one or more neighbors 4316 are values in the retrieval database 4314 that correspond to keys that include the input data 4310. In at least one embodiment, the one or more neighbors 4316 include text data. In at least one embodiment, the encoder 4318 encodes the one or more neighbors 4316. In at least one embodiment, the encoder 4318 encodes the one or more neighbors 4316 into a text embedding vector. In at least one embodiment, the encoder 4318 encodes the one or more neighbors 4316 into a sentence embedding vector. In at least one embodiment, the large language model 4312 uses the input data 4310 and data generated by the encoder 4318 to generate the output data 4320. In at least one embodiment, the processor 4306 interfaces with the application 4302 using a Large Language Model (LLM) Application Programming Interface (API) 4304. In at least one embodiment, the processor 4306 accesses the large language model 4312 using the Large Language Model (LLM) Application Programming Interface (API) 4304.
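A minimal sketch of the retrieval step is shown below: the input is encoded into an embedding, nearest neighbors are looked up in a key-value retrieval database, and the neighbors are provided to the language model alongside the input. The `embed` and `generate` callables and the database layout are assumptions made for illustration, not interfaces defined by this disclosure.

```python
import numpy as np

# Sketch of retrieval-augmented generation: embed the query, find nearest
# neighbors in a key-value store, and prepend them to the model's prompt.
# embed() and generate() are assumed placeholder callables.

def retrieve_neighbors(query_text, key_embeddings, values, embed, k=3):
    q = embed(query_text)                                   # sentence embedding vector
    scores = key_embeddings @ q / (
        np.linalg.norm(key_embeddings, axis=1) * np.linalg.norm(q) + 1e-8
    )                                                       # cosine similarity
    top = np.argsort(-scores)[:k]
    return [values[i] for i in top]                         # nearest-neighbor values

def answer_with_retrieval(query_text, db, embed, generate):
    neighbors = retrieve_neighbors(query_text, db["key_embeddings"],
                                   db["values"], embed)
    context = "\n".join(neighbors)
    return generate(f"Context:\n{context}\n\nQuestion: {query_text}\nAnswer:")
```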
In at least one embodiment, output data 4320 comprises computer instructions. In at least one embodiment, output data 4320 includes instructions written in a CUDA programming language. In at least one embodiment, the output data 4320 includes instructions to be executed by the processor 4306. In at least one embodiment, the output data 4320 includes instructions for controlling execution of one or more algorithm modules 4308. In at least one embodiment, the one or more algorithm modules 4308 include one or more neural networks, for example, for performing pattern recognition. In at least one embodiment, the one or more algorithm modules 4308 include one or more neural networks, for example, for performing frame generation. In at least one embodiment, the one or more algorithm modules 4308 include one or more neural networks, for example, for generating a driving path. In at least one embodiment, the one or more algorithm modules 4308 include one or more neural networks, for example, for generating 5G signals. In at least one embodiment, the processor 4306 interfaces with the application 4302 using a Large Language Model (LLM) Application Programming Interface (API) 4304. In at least one embodiment, the processor 4306 may use one or more parallel computing platforms and/or programming models (e.g., the CUDA model of NVIDIA).
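As a non-limiting illustration of using model output to control execution of algorithm modules, the following sketch asks a language model to choose a module and then dispatches to it. The `llm_generate` callable and the module registry are assumptions made for illustration, not components defined by this disclosure.

```python
# Hypothetical sketch of routing language-model output to algorithm modules.

ALGORITHM_MODULES = {
    "pattern_recognition": lambda data: f"ran pattern recognition on {len(data)} items",
    "frame_generation":    lambda data: f"generated frames from {len(data)} inputs",
    "path_planning":       lambda data: f"planned a path over {len(data)} waypoints",
}

def dispatch_from_llm(llm_generate, request, data):
    """Ask the language model which module to run, then execute that module."""
    instruction = llm_generate(
        f"Given the request '{request}', reply with exactly one of: "
        + ", ".join(ALGORITHM_MODULES)
    ).strip()
    module = ALGORITHM_MODULES.get(instruction)
    if module is None:
        raise ValueError(f"Unrecognized instruction from model: {instruction!r}")
    return module(data)
```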
In at least one embodiment, aspects of the systems and techniques described herein with respect to fig. 43 are incorporated into aspects of one or more of the preceding figures. For example, in at least one embodiment, the apparatus depicted in one or more of the preceding figures includes a processor 4306.
In at least one embodiment, the system 4300 uses ChatGPT to write CUDA code. In at least one embodiment, the system 4300 uses ChatGPT to train a subject classification neural network. In at least one embodiment, the system 4300 uses ChatGPT and a neural network to identify a driving path. In at least one embodiment, the system 4300 uses ChatGPT and a neural network to generate a 5G signal.
It should be noted that while the example embodiments described herein may relate to a CUDA programming model, the techniques described herein may be used with any suitable programming model, such as HIP, oneAPI (e.g., using oneAPI-based programming to perform or implement the methods disclosed herein), and/or variations thereof.
In at least one embodiment, one or more components of the systems and/or processors disclosed above may be in communication with one or more CPUs, ASICs, GPUs, FPGAs, or other hardware, circuitry, or integrated circuit components that include, for example, an upscaler or upsampler to upscale an image; an image blender or image blending component to fuse, mix, or add images together; a sampler to sample an image (e.g., as part of a DSP); a neural network circuit configured to perform upscaling of an image (e.g., from a low-resolution image to a high-resolution image); or other hardware to modify or generate an image, frame, or video to adjust its resolution, size, or pixels. One or more components of the systems and/or processors disclosed above may use the components described in this disclosure to perform methods, operations, or instructions that generate or modify an image.
In at least one embodiment, at least one component shown or described with respect to fig. 43 is used to perform the techniques and/or functions described in connection with fig. 1-8. In at least one embodiment, at least one component shown or described with respect to fig. 43 is used such that a most consistent output of one or more pre-trained neural networks is selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks. In at least one embodiment, at least one component shown or described with respect to fig. 43 is used to cause one or more neural networks to select one or more variations of a feature of one or more text prompts based at least in part on performance of the one or more neural networks using the one or more variations of one or more input images. In at least one embodiment, at least one component shown or described with respect to fig. 43 is used to perform at least one aspect described with respect to block diagram 100, block diagram 200, process 300, block diagram 400, process 500, block diagram 600, block diagram 700, block diagram 800, and/or other systems, methods, or operations described herein.
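As a non-limiting illustration of the selection and prompt-adjustment idea recited in the clauses that follow, the sketch below scores randomly augmented views of an image with a vision-language model, keeps the most confident views, averages their predictions, and takes one gradient step on a prompt embedding to minimize the entropy of the averaged prediction. The `model` and `augment` callables are assumed placeholders for a CLIP-like network and a random augmentation; they are not interfaces defined by this disclosure.

```python
import torch

def tune_prompt_one_step(model, augment, image, prompt_embedding,
                         num_views=32, keep_fraction=0.1, lr=5e-3):
    """One prompt-tuning step driven by consistency across augmented views."""
    views = torch.stack([augment(image) for _ in range(num_views)])
    prompt = prompt_embedding.clone().requires_grad_(True)

    probs = model(views, prompt).softmax(dim=-1)           # (num_views, num_classes)
    view_entropy = -(probs * probs.clamp_min(1e-12).log()).sum(dim=-1)

    keep = view_entropy.argsort()[: max(1, int(keep_fraction * num_views))]
    avg_probs = probs[keep].mean(dim=0)                     # most confident views only
    entropy = -(avg_probs * avg_probs.clamp_min(1e-12).log()).sum()

    entropy.backward()                                       # adjust only the prompt
    with torch.no_grad():
        prompt -= lr * prompt.grad
    return prompt.detach(), int(avg_probs.argmax())
```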
At least one embodiment of the present disclosure may be described according to the following clauses:
1. A processor, comprising:
one or more circuits configured to cause selection of a most consistent output of one or more pre-trained neural networks based at least in part on a plurality of variations of one or more inputs of the one or more neural networks.
2. The processor of clause 1, wherein the one or more inputs of the one or more neural networks comprise one or more images.
3. The processor of clause 1 or 2, wherein the one or more inputs of the one or more neural networks comprise one or more text prompts.
4. The processor of any of clauses 1-3, wherein the one or more neural networks comprise a pre-trained visual language model.
5. The processor of any of clauses 1-4, wherein the plurality of variations of the one or more inputs of the one or more neural networks are based at least in part on one or more randomly augmented views of one or more images.
6. The processor of any of clauses 1-5, wherein, during inference, a prompt for the one or more neural networks is adjusted.
7. The processor of any of clauses 1-6, wherein a prompt for the one or more neural networks is adjusted based at least in part on classifying the plurality of variations of the one or more inputs of the one or more neural networks, and wherein classifying the plurality of variations of the one or more inputs of the one or more neural networks is based at least in part on removing one or more variations from the plurality of variations and calculating an average of the plurality of variations.
8. A computer-implemented method, comprising:
causing a most consistent output of one or more pre-trained neural networks to be selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks.
9. The computer-implemented method of clause 8, wherein the one or more inputs of the one or more neural networks comprise a single image.
10. The computer-implemented method of clauses 8 or 9, wherein the one or more inputs of the one or more neural networks comprise one or more text prompts based at least in part on the content of the single image.
11. The computer-implemented method of any of clauses 8-10, wherein the one or more neural networks comprise a visual language model.
12. The computer-implemented method of any of clauses 8-11, further comprising:
generating a plurality of randomly augmented views of the one or more inputs of the one or more neural networks.
13. The computer-implemented method of any of clauses 8-12, further comprising:
generating one or more confidence measures of the plurality of variations of the one or more inputs of the one or more neural networks.
14. The computer-implemented method of any of clauses 8-13, further comprising:
classifying one or more randomly augmented views of the one or more inputs of the one or more neural networks based at least in part on an average of confidence metrics of the plurality of variations of the one or more inputs of the one or more neural networks.
15. A computer system, comprising:
one or more processors and memory storing executable instructions that, if executed by the one or more processors, cause selection of a most consistent output of one or more pre-trained neural networks based at least in part on a plurality of variations of one or more inputs of the one or more neural networks.
16. The computer system of clause 15, wherein the one or more inputs of the one or more neural networks comprise one or more images.
17. The computer system of clauses 15 or 16, wherein the one or more inputs of the one or more neural networks comprise one or more text prompts describing elements of one or more images.
18. The computer system of any of clauses 15-17, wherein the one or more neural networks comprise a pre-trained visual language model.
19. The computer system of any of clauses 15-18, wherein the plurality of variations of the one or more inputs of the one or more neural networks are based at least in part on one or more randomly augmented views of one or more images.
20. The computer system of any of clauses 15-19, wherein, during inference, a prompt for the one or more neural networks is adjusted based at least in part on minimizing entropy across the plurality of variations of the one or more inputs of the one or more neural networks.
In at least one embodiment, a single semiconductor platform may refer to a unique single semiconductor-based integrated circuit or chip. In at least one embodiment, a multi-chip module with increased connectivity may be used that simulates on-chip operation and is substantially improved over utilizing conventional central processing unit ("CPU") and bus implementations. In at least one embodiment, the various modules may also be placed alone or in various combinations of semiconductor platforms, depending on the needs of the user.
In at least one embodiment, referring back to FIG. 15, a computer program in the form of machine-readable executable code or computer control logic algorithms is stored in the main memory 1504 and/or secondary storage. In at least one embodiment, a computer program, if executed by one or more processors, enables the system 1500 to perform various functions in accordance with at least one embodiment. In at least one embodiment, the memory 1504, storage, and/or any other storage are possible examples of computer-readable media. In at least one embodiment, secondary storage may refer to any suitable storage device or system, such as a hard disk drive and/or a removable storage drive, representing a floppy disk drive, a magnetic tape drive, an optical disk drive, a digital versatile disk ("DVD") drive, a recording device, a universal serial bus ("USB") flash memory, and so forth. In at least one embodiment, the architecture and/or functionality of each of the preceding figures is implemented in the context of the CPU 1502, the parallel processing system 1512, an integrated circuit capable of at least a portion of the capabilities of both the CPU 1502 and the parallel processing system 1512, a chipset (e.g., a set of integrated circuits designed to operate and be sold as a unit performing related functions, etc.), and/or any suitable combination of one or more integrated circuits.
In at least one embodiment, the architecture and/or functionality of each of the preceding figures is implemented in the context of a general purpose computer system, a circuit board system, a game console system dedicated for entertainment purposes, a dedicated system, and the like. In at least one embodiment, computer system 1500 may take the form of a desktop computer, a laptop computer, a tablet computer, a server, a supercomputer, a smart phone (e.g., wireless, handheld), a personal digital assistant ("PDA"), a digital camera, a vehicle, a head mounted display, a handheld electronic device, a mobile telephone device, a television, a workstation, a game console, an embedded system, and/or any other type of logic. In at least one embodiment, computer system 1500 includes or refers to any of the devices of FIGS. 9A-42B.
In at least one embodiment, parallel processing system 1512 includes, but is not limited to, a plurality of parallel processing units ("PPUs") 1514 and associated memory 1516. In at least one embodiment, PPU 1514 is connected to a host processor or other peripheral device via interconnect 1518 and switch 1520 or a multiplexer. In at least one embodiment, parallel processing system 1512 allocates computing tasks on parallelizable PPUs 1514, e.g., as part of a computing task allocation across multiple graphics processing unit ("GPU") thread blocks. In at least one embodiment, memory (e.g., for read and/or write accesses) is shared and accessed between some or all of PPUs 1514, but such shared memory may incur a performance penalty relative to using local memory and registers residing on PPUs 1514. In at least one embodiment, the operation of PPU 1514 is synchronized through the use of commands (such as __syncthreads()), where all threads in a block (e.g., executing across multiple PPUs 1514) arrive at a certain code execution point before proceeding.
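As a non-limiting illustration of the block-level synchronization mentioned above, the following sketch uses the Numba CUDA bindings (an assumed stand-in, not code from this disclosure) so that every thread in a block writes one element to shared memory, synchronizes at a barrier analogous to __syncthreads(), and then reads a neighbor's element.

```python
import numpy as np
from numba import cuda, float32

# Illustration of a __syncthreads()-style barrier: each thread loads one element
# into shared memory, all threads synchronize, then each reads a neighbor.

@cuda.jit
def shift_within_block(inp, out):
    shared = cuda.shared.array(shape=128, dtype=float32)
    tid = cuda.threadIdx.x
    gid = cuda.grid(1)
    if gid < inp.size:
        shared[tid] = inp[gid]
    cuda.syncthreads()                      # all threads in the block reach this point
    if gid < inp.size:
        out[gid] = shared[(tid + 1) % cuda.blockDim.x]

# Launch example (requires a CUDA-capable GPU):
# data = np.arange(1024, dtype=np.float32)
# result = np.zeros_like(data)
# shift_within_block[8, 128](data, result)
```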
In at least one embodiment, one or more of the techniques described herein utilize a oneAPI programming model. In at least one embodiment, a oneAPI programming model refers to a programming model for interacting with various computing accelerator architectures. In at least one embodiment, oneAPI refers to an Application Programming Interface (API) designed to interact with various computing accelerator architectures. In at least one embodiment, the oneAPI programming model utilizes the DPC++ programming language. In at least one embodiment, the DPC++ programming language refers to a high-level language for data parallel programming productivity. In at least one embodiment, the DPC++ programming language is based at least in part on the C and/or C++ programming languages. In at least one embodiment, a oneAPI programming model is a programming model such as those developed by Intel Corporation of Santa Clara, California.
In at least one embodiment, oneAPI and/or the oneAPI programming model is used to interact with various accelerators, GPUs, processors, and/or variations and architectures thereof. In at least one embodiment, oneAPI includes a set of libraries that implement various functions. In at least one embodiment, oneAPI includes at least a oneAPI DPC++ library, a oneAPI mathematical kernel library, a oneAPI data analysis library, a oneAPI deep neural network library, a oneAPI collective communication library, a oneAPI thread building block library, a oneAPI video processing library, and/or variants thereof.
In at least one embodiment, the oneAPI DPC++ library, also known as oneDPL, is a library that implements algorithms and functions to accelerate DPC++ kernel programming. In at least one embodiment, oneDPL implements one or more Standard Template Library (STL) functions. In at least one embodiment, oneDPL implements one or more parallel STL functions. In at least one embodiment, oneDPL provides a set of library classes and functions, such as parallel algorithms, iterators, function object classes, range-based APIs, and/or variants thereof. In at least one embodiment, oneDPL implements one or more classes and/or functions of the C++ standard library. In at least one embodiment, oneDPL implements one or more random number generator functions.
In at least one embodiment, the oneAPI mathematical kernel library, also referred to as oneMKL, is a library that implements various optimization and parallelization routines for various mathematical functions and/or operations. In at least one embodiment, oneMKL implements one or more Basic Linear Algebraic Subroutines (BLAS) and/or Linear Algebraic Package (LAPACK) dense linear algebraic routines. In at least one embodiment, oneMKL implements one or more sparse BLAS linear algebraic routines. In at least one embodiment, oneMKL implements one or more Random Number Generators (RNGs). In at least one embodiment, oneMKL implements one or more Vector Math (VM) routines for performing mathematical operations on vectors. In at least one embodiment, oneMKL implements one or more Fast Fourier Transform (FFT) functions.
In at least one embodiment, the oneAPI data analysis library, also referred to as oneDAL, is a library that implements various data analysis applications and distributed computing. In at least one embodiment, oneDAL implements various algorithms for preprocessing, conversion, analysis, modeling, validation, and decision-making of data analysis in batch, online, and distributed computing processing modes. In at least one embodiment, oneDAL implements various C++ and/or Java APIs and various connectors to one or more data sources. In at least one embodiment, oneDAL implements DPC++ API extensions to conventional C++ interfaces and enables GPUs to be used for various algorithms.
In at least one embodiment, the oneAPI deep neural network library, also referred to as oneDNN, is a library that implements various deep learning functions. In at least one embodiment, oneDNN implements various neural networks, machine learning and deep learning functions, algorithms, and/or variants thereof.
In at least one embodiment, the oneAPI collective communication library, also referred to as oneCCL, is a library that implements various applications for deep learning and machine learning workloads. In at least one embodiment, oneCCL builds on top of lower-level communication middleware such as the Message Passing Interface (MPI) and libfabric. In at least one embodiment, oneCCL enables a set of deep learning specific optimizations, such as prioritization, persistent operations, out-of-order execution, and/or variants thereof. In at least one embodiment, oneCCL implements various CPU and GPU functions.
In at least one embodiment, the oneAPI thread building block library, also referred to as oneTBB, is a library that implements various parallelization processes for various applications. In at least one embodiment, oneTBB is used for task-based shared parallel programming on a host. In at least one embodiment, oneTBB implements a generic parallel algorithm. In at least one embodiment, oneTBB implements a concurrency container. In at least one embodiment, oneTBB implements a scalable memory allocator. In at least one embodiment, oneTBB implements a work stealing task scheduler. In at least one embodiment, oneTBB implements low-level synchronization primitives. In at least one embodiment, oneTBB is independent of a compiler and can be used on various processors, such as GPU, PPU, CPU and/or variants thereof.
In at least one embodiment, the oneAPI video processing library, also known as oneVPL, is a library for accelerating video processing in one or more applications. In at least one embodiment, oneVPL implements various video decoding, encoding, and processing functions. In at least one embodiment, oneVPL implements various functions for media pipelines on CPUs, GPUs, and other accelerators. In at least one embodiment, oneVPL implements device discovery and selection in media-centric and video analytics workloads. In at least one embodiment, oneVPL implements API primitives for zero copy buffer sharing.
In at least one embodiment, the oneAPI programming model utilizes the DPC++ programming language. In at least one embodiment, the DPC++ programming language is a programming language that includes, but is not limited to, functionally similar versions of the CUDA mechanisms for defining device code and distinguishing device code from host code. In at least one embodiment, the DPC++ programming language may include a subset of functions of the CUDA programming language. In at least one embodiment, one or more CUDA programming model operations are performed using a oneAPI programming model that utilizes the DPC++ programming language.
In at least one embodiment, any Application Programming Interface (API) described herein is compiled by a compiler, interpreter, or other software tool into one or more instructions, operations, or any other signals. In at least one embodiment, compiling includes generating one or more machine-executable instructions, operations, or other signals from source code. In at least one embodiment, the API compiled into one or more instructions, operations, or other signals, when executed, cause one or more processors (such as graphics processor 3000, graphics core 2000, parallel processor 2200, processor 2500, processor core 2500, or any other logic circuit described further herein) to perform one or more computing operations.
It should be noted that while the example embodiments described herein may relate to a CUDA programming model, the techniques described herein may be used with any suitable programming model, such as HIP, oneAPI, and/or variants thereof.
Other variations are within the spirit of the present disclosure. Thus, while the disclosed technology is susceptible to various modifications and alternative constructions, certain illustrated embodiments thereof are shown in the drawings and have been described above in detail. It should be understood, however, that there is no intention to limit the disclosure to the specific form or forms disclosed, but on the contrary, the intention is to cover all modifications, alternative constructions, and equivalents falling within the spirit and scope of the disclosure as defined in the appended claims.
The use of the terms "a" and "an" and "the" and similar referents in the context of describing the disclosed embodiments (especially in the context of the following claims) are to be construed to cover both the singular and the plural, unless otherwise indicated herein or clearly contradicted by context. Unless otherwise indicated, the terms "comprising," "having," "including," and "containing" are to be construed as open-ended terms (meaning "including, but not limited to"). The term "connected" (which refers to a physical connection, when unmodified) should be interpreted as partially or wholly contained within, attached to, or connected together, even if there are some intervening objects. Recitation of ranges of values herein are merely intended to serve as a shorthand method of referring individually to each separate value falling within the range, unless otherwise indicated herein, and each separate value is incorporated into the specification as if it were individually recited herein. In at least one embodiment, unless indicated otherwise or contradicted by context, the use of the term "set" (e.g., "set of items") or "subset" should be interpreted as a non-empty set comprising one or more members. Furthermore, unless indicated otherwise or contradicted by context, the term "subset" of a respective set does not necessarily denote an appropriate subset of the corresponding set, but the subset and the corresponding set may be equal.
Unless otherwise explicitly indicated or clearly contradicted by context, conjunctive language such as a phrase of the form "at least one of A, B, and C" or "at least one of A, B and C" is understood in context as generally used to present an item, term, etc., which may be A or B or C, or any non-empty subset of the set of A and B and C. For example, in the illustrative example of a set having three members, the conjunctive phrases "at least one of A, B, and C" and "at least one of A, B and C" refer to any of the following sets: {A}, {B}, {C}, {A, B}, {A, C}, {B, C}, {A, B, C}. Thus, such conjunctive language is not generally intended to imply that certain embodiments require the presence of at least one of A, at least one of B, and at least one of C each. In addition, unless otherwise indicated herein or otherwise clearly contradicted by context, the term "plurality" indicates a state of being plural (e.g., "a plurality of items" indicates multiple items). In at least one embodiment, the number of items in a plurality is at least two, but may be more when so indicated either explicitly or by context. Furthermore, unless otherwise indicated or clear from context, the phrase "based on" means "based at least in part on" rather than "based solely on".
The operations of the processes described herein may be performed in any suitable order unless otherwise indicated herein or otherwise clearly contradicted by context. In at least one embodiment, processes such as those described herein (or variations and/or combinations thereof) are performed under control of one or more computer systems configured with executable instructions and are implemented as code (e.g., executable instructions, one or more computer programs, or one or more application programs) that are jointly executed on one or more processors by hardware or a combination thereof. In at least one embodiment, the code is stored on a computer readable storage medium in the form of, for example, a computer program comprising a plurality of instructions executable by one or more processors. In at least one embodiment, the computer-readable storage medium is a non-transitory computer-readable storage medium that excludes transitory signals (e.g., propagated transient electrical or electromagnetic transmissions), but includes non-transitory data storage circuitry (e.g., buffers, caches, and queues) within the transceiver of the transitory signals. In at least one embodiment, code (e.g., executable code or source code) is stored on a set of one or more non-transitory computer-readable storage media (or other memory for storing executable instructions) that, when executed by one or more processors of a computer system (i.e., as a result of being executed), cause the computer system to perform operations described herein. In at least one embodiment, a set of non-transitory computer-readable storage media includes a plurality of non-transitory computer-readable storage media, and one or more of the individual non-transitory storage media in the plurality of non-transitory computer-readable storage media lacks all code, but the plurality of non-transitory computer-readable storage media collectively store all code. In at least one embodiment, the executable instructions are executed such that different instructions are executed by different processors, e.g., a non-transitory computer readable storage medium stores instructions, and a main central processing unit ("CPU") executes some instructions while a graphics processing unit ("GPU") executes other instructions. In at least one embodiment, different components of the computer system have separate processors, and different processors execute different subsets of the instructions.
In at least one embodiment, an arithmetic logic unit is a set of combinational logic circuits that take one or more inputs to produce a result. In at least one embodiment, a processor uses an arithmetic logic unit to implement mathematical operations, such as addition, subtraction, or multiplication. In at least one embodiment, an arithmetic logic unit is used to implement logical operations, such as logical AND, OR, or XOR. In at least one embodiment, an arithmetic logic unit is stateless and is made from physical switching elements, such as semiconductor transistors arranged to form logic gates. In at least one embodiment, an arithmetic logic unit may operate internally as a stateful logic circuit with an associated clock. In at least one embodiment, an arithmetic logic unit may be constructed as an asynchronous logic circuit whose internal state is not held in an associated register set. In at least one embodiment, a processor uses an arithmetic logic unit to combine operands stored in one or more registers of the processor and produce an output that can be stored by the processor in another register or a memory location.
In at least one embodiment, as a result of processing an instruction retrieved by a processor, the processor presents one or more inputs or operands to an arithmetic logic unit, causing the arithmetic logic unit to produce a result based at least in part on instruction code provided to the inputs of the arithmetic logic unit. In at least one embodiment, the instruction code provided by the processor to the ALU is based at least in part on instructions executed by the processor. In at least one embodiment, combinational logic in the ALU processes the inputs and produces outputs that are placed on a bus within the processor. In at least one embodiment, the processor selects a destination register, memory location, output device, or output storage location on the output bus, thereby clocking the processor such that the results produced by the ALU are sent to the desired location.
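As a non-limiting illustration of this dispatch, the toy model below selects a combinational operation by instruction code, applies it to two register operands, and "clocks" the result into a destination register. It is a purely illustrative software sketch, not a description of actual hardware.

```python
# Toy software model of ALU dispatch: an opcode selects the operation applied
# to two register operands, and the result is stored in a destination register.

ALU_OPS = {
    0b000: lambda a, b: a + b,   # ADD
    0b001: lambda a, b: a - b,   # SUB
    0b010: lambda a, b: a & b,   # AND
    0b011: lambda a, b: a | b,   # OR
    0b100: lambda a, b: a ^ b,   # XOR
}

def alu_step(registers, opcode, src1, src2, dest):
    """Apply the operation selected by opcode and store the result."""
    result = ALU_OPS[opcode](registers[src1], registers[src2])
    registers[dest] = result     # result "clocked" into the destination register
    return result

regs = [0, 7, 3, 0]
alu_step(regs, 0b000, 1, 2, 3)   # regs[3] == 10
```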
Within the scope of this application, the term arithmetic logic unit or ALU is used to refer to any computational logic circuit that processes operands to produce a result. For example, in this document, the term ALU may refer to a floating point unit, DSP, tensor core, shader core, coprocessor, or CPU.
In at least one embodiment, one or more components of the systems and/or processors disclosed above may communicate with one or more CPUs, ASICs, GPUs, FPGAs, or other hardware, circuitry, or integrated circuit components that include, for example, an upscaler or upsampler to upscale an image; an image blender or image blending component to fuse, mix, or add images together; a sampler to sample an image (e.g., as part of a DSP); a neural network circuit configured to perform upscaling of an image (e.g., from a low-resolution image to a high-resolution image); or other hardware to modify or generate an image, frame, or video to adjust its resolution, size, or pixels. One or more components of the systems and/or processors disclosed above may use the components described in this disclosure to perform methods, operations, or instructions that generate or modify an image.
Thus, in at least one embodiment, a computer system is configured to implement one or more services that individually or collectively perform the operations of the processes described herein, and such computer system is configured with suitable hardware and/or software that enables the operations to be performed. Further, a computer system implementing at least one embodiment of the present disclosure is a single device, and in another embodiment is a distributed computer system, comprising a plurality of devices that operate differently, such that the distributed computer system performs the operations described herein, and such that a single device does not perform all of the operations.
The use of any and all examples, or exemplary language (e.g., "such as") provided herein, is intended merely to better illuminate embodiments of the disclosure and does not pose a limitation on the scope of the disclosure unless otherwise claimed. No language in the specification should be construed as indicating any non-claimed element as essential to the practice of the disclosure.
All references, including publications, patent applications, and patents, cited herein are hereby incorporated by reference to the same extent as if each reference were individually and specifically indicated to be incorporated by reference and were set forth in its entirety herein.
In the description and claims, the terms "coupled" and "connected," along with their derivatives, may be used. It should be understood that these terms may not be intended as synonyms for each other. Rather, in particular examples, "connected" or "coupled" may be used to indicate that two or more elements are in direct or indirect physical or electrical contact with each other. "coupled" may also mean that two or more elements are not in direct contact with each other, but yet still co-operate or interact with each other.
Unless specifically stated otherwise, it is appreciated that throughout the description, terms such as "processing," "computing," "calculating," "determining," or the like, refer to the action and/or processes of a computer or computing system, or similar electronic computing device, that manipulates and/or transforms data represented as physical quantities (e.g., electronic) within the computing system's registers and/or memories into other data similarly represented as physical quantities within the computing system's memories, registers or other such information storage, transmission or display devices.
In a similar manner, the term "processor" may refer to any device or portion of a device that processes electronic data from registers and/or memory and converts the electronic data into other electronic data that may be stored in registers and/or memory. As a non-limiting example, a "processor" may be a CPU or GPU. A "computing platform" may include one or more processors. As used herein, a "software" process may include, for example, software and/or hardware entities that perform work over time, such as tasks, threads, and intelligent agents. Also, each process may refer to multiple processes to execute instructions sequentially or in parallel, continuously or intermittently. In at least one embodiment, the terms "system" and "method" are used interchangeably herein as long as the system can embody one or more methods, and the methods can be considered as systems.
In this document, reference may be made to obtaining, acquiring, receiving or inputting analog or digital data into a subsystem, computer system or computer-implemented machine. In at least one embodiment, the process of obtaining, acquiring, receiving, or inputting analog and digital data may be accomplished in a variety of ways, such as by receiving data that is a parameter of a function call or call to an application programming interface. In at least one embodiment, the process of obtaining, acquiring, receiving, or inputting analog or digital data may be accomplished by transmitting the data via a serial or parallel interface. In at least one embodiment, the process of obtaining, acquiring, receiving, or inputting analog or digital data may be accomplished by transmitting data from a providing entity to an acquiring entity via a computer network. In at least one embodiment, the analog or digital data may also be provided, output, transmitted, sent, or presented with reference. In various examples, the process of providing, outputting, transmitting, sending, or presenting analog or digital data may be implemented by transmitting the data as input or output parameters for a function call, parameters for an application programming interface, or an interprocess communication mechanism.
While the description herein sets forth an example implementation of the described technology, other architectures may be used to implement the described functionality and are intended to fall within the scope of the present disclosure. Furthermore, while specific assignments of responsibilities are defined above for purposes of description, various functions and responsibilities may be assigned and divided in different ways, as the case may be.
Furthermore, although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter claimed in the appended claims is not necessarily limited to the specific features or acts described. Rather, the specific features and acts are disclosed as example forms of implementing the claims.
Claims (20)
1. A processor, comprising:
one or more circuits configured to cause selection of a most consistent output of one or more pre-trained neural networks based at least in part on a plurality of variations of one or more inputs of the one or more neural networks.
2. The processor of claim 1, wherein the one or more inputs of the one or more neural networks comprise one or more images.
3. The processor of claim 1, wherein the one or more inputs of the one or more neural networks comprise one or more text prompts.
4. The processor of claim 1, wherein the one or more neural networks comprise a pre-trained visual language model.
5. The processor of claim 1, wherein the plurality of variations of the one or more inputs of the one or more neural networks are based at least in part on one or more randomly augmented views of one or more images.
6. The processor of claim 1, wherein a prompt for the one or more neural networks is adjusted during inference.
7. The processor of claim 1, wherein a prompt for the one or more neural networks is adjusted based at least in part on classifying the plurality of variations of the one or more inputs of the one or more neural networks, and wherein classifying the plurality of variations of the one or more inputs of the one or more neural networks is based at least in part on removing one or more variations from the plurality of variations and calculating an average of the plurality of variations.
8. A computer-implemented method, comprising:
causing a most consistent output of one or more pre-trained neural networks to be selected based at least in part on a plurality of variations of one or more inputs of the one or more neural networks.
9. The computer-implemented method of claim 8, wherein the one or more inputs of the one or more neural networks comprise a single image.
10. The computer-implemented method of claim 8, wherein the one or more inputs of the one or more neural networks comprise one or more text prompts based at least in part on content of a single image.
11. The computer-implemented method of claim 8, wherein the one or more neural networks comprise a visual language model.
12. The computer-implemented method of claim 8, further comprising:
generating a plurality of randomly augmented views of the one or more inputs of the one or more neural networks.
13. The computer-implemented method of claim 8, further comprising:
generating one or more confidence measures of the plurality of variations of the one or more inputs of the one or more neural networks.
14. The computer-implemented method of claim 8, further comprising:
classifying one or more randomly augmented views of the one or more inputs of the one or more neural networks based at least in part on an average of confidence metrics of the plurality of variations of the one or more inputs of the one or more neural networks.
15. A computer system, comprising:
one or more processors and memory storing executable instructions that, if executed by the one or more processors, cause selection of a most consistent output of one or more pre-trained neural networks based at least in part on a plurality of variations of one or more inputs of the one or more neural networks.
16. The computer system of claim 15, wherein the one or more inputs of the one or more neural networks comprise one or more images.
17. The computer system of claim 15, wherein the one or more inputs of the one or more neural networks comprise one or more text prompts describing elements of one or more images.
18. The computer system of claim 15, wherein the one or more neural networks comprise a pre-trained visual language model.
19. The computer system of claim 15, wherein the plurality of variations of the one or more inputs of the one or more neural networks are based at least in part on one or more randomly augmented views of one or more images.
20. The computer system of claim 15, wherein, during inference, a prompt for the one or more neural networks is adjusted based at least in part on minimizing entropy across the plurality of variations of the one or more inputs of the one or more neural networks.
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
US 63/405,355 | 2022-09-09 | |
US 18/243,348 (published as US20240095534A1) | 2022-09-09 | 2023-09-07 | Neural network prompt tuning
Publications (1)
Publication Number | Publication Date
---|---
CN117688971A | 2024-03-12
Family
ID=90127330
Family Applications (1)
Application Number | Title | Priority Date | Filing Date
---|---|---|---
CN 202311169931.XA (CN117688971A, pending) | Neural network hint modulation | 2022-09-09 | 2023-09-11
Country Status (1)
Country | Link
---|---
CN | CN117688971A (pending)
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication |
| SE01 | Entry into force of request for substantive examination |