US20200050443A1 - Optimization and update system for deep learning models - Google Patents
Optimization and update system for deep learning models
- Publication number
- US20200050443A1 (application US16/537,215)
- Authority
- US
- United States
- Prior art keywords
- deep learning
- learning model
- software application
- data
- inferenced
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F8/30—Creation or generation of source code
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F8/70—Software maintenance or management
- G06F8/71—Version control; Configuration management
- G06F9/541—Interprogram communication via adapters, e.g. between incompatible applications
- G06K9/6256—
- G06N3/04—Neural network architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
- G06N3/047—Probabilistic or stochastic networks
- G06N3/08—Learning methods
- G06N3/082—Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
- G06N3/084—Backpropagation, e.g. using gradient descent
- G06N3/10—Interfaces, programming languages or software development kits, e.g. for simulating neural networks
- G06T5/002—
- G06T5/70—Denoising; Smoothing
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/82—Image or video recognition or understanding using neural networks
- H04L67/34—Network arrangements or protocols involving the movement of software or configuration parameters
- G06F8/65—Software updates
- G06N3/088—Non-supervised learning, e.g. competitive learning
- H04L67/01—Protocols
Definitions
- the present disclosure relates to deep learning used by software applications.
- a method, computer readable medium, and system are disclosed for improving deep learning models that perform inferencing operations to provide inferenced data to software applications.
- a deep learning model usable for performing inferencing operations and for providing inferenced data is stored. Additionally, the deep learning model is updated to create an updated version of the deep learning model. Further, the updated version of the deep learning model is distributed to a client for use in providing the inferenced data.
- a deep learning model is stored. Additionally, the deep learning model is executed to perform inferencing operations and to provide inferenced data to a software application. Further, an updated version of the deep learning model is received. Still yet, the updated version of the deep learning model is executed to provide additional inferenced data to the software application.
- FIG. 1 illustrates a block diagram of a system including a server that provisions a deep learning model to a client for use by a software application installed on the client, in accordance with an embodiment.
- FIG. 2 illustrates a flowchart of a server method for improving a deep learning model for use by a client, in accordance with an embodiment.
- FIG. 3 illustrates a flowchart of a client method for implementing an improved deep learning model that provides inferenced data to a local software application, in accordance with an embodiment.
- FIG. 4A illustrates a block diagram of a system 400 for updating a deep learning model that performs inferencing operations and provides inferenced data to a software application, in accordance with an embodiment.
- FIG. 4B illustrates a flowchart of the method of the client of FIG. 4A , in accordance with an embodiment.
- FIG. 5A illustrates inference and/or training logic, according to at least one embodiment
- FIG. 5B illustrates inference and/or training logic, according to at least one embodiment
- FIG. 6 illustrates training and deployment of a neural network, according to at least one embodiment
- FIG. 7 illustrates an example data center system, according to at least one embodiment
- FIG. 1 illustrates a block diagram of a system 100 including a server 101 that provisions a deep learning model 102 to a client 103 for use by a software application 104 installed on the client 103 , in accordance with an embodiment.
- the server 101 may be any computing device, virtualized computing device, or combination of devices, capable of communicating with the client 103 over a wired or wireless connection, for the purpose of provisioning the deep learning model 102 to the client 103 for use by a software application 104 installed on the client 103 .
- the server 101 may include a hardware memory (e.g. random access memory (RAM), etc.) for storing the deep learning model 102 and a hardware processor (e.g. central processing unit (CPU), graphics processing unit (GPU), etc.) for provisioning the deep learning model 102 from the memory to the client 103 over the wired or wireless connection.
- the server 101 may provision the deep learning model 102 to the client 103 by sending a copy of the deep learning model 102 over the wired or wireless connection to the client 103 .
- the client 103 may be any computing device (including, without limitation, computing devices that are wholly or partially virtualized) capable of communicating with the server 101 over the wired or wireless connection, for the purpose of receiving from the server 101 the deep learning model 102 for use by the software application 104 installed on the client 103 .
- the client 103 may not necessarily be an end-user device (e.g. personal computer, laptop, mobile phone, etc.) but may also be a server or other cloud-based computer system having the software application 104 installed thereon.
- output of the software application 104 may optionally be streamed or otherwise communicated to an end-user device.
- the client 103 may include a memory for storing the deep learning model 102 and a processor by which the software application 104 installed on the client 103 uses the deep learning model 102 for obtaining inferenced data.
- the client 103 executes the deep learning model 102 locally.
- the deep learning model 102 is a machine learned network (e.g. deep neural network) that is trained to perform inferencing operations and to provide inferenced data from input data.
- the deep learning model 102 may be trained using supervised or unsupervised training techniques.
- the server 101 may be used to perform the training of the deep learning model 102 , or may receive the already trained deep learning model 102 from another device.
- the deep learning model 102 may be trained for performing any desired type of inferencing and making any desired type of inferences. However, in the present embodiment, the deep learning model 102 outputs inferences that are usable by the software application 104 installed on the client 103 . It should be noted that the deep learning model 102 may similarly be used by other software applications which may be installed on the client 103 or other clients, and thus may not necessarily be specifically trained for use by the software application 104 but instead may be trained more generically for use by multiple different software applications. In any case, the deep learning model 102 may not be coded within the software application 104 itself, but may be accessible to the software application 104 as external functionality (e.g. as a software patch) via an application programming interface (API). As a result, the deep learning model 102 may not necessarily be developed and provided by a same developer of the software application 104 but instead may be developed and provided by a third-party developer.
- the software application 104 installed on the client 103 provides input data to the deep learning model 102 which processes the input data to perform inferencing and/or to return one or more inferences (i.e. inferenced data) for the input data.
- the deep learning model 102 is trained to process the input data and make inferences therefrom.
- the inferenced data is output by the deep learning model 102 to the software application 104 for use by functions, tasks, etc. of the software application 104 .
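The interaction above can be sketched in code. This is a hypothetical illustration only; the class names `DeepLearningModel` and `FeatureAPI` are assumptions and not the patent's implementation. The point is that the application reaches the model through an external API rather than through logic coded into the application itself.

```python
class DeepLearningModel:
    """Stand-in for a trained model; real inferencing would run a network."""
    def __init__(self, version):
        self.version = version

    def infer(self, input_data):
        # A trained network would produce inferences here; we echo a tag
        # so the flow of data through the API is visible.
        return {"inference": f"v{self.version}:{input_data}"}


class FeatureAPI:
    """External interface exposing the model to any software application,
    so the model can be swapped without changing the application."""
    def __init__(self, model):
        self._model = model

    def __call__(self, input_data):
        return self._model.infer(input_data)


api = FeatureAPI(DeepLearningModel(version=1))
result = api("frame-0001")  # the application only sees the API
```

Because the application holds only the API handle, replacing the underlying model object updates every subsequent call without touching application code.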
- the software application 104 may be a video game, virtual reality application, image classification and other processing, sensor data analysis, or other graphics-related computer program.
- the deep learning model 102 may provide certain image-related inferences, such as providing from an input image or other input data an anti-aliased image, an image with upscaled resolution, a denoised image, and/or any other output image that is modified in at least one respect from the input image or other input data.
- the deep learning model 102 may provide inference output that can be used to apply certain video-related effects, such as providing from input video or other input data a slow-motion version of the input video or other input data, a super sampling of the input video or other input data, etc.
- the software application 104 may be a voice recognition application or other audio-related computer program.
- the deep learning model 102 may provide inference output that can be used to apply certain audio-related effects, such as providing from an input audio or other input data a language translation, a voice recognized command, and/or any other output that is inferenced from the input audio or other input data.
- the system 100 configuration described above enables improvements to be made to the deep learning model 102 without necessarily requiring any changes within the software application 104 itself.
- the software application 104 may inherently benefit from the improvements made to the deep learning model 102 , and thus an end-user or other system using the software application 104 may benefit from the improvements made to the deep learning model 102 , without the tradeoff of the usual delays associated with updating the software application 104 itself. All that may be required is that the copy of the deep learning model 102 on the client 103 be updated to the improved version.
- the software application 104 may inherently be improved by way of its use of the deep learning model 102 during execution thereof.
- the software application 104 may likewise provide faster results, results with less computations, and/or more accurate results as a result of its use of the improved deep learning model 102 .
- FIG. 2 illustrates a flowchart of a server method 200 for improving a deep learning model for use by a client, in accordance with an embodiment. Accordingly, in one embodiment, the method 200 may be performed by the server 101 of FIG. 1 .
- a deep learning model is stored.
- the deep learning model is usable for performing inferencing operations and/or providing inferenced data to a software application (e.g. such as the deep learning model 102 used by the software application 104 of FIG. 1 ).
- the deep learning model may be stored locally (e.g. by the server 101 ).
- the deep learning model may be stored in a local repository with other deep learning models usable for performing inferencing operations and/or providing other types of inferenced data to the software application or other software applications.
- the deep learning model is updated to create an improved (updated) version of the deep learning model. It should be noted that any aspect(s) of the deep learning model may be updated to create the improved version of the deep learning model. In any case though, the update to the deep learning model improves (e.g. optimizes) the deep learning model in at least one respect.
- the deep learning model may be updated by retraining the deep learning model and/or reconfiguring the deep learning model with new parameters (e.g., weights) or hyperparameters.
- the updating may be performed automatically by software and/or other neural networks.
- the process of updating the deep learning model to create an improved version thereof may be performed without requiring user intervention.
- the deep learning model may be retrained, specifically using a changed dataset.
- the deep learning model may be retrained using a dataset that is changed from the particular dataset.
- the changed dataset may include additional data that was not included in the particular dataset that was last used to train the deep learning model and/or may remove data that was included in the particular dataset.
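A minimal sketch of assembling such a changed dataset, under assumed names (`change_dataset` is illustrative, not from the patent):

```python
def change_dataset(last_dataset, additions=(), removals=()):
    """Return a new training dataset with `additions` appended and
    `removals` dropped; the dataset last used for training is untouched."""
    removed = set(removals)
    changed = [sample for sample in last_dataset if sample not in removed]
    changed.extend(additions)
    return changed


last = ["img_a", "img_b", "img_c"]          # dataset last used to train
retrain_set = change_dataset(last,
                             additions=["img_d"],
                             removals=["img_b"])
# retrain_set now holds img_a, img_c, img_d
```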
- the deep learning model may be updated with one or more reconfigurations being made to the deep learning model.
- the deep learning model may be updated according to a hyperparameter adjustment.
- a hyperparameter refers to a parameter whose value is used to control the learning process for the deep learning model (as opposed to the values of other parameters that are learned).
- the deep learning model may be retrained according to one or more hyperparameters that are changed from the particular hyperparameter(s).
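A hyperparameter adjustment prior to retraining might look like the following sketch. The dictionary keys are illustrative assumptions; any values controlling the learning process (as opposed to learned weights) would qualify:

```python
def adjust_hyperparameters(current, changes):
    """Return a new hyperparameter set with selected values changed,
    leaving the hyperparameters last used for training intact."""
    updated = dict(current)
    updated.update(changes)
    return updated


hparams = {"learning_rate": 1e-3, "batch_size": 64, "epochs": 20}
retrain_hparams = adjust_hyperparameters(hparams, {"learning_rate": 1e-4})
```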
- the deep learning model may be updated with a layer substitution.
- the deep learning model may be updated to include, replace, etc. one or more layers that are different from the particular layers.
- the deep learning model may be updated with layer fusing (e.g. combining two or more of the particular layers).
- the deep learning model may be updated to use input stacking. For example, particular inputs last used by the deep learning model may be changed, such as by stacking inputs. The stacked inputs may be used to artificially increase the feature counts of tensors in the deep learning model.
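The input-stacking idea can be sketched as follows. This is a hypothetical illustration: `stack_inputs` and the nested-list "tensors" are assumptions standing in for real tensor operations, showing only how stacking raises the per-position feature count:

```python
def stack_inputs(inputs):
    """Stack per-input feature lists so each position carries the
    features of every input (feature count = sum of input widths)."""
    return [sum(features, []) for features in zip(*inputs)]


a = [[1], [2], [3]]       # one feature per position
b = [[10], [20], [30]]    # one feature per position
stacked = stack_inputs([a, b])  # two features per position
```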
- the deep learning model may be updated to include changed code, such as high-level code (at a software level), or low level code (e.g. at a GPU level with GPU assembler code, or even machine code).
- any aspect(s) of the deep learning model may be updated to create the improved version of the deep learning model.
- the aspect(s) that are changed for updating the deep learning model may be selected automatically.
- the aspect(s) may be iteratively changed until the improved deep learning model is generated.
- the deep learning model may be considered to be improved from the last (or any prior) version of the deep learning model when any aspect, or any preselected aspect(s), of the deep learning model has improved, such as accuracy (e.g. ability to provide more accurate inferences which may improve an end-user experience), quality (e.g. quality of inferences), performance (e.g. improved speed, reduced resource consumption, etc.), etc.
- a version of the deep learning model resulting from any iteration of retraining may be considered “improved” when any improvement benchmark, or any preselected improvement benchmark(s), are met.
- the improvement benchmarks may be predefined (e.g. manually), for example as thresholds for each category of improvement (i.e. for each improvement metric).
- improvement metrics may be measured when the updated deep learning model is executed by different CPUs or GPUs, in which case the updated deep learning model may be considered “improved” for only those CPUs and/or GPUs that enabled the updated deep learning model to meet the improvement benchmark(s).
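The per-device benchmark check can be sketched as below. All names (`meets_benchmarks`, the metric keys, the thresholds) are assumptions chosen for illustration; the logic is simply that a model counts as "improved" only on devices where every preselected benchmark is met:

```python
def meets_benchmarks(metrics, benchmarks):
    """True if every benchmarked metric meets or exceeds its threshold."""
    return all(metrics.get(name, 0.0) >= threshold
               for name, threshold in benchmarks.items())


benchmarks = {"accuracy": 0.95, "throughput_fps": 60.0}

# Metrics measured when the updated model is executed on different devices.
per_device = {
    "gpu_x": {"accuracy": 0.97, "throughput_fps": 72.0},
    "gpu_y": {"accuracy": 0.96, "throughput_fps": 48.0},
}

# The model qualifies as "improved" only for devices meeting all benchmarks.
improved_on = [device for device, metrics in per_device.items()
               if meets_benchmarks(metrics, benchmarks)]
```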
- a client with a previous version of the deep learning model is determined.
- the previous version of the deep learning model may refer to any version of the deep learning model generated prior to the updated version of the deep learning model generated in operation 202 .
- the updated version of the deep learning model is automatically distributed to the client when the updated version of the deep learning model meets or exceeds one or more improvement benchmarks (as shown in operation 204 ).
- the client may be client 103 of FIG. 1 , for example.
- the improved version of the deep learning model may only be distributed to the client when the client includes one or more of those certain CPUs and/or GPUs. This may help ensure that the client is configured to be able to realize the improvements when executing the improved version of the deep learning model.
- the improved version of the deep learning model may be distributed to the client by communicating a copy of the improved version of the deep learning model to the client.
- the client may locally store, and thus locally execute, the copy of the improved version of the deep learning model.
- the present method 200 references distributing the improved version of the deep learning model to a particular client, the method 200 may be implemented in other embodiments to distribute the improved version of the deep learning model to multiple different clients (e.g. that each have a previous version of the deep learning model).
- the improved version of the deep learning model may be distributed to the client responsive to a particular trigger.
- the trigger may be the creation of the improved version of the deep learning model.
- the trigger may be a scheduled distribution.
- the trigger may be a request received by the client for an improved version of the deep learning model (e.g. as described in more detail below).
- the server may distribute the updated version of the deep learning model to the client.
- the method 200 may be implemented for the deep learning model for creating an improved version of the deep learning model that can be used by the client to perform inferencing operations.
- the present method 200 may allow the server to attempt a large number of different possible combinations of changes to find improvements.
- the method 200 may be repeated continually to provide ongoing deep learning model improvements that are then downloaded to the client to improve operations involving the deep learning model.
- the method 200 may be implemented for other deep learning models to create improved versions of those deep learning models that can be used by any number of different clients to provide other types of inferenced data.
- FIG. 3 illustrates a flowchart of a client method 300 for implementing an improved deep learning model that provides inferenced data to a local software application, in accordance with an embodiment.
- the method 300 may be performed by the client 103 of FIG. 1 .
- a deep learning model is stored.
- the deep learning model is usable for providing inferenced data to a software application (e.g. such as the deep learning model 102 used by the software application 104 of FIG. 1 ).
- the deep learning model may be stored locally (e.g. by the client 103 ).
- the deep learning model may be stored in a local repository with other deep learning models usable for providing other types of inferenced data to the software application or other software applications.
- the deep learning model is executed to perform inferencing operations and to provide inferenced data to a software application.
- the deep learning model and the software application may both execute locally.
- the software application provides input data to the deep learning model which processes the input data to generate one or more inferences (i.e. inferenced data) for the input data.
- the inferenced data is output by the deep learning model to the software application for use by functions, tasks, etc. of the software application.
- the software application may use the deep learning model as often as required while the deep learning model is stored and is thus accessible to the software application.
- various functions within the software application, or multiple executions of the same function may cause input data to be provided to the deep learning model for the purpose of obtaining the inferenced data.
- an updated version of the deep learning model is received.
- the improved version of the deep learning model may be received from a server (e.g. server 101 of FIG. 1 ).
- the improved version of the deep learning model may be received responsive to a trigger.
- the trigger may occur on the server side, and thus the improved version of the deep learning model may be provided to the client proactively.
- the trigger may be the creation of the improved version of the deep learning model at the server.
- the trigger may be a scheduled distribution at the server.
- the trigger may occur on the client side.
- the trigger may be scheduled, may be the initiated execution of the software application that uses the deep learning model, or may be a call to a feature API that causes execution of the deep learning model.
- responsive to the client-side trigger, the client may request an improved version of the deep learning model from the server.
- if the server determines, responsive to the request, that it has a version of the deep learning model that is updated from the version currently stored on the client, the server may distribute the updated version of the deep learning model to the client.
- the updated version of the deep learning model is executed to provide additional inferenced data to the software application.
- the updated version of the deep learning model may replace the last version of the deep learning model used by the software application (i.e. in operation 302 ).
- the software application may use the updated version of the deep learning model once received by the client.
- FIG. 4A illustrates a block diagram of a system 400 for updating a deep learning model that performs inferencing operations and provides inferenced data to a software application, in accordance with an embodiment. It should be noted that the definitions and/or descriptions provided with respect to the embodiments above may equally apply to the present description.
- a client 401 has installed thereon a software application 402 that uses one or more deep learning models stored in a local deep learning model store 403 .
- Each of the deep learning models may perform a different type of inferences and provide a different type of inferenced data, and thus may be usable (e.g. by the software application 402 and/or other software applications installed on the client) to obtain any needed inferenced data.
- a server 409 operates to update a deep learning model to create an updated version of the deep learning model 410 .
- the server receives research data 404 which includes a new training dataset 407 and/or a new deep learning model design 408 (reconfiguration).
- the research data 404 may be generated from a newly generated public dataset 405 and/or from offline information 406 received in association with the software application.
- the server 409 may update the deep learning model using manual training and tuning of the deep learning model by one or more users, and/or using automatic training and optimizing of the deep learning model by a neural network optimizer (not shown).
- the updated version of the deep learning model 410 is then distributed to the client 401 via a deep learning model update server 412 of a cloud service 411 .
- the client 401 may subscribe to the cloud service 411 to be provided access to deep learning models.
- each time the server 409 starts a new deep learning model training session, the metadata that describes the deep learning model, including all training hyperparameters, inferencing parameters, and the dataset, is stored either in a file or a database. This allows the deep learning model to be fully recreated at any time in the future.
- the server 409 can also use that metadata to conduct future experiments and to derive new deep learning models. At any point, the server 409 will likely have multiple deep learning models being trained and evaluated against improvement benchmarks.
- FIG. 4B illustrates a flowchart of the method of the client 401 of FIG. 4A , in accordance with an embodiment.
- a feature API is invoked (operation 451 ).
- the feature API may provide an interface to the deep learning model to allow the software application 402 to interface with the deep learning model.
- the client 401 determines whether the deep learning model has been updated since a last call made to the deep learning model by the software application 402 (decision 452 ). The client 401 may accomplish this by querying the local deep learning model store 403 for a latest stored version of the deep learning model.
- the client 401 runs the deep learning model (operation 454 ) with input data provided by the software application 402 , and returns inferenced data output by the deep learning model back to the software application 402 (operation 455 ).
- the client 401 loads the updated (improved) deep learning model from the local deep learning model store 403 (operation 453 ). This may be performed as a hot-swap (in real-time) during execution of the software application.
- the client 401 then runs the updated deep learning model (operation 454 ) with input data provided by the software application 402 , and returns inferenced data output by the deep learning model back to the software application 402 (operation 455 ).
- the client 401 is triggered (operation 456 ) to check for a deep learning model update (operation 457 ).
- the trigger may be caused by a schedule, or by the feature API invocation in operation 451 .
- the client 401 sends a request to the cloud service 411 for any updated version of the deep learning model.
- the cloud service 411 has access to an updated version of the deep learning model.
- the client 401 downloads the updated deep learning model (operation 458 ) and stores the new model in the local deep learning model store 403 (operation 459 ).
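The client-side flow of operations 451-455 (invoke the feature API, check the local store for a newer model version, hot-swap if one exists, then run inference) can be sketched in Python. The classes and the integer version scheme are assumptions for illustration, not part of the disclosure:

```python
class ModelStore:
    """Minimal stand-in for the local deep learning model store 403."""
    def __init__(self):
        self.versions = {}                  # model name -> (version, model callable)

    def latest(self, name):
        return self.versions.get(name)

    def put(self, name, version, model):
        self.versions[name] = (version, model)


class FeatureAPI:
    """Hypothetical sketch of the feature API: on each call, check the
    store for an updated model, hot-swap it, then run inference."""
    def __init__(self, store, name):
        self.store = store
        self.name = name
        self.loaded_version, self.model = store.latest(name)

    def infer(self, input_data):
        version, model = self.store.latest(self.name)   # decision 452: newer version?
        if version != self.loaded_version:
            self.loaded_version, self.model = version, model   # operation 453: hot-swap
        return self.model(input_data)                   # operations 454-455: run and return
```

Because the swap happens inside the API call, the hosting software application keeps running while the improved model replaces the old one, mirroring the real-time hot-swap described above.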
- Deep neural networks, including deep learning models, developed on processors have been used for diverse use cases, from self-driving cars to faster drug development, from automatic image captioning in online image databases to smart real-time language translation in video chat applications.
- Deep learning is a technique that models the neural learning process of the human brain, continually learning, continually getting smarter, and delivering more accurate results more quickly over time.
- a child is initially taught by an adult to correctly identify and classify various shapes, eventually being able to identify shapes without any coaching.
- a deep learning or neural learning system needs to be trained in object recognition and classification for it to get smarter and more efficient at identifying basic objects, occluded objects, etc., while also assigning context to objects.
- neurons in the human brain look at various inputs that are received, importance levels are assigned to each of these inputs, and output is passed on to other neurons to act upon.
- An artificial neuron or perceptron is the most basic model of a neural network.
- a perceptron may receive one or more inputs that represent various features of an object that the perceptron is being trained to recognize and classify, and each of these features is assigned a certain weight based on the importance of that feature in defining the shape of an object.
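A minimal sketch of such a perceptron, with a step activation and hypothetical weights (the threshold and values are illustrative only):

```python
def perceptron(inputs, weights, bias, threshold=0.0):
    """Weighted sum of feature inputs followed by a step activation.

    Each weight encodes how important its feature is to the decision."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if s > threshold else 0
```

For example, with weights `[0.6, 0.4]` and bias `-0.5`, the perceptron fires only when enough heavily weighted features are present in the input.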
- a deep neural network (DNN) model includes multiple layers of many connected nodes (e.g., perceptrons, Boltzmann machines, radial basis functions, convolutional layers, etc.) that can be trained with enormous amounts of input data to quickly solve complex problems with high accuracy.
- a first layer of the DNN model breaks down an input image of an automobile into various sections and looks for basic patterns such as lines and angles.
- the second layer assembles the lines to look for higher level patterns such as wheels, windshields, and mirrors.
- the next layer identifies the type of vehicle, and the final few layers generate a label for the input image, identifying the model of a specific automobile brand.
- the DNN can be deployed and used to identify and classify objects or patterns in a process known as inference.
- Inference is the process through which a DNN extracts useful information from a given input.
- examples of inference include identifying handwritten numbers on checks deposited at automated teller machines (ATMs), identifying images of friends in photos, delivering movie recommendations to over fifty million users, identifying and classifying different types of automobiles, pedestrians, and road hazards in driverless cars, or translating human speech in real time.
- Training complex neural networks requires massive amounts of parallel computing performance, including floating-point multiplications and additions. Inferencing is less compute-intensive than training, being a latency-sensitive process where a trained neural network is applied to new inputs it has not seen before to classify images, translate speech, and generally infer new information.
- a deep learning or neural learning system needs to be trained to generate inferences from input data. Details regarding inference and/or training logic 515 for a deep learning or neural learning system are provided below in conjunction with FIGS. 5A and/or 5B .
- inference and/or training logic 515 may include, without limitation, a data storage 501 to store forward and/or output weight and/or input/output data corresponding to neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments.
- data storage 501 stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during forward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments.
- any portion of data storage 501 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory.
- any portion of data storage 501 may be internal or external to one or more processors or other hardware logic devices or circuits.
- data storage 501 may be cache memory, dynamic random access memory (“DRAM”), static random access memory (“SRAM”), non-volatile memory (e.g., Flash memory), or other storage.
- choice of whether data storage 501 is internal or external to a processor, for example, or comprised of DRAM, SRAM, Flash or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors.
- inference and/or training logic 515 may include, without limitation, a data storage 505 to store backward and/or output weight and/or input/output data corresponding to neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments.
- data storage 505 stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during backward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments.
- any portion of data storage 505 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory. In at least one embodiment, any portion of data storage 505 may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, data storage 505 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., Flash memory), or other storage.
- choice of whether data storage 505 is internal or external to a processor, for example, or comprised of DRAM, SRAM, Flash or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors.
- data storage 501 and data storage 505 may be separate storage structures. In at least one embodiment, data storage 501 and data storage 505 may be same storage structure. In at least one embodiment, data storage 501 and data storage 505 may be partially same storage structure and partially separate storage structures. In at least one embodiment, any portion of data storage 501 and data storage 505 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory.
- inference and/or training logic 515 may include, without limitation, one or more arithmetic logic unit(s) (“ALU(s)”) 510 to perform logical and/or mathematical operations based, at least in part on, or indicated by, training and/or inference code, result of which may result in activations (e.g., output values from layers or neurons within a neural network) stored in an activation storage 520 that are functions of input/output and/or weight parameter data stored in data storage 501 and/or data storage 505 .
- activations stored in activation storage 520 are generated according to linear algebraic and or matrix-based mathematics performed by ALU(s) 510 in response to performing instructions or other code, wherein weight values stored in data storage 505 and/or data 501 are used as operands along with other values, such as bias values, gradient information, momentum values, or other parameters or hyperparameters, any or all of which may be stored in data storage 505 or data storage 501 or another storage on or off-chip.
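As a pure-Python stand-in for the matrix-based mathematics described above (stored weights and bias values as operands, activations as the result), one layer's worth of ALU work might look like the following; the ReLU nonlinearity is an assumption for illustration, not something the disclosure specifies:

```python
def dense_layer(inputs, weights, biases, activation=lambda v: max(v, 0.0)):
    """One layer's computation: matrix-vector product of stored weights
    with incoming activations, plus bias, through a nonlinearity."""
    outputs = []
    for row, b in zip(weights, biases):
        v = sum(w * x for w, x in zip(row, inputs)) + b   # linear-algebra step
        outputs.append(activation(v))                     # activation stored for next layer
    return outputs
```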
- ALU(s) 510 are included within one or more processors or other hardware logic devices or circuits, whereas in another embodiment, ALU(s) 510 may be external to a processor or other hardware logic device or circuit that uses them (e.g., a co-processor). In at least one embodiment, ALUs 510 may be included within a processor's execution units or otherwise within a bank of ALUs accessible by a processor's execution units either within same processor or distributed between different processors of different types (e.g., central processing units, graphics processing units, fixed function units, etc.).
- data storage 501 , data storage 505 , and activation storage 520 may be on same processor or other hardware logic device or circuit, whereas in another embodiment, they may be in different processors or other hardware logic devices or circuits, or some combination of same and different processors or other hardware logic devices or circuits.
- any portion of activation storage 520 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory.
- inferencing and/or training code may be stored with other code accessible to a processor or other hardware logic or circuit and fetched and/or processed using a processor's fetch, decode, scheduling, execution, retirement and/or other logical circuits.
- activation storage 520 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., Flash memory), or other storage. In at least one embodiment, activation storage 520 may be completely or partially within or external to one or more processors or other logical circuits. In at least one embodiment, choice of whether activation storage 520 is internal or external to a processor, for example, or comprised of DRAM, SRAM, Flash or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors. In at least one embodiment, inference and/or training logic 515 illustrated in FIG. 5A may be used in conjunction with an application-specific integrated circuit (ASIC), or with central processing unit (CPU) hardware, graphics processing unit (GPU) hardware, or other hardware, such as field programmable gate arrays (FPGAs).
- FIG. 5B illustrates inference and/or training logic 515 , according to at least one embodiment.
- inference and/or training logic 515 may include, without limitation, hardware logic in which computational resources are dedicated or otherwise exclusively used in conjunction with weight values or other information corresponding to one or more layers of neurons within a neural network.
- inference and/or training logic 515 illustrated in FIG. 5B may be used in conjunction with an application-specific integrated circuit (ASIC), such as the Tensor Processing Unit (TPU) from Google, an inference processing unit (IPU) from Graphcore™, or a Nervana® (e.g., “Lake Crest”) processor from Intel Corp.
- inference and/or training logic 515 includes, without limitation, data storage 501 and data storage 505 , which may be used to store weight values and/or other information, including bias values, gradient information, momentum values, and/or other parameter or hyperparameter information.
- each of data storage 501 and data storage 505 is associated with a dedicated computational resource, such as computational hardware 502 and computational hardware 506, respectively.
- each of computational hardware 502 and computational hardware 506 comprises one or more ALUs that perform mathematical functions, such as linear algebraic functions, only on information stored in data storage 501 and data storage 505, respectively, the result of which is stored in activation storage 520.
- each of data storage 501 and 505 and corresponding computational hardware 502 and 506 correspond to different layers of a neural network, such that resulting activation from one “storage/computational pair 501 / 502 ” of data storage 501 and computational hardware 502 is provided as an input to next “storage/computational pair 505 / 506 ” of data storage 505 and computational hardware 506 , in order to mirror conceptual organization of a neural network.
- each of storage/computational pairs 501 / 502 and 505 / 506 may correspond to more than one neural network layer.
- additional storage/computation pairs (not shown) subsequent to or in parallel with storage computation pairs 501 / 502 and 505 / 506 may be included in inference and/or training logic 515 .
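A toy model of such storage/computational pairs, assuming simple linear layers for illustration, in which the resulting activation from one pair feeds the next to mirror the network's layer organization:

```python
class StorageComputePair:
    """Hypothetical model of one storage/computational pair: weights held
    in dedicated storage, compute applied only to that storage."""
    def __init__(self, weights, biases):
        self.weights, self.biases = weights, biases   # plays the role of data storage 501/505

    def forward(self, inputs):                        # plays the role of computational hardware 502/506
        return [sum(w * x for w, x in zip(row, inputs)) + b
                for row, b in zip(self.weights, self.biases)]


def run_pipeline(pairs, inputs):
    """Activation from each pair is provided as input to the next pair."""
    for pair in pairs:
        inputs = pair.forward(inputs)
    return inputs
```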
- FIG. 6 illustrates another embodiment for training and deployment of a deep neural network.
- untrained neural network 606 is trained using a training dataset 602 .
- training framework 604 is a PyTorch framework, whereas in other embodiments, training framework 604 is a TensorFlow, Boost, Caffe, Microsoft Cognitive Toolkit/CNTK, MXNet, Chainer, Keras, Deeplearning4j, or other training framework.
- training framework 604 trains an untrained neural network 606, using processing resources described herein, to generate a trained neural network 608.
- weights may be chosen randomly or by pre-training using a deep belief network.
- training may be performed in either a supervised, partially supervised, or unsupervised manner.
- untrained neural network 606 is trained using supervised learning, wherein training dataset 602 includes an input paired with a desired output for an input, or where training dataset 602 includes input having known output and the output of the neural network is manually graded.
- untrained neural network 606, trained in a supervised manner, processes inputs from training dataset 602 and compares resulting outputs against a set of expected or desired outputs.
- errors are then propagated back through untrained neural network 606 .
- training framework 604 adjusts weights that control untrained neural network 606 .
- training framework 604 includes tools to monitor how well untrained neural network 606 is converging towards a model, such as trained neural network 608, suitable for generating correct answers, such as in result 614, based on known input data, such as new data 612.
- training framework 604 trains untrained neural network 606 repeatedly while adjusting weights to refine an output of untrained neural network 606 using a loss function and adjustment algorithm, such as stochastic gradient descent.
- training framework 604 trains untrained neural network 606 until untrained neural network 606 achieves a desired accuracy.
- trained neural network 608 can then be deployed to implement any number of machine learning operations.
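The supervised loop described above (forward pass, compare against the desired output, propagate the error back, adjust weights via stochastic gradient descent until the desired accuracy) can be sketched with a one-parameter linear model standing in for the network; the dataset, learning rate, and epoch count are illustrative assumptions:

```python
def train_supervised(dataset, lr=0.1, epochs=200):
    """Minimal supervised-training loop in the spirit of FIG. 6.

    A single-weight linear model stands in for untrained network 606."""
    w = 0.0                                  # initial (untrained) weight
    for _ in range(epochs):
        for x, target in dataset:            # each input paired with a desired output
            pred = w * x                     # forward pass
            error = pred - target            # compare against desired output
            w -= lr * error * x              # SGD step on squared-error loss
    return w
```

Trained on pairs drawn from the rule "output is twice the input", the returned weight converges toward 2.0, the point at which the loss can no longer be meaningfully reduced.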
- untrained neural network 606 is trained using unsupervised learning, wherein untrained neural network 606 attempts to train itself using unlabeled data.
- unsupervised learning training dataset 602 will include input data without any associated output data or “ground truth” data.
- untrained neural network 606 can learn groupings within training dataset 602 and can determine how individual inputs are related to training dataset 602.
- unsupervised training can be used to generate a self-organizing map, which is a type of trained neural network 608 capable of performing operations useful in reducing dimensionality of new data 612 .
- unsupervised training can also be used to perform anomaly detection, which allows identification of data points in a new dataset 612 that deviate from normal patterns of new dataset 612 .
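One minimal, hypothetical form of such anomaly detection: summarize the unlabeled data by its mean and standard deviation, then flag points that deviate too far from that normal pattern. The three-sigma threshold is an assumption for illustration, not from the disclosure:

```python
def fit_normal_profile(samples):
    """Unsupervised step: summarize unlabeled data by mean and spread."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((s - mean) ** 2 for s in samples) / n
    return mean, var ** 0.5


def is_anomaly(value, mean, std, k=3.0):
    """Flag points deviating more than k standard deviations from normal."""
    return abs(value - mean) > k * std
```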
- semi-supervised learning may be used, which is a technique in which training dataset 602 includes a mix of labeled and unlabeled data.
- training framework 604 may be used to perform incremental learning, such as through transfer learning techniques.
- incremental learning enables trained neural network 608 to adapt to new data 612 without forgetting knowledge instilled within the network during initial training.
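A hedged sketch of the incremental-learning idea: weights of early (frozen) layers preserve the knowledge from initial training, while only the remaining layers adapt to new data. The layer names, gradient values, and update rule below are illustrative assumptions only:

```python
def transfer_update(base_weights, new_gradients, frozen, lr=0.01):
    """One hypothetical incremental-learning step over per-layer weights.

    Frozen layers keep their original weights; the rest take a gradient
    step toward fitting the new data."""
    updated = {}
    for layer, w in base_weights.items():
        if layer in frozen:
            updated[layer] = w                   # knowledge preserved
        else:
            g = new_gradients.get(layer, 0.0)
            updated[layer] = w - lr * g          # adapt to new data
    return updated
```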
- FIG. 7 illustrates an example data center 700 , in which at least one embodiment may be used.
- data center 700 includes a data center infrastructure layer 710 , a framework layer 720 , a software layer 730 and an application layer 740 .
- data center infrastructure layer 710 may include a resource orchestrator 712 , grouped computing resources 714 , and node computing resources (“node C.R.s”) 716 ( 1 )- 716 (N), where “N” represents any whole, positive integer.
- node C.R.s 716(1)-716(N) may include, but are not limited to, any number of central processing units (“CPUs”) or other processors (including accelerators, field programmable gate arrays (FPGAs), graphics processors, etc.), memory devices (e.g., dynamic random access memory), storage devices (e.g., solid state or disk drives), network input/output (“NW I/O”) devices, network switches, virtual machines (“VMs”), power modules, and cooling modules, etc.
- one or more node C.R.s from among node C.R.s 716 ( 1 )- 716 (N) may be a server having one or more of above-mentioned computing resources.
- grouped computing resources 714 may include separate groupings of node C.R.s housed within one or more racks (not shown), or many racks housed in data centers at various geographical locations (also not shown).
- separate groupings of node C.R.s within grouped computing resources 714 may include grouped compute, network, memory or storage resources that may be configured or allocated to support one or more workloads.
- several node C.R.s including CPUs or processors may be grouped within one or more racks to provide compute resources to support one or more workloads.
- one or more racks may also include any number of power modules, cooling modules, and network switches, in any combination.
- resource orchestrator 712 may configure or otherwise control one or more node C.R.s 716(1)-716(N) and/or grouped computing resources 714.
- resource orchestrator 712 may include a software design infrastructure (“SDI”) management entity for data center 700.
- resource orchestrator may include hardware, software or some combination thereof.
- framework layer 720 includes a job scheduler 732 , a configuration manager 734 , a resource manager 736 and a distributed file system 738 .
- framework layer 720 may include a framework to support software 732 of software layer 730 and/or one or more application(s) 742 of application layer 740 .
- software 732 or application(s) 742 may respectively include web-based service software or applications, such as those provided by Amazon Web Services, Google Cloud and Microsoft Azure.
- framework layer 720 may be, but is not limited to, a type of free and open-source software web application framework such as Apache SparkTM (hereinafter “Spark”) that may utilize distributed file system 738 for large-scale data processing (e.g., “big data”).
- job scheduler 732 may include a Spark driver to facilitate scheduling of workloads supported by various layers of data center 700 .
- configuration manager 734 may be capable of configuring different layers such as software layer 730 and framework layer 720 including Spark and distributed file system 738 for supporting large-scale data processing.
- resource manager 736 may be capable of managing clustered or grouped computing resources mapped to or allocated for support of distributed file system 738 and job scheduler 732 .
- clustered or grouped computing resources may include grouped computing resource 714 at data center infrastructure layer 710 .
- resource manager 736 may coordinate with resource orchestrator 712 to manage these mapped or allocated computing resources.
- software 732 included in software layer 730 may include software used by at least portions of node C.R.s 716 ( 1 )- 716 (N), grouped computing resources 714 , and/or distributed file system 738 of framework layer 720 .
- one or more types of software may include, but are not limited to, Internet web page search software, e-mail virus scan software, database software, and streaming video content software.
- application(s) 742 included in application layer 740 may include one or more types of applications used by at least portions of node C.R.s 716 ( 1 )- 716 (N), grouped computing resources 714 , and/or distributed file system 738 of framework layer 720 .
- one or more types of applications may include, but are not limited to, any number of a genomics application, a cognitive compute, and a machine learning application, including training or inferencing software, machine learning framework software (e.g., PyTorch, TensorFlow, Caffe, etc.) or other machine learning applications used in conjunction with one or more embodiments.
- any of configuration manager 734 , resource manager 736 , and resource orchestrator 712 may implement any number and type of self-modifying actions based on any amount and type of data acquired in any technically feasible fashion.
- self-modifying actions may relieve a data center operator of data center 700 from making possibly bad configuration decisions and possibly avoid underutilized and/or poorly performing portions of a data center.
- data center 700 may include tools, services, software or other resources to train one or more machine learning models or predict or infer information using one or more machine learning models according to one or more embodiments described herein.
- a machine learning model may be trained by calculating weight parameters according to a neural network architecture using software and computing resources described above with respect to data center 700 .
- trained machine learning models corresponding to one or more neural networks may be used to infer or predict information using resources described above with respect to data center 700 by using weight parameters calculated through one or more training techniques described herein.
- data center may use CPUs, application-specific integrated circuits (ASICs), GPUs, FPGAs, or other hardware to perform training and/or inferencing using above-described resources.
- one or more software and/or hardware resources described above may be configured as a service to allow users to train or perform inferencing of information, such as image recognition, speech recognition, or other artificial intelligence services.
- Inference and/or training logic 515 are used to perform inferencing and/or training operations associated with one or more embodiments.
- inference and/or training logic 515 may be used in system FIG. 7 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.
- an embodiment may provide a deep learning model usable for performing inferencing operations and for providing inferenced data, where the deep learning model is stored (partially or wholly) in one or both of data storage 501 and 505 in inference and/or training logic 515 as depicted in FIGS. 5A and 5B .
- Training and deployment of the deep learning model may be performed as depicted in FIG. 6 and described herein.
- the deep learning model when untrained, may subsequently be trained using training framework 604 .
- the deep learning model when previously trained, may be updated to create an updated version of the deep learning model also using framework 604 . Further, the updated version of a deep learning model may be distributed to a client for use in providing the inferenced data. Distribution of the trained or re-trained deep learning model may be performed using one or more servers in a data center 700 as depicted in FIG. 7 and described herein.
Description
- This application claims the benefit of U.S. Provisional Application No. 62/717,735, titled “CONTINUOUS OPTIMIZATION AND UPDATE SYSTEM FOR DEEP LEARNING MODELS,” filed Aug. 10, 2018, the entire contents of which is incorporated herein by reference.
- This application is related to co-pending U.S. application Ser. No. ______, titled “DEEP LEARNING MODEL EXECUTION USING TAGGED DATA” (Attorney Ref: NVIDP1276/18-SC-0202US01) filed Aug. ______, 2019, the entire contents of which is incorporated herein by reference.
- This application is related to co-pending U.S. application Ser. No. ______, titled “AUTOMATIC DATASET CREATION USING SOFTWARE TAGS” (Attorney Ref: NVIDP1277/18-SC-0197US01) and filed Aug. ______, 2019, the entire contents of which is incorporated herein by reference.
- The present disclosure relates to deep learning used by software applications.
- Traditionally, a software application is developed, tested, and then published for use to end users. Any subsequent update made to the software application is generally in the form of a human-programmed modification made to the code in the software application itself, and further only becomes usable once tested and published by an application developer and publisher, and installed by end users having the previous version of the software application. This typical software application lifecycle causes delays not only in generating improvements to software applications, but also in making those improvements accessible to end users.
- There is a need for addressing these issues and/or other issues associated with the prior art.
- A method, computer readable medium, and system are disclosed for improving deep learning models that perform inferencing operations to provide inferenced data to software applications. In an embodiment, a deep learning model usable for performing inferencing operations and for providing inferenced data is stored. Additionally, the deep learning model is updated to create an updated version of the deep learning model. Further, the updated version of the deep learning model is distributed to a client for use in providing the inferenced data.
- In another embodiment, a deep learning model is stored. Additionally, the deep learning model is executed to perform inferencing operations and to provide inferenced data to a software application. Further, an updated version of the deep learning model is received. Still yet, the updated version of the deep learning model is executed to provide additional inferenced data to the software application.
- FIG. 1 illustrates a block diagram of a system including a server that provisions a deep learning model to a client for use by a software application installed on the client, in accordance with an embodiment.
- FIG. 2 illustrates a flowchart of a server method for improving a deep learning model for use by a client, in accordance with an embodiment.
- FIG. 3 illustrates a flowchart of a client method for implementing an improved deep learning model that provides inferenced data to a local software application, in accordance with an embodiment.
- FIG. 4A illustrates a block diagram of a system 400 for updating a deep learning model that performs inferencing operations and provides inferenced data to a software application, in accordance with an embodiment.
- FIG. 4B illustrates a flowchart of the method of the client of FIG. 4A, in accordance with an embodiment.
- FIG. 5A illustrates inference and/or training logic, according to at least one embodiment.
- FIG. 5B illustrates inference and/or training logic, according to at least one embodiment.
- FIG. 6 illustrates training and deployment of a neural network, according to at least one embodiment.
- FIG. 7 illustrates an example data center system, according to at least one embodiment.
FIG. 1 illustrates a block diagram of asystem 100 including aserver 101 that provisions adeep learning model 102 to aclient 103 for use by a software application 104 installed on theclient 103, in accordance with an embodiment. - With respect to the present description, the
server 101 may be any computing device, virtualized computing device, or combination of devices, capable of communicating with theclient 103 over a wired or wireless connection, for the purpose of provisioning thedeep learning model 102 to theclient 103 for use by a software application 104 installed on theclient 103. For example, theserver 101 may include a hardware memory (e.g. random access memory (RAM), etc.) for storing thedeep learning model 102 and a hardware processor (e.g. central processing unit (CPU), graphics processing unit (GPU), etc.) for provisioning thedeep learning model 102 from the memory to theclient 103 over the wired or wireless connection. Theserver 101 may provision thedeep learning model 102 to theclient 103 by sending a copy of thedeep learning model 102 over the wired or wireless connection to theclient 103. - Also with respect to the present description, the
client 103 may be any computing device (including, without limitation, computing devices that are wholly or partially virtualized) capable of communicating with theserver 101 over the wired or wireless connection, for the purpose of receiving from theserver 101 thedeep learning model 102 for use by the software application 104 installed on theclient 103. Thus, theclient 103 may not necessarily be an end-user device (e.g. personal computer, laptop, mobile phone, etc.) but may also be a server or other cloud-based computer system having the software application 104 installed thereon. In the case where theclient 103 is a cloud-based computer system, output of the software application 104 may optionally be streamed or otherwise communicated to an end-user device. Generally, theclient 103 may include a memory for storing thedeep learning model 102 and a processor by which the software application 104 installed on theclient 103 uses thedeep learning model 102 for obtaining inferenced data. By storing a copy of thedeep learning model 102 at the client (e.g. on a hard drive of the client), the client executes thedeep learning model 102 locally. - The
deep learning model 102 is a machine learned network (e.g. deep neural network) that is trained to perform inferencing operations and to provide inferenced data from input data. The deep learning model 102 may be trained using supervised or unsupervised training techniques. Optionally, the server 101 may be used to perform the training of the deep learning model 102, or may receive the already trained deep learning model 102 from another device. - The
deep learning model 102 may be trained for performing any desired type of inferencing and making any desired type of inferences. However, in the present embodiment, the deep learning model 102 outputs inferences that are usable by the software application 104 installed on the client 103. It should be noted that the deep learning model 102 may similarly be used by other software applications which may be installed on the client 103 or other clients, and thus may not necessarily be specifically trained for use by the software application 104 but instead may be trained more generically for use by multiple different software applications. In any case, the deep learning model 102 may not be coded within the software application 104 itself, but may be accessible to the software application 104 as external functionality (e.g. as a software patch) via an application programming interface (API). As a result, the deep learning model 102 may not necessarily be developed and provided by the same developer of the software application 104 but instead may be developed and provided by a third-party developer. - In the present embodiment, the software application 104 installed on the
client 103 provides input data to the deep learning model 102 which processes the input data to perform inferencing and/or to return one or more inferences (i.e. inferenced data) for the input data. Accordingly, the deep learning model 102 is trained to process the input data and make inferences therefrom. The inferenced data is output by the deep learning model 102 to the software application 104 for use by functions, tasks, etc. of the software application 104. - There are various use cases for the
system 100 described above. In one embodiment, the software application 104 may be a video game, a virtual reality application, an image classification or other image processing application, a sensor data analysis application, or another graphics-related computer program. In this embodiment, the deep learning model 102 may provide certain image-related inferences, such as providing from an input image or other input data an anti-aliased image, an image with upscaled resolution, a denoised image, and/or any other output image that is modified in at least one respect from the input image or other input data. As another example, the deep learning model 102 may provide inference output that can be used to apply certain video-related effects, such as providing from input video or other input data a slow-motion version of the input video or other input data, a super sampling of the input video or other input data, etc. - In another embodiment, the software application 104 may be a voice recognition application or other audio-related computer program. In this embodiment, the
deep learning model 102 may provide inference output that can be used to apply certain audio-related effects, such as providing, from input audio or other input data, a language translation, a voice-recognized command, and/or any other output that is inferenced from the input audio or other input data. - The
system 100 configuration described above enables improvements to be made to the deep learning model 102 without necessarily requiring any changes within the software application 104 itself. Thus, the software application 104, and in turn any end-user or other system using the software application 104, may inherently benefit from the improvements made to the deep learning model 102, without the tradeoff of the usual delays associated with updating the software application 104 itself. All that may be required is that the copy of the deep learning model 102 on the client 103 be updated to the improved version. - For example, when the
deep learning model 102 is improved to be faster, to be less computation-intensive, and/or to provide more accurate inferences, the software application 104 may inherently be improved by way of its use of the deep learning model 102 during execution thereof. For example, the software application 104 may likewise provide faster results, results requiring fewer computations, and/or more accurate results as a result of its use of the improved deep learning model 102. - The embodiments below describe systems and methods specifically for improving deep learning models that provide inferenced data to software applications. It should be noted that the systems and methods described below may be implemented in the context of the
system 100 of FIG. 1. -
FIG. 2 illustrates a flowchart of a server method 200 for improving a deep learning model for use by a client, in accordance with an embodiment. Accordingly, in one embodiment, the method 200 may be performed by the server 101 of FIG. 1. - In
operation 201, a deep learning model is stored. In the context of the present method 200, the deep learning model is usable for performing inferencing operations and/or providing inferenced data to a software application (e.g. such as the deep learning model 102 used by the software application 104 of FIG. 1). The deep learning model may be stored locally (e.g. by the server 101). In one embodiment, the deep learning model may be stored in a local repository with other deep learning models usable for performing inferencing operations and/or providing other types of inferenced data to the software application or other software applications. - In
operation 202, the deep learning model is updated to create an improved (updated) version of the deep learning model. It should be noted that any aspect(s) of the deep learning model may be updated to create the improved version of the deep learning model. In any case though, the update to the deep learning model improves (e.g. optimizes) the deep learning model in at least one respect. - In particular, the deep learning model may be updated by retraining the deep learning model and/or reconfiguring the deep learning model with new parameters (e.g., weights) or hyperparameters. The updating may be performed automatically by software and/or other neural networks. Thus, the process of updating the deep learning model to create an improved version thereof may be performed without requiring user intervention.
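The two update paths named above, retraining on a changed dataset or reconfiguring (e.g. new hyperparameters) and retraining, can be sketched as follows. This is an illustrative assumption about one possible implementation, not the patent's own code; the one-weight least-squares "model" and all function names are hypothetical, chosen only to keep the sketch self-contained:

```python
# Hypothetical sketch of operation 202: produce a new model version either by
# retraining on a changed dataset or by retraining under changed hyperparameters.
# The "model" is a one-weight least-squares fit, standing in for a real network.

def train(dataset, learning_rate=0.1, epochs=100):
    """Fit y = w * x by gradient descent; stands in for full model training."""
    w = 0.0
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in dataset) / len(dataset)
        w -= learning_rate * grad
    return {"weights": w, "learning_rate": learning_rate}

def update_model(model, dataset, new_hyperparams=None):
    """Create a new model version by retraining, optionally reconfigured."""
    lr = (new_hyperparams or {}).get("learning_rate", model["learning_rate"])
    return train(dataset, learning_rate=lr)

base = train([(1.0, 2.0), (2.0, 4.0)])                   # learns w close to 2
updated = update_model(base, [(1.0, 3.0), (2.0, 6.0)])   # changed dataset: w close to 3
```

Here the updated model is simply whatever the retraining produces; whether it qualifies as "improved" is a separate benchmark test, discussed later in the method.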
- In one embodiment, as noted above, the deep learning model may be retrained, specifically using a changed dataset. For example, where the deep learning model was last trained using a particular dataset, the deep learning model may be retrained using a dataset that is changed from the particular dataset. The changed dataset may include additional data that was not included in the particular dataset that was last used to train the deep learning model and/or may remove data that was included in the particular dataset.
- In another embodiment, as noted above, the deep learning model may be updated with one or more reconfigurations being made to the deep learning model. With respect to the option to reconfigure the deep learning model, the deep learning model may be updated according to a hyperparameter adjustment. In the context of the present description, a hyperparameter refers to a parameter whose value is used to control the learning process for the deep learning model (as opposed to the values of other parameters that are learned). For example, where the deep learning model was last trained according to a particular hyperparameter or a particular combination of hyperparameters, the deep learning model may be retrained according to one or more hyperparameters that are changed from the particular hyperparameter(s).
- Further with respect to the option to reconfigure the deep learning model, the deep learning model may be updated with a layer substitution. For example, where the deep learning model included multiple particular layers, the deep learning model may be updated to include, replace, etc. one or more layers that are different from the particular layers. Similarly, the deep learning model may be updated with layer fusing (e.g. combining two or more of the particular layers).
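Layer fusing in particular has a compact illustration for the purely linear case (fusing is exact only when no nonlinearity sits between the layers, which is an assumption of this sketch): two consecutive linear layers collapse into a single layer by multiplying their weight matrices, giving identical output with fewer operations at inference time.

```python
# Sketch of "layer fusing" for two 2x2 linear layers without activations:
# y = W2 @ (W1 @ x) collapses into one layer W = W2 @ W1.

def matmul(a, b):
    """Multiply two 2x2 matrices represented as nested lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def matvec(m, v):
    """Apply a 2x2 matrix to a 2-vector."""
    return [sum(m[i][j] * v[j] for j in range(2)) for i in range(2)]

W1 = [[1.0, 2.0], [0.0, 1.0]]
W2 = [[0.5, 0.0], [1.0, 1.0]]
fused = matmul(W2, W1)               # one layer replacing two

x = [3.0, 4.0]
two_layer = matvec(W2, matvec(W1, x))
one_layer = matvec(fused, x)         # identical output, fewer operations
```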
- Also with respect to the option to reconfigure the deep learning model, the deep learning model may be updated to use input stacking. For example, particular inputs last used by the deep learning model may be changed, such as by stacking inputs. The stacked inputs may be used to artificially increase the feature counts of tensors in the deep learning model. In other embodiments with respect to the option to reconfigure the deep learning model, the deep learning model may be updated to include changed code, such as high-level code (at a software level), or low level code (e.g. at a GPU level with GPU assembler code, or even machine code).
- As noted above, any aspect(s) of the deep learning model, such as any combination of the embodiments mentioned above, may be updated to create the improved version of the deep learning model. As an option, the aspect(s) that are changed for updating the deep learning model may be selected automatically. For example, the aspect(s) may be iteratively changed until the improved deep learning model is generated.
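The automatic, iterative selection of changed aspects might be sketched as a simple search loop; the scoring function and candidate list below are hypothetical stand-ins for real training-and-evaluation runs:

```python
# Hypothetical sketch of iterating over candidate changes until an improved
# model results; in practice each evaluation would be a full train-and-test run.

def evaluate(candidate):
    """Stand-in scoring function; higher is better, peaking at lr = 0.3."""
    return 1.0 - abs(candidate["learning_rate"] - 0.3)

def improve(model, candidate_changes):
    best, best_score = model, evaluate(model)
    for change in candidate_changes:        # iterate over changed aspects
        candidate = {**model, **change}
        score = evaluate(candidate)
        if score > best_score:              # keep only genuine improvements
            best, best_score = candidate, score
    return best

current = {"learning_rate": 0.1}
improved = improve(current, [{"learning_rate": lr} for lr in (0.05, 0.2, 0.3, 0.5)])
```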
- With respect to the present description, the deep learning model may be considered to be improved from the last (or any prior) version of the deep learning model when any aspect, or any preselected aspect(s), of the deep learning model has improved, such as accuracy (e.g. ability to provide more accurate inferences which may improve an end-user experience), quality (e.g. quality of inferences), performance (e.g. improved speed, reduced resource consumption, etc.), etc. A version of the deep learning model resulting from any iteration of retraining may be considered “improved” when any improvement benchmark, or any preselected improvement benchmark(s), are met. The improvement benchmarks may be predefined (e.g. manually), for example as thresholds for each category of improvement (i.e. accuracy, quality, and/or performance) or even sub-category of improvement (e.g. improved speed, reduced resource consumption). As an option, improvement metrics may be measured when the updated deep learning model is executed by different CPUs or GPUs, in which case the updated deep learning model may be considered “improved” for only those CPUs and/or GPUs that enabled the updated deep learning model to meet the improvement benchmark(s).
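One plausible reading of the benchmark test, with illustrative metric names and thresholds that are assumptions rather than values from the patent, is that a candidate version counts as improved only when every preselected threshold is met:

```python
# Hypothetical improvement-benchmark check: the metric names and thresholds
# are illustrative; per the text, benchmarks may be predefined per category
# (accuracy, quality, performance) or sub-category.

BENCHMARKS = {
    "accuracy": 0.90,      # minimum acceptable accuracy
    "latency_ms": 50.0,    # maximum acceptable inference latency
}

def is_improved(metrics, benchmarks=BENCHMARKS):
    """True only if every preselected benchmark is met."""
    return (metrics["accuracy"] >= benchmarks["accuracy"]
            and metrics["latency_ms"] <= benchmarks["latency_ms"])

ok = is_improved({"accuracy": 0.93, "latency_ms": 41.0})   # meets both
bad = is_improved({"accuracy": 0.95, "latency_ms": 72.0})  # too slow
```

Per the option described above, such a check could be run once per target CPU or GPU, marking the version "improved" only for the devices on which the thresholds were met.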
- In
operation 203, a client with a previous version of the deep learning model is determined. The previous version of the deep learning model may refer to any version of the deep learning model generated prior to the updated version of the deep learning model generated in operation 202. - Once the deep learning model is updated to create the improved version of the deep learning model and the client with the previous version of the deep learning model is determined, the updated version of the deep learning model is automatically distributed to the client when the updated version of the deep learning model meets or exceeds one or more improvement benchmarks, as shown in operation 204. The client may be
client 103 of FIG. 1, for example. In the embodiment described above where the updated deep learning model is considered “improved” for only certain CPUs and/or GPUs (i.e. that enabled the updated deep learning model to meet the improvement benchmark(s)), the improved version of the deep learning model may only be distributed to the client when the client includes one or more of those certain CPUs and/or GPUs. This may help ensure that the client is configured to be able to realize the improvements when executing the improved version of the deep learning model. - In one embodiment, the improved version of the deep learning model may be distributed to the client by communicating a copy of the improved version of the deep learning model to the client. To this end, the client may locally store, and thus locally execute, the copy of the improved version of the deep learning model. It should be noted that while the
present method 200 references distributing the improved version of the deep learning model to a particular client, the method 200 may be implemented in other embodiments to distribute the improved version of the deep learning model to multiple different clients (e.g. that each have a previous version of the deep learning model). - It should be further noted that the improved version of the deep learning model may be distributed to the client responsive to a particular trigger. In one embodiment, the trigger may be the creation of the improved version of the deep learning model. In another embodiment, the trigger may be a scheduled distribution. In yet another embodiment, the trigger may be a request received from the client for an improved version of the deep learning model (e.g. as described in more detail below). When the server determines, responsive to the request, that it has a version of the deep learning model that has been updated from a version currently stored on the client, the server may distribute the updated version of the deep learning model to the client.
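The server-side version comparison just described (distribute only when the stored version is newer than the client's) might look like the following sketch; the repository layout and all names are hypothetical:

```python
# Hypothetical server-side handler: on a client request, distribute the stored
# model only if it is newer than the version the client reports.

SERVER_REPOSITORY = {"denoiser": 3}   # model name -> latest version number

def handle_update_request(model_name, client_version):
    latest = SERVER_REPOSITORY.get(model_name)
    if latest is not None and latest > client_version:
        return {"model": model_name, "version": latest}  # distribute the update
    return None                                          # client is current

reply = handle_update_request("denoiser", client_version=2)  # update available
noop = handle_update_request("denoiser", client_version=3)   # already current
```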
- To this end, the
method 200 may be implemented for the deep learning model for creating an improved version of the deep learning model that can be used by the client to perform inferencing operations. Whereas current optimizations of deep learning models typically involve software engineers or data scientists conducting experiments to find better solutions, the present method 200 may allow the server to attempt huge numbers of different possible combinations of changes to find improvements. This method 200 may be repeated over and over to provide ongoing and continuous deep learning model improvements that are then downloaded to the client to improve operations involving the deep learning model. Similarly, the method 200 may be implemented for other deep learning models to create improved versions of those deep learning models that can be used by any number of different clients to provide other types of inferenced data. -
FIG. 3 illustrates a flowchart of a client method 300 for implementing an improved deep learning model that provides inferenced data to a local software application, in accordance with an embodiment. In one embodiment, the method 300 may be performed by the client 103 of FIG. 1. - In
operation 301, a deep learning model is stored. In the context of the present method 300, the deep learning model is usable for providing inferenced data to a software application (e.g. such as the deep learning model 102 used by the software application 104 of FIG. 1). The deep learning model may be stored locally (e.g. by the client 103). In one embodiment, the deep learning model may be stored in a local repository with other deep learning models usable for providing other types of inferenced data to the software application or other software applications. - In
operation 302, the deep learning model is executed to perform inferencing operations and to provide inferenced data to a software application. The deep learning model and the software application may both execute locally. In particular, the software application provides input data to the deep learning model which processes the input data to generate one or more inferences (i.e. inferenced data) for the input data. The inferenced data is output by the deep learning model to the software application for use by functions, tasks, etc. of the software application. - It should be noted that the software application may use the deep learning model as often as required while the deep learning model is stored and is thus accessible to the software application. For example, various functions within the software application, or multiple executions of the same function, may cause input data to be provided to the deep learning model for the purpose of obtaining the inferenced data.
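A minimal sketch of this operation, with hypothetical class names and a toy model standing in for a real trained network, shows the locally executed round trip from application input to inferenced data:

```python
# Hypothetical sketch of operation 302: the locally stored model is executed
# with application-supplied input data, and the inferenced data flows back to
# the application for use by its functions and tasks.

class LocalModel:
    """Toy stand-in for a trained model: 'upscales' values by doubling them."""
    def infer(self, input_data):
        return [2 * v for v in input_data]

class Application:
    def __init__(self, model):
        self.model = model       # both application and model execute locally

    def render(self, frame):
        inferenced = self.model.infer(frame)  # inferenced data from the model
        return inferenced                     # used by the application's tasks

app = Application(LocalModel())
result = app.render([1, 2, 3])
```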
- In
operation 303, an updated version of the deep learning model is received. Thus, after some period in which the deep learning model is executed to provide inferenced data to a software application, the improved version of the deep learning model may be received. In one embodiment, the improved version of the deep learning model may be received from a server (e.g. server 101 of FIG. 1). - As an option, the improved version of the deep learning model may be received responsive to a trigger. In one embodiment, the trigger may occur on the server side, and thus the improved version of the deep learning model may be provided to the client proactively. For example, the trigger may be the creation of the improved version of the deep learning model at the server. As another example, the trigger may be a scheduled distribution at the server.
- In another embodiment, the trigger may occur on the client side. The trigger may be scheduled, may be the initiated execution of the software application that uses the deep learning model, or may be a call to a feature API that causes execution of the deep learning model. Responsive to the client-side trigger, the client may request from the server an improved version of the deep learning model. When the server determines, responsive to the request, that it has a version of the deep learning model that is updated from a version currently stored on the client, the server may distribute the updated version of the deep learning model to the client.
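This client-side trigger path can be sketched as follows, assuming hypothetical names and a stand-in server object; the trigger handler requests a newer version and swaps it in only when one exists:

```python
# Hypothetical sketch of the client-side trigger: on a schedule, application
# launch, or feature-API call, the client asks the server for a newer version
# of the model and stores it for use if one is available.

class FakeServer:
    """Stand-in for the server; holds the latest published model version."""
    latest = {"name": "denoiser", "version": 4}

    def get_if_newer(self, version):
        return dict(self.latest) if self.latest["version"] > version else None

class Client:
    def __init__(self, server):
        self.server = server
        self.model = {"name": "denoiser", "version": 3}  # currently stored copy

    def on_trigger(self):
        """Called on schedule, app start, or feature-API invocation."""
        update = self.server.get_if_newer(self.model["version"])
        if update is not None:
            self.model = update          # store and use the updated version
        return self.model["version"]

client = Client(FakeServer())
version_after = client.on_trigger()
```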
- Further, in
operation 304, the updated version of the deep learning model is executed to provide additional inferenced data to the software application. In one embodiment, the updated version of the deep learning model may replace the last version of the deep learning model used by the software application (i.e. in operation 302). To this end, the software application may use the updated version of the deep learning model once received by the client. -
FIG. 4A illustrates a block diagram of a system 400 for updating a deep learning model that performs inferencing operations and provides inferenced data to a software application, in accordance with an embodiment. It should be noted that the definitions and/or descriptions provided with respect to the embodiments above may equally apply to the present description. - As shown, a
client 401 has installed thereon a software application 402 that uses one or more deep learning models stored in a local deep learning model store 403. Each of the deep learning models may perform a different type of inference and provide a different type of inferenced data, and thus may be usable (e.g. by the software application 402 and/or other software applications installed on the client) to obtain any needed inferenced data. - Additionally, a
server 409 operates to update a deep learning model to create an updated version of the deep learning model 410. As shown, the server receives research data 404 which includes a new training dataset 407 and/or a new deep learning model design 408 (reconfiguration). The research data 404 may be generated from a newly generated public dataset 405 and/or from offline information 406 received in association with the software application. - The
server 409 may update the deep learning model using manual training and tuning of the deep learning model by one or more users, and/or using automatic training and optimizing of the deep learning model by a neural network optimizer (not shown). The updated version of the deep learning model 410 is then distributed to the client 401 via a deep learning model update server 412 of a cloud service 411. Optionally, the client 401 may subscribe to the cloud service 411 to be provided access to deep learning models. - Each time the
server 409 starts a new deep learning model training session, the metadata that describes the deep learning model, including all training hyperparameters, inferencing parameters, and the dataset, is stored either in a file or a database. This allows the deep learning model to be fully recreated at any time in the future. The server 409 can also use that metadata to conduct future experiments and to derive new deep learning models. At any point, the server 409 will likely have multiple deep learning models being trained and evaluated against improvement benchmarks. -
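A minimal sketch of such a metadata record, using illustrative field names (the patent does not specify a schema), might serialize the hyperparameters, inferencing parameters, and dataset reference to JSON for storage in a file or database:

```python
# Hypothetical per-session metadata record: the field names are illustrative.
# The point is that everything needed to recreate the model (hyperparameters,
# inferencing parameters, dataset reference) is captured at training time.

import json

def record_training_session(hyperparams, inference_params, dataset_id):
    metadata = {
        "hyperparameters": hyperparams,        # e.g. learning rate, batch size
        "inference_parameters": inference_params,
        "dataset": dataset_id,                 # which dataset version was used
    }
    return json.dumps(metadata, sort_keys=True)  # store in a file or database

record = record_training_session(
    {"learning_rate": 0.001, "batch_size": 64},
    {"precision": "fp16"},
    "public-dataset-v7",
)
restored = json.loads(record)  # enough to recreate or derive future experiments
```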
FIG. 4B illustrates a flowchart of the method of the client 401 of FIG. 4A, in accordance with an embodiment. As a first sub-process of the method of the client 401, during runtime of the software application 402 (operation 450), a feature API is invoked (operation 451). The feature API may provide an interface to the deep learning model to allow the software application 402 to interface with the deep learning model. - Responsive to the invocation of the feature API, the
client 401 determines whether the deep learning model has been updated since a last call made to the deep learning model by the software application 402 (decision 452). The client 401 may accomplish this by querying the local deep learning model store 403 for a latest stored version of the deep learning model. - Responsive to determining that the deep learning model has not been updated, the
client 401 runs the deep learning model (operation 454) with input data provided by the software application 402, and returns inferenced data output by the deep learning model back to the software application 402 (operation 455). - Responsive to determining that the deep learning model has been updated, the
client 401 loads the updated (improved) deep learning model from the local deep learning model store 403 (operation 453). This may be performed as a hot-swap (in real-time) during execution of the software application. The client 401 then runs the updated deep learning model (operation 454) with input data provided by the software application 402, and returns inferenced data output by the deep learning model back to the software application 402 (operation 455). - As a second sub-process of the method of the
client 401, the client 401 is triggered (operation 456) to check for a deep learning model update (operation 457). The trigger may be caused by a schedule, or by the feature API invocation in operation 451. The client 401 sends a request to the cloud service 411 for any updated version of the deep learning model. When the cloud service 411 has access to an updated version of the deep learning model, the client 401 downloads the updated deep learning model (operation 458) and stores the new model in the local deep learning model store 403 (operation 459). - Deep neural networks (DNNs), including deep learning models, developed on processors have been used for diverse use cases, from self-driving cars to faster drug development, from automatic image captioning in online image databases to smart real-time language translation in video chat applications. Deep learning is a technique that models the neural learning process of the human brain, continually learning, continually getting smarter, and delivering more accurate results more quickly over time. A child is initially taught by an adult to correctly identify and classify various shapes, eventually being able to identify shapes without any coaching. Similarly, a deep learning or neural learning system needs to be trained in object recognition and classification for it to get smarter and more efficient at identifying basic objects, occluded objects, etc., while also assigning context to objects.
- At the simplest level, neurons in the human brain look at various inputs that are received, importance levels are assigned to each of these inputs, and output is passed on to other neurons to act upon. An artificial neuron or perceptron is the most basic model of a neural network. In one example, a perceptron may receive one or more inputs that represent various features of an object that the perceptron is being trained to recognize and classify, and each of these features is assigned a certain weight based on the importance of that feature in defining the shape of an object.
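The perceptron just described reduces to a few lines: weighted inputs are summed and passed through a step activation. The weights below are an illustrative configuration (computing logical AND), not values from the text:

```python
# The basic perceptron described above: each input feature is weighted by its
# importance, the weighted inputs are summed, and a step activation decides
# whether the neuron fires.

def perceptron(inputs, weights, bias):
    total = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1 if total > 0 else 0       # step activation

# This illustrative weight/bias configuration fires only when both inputs
# are present, i.e. it computes logical AND.
def and_gate(a, b):
    return perceptron([a, b], weights=[1.0, 1.0], bias=-1.5)
```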
- A deep neural network (DNN) model includes multiple layers of many connected nodes (e.g., perceptrons, Boltzmann machines, radial basis functions, convolutional layers, etc.) that can be trained with enormous amounts of input data to quickly solve complex problems with high accuracy. In one example, a first layer of the DNN model breaks down an input image of an automobile into various sections and looks for basic patterns such as lines and angles. The second layer assembles the lines to look for higher level patterns such as wheels, windshields, and mirrors. The next layer identifies the type of vehicle, and the final few layers generate a label for the input image, identifying the model of a specific automobile brand.
- Once the DNN is trained, the DNN can be deployed and used to identify and classify objects or patterns in a process known as inference. Examples of inference (the process through which a DNN extracts useful information from a given input) include identifying handwritten numbers on checks deposited into ATM machines, identifying images of friends in photos, delivering movie recommendations to over fifty million users, identifying and classifying different types of automobiles, pedestrians, and road hazards in driverless cars, or translating human speech in real-time.
- During training, data flows through the DNN in a forward propagation phase until a prediction is produced that indicates a label corresponding to the input. If the neural network does not correctly label the input, then errors between the correct label and the predicted label are analyzed, and the weights are adjusted for each feature during a backward propagation phase until the DNN correctly labels the input and other inputs in a training dataset. Training complex neural networks requires massive amounts of parallel computing performance, including floating-point multiplications and additions. Inferencing is less compute-intensive than training, being a latency-sensitive process where a trained neural network is applied to new inputs it has not seen before to classify images, translate speech, and generally infer new information.
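The forward/backward cycle described above can be reduced to a single weight so the mechanics stay visible; this is a teaching sketch under that simplification, not the patent's training procedure:

```python
# Sketch of the training cycle described above, reduced to one weight:
# forward propagation produces a prediction, the error against the correct
# label is analyzed, and backward propagation adjusts the weight. Repeating
# this over the dataset drives the predictions toward the correct labels.

def forward(w, x):
    return w * x                        # forward propagation: predict

def backward(w, x, y, lr=0.1):
    error = forward(w, x) - y           # compare prediction to correct label
    return w - lr * 2 * error * x       # adjust the weight against the gradient

w = 0.0
dataset = [(1.0, 3.0), (2.0, 6.0)]      # learnable rule: y = 3x
for _ in range(200):                    # repeat until the inputs are labeled correctly
    for x, y in dataset:
        w = backward(w, x, y)
```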
- As noted above, a deep learning or neural learning system needs to be trained to generate inferences from input data. Details regarding inference and/or
training logic 515 for a deep learning or neural learning system are provided below in conjunction with FIGS. 5A and/or 5B. - In at least one embodiment, inference and/or
training logic 515 may include, without limitation, a data storage 501 to store forward and/or output weight and/or input/output data corresponding to neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments. In at least one embodiment, data storage 501 stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during forward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments. In at least one embodiment, any portion of data storage 501 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory. - In at least one embodiment, any portion of
data storage 501 may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, data storage 501 may be cache memory, dynamic randomly addressable memory (“DRAM”), static randomly addressable memory (“SRAM”), non-volatile memory (e.g., Flash memory), or other storage. In at least one embodiment, choice of whether data storage 501 is internal or external to a processor, for example, or comprised of DRAM, SRAM, Flash or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors. - In at least one embodiment, inference and/or
training logic 515 may include, without limitation, a data storage 505 to store backward and/or output weight and/or input/output data corresponding to neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments. In at least one embodiment, data storage 505 stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during backward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments. In at least one embodiment, any portion of data storage 505 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory. In at least one embodiment, any portion of data storage 505 may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, data storage 505 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., Flash memory), or other storage. In at least one embodiment, choice of whether data storage 505 is internal or external to a processor, for example, or comprised of DRAM, SRAM, Flash or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors. - In at least one embodiment,
data storage 501 and data storage 505 may be separate storage structures. In at least one embodiment, data storage 501 and data storage 505 may be same storage structure. In at least one embodiment, data storage 501 and data storage 505 may be partially same storage structure and partially separate storage structures. In at least one embodiment, any portion of data storage 501 and data storage 505 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory. - In at least one embodiment, inference and/or
training logic 515 may include, without limitation, one or more arithmetic logic unit(s) (“ALU(s)”) 510 to perform logical and/or mathematical operations based, at least in part on, or indicated by, training and/or inference code, result of which may result in activations (e.g., output values from layers or neurons within a neural network) stored in an activation storage 520 that are functions of input/output and/or weight parameter data stored in data storage 501 and/or data storage 505. In at least one embodiment, activations stored in activation storage 520 are generated according to linear algebraic and/or matrix-based mathematics performed by ALU(s) 510 in response to performing instructions or other code, wherein weight values stored in data storage 505 and/or data storage 501 are used as operands along with other values, such as bias values, gradient information, momentum values, or other parameters or hyperparameters, any or all of which may be stored in data storage 505 or data storage 501 or another storage on or off-chip. In at least one embodiment, ALU(s) 510 are included within one or more processors or other hardware logic devices or circuits, whereas in another embodiment, ALU(s) 510 may be external to a processor or other hardware logic device or circuit that uses them (e.g., a co-processor). In at least one embodiment, ALUs 510 may be included within a processor's execution units or otherwise within a bank of ALUs accessible by a processor's execution units either within same processor or distributed between different processors of different types (e.g., central processing units, graphics processing units, fixed function units, etc.).
In at least one embodiment, data storage 501, data storage 505, and activation storage 520 may be on same processor or other hardware logic device or circuit, whereas in another embodiment, they may be in different processors or other hardware logic devices or circuits, or some combination of same and different processors or other hardware logic devices or circuits. In at least one embodiment, any portion of activation storage 520 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory. Furthermore, inferencing and/or training code may be stored with other code accessible to a processor or other hardware logic or circuit and fetched and/or processed using a processor's fetch, decode, scheduling, execution, retirement and/or other logical circuits. - In at least one embodiment,
activation storage 520 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., Flash memory), or other storage. In at least one embodiment, activation storage 520 may be completely or partially within or external to one or more processors or other logical circuits. In at least one embodiment, choice of whether activation storage 520 is internal or external to a processor, for example, or comprised of DRAM, SRAM, Flash or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors. In at least one embodiment, inference and/or training logic 515 illustrated in FIG. 5A may be used in conjunction with an application-specific integrated circuit (“ASIC”), such as a Tensorflow® Processing Unit from Google, an inference processing unit (IPU) from Graphcore™, or a Nervana® (e.g., “Lake Crest”) processor from Intel Corp. In at least one embodiment, inference and/or training logic 515 illustrated in FIG. 5A may be used in conjunction with central processing unit (“CPU”) hardware, graphics processing unit (“GPU”) hardware or other hardware, such as field programmable gate arrays (“FPGAs”). -
FIG. 5B illustrates inference and/or training logic 515, according to at least one embodiment. In at least one embodiment, inference and/or training logic 515 may include, without limitation, hardware logic in which computational resources are dedicated or otherwise exclusively used in conjunction with weight values or other information corresponding to one or more layers of neurons within a neural network. In at least one embodiment, inference and/or training logic 515 illustrated in FIG. 5B may be used in conjunction with an application-specific integrated circuit (ASIC), such as a Tensorflow® Processing Unit from Google, an inference processing unit (IPU) from Graphcore™, or a Nervana® (e.g., “Lake Crest”) processor from Intel Corp. In at least one embodiment, inference and/or training logic 515 illustrated in FIG. 5B may be used in conjunction with central processing unit (CPU) hardware, graphics processing unit (GPU) hardware or other hardware, such as field programmable gate arrays (FPGAs). In at least one embodiment, inference and/or training logic 515 includes, without limitation, data storage 501 and data storage 505, which may be used to store weight values and/or other information, including bias values, gradient information, momentum values, and/or other parameter or hyperparameter information. In at least one embodiment illustrated in FIG. 5B, each of data storage 501 and data storage 505 is associated with a dedicated computational resource, such as computational hardware 502 and computational hardware 506, respectively. In at least one embodiment, each of computational hardware 502 and computational hardware 506 comprises one or more ALUs that perform mathematical functions, such as linear algebraic functions, only on information stored in data storage 501 and data storage 505, respectively, the result of which is stored in activation storage 520. - In at least one embodiment, each of
data storage 501 and 505 and corresponding computational hardware 502 and 506, respectively, correspond to different layers of a neural network, such that the resulting activation from one “storage/computational pair 501/502” of data storage 501 and computational hardware 502 is provided as an input to the next “storage/computational pair 505/506” of data storage 505 and computational hardware 506, in order to mirror conceptual organization of a neural network. In at least one embodiment, each of storage/computational pairs 501/502 and 505/506 may correspond to more than one neural network layer. In at least one embodiment, additional storage/computational pairs (not shown) subsequent to or in parallel with storage/computational pairs 501/502 and 505/506 may be included in inference and/or training logic 515. -
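The chaining of storage/computational pairs described above, where each pair holds its own weights and feeds its activation to the next pair, can be illustrated with the following hypothetical sketch. Names such as `StorageComputePair` are inventions of this example, not terms from the disclosure; the class merely pairs a weight store with a compute step, mirroring pairs 501/502 and 505/506.

```python
class StorageComputePair:
    """One 'storage/computational pair': dedicated weights plus dedicated compute."""

    def __init__(self, weights):
        self.weights = weights           # stands in for dedicated data storage

    def compute(self, inputs):           # stands in for the pair's dedicated ALUs
        return [sum(w * x for w, x in zip(row, inputs)) for row in self.weights]

def run_network(pairs, inputs):
    # Mirror the conceptual organization: the activation of one pair is the
    # input of the next (501/502 feeds 505/506, and so on).
    for pair in pairs:
        inputs = pair.compute(inputs)
    return inputs

pairs = [StorageComputePair([[1.0, 0.0], [0.0, 1.0]]),   # identity layer
         StorageComputePair([[2.0, 2.0]])]               # weighted-sum layer
out = run_network(pairs, [3.0, 4.0])                     # identity, then 2*3 + 2*4
```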
FIG. 6 illustrates another embodiment for training and deployment of a deep neural network. In at least one embodiment, untrained neural network 606 is trained using a training dataset 602. In at least one embodiment, training framework 604 is a PyTorch framework, whereas in other embodiments, training framework 604 is a Tensorflow, Boost, Caffe, Microsoft Cognitive Toolkit/CNTK, MXNet, Chainer, Keras, Deeplearning4j, or other training framework. In at least one embodiment, training framework 604 trains an untrained neural network 606 and enables it to be trained using processing resources described herein to generate a trained neural network 608. In at least one embodiment, weights may be chosen randomly or by pre-training using a deep belief network. In at least one embodiment, training may be performed in either a supervised, partially supervised, or unsupervised manner. - In at least one embodiment, untrained neural network 606 is trained using supervised learning, wherein training dataset 602 includes an input paired with a desired output for that input, or where training dataset 602 includes input having known output and the output of the neural network is manually graded. In at least one embodiment, untrained neural network 606, trained in a supervised manner, processes inputs from training dataset 602 and compares resulting outputs against a set of expected or desired outputs. In at least one embodiment, errors are then propagated back through untrained neural network 606. In at least one embodiment, training framework 604 adjusts weights that control untrained neural network 606. In at least one embodiment, training framework 604 includes tools to monitor how well untrained neural network 606 is converging towards a model, such as trained neural network 608, suitable for generating correct answers, such as in
result 614, based on known input data, such as new data 612. In at least one embodiment, training framework 604 trains untrained neural network 606 repeatedly while adjusting weights to refine an output of untrained neural network 606 using a loss function and an adjustment algorithm, such as stochastic gradient descent. In at least one embodiment, training framework 604 trains untrained neural network 606 until untrained neural network 606 achieves a desired accuracy. In at least one embodiment, trained neural network 608 can then be deployed to implement any number of machine learning operations. - In at least one embodiment, untrained neural network 606 is trained using unsupervised learning, wherein untrained neural network 606 attempts to train itself using unlabeled data. In at least one embodiment, unsupervised learning training dataset 602 will include input data without any associated output data or “ground truth” data. In at least one embodiment, untrained neural network 606 can learn groupings within training dataset 602 and can determine how individual inputs are related to training dataset 602. In at least one embodiment, unsupervised training can be used to generate a self-organizing map, which is a type of trained neural network 608 capable of performing operations useful in reducing dimensionality of
new data 612. In at least one embodiment, unsupervised training can also be used to perform anomaly detection, which allows identification of data points in a new dataset 612 that deviate from normal patterns of new dataset 612. - In at least one embodiment, semi-supervised learning may be used, which is a technique in which training dataset 602 includes a mix of labeled and unlabeled data. In at least one embodiment, training framework 604 may be used to perform incremental learning, such as through transfer learning techniques. In at least one embodiment, incremental learning enables trained neural network 608 to adapt to
new data 612 without forgetting knowledge instilled within network during initial training. -
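The supervised training loop and incremental adaptation described above can be sketched in miniature. This example is purely illustrative and is not training framework 604; it uses a hypothetical one-parameter model so that the forward pass, the comparison against desired outputs, the stochastic-gradient weight adjustment, and the "adapt without forgetting" idea (freezing previously learned weights and updating only a new head) are each visible.

```python
def train(dataset, lr=0.1, epochs=100):
    """Supervised loop: forward pass, compare to desired output, SGD step."""
    w = 0.0                              # initial weight (trivially chosen here)
    for _ in range(epochs):
        for x, desired in dataset:
            predicted = w * x            # process an input from the dataset
            error = predicted - desired  # compare against the desired output
            w -= lr * error * x          # backpropagated gradient adjustment
    return w

def fine_tune(frozen_w, head_w, new_data, lr=0.1, epochs=200):
    """Incremental learning: frozen_w keeps prior knowledge; only the head adapts."""
    for _ in range(epochs):
        for x, desired in new_data:
            feature = frozen_w * x       # reused knowledge, never updated
            head_w -= lr * (head_w * feature - desired) * feature
    return head_w

w = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])       # learns roughly y = 2x
head = fine_tune(w, 1.0, [(1.0, 6.0), (2.0, 12.0)])   # adapts to y = 6x; head ~ 3
```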
FIG. 7 illustrates an example data center 700, in which at least one embodiment may be used. In at least one embodiment, data center 700 includes a data center infrastructure layer 710, a framework layer 720, a software layer 730 and an application layer 740. - In at least one embodiment, as shown in
FIG. 7, data center infrastructure layer 710 may include a resource orchestrator 712, grouped computing resources 714, and node computing resources (“node C.R.s”) 716(1)-716(N), where “N” represents any whole, positive integer. In at least one embodiment, node C.R.s 716(1)-716(N) may include, but are not limited to, any number of central processing units (“CPUs”) or other processors (including accelerators, field programmable gate arrays (FPGAs), graphics processors, etc.), memory devices (e.g., dynamic random access memory), storage devices (e.g., solid state or disk drives), network input/output (“NW I/O”) devices, network switches, virtual machines (“VMs”), power modules, and cooling modules, etc. In at least one embodiment, one or more node C.R.s from among node C.R.s 716(1)-716(N) may be a server having one or more of above-mentioned computing resources. - In at least one embodiment, grouped
computing resources 714 may include separate groupings of node C.R.s housed within one or more racks (not shown), or many racks housed in data centers at various geographical locations (also not shown). In at least one embodiment, separate groupings of node C.R.s within grouped computing resources 714 may include grouped compute, network, memory or storage resources that may be configured or allocated to support one or more workloads. In at least one embodiment, several node C.R.s including CPUs or processors may be grouped within one or more racks to provide compute resources to support one or more workloads. In at least one embodiment, one or more racks may also include any number of power modules, cooling modules, and network switches, in any combination. - In at least one embodiment, resource orchestrator 712 may configure or otherwise control one or more node C.R.s 716(1)-716(N) and/or grouped
computing resources 714. In at least one embodiment, resource orchestrator 712 may include a software design infrastructure (“SDI”) management entity for data center 700. In at least one embodiment, resource orchestrator 712 may include hardware, software or some combination thereof. - In at least one embodiment, as shown in
FIG. 7, framework layer 720 includes a job scheduler 732, a configuration manager 734, a resource manager 736 and a distributed file system 738. In at least one embodiment, framework layer 720 may include a framework to support software 732 of software layer 730 and/or one or more application(s) 742 of application layer 740. In at least one embodiment, software 732 or application(s) 742 may respectively include web-based service software or applications, such as those provided by Amazon Web Services, Google Cloud and Microsoft Azure. In at least one embodiment, framework layer 720 may be, but is not limited to, a type of free and open-source software web application framework such as Apache Spark™ (hereinafter “Spark”) that may utilize distributed file system 738 for large-scale data processing (e.g., “big data”). In at least one embodiment, job scheduler 732 may include a Spark driver to facilitate scheduling of workloads supported by various layers of data center 700. In at least one embodiment, configuration manager 734 may be capable of configuring different layers such as software layer 730 and framework layer 720, including Spark and distributed file system 738, for supporting large-scale data processing. In at least one embodiment, resource manager 736 may be capable of managing clustered or grouped computing resources mapped to or allocated for support of distributed file system 738 and job scheduler 732. In at least one embodiment, clustered or grouped computing resources may include grouped computing resource 714 at data center infrastructure layer 710. In at least one embodiment, resource manager 736 may coordinate with resource orchestrator 712 to manage these mapped or allocated computing resources. - In at least one embodiment,
software 732 included in software layer 730 may include software used by at least portions of node C.R.s 716(1)-716(N), grouped computing resources 714, and/or distributed file system 738 of framework layer 720. In at least one embodiment, one or more types of software may include, but are not limited to, Internet web page search software, e-mail virus scan software, database software, and streaming video content software. - In at least one embodiment, application(s) 742 included in
application layer 740 may include one or more types of applications used by at least portions of node C.R.s 716(1)-716(N), grouped computing resources 714, and/or distributed file system 738 of framework layer 720. In at least one embodiment, one or more types of applications may include, but are not limited to, any number of a genomics application, a cognitive compute application, and a machine learning application, including training or inferencing software, machine learning framework software (e.g., PyTorch, TensorFlow, Caffe, etc.) or other machine learning applications used in conjunction with one or more embodiments. - In at least one embodiment, any of
configuration manager 734, resource manager 736, and resource orchestrator 712 may implement any number and type of self-modifying actions based on any amount and type of data acquired in any technically feasible fashion. In at least one embodiment, self-modifying actions may relieve a data center operator of data center 700 from making possibly bad configuration decisions and possibly avoid underutilized and/or poorly performing portions of a data center. - In at least one embodiment,
data center 700 may include tools, services, software or other resources to train one or more machine learning models or predict or infer information using one or more machine learning models according to one or more embodiments described herein. For example, in at least one embodiment, a machine learning model may be trained by calculating weight parameters according to a neural network architecture using software and computing resources described above with respect to data center 700. In at least one embodiment, trained machine learning models corresponding to one or more neural networks may be used to infer or predict information using resources described above with respect to data center 700 by using weight parameters calculated through one or more training techniques described herein. - In at least one embodiment, data center 700 may use CPUs, application-specific integrated circuits (ASICs), GPUs, FPGAs, or other hardware to perform training and/or inferencing using above-described resources. Moreover, one or more software and/or hardware resources described above may be configured as a service to allow users to train or perform inferencing of information, such as image recognition, speech recognition, or other artificial intelligence services.
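The allocation of grouped computing resources to training or inferencing workloads described in this data center discussion can be sketched as a simple greedy assignment. This is a hypothetical illustration only; node names, capacities, and the `allocate` function are inventions of this example, not elements 712-716 of the disclosure.

```python
def allocate(nodes, workload_demand):
    """Greedily assign free node C.R.-like resources until demand is met."""
    assigned = []
    for node in nodes:
        if workload_demand <= 0:
            break                         # workload fully supported
        if node["free"]:
            node["free"] = False          # resource now allocated to the workload
            assigned.append(node["name"])
            workload_demand -= node["capacity"]
    if workload_demand > 0:
        raise RuntimeError("insufficient grouped computing resources")
    return assigned

# Four identical nodes; a training workload needing 10 units takes three of them.
nodes = [{"name": f"node-{i}", "capacity": 4, "free": True} for i in range(4)]
training_nodes = allocate(nodes, workload_demand=10)
```

A real orchestrator would, of course, also weigh locality, power, and cooling constraints across racks, which this sketch omits.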
- Inference and/or
training logic 515 are used to perform inferencing and/or training operations associated with one or more embodiments. In at least one embodiment, inference and/or training logic 515 may be used in the system of FIG. 7 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein. - As described herein, a method, computer readable medium, and system are disclosed for improving deep learning models that perform inferencing operations to provide inferenced data to software applications. In accordance with
FIGS. 1-4B, an embodiment may provide a deep learning model usable for performing inferencing operations and for providing inferenced data, where the deep learning model is stored (partially or wholly) in one or both of data storage 501 and 505 of inference and/or training logic 515 as depicted in FIGS. 5A and 5B. Training and deployment of the deep learning model may be performed as depicted in FIG. 6 and described herein. For example, the deep learning model, when untrained, may subsequently be trained using training framework 604. Additionally, the deep learning model, when previously trained, may be updated to create an updated version of the deep learning model, also using training framework 604. Further, the updated version of the deep learning model may be distributed to a client for use in providing the inferenced data. Distribution of the trained or re-trained deep learning model may be performed using one or more servers in a data center 700 as depicted in FIG. 7 and described herein.
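The update-and-distribute cycle summarized above, where a previously trained model is re-trained into an updated version and then pushed to clients, can be sketched as follows. This is an illustrative outline under assumed names (`update_model`, `distribute`, the dict-based model record), not the claimed system; the trivial `train_fn` stands in for whatever re-training the framework performs.

```python
def update_model(model, new_training_data, train_fn):
    """Create an updated version of a trained model without mutating the original."""
    updated = dict(model)                    # keep the prior version intact
    updated["weights"] = train_fn(model["weights"], new_training_data)
    updated["version"] = model["version"] + 1
    return updated

def distribute(clients, updated_model):
    """Stand-in for server-side distribution from a data center to clients."""
    for client in clients:
        client["model"] = updated_model      # each client now inferences with it

model = {"version": 1, "weights": [0.0]}
clients = [{"model": model}, {"model": model}]
updated = update_model(model, new_training_data=None,
                       train_fn=lambda w, d: [x + 1.0 for x in w])
distribute(clients, updated)
# every client now holds version 2 with weights [1.0]; version 1 is unchanged
```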
Claims (20)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US16/537,215 US20200050443A1 (en) | 2018-08-10 | 2019-08-09 | Optimization and update system for deep learning models |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201862717735P | 2018-08-10 | 2018-08-10 | |
US16/537,215 US20200050443A1 (en) | 2018-08-10 | 2019-08-09 | Optimization and update system for deep learning models |
Publications (1)
Publication Number | Publication Date |
---|---|
US20200050443A1 true US20200050443A1 (en) | 2020-02-13 |
Family
ID=69405876
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/537,255 Pending US20200050936A1 (en) | 2018-08-10 | 2019-08-09 | Automatic dataset creation using software tags |
US16/537,242 Pending US20200050935A1 (en) | 2018-08-10 | 2019-08-09 | Deep learning model execution using tagged data |
US16/537,215 Pending US20200050443A1 (en) | 2018-08-10 | 2019-08-09 | Optimization and update system for deep learning models |
Family Applications Before (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US16/537,255 Pending US20200050936A1 (en) | 2018-08-10 | 2019-08-09 | Automatic dataset creation using software tags |
US16/537,242 Pending US20200050935A1 (en) | 2018-08-10 | 2019-08-09 | Deep learning model execution using tagged data |
Country Status (1)
Country | Link |
---|---|
US (3) | US20200050936A1 (en) |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20210110304A1 (en) * | 2019-10-09 | 2021-04-15 | Hitachi, Ltd. | Operational support system and method |
CN112732297A (en) * | 2020-12-31 | 2021-04-30 | 平安科技(深圳)有限公司 | Method and device for updating federal learning model, electronic equipment and storage medium |
US11061791B2 (en) * | 2019-01-07 | 2021-07-13 | International Business Machines Corporation | Providing insight of continuous delivery pipeline using machine learning |
CN113742197A (en) * | 2020-05-27 | 2021-12-03 | 北京字节跳动网络技术有限公司 | Model management device, method, data management device, method and system |
US20220113048A1 (en) * | 2019-01-16 | 2022-04-14 | Fujitsu General Limited | Air conditioning system |
US20220283787A1 (en) * | 2019-08-27 | 2022-09-08 | Siemens Aktiengesellschaft | System and method supporting graphical programming based on neuron blocks, and storage medium |
US11501200B2 (en) * | 2016-07-02 | 2022-11-15 | Hcl Technologies Limited | Generate alerts while monitoring a machine learning model in real time |
Families Citing this family (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10713769B2 (en) * | 2018-06-05 | 2020-07-14 | Kla-Tencor Corp. | Active learning for defect classifier training |
CN111126613A (en) * | 2018-10-31 | 2020-05-08 | 伊姆西Ip控股有限责任公司 | Method, apparatus and computer program product for deep learning |
US11385884B2 (en) * | 2019-04-29 | 2022-07-12 | Harman International Industries, Incorporated | Assessing cognitive reaction to over-the-air updates |
US11200722B2 (en) * | 2019-12-20 | 2021-12-14 | Intel Corporation | Method and apparatus for viewport shifting of non-real time 3D applications |
CN114063997A (en) * | 2020-07-31 | 2022-02-18 | 伊姆西Ip控股有限责任公司 | Method, apparatus and computer program product for generating program code |
CN112527321B (en) * | 2020-12-29 | 2022-05-27 | 平安银行股份有限公司 | Deep learning-based application online method, system, device and medium |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160078361A1 (en) * | 2014-09-11 | 2016-03-17 | Amazon Technologies, Inc. | Optimized training of linear machine learning models |
US9946576B2 (en) * | 2010-05-07 | 2018-04-17 | Microsoft Technology Licensing, Llc | Distributed workflow execution |
US20190037005A1 (en) * | 2017-07-28 | 2019-01-31 | Kong Inc. | Auto-documentation for application program interfaces based on network requests and responses |
US20190278870A1 (en) * | 2018-03-12 | 2019-09-12 | Microsoft Technology Licensing, Llc | Machine learning model to preload search results |
US20190279114A1 (en) * | 2018-03-08 | 2019-09-12 | Capital One Services, Llc | System and Method for Deploying and Versioning Machine Learning Models |
US10713543B1 (en) * | 2018-06-13 | 2020-07-14 | Electronic Arts Inc. | Enhanced training of machine learning systems based on automatically generated realistic gameplay information |
US20200349365A1 (en) * | 2017-04-04 | 2020-11-05 | Robert Bosch Gmbh | Direct vehicle detection as 3d bounding boxes using neural network image processing |
US20210390653A1 (en) * | 2018-01-23 | 2021-12-16 | Nvidia Corporation | Learning robotic tasks using one or more neural networks |
US11410024B2 (en) * | 2017-04-28 | 2022-08-09 | Intel Corporation | Tool for facilitating efficiency in machine learning |
US11481652B2 (en) * | 2015-06-23 | 2022-10-25 | Gregory Knox | System and method for recommendations in ubiquituous computing environments |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8364613B1 (en) * | 2011-07-14 | 2013-01-29 | Google Inc. | Hosting predictive models |
US9589210B1 (en) * | 2015-08-26 | 2017-03-07 | Digitalglobe, Inc. | Broad area geospatial object detection using autogenerated deep learning models |
US20170192957A1 (en) * | 2015-12-30 | 2017-07-06 | International Business Machines Corporation | Methods and analytics systems having an ontology-guided graphical user interface for analytics models |
US11403006B2 (en) * | 2017-09-29 | 2022-08-02 | Coupa Software Incorporated | Configurable machine learning systems through graphical user interfaces |
US10671435B1 (en) * | 2017-10-19 | 2020-06-02 | Pure Storage, Inc. | Data transformation caching in an artificial intelligence infrastructure |
US10831519B2 (en) * | 2017-11-22 | 2020-11-10 | Amazon Technologies, Inc. | Packaging and deploying algorithms for flexible machine learning |
US20190180189A1 (en) * | 2017-12-11 | 2019-06-13 | Sap Se | Client synchronization for offline execution of neural networks |
US11475291B2 (en) * | 2017-12-27 | 2022-10-18 | X Development Llc | Sharing learned information among robots |
US11250336B2 (en) * | 2017-12-28 | 2022-02-15 | Intel Corporation | Distributed and contextualized artificial intelligence inference service |
DE112018007550T5 (en) * | 2018-06-05 | 2021-01-28 | Mitsubishi Electric Corporation | Learning device, inference device, method and program |
-
2019
- 2019-08-09 US US16/537,255 patent/US20200050936A1/en active Pending
- 2019-08-09 US US16/537,242 patent/US20200050935A1/en active Pending
- 2019-08-09 US US16/537,215 patent/US20200050443A1/en active Pending
Patent Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9946576B2 (en) * | 2010-05-07 | 2018-04-17 | Microsoft Technology Licensing, Llc | Distributed workflow execution |
US20160078361A1 (en) * | 2014-09-11 | 2016-03-17 | Amazon Technologies, Inc. | Optimized training of linear machine learning models |
US11481652B2 (en) * | 2015-06-23 | 2022-10-25 | Gregory Knox | System and method for recommendations in ubiquituous computing environments |
US20200349365A1 (en) * | 2017-04-04 | 2020-11-05 | Robert Bosch Gmbh | Direct vehicle detection as 3d bounding boxes using neural network image processing |
US11410024B2 (en) * | 2017-04-28 | 2022-08-09 | Intel Corporation | Tool for facilitating efficiency in machine learning |
US20190037005A1 (en) * | 2017-07-28 | 2019-01-31 | Kong Inc. | Auto-documentation for application program interfaces based on network requests and responses |
US20210390653A1 (en) * | 2018-01-23 | 2021-12-16 | Nvidia Corporation | Learning robotic tasks using one or more neural networks |
US20190279114A1 (en) * | 2018-03-08 | 2019-09-12 | Capital One Services, Llc | System and Method for Deploying and Versioning Machine Learning Models |
US20190278870A1 (en) * | 2018-03-12 | 2019-09-12 | Microsoft Technology Licensing, Llc | Machine learning model to preload search results |
US10713543B1 (en) * | 2018-06-13 | 2020-07-14 | Electronic Arts Inc. | Enhanced training of machine learning systems based on automatically generated realistic gameplay information |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11501200B2 (en) * | 2016-07-02 | 2022-11-15 | Hcl Technologies Limited | Generate alerts while monitoring a machine learning model in real time |
US11061791B2 (en) * | 2019-01-07 | 2021-07-13 | International Business Machines Corporation | Providing insight of continuous delivery pipeline using machine learning |
US11061790B2 (en) * | 2019-01-07 | 2021-07-13 | International Business Machines Corporation | Providing insight of continuous delivery pipeline using machine learning |
US20220113048A1 (en) * | 2019-01-16 | 2022-04-14 | Fujitsu General Limited | Air conditioning system |
US11828479B2 (en) * | 2019-01-16 | 2023-11-28 | Fujitsu General Limited | Server based air conditioning system adaptor for updating control program |
US20220283787A1 (en) * | 2019-08-27 | 2022-09-08 | Siemens Aktiengesellschaft | System and method supporting graphical programming based on neuron blocks, and storage medium |
US20210110304A1 (en) * | 2019-10-09 | 2021-04-15 | Hitachi, Ltd. | Operational support system and method |
US11720820B2 (en) * | 2019-10-09 | 2023-08-08 | Hitachi, Ltd. | Operational support system and method |
CN113742197A (en) * | 2020-05-27 | 2021-12-03 | 北京字节跳动网络技术有限公司 | Model management device, method, data management device, method and system |
CN112732297A (en) * | 2020-12-31 | 2021-04-30 | 平安科技(深圳)有限公司 | Method and device for updating federal learning model, electronic equipment and storage medium |
WO2022141839A1 (en) * | 2020-12-31 | 2022-07-07 | 平安科技(深圳)有限公司 | Method and apparatus for updating federated learning model, and electronic device and storage medium |
Also Published As
Publication number | Publication date |
---|---|
US20200050935A1 (en) | 2020-02-13 |
US20200050936A1 (en) | 2020-02-13 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20200050443A1 (en) | Optimization and update system for deep learning models | |
US11375176B2 (en) | Few-shot viewpoint estimation | |
US11681914B2 (en) | Determining multivariate time series data dependencies | |
US20210397895A1 (en) | Intelligent learning system with noisy label data | |
US11379718B2 (en) | Ground truth quality for machine learning models | |
US11645575B2 (en) | Linking actions to machine learning prediction explanations | |
US11507890B2 (en) | Ensemble model policy generation for prediction systems | |
US11847546B2 (en) | Automatic data preprocessing | |
US10853718B2 (en) | Predicting time-to-finish of a workflow using deep neural network with biangular activation functions | |
Jeon et al. | Intelligent resource scaling for container based digital twin simulation of consumer electronics | |
US20230394781A1 (en) | Global context vision transformer | |
US20230139437A1 (en) | Classifier processing using multiple binary classifier stages | |
Guindani et al. | aMLLibrary: An automl approach for performance prediction | |
WO2021208808A1 (en) | Cooperative neural networks with spatial containment constraints | |
US20240127075A1 (en) | Synthetic dataset generator | |
EP4121913A1 (en) | A neural network system for distributed boosting for a programmable logic controller with a plurality of processing units | |
US20240119291A1 (en) | Dynamic neural network model sparsification | |
US20240168390A1 (en) | Machine learning for mask optimization in inverse lithography technologies | |
US20240221166A1 (en) | Point-level supervision for video instance segmentation | |
US11568235B2 (en) | Data driven mixed precision learning for neural networks | |
US20240096115A1 (en) | Landmark detection with an iterative neural network | |
US12039011B2 (en) | Intelligent expansion of reviewer feedback on training data | |
US20240070987A1 (en) | Pose transfer for three-dimensional characters using a learned shape code | |
US11288097B2 (en) | Automated hardware resource optimization | |
US20230214454A1 (en) | Intelligent expansion of reviewer feedback on training data |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AS | Assignment |
Owner name: NVIDIA CORPORATION, CALIFORNIA Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:EDELSTEN, ANDREW;HUANG, JEN-HSUN;SKALJAK, BOJAN;REEL/FRAME:050269/0426 Effective date: 20190807 |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: NON FINAL ACTION MAILED |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER |
|
STPP | Information on status: patent application and granting procedure in general |
Free format text: FINAL REJECTION MAILED |