US20200050935A1 - Deep learning model execution using tagged data - Google Patents

Deep learning model execution using tagged data Download PDF

Info

Publication number
US20200050935A1
Authority
US
United States
Prior art keywords
data
deep learning
learning model
software application
metadata
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
US16/537,242
Inventor
Andrew Edelsten
Jen-Hsun Huang
Bojan Skaljak
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nvidia Corp
Original Assignee
Nvidia Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nvidia Corp
Priority to US16/537,242
Assigned to NVIDIA CORPORATION. Assignment of assignors interest (see document for details). Assignors: EDELSTEN, ANDREW; HUANG, JEN-HSUN; SKALJAK, BOJAN
Publication of US20200050935A1
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/30Creation or generation of source code
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/70Software maintenance or management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/70Software maintenance or management
    • G06F8/71Version control; Configuration management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/541Interprogram communication via adapters, e.g. between incompatible applications
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/047Probabilistic or stochastic networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/082Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/10Interfaces, programming languages or software development kits, e.g. for simulating neural networks
    • G06T5/70
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/34Network arrangements or protocols for supporting network services or applications involving the movement of software or configuration parameters 
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/60Software deployment
    • G06F8/65Updates
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/088Non-supervised learning, e.g. competitive learning
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04LTRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L67/00Network arrangements or protocols for supporting network services or applications
    • H04L67/01Protocols
    • H04L67/42

Definitions

  • the present disclosure relates to executing deep learning models.
  • a method, computer readable medium, and system are disclosed for executing a deep learning model using tagged data.
  • one or more required inputs for a deep learning model are determined.
  • metadata associated with data of a software application is used to retrieve one or more portions of the data from the software application that satisfy the one or more required inputs for the deep learning model. Further, the retrieved one or more portions of the data are provided to the deep learning model for processing to generate inferenced data.
  • FIG. 1 illustrates a block diagram of a system including a server that provisions a deep learning model to a client for use by a software application installed on the client, in accordance with an embodiment.
  • FIG. 2 illustrates a flowchart of a client method for tagging data for use in executing a deep learning model, in accordance with an embodiment.
  • FIG. 3 illustrates a flowchart of a client method for using tagged data to provide required input data to a deep learning model, in accordance with an embodiment.
  • FIG. 4 illustrates a flowchart of a client method for executing a deep learning model using tagged data, in accordance with an embodiment.
  • FIG. 5A illustrates a block diagram of a client system for executing a deep learning model using tagged data, in accordance with an embodiment.
  • FIG. 5B illustrates a flowchart of a method of the software application of FIG. 5A , in accordance with an embodiment.
  • FIG. 5C illustrates a flowchart of a method of the deep learning model executor of FIG. 5A , in accordance with an embodiment.
  • FIG. 6A illustrates inference and/or training logic, according to at least one embodiment
  • FIG. 6B illustrates inference and/or training logic, according to at least one embodiment
  • FIG. 7 illustrates training and deployment of a neural network, according to at least one embodiment
  • FIG. 8 illustrates an example data center system, according to at least one embodiment.
  • FIG. 1 illustrates a block diagram of a system 100 including a server 101 that provisions a deep learning model 102 to a client 103 for use by a software application 104 installed on the client 103 , in accordance with an embodiment.
  • the server 101 may be any computing device, including (without limitation) partially or wholly virtualized computing device, or combination of devices, capable of communicating with the client 103 over a wired or wireless connection, for the purpose of provisioning the deep learning model 102 to the client 103 for use by a software application 104 installed on the client 103 .
  • the server 101 may include a hardware memory (e.g. random access memory (RAM), etc.) for storing the deep learning model 102 and a hardware processor (e.g. central processing unit (CPU), graphics processing unit (GPU), etc.) for provisioning the deep learning model 102 from the memory to the client 103 over the wired or wireless connection.
  • the server 101 may provision the deep learning model 102 to the client 103 by sending a copy of the deep learning model 102 over the wired or wireless connection to the client 103 .
  • the client 103 may be any computing device—including (without limitation) one or more partially or wholly virtualized computing devices—capable of communicating with the server 101 over the wired or wireless connection, for the purpose of receiving from the server 101 the deep learning model 102 for use by the software application 104 installed on the client 103 .
  • the client 103 may not necessarily be an end-user device (e.g. personal computer, laptop, mobile phone, etc.) but may also be a server or other cloud-based computer system having the software application 104 installed thereon.
  • output of the software application 104 may optionally be streamed or otherwise communicated to an end-user device.
  • the client 103 may include a memory for storing the deep learning model 102 and a processor by which the software application 104 installed on the client 103 uses the deep learning model 102 for obtaining inferenced data.
  • the client 103 executes the deep learning model 102 locally.
  • the deep learning model 102 is a machine learned network (e.g. deep neural network) that is trained to perform inferencing operations and to provide inferenced data from input data.
  • the deep learning model 102 may be trained using supervised, semi-supervised, or unsupervised training techniques.
  • the server 101 may be used to perform the training of the deep learning model 102 , or may receive the already trained deep learning model 102 from another device.
  • the deep learning model 102 may be trained for performing various types of inferencing operations and for making any desired type of inferences. However, in the present embodiment, the deep learning model 102 outputs inferences that are usable by the software application 104 installed on the client 103 . It should be noted that the deep learning model 102 may similarly be used by other software applications which may be installed on the client 103 or other clients, and thus may not necessarily be specifically trained for use by the software application 104 but instead may be trained more generically for use by multiple different software applications. In any case, the deep learning model 102 may not be coded within the software application 104 itself, but may be accessible to the software application 104 as external functionality (e.g. as a software patch) via an application programming interface (API). As a result, the deep learning model 102 may not necessarily be developed and provided by a same developer of the software application 104 but instead may be developed and provided by a third party developer.
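  • By way of illustration only, the following minimal Python sketch shows how a software application might reach such externally provided model functionality through a narrow API surface instead of embedding the model in its own code; the class and function names are hypothetical and are not taken from this disclosure.

```python
# Hypothetical sketch (names are assumptions, not the patented interface):
# the application only calls a small API object, so the model behind it can
# be provisioned, patched, or replaced without changing application code.

class DeepLearningModelAPI:
    """Stand-in for externally provisioned deep learning functionality."""

    def infer(self, data):
        # A real implementation would execute the provisioned model; this
        # stub just echoes the input so the sketch stays runnable.
        return {"inferenced_from": data}


def application_task(model_api: DeepLearningModelAPI):
    frame = "rendered_frame_0"           # data generated by the application
    return model_api.infer(frame)        # external call, not in-app model code


print(application_task(DeepLearningModelAPI()))
```
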
  • the software application 104 installed on the client 103 generates or loads data which is used as input data to the deep learning model 102 , which processes the input data to compute one or more inferences (i.e. inferenced data) for the input data. Accordingly, the deep learning model 102 is trained to process the input data and make inferences therefrom. The inferenced data is output by the deep learning model 102 and is returned to the software application 104 for use by functions, tasks, etc. of the software application 104 .
  • the software application 104 may be a video game, virtual reality application, as part of a perception and/or control layer of an autonomous vehicle or machine, or other graphics-related computer program.
  • the deep learning model 102 may provide certain image-related inferences, such as providing from an input image or other input data an anti-aliased image, an image with upscaled resolution, a denoised image, and/or any other output image that is modified in at least one respect from the input image or other input data.
  • the deep learning model 102 may provide certain video-related inferences, such as providing from input video or other input data a slow motion version of the input video or other input data, a super sampling of the input video or other input data, etc.
  • the software application 104 may be a voice recognition application or other audio-related computer program.
  • the deep learning model 102 may provide certain audio-related inferences, such as providing from an input audio or other input data a language translation, a voice recognized command, and/or any other output that is inferenced from the input audio or other input data.
  • the embodiments below describe systems and methods for executing a deep learning model using tagged data. These systems and methods allow an updated, or improved, version of the deep learning model 102 to use different input data (e.g. from the software application 104 ) than a prior version of the deep learning model, without also requiring the software application 104 to be reconfigured to provide the newly required inputs. This is accomplished by tagging data of the software application 104 , and then using the tags to retrieve from the software application 104 input data currently required by the deep learning model.
  • FIG. 2 illustrates a flowchart of a client method 200 for tagging data for use in executing a deep learning model, in accordance with an embodiment. Accordingly, in one embodiment, the method 200 may be performed by the client 103 of FIG. 1 .
  • a software application is stored.
  • the software application is configured to use a deep learning model for performing inferencing operations and providing inferenced data (e.g. such as software application 104 that uses deep learning model 102 in FIG. 1 ).
  • the software application may be stored locally (e.g. by the client 103 of FIG. 1 ).
  • Metadata is received for data of the software application.
  • the data includes any data stored by the software application or stored for use by the software application.
  • the data may be generated by the software application during execution thereof (e.g. a graphical image or user interface generated by the software application).
  • the data may be stored in memory used by the software application, such as CPU random access memory (RAM) and/or GPU RAM.
  • the metadata may be received for specific portions of the memory storing different data of the software application.
  • the metadata may be received for the data by being received for specific portions of the memory storing the data.
  • the specific portions of the memory may each store a different type of data, data output by a specific function or process of the software application, etc.
  • the portions of the memory may be particular data structures (e.g. custom or common data structures used by the software application), buffers (e.g. intermediate rendering buffers used in a rendering pipeline), etc.
  • metadata may be received for various buffers used in a graphics processing pipeline, such as a depth buffer, a normal buffer, etc.
  • the metadata is any descriptive information that can be associated with the data of the software application.
  • the metadata may categorize the data of the software application, may name the data of the software application, etc.
  • the metadata may comply with a nomenclature specified for the deep learning model.
  • a developer or provider of the deep learning model may specify or characterize a particular nomenclature to be used for the deep learning model when configuring required input data for the deep learning model.
  • the metadata may be received from a developer of the software application or other user having knowledge of the data of the software application.
  • the metadata may further be received in any desired format, such as extensible markup language (XML).
  • the metadata is stored in association with the data of the software application for use with the deep learning model.
  • the metadata may be assigned to the data of the software application for use with the deep learning model.
  • the metadata may be stored in any manner that associates it with the corresponding data of the software application for which the metadata was received.
  • the metadata received for particular data of the software application may be inserted in a portion of code of the software application that stores (in memory) or accesses (in memory) the particular data.
  • the metadata received for particular data of the software application may be inserted in a portion of code of the software application that defines the locations in memory in which the data is (to be) stored.
  • the metadata may be stored in a reference table that maps each metadata to the corresponding data and/or location in memory in which the data is stored.
  • the method 200 can be implemented as a way to tag the data of the software application with the metadata by storing an association (relationship) therebetween.
  • the tagged data may then be used for execution of the deep learning model, for example as described with reference to the Figures below.
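  • As a rough illustration of the tagging described above (and not a definitive implementation), the sketch below keeps a simple reference table that maps metadata tags, chosen per the model's nomenclature, to the application data or buffers they describe; all names are assumptions.

```python
# Minimal sketch of a reference table mapping metadata tags to application
# data (or to handles for buffers in memory); tag names are illustrative.

class TagRegistry:
    def __init__(self):
        self._table = {}                       # metadata tag -> tagged data

    def tag(self, metadata_tag: str, data):
        """Associate descriptive metadata with a portion of application data."""
        self._table[metadata_tag] = data

    def lookup(self, metadata_tag: str):
        return self._table.get(metadata_tag)


registry = TagRegistry()
# Tags assumed to follow the nomenclature specified for the deep learning model.
registry.tag("graphics.depth_buffer", [0.1, 0.5, 0.9])
registry.tag("graphics.normal_buffer", [(0.0, 0.0, 1.0), (0.0, 1.0, 0.0)])
print(registry.lookup("graphics.depth_buffer"))
```
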
  • FIG. 3 illustrates a flowchart of a client method 300 for using tagged data to assemble input data to provide to a deep learning model, in accordance with an embodiment.
  • the method 300 may be performed by the client 103 of FIG. 1 . Further, the method 300 may use the tagged data disclosed with respect to FIG. 2 above.
  • the deep learning model is usable for providing inferenced data to a software application (e.g. such as the deep learning model 102 used by the software application 104 of FIG. 1 ).
  • the deep learning model may be stored locally (e.g. by the client 103 ).
  • the deep learning model may be stored in a local repository with other deep learning models usable for providing other types of inferenced data to the software application or other software applications.
  • the deep learning model is configured to receive certain input(s) and output certain output(s).
  • the required input(s) for the deep learning model refers to the input(s) (e.g. data) that the deep learning model is configured to receive.
  • These input(s) may be specified in any desired manner.
  • the input(s) may be specified as tags selected in accordance with a particular nomenclature used for the deep learning model (e.g. predefined for use in configuring the required input(s) for the deep learning model).
  • the required input(s) may be determined from a configuration file defined for the deep learning model.
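  • For instance, under the assumption that the configuration file is a small JSON document (the format and tag names below are illustrative, not mandated by the disclosure), determining the required input(s) could look like this:

```python
# Hypothetical model configuration listing required inputs as tags that
# follow the model's nomenclature; layout and names are assumptions.
import json

config_text = """
{
  "model": "denoiser_v2",
  "required_inputs": ["graphics.color_buffer", "graphics.depth_buffer"]
}
"""

model_config = json.loads(config_text)
required_inputs = model_config["required_inputs"]   # operation 301 analogue
print(required_inputs)
```
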
  • metadata associated with data of a software application is used to retrieve one or more portions of the data that satisfy the required input(s) for the deep learning model.
  • the software application may refer to one being executed to use the deep learning model to perform inferencing and obtain inferenced data therefrom.
  • operation 302 may access memory used by the software application to retrieve therefrom the input data required by the deep learning model.
  • for example, an identifier (e.g. name, etc.) included in the required input(s) determined in operation 301 may be matched to, or otherwise correlated with, metadata defined for certain data of the software application.
  • the certain data of the software application associated with that metadata may then be retrieved.
  • the present operation may retrieve from the software application data tagged with metadata matching, or closely (e.g. fuzzy) matching, those tags.
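  • A minimal sketch of this matching step, assuming exact tag matches with an optional fuzzy fallback (the helper and tag names are illustrative), might look as follows:

```python
# Sketch of matching required input tags against metadata tags applied to
# application data, with a loose (fuzzy) fallback; names are assumptions.
import difflib

def collect_inputs(required_tags, tagged_data, fuzzy_cutoff=0.8):
    """Return {required_tag: data} for each required tag satisfied by tagged_data."""
    inputs = {}
    for tag in required_tags:
        if tag in tagged_data:                                   # exact match
            inputs[tag] = tagged_data[tag]
            continue
        close = difflib.get_close_matches(tag, list(tagged_data), n=1, cutoff=fuzzy_cutoff)
        if close:                                                # fuzzy match
            inputs[tag] = tagged_data[close[0]]
    return inputs


tagged = {"graphics.depth_buffer": [0.1, 0.5], "graphics.color_buf": [255, 128, 0]}
required = ["graphics.depth_buffer", "graphics.color_buffer"]
print(collect_inputs(required, tagged))   # both tags satisfied, one via fuzzy match
```
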
  • the one or more portions of the data retrieved from the software application are provided to the deep learning model for processing.
  • the data of the software application satisfying the required input(s) for the deep learning model is provided as input to the deep learning model.
  • the deep learning model can then process the input as described in more detail below with respect to FIG. 4 .
  • FIG. 4 illustrates a flowchart of a client method 400 for executing a deep learning model using tagged data, in accordance with an embodiment.
  • the method 400 may be performed by the client 103 of FIG. 1 .
  • the method 400 may be performed following the method 300 of FIG. 3 .
  • data retrieved from a software application is used as input to a deep learning model.
  • This operation may be the same as operation 303 of FIG. 3 , in one embodiment.
  • the data retrieved from the software application is selected based on associated tags determined to satisfy the required input(s) configured for the deep learning model.
  • the deep learning model is executed to process the input data. Further, in operation 403 , inferenced data is received as output of the deep learning model.
  • the inferenced data may be any data that the deep learning model inferences from the input data.
  • the inferenced data is provided to the software application.
  • the software application may use the inferenced data for one or more of its own processing tasks, functions, etc.
  • FIG. 5A illustrates a block diagram of a client system 500 for executing a deep learning model using tagged data. It should be noted that the definitions and/or descriptions provided with respect to the embodiments above may equally apply to the present description.
  • a software application 501 interfaces a memory 502 .
  • the memory 502 includes CPU RAM 503 and GPU RAM 506 , but may also include other types of memory in other embodiments.
  • the CPU RAM 503 stores one or more buffers 505 A-N and one or more other (e.g. custom or common) data structures 504 A-N.
  • the GPU RAM 506 stores one or more buffers 508 A-N and one or more other data structures 507 A-N.
  • the software application 501 may use the buffers 505 A-N and one or more other data structures 504 A-N of the CPU RAM 503 and/or the one or more buffers 508 A-N and one or more other data structures 507 A-N of the GPU RAM 506 for storing data therein.
  • the data may be any data generated, or otherwise used, by the software application 501 .
  • the software application 501 loads data into the memory 502 and tags the data with metadata.
  • the software application 501 may be configured (e.g. by a developer) to tag the data with certain metadata once loaded into the memory 502 .
  • the software application 501 tags the data by tagging locations in the memory 502 in which the data is stored (e.g. tagged data structure 504 A, tagged buffer 505 A, etc.).
  • the software application 501 also interfaces a deep learning executor 509 .
  • the deep learning executor 509 is executable computer code that executes a deep learning model (not shown) in association with the software application 501 .
  • the software application 501 includes a function that calls (initiates) the deep learning executor 509 to cause execution of the deep learning model.
  • the deep learning executor 509 collects data from the application or system memory 502 that satisfies the required input(s) of the deep learning model.
  • the deep learning executor 509 uses the tags provided for the data by the software application 501 to determine those portions of the data in the memory 502 that satisfy the required input(s) of the deep learning model.
  • the deep learning executor 509 may include a data collector 510 to collect the data.
  • the data collector 510 may be a module (e.g. code segment, function, etc.) within the deep learning executor 509 , in one embodiment.
  • the deep learning executor 509 further inputs the collected data to the deep learning model for processing thereof to generate inferenced data.
  • the inferenced data may be output by the deep learning model to the deep learning executor 509 .
  • the deep learning executor 509 then provides to the software application 501 the inferenced data output by the deep learning model.
  • FIG. 5B illustrates a flowchart of a method of the software application 501 of FIG. 5A , in accordance with an embodiment.
  • the software application 501 starts.
  • the software application 501 may start upon initiation of the software application 501 by a user or by another software application.
  • in operation 512 , the software application 501 loads data into memory 502 .
  • the data that is loaded into memory 502 is data that is used by the software application 501 for executing various functions, performing various processes, etc.
  • operation 512 may include configuring or instantiating various data structures and/or buffers in the memory 502 for use in storing data generated, or used, by the software application 501 during execution thereof.
  • tags are applied to the data (in the memory 502 ).
  • the tags are metadata that describe the data. Thus, different portions of the data may be tagged with different metadata.
  • the software application 501 executes (e.g. to perform various functions that use the memory 502 ). Then, in operation 515 , the software application 501 calls the deep learning model executor 509 . Operation 515 may occur at any point during execution of the software application 501 , for the purpose of executing the deep learning model to obtain inferenced data therefrom.
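  • A rough application-side sketch of this flow (all names hypothetical, with a plain dictionary standing in for memory 502 and for the tags applied to it) is shown below.

```python
# Sketch mirroring FIG. 5B: load data, tag it, execute normally, then call
# the deep learning model executor; dictionary keys stand in for the tags.

def run_application(deep_learning_executor):
    memory = {}                                          # stand-in for memory 502
    memory["graphics.color_buffer"] = [255, 128, 0]      # data loaded (operation 512)
    memory["graphics.depth_buffer"] = [0.1, 0.5, 0.9]    # tags applied via the keys

    # ... ordinary application execution that reads and writes memory ...

    inferenced = deep_learning_executor(memory)          # call executor (operation 515)
    return inferenced                                    # application uses the output


print(run_application(lambda mem: {"denoised": mem["graphics.depth_buffer"]}))
```
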
  • FIG. 5C illustrates a flowchart of a method of the deep learning model executor 509 of FIG. 5A , in accordance with an embodiment.
  • the deep learning model executor 509 is started.
  • operation 516 occurs in response to operation 515 of FIG. 5B .
  • the deep learning model executor 509 is started for executing a deep learning model to obtain inferenced data therefrom.
  • a deep learning model configuration 524 is loaded.
  • the deep learning model configuration 524 may be loaded from any memory (e.g. local memory) storing the same.
  • the deep learning model configuration 524 indicates at least required input(s) for the deep learning model.
  • the required input(s) may be indicated using tags that correlate with tagged data in the memory 502 .
  • data is collected from the memory 502 that satisfies the required input(s) for the deep learning model.
  • the tags indicating the required input(s) may be matched to tags in the memory 502 , and data in the memory 502 associated with those tags may be collected.
  • the deep learning model executor 509 may execute the input data collector 510 to perform operation 518 .
  • the collected data is provided to the deep learning model.
  • the collected data may be input to the deep learning model for processing thereof.
  • the deep learning model is executed to generate inferenced data for the input data.
  • the output (i.e. the inferenced data) of the deep learning model is provided to the software application 501 , and finally the deep learning model executor 509 is terminated in operation 523 .
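  • An executor-side counterpart to the application sketch above, again with hypothetical names and a stand-in model function, could be organized as follows.

```python
# Sketch mirroring FIG. 5C: load the model configuration, collect tagged data
# satisfying the required inputs, execute the model, and return the output.

def deep_learning_model_executor(memory, model_fn, model_config):
    required = model_config["required_inputs"]                       # configuration 524
    collected = {t: memory[t] for t in required if t in memory}      # data collector 510
    inferenced = model_fn(collected)                                 # execute the model
    return inferenced                                                # back to the caller


config = {"required_inputs": ["graphics.depth_buffer"]}
memory = {"graphics.depth_buffer": [0.1, 0.5, 0.9], "graphics.color_buffer": [255, 128, 0]}
print(deep_learning_model_executor(memory, lambda x: {"denoised": x}, config))
```
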
  • Deep neural networks, including deep learning models, developed on processors have been used for diverse use cases, from self-driving cars to faster drug development, from automatic image captioning in online image databases to smart real-time language translation in video chat applications.
  • Deep learning is a technique that models the neural learning process of the human brain, continually learning, continually getting smarter, and delivering more accurate results more quickly over time.
  • a child is initially taught by an adult to correctly identify and classify various shapes, eventually being able to identify shapes without any coaching.
  • a deep learning or neural learning system needs to be trained in object recognition and classification for it to get smarter and more efficient at identifying basic objects, occluded objects, etc., while also assigning context to objects.
  • neurons in the human brain look at various inputs that are received, importance levels are assigned to each of these inputs, and output is passed on to other neurons to act upon.
  • An artificial neuron or perceptron is the most basic model of a neural network.
  • a perceptron may receive one or more inputs that represent various features of an object that the perceptron is being trained to recognize and classify, and each of these features is assigned a certain weight based on the importance of that feature in defining the shape of an object.
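  • As a toy illustration of the weighted-input idea (not taken from the disclosure), a perceptron can be written in a few lines:

```python
# Tiny perceptron sketch: each input feature carries a weight reflecting its
# importance, and the neuron fires when the weighted sum crosses a threshold.

def perceptron(features, weights, bias=0.0):
    weighted_sum = sum(f * w for f, w in zip(features, weights)) + bias
    return 1 if weighted_sum > 0 else 0


# Two illustrative features (e.g. "has wheels", "has windshield") with assumed weights.
print(perceptron([1.0, 1.0], [0.7, 0.4], bias=-0.5))   # prints 1
```
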
  • a deep neural network (DNN) model includes multiple layers of many connected nodes (e.g., perceptrons, Boltzmann machines, radial basis functions, convolutional layers, etc.) that can be trained with enormous amounts of input data to quickly solve complex problems with high accuracy.
  • a first layer of the DNN model breaks down an input image of an automobile into various sections and looks for basic patterns such as lines and angles.
  • the second layer assembles the lines to look for higher level patterns such as wheels, windshields, and mirrors.
  • the next layer identifies the type of vehicle, and the final few layers generate a label for the input image, identifying the model of a specific automobile brand.
  • the DNN can be deployed and used to identify and classify objects or patterns in a process known as inference.
  • inference the process through which a DNN extracts useful information from a given input
  • examples of inference include identifying handwritten numbers on checks deposited into ATM machines, identifying images of friends in photos, delivering movie recommendations to over fifty million users, identifying and classifying different types of automobiles, pedestrians, and road hazards in driverless cars, or translating human speech in real-time.
  • Training complex neural networks requires massive amounts of parallel computing performance, including floating-point multiplications and additions. Inferencing is less compute-intensive than training, being a latency-sensitive process where a trained neural network is applied to new inputs it has not seen before to classify images, translate speech, and generally infer new information.
  • a deep learning or neural learning system needs to be trained to generate inferences from input data. Details regarding inference and/or training logic 615 for a deep learning or neural learning system are provided below in conjunction with FIGS. 6A and/or 6B .
  • inference and/or training logic 615 may include, without limitation, a data storage 601 to store forward and/or output weight and/or input/output data corresponding to neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments.
  • data storage 601 stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during forward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments.
  • any portion of data storage 601 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory.
  • any portion of data storage 601 may be internal or external to one or more processors or other hardware logic devices or circuits.
  • data storage 601 may be cache memory, dynamic randomly addressable memory (“DRAM”), static randomly addressable memory (“SRAM”), non-volatile memory (e.g., Flash memory), or other storage.
  • choice of whether data storage 601 is internal or external to a processor, for example, or comprised of DRAM, SRAM, Flash or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors.
  • inference and/or training logic 615 may include, without limitation, a data storage 605 to store backward and/or output weight and/or input/output data corresponding to neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments.
  • data storage 605 stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during backward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments.
  • any portion of data storage 605 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory. In at least one embodiment, any portion of data storage 605 may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, data storage 605 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., Flash memory), or other storage.
  • choice of whether data storage 605 is internal or external to a processor, for example, or comprised of DRAM, SRAM, Flash or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors.
  • data storage 601 and data storage 605 may be separate storage structures. In at least one embodiment, data storage 601 and data storage 605 may be same storage structure. In at least one embodiment, data storage 601 and data storage 605 may be partially same storage structure and partially separate storage structures. In at least one embodiment, any portion of data storage 601 and data storage 605 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory.
  • inference and/or training logic 615 may include, without limitation, one or more arithmetic logic unit(s) (“ALU(s)”) 610 to perform logical and/or mathematical operations based, at least in part on, or indicated by, training and/or inference code, result of which may result in activations (e.g., output values from layers or neurons within a neural network) stored in an activation storage 620 that are functions of input/output and/or weight parameter data stored in data storage 601 and/or data storage 605 .
  • activations stored in activation storage 620 are generated according to linear algebraic and/or matrix-based mathematics performed by ALU(s) 610 in response to performing instructions or other code, wherein weight values stored in data storage 605 and/or data storage 601 are used as operands along with other values, such as bias values, gradient information, momentum values, or other parameters or hyperparameters, any or all of which may be stored in data storage 605 or data storage 601 or another storage on or off-chip.
  • ALU(s) 610 are included within one or more processors or other hardware logic devices or circuits, whereas in another embodiment, ALU(s) 610 may be external to a processor or other hardware logic device or circuit that uses them (e.g., a co-processor). In at least one embodiment, ALUs 610 may be included within a processor's execution units or otherwise within a bank of ALUs accessible by a processor's execution units either within same processor or distributed between different processors of different types (e.g., central processing units, graphics processing units, fixed function units, etc.).
  • data storage 601 , data storage 605 , and activation storage 620 may be on same processor or other hardware logic device or circuit, whereas in another embodiment, they may be in different processors or other hardware logic devices or circuits, or some combination of same and different processors or other hardware logic devices or circuits.
  • any portion of activation storage 620 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory.
  • inferencing and/or training code may be stored with other code accessible to a processor or other hardware logic or circuit and fetched and/or processed using a processor's fetch, decode, scheduling, execution, retirement and/or other logical circuits.
  • activation storage 620 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., Flash memory), or other storage. In at least one embodiment, activation storage 620 may be completely or partially within or external to one or more processors or other logical circuits. In at least one embodiment, choice of whether activation storage 620 is internal or external to a processor, for example, or comprised of DRAM, SRAM, Flash or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors.
  • inference and/or training logic 615 illustrated in FIG. 6A may be used in conjunction with central processing unit (“CPU”) hardware, graphics processing unit (“GPU”) hardware or other hardware, such as field programmable gate arrays (“FPGAs”).
  • FIG. 6B illustrates inference and/or training logic 615 , according to at least one embodiment.
  • inference and/or training logic 615 may include, without limitation, hardware logic in which computational resources are dedicated or otherwise exclusively used in conjunction with weight values or other information corresponding to one or more layers of neurons within a neural network.
  • inference and/or training logic 615 illustrated in FIG. 6B may be used in conjunction with an application-specific integrated circuit (ASIC), such as a Tensor Processing Unit (TPU) from Google, an inference processing unit (IPU) from Graphcore™, or a Nervana® (e.g., "Lake Crest") processor from Intel Corp.
  • inference and/or training logic 615 includes, without limitation, data storage 601 and data storage 605 , which may be used to store weight values and/or other information, including bias values, gradient information, momentum values, and/or other parameter or hyperparameter information.
  • data storage 601 and data storage 605 are associated with a dedicated computational resource, such as computational hardware 602 and computational hardware 606 , respectively.
  • each of computational hardware 602 and computational hardware 606 comprises one or more ALUs that perform mathematical functions, such as linear algebraic functions, only on information stored in data storage 601 and data storage 605 , respectively, result of which is stored in activation storage 620 .
  • each of data storage 601 and 605 and corresponding computational hardware 602 and 606 correspond to different layers of a neural network, such that resulting activation from one “storage/computational pair 601 / 602 ” of data storage 601 and computational hardware 602 is provided as an input to next “storage/computational pair 605 / 606 ” of data storage 605 and computational hardware 606 , in order to mirror conceptual organization of a neural network.
  • each of storage/computational pairs 601 / 602 and 605 / 606 may correspond to more than one neural network layer.
  • additional storage/computation pairs (not shown) subsequent to or in parallel with storage computation pairs 601 / 602 and 605 / 606 may be included in inference and/or training logic 615 .
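  • The pairing can be pictured with a deliberately simplified sketch (plain Python functions standing in for the dedicated hardware; nothing here reflects actual circuit design): each pair holds the weights for one or more layers, and the activation produced by one pair is fed to the next.

```python
# Conceptual sketch only: a "storage/computational pair" holds weights (storage)
# plus the compute that consumes them; activations flow from pair to pair.

def make_pair(weight):
    storage = weight                               # analogue of data storage 601 / 605
    return lambda x, w=storage: x * w              # analogue of computational hardware

pipeline = [make_pair(0.5), make_pair(2.0)]        # pairs 601/602 and 605/606

activation = 3.0                                   # input to the first pair
for pair in pipeline:
    activation = pair(activation)                  # result kept in activation storage 620
print(activation)                                  # 3.0 * 0.5 * 2.0 = 3.0
```
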
  • FIG. 7 illustrates another embodiment for training and deployment of a deep neural network.
  • untrained neural network 706 is trained using a training dataset 702 .
  • training framework 704 is a PyTorch framework, whereas in other embodiments, training framework 704 is a Tensorflow, Boost, Caffe, Microsoft Cognitive Toolkit/CNTK, MXNet, Chainer, Keras, Deeplearning4j, or other training framework.
  • training framework 704 trains an untrained neural network 706 and enables it to be trained using processing resources described herein to generate a trained neural network 708 .
  • weights may be chosen randomly or by pre-training using a deep belief network.
  • training may be performed in either a supervised, partially supervised, or unsupervised manner.
  • untrained neural network 706 is trained using supervised learning, wherein training dataset 702 includes an input paired with a desired output for an input, or where training dataset 702 includes input having known output and the output of the neural network is manually graded.
  • untrained neural network 706 is trained in a supervised manner by processing inputs from training dataset 702 and comparing resulting outputs against a set of expected or desired outputs.
  • errors are then propagated back through untrained neural network 706 .
  • training framework 704 adjusts weights that control untrained neural network 706 .
  • training framework 704 includes tools to monitor how well untrained neural network 706 is converging towards a model, such as trained neural network 708 , suitable for generating correct answers, such as in result 714 , based on known input data, such as new data 712 .
  • training framework 704 trains untrained neural network 706 repeatedly while adjusting weights to refine an output of untrained neural network 706 using a loss function and adjustment algorithm, such as stochastic gradient descent.
  • training framework 704 trains untrained neural network 706 until untrained neural network 706 achieves a desired accuracy.
  • trained neural network 708 can then be deployed to implement any number of machine learning operations.
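  • For concreteness, a minimal supervised training loop in PyTorch (one of the frameworks named above) is sketched below; the toy dataset, network size, and hyperparameters are illustrative assumptions rather than values from the disclosure.

```python
# Minimal supervised-training sketch: forward pass, loss against desired
# outputs, backpropagation of errors, and weight updates via SGD.
import torch
import torch.nn as nn

inputs = torch.randn(64, 8)     # toy "training dataset 702": inputs ...
targets = torch.randn(64, 1)    # ... paired with desired outputs

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))   # untrained network
loss_fn = nn.MSELoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)               # stochastic gradient descent

for epoch in range(100):                    # repeat until the loss is acceptable
    optimizer.zero_grad()
    outputs = model(inputs)                 # forward pass
    loss = loss_fn(outputs, targets)        # compare against desired outputs
    loss.backward()                         # propagate errors back through the network
    optimizer.step()                        # framework adjusts the weights

with torch.no_grad():                       # trained network can now serve inference
    print(model(torch.randn(1, 8)))
```
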
  • untrained neural network 706 is trained using unsupervised learning, wherein untrained neural network 706 attempts to train itself using unlabeled data.
  • in unsupervised learning, training dataset 702 will include input data without any associated output data or "ground truth" data.
  • untrained neural network 706 can learn groupings within training dataset 702 and can determine how individual inputs are related to training dataset 702 .
  • unsupervised training can be used to generate a self-organizing map, which is a type of trained neural network 708 capable of performing operations useful in reducing dimensionality of new data 712 .
  • unsupervised training can also be used to perform anomaly detection, which allows identification of data points in a new dataset 712 that deviate from normal patterns of new dataset 712 .
  • semi-supervised learning may be used, which is a technique in which training dataset 702 includes a mix of labeled and unlabeled data.
  • training framework 704 may be used to perform incremental learning, such as through transfer learning techniques.
  • incremental learning enables trained neural network 708 to adapt to new data 712 without forgetting knowledge instilled within network during initial training.
  • FIG. 8 illustrates an example data center 800 , in which at least one embodiment may be used.
  • data center 800 includes a data center infrastructure layer 810 , a framework layer 820 , a software layer 830 and an application layer 840 .
  • data center infrastructure layer 810 may include a resource orchestrator 812 , grouped computing resources 814 , and node computing resources (“node C.R.s”) 816 ( 1 )- 816 (N), where “N” represents any whole, positive integer.
  • node C.R.s 816 ( 1 )- 816 (N) may include, but are not limited to, any number of central processing units ("CPUs") or other processors (including accelerators, field programmable gate arrays (FPGAs), graphics processors, etc.), memory devices (e.g., dynamic random access memory), storage devices (e.g., solid state or disk drives), network input/output ("NW I/O") devices, network switches, virtual machines ("VMs"), power modules, and cooling modules, etc.
  • one or more node C.R.s from among node C.R.s 816 ( 1 )- 816 (N) may be a server having one or more of above-mentioned computing resources.
  • grouped computing resources 814 may include separate groupings of node C.R.s housed within one or more racks (not shown), or many racks housed in data centers at various geographical locations (also not shown). Separate groupings of node C.R.s within grouped computing resources 814 may include grouped compute, network, memory or storage resources that may be configured or allocated to support one or more workloads. In at least one embodiment, several node C.R.s including CPUs or processors may be grouped within one or more racks to provide compute resources to support one or more workloads. In at least one embodiment, one or more racks may also include any number of power modules, cooling modules, and network switches, in any combination.
  • resource orchestrator 812 may configure or otherwise control one or more node C.R.s 816 ( 1 )- 816 (N) and/or grouped computing resources 814 .
  • resource orchestrator 812 may include a software design infrastructure ("SDI") management entity for data center 800 .
  • resource orchestrator may include hardware, software or some combination thereof.
  • framework layer 820 includes a job scheduler 832 , a configuration manager 834 , a resource manager 836 and a distributed file system 838 .
  • framework layer 820 may include a framework to support software 832 of software layer 830 and/or one or more application(s) 842 of application layer 840 .
  • software 832 or application(s) 842 may respectively include web-based service software or applications, such as those provided by Amazon Web Services, Google Cloud and Microsoft Azure.
  • framework layer 820 may be, but is not limited to, a type of free and open-source software web application framework such as Apache Spark™ (hereinafter "Spark") that may utilize distributed file system 838 for large-scale data processing (e.g., "big data").
  • job scheduler 832 may include a Spark driver to facilitate scheduling of workloads supported by various layers of data center 800 .
  • configuration manager 834 may be capable of configuring different layers such as software layer 830 and framework layer 820 including Spark and distributed file system 838 for supporting large-scale data processing.
  • resource manager 836 may be capable of managing clustered or grouped computing resources mapped to or allocated for support of distributed file system 838 and job scheduler 832 .
  • clustered or grouped computing resources may include grouped computing resource 814 at data center infrastructure layer 810 .
  • resource manager 836 may coordinate with resource orchestrator 812 to manage these mapped or allocated computing resources.
  • software 832 included in software layer 830 may include software used by at least portions of node C.R.s 816 ( 1 )- 816 (N), grouped computing resources 814 , and/or distributed file system 838 of framework layer 820 .
  • one or more types of software may include, but are not limited to, Internet web page search software, e-mail virus scan software, database software, and streaming video content software.
  • application(s) 842 included in application layer 840 may include one or more types of applications used by at least portions of node C.R.s 816 ( 1 )- 816 (N), grouped computing resources 814 , and/or distributed file system 838 of framework layer 820 .
  • one or more types of applications may include, but are not limited to, any number of a genomics application, a cognitive compute, and a machine learning application, including training or inferencing software, machine learning framework software (e.g., PyTorch, TensorFlow, Caffe, etc.) or other machine learning applications used in conjunction with one or more embodiments.
  • any of configuration manager 834 , resource manager 836 , and resource orchestrator 812 may implement any number and type of self-modifying actions based on any amount and type of data acquired in any technically feasible fashion.
  • self-modifying actions may relieve a data center operator of data center 800 from making possibly bad configuration decisions and may help avoid underutilized and/or poorly performing portions of a data center.
  • data center 800 may include tools, services, software or other resources to train one or more machine learning models or predict or infer information using one or more machine learning models according to one or more embodiments described herein.
  • a machine learning model may be trained by calculating weight parameters according to a neural network architecture using software and computing resources described above with respect to data center 800 .
  • trained machine learning models corresponding to one or more neural networks may be used to infer or predict information using resources described above with respect to data center 800 by using weight parameters calculated through one or more training techniques described herein.
  • data center may use CPUs, application-specific integrated circuits (ASICs), GPUs, FPGAs, or other hardware to perform training and/or inferencing using above-described resources.
  • one or more software and/or hardware resources described above may be configured as a service to allow users to train models or perform inferencing of information, such as image recognition, speech recognition, or other artificial intelligence services.
  • Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. In at least one embodiment, inference and/or training logic 615 may be used in system FIG. 8 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.
  • an embodiment may provide a deep learning model usable for performing inferencing operations and for providing inferenced data, where the deep learning model is stored (partially or wholly) in one or both of data storage 601 and 605 in inference and/or training logic 615 as depicted in FIGS. 6A and 6B .
  • Training and deployment of the deep learning model may be performed as depicted in FIG. 7 and described herein.
  • Distribution of the deep learning model may be performed using one or more servers in a data center 800 as depicted in FIG. 8 and described herein.

Abstract

Traditionally, a software application is developed, tested, and then published for use by end users. Any subsequent update made to the software application is generally in the form of a human programmed modification made to the code in the software application itself, and further only becomes usable once tested, published, and installed by end users having the previous version of the software application. This typical software application lifecycle causes delays in not only generating improvements to software applications, but also to those improvements being made accessible to end users. To help avoid these delays and improve performance of software applications, deep learning models may be made accessible to the software applications for use in providing inferenced data to the software applications, which the software applications may then use as desired. These deep learning models can furthermore be improved independently of the software applications using manual and/or automated processes.

Description

    RELATED APPLICATION(S)
  • This application claims the benefit of U.S. Provisional Application No. 62/717,735, titled “CONTINUOUS OPTIMIZATION AND UPDATE SYSTEM FOR DEEP LEARNING MODELS” and filed Aug. 10, 2018, the entire contents of which is incorporated herein by reference.
  • This application is related to co-pending U.S. application Ser. No. 16/537,215, titled “OPTIMIZATION AND UPDATE SYSTEM FOR DEEP LEARNING MODELS” (Attorney Ref: NVIDP1275/18-SC-0194US02) filed Aug. 9, 2019, the entire contents of which is incorporated herein by reference.
  • This application is related to co-pending U.S. application Ser. No. ______, titled “AUTOMATIC DATASET CREATION USING SOFTWARE TAGS” (Attorney Ref: NVIDP1277/18-SC-0197US01) and filed Aug. ______, 2019, the entire contents of which is incorporated herein by reference.
  • TECHNICAL FIELD
  • The present disclosure relates to executing deep learning models.
  • BACKGROUND
  • Traditionally, a software application is developed and tested by developers, and then published for use to end users. Any subsequent update made to the software application is generally in the form of a human programmed modification made to the code in the software application itself, and further only becomes usable once tested, published, and installed by end users having the previous version of the software application. This typical software application lifecycle causes delays in not only generating improvements to software applications, but also to those improvements being made accessible to end users.
  • There is a need for addressing these issues and/or other issues associated with the prior art.
  • SUMMARY
  • A method, computer readable medium, and system are disclosed for executing a deep learning model using tagged data. In use, one or more required inputs for a deep learning model are determined. Additionally, metadata associated with data of a software application is used to retrieve one or more portions of the data from the software application that satisfy the one or more required inputs for the deep learning model. Further, the retrieved one or more portions of the data are provided to the deep learning model for processing to generate inferenced data.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 illustrates a block diagram of a system including a server that provisions a deep learning model to a client for use by a software application installed on the client, in accordance with an embodiment.
  • FIG. 2 illustrates a flowchart of a client method for tagging data for use in executing a deep learning model, in accordance with an embodiment.
  • FIG. 3 illustrates a flowchart of a client method for using tagged data to provide required input data to a deep learning model, in accordance with an embodiment.
  • FIG. 4 illustrates a flowchart of a client method for executing a deep learning model using tagged data, in accordance with an embodiment.
  • FIG. 5A illustrates a block diagram of a client system for executing a deep learning model using tagged data, in accordance with an embodiment.
  • FIG. 5B illustrates a flowchart of a method of the software application of FIG. 5A, in accordance with an embodiment.
  • FIG. 5C illustrates a flowchart of a method of the deep learning model executor of FIG. 5A, in accordance with an embodiment.
  • FIG. 6A illustrates inference and/or training logic, according to at least one embodiment;
  • FIG. 6B illustrates inference and/or training logic, according to at least one embodiment;
  • FIG. 7 illustrates training and deployment of a neural network, according to at least one embodiment;
  • FIG. 8 illustrates an example data center system, according to at least one embodiment.
  • DETAILED DESCRIPTION
  • FIG. 1 illustrates a block diagram of a system 100 including a server 101 that provisions a deep learning model 102 to a client 103 for use by a software application 104 installed on the client 103, in accordance with an embodiment.
  • With respect to the present description, the server 101 may be any computing device, including (without limitation) partially or wholly virtualized computing device, or combination of devices, capable of communicating with the client 103 over a wired or wireless connection, for the purpose of provisioning the deep learning model 102 to the client 103 for use by a software application 104 installed on the client 103. For example, the server 101 may include a hardware memory (e.g. random access memory (RAM), etc.) for storing the deep learning model 102 and a hardware processor (e.g. central processing unit (CPU), graphics processing unit (GPU), etc.) for provisioning the deep learning model 102 from the memory to the client 103 over the wired or wireless connection. The server 101 may provision the deep learning model 102 to the client 103 by sending a copy of the deep learning model 102 over the wired or wireless connection to the client 103.
  • Also with respect to the present description, the client 103 may be any computing device—including (without limitation) one or more partially or wholly virtualized computing devices—capable of communicating with the server 101 over the wired or wireless connection, for the purpose of receiving from the server 101 the deep learning model 102 for use by the software application 104 installed on the client 103. Thus, the client 103 may not necessarily be an end-user device (e.g. personal computer, laptop, mobile phone, etc.) but may also be a server or other cloud-based computer system having the software application 104 installed thereon. In the case where the client 103 is a cloud-based computer system, output of the software application 104 may optionally be streamed or otherwise communicated to an end-user device. Generally, the client 103 may include a memory for storing the deep learning model 102 and a processor by which the software application 104 installed on the client 103 uses the deep learning model 102 for obtaining inferenced data. By storing a copy of the deep learning model 102 at the client (e.g. on a hard drive of the client), the client executes the deep learning model 102 locally.
  • The deep learning model 102 is a machine learned network (e.g. deep neural network) that is trained to perform inferencing operations and to provide inferenced data from input data. The deep learning model 102 may be trained using supervised, semi-supervised, or unsupervised training techniques. Optionally, the server 101 may be used to perform the training of the deep learning model 102, or may receive the already trained deep learning model 102 from another device.
  • The deep learning model 102 may be trained for performing various types of inferencing operations and for making any desired type of inferences. However, in the present embodiment, the deep learning model 102 outputs inferences that are usable by the software application 104 installed on the client 103. It should be noted that the deep learning model 102 may similarly be used by other software applications which may be installed on the client 103 or other clients, and thus may not necessarily be specifically trained for use by the software application 104 but instead may be trained more generically for use by multiple different software applications. In any case, the deep learning model 102 may not be coded within the software application 104 itself, but may be accessible to the software application 104 as external functionality (e.g. as a software patch) via an application programming interface (API). As a result, the deep learning model 102 may not necessarily be developed and provided by a same developer of the software application 104 but instead may be developed and provided by a third party developer.
  • In the present embodiment, the software application 104 installed on the client 103 generates or loads data which is used as input data to the deep learning model 102, which processes the input data to compute one or more inferences (i.e. inferenced data) for the input data. Accordingly, the deep learning model 102 is trained to process the input data and make inferences therefrom. The inferenced data is output by the deep learning model 102 and is returned to the software application 104 for use by functions, tasks, etc. of the software application 104.
  • There are various use cases for the system 100 described above. In one embodiment, the software application 104 may be a video game, a virtual reality application, part of a perception and/or control layer of an autonomous vehicle or machine, or another graphics-related computer program. In this embodiment, the deep learning model 102 may provide certain image-related inferences, such as providing from an input image or other input data an anti-aliased image, an image with upscaled resolution, a denoised image, and/or any other output image that is modified in at least one respect from the input image or other input data. As another example, the deep learning model 102 may provide certain video-related inferences, such as providing from input video or other input data a slow motion version of the input video or other input data, a super sampling of the input video or other input data, etc.
  • In another embodiment, the software application 104 may be a voice recognition application or other audio-related computer program. In this embodiment, the deep learning model 102 may provide certain audio-related inferences, such as providing from an input audio or other input data a language translation, a voice recognized command, and/or any other output that is inferenced from the input audio or other input data.
  • When the developer of the deep learning model wants to update or improve the model, they may need to gather new data to be used to re-train the model. To address any changing requirements for input data, the embodiments below describe systems and methods for retraining a deep learning model using tagged data. These systems and methods will allow an updated, or improved, version of the deep learning model 102 to use different input data (e.g. from the software application 104) than a prior version of the deep learning model, without also requiring the software application 104 to be reconfigured to provide the newly required inputs. This is accomplished by tagging data of the software application 104, and then using the tags to retrieve from the software application 104 input data currently required by the deep learning model.
  • It should be noted that the systems and methods described below may be implemented in the context of the system 100 of FIG. 1, but are not necessarily limited thereto.
  • FIG. 2 illustrates a flowchart of a client method 200 for tagging data for use in executing a deep learning model, in accordance with an embodiment. Accordingly, in one embodiment, the method 200 may be performed by the client 103 of FIG. 1.
  • In operation 201, a software application is stored. In the context of the present method 200, the software application is configured to use a deep learning model for performing inferencing operations and providing inferenced data (e.g. such as software application 104 that uses deep learning model 102 in FIG. 1). The software application may be stored locally (e.g. by the client 103 of FIG. 1).
  • In operation 202, metadata is received for data of the software application. The data includes any data stored by the software application or stored for use by the software application. For example, the data may be generated by the software application during execution thereof (e.g. a graphical image or user interface generated by the software application). Further, the data may be stored in memory used by the software application, such as CPU random access memory (RAM) and/or GPU RAM.
  • In one embodiment, the metadata may be received for specific portions of the memory storing different data of the software application. Thus, the metadata may be received for the data by being received for specific portions of the memory storing the data. For example, the specific portions of the memory may each store a different type of data, data output by a specific function or process of the software application, etc. The portions of the memory may be particular data structures (e.g. custom or common data structures used by the software application), buffers (e.g. intermediate rendering buffers used in a rendering pipeline), etc. In one exemplary embodiment where the software application is a graphics-related software application, metadata may be received for various buffers used in a graphics processing pipeline, such as a depth buffer, a normal buffer, etc.
  • In the context of the present description, the metadata is any descriptive information that can be associated with the data of the software application. For example, the metadata may categorize the data of the software application, may name the data of the software application, etc. As an option, the metadata may comply with a nomenclature specified for the deep learning model. For example, a developer or provider of the deep learning model may specify or characterize a particular nomenclature to be used for the deep learning model when configuring required input data for the deep learning model. In one embodiment, the metadata may be received from a developer of the software application or another user having knowledge of the data of the software application. The metadata may further be received in any desired format, such as extensible markup language (XML).
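  • Purely by way of illustration and without limitation, metadata of this kind might be expressed in XML and parsed into a simple lookup structure, as in the following sketch; the element names, tag names, and locations shown are hypothetical assumptions, not a prescribed format.

```python
import xml.etree.ElementTree as ET

# Hypothetical XML describing metadata tags for two buffers of a graphics application.
# The nomenclature ("depth_buffer", "normal_buffer") and the layout are assumptions
# made here for illustration only.
METADATA_XML = """
<application_metadata application="example_renderer">
    <tag name="depth_buffer"  location="gpu_ram:buffer_0"/>
    <tag name="normal_buffer" location="gpu_ram:buffer_1"/>
</application_metadata>
"""

def parse_metadata(xml_text):
    """Return a dict mapping each metadata tag name to the memory location it describes."""
    root = ET.fromstring(xml_text)
    return {tag.get("name"): tag.get("location") for tag in root.findall("tag")}

print(parse_metadata(METADATA_XML))
# {'depth_buffer': 'gpu_ram:buffer_0', 'normal_buffer': 'gpu_ram:buffer_1'}
```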
  • Further, as shown in operation 203, the metadata is stored in association with the data of the software application for use with the deep learning model. For example, the metadata may be assigned to the data of the software application for use with the deep learning model. Of course, it should be noted that the metadata may be stored in any manner that associates it with the corresponding data of the software application for which the metadata was received.
  • In one embodiment, the metadata received for particular data of the software application may be inserted in a portion of code of the software application that stores (in memory) or accesses (in memory) the particular data. In another embodiment, the metadata received for particular data of the software application may be inserted in a portion of code of the software application that defines the locations in memory in which the data is (to be) stored. In yet another embodiment, the metadata may be stored in a reference table that maps each metadata to the corresponding data and/or location in memory in which the data is stored.
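  • The reference-table embodiment mentioned above might be realized, for example, as a simple mapping from metadata tags to the buffers or data structures holding the tagged data. The following minimal sketch assumes hypothetical tag names and data objects.

```python
# Minimal sketch of a reference table that associates metadata tags with the
# application data (or memory locations) they describe. All names are hypothetical.
class TagTable:
    def __init__(self):
        self._table = {}

    def tag(self, metadata, data_ref):
        """Store an association between a metadata tag and a data reference."""
        self._table[metadata] = data_ref

    def lookup(self, metadata):
        """Return the data associated with a metadata tag, or None if untagged."""
        return self._table.get(metadata)

table = TagTable()
table.tag("depth_buffer", bytearray(1024))          # e.g. an intermediate rendering buffer
table.tag("frame_rgb", bytearray(1920 * 1080 * 3))  # e.g. a rendered frame
```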
  • To this end, the method 200 can be implemented as a way to tag the data of the software application with the metadata by storing an association (relationship) therebetween. The tagged data may then be used for execution of the deep learning model, for example as described with reference to the Figures below.
  • FIG. 3 illustrates a flowchart of a client method 300 for using tagged data to assemble input data to provide to a deep learning model, in accordance with an embodiment. For example, in one embodiment, the method 300 may be performed by the client 103 of FIG. 1. Further, the method 300 may use the tagged data disclosed with respect to FIG. 2 above.
  • In operation 301, one or more required inputs for a deep learning model are determined. In the context of the present method 300, the deep learning model is usable for providing inferenced data to a software application (e.g. such as the deep learning model 102 used by the software application 104 of FIG. 1). The deep learning model may be stored locally (e.g. by the client 103). In one embodiment, the deep learning model may be stored in a local repository with other deep learning models usable for providing other types of inferenced data to the software application or other software applications.
  • The deep learning model is configured to receive certain input(s) and output certain output(s). Thus, the required input(s) for the deep learning model refers to the input(s) (e.g. data) that the deep learning model is configured to receive. These input(s) may be specified in any desired manner. For example, the input(s) may be specified as tags selected in accordance with a particular nomenclature used for the deep learning model (e.g. predefined for use in configuring the required input(s) for the deep learning model). In one embodiment, the required input(s) may be determined from a configuration file defined for the deep learning model.
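  • For example, a configuration file for the deep learning model might list the required input(s) as tags. The sketch below assumes a JSON layout and hypothetical tag names chosen only for illustration; any comparable format (e.g. XML) could serve equally well.

```python
import json

# Hypothetical configuration for a deep learning model, listing its required inputs as tags.
MODEL_CONFIG_JSON = """
{
    "model": "example_denoiser",
    "required_inputs": ["frame_rgb", "depth_buffer"],
    "output": "denoised_frame"
}
"""

def required_inputs(config_text):
    """Determine the required input tags for the deep learning model from its configuration."""
    return json.loads(config_text)["required_inputs"]

print(required_inputs(MODEL_CONFIG_JSON))  # ['frame_rgb', 'depth_buffer']
```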
  • In operation 302, metadata associated with data of a software application is used to retrieve one or more portions of the data that satisfy the required input(s) for the deep learning model. The software application may refer to one being executed to use the deep learning model to perform inferencing and obtain inferenced data therefrom. Thus, operation 302 may access memory used by the software application to retrieve therefrom the input data required by the deep learning model.
  • In one embodiment, an identifier (e.g. name, etc.) of the required input(s) determined in operation 301 may be matched to, or otherwise correlated with, metadata defined for certain data of the software application. The certain data of the software application associated with that metadata may then be retrieved. In the example above where the required input(s) for the deep learning model is specified as tags, the present operation may retrieve from the software application data tagged with metadata matching, or closely (e.g. fuzzy) matching, those tags.
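  • A minimal sketch of such matching is shown below, using exact lookups with an optional close (fuzzy) fallback from the Python standard library; the tag names and data values are hypothetical.

```python
import difflib

def collect_inputs(required_tags, tagged_data):
    """Retrieve application data whose metadata tags match (exactly or closely) the required inputs.

    tagged_data: dict mapping metadata tags to application data (hypothetical layout).
    """
    collected = {}
    for tag in required_tags:
        if tag in tagged_data:                      # exact match
            collected[tag] = tagged_data[tag]
        else:                                       # fuzzy match as a fallback
            close = difflib.get_close_matches(tag, list(tagged_data), n=1, cutoff=0.8)
            if close:
                collected[tag] = tagged_data[close[0]]
    return collected

tagged = {"frame_rgb": "...pixel data...", "depth_buf": "...depth data..."}
print(collect_inputs(["frame_rgb", "depth_buffer"], tagged))
# {'frame_rgb': '...pixel data...', 'depth_buffer': '...depth data...'}
```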
  • In operation 303, the one or more portions of the data retrieved from the software application is provided to the deep learning model for processing. In other words, the data of the software application satisfying the required input(s) for the deep learning model is provided as input to the deep learning model. The deep learning model can then process the input as described in more detail below with respect to FIG. 4.
  • FIG. 4 illustrates a flowchart of a client method 400 for executing a deep learning model using tagged data, in accordance with an embodiment. In one embodiment, the method 400 may be performed by the client 103 of FIG. 1. In another embodiment, the method 400 may be performed following the method 300 of FIG. 3.
  • In operation 401, data retrieved from a software application is used as input to a deep learning model. This operation may be the same as operation 303 of FIG. 3, in one embodiment. In any case, in the present embodiment the data retrieved from the software application is selected based on associated tags determined to satisfy the required input(s) configured for the deep learning model.
  • In operation 402, the deep learning model is executed to process the input data. Further, in operation 403, inferenced data is received as output of the deep learning model. The inferenced data may be any data that the deep learning model inferences from the input data.
  • In operation 404, the inferenced data is provided to the software application. In this way, the software application may use the inferenced data for one or more of its own processing tasks, functions, etc.
  • FIG. 5A illustrates a block diagram of a client system 500 for executing a deep learning model using tagged data. It should be noted that the definitions and/or descriptions provided with respect to the embodiments above may equally apply to the present description.
  • As shown, a software application 501 interfaces a memory 502. In the present embodiment, the memory 502 includes CPU RAM 503 and GPU RAM 506, but may also include other types of memory in other embodiments. The CPU RAM 503 stores one or more buffers 505A-N and one or more other (e.g. custom or common) data structures 504A-N. Similarly, the GPU RAM 506 stores one or more buffers 508A-N and one or more other data structures 507A-N. The software application 501 may use the buffers 505A-N and one or more other data structures 504A-N of the CPU RAM 503 and/or the one or more buffers 508A-N and one or more other data structures 507A-N of the GPU RAM 506 for storing data therein. The data may be any data generated, or otherwise used, by the software application 501.
  • During execution, the software application 501 loads data into the memory 502 and tags the data with metadata. The software application 501 may be configured (e.g. by a developer) to tag the data with certain metadata once loaded into the memory 502. In the embodiment shown, the software application 501 tags the data by tagging locations in the memory 502 in which the data is stored (e.g. tagged data structure 504A, tagged buffer 505A, etc.).
  • The software application 501 also interfaces a deep learning model executor 509. The deep learning model executor 509 is executable computer code that executes a deep learning model (not shown) in association with the software application 501. In particular, the software application 501 includes a function that calls (initiates) the deep learning model executor 509 to cause execution of the deep learning model.
  • When called, the deep learning model executor 509 collects data from the application or system memory 502 that satisfies the required input(s) of the deep learning model. The deep learning model executor 509 uses the tags provided for the data by the software application 501 to determine those portions of the data in the memory 502 that satisfy the required input(s) of the deep learning model. As shown, the deep learning model executor 509 may include a data collector 510 to collect the data. The data collector 510 may be a module (e.g. code segment, function, etc.) within the deep learning model executor 509, in one embodiment.
  • The deep learning model executor 509 further inputs the collected data to the deep learning model for processing thereof to generate inferenced data. The inferenced data may be output by the deep learning model to the deep learning model executor 509. The deep learning model executor 509 then provides to the software application 501 the inferenced data output by the deep learning model.
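  • The division of labor just described might be sketched as follows, where the class, method, and tag names are hypothetical and model stands in for any callable deep learning model.

```python
class DeepLearningModelExecutor:
    """Illustrative sketch of the executor role of FIG. 5A (all names are hypothetical)."""

    def __init__(self, model, required_tags):
        self.model = model                  # a callable deep learning model
        self.required_tags = required_tags  # tags naming the model's required inputs

    def collect_data(self, memory):
        """Data-collector role: gather the tagged portions of memory that satisfy the inputs."""
        return {tag: memory[tag] for tag in self.required_tags if tag in memory}

    def run(self, memory):
        """Collect the required inputs, execute the model, and return the inferenced data."""
        inputs = self.collect_data(memory)
        return self.model(inputs)
```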
  • FIG. 5B illustrates a flowchart of a method of the software application 501 of FIG. 5A, in accordance with an embodiment. As shown in operation 511, the software application 501 starts. The software application 501 may start upon initiation of the software application 501 by a user or by another software application.
  • Then, in operation 512, the software application 501 loads data into memory 502. The data that is loaded into memory 502 is data that is used by the software application 501 for executing various functions, performing various processes, etc. In one embodiment, operation 512 may include configuring or instantiating various data structures and/or buffers in the memory 502 for use in storing data generated, or used, by the software application 501 during execution thereof.
  • In operation 513, tags are applied to the data (in the memory 502). The tags are metadata that describe the data. Thus, different portions of the data may be tagged with different metadata.
  • In operation 514, the software application 501 executes (e.g. to perform various functions that use the memory 502). Then, in operation 515, the software application 501 calls the deep learning model executor 509. Operation 515 may occur at any point during execution of the software application 501, for the purpose of executing the deep learning model to obtain inferenced data therefrom.
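  • One purely illustrative way to express the flow of operations 511-515, assuming an executor object like the one sketched above, is shown below; all names are hypothetical.

```python
def load_frame():
    """Hypothetical stand-in for data the application would generate or load."""
    return [0.0] * 16

class StubExecutor:
    """Trivial stand-in for the deep learning model executor sketched above."""
    def run(self, memory):
        return {"inferenced": memory.get("frame_rgb")}

def run_application(executor):
    """Illustrative application flow mirroring operations 511-515."""
    memory = {}                              # operation 512: load data into memory
    memory["frame_rgb"] = load_frame()       # operation 513: tag the data (the key acts as the tag)
    # operation 514: the application executes its own functions using the memory
    inferenced = executor.run(memory)        # operation 515: call the deep learning model executor
    return inferenced

print(run_application(StubExecutor()))
```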
  • FIG. 5C illustrates a flowchart of a method of the deep learning model executor 509 of FIG. 5A, in accordance with an embodiment. As shown in operation 516, the deep learning model executor 509 is started. In the present embodiment, operation 516 occurs in response to operation 515 of FIG. 5B. Thus, the deep learning model executor 509 is started for executing a deep learning model to obtain inferenced data therefrom.
  • In operation 517, a deep learning model configuration 524 is loaded. The deep learning model configuration 524 may be loaded from any memory (e.g. local memory) storing the same. The deep learning model configuration 524 indicates at least required input(s) for the deep learning model. For example, the required input(s) may be indicated using tags that correlate with tagged data in the memory 502.
  • In operation 518, data is collected from the memory 502 that satisfies the required input(s) for the deep learning model. For example, the tags indicating the required input(s) may be matched to tags in the memory 502, and data in the memory 502 associated with those tags may be collected. In one embodiment, the deep learning model executor 509 may execute the data collector 510 to perform operation 518.
  • In operation 519, the collected data is provided to the deep learning model. In particular, the collected data may be input to the deep learning model for processing thereof. Then, in operation 520, the deep learning model is executed to generate inferenced data for the input data. In operation 521, output (i.e. the inferenced data) is received from the deep learning model. Further, in operation 522, the output is provided to the software application 501, and finally the deep learning model executor 509 is terminated in operation 523.
  • Machine Learning
  • Deep neural networks (DNNs), including deep learning models, developed on processors have been used for diverse use cases, from self-driving cars to faster drug development, from automatic image captioning in online image databases to smart real-time language translation in video chat applications. Deep learning is a technique that models the neural learning process of the human brain, continually learning, continually getting smarter, and delivering more accurate results more quickly over time. A child is initially taught by an adult to correctly identify and classify various shapes, eventually being able to identify shapes without any coaching. Similarly, a deep learning or neural learning system needs to be trained in object recognition and classification for it to get smarter and more efficient at identifying basic objects, occluded objects, etc., while also assigning context to objects.
  • At the simplest level, neurons in the human brain look at various inputs that are received, importance levels are assigned to each of these inputs, and output is passed on to other neurons to act upon. An artificial neuron or perceptron is the most basic model of a neural network. In one example, a perceptron may receive one or more inputs that represent various features of an object that the perceptron is being trained to recognize and classify, and each of these features is assigned a certain weight based on the importance of that feature in defining the shape of an object.
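  • As a concrete, deliberately tiny example of the perceptron just described (the feature values and weights below are arbitrary choices made only for illustration):

```python
def perceptron(inputs, weights, bias=0.0):
    """Weighted sum of the inputs followed by a simple threshold activation."""
    total = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if total > 0 else 0

# Two features of an object, each with a weight reflecting its assumed importance.
print(perceptron([0.9, 0.2], [0.7, 0.3]))  # -> 1
```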
  • A deep neural network (DNN) model includes multiple layers of many connected nodes (e.g., perceptrons, Boltzmann machines, radial basis functions, convolutional layers, etc.) that can be trained with enormous amounts of input data to quickly solve complex problems with high accuracy. In one example, a first layer of the DNN model breaks down an input image of an automobile into various sections and looks for basic patterns such as lines and angles. The second layer assembles the lines to look for higher level patterns such as wheels, windshields, and mirrors. The next layer identifies the type of vehicle, and the final few layers generate a label for the input image, identifying the model of a specific automobile brand.
  • Once the DNN is trained, the DNN can be deployed and used to identify and classify objects or patterns in a process known as inference. Examples of inference (the process through which a DNN extracts useful information from a given input) include identifying handwritten numbers on checks deposited into ATM machines, identifying images of friends in photos, delivering movie recommendations to over fifty million users, identifying and classifying different types of automobiles, pedestrians, and road hazards in driverless cars, or translating human speech in real-time.
  • During training, data flows through the DNN in a forward propagation phase until a prediction is produced that indicates a label corresponding to the input. If the neural network does not correctly label the input, then errors between the correct label and the predicted label are analyzed, and the weights are adjusted for each feature during a backward propagation phase until the DNN correctly labels the input and other inputs in a training dataset. Training complex neural networks requires massive amounts of parallel computing performance, including floating-point multiplications and additions. Inferencing is less compute-intensive than training, being a latency-sensitive process where a trained neural network is applied to new inputs it has not seen before to classify images, translate speech, and generally infer new information.
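  • The forward/backward cycle described above can be illustrated with a single linear neuron trained by plain gradient descent; the toy dataset and learning rate below are arbitrary choices made only for illustration.

```python
# Minimal sketch of the forward/backward training cycle described above,
# using one linear neuron and plain gradient descent (illustrative only).
data = [([1.0, 0.0], 1.0), ([0.0, 1.0], 0.0)]   # (features, label) pairs
weights, lr = [0.0, 0.0], 0.1

for _ in range(100):
    for features, label in data:
        # forward propagation: produce a prediction from the current weights
        prediction = sum(w * x for w, x in zip(weights, features))
        error = prediction - label
        # backward propagation: adjust each weight against its error gradient
        weights = [w - lr * error * x for w, x in zip(weights, features)]

print(weights)  # approaches [1.0, 0.0] for this toy dataset
```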
  • Inference and Training Logic
  • As noted above, a deep learning or neural learning system needs to be trained to generate inferences from input data. Details regarding inference and/or training logic 615 for a deep learning or neural learning system are provided below in conjunction with FIGS. 6A and/or 6B.
  • In at least one embodiment, inference and/or training logic 615 may include, without limitation, a data storage 601 to store forward and/or output weight and/or input/output data corresponding to neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments. In at least one embodiment data storage 601 stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during forward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments. In at least one embodiment, any portion of data storage 601 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory.
  • In at least one embodiment, any portion of data storage 601 may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, data storage 601 may be cache memory, dynamic randomly addressable memory (“DRAM”), static randomly addressable memory (“SRAM”), non-volatile memory (e.g., Flash memory), or other storage. In at least one embodiment, choice of whether data storage 601 is internal or external to a processor, for example, or comprised of DRAM, SRAM, Flash or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors.
  • In at least one embodiment, inference and/or training logic 615 may include, without limitation, a data storage 605 to store backward and/or output weight and/or input/output data corresponding to neurons or layers of a neural network trained and/or used for inferencing in aspects of one or more embodiments. In at least one embodiment, data storage 605 stores weight parameters and/or input/output data of each layer of a neural network trained or used in conjunction with one or more embodiments during backward propagation of input/output data and/or weight parameters during training and/or inferencing using aspects of one or more embodiments. In at least one embodiment, any portion of data storage 605 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory. In at least one embodiment, any portion of data storage 605 may be internal or external to one or more processors or other hardware logic devices or circuits. In at least one embodiment, data storage 605 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., Flash memory), or other storage. In at least one embodiment, choice of whether data storage 605 is internal or external to a processor, for example, or comprised of DRAM, SRAM, Flash or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors.
  • In at least one embodiment, data storage 601 and data storage 605 may be separate storage structures. In at least one embodiment, data storage 601 and data storage 605 may be same storage structure. In at least one embodiment, data storage 601 and data storage 605 may be partially same storage structure and partially separate storage structures. In at least one embodiment, any portion of data storage 601 and data storage 605 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory.
  • In at least one embodiment, inference and/or training logic 615 may include, without limitation, one or more arithmetic logic unit(s) (“ALU(s)”) 610 to perform logical and/or mathematical operations based, at least in part on, or indicated by, training and/or inference code, result of which may result in activations (e.g., output values from layers or neurons within a neural network) stored in an activation storage 620 that are functions of input/output and/or weight parameter data stored in data storage 601 and/or data storage 605. In at least one embodiment, activations stored in activation storage 620 are generated according to linear algebraic and/or matrix-based mathematics performed by ALU(s) 610 in response to performing instructions or other code, wherein weight values stored in data storage 605 and/or data storage 601 are used as operands along with other values, such as bias values, gradient information, momentum values, or other parameters or hyperparameters, any or all of which may be stored in data storage 605 or data storage 601 or another storage on or off-chip. In at least one embodiment, ALU(s) 610 are included within one or more processors or other hardware logic devices or circuits, whereas in another embodiment, ALU(s) 610 may be external to a processor or other hardware logic device or circuit that uses them (e.g., a co-processor). In at least one embodiment, ALUs 610 may be included within a processor's execution units or otherwise within a bank of ALUs accessible by a processor's execution units either within same processor or distributed between different processors of different types (e.g., central processing units, graphics processing units, fixed function units, etc.). In at least one embodiment, data storage 601, data storage 605, and activation storage 620 may be on same processor or other hardware logic device or circuit, whereas in another embodiment, they may be in different processors or other hardware logic devices or circuits, or some combination of same and different processors or other hardware logic devices or circuits. In at least one embodiment, any portion of activation storage 620 may be included with other on-chip or off-chip data storage, including a processor's L1, L2, or L3 cache or system memory. Furthermore, inferencing and/or training code may be stored with other code accessible to a processor or other hardware logic or circuit and fetched and/or processed using a processor's fetch, decode, scheduling, execution, retirement and/or other logical circuits.
  • In at least one embodiment, activation storage 620 may be cache memory, DRAM, SRAM, non-volatile memory (e.g., Flash memory), or other storage. In at least one embodiment, activation storage 620 may be completely or partially within or external to one or more processors or other logical circuits. In at least one embodiment, choice of whether activation storage 620 is internal or external to a processor, for example, or comprised of DRAM, SRAM, Flash or some other storage type may depend on available storage on-chip versus off-chip, latency requirements of training and/or inferencing functions being performed, batch size of data used in inferencing and/or training of a neural network, or some combination of these factors. In at least one embodiment, inference and/or training logic 615 illustrated in FIG. 6A may be used in conjunction with an application-specific integrated circuit (“ASIC”), such as Tensorflow® Processing Unit from Google, an inference processing unit (IPU) from Graphcore™, or a Nervana® (e.g., “Lake Crest”) processor from Intel Corp. In at least one embodiment, inference and/or training logic 615 illustrated in FIG. 6A may be used in conjunction with central processing unit (“CPU”) hardware, graphics processing unit (“GPU”) hardware or other hardware, such as field programmable gate arrays (“FPGAs”).
  • FIG. 6B illustrates inference and/or training logic 615, according to at least one embodiment. In at least one embodiment, inference and/or training logic 615 may include, without limitation, hardware logic in which computational resources are dedicated or otherwise exclusively used in conjunction with weight values or other information corresponding to one or more layers of neurons within a neural network. In at least one embodiment, inference and/or training logic 615 illustrated in FIG. 6B may be used in conjunction with an application-specific integrated circuit (ASIC), such as Tensorflow® Processing Unit from Google, an inference processing unit (IPU) from Graphcore™, or a Nervana® (e.g., “Lake Crest”) processor from Intel Corp. In at least one embodiment, inference and/or training logic 615 illustrated in FIG. 6B may be used in conjunction with central processing unit (CPU) hardware, graphics processing unit (GPU) hardware or other hardware, such as field programmable gate arrays (FPGAs). In at least one embodiment, inference and/or training logic 615 includes, without limitation, data storage 601 and data storage 605, which may be used to store weight values and/or other information, including bias values, gradient information, momentum values, and/or other parameter or hyperparameter information. In at least one embodiment illustrated in FIG. 6B, each of data storage 601 and data storage 605 is associated with a dedicated computational resource, such as computational hardware 602 and computational hardware 606, respectively. In at least one embodiment, each of computational hardware 602 and computational hardware 606 comprises one or more ALUs that perform mathematical functions, such as linear algebraic functions, only on information stored in data storage 601 and data storage 605, respectively, result of which is stored in activation storage 620.
  • In at least one embodiment, each of data storage 601 and 605 and corresponding computational hardware 602 and 606, respectively, correspond to different layers of a neural network, such that resulting activation from one “storage/computational pair 601/602” of data storage 601 and computational hardware 602 is provided as an input to next “storage/computational pair 605/606” of data storage 605 and computational hardware 606, in order to mirror conceptual organization of a neural network. In at least one embodiment, each of storage/computational pairs 601/602 and 605/606 may correspond to more than one neural network layer. In at least one embodiment, additional storage/computation pairs (not shown) subsequent to or in parallel with storage computation pairs 601/602 and 605/606 may be included in inference and/or training logic 615.
  • Neural Network Training and Deployment
  • FIG. 7 illustrates another embodiment for training and deployment of a deep neural network. In at least one embodiment, untrained neural network 706 is trained using a training dataset 702. In at least one embodiment, training framework 704 is a PyTorch framework, whereas in other embodiments, training framework 704 is a Tensorflow, Boost, Caffe, Microsoft Cognitive Toolkit/CNTK, MXNet, Chainer, Keras, Deeplearning4j, or other training framework. In at least one embodiment training framework 704 trains an untrained neural network 706 and enables it to be trained using processing resources described herein to generate a trained neural network 708. In at least one embodiment, weights may be chosen randomly or by pre-training using a deep belief network. In at least one embodiment, training may be performed in either a supervised, partially supervised, or unsupervised manner.
  • In at least one embodiment, untrained neural network 706 is trained using supervised learning, wherein training dataset 702 includes an input paired with a desired output for an input, or where training dataset 702 includes input having known output and the output of the neural network is manually graded. In at least one embodiment, untrained neural network 706, trained in a supervised manner, processes inputs from training dataset 702 and compares resulting outputs against a set of expected or desired outputs. In at least one embodiment, errors are then propagated back through untrained neural network 706. In at least one embodiment, training framework 704 adjusts weights that control untrained neural network 706. In at least one embodiment, training framework 704 includes tools to monitor how well untrained neural network 706 is converging towards a model, such as trained neural network 708, suitable for generating correct answers, such as in result 714, based on known input data, such as new data 712. In at least one embodiment, training framework 704 trains untrained neural network 706 repeatedly while adjusting weights to refine an output of untrained neural network 706 using a loss function and adjustment algorithm, such as stochastic gradient descent. In at least one embodiment, training framework 704 trains untrained neural network 706 until untrained neural network 706 achieves a desired accuracy. In at least one embodiment, trained neural network 708 can then be deployed to implement any number of machine learning operations.
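  • Purely as an illustrative sketch of such supervised training using one of the frameworks named above (here PyTorch), with a hypothetical random dataset and an arbitrary network architecture chosen only for illustration:

```python
import torch
from torch import nn

# Hypothetical training dataset: inputs paired with desired outputs.
inputs = torch.randn(64, 8)
targets = torch.randn(64, 1)

untrained_network = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(untrained_network.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

for epoch in range(20):
    optimizer.zero_grad()
    outputs = untrained_network(inputs)     # forward pass
    loss = loss_fn(outputs, targets)        # compare against desired outputs
    loss.backward()                         # propagate errors back through the network
    optimizer.step()                        # adjust weights (stochastic gradient descent)
```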
  • In at least one embodiment, untrained neural network 706 is trained using unsupervised learning, wherein untrained neural network 706 attempts to train itself using unlabeled data. In at least one embodiment, for unsupervised learning, training dataset 702 will include input data without any associated output data or “ground truth” data. In at least one embodiment, untrained neural network 706 can learn groupings within training dataset 702 and can determine how individual inputs are related to training dataset 702. In at least one embodiment, unsupervised training can be used to generate a self-organizing map, which is a type of trained neural network 708 capable of performing operations useful in reducing dimensionality of new data 712. In at least one embodiment, unsupervised training can also be used to perform anomaly detection, which allows identification of data points in a new dataset 712 that deviate from normal patterns of new dataset 712.
  • In at least one embodiment, semi-supervised learning may be used, which is a technique in which training dataset 702 includes a mix of labeled and unlabeled data. In at least one embodiment, training framework 704 may be used to perform incremental learning, such as through transferred learning techniques. In at least one embodiment, incremental learning enables trained neural network 708 to adapt to new data 712 without forgetting knowledge instilled within network during initial training.
  • Data Center
  • FIG. 8 illustrates an example data center 800, in which at least one embodiment may be used. In at least one embodiment, data center 800 includes a data center infrastructure layer 810, a framework layer 820, a software layer 830 and an application layer 840.
  • In at least one embodiment, as shown in FIG. 8, data center infrastructure layer 810 may include a resource orchestrator 812, grouped computing resources 814, and node computing resources (“node C.R.s”) 816(1)-816(N), where “N” represents any whole, positive integer. In at least one embodiment, node C.R.s 816(1)-816(N) may include, but are not limited to, any number of central processing units (“CPUs”) or other processors (including accelerators, field programmable gate arrays (FPGAs), graphics processors, etc.), memory devices (e.g., dynamic read-only memory), storage devices (e.g., solid state or disk drives), network input/output (“NW I/O”) devices, network switches, virtual machines (“VMs”), power modules, and cooling modules, etc. In at least one embodiment, one or more node C.R.s from among node C.R.s 816(1)-816(N) may be a server having one or more of above-mentioned computing resources.
  • In at least one embodiment, grouped computing resources 814 may include separate groupings of node C.R.s housed within one or more racks (not shown), or many racks housed in data centers at various geographical locations (also not shown). Separate groupings of node C.R.s within grouped computing resources 814 may include grouped compute, network, memory or storage resources that may be configured or allocated to support one or more workloads. In at least one embodiment, several node C.R.s including CPUs or processors may be grouped within one or more racks to provide compute resources to support one or more workloads. In at least one embodiment, one or more racks may also include any number of power modules, cooling modules, and network switches, in any combination.
  • In at least one embodiment, resource orchestrator 812 may configure or otherwise control one or more node C.R.s 816(1)-816(N) and/or grouped computing resources 814. In at least one embodiment, resource orchestrator 812 may include a software design infrastructure (“SDI”) management entity for data center 800. In at least one embodiment, resource orchestrator 812 may include hardware, software or some combination thereof.
  • In at least one embodiment, as shown in FIG. 8, framework layer 820 includes a job scheduler 832, a configuration manager 834, a resource manager 836 and a distributed file system 838. In at least one embodiment, framework layer 820 may include a framework to support software 832 of software layer 830 and/or one or more application(s) 842 of application layer 840. In at least one embodiment, software 832 or application(s) 842 may respectively include web-based service software or applications, such as those provided by Amazon Web Services, Google Cloud and Microsoft Azure. In at least one embodiment, framework layer 820 may be, but is not limited to, a type of free and open-source software web application framework such as Apache Spark™ (hereinafter “Spark”) that may utilize distributed file system 838 for large-scale data processing (e.g., “big data”). In at least one embodiment, job scheduler 832 may include a Spark driver to facilitate scheduling of workloads supported by various layers of data center 800. In at least one embodiment, configuration manager 834 may be capable of configuring different layers such as software layer 830 and framework layer 820 including Spark and distributed file system 838 for supporting large-scale data processing. In at least one embodiment, resource manager 836 may be capable of managing clustered or grouped computing resources mapped to or allocated for support of distributed file system 838 and job scheduler 832. In at least one embodiment, clustered or grouped computing resources may include grouped computing resource 814 at data center infrastructure layer 810. In at least one embodiment, resource manager 836 may coordinate with resource orchestrator 812 to manage these mapped or allocated computing resources.
  • In at least one embodiment, software 832 included in software layer 830 may include software used by at least portions of node C.R.s 816(1)-816(N), grouped computing resources 814, and/or distributed file system 838 of framework layer 820. One or more types of software may include, but are not limited to, Internet web page search software, e-mail virus scan software, database software, and streaming video content software.
  • In at least one embodiment, application(s) 842 included in application layer 840 may include one or more types of applications used by at least portions of node C.R.s 816(1)-816(N), grouped computing resources 814, and/or distributed file system 838 of framework layer 820. One or more types of applications may include, but are not limited to, any number of a genomics application, a cognitive compute, and a machine learning application, including training or inferencing software, machine learning framework software (e.g., PyTorch, TensorFlow, Caffe, etc.) or other machine learning applications used in conjunction with one or more embodiments.
  • In at least one embodiment, any of configuration manager 834, resource manager 836, and resource orchestrator 812 may implement any number and type of self-modifying actions based on any amount and type of data acquired in any technically feasible fashion. In at least one embodiment, self-modifying actions may relieve a data center operator of data center 800 from making possibly bad configuration decisions and possibly avoiding underutilized and/or poor performing portions of a data center.
  • In at least one embodiment, data center 800 may include tools, services, software or other resources to train one or more machine learning models or predict or infer information using one or more machine learning models according to one or more embodiments described herein. For example, in at least one embodiment, a machine learning model may be trained by calculating weight parameters according to a neural network architecture using software and computing resources described above with respect to data center 800. In at least one embodiment, trained machine learning models corresponding to one or more neural networks may be used to infer or predict information using resources described above with respect to data center 800 by using weight parameters calculated through one or more training techniques described herein.
  • In at least one embodiment, data center 800 may use CPUs, application-specific integrated circuits (ASICs), GPUs, FPGAs, or other hardware to perform training and/or inferencing using above-described resources. Moreover, one or more software and/or hardware resources described above may be configured as a service to allow users to train or perform inferencing of information, such as image recognition, speech recognition, or other artificial intelligence services.
  • Inference and/or training logic 615 are used to perform inferencing and/or training operations associated with one or more embodiments. In at least one embodiment, inference and/or training logic 615 may be used in system FIG. 8 for inferencing or predicting operations based, at least in part, on weight parameters calculated using neural network training operations, neural network functions and/or architectures, or neural network use cases described herein.
  • As described herein, a method, computer readable medium, and system are disclosed for executing a deep learning model using tagged data. In accordance with FIGS. 1-5C, an embodiment may provide a deep learning model usable for performing inferencing operations and for providing inferenced data, where the deep learning model is stored (partially or wholly) in one or both of data storage 601 and 605 in inference and/or training logic 615 as depicted in FIGS. 6A and 6B. Training and deployment of the deep learning model may be performed as depicted in FIG. 7 and described herein. Distribution of the deep learning model may be performed using one or more servers in a data center 800 as depicted in FIG. 8 and described herein.

Claims (20)

What is claimed is:
1. A method, comprising:
determining one or more required inputs for a deep learning model;
retrieving one or more portions of data from a software application that satisfy the one or more required inputs for the deep learning model using metadata associated with the data of the software application, the metadata comprising an indicia of locations in a memory device in which the one or more portions of the data is stored; and
providing the retrieved one or more portions of the data to the deep learning model for processing to generate inferenced data.
2. The method of claim 1, wherein the one or more required inputs for the deep learning model are determined from a configuration file for the deep learning model.
3. The method of claim 1, wherein the one or more required inputs for the deep learning model are indicated using tags.
4. The method of claim 1, wherein the data of the software application is data stored in the memory device by the software application.
5. The method of claim 1, further comprising:
receiving the metadata for the data of the software application; and
storing the metadata in association with the data of the software application.
6. The method of claim 1, wherein the metadata is associated with the data of the software application by being inserted in a portion of code of the software application that defines the locations in the memory device in which the data is stored.
7. The method of claim 6, wherein the locations in the memory device include one or more data structures and one or more buffers.
8. The method of claim 1, wherein the metadata is associated with the data of the software application by being stored in a reference table that maps each metadata to a corresponding portion of the data or location in the memory device in which the corresponding portion of the data is stored.
9. The method of claim 1, wherein the software application loads the data in the memory device and applies the metadata to the data in the memory device.
10. The method of claim 1, wherein retrieving the one or more portions of the data from the software application that satisfy the one or more required inputs for the deep learning model includes:
matching an identifier of each of the one or more required inputs for the deep learning model to metadata defined for certain data of the software application, and
retrieving the certain data of the software application.
11. The method of claim 1, wherein providing the retrieved one or more portions of the data to the deep learning model includes using the one or more portions of the data to train the deep learning model.
12. The method of claim 1, wherein the determining is performed responsive to a call by the software application to execute the deep learning model.
13. The method of claim 1, wherein the determining, retrieving, and providing is performed by a deep learning model executor that interfaces the software application.
14. The method of claim 13, wherein the deep learning model executor, the software application, and the deep learning model are installed on a client system.
15. The method of claim 1, further comprising:
receiving the inferenced data as output of the deep learning model; and
providing the inferenced data to the software application.
16. The method of claim 1, further comprising:
updating the deep learning model, wherein the updated deep learning model is configured to be performed using one or more new inputs different from the determined one or more required inputs for the deep learning model;
determining the one or more new inputs for the updated deep learning model;
using the metadata associated with the data of the software application to retrieve one or more new portions of the data from the software application that satisfy the one or more new inputs required for the deep learning model; and
providing the retrieved one or more new portions of the data to the updated deep learning model to perform inferencing operations to generate new inferenced data.
17. The method of claim 16, further comprising:
receiving the new inferenced data as output of the updated deep learning model; and
providing the new inferenced data to the software application.
18. A non-transitory computer-readable medium storing computer instructions that, when executed by one or more processors, cause the one or more processors to perform a method comprising:
determining one or more required inputs for a deep learning model;
retrieving one or more portions of data from a software application that satisfy the one or more required inputs for the deep learning model using metadata associated with the data of the software application; and
providing the retrieved one or more portions of the data to the deep learning model for performing inferencing operations to generate inferenced data.
19. A system, comprising:
a memory storing instructions; and
one or more processors that execute the instructions to perform a method comprising:
determining one or more required inputs for a deep learning model;
retrieving one or more portions of data from a software application that satisfy the one or more required inputs for the deep learning model using metadata associated with the data of the software application; and
providing the retrieved one or more portions of the data to the deep learning model for performing inferencing operations to generate inferenced data.
20. The system of claim 19, wherein the memory further stores the metadata and associated data of the software application.
US16/537,242 2018-08-10 2019-08-09 Deep learning model execution using tagged data Pending US20200050935A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US16/537,242 US20200050935A1 (en) 2018-08-10 2019-08-09 Deep learning model execution using tagged data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US201862717735P 2018-08-10 2018-08-10
US16/537,242 US20200050935A1 (en) 2018-08-10 2019-08-09 Deep learning model execution using tagged data

Publications (1)

Publication Number Publication Date
US20200050935A1 true US20200050935A1 (en) 2020-02-13

Family

ID=69405876

Family Applications (3)

Application Number Title Priority Date Filing Date
US16/537,242 Pending US20200050935A1 (en) 2018-08-10 2019-08-09 Deep learning model execution using tagged data
US16/537,215 Pending US20200050443A1 (en) 2018-08-10 2019-08-09 Optimization and update system for deep learning models
US16/537,255 Pending US20200050936A1 (en) 2018-08-10 2019-08-09 Automatic dataset creation using software tags

Family Applications After (2)

Application Number Title Priority Date Filing Date
US16/537,215 Pending US20200050443A1 (en) 2018-08-10 2019-08-09 Optimization and update system for deep learning models
US16/537,255 Pending US20200050936A1 (en) 2018-08-10 2019-08-09 Automatic dataset creation using software tags

Country Status (1)

Country Link
US (3) US20200050935A1 (en)

Families Citing this family (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11501200B2 (en) * 2016-07-02 2022-11-15 Hcl Technologies Limited Generate alerts while monitoring a machine learning model in real time
US10713769B2 (en) * 2018-06-05 2020-07-14 Kla-Tencor Corp. Active learning for defect classifier training
CN111126613A (en) * 2018-10-31 2020-05-08 EMC IP Holding Company LLC Method, apparatus and computer program product for deep learning
JP6699764B1 (en) * 2019-01-16 2020-05-27 Fujitsu General Limited Air conditioning system
US11385884B2 (en) * 2019-04-29 2022-07-12 Harman International Industries, Incorporated Assessing cognitive reaction to over-the-air updates
JP7032366B2 (en) * 2019-10-09 2022-03-08 Hitachi, Ltd. Operations support system and method
US11200722B2 (en) * 2019-12-20 2021-12-14 Intel Corporation Method and apparatus for viewport shifting of non-real time 3D applications
CN113742197B (en) * 2020-05-27 2023-04-14 Douyin Vision Co., Ltd. Model management device, method, data management device, method and system
CN112732297B (en) * 2020-12-31 2022-09-27 Ping An Technology (Shenzhen) Co., Ltd. Method and device for updating federated learning model, electronic equipment and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10318882B2 (en) * 2014-09-11 2019-06-11 Amazon Technologies, Inc. Optimized training of linear machine learning models
US11481652B2 (en) * 2015-06-23 2022-10-25 Gregory Knox System and method for recommendations in ubiquitous computing environments
US9589210B1 (en) * 2015-08-26 2017-03-07 Digitalglobe, Inc. Broad area geospatial object detection using autogenerated deep learning models
US11216673B2 (en) * 2017-04-04 2022-01-04 Robert Bosch Gmbh Direct vehicle detection as 3D bounding boxes using neural network image processing
US11410024B2 (en) * 2017-04-28 2022-08-09 Intel Corporation Tool for facilitating efficiency in machine learning
US10225330B2 (en) * 2017-07-28 2019-03-05 Kong Inc. Auto-documentation for application program interfaces based on network requests and responses
US11475291B2 (en) * 2017-12-27 2022-10-18 X Development Llc Sharing learned information among robots
US11941719B2 (en) * 2018-01-23 2024-03-26 Nvidia Corporation Learning robotic tasks using one or more neural networks
US10754912B2 (en) * 2018-03-12 2020-08-25 Microsoft Technology Licensing, Llc Machine learning model to preload search results
US10713543B1 (en) * 2018-06-13 2020-07-14 Electronic Arts Inc. Enhanced training of machine learning systems based on automatically generated realistic gameplay information

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8364613B1 (en) * 2011-07-14 2013-01-29 Google Inc. Hosting predictive models
US20170192957A1 (en) * 2015-12-30 2017-07-06 International Business Machines Corporation Methods and analytics systems having an ontology-guided graphical user interface for analytics models
US20190102098A1 (en) * 2017-09-29 2019-04-04 Coupa Software Incorporated Configurable machine learning systems through graphical user interfaces
US20200125941A1 (en) * 2017-10-19 2020-04-23 Pure Storage, Inc. Artificial intelligence and machine learning infrastructure
US20190155633A1 (en) * 2017-11-22 2019-05-23 Amazon Technologies, Inc. Packaging and deploying algorithms for flexible machine learning
US20190180189A1 (en) * 2017-12-11 2019-06-13 Sap Se Client synchronization for offline execution of neural networks
US20190042955A1 (en) * 2017-12-28 2019-02-07 Joe Cahill Distributed and contextualized artificial intelligence inference service
US20210209468A1 (en) * 2018-06-05 2021-07-08 Mitsubishi Electric Corporation Learning device, inference device, method, and program

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11061791B2 (en) * 2019-01-07 2021-07-13 International Business Machines Corporation Providing insight of continuous delivery pipeline using machine learning
US11061790B2 (en) * 2019-01-07 2021-07-13 International Business Machines Corporation Providing insight of continuous delivery pipeline using machine learning
US11106434B1 (en) * 2020-07-31 2021-08-31 EMC IP Holding Company LLC Method, device, and computer program product for generating program code
CN112527321A (en) * 2020-12-29 2021-03-19 Ping An Bank Co., Ltd. Deep learning-based application online method, system, device and medium

Also Published As

Publication number Publication date
US20200050936A1 (en) 2020-02-13
US20200050443A1 (en) 2020-02-13

Similar Documents

Publication Publication Date Title
US20200050935A1 (en) Deep learning model execution using tagged data
US11816790B2 (en) Unsupervised learning of scene structure for synthetic data generation
JP7157154B2 (en) Neural Architecture Search Using Performance Prediction Neural Networks
EP3711000B1 (en) Regularized neural network architecture search
US11417011B2 (en) 3D human body pose estimation using a model trained from unlabeled multi-view data
US20190354868A1 (en) Multi-task neural networks with task-specific paths
US11375176B2 (en) Few-shot viewpoint estimation
US20210142168A1 (en) Methods and apparatuses for training neural networks
US20200410365A1 (en) Unsupervised neural network training using learned optimizers
US20210117786A1 (en) Neural networks for scalable continual learning in domains with sequentially learned tasks
CN110476173B (en) Hierarchical device placement with reinforcement learning
US11379718B2 (en) Ground truth quality for machine learning models
US11360927B1 (en) Architecture for predicting network access probability of data files accessible over a computer network
US20220269548A1 (en) Profiling and performance monitoring of distributed computational pipelines
US11544498B2 (en) Training neural networks using consistency measures
KR20220047228A (en) Method and apparatus for generating image classification model, electronic device, storage medium, computer program, roadside device and cloud control platform
CN116594748A (en) Model customization processing method, device, equipment and medium for task
US20230394781A1 (en) Global context vision transformer
US20240070874A1 (en) Camera and articulated object motion estimation from video
US11816185B1 (en) Multi-view image analysis using neural networks
US20220383073A1 (en) Domain adaptation using domain-adversarial learning in synthetic data systems and applications
US20220156585A1 (en) Training point cloud processing neural networks using pseudo-element - based data augmentation
WO2022251717A1 (en) Processing images using mixture of experts
WO2021208808A1 (en) Cooperative neural networks with spatial containment constraints
US20240127075A1 (en) Synthetic dataset generator

Legal Events

Date Code Title Description
AS Assignment

Owner name: NVIDIA CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:EDELSTEN, ANDREW;HUANG, JEN-HSUN;SKALJAK, BOJAN;REEL/FRAME:050269/0439

Effective date: 20190808

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: RESPONSE TO NON-FINAL OFFICE ACTION ENTERED AND FORWARDED TO EXAMINER

STPP Information on status: patent application and granting procedure in general

Free format text: FINAL REJECTION MAILED

STPP Information on status: patent application and granting procedure in general

Free format text: NON FINAL ACTION MAILED

STCV Information on status: appeal procedure

Free format text: NOTICE OF APPEAL FILED

STPP Information on status: patent application and granting procedure in general

Free format text: DOCKETED NEW CASE - READY FOR EXAMINATION