WO2020102887A1 - System and method for automated design space determination for deep neural networks - Google Patents
Info
- Publication number
- WO2020102887A1 (PCT/CA2019/051642)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- architecture
- network
- deep neural
- design space
- constraints
- Prior art date
- 2018-11-19
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/088—Non-supervised learning, e.g. competitive learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/211—Selection of the most significant subset of features
- G06F18/2115—Selection of the most significant subset of features by evaluating different subsets according to an optimisation criterion, e.g. class separability, forward selection or backward elimination
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/004—Artificial life, i.e. computing arrangements simulating life
- G06N3/006—Artificial life, i.e. computing arrangements simulating life based on simulated virtual individual or collective life forms, e.g. social simulations or particle swarm optimisation [PSO]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/06—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
- G06N3/063—Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons using electronic means
Abstract
There is provided a system and method of automated design space determination for deep neural networks. The method includes obtaining a teacher model and one or more constraints associated with an application and/or target device or process used in the application configured to utilize a deep neural network; learning an optimal architecture using the teacher model, constraints, a training data set, and a validation data set; and deploying the optimal architecture on the target device or process for use in the application.
Description
SYSTEM AND METHOD FOR AUTOMATED DESIGN SPACE DETERMINATION FOR
DEEP NEURAL NETWORKS
CROSS-REFERENCE TO RELATED APPLICATION(S)
[0001] This application claims priority to U.S. Provisional Patent Application No.
62/769,403 filed on November 19, 2018, the contents of which are incorporated herein by reference.
TECHNICAL FIELD
[0002] The following relates to systems and methods for automated design space determination for deep neural networks, for example by enabling design space exploration.
BACKGROUND
[0003] The emergence of deep neural networks (DNNs) in recent years has enabled ground-breaking abilities and applications for modern intelligent systems. State-of-the-art DNNs have been found to achieve high accuracy on tasks in computer vision and natural language processing, even outperforming humans on object recognition tasks. Concurrently, the increasing complexity and sophistication of DNNs is predicated on significant power consumption, model size and computing resources. These factors have been found to limit deep learning’s performance in real-time applications, in large-scale systems, and on low-power devices. Modern DNN models require as many as billions of expensive floating-point operations for a single input classification. This problem is exacerbated in high-throughput systems that perform millions of inference computations per second, requiring large and expensive Graphics Processing Units (GPUs). Furthermore, many low-end and cost-effective devices do not have the resources to execute DNN inference, causing users to sacrifice privacy and offload processing to the cloud. Moreover, tasks with strict latency constraints, such as automotive and mobility applications, often require that inference be performed in a matter of milliseconds, often with limited hardware.
[0004] To address these problems, there has been a significant push in academia and industry to make deep learning models more resource-efficient and applicable for real-time, on-device applications. Many techniques have been proposed for model optimization and inference acceleration, as well as hardware implementations of DNNs.
[0005] Prior solutions include a variety of core optimization techniques for compressing, accelerating and mapping DNNs on various hardware platforms. The main approach to model optimization is by approximating the original DNN. Techniques include the removal of redundant connections, nodes, filters and layers in the network, also referred to as “pruning”.
An alternative approach to optimization is knowledge distillation, whereby a “teacher” network is adapted to produce a smaller, “student” network. However, these techniques are generally implemented manually by a domain expert, relying on heuristics and intensive feature engineering. Additionally, these approaches are often found to sacrifice too much accuracy or to limit network performance on complex and large data sets.
[0006] At present, two fundamental challenges exist with current optimization techniques, namely: 1) hand-crafted features and domain expertise are required for model optimization, and 2) time-consuming fine-tuning is often necessary to maintain accuracy.
[0007] There exists a need for scalable, automated processes for model optimization on diverse DNN architectures and hardware back-ends. Generally, it is found that the current capacity for model optimization is outpaced by the rapid development of new DNNs and of disparate hardware platforms that are applicable to, yet largely inefficient for, deep learning workloads.
[0008] It is an object of the following to address at least one of the above-mentioned challenges.
SUMMARY
[0009] It is recognized that a general approach that is agnostic to both the architecture and target hardware(s) is needed to optimize DNNs, making them faster, smaller and energy-efficient for use in daily life. The following relates to deep learning algorithms, for example, deep neural networks. A method for automated optimization, specifically design space exploration, is described. The following relates to the design of a learning process to leverage trade-offs in different deep neural network designs using computation constraints as inputs. The learning process trains an optimizer agent to adapt large, initial networks into smaller networks of similar performance that satisfy target constraints in a data-driven way. By design, the learning process and agents are agnostic to both the network architecture and the target hardware platform.
[0010] In one aspect, there is provided a method of automated design space exploration for deep neural networks, the method comprising: obtaining a teacher model and one or more constraints associated with an application and/or target device or process used in the application configured to utilize a deep neural network; learning an optimal student architecture using the teacher model architecture, constraints, a training data set, and a validation data set; and deploying the optimal architecture on the target device or process for use in the application.
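By way of illustration only, the constraint inputs to the claimed method could be encoded as sketched below. The `Constraints` record, its fields, and the `satisfies` check are hypothetical names assumed for this sketch, not an API defined in this disclosure.

```python
# Hypothetical encoding of the constraints 19 supplied to the method; the
# record fields and names are illustrative assumptions, not the patent's API.
from dataclasses import dataclass

@dataclass
class Constraints:
    max_model_size_mb: float   # target hardware memory budget
    max_latency_ms: float      # required per-inference response time
    min_accuracy: float        # acceptable accuracy floor

def satisfies(metrics: dict, c: Constraints) -> bool:
    """A learned student architecture is deployable only if it meets all constraints."""
    return (metrics["size_mb"] <= c.max_model_size_mb
            and metrics["latency_ms"] <= c.max_latency_ms
            and metrics["accuracy"] >= c.min_accuracy)

# Example: a candidate student measured on the validation data set.
print(satisfies({"size_mb": 48.0, "latency_ms": 7.5, "accuracy": 0.91},
                Constraints(max_model_size_mb=60.0, max_latency_ms=10.0,
                            min_accuracy=0.90)))  # True -> deploy on target
```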
[0011] In another aspect, there is provided a computer readable medium comprising computer executable instructions for automated design space exploration for deep neural networks, the computer executable instructions comprising instructions for performing the above method.
[0012] In yet another aspect, there is provided a deep neural network optimization engine configured to perform automated design space exploration for deep neural networks, the engine comprising a processor and memory, the memory comprising computer executable instructions for performing the above method.
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] One or more embodiments will now be described with reference to the appended drawings wherein:
[0014] FIG. 1 is a schematic diagram of a system for optimizing a DNN for use in a target device or process used in an artificial intelligence (AI) application;
[0015] FIG. 2 is a block diagram of an example of a DNN optimization engine;
[0016] FIG. 3 is a graph comparing energy consumption and computation costs for various example network designs;
[0017] FIG. 4 is a flow chart illustrating a process for optimizing a teacher model DNN for deployment on a target device or process; and
[0018] FIG. 5 is a flow chart illustrating operations performed in learning an optimal architecture.
[0019] DETAILED DESCRIPTION
[0020] AI should be accessible and beneficial to various applications in everyday life. With the emergence of deep learning on embedded and mobile devices, DNN application designers are faced with stringent power, memory and cost requirements which often lead to inefficient solutions, possibly preventing people from moving to these devices. The system described below can be used to make deep learning applicable, affordable and scalable by bridging the gap between DNNs and hardware back-ends. To do so, a scalable, DNN-agnostic engine is provided, which can enable a platform-aware optimization. The engine targets information inefficiency in the implementation of DNNs, making them applicable for low-end devices and more efficient in data centers. To provide such functionality, the engine:
[0021] - is configured to be architecture independent, allowing the engine to support different DNN architectures such as convolutional neural networks (CNNs), recurrent neural networks (RNNs), etc.;
[0022] - is configured to be framework agnostic, enabling developers to readily apply the engine to a project without additional engineering overhead;
[0023] - is configured to be hardware agnostic, helping end-users to readily change the back-end hardware or port a model from one hardware to another; and
[0024] - provides design space exploration that automatically satisfies different constraints such as accuracy, speed, power, target hardware memory, etc., provided by the user, to find an optimal solution.
[0025] The system described herein parallels neuroscience and years of academic research, which has found that over time, human brains reduce their neural connections for memory and energy-efficiency. That is, it is found that experience, or wisdom, is about thinking efficiently. The system described herein emulates this process for artificial neural networks.
[0026] One of the core challenges with model optimization for DNN inference is evaluating which model architecture is best suited for a given application and environment. The engine described herein uses an Al-driven optimizer to overcome the drawbacks of manual model compression. Based on computation constraints, a software agent selectively optimizes the model. Trade-offs in network design are leveraged to effectively compress a DNN model. The engine combines powerful optimizations and design space exploration in one intelligent framework, maximizing the efficiency, scalability and opportunities for deep learning systems.
[0027] Turning now to the figures, FIG. 1 illustrates a DNN optimization engine 10 which is configured, as described below, to take an initial DNN 12 and generate or otherwise determine an optimized DNN 14 to be used by or deployed upon a target device or process 16, the “target 16” for brevity. The target 16 is used in or purposed for an AI application 18 that uses the optimized DNN 14. The AI application 18 has one or more application constraints 19 that dictate how the optimized DNN 14 is generated or chosen.
[0028] FIG. 2 illustrates an example of an architecture for the DNN optimization engine 10. The engine 10 in this example configuration includes a model converter 22 which can interface with a number of frameworks 20, an intermediate representation model 24, a design space exploration module 26, a quantizer 28, and mapping algorithms 30 that can
include algorithms for both heterogeneous hardware 32 and homogeneous hardware 34.
The engine 10 also interfaces with a target hardware (HW) platform 16. The design space exploration module 26, quantizer 28, and mapping algorithms 30 adopt, apply, consider, or otherwise take into account the constraints 19. In this example, the constraints include accuracy, power, cost, supported precision, speed, among others that are possible as shown in dashed lines. FIG. 2 illustrates a framework with maximum re-use in mind, so that new AI frameworks 20, new DNN architectures and new hardware architectures can be easily added to a platform utilizing the engine 10. The engine 10 addresses inference optimization of DNNs by leveraging state-of-the-art algorithms and methodologies to make DNNs applicable for any device 16. This provides an end-to-end framework to optimize DNNs from different deep learning framework front-ends down to low-level machine code for multiple hardware back-ends.
[0029] For the model converter 22, the engine 10 is configured to support multiple frameworks 20 (e.g. TensorFlow, PyTorch, etc.) and DNN architectures (e.g. CNN, RNN, etc.), to facilitate applying the engine’s capabilities on different projects with different AI frameworks 20. To do so, two layers are included, namely: a) the model converter 22, which contains each AI framework’s specifications and a DNN parser to produce the intermediate representation model (IRM) 24 from the original model; and b) the IRM 24, which represents all DNN models in a standard format.
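As a rough sketch of this two-layer idea (not the actual converter), each framework-specific parser could emit the same standard per-layer record; the `IRMLayer` schema and the parser names below are assumptions for illustration.

```python
# Toy sketch of the model converter 22 / IRM 24 split; the IRMLayer schema
# and the per-framework parsers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class IRMLayer:
    op: str       # canonical op name, e.g. "conv2d", "dense"
    attrs: dict   # shapes, strides, activations, ...

def convert(model_layers, parser) -> list:
    """Model converter: apply a framework-specific parser to every layer."""
    return [parser(layer) for layer in model_layers]

# One hypothetical parser per supported framework, all targeting the same IRM:
def tf_like_parser(layer: dict) -> IRMLayer:
    return IRMLayer(op=layer["class_name"].lower(), attrs=layer["config"])

def torch_like_parser(layer: dict) -> IRMLayer:
    return IRMLayer(op=layer["type"].lower(), attrs=layer["kwargs"])
```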
[0030] The engine 10 also provides content aware optimization, by providing a two-level intermediate layer composed of: a) the design space exploration module 26, which is an intermediate layer for finding a smaller architecture with similar performance as the given model to reduce memory footprint and computation (described in greater detail below); and b) the quantizer 28, which is a low-level layer for quantizing the network to gain further computation speedup.
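For context, a minimal sketch of the kind of low-level quantization such a layer could perform follows. Uniform affine 8-bit quantization is shown only as a generic example; the disclosure does not specify the scheme used by the quantizer 28.

```python
# Generic uniform affine int8 quantization, shown only to illustrate the
# quantizer 28; the actual scheme used by the engine is not specified here.
import numpy as np

def quantize_int8(w: np.ndarray):
    span = float(w.max() - w.min())
    scale = span / 255.0 if span > 0 else 1.0
    zero_point = round(-float(w.min()) / scale)
    q = np.clip(np.round(w / scale + zero_point), 0, 255).astype(np.uint8)
    return q, scale, zero_point

def dequantize(q: np.ndarray, scale: float, zero_point: int) -> np.ndarray:
    return (q.astype(np.float32) - zero_point) * scale  # approximate weights
```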
[0031] Regarding the design space exploration module 26, DNNs are heavily dependent on the design of hyper-parameters like the number of hidden layers, nodes per layer and activation functions, which have traditionally been optimized manually. Moreover, hardware constraints 19 such as memory and power should be considered to optimize the model effectively. Given that design spaces can easily exceed thousands of solutions, it can be intractable to find a near-optimal solution manually.
[0032] Using the design space exploration module 26, the engine 10 provides an automated multi-objective design space exploration with respect to defined constraints, where a reinforcement learning based agent explores the design space for a smaller network
(student) with performance similar to that of the given network (teacher) trained on the same task. The agent generates new networks by network transformation operations such as altering a layer (e.g. number of filters), altering the whole network (e.g. adding or removing a layer), etc. This agent can efficiently navigate the design space to yield an architecture which satisfies all the constraints for the target hardware. This module aims to reduce the DNN memory footprint and computation complexity, which are important for low-end devices with limited available memory.
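A toy sketch of the transformation actions named above, over a hypothetical layer-list representation, follows; the layer encoding and the action signatures are assumptions based on the text, not the engine's internal format.

```python
# Illustrative network transformation actions over a toy architecture list;
# the layer encoding and the action set are assumptions based on the text.
def alter_layer(net: list, i: int, factor: float) -> list:
    net = [dict(l) for l in net]                      # copy, don't mutate
    net[i]["filters"] = max(1, int(net[i]["filters"] * factor))
    return net

def add_layer(net: list, i: int, filters: int) -> list:
    return net[:i] + [{"op": "conv2d", "filters": filters}] + net[i:]

def remove_layer(net: list, i: int) -> list:
    return net[:i] + net[i + 1:]

teacher = [{"op": "conv2d", "filters": 64}, {"op": "conv2d", "filters": 128}]
student = alter_layer(teacher, 1, factor=0.5)         # shrink: 128 -> 64 filters
```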
[0033] It is also recognized that a major challenge lies in enabling support for multiple hardware back-ends while keeping compute, memory and energy footprints at their lowest. Content aware optimization alone is not considered to be enough to solve the challenge of supporting different hardware back-ends, because primitive operations like convolution or matrix multiplication may be mapped and optimized in very different ways for each hardware back-end. These hardware-specific optimizations can vary drastically in terms of memory layout, parallelization and threading patterns, caching access patterns and choice of hardware primitives.
[0034] The platform aware optimization layer that includes the mapping algorithms 30 is configured to address this challenge. This layer contains standard transformation primitives commonly found in commodity hardware such as CPUs, GPUs, FPGAs, etc. This additional layer provides a toolset to optimize DNNs for FPGAs and automatically map them onto FPGAs for model inference. This automated toolset can save design time significantly. Importantly, many homogeneous and heterogeneous multicore architectures have been introduced in recent years to continually improve system performance. Compared to homogeneous multicore systems, heterogeneous ones offer more computation power and more efficient energy consumption because of the utilization of specialized cores for specific functions, and each computational unit provides distinct resource efficiencies when executing different inference phases of deep models (e.g. a binary network on an FPGA, the full-precision part on a GPU/DSP, regular arithmetic operations on a CPU, etc.). The engine 10 provides optimization primitives targeted at heterogeneous hardware 32, by automatically splitting the DNN’s computation across different hardware cores to maximize energy-efficiency and minimize execution time on the target hardware 16.
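A toy dispatch rule echoing the example above is sketched below (binary parts on FPGA, full-precision heavy kernels on GPU/DSP, regular arithmetic on CPU); the rule itself is an illustrative assumption, not the engine's mapping algorithm.

```python
# Toy sketch of splitting a DNN's computation across heterogeneous cores;
# the dispatch rule is an illustrative assumption mirroring the text.
def assign_device(layer: dict) -> str:
    if layer.get("precision") == "binary":
        return "fpga"
    if layer["op"] in ("conv2d", "matmul"):
        return "gpu"          # or "dsp", depending on the target platform
    return "cpu"              # regular arithmetic operations

plan = [(l["op"], assign_device(l)) for l in (
    {"op": "conv2d", "precision": "binary"},
    {"op": "conv2d"},
    {"op": "add"},
)]  # -> [('conv2d', 'fpga'), ('conv2d', 'gpu'), ('add', 'cpu')]
```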
[0035] Using platform aware optimization techniques in combination with content aware optimization techniques achieves significant performance cost reduction across different hardware platforms while delivering the same inference accuracy compared to the state-of- the-art deep learning approaches.
[0036] For example, assume an application that desires to run a CNN on low-end hardware with 60 MB of memory. The model size is 450 MB and it needs to meet a 10 ms critical response time for each inference operation. The model is 95% accurate; however, 90% accuracy is also acceptable. CNN designers usually use GPUs to train and run their models, but they would now need to deal with memory and computation power limitations, a new hardware architecture, and satisfying all constraints (such as memory and accuracy) at the same time. It may be infeasible to find a solution for the target hardware manually, or doing so may require tremendous engineering effort. In contrast, using the engine 10, and specifying the constraints 19, a user can effectively produce the optimized model by finding a feasible solution, reducing time to market and engineering effort, as illustrated in the chart shown in FIG. 3.
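Expressed with the hypothetical `Constraints` record sketched earlier, this scenario could be encoded as follows.

```python
# The example scenario, encoded with the hypothetical Constraints record
# sketched earlier: fit in 60 MB, answer within 10 ms, keep >= 90% accuracy.
scenario = Constraints(max_model_size_mb=60.0,
                       max_latency_ms=10.0,
                       min_accuracy=0.90)
# The 450 MB, 95%-accurate teacher fails the size check; the engine searches
# for a student architecture that passes all three constraints.
```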
[0037] Referring now to FIG. 4, the engine 10 provides a framework for multi-objective design space exploration for DNNs with respect to the target hardware 16, where a reinforcement learning-based agent 50 (see also FIG. 5) explores the design space for a smaller network (student) which satisfies all the constraints with performance similar to that of the given network (teacher model 40) trained on the same task. The process includes three steps as illustrated in FIG. 4. In the first step 42, the agent 50 generates new networks by network transformation operations such as altering a layer (e.g. number of filters), altering the whole network (e.g. adding or removing a layer), etc., to produce an optimal model architecture. Then, in step 44, a knowledge distillation method is applied, which is a training method where the smaller student architecture receives information from the larger teacher network. This method can be used once to fully train the optimal architecture produced by step 42 to recover the accuracy on the training dataset 52. Using the knowledge distillation method once at step 44, and using a performance estimator 62 (see also FIG. 5), can make the optimization process fast and scalable for real industry use cases. Ultimately, the trained optimal architecture will be deployed on the target platform 16 at step 46.
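Step 44 could use a standard soft-target distillation loss such as the PyTorch sketch below (a common formulation following Hinton et al.); the disclosure does not fix the exact loss, and the temperature and weighting are illustrative assumptions.

```python
# Standard soft-target knowledge distillation loss, shown as one common
# choice for step 44; temperature T and weight alpha are assumptions.
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.9):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)   # teacher -> student term
    hard = F.cross_entropy(student_logits, labels)      # ground-truth term
    return alpha * soft + (1 - alpha) * hard
```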
[0038] The engine 10 provides for automated optimization of deep learning algorithms. The engine 10 also employs an efficient process for design space exploration 26 of DNNs that can satisfy target computation constraints 19 such as speed, model size, accuracy, power consumption, etc. There is provided a learning process for training optimizer agents that automatically explore design trade-offs starting with large, initial DNNs to produce compact DNN designs in a data-driven way. Once an engineer has trained an initial deep neural network on a training data set to achieve a target accuracy for a task, they would then need to satisfy other constraints for the real-world production environment and computing hardware. The proposed process makes this possible by automatically producing an
optimized DNN model suitable for the production environment and hardware 16. Referring to FIG. 5, the agent 50 receives as inputs an initial DNN or teacher model 40, a training data set 52, and target constraints 19. This can be done using the existing deep learning frameworks, without the need to introduce a new framework and the associated engineering overhead. The agent 50 then generates a new architecture from the initial DNN based on the target constraints 19. The agent 50 receives a reward based on the performance of the adapted model measured on the training data set 52, guiding the process towards a feasible design. The learning process can converge on a feasible design using minimal computing resources, time and human expert interaction. This process overcomes the disadvantages of manual optimization, which is often limited to certain DNN architectures, applications and hardware platforms, and which requires domain expertise. The process is a universal method to leverage trade-offs in different DNN designs and to ensure that target computation constraints are met. Furthermore, the process benefits end-users with multiple DNNs in production, each requiring updates and re-training at various intervals, by providing a fast, lightweight and flexible method for designing new and compact DNNs. This approach advances current approaches by enabling resource-efficient DNNs that economize data centers, are available for use on low-end, affordable hardware, and are accessible to a wider audience aiming to use deep learning algorithms in daily environments.
[0039] There are many well-designed architectures, designed by humans or by automatic architecture designing methods, that have achieved good performance at the target task. Under restricted computational resource limits, instead of totally neglecting these existing networks and exploring the architecture space from scratch (which is not guaranteed to result in better-performing architectures), a more economical and efficient alternative could be exploring the architecture space based on these successful networks and reusing their knowledge.
[0040] The engine 10 can in part be considered a reinforcement learning agent which learns a policy (optimal optimization strategy) which is applied to the input network (teacher) and produces a smaller network (student), with similar performance to the input network.
One can formulate the optimization process of finding a smaller network as a sequential decision-making process, which can be modeled as a Markov Decision Process (MDP). The task of the reinforcement learning agent is to learn an optimal policy to maximize the expected total reward.
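In standard reinforcement learning notation (the disclosure does not write the objective out explicitly; this is the conventional form), the agent seeks:

```latex
% Conventional RL objective for the MDP formulation above (notation assumed):
% r_t is the reward at step t and \gamma a discount factor.
\[
  \pi^{*} \;=\; \arg\max_{\pi}\; \mathbb{E}_{\pi}\!\Big[\sum_{t=0}^{T} \gamma^{t}\, r_{t}\Big],
  \qquad 0 < \gamma \le 1 .
\]
```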
[0041] The reward function 70 should be generic and independent of model-specific hyper-parameters and the dataset, and should reflect the problem constraints 19 (e.g. hardware limitations, resource budget, etc.) to discriminate between good and bad student architectures by encouraging the ones which meet the constraints.
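One plausible shape for such a reward, sketched under these stated requirements, is shown below; the functional form and the penalty value are assumptions consistent with the text, not the disclosed terms.

```python
# A hedged sketch of the reward function 70: constraint violations earn a
# negative reward, feasible students earn accuracy plus a compression bonus.
# The functional form is an assumption consistent with the text.
def reward(accuracy: float, size_mb: float, c: "Constraints") -> float:
    if size_mb > c.max_model_size_mb or accuracy < c.min_accuracy:
        return -1.0                                   # discourage bad students
    compression = 1.0 - size_mb / c.max_model_size_mb
    return accuracy + compression                     # smaller and accurate
```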
[0042] As noted above, a reinforcement learning process is proposed as shown in FIG. 4. An initial teacher model 40 is provided after training a given deep neural network architecture on a training data set, such as images or text. Once the teacher model 40 has achieved an acceptable accuracy on a given task, like recognizing objects in images, the reinforcement learning processes are applied to perform the optimization. The reinforcement learning agent 50 receives the teacher model 40, training data set 52, and a set of objective constraints, such as model size (in bytes), inference speed and number of operations. The agent 50 is tasked with learning a new architecture that meets these constraints to ultimately be deployed for inferencing on the target hardware platform 16.
[0043] In step 42, shown in FIG. 5, the reinforcement learning policy 51 repeatedly produces a set of transformation actions to generate new student networks at step 60 by shrinking or expanding the teacher network, by altering the layers’ configuration parameters (e.g. number of filters) and altering the whole network configuration (e.g. adding or removing a layer). As shown in FIG. 5, the agent 50 observes a state that is generated through applying steps 60-70, as follows. The new student architecture(s) spawned at step 60 is/are evaluated at step 62 to determine whether the network is promising. If so, the network is evaluated at step 68, and the reward function 70 is applied. If the spawned network is not promising, a negative reward is issued from the results of step 62. The performance estimator 62 is a key feature of the optimizer engine which makes it scalable for industry use cases, by estimating the performance of the spawned network in a fraction of a second.
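Such an estimator could, for instance, score candidates analytically rather than by training them; the proxy below (parameter count against a budget, over the toy layer-list representation used earlier) is purely illustrative.

```python
# Toy stand-in for the performance estimator 62: judge a spawned network
# quickly with an analytic proxy instead of training it. The proxy and the
# budget are illustrative assumptions.
def is_promising(net: list, max_params: int) -> bool:
    params = sum(l["filters"] * l.get("kernel", 3) ** 2 for l in net)
    return params <= max_params   # promising nets proceed to evaluation (step 68)
```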
[0044] That is, if the newly spawned architecture is promising at step 62, the new model will be evaluated (step 68) on the validation data set 54 and the agent then updates the policy 51 based on the reward achieved by the student architecture. The reward function 70 contains terms that reflect the desired accuracy and compression rate to incentivize the agent 50 to produce smaller architectures that do not sacrifice functional accuracy. Over a series of iterations, the agent 50 converges on an acceptable student architecture as measured by the reward function 70.
[0045] To reuse weights, the engine 10 leverages the class of function-preserving transformations, which initialize the new network to represent the same function as the given network but with a different parameterization that can be further trained to improve performance. Knowledge distillation at step 66 has been employed as a component of the
training process to accelerate the training of the student network, especially for large networks.
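For instance, Net2WiderNet-style widening (Chen et al.) is one member of this class; the sketch below widens a fully connected hidden layer while preserving the network's function. The disclosure names the class of transformations, not this specific recipe.

```python
# Net2WiderNet-style function-preserving widening of a hidden layer, as one
# example of the class referenced above (an assumption; the disclosure does
# not prescribe this recipe). W1 maps input->hidden, W2 maps hidden->output.
import numpy as np

def widen(W1: np.ndarray, W2: np.ndarray, new_width: int, seed: int = 0):
    rng = np.random.default_rng(seed)
    old = W1.shape[0]
    # Each new unit copies a randomly chosen existing unit.
    mapping = np.concatenate([np.arange(old),
                              rng.integers(0, old, new_width - old)])
    counts = np.bincount(mapping, minlength=old)[mapping]  # replication counts
    W1_new = W1[mapping]              # duplicated rows give identical activations
    W2_new = W2[:, mapping] / counts  # rescale so the output is unchanged
    return W1_new, W2_new
```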
[0046] The transformation actions may lead to defective networks (e.g. an unrealistic kernel size or number of filters, etc.). It is not worthwhile to train these networks, as they cannot learn properly. To improve the training process, an apparatus has been employed to detect these defective networks early and cut off the learning process by issuing a negative reward for them.
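A minimal validity filter of this kind might look as follows; the specific checks and bounds are illustrative assumptions, not the disclosed apparatus.

```python
# Sketch of early rejection of defective architectures (the negative-reward
# path of step 62); the checks and bounds here are illustrative assumptions.
def is_defective(net: list, input_size: int = 224) -> bool:
    size = input_size
    for l in net:
        k = l.get("kernel", 3)
        if k < 1 or k > size or l.get("filters", 1) < 1:
            return True               # e.g. kernel larger than its input map
        size = max(1, size // l.get("stride", 1))
    return False                      # architecture is worth training
```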
[0047] For simplicity and clarity of illustration, where considered appropriate, reference numerals may be repeated among the figures to indicate corresponding or analogous elements. In addition, numerous specific details are set forth in order to provide a thorough understanding of the examples described herein. However, it will be understood by those of ordinary skill in the art that the examples described herein may be practiced without these specific details. In other instances, well-known methods, procedures and components have not been described in detail so as not to obscure the examples described herein. Also, the description is not to be considered as limiting the scope of the examples described herein.
[0048] It will be appreciated that the examples and corresponding diagrams used herein are for illustrative purposes only. Different configurations and terminology can be used without departing from the principles expressed herein. For instance, components and modules can be added, deleted, modified, or arranged with differing connections without departing from these principles.
[0049] It will also be appreciated that any module or component exemplified herein that executes instructions may include or otherwise have access to computer readable media such as storage media, computer storage media, or data storage devices (removable and/or non-removable) such as, for example, magnetic disks, optical disks, or tape. Computer storage media may include volatile and non-volatile, removable and non-removable media implemented in any method or technology for storage of information, such as computer readable instructions, data structures, program modules, or other data. Examples of computer storage media include RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by an application, module, or both. Any such computer storage media may be part of the engine 10, any component of or related to the engine, etc., or accessible or connectable thereto. Any application or module herein described may be implemented using
computer readable/executable instructions that may be stored or otherwise held by such computer readable media.
[0050] The steps or operations in the flow charts and diagrams described herein are just for example. There may be many variations to these steps or operations without departing from the principles discussed above. For instance, the steps may be performed in a differing order, or steps may be added, deleted, or modified.
[0051] Although the above principles have been described with reference to certain specific examples, various modifications thereof will be apparent to those skilled in the art as outlined in the appended claims.
Claims
1. A method of automated design space exploration for deep neural networks, the method comprising:
obtaining a teacher model and one or more constraints associated with an application and/or target device or process used in the application configured to utilize a deep neural network;
learning an optimal architecture using the teacher model, constraints, a training data set, and a validation data set; and
deploying the optimal architecture on the target device or process for use in the application.
2. The method of claim 1, wherein the optimal architecture is learned using a policy to generate a new student architecture from the teacher model.
3. The method of claim 2, further comprising:
determining if the new student architecture is promising;
transferring knowledge from the teacher model to train with a knowledge distillation process;
evaluating the trained architecture;
applying a reward function; and
iterating for at least one additional new student architecture and selecting the optimal architecture.
4. The method of claim 2 or claim 3, wherein the new student architecture is generated by shrinking or expanding the teacher model by altering the network configuration.
5. The method of claim 3 or claim 4, wherein the reward function can be positive or negative according to whether the new student architecture is promising or not.
6. The method of any one of claims 3 to 5, wherein the reward function comprises one or more terms that reflect a desired accuracy and/or compression rate to incentivize production of smaller architectures without sacrificing functional accuracy.
7. The method of any one of claims 1 to 6, wherein learning the optimal architecture comprises generating new networks by applying network transformation operations.
8. The method of claim 7, wherein the network transformation operations comprise altering a layer or altering the whole network.
9. The method of any one of claims 3 to 8, wherein the knowledge distillation process comprises the new student architecture receiving information from the teacher model or a larger previously determined network.
10. The method of any one of claims 1 to 9, wherein the constraints comprise at least one of: accuracy, speed, power, target hardware memory.
11. The method of any one of claims 1 to 10, wherein the application is an artificial intelligence-based application.
12. A computer readable medium comprising computer executable instructions for automated design space exploration for deep neural networks, the computer executable instructions comprising instructions for performing the method of any one of claims 1 to 11.
13. A deep neural network optimization engine configured to perform automated design space exploration for deep neural networks, the engine comprising a processor and memory, the memory comprising computer executable instructions for performing the method of any one of claims 1 to 11.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CA3114632A CA3114632A1 (en) | 2018-11-19 | 2019-11-18 | System and method for automated design space determination for deep neural networks |
EP19886902.6A EP3884434A4 (en) | 2018-11-19 | 2019-11-18 | System and method for automated design space determination for deep neural networks |
US17/250,926 US20220335304A1 (en) | 2018-11-19 | 2019-11-18 | System and Method for Automated Design Space Determination for Deep Neural Networks |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201862769403P | 2018-11-19 | 2018-11-19 | |
US62/769,403 | 2018-11-19 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020102887A1 true WO2020102887A1 (en) | 2020-05-28 |
Family
ID=70773445
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CA2019/051643 WO2020102888A1 (en) | 2018-11-19 | 2019-11-18 | System and method for automated precision configuration for deep neural networks |
PCT/CA2019/051642 WO2020102887A1 (en) | 2018-11-19 | 2019-11-18 | System and method for automated design space determination for deep neural networks |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CA2019/051643 WO2020102888A1 (en) | 2018-11-19 | 2019-11-18 | System and method for automated precision configuration for deep neural networks |
Country Status (4)
Country | Link |
---|---|
US (2) | US20220335304A1 (en) |
EP (2) | EP3884435A4 (en) |
CA (2) | CA3114635A1 (en) |
WO (2) | WO2020102888A1 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3944029A1 (en) * | 2020-07-21 | 2022-01-26 | Siemens Aktiengesellschaft | Method and system for determining a compression rate for an ai model of an industrial task |
WO2023210914A1 (en) * | 2022-04-27 | 2023-11-02 | Samsung Electronics Co., Ltd. | Method for knowledge distillation and model generation |
Families Citing this family (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20200250523A1 (en) * | 2019-02-05 | 2020-08-06 | Gyrfalcon Technology Inc. | Systems and methods for optimizing an artificial intelligence model in a semiconductor solution |
US11907828B2 (en) * | 2019-09-03 | 2024-02-20 | International Business Machines Corporation | Deep neural network on field-programmable gate array |
CN113850749B (en) * | 2020-06-09 | 2024-07-09 | 英业达科技有限公司 | Method for training defect detector |
CN111967568B (en) * | 2020-06-29 | 2023-09-01 | 北京百度网讯科技有限公司 | Adaptation method and device for deep learning model and electronic equipment |
EP3945470A1 (en) * | 2020-07-31 | 2022-02-02 | Aptiv Technologies Limited | Methods and systems for reducing the complexity of a computational network |
CN112015749B (en) * | 2020-10-27 | 2021-02-19 | 支付宝(杭州)信息技术有限公司 | Method, device and system for updating business model based on privacy protection |
KR20220101954A (en) * | 2021-01-12 | 2022-07-19 | 삼성전자주식회사 | Neural network processing method and apparatus |
CN116702835A (en) * | 2022-02-23 | 2023-09-05 | 京东方科技集团股份有限公司 | Neural network reasoning acceleration method, target detection method, device and storage medium |
US20230412472A1 (en) * | 2022-06-21 | 2023-12-21 | Northestern University | 3D-O-RAN: Dynamic Data Driven Open Radio Access Network Systems |
CN115774851B (en) * | 2023-02-10 | 2023-04-25 | 四川大学 | Method and system for detecting internal defects of crankshaft based on hierarchical knowledge distillation |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016037350A1 (en) * | 2014-09-12 | 2016-03-17 | Microsoft Corporation | Learning student dnn via output distribution |
WO2017187516A1 (en) * | 2016-04-26 | 2017-11-02 | 株式会社日立製作所 | Information processing system and method for operating same |
WO2018051841A1 (en) * | 2016-09-16 | 2018-03-22 | 日本電信電話株式会社 | Model learning device, method therefor, and program |
US20180260665A1 (en) * | 2017-03-07 | 2018-09-13 | Board Of Trustees Of Michigan State University | Deep learning system for recognizing pills in images |
Family Cites Families (21)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20160328644A1 (en) * | 2015-05-08 | 2016-11-10 | Qualcomm Incorporated | Adaptive selection of artificial neural networks |
US10733532B2 (en) * | 2016-01-27 | 2020-08-04 | Bonsai AI, Inc. | Multiple user interfaces of an artificial intelligence system to accommodate different types of users solving different types of problems with artificial intelligence |
DE202016004627U1 (en) * | 2016-07-27 | 2016-09-23 | Google Inc. | Training a neural value network |
US10621486B2 (en) * | 2016-08-12 | 2020-04-14 | Beijing Deephi Intelligent Technology Co., Ltd. | Method for optimizing an artificial neural network (ANN) |
US20180165602A1 (en) * | 2016-12-14 | 2018-06-14 | Microsoft Technology Licensing, Llc | Scalability of reinforcement learning by separation of concerns |
EP3586277B1 (en) * | 2017-02-24 | 2024-04-03 | Google LLC | Training policy neural networks using path consistency learning |
US20180260695A1 (en) * | 2017-03-07 | 2018-09-13 | Qualcomm Incorporated | Neural network compression via weak supervision |
US9754221B1 (en) * | 2017-03-09 | 2017-09-05 | Alphaics Corporation | Processor for implementing reinforcement learning operations |
WO2018189728A1 (en) * | 2017-04-14 | 2018-10-18 | Cerebras Systems Inc. | Floating-point unit stochastic rounding for accelerated deep learning |
US10643297B2 (en) * | 2017-05-05 | 2020-05-05 | Intel Corporation | Dynamic precision management for integer deep learning primitives |
EP3602414A1 (en) * | 2017-05-20 | 2020-02-05 | Google LLC | Application development platform and software development kits that provide comprehensive machine learning services |
US10878273B2 (en) * | 2017-07-06 | 2020-12-29 | Texas Instruments Incorporated | Dynamic quantization for deep neural network inference system and method |
US10885900B2 (en) * | 2017-08-11 | 2021-01-05 | Microsoft Technology Licensing, Llc | Domain adaptation in speech recognition via teacher-student learning |
US20190050710A1 (en) * | 2017-08-14 | 2019-02-14 | Midea Group Co., Ltd. | Adaptive bit-width reduction for neural networks |
US20190171927A1 (en) * | 2017-12-06 | 2019-06-06 | Facebook, Inc. | Layer-level quantization in neural networks |
JP2019164793A (en) * | 2018-03-19 | 2019-09-26 | エスアールアイ インターナショナル | Dynamic adaptation of deep neural networks |
US11429862B2 (en) * | 2018-03-20 | 2022-08-30 | Sri International | Dynamic adaptation of deep neural networks |
KR102190483B1 (en) * | 2018-04-24 | 2020-12-11 | 주식회사 지디에프랩 | System for compressing and restoring picture based on AI |
US11948074B2 (en) * | 2018-05-14 | 2024-04-02 | Samsung Electronics Co., Ltd. | Method and apparatus with neural network parameter quantization |
US11403528B2 (en) * | 2018-05-31 | 2022-08-02 | Kneron (Taiwan) Co., Ltd. | Self-tuning incremental model compression solution in deep neural network with guaranteed accuracy performance |
CN110378382A (en) * | 2019-06-18 | 2019-10-25 | 华南师范大学 | Novel quantization transaction system and its implementation based on deeply study |
-
2019
- 2019-11-18 US US17/250,926 patent/US20220335304A1/en active Pending
- 2019-11-18 WO PCT/CA2019/051643 patent/WO2020102888A1/en unknown
- 2019-11-18 US US17/250,928 patent/US20210350233A1/en active Pending
- 2019-11-18 EP EP19887690.6A patent/EP3884435A4/en active Pending
- 2019-11-18 EP EP19886902.6A patent/EP3884434A4/en active Pending
- 2019-11-18 CA CA3114635A patent/CA3114635A1/en active Pending
- 2019-11-18 WO PCT/CA2019/051642 patent/WO2020102887A1/en unknown
- 2019-11-18 CA CA3114632A patent/CA3114632A1/en active Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2016037350A1 (en) * | 2014-09-12 | 2016-03-17 | Microsoft Corporation | Learning student dnn via output distribution |
WO2017187516A1 (en) * | 2016-04-26 | 2017-11-02 | 株式会社日立製作所 | Information processing system and method for operating same |
WO2018051841A1 (en) * | 2016-09-16 | 2018-03-22 | 日本電信電話株式会社 | Model learning device, method therefor, and program |
US20180260665A1 (en) * | 2017-03-07 | 2018-09-13 | Board Of Trustees Of Michigan State University | Deep learning system for recognizing pills in images |
Non-Patent Citations (2)
Title |
---|
ANUBHAV ASHOK ET AL.: "N2N Learning: Network to Network Compression via Policy Gradient Reinforcement Learning" |
See also references of EP3884434A4 |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3944029A1 (en) * | 2020-07-21 | 2022-01-26 | Siemens Aktiengesellschaft | Method and system for determining a compression rate for an ai model of an industrial task |
WO2022017782A1 (en) * | 2020-07-21 | 2022-01-27 | Siemens Aktiengesellschaft | Method and system for determining a compression rate for an ai model of an industrial task |
CN116134387A (en) * | 2020-07-21 | 2023-05-16 | 西门子股份公司 | Method and system for determining the compression ratio of an AI model for an industrial task |
CN116134387B (en) * | 2020-07-21 | 2024-04-19 | 西门子股份公司 | Method and system for determining the compression ratio of an AI model for an industrial task |
WO2023210914A1 (en) * | 2022-04-27 | 2023-11-02 | Samsung Electronics Co., Ltd. | Method for knowledge distillation and model generation |
Also Published As
Publication number | Publication date |
---|---|
CA3114632A1 (en) | 2020-05-28 |
EP3884434A4 (en) | 2022-10-19 |
US20210350233A1 (en) | 2021-11-11 |
WO2020102888A1 (en) | 2020-05-28 |
CA3114635A1 (en) | 2020-05-28 |
EP3884435A4 (en) | 2022-10-19 |
EP3884435A1 (en) | 2021-09-29 |
US20220335304A1 (en) | 2022-10-20 |
EP3884434A1 (en) | 2021-09-29 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US20220335304A1 (en) | System and Method for Automated Design Space Determination for Deep Neural Networks | |
JP6790286B2 (en) | Device placement optimization using reinforcement learning | |
US11790212B2 (en) | Quantization-aware neural architecture search | |
JP7366274B2 (en) | Adaptive search method and device for neural networks | |
TW202026858A (en) | Exploiting activation sparsity in deep neural networks | |
EP4206957A1 (en) | Model training method and related device | |
CN116415654A (en) | Data processing method and related equipment | |
KR20200045128A (en) | Model training method and apparatus, and data recognizing method | |
Daghero et al. | Energy-efficient deep learning inference on edge devices | |
CA2957695A1 (en) | System and method for building artificial neural network architectures | |
JP2021504837A (en) | Fully connected / regression deep network compression through enhancing spatial locality to the weight matrix and providing frequency compression | |
CN113449859A (en) | Data processing method and device | |
TW201633181A (en) | Event-driven temporal convolution for asynchronous pulse-modulated sampled signals | |
CN114925320B (en) | Data processing method and related device | |
Zhang et al. | Implementation of DNNs on IoT devices | |
CN114356540A (en) | Parameter updating method and device, electronic equipment and storage medium | |
CN117744759A (en) | Text information identification method and device, storage medium and electronic equipment | |
Gadiyar et al. | Artificial intelligence software and hardware platforms | |
CN114286985A (en) | Method and apparatus for predicting kernel tuning parameters | |
CN112132281B (en) | Model training method, device, server and medium based on artificial intelligence | |
Sun et al. | Computation on sparse neural networks: an inspiration for future hardware | |
Sun et al. | Computation on sparse neural networks and its implications for future hardware | |
Guo et al. | Algorithms and architecture support of degree-based quantization for graph neural networks | |
Sun et al. | Unicnn: A pipelined accelerator towards uniformed computing for cnns | |
Thingom et al. | A Review on Machine Learning in IoT Devices |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19886902 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 3114632 Country of ref document: CA |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 2019886902 Country of ref document: EP Effective date: 20210621 |