CN114168186A - Embedded artificial intelligence implementation method and hardware platform for inference deployment - Google Patents

Embedded artificial intelligence implementation method and hardware platform for inference deployment

Info

Publication number
CN114168186A
Authority
CN
China
Prior art keywords
neural network
network model
task
intelligent task
intelligent
Prior art date
Legal status
Pending
Application number
CN202111500361.9A
Other languages
Chinese (zh)
Inventor
刘雷
周广蕴
Current Assignee
Beijing Electromechanical Engineering Research Institute
Original Assignee
Beijing Electromechanical Engineering Research Institute
Priority date
Filing date
Publication date
Application filed by Beijing Electromechanical Engineering Research Institute filed Critical Beijing Electromechanical Engineering Research Institute
Priority to CN202111500361.9A priority Critical patent/CN114168186A/en
Publication of CN114168186A publication Critical patent/CN114168186A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/70Software maintenance or management
    • G06F8/71Version control; Configuration management
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods


Abstract

The invention relates to an embedded artificial intelligence implementation method and hardware platform for inference deployment. The method comprises: encapsulating a resource virtual programming interface; constructing a model library of neural network models, in which each neural network model has a unique label; developing intelligent tasks based on the encapsulated resource virtual programming interface, binding each intelligent task to a scheduling point of the main control flow, and configuring the neural network model labels required to execute the task in the task's script configuration file. Before an intelligent task is ready to run, the main control flow parses the task's script configuration file to obtain the required neural network model labels and loads the corresponding neural network models from the model library into the hardware that will execute them; when the intelligent task runs, the hardware loaded with the neural network models executes the task under the control of the main control flow. The invention fully exploits the computing power of the hardware, ensures a stable operating effect, and improves developer productivity.

Description

Embedded artificial intelligence implementation method and hardware platform for inference deployment
Technical Field
The invention belongs to the technical field of embedding, and particularly relates to an inference deployment-oriented embedded artificial intelligence implementation method and a hardware platform.
Background
In recent years, with growing demands on the computing capability and intelligence of end-side equipment, the application field of deep learning has gradually expanded from the cloud side to the end side. To run efficiently on end-side platforms such as unmanned aerial vehicles, where device performance and power consumption are constrained, higher computing efficiency and lower device-interconnect latency are needed. An end-side inference framework mainly handles the deployment and computation of neural network models on different terminals such as embedded devices; because the specific requirements and constraints of inference computation differ between applications, the underlying hardware environments of different application scenarios also differ. Thus, a method and platform are urgently needed for rapidly deploying models onto different embedded platforms in different application scenarios, fully exploiting the computing power of the hardware, ensuring the stability of the algorithm's effect, enhancing the operating effect, and improving developer productivity.
Disclosure of Invention
In view of the above analysis, the present invention aims to disclose an embedded artificial intelligence implementation method and hardware platform for inference deployment, which achieve rapid deployment and ensure the reliability of artificial intelligence applications under resource-constrained conditions.
The invention discloses an embedded artificial intelligence implementation method for inference deployment, comprising the following steps:
encapsulating a resource virtual programming interface: modularizing basic register operations and the functions they implement, and encapsulating them into a programming interface callable by upper-layer applications;
constructing a model library of neural network models, in which each neural network model has a unique label;
developing intelligent tasks based on the encapsulated resource virtual programming interface; binding each intelligent task to a scheduling point of the main control flow; and configuring the neural network model labels required to execute the task in the task's script configuration file;
before an intelligent task is ready to run, the main control flow parses the task's script configuration file to obtain the required neural network model labels, and loads the corresponding neural network models from the model library into the hardware that will execute them; when the intelligent task runs, the hardware loaded with the neural network models executes the task under the control of the main control flow.
Further, the model library of the neural network model is stored in a nonvolatile memory.
Further, the neural network models are classified by evaluation, so that each class of neural network model corresponds to the hardware it is loaded onto at execution time.
Further, the evaluation classifies each neural network model according to whether its execution is control-oriented or computation-oriented; control-oriented neural network models are assigned to the processor end for execution, and computation-oriented neural network models are assigned to the AI chip for execution.
Further, the execution process of the intelligent tasks comprises the following steps:
1) before the intelligent tasks start, the main control flow parses the script configuration file of the first intelligent task, the one with the highest priority, to obtain the neural network model labels required for its execution, and loads the corresponding neural network models from the model library into the hardware that will execute them: control-oriented neural network models are loaded to the processor end, and computation-oriented neural network models are loaded to the AI chip;
2) when the first intelligent task is executed, the main control flow controls the corresponding hardware to execute the task by running the neural network models loaded to the processor end and/or the AI chip, following the implementation process of the first intelligent task;
3) before the second intelligent task, the next-highest in priority, begins, the main control flow parses its script configuration file to obtain the neural network model labels required for its execution, loads the corresponding neural network models from the model library into the hardware that will execute them, and unloads the neural network models of the first intelligent task from the hardware;
4) when the second intelligent task starts, the main control flow switches tasks and controls the corresponding hardware to execute it by running the neural network models loaded to the processor end and/or the AI chip, following the implementation process of the second intelligent task;
in this way, under the control of the main control flow, all intelligent tasks are switched one by one with dynamic loading and unloading of the neural network models, and every intelligent task is carried out.
The invention also discloses an embedded artificial intelligence hardware platform for inference deployment, comprising a processor end, an AI chip, a first memory and a second memory;
the first memory is used for storing the encapsulated resource virtual programming interface, which encapsulates basic register operations and the functions they implement into a programming interface callable by upper-layer applications;
the second memory is used for storing a model library of the neural network model; each neural network model in the model library has a unique label;
the AI chip is connected with the second memory and is used for loading and running a neural network model required by executing an intelligent task from the second memory;
the processor end is used for executing the main control process of the platform; and loading the packaged resource virtual programming interface from the first memory, and dynamically loading the neural network model required by executing the intelligent task from the second memory to a processor end or an AI chip to control the implementation of the intelligent task.
Further, the second memory is a nonvolatile memory.
Further, the neural network models in the model library are divided into control-oriented neural network models and computation-oriented neural network models.
Further, the control-oriented neural network models are loaded to the processor end for execution, and the computation-oriented neural network models are loaded to the AI chip for execution.
Further, the AI chip adopts an FPGA chip.
The invention can realize at least one of the following beneficial effects:
the invention utilizes the resource virtual programming interface to package aiming at different embedded platforms and different application scenes, provides a uniform programming interface for application software, provides high-efficiency resource management, establishes a model base, and realizes the rapid deployment and the online reconstruction of various intelligent tasks through a main control flow. By the method that the neural network model is distributed on the CPU and the neural network processor, the computational power of hardware is fully adjusted, the stability of the operation effect is ensured, and the development efficiency of developers is improved.
Drawings
The drawings are only for purposes of illustrating particular embodiments and are not to be construed as limiting the invention, wherein like reference numerals are used to designate like parts throughout.
FIG. 1 is a flow chart of an embodiment of a method for implementing embedded artificial intelligence;
FIG. 2 is a connection block diagram of an embedded artificial intelligence hardware platform in an embodiment of the present invention.
Detailed Description
The preferred embodiments of the present invention are described below in detail with reference to the accompanying drawings, which form a part hereof and, together with the embodiments, serve to explain the principles of the invention.
An embodiment of the present invention discloses an embedded artificial intelligence implementation method for inference deployment, as shown in FIG. 1, comprising the following steps:
Step S1, encapsulating a resource virtual programming interface: modularizing basic register operations and the functions they implement, and encapsulating them into a programming interface callable by upper-layer applications.
Encapsulating the resource virtual programming interface provides upper-layer application software with a uniform programming interface and efficient resource management, gives the method good adaptability and portability, and enables rapid deployment of multiple types of intelligent tasks.
The encapsulated programming interface can be called by upper-layer applications. It is stored in program memory; when the artificial intelligence implementation system is powered on, it is read from program memory into the processor end, providing the functions of invoking and operating the registers.
Through resource virtual programming interface encapsulation, hardware resources can be packaged into interfaces better suited to user operation according to user requirements and usage scenarios; the encapsulated content covers hardware initialization, reading and writing, control, mode setting, and other operations. For example, the resource virtual programming interface of a serial port encapsulates operations such as setting the baud rate, setting the data bits, clearing the buffer, and setting blocking/non-blocking mode, so that serial port operation has good adaptability and portability and supports the rapid deployment of multiple types of intelligent tasks.
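As an illustration, the serial port encapsulation described above can be sketched as follows. All class and method names, and the register model, are hypothetical; real code would wrap memory-mapped UART registers rather than a Python object:

```python
class SerialPortRegisters:
    """Simulated UART registers (stand-in for real memory-mapped I/O)."""
    def __init__(self):
        self.baud_divisor = 0
        self.data_bits = 8
        self.rx_buffer = []
        self.blocking = True

class SerialPort:
    """Resource virtual programming interface: upper-layer applications call
    these methods instead of manipulating registers directly."""
    CLOCK_HZ = 1_843_200  # assumed UART input clock, 16x oversampling

    def __init__(self):
        self._regs = SerialPortRegisters()

    def set_baud_rate(self, baud):
        # The register-level divisor calculation is hidden from the caller.
        self._regs.baud_divisor = self.CLOCK_HZ // (16 * baud)

    def set_data_bits(self, bits):
        if bits not in (5, 6, 7, 8):
            raise ValueError("unsupported data width")
        self._regs.data_bits = bits

    def clear_buffer(self):
        self._regs.rx_buffer.clear()

    def set_blocking(self, blocking):
        self._regs.blocking = blocking

port = SerialPort()
port.set_baud_rate(115200)
port.set_data_bits(8)
port.set_blocking(False)
print(port._regs.baud_divisor)  # → 1
```

Because the application only sees `set_baud_rate` and similar calls, the same task code can be redeployed on a platform with a different UART by re-implementing the wrapper, which is the adaptability and portability claimed above.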
Preferably, for an embedded hardware platform, a heterogeneous platform combining a processor end and a reconfigurable processor can be used; the processor end may be a general-purpose CPU, and the reconfigurable processor may be an FPGA.
A reconfigurable processor (FPGA) contains a large number of on-chip computing and storage resources and can be configured and reconfigured.
Step S2, constructing a model library of neural network models, in which each neural network model has a unique label.
Neural network models of various complexities, from various training frameworks, and for different intelligent tasks are organized to form the model library; the model library is stored in a non-volatile memory, such as flash memory.
All neural network models in the model library have been evaluated and classified, so that each class of neural network model corresponds to the hardware it is loaded onto at execution time.
Specifically, according to the field and application requirements, the neural network models involved in the relevant intelligent tasks are evaluated and partitioned onto different hardware for processing; the different hardware comprises the processor end and the AI chip.
In this embodiment, evaluating a neural network model means analyzing the artificial intelligence applications in the relevant application field and the artificial intelligence models at the neural network processor level, and partitioning each model onto different hardware for processing: control-oriented neural network operators are assigned to the processor end, and computation-oriented neural network operators are assigned to the neural network processor. The mapping of each algorithm to computing resources is then optimized according to the algorithm's computing resource requirements.
Specifically, the partitioning is done in combination with the performance of the actual hardware platform: control-oriented neural network operators are assigned to the processor end and computation-oriented neural network operators to the neural network processor, thereby ensuring high energy efficiency.
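The partitioning rule can be illustrated with a small sketch. The operator names and the two sets are hypothetical; a real system would classify operators by profiling them on the actual processor and AI chip:

```python
# Hypothetical operator sets; a real system would classify operators by
# profiling them on the target processor end and AI chip.
CONTROL_OPS = {"if", "loop", "nms", "argmax"}       # branch-heavy, control-oriented
COMPUTE_OPS = {"conv2d", "matmul", "pool", "relu"}  # dense, computation-oriented

def partition(operators):
    """Assign each operator to the hardware best suited to it."""
    placement = {}
    for op in operators:
        if op in COMPUTE_OPS:
            placement[op] = "npu"  # dense arithmetic goes to the AI chip
        else:
            placement[op] = "cpu"  # control flow stays on the processor end
    return placement

print(partition(["conv2d", "nms", "matmul"]))
# → {'conv2d': 'npu', 'nms': 'cpu', 'matmul': 'npu'}
```

Defaulting unknown operators to the CPU is a conservative choice: the processor end can execute anything, while the AI chip accelerates only the dense operators it supports.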
Step S3, developing intelligent tasks based on the encapsulated resource virtual programming interface; binding each intelligent task to a scheduling point of the main control flow; and configuring the neural network model labels required to execute the task in the task's script configuration file.
Step S4, before an intelligent task is ready to run, the main control flow parses the task's script configuration file to obtain the required neural network model labels, and loads the corresponding neural network models from the model library into the hardware that will execute them; when the intelligent task runs, the hardware loaded with the neural network models executes the task under the control of the main control flow.
The main control flow is responsible for the scheduling, control and execution of the specific applications, and each specific application implements an actual service function. To load the neural network model a specific application needs, the model corresponding to the label in the script configuration file is looked up, then read from its storage address and executed.
When implemented, the embedded artificial intelligence of this embodiment comprises the main control flow and at least one specific application corresponding to at least one intelligent task; the main control flow schedules, controls and executes the specific applications, while each specific application implements the actual service function of its intelligent task and calls and runs the corresponding neural network model to do so.
Specifically, the execution process of the intelligent tasks comprises the following steps:
1) before the intelligent tasks start, the main control flow parses the script configuration file of the first intelligent task, the one with the highest priority, to obtain the neural network model labels required for its execution, and loads the corresponding neural network models from the model library into the hardware that will execute them: control-oriented neural network models are loaded to the processor end, and computation-oriented neural network models are loaded to the AI chip;
2) when the first intelligent task is executed, the main control flow controls the corresponding hardware to execute the task by running the neural network models loaded to the processor end and/or the AI chip, following the implementation process of the first intelligent task;
3) before the second intelligent task, the next-highest in priority, begins, the main control flow parses its script configuration file to obtain the neural network model labels required for its execution, loads the corresponding neural network models from the model library into the hardware that will execute them, and unloads the neural network models of the first intelligent task from the hardware;
4) when the second intelligent task starts, the main control flow switches tasks and controls the corresponding hardware to execute it by running the neural network models loaded to the processor end and/or the AI chip, following the implementation process of the second intelligent task;
in this way, under the control of the main control flow, all intelligent tasks are switched one by one with dynamic loading and unloading of the neural network models, and every intelligent task is carried out.
Another embodiment of the present invention discloses an embedded artificial intelligence hardware platform for inference deployment, as shown in FIG. 2, comprising a processor end, an AI chip, a first memory and a second memory.
The first memory is used for storing the encapsulated resource virtual programming interface, which encapsulates basic register operations and the functions they implement into a programming interface callable by upper-layer applications. Specifically, the first memory is a program memory, for example an EEPROM.
The second memory is used for storing the model library of neural network models; each neural network model in the model library has a unique label. Specifically, the second memory is a non-volatile memory, such as flash memory.
The AI chip is connected with the second memory and is used for loading the neural network models required to execute an intelligent task from the second memory and running them.
the neural network models in the model base are divided into neural network models with deviation control and neural network models with deviation calculation.
Specifically, the AI chip adopts a reconfigurable processor FPGA.
The processor end is used for executing the main control flow of the platform, loading the encapsulated resource virtual programming interface from the first memory, dynamically loading the neural network models required to execute an intelligent task from the second memory to the processor end or the AI chip, and controlling the implementation of the intelligent task.
Specifically, control-oriented neural network models in the model library are loaded to the processor end for execution, and computation-oriented neural network models are loaded to the AI chip for execution.
Specifically, when an artificial intelligence task is to be executed, the platform is powered on and the processor end imports the main control flow and the encapsulated resource virtual programming interface from the first memory, then executes the main control flow.
The main control flow is responsible for the scheduling, control and execution of the specific applications, and each specific application implements an actual service function. To load the neural network model a specific application needs, the model corresponding to the label in the script configuration file is looked up, then read from its storage address in the second memory and executed.
When implemented, the embedded artificial intelligence of this embodiment comprises the main control flow and at least one specific application corresponding to at least one intelligent task; the main control flow schedules, controls and executes the specific applications, while each specific application implements the actual service function of its intelligent task and calls and runs the corresponding neural network model to do so.
Specifically, the execution process of the intelligent tasks comprises the following steps:
1) before the intelligent tasks start, the main control flow parses the script configuration file of the first intelligent task, the one with the highest priority, to obtain the neural network model labels required for its execution, and loads the corresponding neural network models from the model library into the hardware that will execute them: control-oriented neural network models are loaded to the processor end, and computation-oriented neural network models are loaded to the AI chip;
2) when the first intelligent task is executed, the main control flow controls the corresponding hardware to execute the task by running the neural network models loaded to the processor end and/or the AI chip, following the implementation process of the first intelligent task;
3) before the second intelligent task, the next-highest in priority, begins, the main control flow parses its script configuration file to obtain the neural network model labels required for its execution, loads the corresponding neural network models from the model library into the hardware that will execute them, and unloads the neural network models of the first intelligent task from the hardware;
4) when the second intelligent task starts, the main control flow switches tasks and controls the corresponding hardware to execute it by running the neural network models loaded to the processor end and/or the AI chip, following the implementation process of the second intelligent task;
in this way, under the control of the main control flow, all intelligent tasks are switched one by one with dynamic loading and unloading of the neural network models, and every intelligent task is carried out.
In summary, for different embedded platforms and different application scenarios, this embodiment uses resource virtual programming interface encapsulation to provide application software with a uniform programming interface and efficient resource management, establishes a model library, and achieves rapid deployment and online reconfiguration of multiple types of intelligent tasks through the main control flow. By distributing the neural network models between the processor end (CPU) and the AI chip (neural network processor), the computing power of the hardware is fully exploited, a stable operating effect is ensured, and developer productivity is improved.
The above description is only for the preferred embodiment of the present invention, but the scope of the present invention is not limited thereto, and any changes or substitutions that can be easily conceived by those skilled in the art within the technical scope of the present invention are included in the scope of the present invention.

Claims (10)

1. An embedded artificial intelligence implementation method for inference deployment, characterized by comprising the following steps:
encapsulating a resource virtual programming interface: modularizing basic register operations and the functions they implement, and encapsulating them into a programming interface callable by upper-layer applications;
constructing a model library of neural network models, in which each neural network model has a unique label;
developing intelligent tasks based on the encapsulated resource virtual programming interface; binding each intelligent task to a scheduling point of the main control flow; and configuring the neural network model labels required to execute the task in the task's script configuration file;
before an intelligent task is ready to run, the main control flow parses the task's script configuration file to obtain the required neural network model labels, and loads the corresponding neural network models from the model library into the hardware that will execute them; when the intelligent task runs, the hardware loaded with the neural network models executes the task under the control of the main control flow.
2. The embedded artificial intelligence implementation method of claim 1, wherein the model library of the neural network model is stored in a non-volatile memory.
3. The embedded artificial intelligence implementation method of claim 2, wherein the neural network models are classified by evaluation, such that each class of neural network model corresponds to the hardware it is loaded onto at execution time.
4. The embedded artificial intelligence implementation method of claim 3, wherein the evaluation classifies each neural network model according to whether its execution is control-oriented or computation-oriented; control-oriented neural network models are assigned to the processor end for execution, and computation-oriented neural network models are assigned to the AI chip for execution.
5. The embedded artificial intelligence implementation method of claim 3, wherein the execution process of the intelligent tasks comprises:
1) before the intelligent tasks start, the main control flow parses the script configuration file of the first intelligent task, the one with the highest priority, to obtain the neural network model labels required for its execution, and loads the corresponding neural network models from the model library into the hardware that will execute them: control-oriented neural network models are loaded to the processor end, and computation-oriented neural network models are loaded to the AI chip;
2) when the first intelligent task is executed, the main control flow controls the corresponding hardware to execute the task by running the neural network models loaded to the processor end and/or the AI chip, following the implementation process of the first intelligent task;
3) before the second intelligent task, the next-highest in priority, begins, the main control flow parses its script configuration file to obtain the neural network model labels required for its execution, loads the corresponding neural network models from the model library into the hardware that will execute them, and unloads the neural network models of the first intelligent task from the hardware;
4) when the second intelligent task starts, the main control flow switches tasks and controls the corresponding hardware to execute it by running the neural network models loaded to the processor end and/or the AI chip, following the implementation process of the second intelligent task;
in this way, under the control of the main control flow, all intelligent tasks are switched one by one with dynamic loading and unloading of the neural network models, and every intelligent task is carried out.
6. An embedded artificial intelligence hardware platform for inference deployment, characterized by comprising a processor end, an AI chip, a first memory and a second memory;
the first memory is used for storing the encapsulated resource virtual programming interface, which encapsulates basic register operations and the functions they implement into a programming interface callable by upper-layer applications;
the second memory is used for storing a model library of neural network models; each neural network model in the model library has a unique label;
the AI chip is connected with the second memory and is used for loading the neural network models required to execute an intelligent task from the second memory and running them;
the processor end is used for executing the main control flow of the platform, loading the encapsulated resource virtual programming interface from the first memory, and dynamically loading the neural network models required to execute an intelligent task from the second memory to the processor end or the AI chip to control the implementation of the intelligent task.
7. The embedded artificial intelligence hardware platform of claim 6, wherein the second memory is a non-volatile memory.
8. The embedded artificial intelligence hardware platform of claim 7, wherein the neural network models in the model library are divided into control-biased neural network models and computation-biased neural network models.
9. The embedded artificial intelligence hardware platform of claim 8, wherein the control-biased neural network models are loaded onto the processor end for execution, and the computation-biased neural network models are loaded onto the AI chip for execution.
10. The embedded artificial intelligence hardware platform of any one of claims 6-9, wherein the AI chip is an FPGA chip.
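The dispatch rule of claims 8 and 9 reduces to routing each model by its bias: control-biased models go to the processor end, computation-biased models to the AI chip (an FPGA per claim 10). A one-function sketch, with the `"control"`/`"computation"` tags and target names chosen here for illustration:

```python
def dispatch_target(model_bias):
    """Map a model's bias class to the hardware that should execute it."""
    if model_bias == "control":
        return "processor"        # control-biased models run on the processor end
    if model_bias == "computation":
        return "ai_chip"          # computation-biased models run on the AI chip (FPGA)
    raise ValueError(f"unknown model bias: {model_bias!r}")
```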
CN202111500361.9A 2021-12-09 2021-12-09 Embedded artificial intelligence implementation method and hardware platform for inference deployment Pending CN114168186A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111500361.9A CN114168186A (en) 2021-12-09 2021-12-09 Embedded artificial intelligence implementation method and hardware platform for inference deployment


Publications (1)

Publication Number Publication Date
CN114168186A true CN114168186A (en) 2022-03-11

Family

ID=80484920

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111500361.9A Pending CN114168186A (en) 2021-12-09 2021-12-09 Embedded artificial intelligence implementation method and hardware platform for inference deployment

Country Status (1)

Country Link
CN (1) CN114168186A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108243216A (en) * 2016-12-26 2018-07-03 华为技术有限公司 Method, end side equipment, cloud side apparatus and the end cloud cooperative system of data processing
US20190057036A1 (en) * 2018-10-15 2019-02-21 Amrita MATHURIYA Programmable interface to in-memory cache processor
CN110750312A (en) * 2019-10-17 2020-02-04 中科寒武纪科技股份有限公司 Hardware resource configuration method and device, cloud side equipment and storage medium
CN112446491A (en) * 2021-01-20 2021-03-05 上海齐感电子信息科技有限公司 Real-time automatic quantification method and real-time automatic quantification system for neural network model
CN113408634A (en) * 2021-06-29 2021-09-17 深圳市商汤科技有限公司 Model recommendation method and device, equipment and computer storage medium


Similar Documents

Publication Publication Date Title
EP3404587B1 (en) Cnn processing method and device
EP3629190B1 (en) Dynamic deep learning processor architecture
US20170277654A1 (en) Method and apparatus for task scheduling on heterogeneous multi-core reconfigurable computing platform
CN105808290B (en) Remote dynamic updating system and method for multi-FPGA complete machine system
US20100070671A1 (en) Method and device for processing data
JP2005510778A (en) Method and system for scheduling within an adaptive computing engine
CN110780858A (en) Software layering architecture based on embedded operating system
Haubenwaller et al. Computations on the edge in the internet of things
CN114996018A (en) Resource scheduling method, node, system, device and medium for heterogeneous computing
CN113867600A (en) Development method and device for processing streaming data and computer equipment
Doukas et al. A real-time Linux execution environment for function-block based distributed control applications
CN113467931B (en) Processing method, device and system of calculation task
WO2020062277A1 (en) Management method and apparatus for computing resources in data pre-processing phase of neural network
CN112434800B (en) Control device and brain-like computing system
CN114168186A (en) Embedded artificial intelligence implementation method and hardware platform for inference deployment
CN104219290B (en) A kind of multimode cloud application elasticity collocation method
CN112631968A (en) Dynamic evolvable intelligent processing chip structure
CN114281404B (en) Method and device for transplanting algorithm codes of industrial personal computer
CN113472557A (en) Virtual network element processing method and device and electronic equipment
CN115167985A (en) Virtualized computing power providing method and system
Werner et al. Virtualized on-chip distributed computing for heterogeneous reconfigurable multi-core systems
CN112346390B (en) Optical module control method, device, equipment and computer readable storage medium
CN106502633A (en) A kind of operating system of the transparent programming of reconfigurable hardware
JPH11120210A (en) Designing device of reconfigurable circuit and reconfigurable circuit device
CN111427687A (en) Artificial intelligence cloud platform

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination