CN110674935B - Method for transplanting intelligent algorithm to airborne embedded platform and intelligent computing platform - Google Patents


Info

Publication number
CN110674935B
CN110674935B
Authority
CN
China
Prior art keywords
intelligent
algorithm
platform
computing platform
processing module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910906956.0A
Other languages
Chinese (zh)
Other versions
CN110674935A (en)
Inventor
罗庆
孙智孝
马晓宁
王鹤
费思邈
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Aircraft Design and Research Institute Aviation Industry of China AVIC
Original Assignee
Shenyang Aircraft Design and Research Institute Aviation Industry of China AVIC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Aircraft Design and Research Institute Aviation Industry of China AVIC
Priority to CN201910906956.0A
Publication of CN110674935A
Application granted
Publication of CN110674935B
Legal status: Active
Anticipated expiration

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06N — COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 — Computing arrangements based on biological models
    • G06N3/02 — Neural networks
    • G06N3/06 — Physical realisation, i.e. hardware implementation of neural networks, neurons or parts of neurons
    • G06N3/063 — Physical realisation using electronic means
    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06F — ELECTRIC DIGITAL DATA PROCESSING
    • G06F15/00 — Digital computers in general; Data processing equipment in general
    • G06F15/76 — Architectures of general purpose stored program computers
    • G06F15/78 — Architectures comprising a single central processing unit
    • G06F15/7807 — System on chip, i.e. computer system on a single chip; System in package, i.e. computer system on one or more chips in a single package

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Computer Hardware Design (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Neurology (AREA)
  • Artificial Intelligence (AREA)
  • Microelectronics & Electronic Packaging (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)

Abstract

The application belongs to the technical field of transplanting intelligent algorithms to airborne embedded platforms, and particularly relates to a method for transplanting an intelligent algorithm to an airborne embedded platform, comprising the following steps: step one, performing algorithm-structure parallelism analysis, computational-complexity analysis, and resource-demand analysis on an intelligent algorithm developed on the TensorFlow framework, and accordingly adding an intelligent computing platform to the airborne core processing platform, the intelligent computing platform comprising a general processing module, which is responsible for operation management of the intelligent computing platform and for data interaction with the built-in equipment, and an intelligent processing module, which carries the intelligent algorithm; and step two, carrying out integrated development of the intelligent computing platform, and adapting the intelligent computing platform chip and the intelligent algorithm to the airborne environment. The application also relates to an intelligent computing platform for realizing the method.

Description

Method for transplanting intelligent algorithm to airborne embedded platform and intelligent computing platform
Technical Field
The application belongs to the technical field of transplanting intelligent algorithms to airborne embedded platforms, and particularly relates to a method for transplanting an intelligent algorithm to an airborne embedded platform and an intelligent computing platform.
Background
Currently, most intelligent algorithms are developed and run on servers based on the X86 platform. To reduce the difficulty of intelligent algorithm development and compress development cycles, intelligent algorithms are developed in large numbers on open-source frameworks, such as Google's TensorFlow framework.
Intelligent algorithms make extensive use of convolutional neural networks and reinforcement-learning neural networks, and can realize sound intelligent decision-making under uncertain conditions. As the state of the art advances, the requirements on aircraft intelligence have risen greatly, and there is an urgent need to transplant the related intelligent algorithms to the airborne platform; however, because the airborne embedded platform differs greatly from the X86 platform, intelligent algorithms developed and run on the X86 platform are difficult to transplant directly to the airborne embedded platform.
The present application is made in view of the above-mentioned drawbacks of the prior art.
Disclosure of Invention
The present application is directed to a method for transplanting an intelligent algorithm to an onboard embedded platform and an intelligent computing platform, so as to overcome or alleviate at least one of the drawbacks of the prior art.
The technical scheme of the application is as follows:
in one aspect, a method for transplanting an intelligent algorithm to an airborne embedded platform is provided, comprising the following steps:
step one, performing algorithm-structure parallelism analysis, computational-complexity analysis, and resource-demand analysis on an intelligent algorithm developed on the TensorFlow framework, and accordingly adding an intelligent computing platform to the airborne core processing platform; the intelligent computing platform comprises:
a general processing module, which is responsible for operation management of the intelligent computing platform and is used for data interaction with the built-in equipment; and
an intelligent processing module, which is used for carrying the intelligent algorithm;
step two, carrying out integrated development of the intelligent computing platform, and adapting the intelligent computing platform chip and the intelligent algorithm to the airborne environment.
According to at least one embodiment of the present application, step two specifically comprises:
analyzing the dependency relationships of the TensorFlow framework, and accordingly completing the transplantation of the intelligent computing platform chip driver, the intelligent computing platform runtime library, the neural network algorithm operators, the TensorFlow framework, and the TensorFlow framework dependency libraries; and
analyzing the intelligent algorithm framework and its dependencies, and accordingly completing the development, transplantation, and adaptation of the intelligent algorithm model framework, specific operators, dependency-library-related algorithms, intelligent-algorithm-specific data structures tied to the characteristics of different compiler versions, and function calls.
Another aspect provides an intelligent computing platform, where the intelligent computing platform is disposed on an airborne core processing platform and is configured to implement any one of the above methods for transplanting an intelligent algorithm to an airborne embedded platform, the intelligent computing platform comprising:
the general processing module is responsible for operation management of the intelligent operation platform and is used for data interaction with the built-in equipment;
and the intelligent processing module is used for carrying an intelligent algorithm.
According to at least one embodiment of the application, the general processing module adopts a PowerPC-series processor, adopts an embedded multi-core operating system, and adopts the embedded database eXtremeDB to realize rule-class algorithm development.
According to at least one embodiment of the present application, the algorithm design of the general processing module adopts a framework-based, modular design.
According to at least one embodiment of the present application, the general processing module is respectively designed with:
the rule base module, which is organized and managed in a database mode; and
the reasoning model module, which is implemented in C, executes the rules required by the reasoning process, and invokes them through the interface provided by the database.
According to at least one embodiment of the application, the general processing module organizes and describes the rules using database features, provides a parsable rule-set file and description that can be written into the database, and uses a MySQL database to realize the organization and management of the rules.
According to at least one embodiment of the application, the intelligent processing module adopts a PowerPC-series processor and an embedded multi-core operating system, and its chip adopts a high-performance FPGA.
According to at least one embodiment of the present application, the architecture of the intelligent processing module comprises:
a hardware acceleration unit, which modularizes the operators in the neural network algorithm model to form a per-operator hardware acceleration unit PE (processing element), and configures an operator acceleration unit array according to performance requirements and hardware resources;
a control logic unit, which schedules and distributes tasks within the operator acceleration unit array according to the task characteristics of the neural network algorithm model; and
a storage unit, which optimizes the data storage of the neuron parameters and intermediate results of the algorithm model.
According to at least one embodiment of the present application, the storage unit stores the neuron parameters in on-chip BRAM and stores the intermediate results in off-chip DRAM.
Drawings
Fig. 1 is a flowchart of a method for porting an intelligent algorithm to an onboard embedded platform according to an embodiment of the present application.
Detailed Description
The present application will be described in further detail with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant application and are not limiting of the application. It should be noted that, for convenience of description, only the portions related to the present application are shown in the drawings.
It should be noted that the embodiments and features of the embodiments in the present application may be combined with each other without conflict. The present application will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
It should be noted that in the description of the present application, the terms of direction or positional relationship indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, which are merely for convenience of description, and do not indicate or imply that the device or element must have a specific orientation, be constructed and operated in a specific orientation, and thus, should not be construed as limiting the present application. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Furthermore, it should be noted that, in the description of the present application, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as being fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present application can be understood by those skilled in the art as the case may be.
The present application is described in further detail below with reference to fig. 1.
In one aspect, a method for transplanting an intelligent algorithm to an airborne embedded platform is provided, comprising the following steps:
step one, performing algorithm-structure parallelism analysis, computational-complexity analysis, and resource-demand analysis on an intelligent algorithm developed on the TensorFlow framework, and accordingly adding an intelligent computing platform to the airborne core processing platform; the intelligent computing platform comprises:
a general processing module, which is responsible for operation management of the intelligent computing platform and is used for data interaction with the built-in equipment; and
an intelligent processing module, which is used for carrying the intelligent algorithm;
step two, carrying out integrated development of the intelligent computing platform, and adapting the intelligent computing platform chip and the intelligent algorithm to the airborne environment.
For the method for transplanting an intelligent algorithm to an airborne embedded platform disclosed in the above embodiment, those skilled in the art can understand that the intelligent computing platform is designed to be compatible with the existing airborne hardware platform architecture and communication network, so that the large number of intelligent algorithms developed for X86 servers can be transplanted to the airborne platform with high transplantation efficiency.
In some optional embodiments, step two specifically comprises:
analyzing the dependency relationships of the TensorFlow framework, and accordingly completing the transplantation of the intelligent computing platform chip driver, the intelligent computing platform runtime library, the neural network algorithm operators, the TensorFlow framework, and the TensorFlow framework dependency libraries; and
analyzing the intelligent algorithm framework and its dependencies, and accordingly completing the development, transplantation, and adaptation of the intelligent algorithm model framework, specific operators, dependency-library-related algorithms, intelligent-algorithm-specific data structures tied to the characteristics of different compiler versions, and function calls.
Another aspect provides an intelligent computing platform, where the intelligent computing platform is disposed on an airborne core processing platform and is configured to implement any one of the above methods for transplanting an intelligent algorithm to an airborne embedded platform, the intelligent computing platform comprising:
the general processing module is responsible for operation management of the intelligent operation platform and is used for data interaction with the built-in equipment;
and the intelligent processing module is used for carrying an intelligent algorithm.
In some optional embodiments, the general processing module performs data interaction with the built-in equipment through an inter-device bus (i.e., a fiber-optic bus, a GJB289A bus, an HB6096 bus, an RS422 bus, or an RS232 bus), discrete signals, or analog signals.
In some optional embodiments, the general processing module adopts a PowerPC-series processor, adopts an embedded multi-core operating system, and adopts the embedded database eXtremeDB to realize rule-class algorithm development.
In some alternative embodiments, the algorithm design of the general processing module adopts a framework-based, modular design.
In some alternative embodiments, the general processing module is respectively designed with:
the rule base module, which is organized and managed in a database mode; and
the reasoning model module, which is implemented in C, executes the rules required by the reasoning process, and invokes them through the interface provided by the database.
In some optional embodiments, the general processing module organizes and describes the rules using database features, provides a parsable rule-set file and description that can be written into the database, and uses a MySQL database to realize the organization and management of the rules.
In some optional embodiments, the intelligent processing module selects a PowerPC-series processor, adopts an embedded multi-core operating system, and uses a high-performance FPGA as its chip, with an SDK development platform for deep-learning technology built on a high-performance FPGA development tool. This provides design support for the convolutional neural network (CNN); the recurrent neural network (RNN), however, differs greatly from the CNN and cannot be developed directly on that SDK. Unlike a feedforward neural network, data in an RNN is transmitted in both directions, which places higher requirements on the control-logic design, and the model includes complex operators such as Sigmoid and Tanh. The hardware circuit of the intelligent processing module therefore needs further optimization: a model-based compression tool is adopted to perform pruning compression and data quantization of the algorithm model, realizing the transplantation of the RNN algorithm in the embedded environment.
In some optional embodiments, the architecture of the intelligent processing module comprises:
a hardware acceleration unit, which modularizes the operators in the neural network algorithm model, including matrix multiply-add operations, Sigmoid, Tanh, and the like, to form a per-operator hardware acceleration unit PE (processing element), and configures an operator acceleration unit array according to performance requirements and hardware resources;
a control logic unit, which schedules and distributes tasks within the operator acceleration unit array according to the task characteristics of the neural network algorithm model, thereby improving hardware resource utilization; and
a storage unit, which optimizes the data storage of the neuron parameters and intermediate results of the algorithm model.
In some optional embodiments, the storage unit stores the neuron parameters in on-chip BRAM and the intermediate results in off-chip DRAM; the parameter access mechanism in BRAM can be further optimized to realize parameter sharing among neurons and improve the utilization of the BRAM storage space.
In some optional embodiments, an optimized interconnection network topology is constructed for the data communication relationships within the acceleration unit array, and a high-bandwidth, low-latency data transmission mechanism is designed, realizing efficient operation of the neural network algorithm model.
So far, the technical solutions of the present application have been described in connection with the preferred embodiments shown in the drawings, but it is easily understood by those skilled in the art that the scope of the present application is obviously not limited to these specific embodiments. Equivalent changes or substitutions of related technical features can be made by those skilled in the art without departing from the principle of the present application, and the technical scheme after the changes or substitutions will fall into the protection scope of the present application.

Claims (9)

1. A method for transplanting an intelligent algorithm to an airborne embedded platform is characterized by comprising the following steps:
step one, performing algorithm-structure parallelism analysis, computational-complexity analysis, and resource-demand analysis on an intelligent algorithm developed on the TensorFlow framework, and accordingly adding an intelligent computing platform to the airborne core processing platform; the intelligent computing platform comprises:
a general processing module, which is responsible for operation management of the intelligent computing platform and is used for data interaction with the built-in equipment; and
an intelligent processing module, which is used for carrying the intelligent algorithm;
step two, carrying out integrated development of the intelligent computing platform, and adapting the intelligent computing platform chip and the intelligent algorithm to the airborne environment, specifically comprising:
analyzing the dependency relationships of the TensorFlow framework, and accordingly completing the transplantation of the intelligent computing platform chip driver, the intelligent computing platform runtime library, the neural network algorithm operators, the TensorFlow framework, and the TensorFlow framework dependency libraries; and
analyzing the intelligent algorithm framework and its dependencies, and accordingly completing the development, transplantation, and adaptation of the intelligent algorithm model framework, specific operators, dependency-library-related algorithms, intelligent-algorithm-specific data structures tied to the characteristics of different compiler versions, and function calls.
2. An intelligent computing platform, wherein the intelligent computing platform is disposed on an airborne core processing platform and is used for implementing the method for transplanting an intelligent algorithm to an airborne embedded platform according to claim 1, the intelligent computing platform comprising:
the general processing module is responsible for operation management of the intelligent operation platform and is used for data interaction with the built-in equipment;
and the intelligent processing module is used for carrying an intelligent algorithm.
3. The intelligent computing platform of claim 2,
the general processing module adopts a PowerPC-series processor, an embedded multi-core operating system, and the embedded database eXtremeDB to realize rule-class algorithm development.
4. The intelligent computing platform of claim 3,
the algorithm design of the general processing module adopts frame type and modular design.
5. The intelligent computing platform of claim 4,
the general processing modules are respectively designed as follows:
the rule base module is organized and managed in a database mode;
the reasoning model module is realized by C language, executes the needed rule in the reasoning process and calls through the interface provided by the database.
6. The intelligent computing platform of claim 5,
the database characteristics of the general processing module organize and describe the rules, provide resolvable rule set files and descriptions which can be written into the database, and utilize the mysql database to realize the organization and management of the rules.
7. The intelligent computing platform of claim 6,
the intelligent processing module adopts a PowerPC series processor, an embedded multi-core operating system and a high-performance FPGA chip.
8. The intelligent computing platform of claim 7,
the framework comprises:
the hardware acceleration unit, which modularizes the operators in the neural network algorithm model to form a per-operator hardware acceleration unit PE (processing element), and configures an operator acceleration unit array according to performance requirements and hardware resources;
the control logic unit, which is used for scheduling and distributing tasks within the operator acceleration unit array according to the task characteristics of the neural network algorithm model; and
the storage unit, which is used for optimizing the data storage of the neuron parameters and intermediate results of the algorithm model.
9. The intelligent computing platform of claim 8,
the storage unit stores the neuron parameters in an on-chip BRAM and stores the intermediate result in an off-chip DRAM.
CN201910906956.0A 2019-09-24 2019-09-24 Method for transplanting intelligent algorithm to airborne embedded platform and intelligent computing platform Active CN110674935B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910906956.0A CN110674935B (en) 2019-09-24 2019-09-24 Method for transplanting intelligent algorithm to airborne embedded platform and intelligent computing platform

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910906956.0A CN110674935B (en) 2019-09-24 2019-09-24 Method for transplanting intelligent algorithm to airborne embedded platform and intelligent computing platform

Publications (2)

Publication Number Publication Date
CN110674935A CN110674935A (en) 2020-01-10
CN110674935B (en) 2022-12-20

Family

ID=69078664

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910906956.0A Active CN110674935B (en) 2019-09-24 2019-09-24 Method for transplanting intelligent algorithm to airborne embedded platform and intelligent computing platform

Country Status (1)

Country Link
CN (1) CN110674935B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112734040A (en) * 2021-01-22 2021-04-30 中国人民解放军军事科学院国防科技创新研究院 Embedded artificial intelligence computing framework and application method

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105930460A (en) * 2016-04-21 2016-09-07 重庆邮电大学 Multi-algorithm-integrated big data analysis middleware platform
CN106716872A (en) * 2016-11-10 2017-05-24 深圳达闼科技控股有限公司 Aircraft and control method, device and electronic device thereof
CN107783779A (en) * 2017-11-10 2018-03-09 中国航空工业集团公司西安飞机设计研究所 A kind of flight management software heterogeneous platform implantation method
CN108614703A (en) * 2016-12-30 2018-10-02 浙江舜宇智能光学技术有限公司 Algorithm implant system based on embedded platform and its algorithm transplantation method
CN109625333A (en) * 2019-01-03 2019-04-16 西安微电子技术研究所 A kind of space non-cooperative target catching method based on depth enhancing study
CN109901604A (en) * 2019-03-25 2019-06-18 北京航空航天大学 A kind of aerostatics indoor sport control framework based on Matlab

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11094029B2 (en) * 2017-04-10 2021-08-17 Intel Corporation Abstraction layers for scalable distributed machine learning

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105930460A (en) * 2016-04-21 2016-09-07 重庆邮电大学 Multi-algorithm-integrated big data analysis middleware platform
CN106716872A (en) * 2016-11-10 2017-05-24 深圳达闼科技控股有限公司 Aircraft and control method, device and electronic device thereof
CN108614703A (en) * 2016-12-30 2018-10-02 浙江舜宇智能光学技术有限公司 Algorithm implant system based on embedded platform and its algorithm transplantation method
CN107783779A (en) * 2017-11-10 2018-03-09 中国航空工业集团公司西安飞机设计研究所 A kind of flight management software heterogeneous platform implantation method
CN109625333A (en) * 2019-01-03 2019-04-16 西安微电子技术研究所 A kind of space non-cooperative target catching method based on depth enhancing study
CN109901604A (en) * 2019-03-25 2019-06-18 北京航空航天大学 A kind of aerostatics indoor sport control framework based on Matlab

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Research on the intelligentization of airborne electronic systems; 钟宇浩; 《创新应用》; 30 Sep. 2018; Vol. 35, No. 9; pp. 66-68 *

Also Published As

Publication number Publication date
CN110674935A (en) 2020-01-10

Similar Documents

Publication Publication Date Title
Jo et al. Smart livestock farms using digital twin: Feasibility study
CN110322010B (en) Pulse neural network operation system and method for brain-like intelligence and cognitive computation
CN108762768B (en) Intelligent network service deployment method and system
US11074107B1 (en) Data processing system and method for managing AI solutions development lifecycle
CN110119271B (en) Cross-machine learning platform model definition protocol and adaptation system
CN108924198B (en) Data scheduling method, device and system based on edge calculation
CN113282368B (en) Edge computing resource scheduling method for substation inspection
Etemadi et al. A cost-efficient auto-scaling mechanism for IoT applications in fog computing environment: a deep learning-based approach
Sharma et al. Enhancing the food locations in an artificial bee colony algorithm
CN109522002A (en) A kind of unmanned aerial vehicle station open architecture based on model-driven
CN112100155A (en) Cloud edge cooperative digital twin model assembling and fusing method
CN108985367A (en) Computing engines selection method and more computing engines platforms based on this method
CN115733754B (en) Resource management system based on cloud primary center platform technology and elastic construction method thereof
CN105868222A (en) Task scheduling method and device
WO2023179180A1 (en) Network virtualization system structure and virtualization method
CN109885584A (en) The implementation method and terminal device of distributed data analyzing platform
CN110674935B (en) Method for transplanting intelligent algorithm to airborne embedded platform and intelligent computing platform
Gand et al. A Fuzzy Controller for Self-adaptive Lightweight Edge Container Orchestration.
CN117076077A (en) Planning and scheduling optimization method based on big data analysis
CN110163255A (en) A kind of data stream clustering method and device based on density peaks
CN110766163B (en) System for implementing machine learning process
CN116109058A (en) Substation inspection management method and device based on deep reinforcement learning
Dustdar et al. An elasticity framework for smart contracts
CN106533720A (en) Network service request compiling method, network service request compiling device, and controller
Wang et al. Research Perspectives Toward Autonomic Optimization of In Situ Analysis and Visualization

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant