CN111176822B - OODA (Observe-Orient-Decide-Act) multi-task intelligent application-oriented embedded public software operation method - Google Patents


Info

Publication number
CN111176822B
CN111176822B
Authority
CN
China
Prior art keywords
intelligent
service
resource
computing
application
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010001857.0A
Other languages
Chinese (zh)
Other versions
CN111176822A (en)
Inventor
白林亭
文鹏程
程陶然
邹昌昊
高泽
李亚晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Aeronautics Computing Technique Research Institute of AVIC
Original Assignee
Xian Aeronautics Computing Technique Research Institute of AVIC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Aeronautics Computing Technique Research Institute of AVIC filed Critical Xian Aeronautics Computing Technique Research Institute of AVIC
Priority to CN202010001857.0A priority Critical patent/CN111176822B/en
Publication of CN111176822A publication Critical patent/CN111176822A/en
Application granted granted Critical
Publication of CN111176822B publication Critical patent/CN111176822B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/48 Program initiating; Program switching, e.g. by interrupt
    • G06F 9/4806 Task transfer initiation or dispatching
    • G06F 9/4843 Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F 9/4881 Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/50 Indexing scheme relating to G06F9/50
    • G06F 2209/508 Monitor
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/50 Indexing scheme relating to G06F9/50
    • G06F 2209/509 Offload
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00 Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention belongs to the field of embedded intelligent computing and provides an embedded public software operation method for OODA (Observe-Orient-Decide-Act) multi-task intelligent applications. The method designs an embedded public software operation framework for OODA multi-task intelligent applications: it flexibly schedules various intelligent computing resources through resource service management; it constructs a public software operation framework for knowledge-driven, intelligent-optimization, and deep-learning intelligent applications, improving the flexibility of application operation, scheduling, and maintenance; and it develops an application-oriented service management technology for OODA multi-task scenarios, providing a unified platform-level computing service for OODA multi-task intelligent applications.

Description

OODA (Observe-Orient-Decide-Act) multi-task intelligent application-oriented embedded public software operation method
Technical Field
The invention belongs to the field of embedded intelligent computing and relates to an embedded software operation method for OODA (Observe-Orient-Decide-Act) multi-task intelligent applications.
Background
In recent years, with the rapid development of artificial intelligence technology, artificial intelligence has played an increasingly central role in the intelligent upgrading of equipment. The comprehensive intelligent upgrading of equipment across various fields places ever higher demands on the intelligence level of that equipment, requiring it to evolve from weak artificial intelligence toward strong artificial intelligence.
In highly dynamic, strongly adversarial environments, the situation changes rapidly and the friend-or-foe contest is intense, greatly increasing the amount of information an embedded system must process. The traditional human-guided mode of operating equipment can no longer meet new operational demands; the embedded system is required to act autonomously across the whole OODA process, from perception and cognition through decision-making to control, which places extremely high requirements on its computing platform.
On the one hand, when equipment executes tasks autonomously across the whole OODA process, it involves multi-stage complex intelligent applications such as autonomous environment perception, autonomous situation cognition, autonomous behavior decision-making, and autonomous behavior control. The intelligent applications required during the OODA process vary with the task environment, the task target, and the characteristics of the equipment itself, producing extremely complex OODA multi-task intelligent application characteristics. Meanwhile, each type of intelligent application places different demands on the embedded intelligent computing platform in terms of running environment, resource requirements, and real-time requirements, so the application running environment provided by a traditional embedded intelligent computing platform cannot meet the running requirements of whole-process OODA intelligent tasks.
On the other hand, with the continual upgrading of processor forms and computer hardware, platform hardware for intelligent computing is increasingly heterogeneous, diverse, and numerous. As a result, the management and use of the hardware resources of an embedded intelligent computing platform become complicated, and the overall computing efficiency of those hardware resources is greatly reduced.
Disclosure of Invention
The invention aims to improve the flexibility of operation, scheduling, and maintenance of OODA multi-task intelligent applications and to provide a unified platform-level computing service for them.
To this end, the invention provides an embedded public software operation method for OODA multi-task intelligent applications, designed along the following lines:
Facing the complexity of the embedded system's OODA multi-task intelligent applications and the heterogeneous diversity of intelligent computing hardware, the method flexibly schedules various intelligent computing resources through resource service management and builds a public software operation framework for each of the knowledge-driven, intelligent-optimization, and deep-learning intelligent applications, improving the flexibility of application operation, scheduling, and maintenance. At the same time, an application-oriented service management technology is developed for OODA multi-task scenarios, providing a unified platform-level computing service for OODA multi-task intelligent applications. For this purpose, the whole embedded public software operation framework is divided into three layers:
1. Resource service management at the bottom layer abstracts the resources owned by the heterogeneous modules of the intelligent computing platform into a uniform resource description, establishes mapping management between the service requests of intelligent computing tasks and resources, and, on the basis of resource monitoring, completes the loading and unloading of intelligent computing services through dynamic resource allocation. It mainly solves the software-to-hardware resource mapping problem. Specifically:
Resource service management comprises resource-service mapping management, dynamic resource allocation, and dynamic service loading and unloading. It abstracts the resources owned by the heterogeneous modules of the intelligent computing platform into a uniform resource description, establishes mapping management between the service requests of intelligent computing tasks and resources, and completes the loading and unloading of intelligent computing services through dynamic resource allocation on the basis of resource monitoring. Resource mapping management aims to decouple the binding between intelligent services and the hardware platform; by establishing a resource description table and a service demand table, it realizes dynamic allocation of intelligent service requests, balances the resource load, and improves resource utilization efficiency. A resource monitoring mechanism is established on the basis of the resource description table to monitor the computing-resource load of each module of the intelligent computing platform in real time, and a load information table is established and maintained to record the resource state information of each module. After a service request is matched to a resource type and a computing-capacity requirement through resource mapping management, it enters a scheduling queue according to the required resource type, and the dynamic resource allocation model allocates suitable hardware resources to it by querying the load information table.
On the basis of the dynamic resource allocation model, a dynamic service loading/unloading model is designed. It consists of a platform service library, a service management center, and service loading/unloading: the service management center instructs a node, via the dynamic resource management model, to load and run a specified service; the node sends a service load/unload request to the platform service library; and the platform service library activates the required service and executes the corresponding action.
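The load/unload flow described above can be sketched as follows. This is a minimal illustration, not the patented implementation; all class and method names (PlatformServiceLibrary, Node, ServiceManagementCenter, dispatch) are hypothetical.

```python
# Hypothetical sketch of the dynamic service loading/unloading model:
# the service management center instructs a node to load a service, the node
# forwards the request to the platform service library, and the library
# activates the required service and executes the corresponding action.

class PlatformServiceLibrary:
    """Holds the platform's installed services and activates them on request."""
    def __init__(self, services):
        self.services = dict(services)   # service_id -> callable

    def activate(self, service_id):
        return self.services[service_id]()   # execute the corresponding action


class Node:
    """A compute node that loads/unloads services via the platform library."""
    def __init__(self, name, library):
        self.name = name
        self.library = library
        self.loaded = set()

    def load(self, service_id):
        result = self.library.activate(service_id)
        self.loaded.add(service_id)
        return result

    def unload(self, service_id):
        self.loaded.discard(service_id)


class ServiceManagementCenter:
    """Instructs nodes to load and run specified services."""
    def __init__(self, nodes):
        self.nodes = {n.name: n for n in nodes}

    def dispatch(self, node_name, service_id):
        return self.nodes[node_name].load(service_id)
```

A caller would register services in the library, wrap the nodes in a management center, and dispatch by node name and service ID.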
2. The software operation frameworks in the middle layer comprise an embedded public software operation framework oriented to knowledge-driven applications, one oriented to intelligent optimization, and one oriented to deep learning. Based on the unified-framework concept, this layer mainly solves the problem of runtime support for knowledge-driven intelligent algorithms, intelligent optimization algorithms, and deep learning algorithms. Specifically:
The software operation frameworks comprise an embedded public software operation framework oriented to knowledge-driven applications, one oriented to intelligent optimization, and one oriented to deep learning. Based on the unified-framework concept, a unified knowledge base and operator model library are designed for diversified intelligent application demands; a knowledge-driven software operation mechanism, an intelligent-optimization software operation mechanism, and a deep-learning software operation mechanism are formulated for the differing algorithmic characteristics of the intelligent applications; and these mechanisms are integrated into the unified software operation framework, realizing runtime support for knowledge-driven intelligent algorithms, intelligent optimization algorithms, and deep learning algorithms, and improving the multi-domain generality and multi-task applicability of the platform.
3. Application service management at the top layer provides a set of standardized application services for upper-layer intelligent tasks, decoupling the upper-layer intelligent tasks, the intelligent algorithms, and the lower-layer software operation frameworks; meanwhile, it uses a parallel scheduling technique to achieve correct real-time execution of the different intelligent computing services and to maximize computing efficiency. It mainly solves the algorithm-to-framework mapping problem. Specifically:
Application service management comprises intelligent-algorithm-oriented service encapsulation and parallel scheduling of intelligent computing services. Service encapsulation gives the external interface definition of an application service and provides a set of standardized application services for upper-layer intelligent tasks, decoupling the upper-layer intelligent tasks, the intelligent algorithms, and the lower-layer software operation frameworks so that calls are transparent. A multi-service scheduling algorithm is designed according to attributes of the computing services such as urgency and priority; the computing services are divided into real-time services and timely services, and a parallel scheduling process is realized to guarantee correct real-time execution of the different computing services and to maximize the utilization efficiency of computing resources.
Based on the above, the embedded public software operation method for OODA multi-task intelligent applications mainly comprises the following steps:
scheduling through application service management and determining the operation timing of the multiple intelligent algorithms required by the multi-task intelligent application;
adapting a corresponding software operation framework to each intelligent algorithm run in sequence; the software operation frameworks comprise an embedded public software operation framework oriented to knowledge-driven applications, one oriented to intelligent optimization, and one oriented to deep learning, wherein the three frameworks separate the operation mechanism from the libraries and establish a unified knowledge base and operator model library;
and mapping the scheduled intelligent algorithms to suitable hardware modules through resource service management and, on the basis of resource monitoring, completing the loading and unloading of intelligent computing services through dynamic resource allocation.
Optionally, the application service management specifically comprises intelligent-algorithm-oriented service encapsulation and parallel scheduling of intelligent computing services, wherein:
the intelligent-algorithm-oriented service encapsulation gives the external interface definition of an application service and provides a set of standardized application services for upper-layer intelligent tasks;
the parallel scheduling of intelligent computing services determines a scheduling algorithm (i.e., determines the operation timing of the multiple intelligent algorithms) according to the urgency and priority of the computing services, so as to guarantee correct real-time execution of the different computing services and to maximize the utilization efficiency of computing resources; the computing services are divided into real-time services and timely services.
Optionally, the resource service management performs resource monitoring. Specifically, it abstracts the resources owned by each hardware module of the intelligent computing platform into a uniform resource description; according to the established resource description table and service demand table, a resource monitoring mechanism monitors the computing-resource load of each hardware module of the intelligent computing platform in real time, and a load information table is established and maintained to record the resource state information of each module.
Optionally, the resource service management performs dynamic resource allocation. Specifically: after a service request is matched to a resource type and a computing-capacity requirement through resource mapping management, it enters a scheduling queue according to the required resource type, and suitable hardware resources are allocated to it by querying the load information table.
Optionally, the loading and unloading of intelligent computing services performed by the resource service management are realized by a dynamic service loading/unloading model. The model comprises a platform service library, a service management center, and service loading/unloading: the service management center instructs a node, via the dynamic resource management model, to load and run a specified service; the node sends a service load/unload request to the platform service library; and the platform service library activates the required service and executes the corresponding action.
Optionally, the hardware modules include a multi-core parallel intelligent computing module, a flexibly configurable intelligent computing module, and a deep-learning-dedicated intelligent computing module.
Compared with the prior art, the invention has the following advantages:
On the one hand, a unified knowledge base and operator model library are designed for diversified intelligent application requirements; a knowledge-driven software operation mechanism, an intelligent-optimization software operation mechanism, and a deep-learning software operation mechanism are formulated for the differing algorithmic characteristics of the intelligent applications; and these mechanisms are integrated into a unified software operation framework, realizing flexible, general support for different types of intelligent applications and greatly improving the multi-domain generality and multi-task applicability of the platform. On the other hand, aiming at the diversity of intelligent applications and of heterogeneous intelligent computing resources, a system-level optimized scheduling mechanism for embedded intelligent computing is established: a service encapsulation mechanism and a scheduling management mechanism for intelligent algorithms are established for the diversity of intelligent applications, realizing dynamic, flexible management of the intelligent applications; and a service management mechanism for intelligent computing resources is built for the diversity of heterogeneous intelligent computing resources, realizing dynamic, flexible management of those resources through real-time resource monitoring and dynamic scheduling. Finally, decoupling between intelligent application tasks and the computing system and between intelligent application algorithms and platform resources is achieved, improving the flexibility and generality of support for diversified intelligent tasks.
Drawings
Fig. 1 shows the embedded public software operation framework for OODA multi-task intelligent applications.
FIG. 2 is a heterogeneous fused embedded intelligent computing architecture.
Detailed Description
The invention is described in detail below with reference to the drawings and examples.
In this embodiment, scheduling is performed through application service management, and the operation timing of the multiple intelligent algorithms required by the multi-task intelligent application is determined; a corresponding software operation framework is adapted to each intelligent algorithm run in sequence; the software operation frameworks are divided into an embedded public software operation framework oriented to knowledge-driven applications, one oriented to intelligent optimization, and one oriented to deep learning; the operation mechanism of the software operation framework is separated from the libraries, and the three frameworks share a unified knowledge base and operator model library; the scheduled intelligent algorithms are mapped to suitable hardware modules through resource service management, and, on the basis of resource monitoring, loading and unloading of intelligent computing services are completed through dynamic resource allocation.
As shown in fig. 1, the whole embedded public software operation framework is divided into three layers:
1. Resource service management
Resource service management comprises resource-service mapping management, dynamic resource allocation, and dynamic service loading and unloading. It abstracts the resources owned by the heterogeneous modules of the intelligent computing platform into a uniform resource description, establishes mapping management between the service requests of intelligent computing tasks and resources, and completes the loading and unloading of intelligent computing services through dynamic resource allocation on the basis of resource monitoring. Resource mapping management aims to decouple the binding between intelligent services and the hardware platform; by establishing a resource description table and a service demand table, it realizes dynamic allocation of intelligent service requests, balances the resource load, and improves resource utilization efficiency. A resource monitoring mechanism is established on the basis of the resource description table to monitor the computing-resource load of each module of the intelligent computing platform in real time, and a load information table is established and maintained to record the resource state information of each module. After a service request is matched to a resource type and a computing-capacity requirement through resource mapping management, it enters a scheduling queue according to the required resource type, and the dynamic resource allocation model allocates suitable hardware resources to it by querying the load information table.
On the basis of the dynamic resource allocation model, a dynamic service loading/unloading model is designed. It consists of a platform service library, a service management center, and service loading/unloading: the service management center instructs a node, via the dynamic resource management model, to load and run a specified service; the node sends a service load/unload request to the platform service library; and the platform service library activates the required service and executes the corresponding action.
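The queue-and-table allocation step can be sketched concretely. In this hedged illustration (class name, table fields, and capacity arithmetic are all assumptions), a request already matched to a resource type waits in a per-type queue, and allocation scans the load information table for a module of that type with enough spare capacity.

```python
from collections import deque

# Hypothetical sketch of the dynamic resource allocation model: requests are
# queued by required resource type; allocation queries the load information
# table for a module of that type with sufficient spare capacity.

class ResourceAllocator:
    def __init__(self, load_info_table):
        # load_info_table: {module: {"type": str, "capacity": int, "load": int}}
        self.load_info = load_info_table
        self.queues = {}     # resource type -> deque of (request_id, need)

    def submit(self, request_id, resource_type, capacity_needed):
        self.queues.setdefault(resource_type, deque()).append(
            (request_id, capacity_needed))
        return self._try_allocate(resource_type)

    def _try_allocate(self, resource_type):
        queue = self.queues[resource_type]
        if not queue:
            return None
        request_id, need = queue[0]
        for module, info in self.load_info.items():
            free = info["capacity"] - info["load"]
            if info["type"] == resource_type and free >= need:
                info["load"] += need        # occupy the resource
                queue.popleft()
                return (request_id, module)
        return None                          # request waits in the queue
```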
2. Unified software running framework
The unified software operation framework comprises an embedded public software operation framework oriented to knowledge-driven applications, one oriented to intelligent optimization, and one oriented to deep learning. Based on the unified-framework concept, a unified knowledge base and operator model library are designed for diversified intelligent application demands; a knowledge-driven software operation mechanism, an intelligent-optimization software operation mechanism, and a deep-learning software operation mechanism are formulated for the differing algorithmic characteristics of the intelligent applications; and these mechanisms are integrated into the unified software operation framework, realizing runtime support for knowledge-driven intelligent algorithms, intelligent optimization algorithms, and deep learning algorithms, and improving the multi-domain generality and multi-task applicability of the platform.
3. Application service management
Application service management comprises intelligent-algorithm-oriented service encapsulation and parallel scheduling of intelligent computing services. Service encapsulation gives the external interface definition of an application service and provides a set of standardized application services for upper-layer intelligent tasks, decoupling the upper-layer intelligent tasks, the intelligent algorithms, and the lower-layer software operation frameworks so that calls are transparent. A multi-service scheduling algorithm is designed according to attributes of the computing services such as urgency and priority; the computing services are divided into real-time services and timely services, and a parallel scheduling process is realized to guarantee correct real-time execution of the different computing services and to maximize the utilization efficiency of computing resources.
FIG. 2 illustrates a heterogeneous fused embedded intelligent computing architecture. The embedded public software operation framework for OODA multi-task intelligent applications shown in Fig. 1 corresponds to the platform service layer in Fig. 2.
As shown in fig. 2, the heterogeneous fused embedded intelligent computing architecture includes, from bottom to top, a hardware layer, an operating system layer, a platform service layer, an intelligent algorithm layer, and an application layer. Wherein:
at the hardware layer, three intelligent computing modules are designed. Wherein: the high-performance multi-core parallel intelligent computing module takes a multi-core CPU as a core processor, mainly aims at knowledge-driven intelligent algorithms and intelligent optimization algorithms with multiple condition judgment, multiple branch selection and multiple loop iteration, and has a system management function; the flexible configurable intelligent computing module takes an FPGA as a core processor and mainly aims at a deep learning algorithm with intensive computation and frequent data access; the special custom intelligent computing module for deep learning uses an AI special processor as a core processor, and mainly aims at a deep learning algorithm. The difference between the intelligent computing module and the flexible configurable intelligent computing module is that the special customization module emphasizes the specificity and the customization, optimizes the performance of a certain specific (such as common or high-real-time requirement) deep learning algorithm, and emphasizes the universality and the configurability, and maximizes the support types of the deep learning algorithm.
At the operating system layer, the characteristics of various operating systems and their adaptability to the hardware platform are considered comprehensively; an embedded operating system is selected and configured, and various drivers and algorithms are supported by means of the rich library-function resources at the bottom of the operating system.
The intelligent algorithm layer contains all candidate intelligent algorithms and is also responsible for computational optimization and comprehensive management of the algorithms.
At the application layer, diversified intelligent applications are realized around the full OODA task chain; users can develop specific applications by combining the corresponding algorithms according to actual operational demands.
The embedded public software operation method of this embodiment is described in further detail below, taking as an example the typical whole-process OODA autonomous tasks of target recognition, threat assessment, target allocation, route planning, autonomous flight, and autonomous obstacle avoidance, together with heterogeneous, diverse hardware resources comprising high-performance multi-core intelligent computing hardware, deep-learning-dedicated intelligent computing hardware, and flexibly customizable intelligent computing hardware.
First, the whole-process OODA autonomous task is decomposed into the intelligent algorithms involved in each stage, and each intelligent algorithm is defined and described by a service interface; the description content includes the service ID, service type, default service node, default service framework, service monitoring, service dependency ID set, and the like.
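The service-interface description fields listed above could be rendered, for illustration only, as a small record type. The field names, example values, and service IDs below are hypothetical; the patent does not specify a concrete encoding.

```python
from dataclasses import dataclass, field

# Hypothetical rendering of a service interface description with the fields
# named in the text: service ID, service type, default service node, default
# service framework, service monitoring, and service dependency ID set.

@dataclass
class ServiceDescription:
    service_id: str
    service_type: str              # e.g. "real-time" or "timely"
    default_node: str
    default_framework: str         # one of the three operation frameworks
    monitoring_enabled: bool = True
    dependency_ids: frozenset = field(default_factory=frozenset)

target_recognition = ServiceDescription(
    service_id="svc-001",
    service_type="real-time",
    default_node="dl-module-0",
    default_framework="deep_learning",
)

threat_assessment = ServiceDescription(
    service_id="svc-002",
    service_type="timely",
    default_node="cpu-module-0",
    default_framework="knowledge_driven",
    dependency_ids=frozenset({"svc-001"}),   # depends on target recognition
)
```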
An application service scheduling management queue is established to schedule and manage the service descriptions of the application. According to the timed tasks of the OODA process, service-request triggers occurring at different moments are simulated; judgments are made according to the description attributes of each service, and each service is mapped to the corresponding service framework through queue scheduling. According to the description attributes of the service, the corresponding operators in the operator library of the operation framework are mapped and combined to form an executable operator combination framework or operator execution stream.
A resource description table is established to describe the heterogeneous, diverse resources abstractly. At the same time, a service demand table is established by comprehensively analyzing the intelligent computing tasks that may occur on a future battlefield, decomposing the service requests, and analyzing the resource types and computing capacities required by the different service requests. When resource service management receives a service request passed down from the upper layer, it matches the resource type and computing-capacity requirement according to the service demand table.
A resource monitoring mechanism is established on the basis of the resource description table to monitor the computing-resource load of each module of the intelligent computing platform in real time; a load information table is established and maintained, recording for each module information such as the resource type, whether the resource is exclusive, and the resource computing-capacity level, providing a data basis for dynamic resource allocation.
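The monitoring mechanism can be sketched as modules reporting their state into the load information table, which allocation then filters. This is a hedged illustration; the field names (type, exclusive, capacity_level, load) and the availability rule are assumptions drawn from the fields listed in the text.

```python
# Hypothetical sketch of the resource monitoring mechanism: each module
# reports its state, and the monitor maintains a load information table
# recording resource type, exclusivity, capacity level, and current load.

class ResourceMonitor:
    def __init__(self):
        self.load_info_table = {}   # module -> state record

    def report(self, module, resource_type, exclusive, capacity_level, load):
        self.load_info_table[module] = {
            "type": resource_type,
            "exclusive": exclusive,        # whether the resource is exclusive
            "capacity_level": capacity_level,
            "load": load,
        }

    def available(self, resource_type, min_capacity_level):
        """Modules of the requested type usable for a new allocation."""
        return [m for m, s in self.load_info_table.items()
                if s["type"] == resource_type
                and s["capacity_level"] >= min_capacity_level
                and not (s["exclusive"] and s["load"] > 0)]
```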
A resource scheduling management queue is established, and resources are dynamically allocated to services by multi-priority scheduling, based on the resource states in the resource description table and the service and priority attributes in the application-service scheduling management queue. After resource mapping management has matched a service request to a resource type and computing-capacity requirement, the request enters the scheduling queue for its required resource type, and the dynamic resource-allocation model assigns suitable hardware resources by querying the load information table. A service enters the execution state only when every resource type it requires is satisfied; while any resource condition is unmet, the service request waits in the resource queue.
When a service finishes occupying its resources, a release signal prompts the load information table to update; the dynamic resource-allocation model then checks the resource scheduling queue and allocates the freed resources to the first waiting service request whose computing-capacity requirement they satisfy.
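The allocate-wait-release cycle described in the two preceding steps can be sketched as follows. The free-resource pool, the request shape, and all names are illustrative assumptions; this is not the patent's actual model.

```python
from collections import deque

free = {"npu": 2, "cpu": 4}          # free units per resource type (example)
wait_q = deque()                     # waiting requests: (service_id, needs)

def request(service_id, needs):
    """Run only when every required resource type is satisfied;
    otherwise wait in the resource queue."""
    if all(free.get(t, 0) >= n for t, n in needs.items()):
        for t, n in needs.items():
            free[t] -= n
        return "running"
    wait_q.append((service_id, needs))
    return "waiting"

def release(needs):
    """Release resources (the 'release signal'), update the pool, then
    dispatch the first waiting request the freed capacity can satisfy."""
    for t, n in needs.items():
        free[t] += n
    for sid, req_needs in list(wait_q):
        if all(free.get(t, 0) >= n for t, n in req_needs.items()):
            wait_q.remove((sid, req_needs))
            for t, n in req_needs.items():
                free[t] -= n
            return sid               # this request enters the execution state
    return None
```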

Claims (3)

1. An embedded public software operation method for OODA multi-task intelligent applications, characterized by comprising the following steps:
determining, through application service management and scheduling, the operation sequence of the multiple intelligent algorithms required by the multi-task intelligent application;
adapting a corresponding software operating framework to each intelligent algorithm run in sequence, the software operating frameworks comprising a knowledge-driven embedded public software operating framework, an intelligent-optimization-oriented embedded public software operating framework, and a deep-learning-oriented embedded public software operating framework, wherein all three frameworks separate the operating mechanism from the libraries and establish a unified knowledge base and operator model library;
scheduling and mapping each intelligent algorithm, through resource service management, onto the adapted hardware module, and completing the loading and unloading of intelligent computing services through dynamic resource allocation built on resource monitoring;
the application service management specifically comprises intelligent algorithm-oriented service encapsulation and intelligent computing service parallel scheduling; wherein:
the intelligent-algorithm-oriented service encapsulation defines the external interfaces of the application services and provides a standardized set of application services to upper-layer intelligent tasks;
the parallel scheduling of intelligent computing services determines a scheduling algorithm according to the urgency and priority of the computing services, so as to guarantee correct real-time execution of the different computing services and maximize the utilization efficiency of computing resources; the computing services are divided into real-time services and timely services;
the resource monitoring performed by resource service management specifically abstracts the resources owned by each hardware module of the intelligent computing platform into a uniform resource description; according to the established resource description table and service demand table, the computing-resource load of each hardware module is monitored in real time under the configured resource monitoring mechanism, and a load information table recording each module's resource state information is established and maintained;
the dynamic resource allocation performed by resource service management is specifically as follows: after resource mapping management matches a service request's resource type and computing-capacity requirement, the request enters a scheduling queue according to its required resource type, and suitable hardware resources are allocated to it by querying the load information table.
2. The embedded public software operation method for OODA multi-task intelligent applications according to claim 1, wherein the loading and unloading of intelligent computing services by resource service management is realized by a dynamic service loading/unloading model; the model comprises a platform service library, a service management center, and service loading/unloading, wherein the service management center, through the dynamic resource management model, instructs a node to load and run a specified service; the node sends a service load/unload request to the platform service library; and the platform service library activates the required service and executes the corresponding action.
3. The embedded public software operation method for OODA multi-task intelligent applications according to claim 1, wherein the hardware modules comprise a multi-core parallel intelligent computing module, a flexibly configurable intelligent computing module, and a dedicated deep-learning intelligent computing module.
CN202010001857.0A 2020-01-02 2020-01-02 OODA (on-off-line architecture) multi-task intelligent application-oriented embedded public software operation method Active CN111176822B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010001857.0A CN111176822B (en) 2020-01-02 2020-01-02 OODA (on-off-line architecture) multi-task intelligent application-oriented embedded public software operation method

Publications (2)

Publication Number Publication Date
CN111176822A CN111176822A (en) 2020-05-19
CN111176822B true CN111176822B (en) 2023-05-09

Family

ID=70646536

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010001857.0A Active CN111176822B (en) 2020-01-02 2020-01-02 OODA (on-off-line architecture) multi-task intelligent application-oriented embedded public software operation method

Country Status (1)

Country Link
CN (1) CN111176822B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111767028B (en) * 2020-06-10 2023-09-19 中国人民解放军军事科学院国防科技创新研究院 Cognitive resource management architecture and cognitive resource calling method
CN112748907B (en) * 2020-12-04 2023-01-13 中国航空工业集团公司成都飞机设计研究所 C/S mode general measurement and control software architecture based on hardware resources and design method
CN114490500B (en) * 2021-12-29 2024-03-08 西北工业大学 Comprehensive intelligent flight control computing platform for general intelligent application
CN114781900B (en) * 2022-05-07 2023-02-28 中国航空工业集团公司沈阳飞机设计研究所 Multi-task simultaneous working resource scheduling method and system and airplane

Citations (5)

Publication number Priority date Publication date Assignee Title
US6108711A (en) * 1998-09-11 2000-08-22 Genesys Telecommunications Laboratories, Inc. Operating system having external media layer, workflow layer, internal media layer, and knowledge base for routing media events between transactions
CN104298496A (en) * 2013-07-19 2015-01-21 上海宝信软件股份有限公司 Data-analysis-based software development framework system
CN104331325A (en) * 2014-11-25 2015-02-04 深圳市信义科技有限公司 Resource exploration and analysis-based multi-intelligence scheduling system and resource exploration and analysis-based multi-intelligence scheduling method for video resources
CN110034961A (en) * 2019-04-11 2019-07-19 重庆邮电大学 It take OODA chain as the infiltration rate calculation method of first body
CN110427523A (en) * 2019-07-31 2019-11-08 南京莱斯信息技术股份有限公司 The business environment application system based on big data that can be adapted to

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US8332859B2 (en) * 2007-05-31 2012-12-11 International Business Machines Corporation Intelligent buyer's agent usage for allocation of service level characteristics

Non-Patent Citations (2)

Title
Guo Wen-yue et al. Semantic web service discovery algorithm and its application on the intelligent automotive manufacturing system. 2010 2nd IEEE International Conference on Information Management and Engineering. 2010, pp. 1-4. *
Hai Yulin et al. A comprehensive test process model for embedded system design. Information & Communication. 2018, pp. 86-87. *

Similar Documents

Publication Publication Date Title
CN111176822B (en) OODA (on-off-line architecture) multi-task intelligent application-oriented embedded public software operation method
CN103069390B (en) Method and system for re-scheduling workload in a hybrid computing environment
US20200241916A1 (en) Legacy application migration to real time, parallel performance cloud
Hao et al. Implementing a hybrid simulation model for a Kanban-based material handling system
CN107122243B (en) The method of Heterogeneous Cluster Environment and calculating CFD tasks for CFD simulation calculations
US11789895B2 (en) On-chip heterogeneous AI processor with distributed tasks queues allowing for parallel task execution
CN103189845B (en) The dynamic equilibrium of the I/O resource on non-uniform memory access platform
US11782870B2 (en) Configurable heterogeneous AI processor with distributed task queues allowing parallel task execution
CN103069389B (en) High-throughput computing method and system in a hybrid computing environment
CN111061788B (en) Multi-source heterogeneous data conversion integration system based on cloud architecture and implementation method thereof
CN113176875B (en) Resource sharing service platform architecture based on micro-service
CN105045658A (en) Method for realizing dynamic dispatching distribution of task by multi-core embedded DSP (Data Structure Processor)
CN111682973B (en) Method and system for arranging edge cloud
CN114841345B (en) Distributed computing platform based on deep learning algorithm and application thereof
CN110838939B (en) Scheduling method based on lightweight container and edge Internet of things management platform
CN112230677B (en) Unmanned aerial vehicle group task planning method and terminal equipment
Zhou et al. Deep reinforcement learning-based methods for resource scheduling in cloud computing: A review and future directions
CN111159095A (en) Heterogeneous integrated embedded intelligent computing implementation method
CN115756833A (en) AI inference task scheduling method and system oriented to multiple heterogeneous environments
Hu et al. Software-defined edge computing (SDEC): Principles, open system architecture and challenges
Chen et al. Task partitioning and offloading in IoT cloud-edge collaborative computing framework: a survey
Hashemi et al. Gwo-sa: Gray wolf optimization algorithm for service activation management in fog computing
CN116402318B (en) Multi-stage computing power resource distribution method and device for power distribution network and network architecture
CN110083454B (en) Hybrid cloud service arrangement method with quantum computer
CN109242138A (en) The same automatic movement system of city cargo based on stack task management

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant