CN111176822A - OODA multi-task intelligent application-oriented embedded public software operation method - Google Patents

OODA multi-task intelligent application-oriented embedded public software operation method Download PDF

Info

Publication number
CN111176822A
CN111176822A (application CN202010001857.0A; granted as CN111176822B)
Authority
CN
China
Prior art keywords
intelligent
service
resource
computing
application
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010001857.0A
Other languages
Chinese (zh)
Other versions
CN111176822B (en)
Inventor
白林亭
文鹏程
程陶然
邹昌昊
高泽
李亚晖
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Aeronautics Computing Technique Research Institute of AVIC
Original Assignee
Xian Aeronautics Computing Technique Research Institute of AVIC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Aeronautics Computing Technique Research Institute of AVIC
Priority to CN202010001857.0A
Publication of CN111176822A
Application granted
Publication of CN111176822B
Legal status: Active
Anticipated expiration

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/48Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806Task transfer initiation or dispatching
    • G06F9/4843Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/508Monitor
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/509Offload
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02DCLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00Energy efficient computing, e.g. low power processors, power management or thermal management

Abstract

The invention belongs to the field of embedded intelligent computing and provides an embedded common software operation method oriented to OODA multi-task intelligent applications. The method designs an embedded common software operation framework for OODA multi-task intelligent applications: heterogeneous intelligent computing resources are flexibly scheduled through resource service management; common software operation frameworks are constructed on those resources for knowledge-driven, intelligent-optimization and deep-learning intelligent applications, improving the flexibility of application operation, scheduling and maintenance; and an application-oriented service management technique for OODA multi-task scenarios provides a uniform platform-level computing service for equipment OODA multi-task intelligent applications.

Description

OODA multi-task intelligent application-oriented embedded public software operation method
Technical Field
The invention belongs to the field of embedded intelligent computing and relates to an embedded common software operation method oriented to OODA (observe-orient-decide-act) multi-task intelligent applications.
Background
In recent years, with the rapid development of artificial intelligence technology, artificial intelligence has come to play an increasingly central role in the intelligent upgrading of equipment. The comprehensive intelligent upgrade of equipment across different fields and types places ever higher requirements on the intelligence level of the equipment, demanding that it evolve from conventional artificial intelligence toward strong artificial intelligence.
In highly dynamic, strongly adversarial environments, the situation changes rapidly, the contest between opposing sides is intense, and the amount of information that the equipment's embedded system must process grows sharply. The traditional human-dominated combat mode can no longer meet the new operational requirements: the embedded system is expected to act autonomously over the full OODA process, from perception and cognition to decision-making and control, which places extremely high demands on the computing platform that hosts it.
On the one hand, when equipment executes a task autonomously over the full OODA process, multiple stages of complex intelligent applications are involved, such as autonomous environment perception, autonomous situation cognition, autonomous behavior decision-making and autonomous behavior control. Because task environments, task objectives and the characteristics of the equipment itself differ, the intelligent applications required in the OODA process also differ, producing extremely complex OODA multi-task intelligent application characteristics. At the same time, each type of intelligent application places different demands on the embedded intelligent computing platform in terms of running environment, resource requirements and real-time constraints, so the application running environment provided by a traditional embedded intelligent computing platform cannot satisfy the running requirements of full-process OODA intelligent combat tasks.
On the other hand, with the continuous evolution of processor forms and computer hardware, platform hardware for intelligent computing is becoming heterogeneous, diversified and numerous. The hardware resources of embedded intelligent computing platforms are therefore increasingly complex and varied, which complicates the management and use of platform hardware resources and greatly reduces their overall computing efficiency.
Disclosure of Invention
The invention aims to improve the flexibility of running, scheduling and maintaining equipment OODA multi-task intelligent applications and to provide a uniform platform-level computing service for them.
To this end, the invention provides an embedded common software operation method oriented to OODA multi-task intelligent applications, designed according to the following concept:
Facing the complexity of the embedded system's OODA multi-task intelligent applications and the heterogeneous diversity of intelligent computing hardware, heterogeneous intelligent computing resources are flexibly scheduled through resource service management; common software operation frameworks are constructed on those resources for knowledge-driven, intelligent-optimization and deep-learning intelligent applications, improving the flexibility of application operation, scheduling and maintenance; and an application-oriented service management technique is developed for OODA multi-task scenarios to provide a uniform platform-level computing service for OODA multi-task intelligent applications. The whole embedded common software operation framework is therefore divided into three layers:
First, resource service management at the bottom layer abstracts the resources owned by each heterogeneous module of the intelligent computing platform into a uniform resource description, establishes the mapping between the service requests of intelligent computing tasks and resources, and completes the loading and unloading of intelligent computing services through dynamic resource allocation on the basis of resource monitoring; it mainly solves the problem of mapping software onto hardware resources. Specifically:
The resource service management comprises resource-service mapping management, dynamic resource allocation, and dynamic service loading and unloading. The resources owned by each heterogeneous module of the intelligent computing platform are abstracted into a uniform resource description, the mapping between the service requests of intelligent computing tasks and resources is established, and the loading and unloading of intelligent computing services is completed through dynamic resource allocation on the basis of resource monitoring. Resource mapping management decouples the binding between intelligent services and the hardware platform: by establishing a resource description table and a service requirement table it enables dynamic allocation of intelligent service requests, balances resource load and improves resource utilization. A resource monitoring mechanism is built on the resource description table; it monitors the computing-resource load of each module of the intelligent computing platform in real time and establishes and maintains a load information table recording the resource state of each module. After a service request has been matched to a resource type and computing-capacity requirement by resource mapping management, it enters a scheduling queue organized by required resource type, and the dynamic resource allocation model assigns suitable hardware resources to the request by querying the load information table. On top of the dynamic resource allocation model, a dynamic service loading/unloading model is designed, consisting of a platform service library, a service management center and a service loading/unloading component: the service management center instructs a node, via the dynamic resource allocation model, to load and run a specified service; the node sends a service load/unload request to the platform service library; and the platform service library activates the required service and executes the corresponding action.
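As an illustration of the resource-service mapping structures described above, the following minimal Python sketch models a resource description table, a load information table and a service requirement table, and matches a service request to a hardware module by resource type and free computing capacity. All class and function names (ResourceDescription, LoadInfo, ServiceRequirement, match_request) and the abstract capacity units are assumptions introduced for illustration, not elements of the patent.

```python
from dataclasses import dataclass

@dataclass
class ResourceDescription:
    """One row of the resource description table: the abstract view of a hardware module."""
    module_id: str
    resource_type: str        # e.g. "multi_core_cpu", "fpga", "ai_asic"
    compute_capacity: float   # abstract capacity units
    exclusive: bool = False   # whether the module is exclusively held

@dataclass
class LoadInfo:
    """One row of the load information table, maintained by resource monitoring."""
    module_id: str
    resource_type: str
    used_capacity: float
    total_capacity: float

    @property
    def free_capacity(self) -> float:
        return self.total_capacity - self.used_capacity

@dataclass
class ServiceRequirement:
    """One row of the service requirement table: what a service request needs."""
    service_id: str
    resource_type: str
    required_capacity: float

def match_request(req, load_table):
    """Resource-service mapping: pick a module of the required type with enough free capacity."""
    candidates = [m for m in load_table
                  if m.resource_type == req.resource_type
                  and m.free_capacity >= req.required_capacity]
    if not candidates:
        return None                                          # request keeps waiting in its queue
    best = max(candidates, key=lambda m: m.free_capacity)    # simple load balancing
    best.used_capacity += req.required_capacity
    return best.module_id
```

A request that cannot be matched stays in the scheduling queue for its resource type, which is also the waiting behaviour described in the embodiment below.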
Second, the software operation frameworks in the middle layer comprise a knowledge-driven embedded common software operation framework, an intelligent-optimization-oriented embedded common software operation framework and a deep-learning-oriented embedded common software operation framework; based on the concept of a unified framework, they mainly solve the problem of runtime support for knowledge-driven intelligent algorithms, intelligent optimization algorithms and deep learning algorithms. Specifically:
The software operation frameworks comprise a knowledge-driven embedded common software operation framework, an intelligent-optimization-oriented embedded common software operation framework and a deep-learning-oriented embedded common software operation framework. Based on the concept of a unified framework, a unified knowledge base and operator model library are designed for diversified intelligent application requirements; software operation mechanisms oriented to knowledge-driven, intelligent-optimization and deep-learning algorithms are formulated separately according to the algorithmic characteristics of each class of intelligent application; and these mechanisms are integrated into the unified software operation framework. This provides runtime support for knowledge-driven intelligent algorithms, intelligent optimization algorithms and deep learning algorithms, and improves the multi-domain generality and multi-task applicability of the platform.
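The idea of a unified operator model library shared by three different operation mechanisms can be illustrated with the following Python sketch; the registry API, the framework class names and the example operators are assumptions for illustration only.

```python
from typing import Callable, Dict, List

class OperatorLibrary:
    """Unified operator model library shared by the three operation frameworks."""
    def __init__(self) -> None:
        self._ops: Dict[str, Callable] = {}

    def register(self, name: str, fn: Callable) -> None:
        self._ops[name] = fn

    def get(self, name: str) -> Callable:
        return self._ops[name]

class RuntimeFramework:
    """Common base: each framework keeps its own operation mechanism but shares the library."""
    def __init__(self, library: OperatorLibrary) -> None:
        self.library = library

    def run(self, operator_names: List[str], data):
        # Default mechanism: execute the selected operators as a simple pipeline.
        for name in operator_names:
            data = self.library.get(name)(data)
        return data

class KnowledgeDrivenFramework(RuntimeFramework): ...          # rule/knowledge-driven mechanism
class IntelligentOptimizationFramework(RuntimeFramework): ...  # iterative optimization mechanism
class DeepLearningFramework(RuntimeFramework): ...             # network inference mechanism

# Operators are registered once and reused by whichever framework a service is mapped to.
lib = OperatorLibrary()
lib.register("normalize", lambda x: [v / max(x) for v in x])
lib.register("threshold", lambda x: [v > 0.5 for v in x])
result = DeepLearningFramework(lib).run(["normalize", "threshold"], [1.0, 3.0, 4.0])
```

In such a design each framework subclass would override run() with its own operation mechanism (rule firing, optimization iterations, network inference) while drawing operators from the same shared library.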
Third, application service management at the top layer provides a set of standardized application services to upper-layer intelligent tasks, decouples upper-layer intelligent tasks, intelligent algorithms and the lower-layer software operation frameworks, and uses parallel scheduling to ensure the correct real-time execution of different intelligent computing services while maximizing computing efficiency; it mainly solves the problem of mapping algorithms onto the frameworks. Specifically:
The application service management comprises service encapsulation oriented to intelligent algorithms and parallel scheduling of intelligent computing services. Service encapsulation gives the external interface definition of each application service and provides a set of standardized application services to upper-layer intelligent tasks, decoupling upper-layer intelligent tasks, intelligent algorithms and the lower-layer software operation frameworks so that calls are transparent. A multi-service scheduling algorithm is designed according to attributes such as the urgency and priority of computing services; computing services are divided into real-time services and timely services, and the parallel scheduling process guarantees the correct real-time execution of different computing services while maximizing the utilization efficiency of computing resources.
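A minimal sketch of the parallel scheduling idea follows, assuming a simple discipline in which real-time services are always dispatched ahead of timely services and each class is ordered by priority; the class names and the discipline itself are illustrative assumptions, not the patent's scheduling algorithm.

```python
import heapq
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass(order=True)
class ComputeService:
    priority: int                                  # lower number = more urgent
    service_id: str = field(compare=False)
    real_time: bool = field(compare=False, default=False)

class ServiceScheduler:
    """Two priority queues: real-time services are always dispatched before timely services."""
    def __init__(self) -> None:
        self._real_time: List[ComputeService] = []
        self._timely: List[ComputeService] = []

    def submit(self, svc: ComputeService) -> None:
        heapq.heappush(self._real_time if svc.real_time else self._timely, svc)

    def next_service(self) -> Optional[ComputeService]:
        if self._real_time:
            return heapq.heappop(self._real_time)
        if self._timely:
            return heapq.heappop(self._timely)
        return None

sched = ServiceScheduler()
sched.submit(ComputeService(priority=2, service_id="threat_assessment"))
sched.submit(ComputeService(priority=1, service_id="target_identification", real_time=True))
assert sched.next_service().service_id == "target_identification"
```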
Based on this, the embedded common software operation method for OODA multi-task intelligent applications mainly comprises the following steps:
determining, through application service management and scheduling, the running sequence of the multiple intelligent algorithms required by the multi-task intelligent application;
adapting a corresponding software operation framework to each intelligent algorithm that runs in sequence, the software operation frameworks comprising a knowledge-driven embedded common software operation framework, an intelligent-optimization-oriented embedded common software operation framework and a deep-learning-oriented embedded common software operation framework, the three frameworks separating the operation mechanism from the libraries and sharing a unified knowledge base and operator model library; and
scheduling and mapping each intelligent algorithm onto an adapted hardware module through resource service management, and completing the loading and unloading of intelligent computing services through dynamic resource allocation on the basis of resource monitoring; a simplified end-to-end sketch of these three steps follows.
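The sketch below ties the three steps into a single flow with stubbed decisions; every dictionary key, module name and mapping table is an assumption introduced for illustration, not the patent's implementation.

```python
def run_ooda_application(service_requests):
    """Illustrative flow of the three-layer method; every name and table here is an assumption."""
    # Step 1: application service management decides the running order of the intelligent algorithms.
    ordered = sorted(service_requests, key=lambda r: (not r["real_time"], r["priority"]))

    results = {}
    for req in ordered:
        # Step 2: adapt the matching software operation framework to the algorithm class.
        framework = {"knowledge_driven": "knowledge-driven framework",
                     "intelligent_optimization": "intelligent-optimization framework",
                     "deep_learning": "deep-learning framework"}[req["algorithm_class"]]

        # Step 3: resource service management maps the algorithm onto a suitable hardware
        # module and loads the corresponding intelligent computing service onto it.
        module = {"knowledge_driven": "multi_core_cpu",
                  "intelligent_optimization": "multi_core_cpu",
                  "deep_learning": "fpga_or_ai_asic"}[req["algorithm_class"]]

        results[req["service_id"]] = (framework, module)
    return results

print(run_ooda_application([
    {"service_id": "target_identification", "algorithm_class": "deep_learning",
     "real_time": True, "priority": 1},
    {"service_id": "route_planning", "algorithm_class": "intelligent_optimization",
     "real_time": False, "priority": 2},
]))
```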
Optionally, the application service management specifically comprises service encapsulation oriented to intelligent algorithms and parallel scheduling of intelligent computing services, wherein:
the service encapsulation oriented to intelligent algorithms gives the external interface definition of each application service and provides a set of standardized application services to upper-layer intelligent tasks;
the parallel scheduling of intelligent computing services determines a scheduling algorithm (i.e. the running sequence of the multiple intelligent algorithms) according to the urgency and priority of the computing services, so as to guarantee the correct real-time execution of different computing services and maximize the utilization efficiency of computing resources; the computing services are divided into real-time services and timely services.
Optionally, the resource monitoring performed by resource service management specifically consists of abstracting the resources owned by each hardware module of the intelligent computing platform into a uniform resource description, monitoring the computing-resource load of each hardware module in real time according to the established resource description table and service requirement table under the configured resource monitoring mechanism, and establishing and maintaining a load information table that records the resource state of each module.
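A minimal sketch of such a resource monitoring loop that refreshes a load information table is given below; the probe function, polling interval and table layout are assumptions for illustration.

```python
import time

def probe_module_load(module_id: str) -> dict:
    """Placeholder for a platform-specific probe of one hardware module's load."""
    # A real system would query the module's driver or management interface here.
    return {"module_id": module_id, "used_capacity": 0.0, "total_capacity": 100.0}

def monitor_resources(module_ids, load_table: dict, interval_s: float = 0.5, cycles: int = 3) -> None:
    """Periodically refresh the load information table for every hardware module."""
    for _ in range(cycles):               # bounded loop for illustration; a daemon task in practice
        for mid in module_ids:
            load_table[mid] = probe_module_load(mid)
        time.sleep(interval_s)

load_table: dict = {}
monitor_resources(["cpu0", "fpga0", "npu0"], load_table, interval_s=0.01, cycles=1)
print(load_table["fpga0"])
```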
Optionally, the dynamic resource allocation performed by resource service management specifically consists of: matching each service request to a resource type and computing-capacity requirement through resource mapping management, placing it in a scheduling queue according to the required resource type, and allocating suitable hardware resources to it by querying the load information table.
Optionally, the loading and unloading of intelligent computing services performed by resource service management is implemented by a dynamic service loading/unloading model. The model comprises a platform service library, a service management center and a service loading/unloading module: the service management center instructs a node, via the dynamic resource allocation model, to load and run a specified service; the node sends a service load/unload request to the platform service library; and the platform service library activates the required service and executes the corresponding action.
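The three roles of the dynamic service loading/unloading model can be sketched as follows; the message flow mirrors the description above, while the class and method names are assumptions.

```python
class PlatformServiceLibrary:
    """Holds deployable service images and activates them on request."""
    def __init__(self, services: dict):
        self._services = services            # service_id -> factory producing a runnable instance

    def activate(self, service_id: str):
        return self._services[service_id]()  # activate the required service and return it

class Node:
    """A compute node that loads/unloads services against the platform service library."""
    def __init__(self, node_id: str, library: PlatformServiceLibrary):
        self.node_id, self.library = node_id, library
        self.running: dict = {}

    def load(self, service_id: str) -> None:
        self.running[service_id] = self.library.activate(service_id)

    def unload(self, service_id: str) -> None:
        self.running.pop(service_id, None)

class ServiceManagementCenter:
    """Instructs a node, chosen by dynamic resource allocation, to load a specified service."""
    def dispatch(self, node: Node, service_id: str) -> None:
        node.load(service_id)

library = PlatformServiceLibrary({"target_identification": lambda: "detector-instance"})
node = Node("npu0", library)
ServiceManagementCenter().dispatch(node, "target_identification")
assert "target_identification" in node.running
```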
Optionally, the hardware modules comprise a multi-core parallel intelligent computing module, a flexibly configurable intelligent computing module and a deep-learning-dedicated intelligent computing module.
Compared with the prior art, the invention has the following advantages:
On the one hand, a unified knowledge base and operator model library are designed for diversified intelligent application requirements, software operation mechanisms oriented to knowledge-driven, intelligent-optimization and deep-learning algorithms are formulated separately for the algorithmic characteristics of each class of intelligent application, and these mechanisms are integrated into a unified software operation framework, giving flexible, general support to different types of intelligent application and greatly improving the multi-domain generality and multi-task applicability of the platform. On the other hand, a system-level optimized scheduling mechanism for embedded intelligent computing is established to cope with the diversity of intelligent applications and of heterogeneous intelligent computing resources: upward, a service encapsulation mechanism and a scheduling management mechanism for intelligent algorithms realize dynamic, flexible management of intelligent applications; downward, a service management mechanism for intelligent computing resources, built on real-time resource monitoring and dynamic scheduling, realizes dynamic, flexible management of those resources. As a result, intelligent application tasks are decoupled from the computing system and intelligent application algorithms are decoupled from platform resources, improving the flexibility and generality of support for diversified intelligent tasks.
Drawings
FIG. 1 shows the embedded common software operation framework oriented to OODA multi-task intelligent applications.
FIG. 2 shows the heterogeneous converged embedded intelligent computing architecture.
Detailed Description
The invention is described in detail below with reference to the figures and examples.
The method determines, through application service management and scheduling, the running sequence of the multiple intelligent algorithms required by the multi-task intelligent application; adapts a corresponding software operation framework to each intelligent algorithm that runs in sequence, the software operation frameworks comprising a knowledge-driven embedded common software operation framework, an intelligent-optimization-oriented embedded common software operation framework and a deep-learning-oriented embedded common software operation framework, whose operation mechanisms are separated from the libraries so that the three frameworks share a unified knowledge base and operator model library; and schedules and maps each intelligent algorithm onto an adapted hardware module through resource service management, completing the loading and unloading of intelligent computing services through dynamic resource allocation on the basis of resource monitoring.
As shown in FIG. 1, the whole embedded common software operation framework is divided into three layers:
First, resource service management
The resource service management comprises resource-service mapping management, dynamic resource allocation, and dynamic service loading and unloading. The resources owned by each heterogeneous module of the intelligent computing platform are abstracted into a uniform resource description, the mapping between the service requests of intelligent computing tasks and resources is established, and the loading and unloading of intelligent computing services is completed through dynamic resource allocation on the basis of resource monitoring. Resource mapping management decouples the binding between intelligent services and the hardware platform: by establishing a resource description table and a service requirement table it enables dynamic allocation of intelligent service requests, balances resource load and improves resource utilization. A resource monitoring mechanism is built on the resource description table; it monitors the computing-resource load of each module of the intelligent computing platform in real time and establishes and maintains a load information table recording the resource state of each module. After a service request has been matched to a resource type and computing-capacity requirement by resource mapping management, it enters a scheduling queue organized by required resource type, and the dynamic resource allocation model assigns suitable hardware resources to the request by querying the load information table. On top of the dynamic resource allocation model, a dynamic service loading/unloading model is designed, consisting of a platform service library, a service management center and a service loading/unloading component: the service management center instructs a node, via the dynamic resource allocation model, to load and run a specified service; the node sends a service load/unload request to the platform service library; and the platform service library activates the required service and executes the corresponding action.
Second, unified software operation framework
The unified software operation framework comprises a knowledge-driven embedded common software operation framework, an intelligent-optimization-oriented embedded common software operation framework and a deep-learning-oriented embedded common software operation framework. Based on the concept of a unified framework, a unified knowledge base and operator model library are designed for diversified intelligent application requirements; software operation mechanisms oriented to knowledge-driven, intelligent-optimization and deep-learning algorithms are formulated separately according to the algorithmic characteristics of each class of intelligent application; and these mechanisms are integrated into the unified software operation framework. This provides runtime support for knowledge-driven intelligent algorithms, intelligent optimization algorithms and deep learning algorithms, and improves the multi-domain generality and multi-task applicability of the platform.
Third, application service management
The application service management comprises service encapsulation oriented to intelligent algorithms and parallel scheduling of intelligent computing services. Service encapsulation gives the external interface definition of each application service and provides a set of standardized application services to upper-layer intelligent tasks, decoupling upper-layer intelligent tasks, intelligent algorithms and the lower-layer software operation frameworks so that calls are transparent. A multi-service scheduling algorithm is designed according to attributes such as the urgency and priority of computing services; computing services are divided into real-time services and timely services, and the parallel scheduling process guarantees the correct real-time execution of different computing services while maximizing the utilization efficiency of computing resources.
FIG. 2 illustrates the heterogeneous converged embedded intelligent computing architecture. The embedded common software operation framework oriented to OODA multi-task intelligent applications shown in FIG. 1 corresponds to the platform service layer in FIG. 2.
As shown in FIG. 2, the heterogeneous converged embedded intelligent computing architecture comprises, from bottom to top, a hardware layer, an operating system layer, a platform service layer, an intelligent algorithm layer and an application layer, wherein:
at a hardware layer, three intelligent computing modules are designed. Wherein: the high-performance multi-core parallel intelligent computing module takes a multi-core CPU as a core processor, mainly aims at a knowledge-driven intelligent algorithm and an intelligent optimization algorithm with more condition judgment, more branch selection and more loop iteration, and simultaneously has a system management function; the flexibly configurable intelligent computing module takes the FPGA as a core processor and mainly aims at a deep learning algorithm with intensive computation and frequent data access; the special customized intelligent computation module for deep learning uses an AI special processor as a core processor and mainly aims at a deep learning algorithm. The intelligent deep learning algorithm is different from the flexibly configurable intelligent computing module in that the special customization module emphasizes the specificity and the customization, the special deep learning algorithm is required to optimize the performance of a certain specific (such as common or high-real-time-requirement) deep learning algorithm, and the flexibly configurable module emphasizes the universality and the configurability, and the flexibly configurable intelligent computing module is required to make the supporting types of the deep learning algorithm the widest.
At the operating system layer, the characteristics of the candidate operating systems and their adaptability to the hardware platform are considered comprehensively, an embedded operating system is selected and configured, and the rich library resources at the bottom of the operating system support the implementation of the various drivers and algorithms.
The intelligent algorithm layer contains all the intelligent algorithms that may be required, and is also responsible for the computational optimization and integrated management of those algorithms.
At the application layer, diversified intelligent applications are realized around the full OODA task chain; users can combine the corresponding algorithms according to actual operational requirements to develop specific applications.
The embedded common software operation method of this embodiment is further described in detail below, taking as an example a typical full-process OODA autonomous task consisting of target identification, threat assessment, target allocation, route planning, autonomous flight and autonomous obstacle avoidance, executed on heterogeneous, diversified hardware resources comprising high-performance multi-core intelligent computing hardware, deep-learning-dedicated intelligent computing hardware and flexibly configurable intelligent computing hardware.
First, for the full-process OODA autonomous task, the intelligent algorithms involved in each stage are decomposed, and the service interface of each intelligent algorithm is defined and described; the description includes the service ID, the service type, the default service node, the default service framework, service monitoring, and the set of IDs of the services it depends on.
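Such a service interface description can be sketched as a small record type; the field names follow the list above, while the concrete IDs and values are invented for illustration.

```python
from dataclasses import dataclass, field

@dataclass
class ServiceDescription:
    """Service interface description of one intelligent algorithm."""
    service_id: str
    service_type: str                  # e.g. "real_time" or "timely"
    default_node: str                  # default service node
    default_framework: str             # "knowledge_driven" / "intelligent_optimization" / "deep_learning"
    monitored: bool = True             # service monitoring enabled
    dependency_ids: set = field(default_factory=set)   # service dependency ID set

target_identification = ServiceDescription(
    service_id="SVC-001",
    service_type="real_time",
    default_node="npu0",
    default_framework="deep_learning",
)
threat_assessment = ServiceDescription(
    service_id="SVC-002",
    service_type="timely",
    default_node="cpu0",
    default_framework="knowledge_driven",
    dependency_ids={"SVC-001"},        # threat assessment consumes target identification results
)
```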
An application service scheduling management queue is then established to schedule and manage the service descriptions of the application. Service request triggers occurring at different times are simulated according to the time-ordered tasks of the OODA process; each service is judged according to its description attributes and mapped to the corresponding service framework through queue scheduling. According to the description attributes of the service, the corresponding operators are selected from the operator library of the operation framework and combined into an executable operator combination framework or operator execution stream.
A resource description table is established to describe the heterogeneous, diversified resources abstractly. At the same time, a service requirement table is established: the intelligent computing tasks that may appear on a future battlefield are analysed comprehensively, the service requests are decomposed, and the resource types and computing capacity required by the different service requests are analysed. When resource service management receives a service request passed down from the upper layer, it matches the resource type and computing-capacity requirement according to the service requirement table.
A resource monitoring mechanism is established on the basis of the resource description table to monitor the computing-resource load of each module of the intelligent computing platform in real time. A load information table is established and maintained, recording for each module information such as the resource type, whether the resource is exclusively held, and the level of the resource's computing capacity, thereby providing a data basis for dynamic resource allocation.
A resource scheduling management queue is established, and resources are dynamically allocated to services by a multi-priority scheduling method according to the resource state description in the resource description table and the service and priority attributes in the application service scheduling management queue. After a service request has been matched to a resource type and computing-capacity requirement by resource mapping management, it enters the scheduling queue corresponding to its required resource type, and the dynamic resource allocation model assigns suitable hardware resources to it by querying the load information table. A service enters the execution state only when all the resource types it requires are satisfied; when some resource condition is not met, the service request waits in the resource queue.
When the occupation of a resource ends, a release signal prompts the load information table to be updated; at the same time, the dynamic resource allocation model checks the resource scheduling queue and allocates the resource to the first waiting service request that fits the resource's computing capacity.
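A minimal sketch of this release-and-reallocate step is given below: the freed resource first updates the load information table and is then offered to the first waiting request it can satisfy. The queue layout and dictionary keys are assumptions for illustration.

```python
from collections import deque

def release_resource(release: dict, load_table: dict, wait_queue: deque, start_service) -> None:
    """On resource release: update the load information table, then offer the freed
    capacity to the first waiting service request that this module can satisfy."""
    row = load_table[release["module_id"]]
    row["used_capacity"] -= release["released_capacity"]      # the release signal updates the table
    free = row["total_capacity"] - row["used_capacity"]

    for i, req in enumerate(wait_queue):                      # scan the resource scheduling queue
        if (req["resource_type"] == row["resource_type"]
                and req["required_capacity"] <= free):
            del wait_queue[i]                                  # first fitting request is dispatched
            row["used_capacity"] += req["required_capacity"]
            start_service(req, release["module_id"])           # the service enters the execution state
            return                                             # remaining requests keep waiting
```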

Claims (6)

1. An embedded common software operation method oriented to OODA multi-task intelligent applications, characterized by comprising the following steps:
determining, through application service management and scheduling, the running sequence of the multiple intelligent algorithms required by the multi-task intelligent application;
adapting a corresponding software operation framework to each intelligent algorithm that runs in sequence, the software operation frameworks comprising a knowledge-driven embedded common software operation framework, an intelligent-optimization-oriented embedded common software operation framework and a deep-learning-oriented embedded common software operation framework, the three frameworks separating the operation mechanism from the libraries and sharing a unified knowledge base and operator model library; and
scheduling and mapping each intelligent algorithm onto an adapted hardware module through resource service management, and completing the loading and unloading of intelligent computing services through dynamic resource allocation on the basis of resource monitoring.
2. The embedded common software operation method oriented to OODA multi-task intelligent applications according to claim 1, characterized in that the application service management specifically comprises service encapsulation oriented to intelligent algorithms and parallel scheduling of intelligent computing services, wherein:
the service encapsulation oriented to intelligent algorithms gives the external interface definition of each application service and provides a set of standardized application services to upper-layer intelligent tasks; and
the parallel scheduling of intelligent computing services determines a scheduling algorithm according to the urgency and priority of the computing services, so as to guarantee the correct real-time execution of different computing services and maximize the utilization efficiency of computing resources, the computing services being divided into real-time services and timely services.
3. The embedded common software operation method oriented to OODA multi-task intelligent applications according to claim 1, characterized in that the resource monitoring performed by the resource service management specifically consists of abstracting the resources owned by each hardware module of the intelligent computing platform into a uniform resource description, monitoring the computing-resource load of each hardware module of the intelligent computing platform in real time according to the established resource description table and service requirement table under the configured resource monitoring mechanism, and establishing and maintaining a load information table that records the resource state of each module.
4. The embedded common software operation method oriented to OODA multi-task intelligent applications according to claim 3, characterized in that the dynamic resource allocation performed by the resource service management specifically consists of: matching each service request to a resource type and computing-capacity requirement through resource mapping management, placing it in a scheduling queue according to the required resource type, and allocating suitable hardware resources to it by querying the load information table.
5. The embedded common software operation method oriented to OODA multi-task intelligent applications according to claim 4, characterized in that the loading and unloading of intelligent computing services performed by the resource service management is implemented by a dynamic service loading/unloading model, the model comprising a platform service library, a service management center and a service loading/unloading module, wherein the service management center instructs a node, via the dynamic resource allocation model, to load and run a specified service, the node sends a service load/unload request to the platform service library, and the platform service library activates the required service and executes the corresponding action.
6. The embedded common software operation method oriented to OODA multi-task intelligent applications according to claim 1, characterized in that the hardware modules comprise a multi-core parallel intelligent computing module, a flexibly configurable intelligent computing module and a deep-learning-dedicated intelligent computing module.
CN202010001857.0A 2020-01-02 2020-01-02 OODA multi-task intelligent application-oriented embedded public software operation method Active CN111176822B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010001857.0A CN111176822B (en) 2020-01-02 2020-01-02 OODA multi-task intelligent application-oriented embedded public software operation method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010001857.0A CN111176822B (en) 2020-01-02 2020-01-02 OODA multi-task intelligent application-oriented embedded public software operation method

Publications (2)

Publication Number Publication Date
CN111176822A true CN111176822A (en) 2020-05-19
CN111176822B CN111176822B (en) 2023-05-09

Family

ID=70646536

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010001857.0A Active CN111176822B (en) 2020-01-02 2020-01-02 OODA multi-task intelligent application-oriented embedded public software operation method

Country Status (1)

Country Link
CN (1) CN111176822B (en)


Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6108711A (en) * 1998-09-11 2000-08-22 Genesys Telecommunications Laboratories, Inc. Operating system having external media layer, workflow layer, internal media layer, and knowledge base for routing media events between transactions
US20080301024A1 (en) * 2007-05-31 2008-12-04 Boss Gregory J Intellegent buyer's agent usage for allocation of service level characteristics
CN104298496A (en) * 2013-07-19 2015-01-21 上海宝信软件股份有限公司 Data-analysis-based software development framework system
CN104331325A (en) * 2014-11-25 2015-02-04 深圳市信义科技有限公司 Resource exploration and analysis-based multi-intelligence scheduling system and resource exploration and analysis-based multi-intelligence scheduling method for video resources
CN110034961A (en) * 2019-04-11 2019-07-19 重庆邮电大学 It take OODA chain as the infiltration rate calculation method of first body
CN110427523A (en) * 2019-07-31 2019-11-08 南京莱斯信息技术股份有限公司 The business environment application system based on big data that can be adapted to

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
GUO WEN-YUE ET AL.: "Semantic web service discovery algorithm and its application on the intelligent automotive manufacturing system" *
HAI YU-LIN ET AL.: "A comprehensive test process model oriented to embedded system design" *

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111767028A (en) * 2020-06-10 2020-10-13 中国人民解放军军事科学院国防科技创新研究院 Cognitive resource management architecture and cognitive resource calling method
CN111767028B (en) * 2020-06-10 2023-09-19 中国人民解放军军事科学院国防科技创新研究院 Cognitive resource management architecture and cognitive resource calling method
CN112748907A (en) * 2020-12-04 2021-05-04 中国航空工业集团公司成都飞机设计研究所 C/S mode general measurement and control software architecture based on hardware resources and design method
CN112748907B (en) * 2020-12-04 2023-01-13 中国航空工业集团公司成都飞机设计研究所 C/S mode general measurement and control software architecture based on hardware resources and design method
CN114490500A (en) * 2021-12-29 2022-05-13 西北工业大学 Comprehensive intelligent flight control computing platform for smart application
CN114490500B (en) * 2021-12-29 2024-03-08 西北工业大学 Comprehensive intelligent flight control computing platform for general intelligent application
CN114781900A (en) * 2022-05-07 2022-07-22 中国航空工业集团公司沈阳飞机设计研究所 Multitask simultaneous working resource scheduling method and system and airplane
CN114781900B (en) * 2022-05-07 2023-02-28 中国航空工业集团公司沈阳飞机设计研究所 Multi-task simultaneous working resource scheduling method and system and airplane

Also Published As

Publication number Publication date
CN111176822B (en) 2023-05-09

Similar Documents

Publication Publication Date Title
CN111176822B (en) OODA multi-task intelligent application-oriented embedded public software operation method
CN103069390B (en) Method and system for re-scheduling workload in a hybrid computing environment
CN107122243B (en) The method of Heterogeneous Cluster Environment and calculating CFD tasks for CFD simulation calculations
Hao et al. Implementing a hybrid simulation model for a Kanban-based material handling system
CN103069389B (en) High-throughput computing method and system in a hybrid computing environment
CN111061788B (en) Multi-source heterogeneous data conversion integration system based on cloud architecture and implementation method thereof
CN110737529A (en) cluster scheduling adaptive configuration method for short-time multiple variable-size data jobs
CN113176875B (en) Resource sharing service platform architecture based on micro-service
Tang et al. A container based edge offloading framework for autonomous driving
CN111159095B (en) Heterogeneous fusion embedded intelligent computing implementation method
CN103092683A (en) Scheduling used for analyzing data and based on elicitation method
Min-Allah et al. Cost efficient resource allocation for real-time tasks in embedded systems
CN110383245A (en) Safe and intelligent networking framework with dynamical feedback
CN112256414A (en) Method and system for connecting multiple computing storage engines
CN109857535A (en) The implementation method and device of task priority control towards Spark JDBC
CN115756833A (en) AI inference task scheduling method and system oriented to multiple heterogeneous environments
WO2014142498A1 (en) Method and system for scheduling computing
CN111353609A (en) Machine learning system
US20120059938A1 (en) Dimension-ordered application placement in a multiprocessor computer
CN114564298A (en) Serverless service scheduling system based on combination optimization in mixed container cloud environment
CN113515361A (en) Lightweight heterogeneous computing cluster system facing service
CN112860396A (en) GPU (graphics processing Unit) scheduling method and system based on distributed deep learning
CN113377503A (en) Task scheduling method, device and system for collaborative AI (artificial intelligence)
CN115543577B (en) Covariate-based Kubernetes resource scheduling optimization method, storage medium and device
Mazumdar et al. Adaptive resource allocation for load balancing in cloud

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant