CN116257364A - Method and device for occupying resources among systems, storage medium and electronic device - Google Patents


Info

Publication number: CN116257364A
Authority: CN (China)
Prior art keywords: operating system, resource, memory, processing resource, processor
Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN202310536663.4A
Other languages: Chinese (zh)
Other versions: CN116257364B (en)
Inventors: 陈瑾, 刘宝阳, 马文凯
Current Assignee: Suzhou Inspur Intelligent Technology Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Suzhou Inspur Intelligent Technology Co Ltd
Priority date / filing date / publication date: not listed (the priority date is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed)
Events: application filed by Suzhou Inspur Intelligent Technology Co Ltd; priority to CN202310536663.4A; publication of CN116257364A; application granted; publication of CN116257364B
Current legal status: Active; anticipated expiration status noted


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/4401: Bootstrapping
    • G06F 9/46: Multiprogramming arrangements
    • G06F 9/50: Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005: Allocation of resources to service a request
    • G06F 9/5011: The resources being hardware resources other than CPUs, servers and terminals
    • G06F 9/5016: The resource being the memory
    • G06F 9/5022: Mechanisms to release resources
    • G06F 9/5027: The resource being a machine, e.g. CPUs, servers, terminals
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D: CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D 10/00: Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Security & Cryptography (AREA)
  • Hardware Redundancy (AREA)

Abstract

The embodiments of the present application provide a method and device for occupying resources among systems, a storage medium, and an electronic device. The method is applied to a chip on which a first operating system and a second operating system run in the same processor, and includes the following steps: determining a target processing resource through the second operating system, wherein the processing resources of the processor include a first processing resource allocated for use by the first operating system and a second processing resource allocated for use by the second operating system; releasing the target processing resource from the first processing resource through the first operating system; and adding the target processing resource to the second processing resource through the second operating system. The method and device solve the technical problem of poor adaptability of inter-system resource allocation and achieve the technical effect of improving that adaptability.

Description

Method and device for occupying resources among systems, storage medium and electronic device
Technical Field
The embodiment of the application relates to the field of computers, in particular to a method and a device for occupying resources among systems, a storage medium and an electronic device.
Background
At present, owing to the growing number of cores in the CPU (Central Processing Unit) of an embedded system, multi-system co-operation architecture designs have appeared. In the prior art, however, each system can only operate with the fixed resources allocated to it in advance, so the operation of the systems lacks flexibility and adaptability.
No effective solution has yet been proposed for problems in the related art such as the poor adaptability of resource allocation among systems.
Disclosure of Invention
The embodiment of the application provides a method and a device for occupying resources among systems, a storage medium and an electronic device, so as to at least solve the problem of poor adaptability of resource allocation among the systems in the related technology.
According to an embodiment of the present application, there is provided a method for occupying resources between systems, applied to a chip on which a first operating system and a second operating system run in the same processor, the method including:
determining, by the second operating system, a target processing resource, wherein the processing resources of the processor include a first processing resource and a second processing resource, the first processing resource being allocated for use by the first operating system, the second processing resource being allocated for use by the second operating system;
releasing the target processing resource from the first processing resource by the first operating system;
and adding the target processing resource to the second processing resource through the second operating system.
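As a concrete, non-normative illustration, the three claimed steps can be sketched as a minimal Python simulation; all class, function, and resource names here are illustrative assumptions, not part of the patent:

```python
# Hypothetical sketch of the claimed three-step preemption flow: the second
# OS determines a target resource, the first OS releases it, and the second
# OS adds it to its own pool.

class OperatingSystem:
    def __init__(self, name, resources):
        self.name = name
        self.resources = set(resources)  # processing resources this OS owns

    def determine_target(self, needed):
        """Second OS: decide which resources it needs to preempt."""
        return set(needed)

    def release(self, target):
        """First OS: release the target resources from its own pool."""
        released = self.resources & target
        self.resources -= released
        return released

    def add(self, released):
        """Second OS: add the released resources to its own pool."""
        self.resources |= released

def preempt(first_os, second_os, needed):
    target = second_os.determine_target(needed)   # step S202
    released = first_os.release(target)           # step S204
    second_os.add(released)                       # step S206
    return released

first = OperatingSystem("first", {"core1", "mem_block_a", "pwm"})
second = OperatingSystem("second", {"core2", "mem_block_b"})
preempt(first, second, {"core1", "pwm"})
print(sorted(second.resources))  # core1 and pwm now belong to the second OS
```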
In an exemplary embodiment, the determining, by the second operating system, the target processing resource includes:
monitoring, by the second operating system, whether the second processing resource satisfies the operation of the service on the second operating system;
and, in the case that it is determined that the second processing resource does not satisfy the operation of the service on the second operating system, estimating, by the second operating system, the resource information of the target processing resource.
In an exemplary embodiment, the monitoring, by the second operating system, whether the second processing resource satisfies the operation of the service on the second operating system includes at least one of:
monitoring, by the second operating system, whether remaining storage resources in the second processing resources are greater than a storage threshold, where the second processing resources do not satisfy operation of a service on the second operating system if the remaining storage resources are less than or equal to the storage threshold;
monitoring, by the second operating system, whether a service on the second operating system uses a reference peripheral resource other than a peripheral resource in the second processing resource, wherein the second processing resource does not satisfy operation of the service on the second operating system if the service on the second operating system uses the reference peripheral resource;
monitoring, by the second operating system, whether the service on the second operating system uses a reference processor interrupt resource other than the processor interrupt resource in the second processing resource, wherein the second processing resource does not satisfy the operation of the service on the second operating system if the service on the second operating system uses the reference processor interrupt resource.
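The three monitoring conditions above can be sketched as follows; the threshold value, function name, and resource names are illustrative assumptions:

```python
# Illustrative checks for whether the second OS's current resources
# satisfy the services running on it (names/threshold are assumptions).

STORAGE_THRESHOLD = 1024  # free bytes below which preemption is triggered

def resources_sufficient(remaining_storage, used_peripherals, owned_peripherals,
                         used_interrupts, owned_interrupts):
    # 1. remaining storage must exceed the storage threshold
    if remaining_storage <= STORAGE_THRESHOLD:
        return False
    # 2. every peripheral a service uses must already be owned
    if not set(used_peripherals) <= set(owned_peripherals):
        return False
    # 3. every processor interrupt a service uses must already be owned
    if not set(used_interrupts) <= set(owned_interrupts):
        return False
    return True

print(resources_sufficient(4096, ["uart0"], ["uart0", "i2c1"], [10], [9, 10]))  # True
print(resources_sufficient(512, [], [], [], []))  # False: storage below threshold
```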
In an exemplary embodiment, the estimating, by the second operating system, resource information of the target processing resource includes:
determining, by the second operating system, a resource type of the target processing resource, wherein the resource type includes at least one of: storage resources, peripheral resources and processor interrupt resources;
and estimating the resource quantity corresponding to each resource type through the second operating system.
In an exemplary embodiment, the estimating, by the second operating system, the amount of resources corresponding to each resource type includes:
estimating, by the second operating system, a target storage amount to be occupied in the target processing resource, in a case where the resource type includes the storage resource;
estimating, by the second operating system, a peripheral identifier and/or a number of peripherals of a reference peripheral resource to be occupied, in the case that the resource type includes the peripheral resource;
and, in the case that the resource type includes the processor interrupt resource, estimating, by the second operating system, the number of interrupts of the reference processor interrupt resource to be occupied.
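A possible shape for the estimated resource information, with one record per resource type, might look like the following; all field names are assumptions made for illustration:

```python
# Hypothetical "resource information" built by the second OS: a resource
# type plus a type-specific amount (field names are assumed).

def estimate_resource_info(shortfalls):
    """Build one resource-information record per resource type in shortfall."""
    info = []
    for kind, detail in shortfalls.items():
        if kind == "storage":
            # storage resource: estimate the target storage amount to occupy
            info.append({"type": "storage", "amount_bytes": detail})
        elif kind == "peripheral":
            # peripheral resource: peripheral identifiers and/or a count
            info.append({"type": "peripheral", "ids": detail,
                         "count": len(detail)})
        elif kind == "interrupt":
            # processor interrupt resource: number of interrupts to occupy
            info.append({"type": "interrupt", "count": detail})
    return info

info = estimate_resource_info({"storage": 2048,
                               "peripheral": ["pwm0", "peci"],
                               "interrupt": 2})
print(info)
```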
In an exemplary embodiment, said releasing, by said first operating system, said target processing resource from said first processing resource comprises:
sending a first interrupt request to the first operating system through the second operating system, wherein the first interrupt request is used for indicating to preempt the target processing resource;
releasing the target processing resource from the first processing resource by the first operating system;
and sending a second interrupt request to the second operating system through the first operating system, wherein the second interrupt request is used for indicating that the target processing resource is released.
In an exemplary embodiment, the sending, by the second operating system, a first interrupt request to the first operating system includes:
storing the resource information of the target processing resource into a shared memory on the chip through the second operating system;
and sending the first interrupt request to the first operating system through the second operating system, wherein the first interrupt request is used for indicating to preempt the target processing resource indicated by the resource information stored in the shared memory.
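The shared-memory handshake described above (write the resource information, raise the first interrupt request, and have the first operating system reply with the second interrupt request once the release completes) can be sketched as follows; the interrupt numbers reuse the inter-core interrupt numbers 9 and 10 from the BMC example in this description, and the data layout is an assumption:

```python
# Sketch of the shared-memory handshake between the two operating systems.
# Interrupt numbers and the resource-info structure are illustrative.

FIRST_IRQ = 9    # "preempt the target processing resource"
SECOND_IRQ = 10  # "the target processing resource has been released"

shared_memory = {}
interrupt_log = []

def second_os_request(resource_info):
    # store resource information in on-chip shared memory BEFORE signalling
    shared_memory["resource_info"] = resource_info
    interrupt_log.append(FIRST_IRQ)       # send the first interrupt request

def first_os_handle():
    # read the resource information in response to the first interrupt
    info = shared_memory["resource_info"]
    # ... release the resources described by `info` from the first pool ...
    interrupt_log.append(SECOND_IRQ)      # signal that the release is done
    return info

second_os_request({"type": "storage", "amount_bytes": 2048})
released_info = first_os_handle()
print(interrupt_log)  # [9, 10]: request, then completion
```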
In an exemplary embodiment, said releasing, by said first operating system, said target processing resource from said first processing resource comprises:
reading the resource information from the shared memory by the first operating system in response to the first interrupt request;
releasing, by the first operating system, the target processing resource satisfying the resource information from the first processing resource.
In an exemplary embodiment, said releasing, by said first operating system, said target processing resource from said first processing resource comprises:
determining, by the first operating system, whether the target processing resource is currently being used;
releasing the target processing resource under the condition that the target processing resource is not used currently;
suspending a reference service that is currently using the target processing resource in a case where the target processing resource is currently used; releasing the target processing resource from the first processing resource.
In an exemplary embodiment, after said releasing said target processing resource from said first processing resource, said method further comprises:
detecting whether processing resources except the target processing resource in the first processing resource meet the operation requirement of the reference service or not;
and, when the operation requirement of the reference service is met, resuming the operation of the reference service using the processing resources other than the target processing resource in the first processing resources.
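The release path on the first operating system (check whether the target resource is in use, suspend the reference service if so, release the resource, and resume the service only if the remaining resources still meet its operation requirement) can be sketched as follows; the data structures and names are illustrative assumptions:

```python
# Sketch of release-with-suspend on the first OS (names are assumed).

def release_with_suspend(pool, target, services):
    """pool: resource -> service name currently using it (or None).
    services: service name -> set of resources it requires."""
    user = pool.get(target)
    suspended = None
    if user is not None:            # target resource is currently in use
        suspended = user            # suspend the reference service
    del pool[target]                # release the target resource

    resumed = False
    if suspended is not None:
        remaining = set(pool)       # resources left in the first pool
        # resume only if the remaining resources meet the service's needs
        if services[suspended] - {target} <= remaining:
            resumed = True
    return suspended, resumed

pool = {"core1": "fan_ctrl", "pwm0": None}
services = {"fan_ctrl": {"core1", "pwm0"}}
print(release_with_suspend(pool, "core1", services))  # ('fan_ctrl', True)
```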
In an exemplary embodiment, the adding, by the second operating system, the target processing resource to the second processing resource includes:
initializing the target processing resource by the second operating system;
and adding the initialized target processing resource into the second processing resource through the second operating system.
In an exemplary embodiment, the method further comprises:
the first operating system is guided to start;
and guiding the second operating system to start.
In an exemplary embodiment, booting the first operating system includes: upon power-on of the chip, waking up, by the processor, a first processor core allocated to the first operating system; executing, by the first processor core, a secondary program loader, wherein a boot program of the first operating system includes the secondary program loader; and loading the first operating system through the secondary program loader.
Booting the second operating system includes: waking up, by the secondary program loader, a second processor core allocated to the second operating system; and executing, by the second processor core, a bootstrap program of the second operating system to boot the second operating system.
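The described boot ordering (power-on wakes the first core, the secondary program loader loads the first operating system and wakes the second core, and the second core then boots the second operating system) can be sketched as a simple event trace; the function names are illustrative:

```python
# Illustrative event trace of the described dual-OS boot flow.

boot_log = []

def power_on():
    boot_log.append("wake core0")        # processor wakes the first core
    secondary_program_loader()

def secondary_program_loader():
    boot_log.append("load first OS")     # SPL loads the first operating system
    boot_log.append("wake core1")        # SPL wakes the second core
    second_core_boot()

def second_core_boot():
    boot_log.append("boot second OS")    # second core runs its bootstrap program

power_on()
print(boot_log)
```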
According to another embodiment of the present application, there is provided an inter-system resource occupation device applied to a chip, where a first operating system and a second operating system run in the same processor on the chip, including:
a determining module configured to determine, by the second operating system, a target processing resource, where the processing resources of the processor include a first processing resource and a second processing resource, the first processing resource being allocated for use by the first operating system, the second processing resource being allocated for use by the second operating system;
a releasing module configured to release, by the first operating system, the target processing resource from the first processing resource;
and an adding module configured to add, by the second operating system, the target processing resource to the second processing resource.
According to yet another embodiment of the present application, there is also provided a chip, wherein the chip includes at least one of programmable logic circuits and executable instructions, the chip being run in an electronic device for implementing the steps in any of the method embodiments described above.
According to still another embodiment of the present application, there is further provided a BMC chip (BMC, Baseboard Management Controller: a controller integrated on a server motherboard that runs a small operating system independent of the server's host system and performs remote management of the server), including: a storage unit and a processing unit connected to the storage unit, wherein the storage unit is used for storing a program and the processing unit is used for running the program to execute the steps in any of the method embodiments above.
According to still another embodiment of the present application, there is also provided a motherboard, including: at least one processor; at least one memory for storing at least one program; the at least one program, when executed by the at least one processor, causes the at least one processor to perform the steps of any of the method embodiments described above.
According to yet another embodiment of the present application, there is also provided a server, including a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory complete communication with each other through the communication bus; a memory for storing a computer program; and the processor is used for realizing the steps in any method embodiment when executing the program stored in the memory.
According to a further embodiment of the present application, there is also provided a computer readable storage medium having stored therein a computer program, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
According to a further embodiment of the present application, there is also provided an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
According to the method and device of the present application, a first operating system and a second operating system run in the same processor on the chip. The processing resources of the processor include a first processing resource, allocated for use by the first operating system, and a second processing resource, allocated for use by the second operating system. One operating system determines the target processing resource it needs to preempt, the other operating system releases the target processing resource from its own processing resources, and the preempting operating system then adds the target processing resource to the processing resources it occupies, so that the operating systems can coordinate the scheduling of processing resources according to their own processing requirements. That is, by releasing the target processing resource required by the second operating system from the first processing resource to the second processing resource, the resources allocated to the first and second operating systems are dynamically adjusted according to the application requirements of the second operating system, so that the resources in the processor can be adjusted reasonably and dynamically. This solves the technical problem of poor adaptability of inter-system resource allocation and achieves the technical effect of improving that adaptability.
Drawings
Fig. 1 is a hardware block diagram of a mobile terminal to which an inter-system resource occupation method according to an embodiment of the present application is applied;
FIG. 2 is a flow chart of inter-system resource occupancy in accordance with an embodiment of the present application;
FIG. 3 is a block diagram of an alternative BMC chip according to an embodiment of the present application;
FIG. 4 is a flow chart of a determination process of dynamic configuration of resources according to an embodiment of the present application;
FIG. 5 is a schematic diagram of an interaction process for dynamic configuration of inter-system resources according to an embodiment of the present application;
FIG. 6 is a flow diagram of dynamic occupancy of resources between systems according to an embodiment;
fig. 7 is a block diagram of a configuration of an inter-system resource occupation device according to an embodiment of the present application.
Detailed Description
Embodiments of the present application will be described in detail below with reference to the accompanying drawings in conjunction with the embodiments.
It should be noted that the terms "first," "second," and the like in the description and claims of the present application and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
The method embodiments provided in the embodiments of the present application may be performed in a mobile terminal, a computer terminal or similar computing device. Taking the mobile terminal as an example, fig. 1 is a block diagram of a hardware structure of the mobile terminal according to an inter-system resource occupation method in an embodiment of the present application. As shown in fig. 1, a mobile terminal may include one or more (only one is shown in fig. 1) processors 102 (the processor 102 may include, but is not limited to, a microprocessor MCU or a processing device such as a programmable logic device FPGA) and a memory 104 for storing data, wherein the mobile terminal may also include a transmission device 106 for communication functions and an input-output device 108. It will be appreciated by those skilled in the art that the structure shown in fig. 1 is merely illustrative and not limiting of the structure of the mobile terminal described above. For example, the mobile terminal may also include more or fewer components than shown in fig. 1, or have a different configuration than shown in fig. 1.
The memory 104 may be used to store computer programs, such as software programs of application software and modules, such as computer programs corresponding to the method of occupying resources between systems in the embodiments of the present application, and the processor 102 executes the computer programs stored in the memory 104, thereby performing various functional applications and data processing, that is, implementing the method described above. Memory 104 may include high-speed random access memory, and may also include non-volatile memory, such as one or more magnetic storage devices, flash memory, or other non-volatile solid-state memory. In some examples, the memory 104 may further include memory remotely located relative to the processor 102, which may be connected to the mobile terminal via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The transmission device 106 is used to receive or transmit data via a network. Specific examples of the network described above may include a wireless network provided by a communication provider of the mobile terminal. In one example, the transmission device 106 includes a network adapter (Network Interface Controller, simply referred to as NIC) that can connect to other network devices through a base station to communicate with the internet. In one example, the transmission device 106 may be a Radio Frequency (RF) module, which is configured to communicate with the internet wirelessly.
In this embodiment, a method for occupying resources between systems is provided, which is applied to a chip, where a first operating system and a second operating system operate in the same processor on the chip, and fig. 2 is a flowchart of occupying resources between systems according to an embodiment of the present application, as shown in fig. 2, where the flowchart includes the following steps:
step S202, determining target processing resources through the second operating system, wherein the processing resources of the processor comprise first processing resources and second processing resources, the first processing resources are allocated to the first operating system for use, and the second processing resources are allocated to the second operating system for use;
step S204, releasing the target processing resource from the first processing resource through the first operating system;
step S206, adding, by the second operating system, the target processing resource to the second processing resource.
Through the above steps, a first operating system and a second operating system run in the same processor on the chip. The processing resources of the processor include a first processing resource, allocated for use by the first operating system, and a second processing resource, allocated for use by the second operating system. One operating system determines the target processing resource that needs to be preempted, the other operating system releases the target processing resource from its own processing resources, and the preempting operating system then adds the target processing resource to the processing resources it occupies, so that the operating systems can coordinate the scheduling of processing resources according to their own processing requirements. That is, by releasing the target processing resource required by the second operating system from the first processing resource to the second processing resource, the resources allocated to the first and second operating systems are dynamically adjusted according to the application requirements of the second operating system, so that the resources in the processor can be adjusted reasonably and dynamically. This solves the technical problem of poor adaptability of inter-system resource allocation and achieves the technical effect of improving that adaptability.
Optionally, in this embodiment, there is also provided a method for operating an operating system, including: running a first operating system and a second operating system in the same processor of the chip; and switching resources used by the first operating system and the second operating system. The resources of the above-described handover may include, but are not limited to, processing resources and/or operational traffic resources, and the like. The method for occupying resources between systems in this embodiment may be, but is not limited to, switching the resources used by the first operating system and the second operating system.
Alternatively, in this embodiment, the method for occupying resources between systems described above may be applied to, but is not limited to, a chip, for example: chips of the x86 architecture (an instruction set architecture based on the Intel 8086 and its backward-compatible successors), the ARM architecture (Advanced RISC Machine, formerly Acorn RISC Machine, a reduced instruction set (RISC) processor architecture), the RISC-V architecture (an open-source instruction set architecture based on RISC principles), the MIPS architecture (Microprocessor without Interlocked Pipelined Stages, a RISC processor architecture), and so on. The method may also, but not limited to, be applied to an embedded system, where the embedded system may be an embedded heterogeneous multi-system; a heterogeneous multi-system refers to running multiple different operating systems (e.g., a first operating system, a second operating system, etc.) in a multi-core processor of the embedded system, with the operating systems running simultaneously in the same embedded system.
Alternatively, in this embodiment, the first operating system and the second operating system may, but are not limited to, be executed in the same processor on the chip. The execution of the service on the operating system on the processor may be performed in parallel by the processing resources on multiple cores of the processor, where the processor may be a multi-core processor, for example, an 8-core processor, or a processor including other cores, and in this embodiment, the number of cores included in the multi-core processor is not limited.
Alternatively, in the present embodiment, the above-described chip may be, but is not limited to, any chip that allows a plurality of operating systems to run in the same processor, such as a BMC chip. One example of a BMC chip may be as shown in FIG. 3; the hardware of the BMC chip may include, but is not limited to, an SoC (System on Chip: an integrated circuit with a dedicated target that contains a complete system and its embedded software on a single chip) sub-module.
The core and each controller are interconnected through a second bus, so that interaction between the core and each controller is realized. Meanwhile, ARM cores are connected to a first bus (for example, the ARM cores can be connected through an AXI (Advanced eXtensible Interface) Bridge), and communication between the cores is realized through the first bus. In addition, interconnection and intercommunication (such as Bridge) between the first bus and the second bus are realized in the SOC sub-module, so that a physical path is provided for the SOC sub-module to access the peripheral on the second bus.
The DDR4 controller can be connected with other components or devices through a DDR4 PHY (Physical Layer) interface, the MAC controller through an RGMII (Reduced Gigabit Media Independent Interface), the SD card/eMMC controller through an SD interface, and the RC controller through a PCIe PHY interface.
The BMC out-of-band sub-module mainly comprises controllers corresponding to chip peripherals such as PWM (Pulse Width Modulation, an effective technique for controlling analog circuits by utilizing the digital output of a microprocessor), GPIO (General-Purpose Input/Output), fan speed regulation, mailbox, and the like. Through these controllers, PECI (Platform Environment Control Interface) communication (for example, using GPIO to simulate PECI), fan regulation, and the like can be realized. As can be seen from FIG. 3, the BMC out-of-band sub-module may, but is not limited to, interact with the SOC sub-module via an APB (Advanced Peripheral Bus) bus.
The BMC chip realizes interconnection among the on-chip ARM cores, the storage unit, and the controller hardware resources through the first bus and the second bus. The dynamic balanced scheduling of processor resources mainly relates to the ARM core resource scheduling of the BMC chip, and inter-core communication refers to communication between the ARM cores. Taking a Linux (an open-source operating system kernel, a Unix-like operating system written in the C language and conforming to the POSIX standard) system preempting an RTOS (Real Time Operating System) kernel as an example: the Linux system, running on one of cores 2 to N, first sends an inter-core interrupt (interrupt number 9) to core 1 through the on-chip first bus. If the RTOS system is in an idle state at this time and allows preemption, core 1 replies with an inter-core interrupt (interrupt number 10) through the first bus and releases the peripheral controller resources (e.g., PWM/PECI) currently mapped to core 1. Upon receiving inter-core interrupt 10, the Linux system initiates the preemption flow, adds core 1 into Linux SMP (Symmetric Multi-Processing) scheduling, and simultaneously obtains control of the PWM/PECI peripherals, which it can then control through the second bus.
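The request/acknowledge handshake described above can be sketched as follows. This is a minimal simulation of the protocol logic only: the class and function names are illustrative, and on real hardware the inter-core interrupts would be raised through the interrupt controller rather than by function calls.

```python
# Interrupt numbers from the handshake described above.
IPI_PREEMPT_REQUEST = 9    # Linux core -> core 1 (running the RTOS)
IPI_PREEMPT_ACK = 10       # core 1 -> Linux core

class RtosCore:
    """Illustrative model of the RTOS side of the preemption handshake."""
    def __init__(self, idle):
        self.idle = idle                   # no real-time work pending?
        self.peripherals_released = False  # PWM/PECI controllers handed back

    def handle_ipi(self, ipi):
        """Ack the request only when idle, releasing the mapped peripherals."""
        if ipi == IPI_PREEMPT_REQUEST and self.idle:
            self.peripherals_released = True
            return IPI_PREEMPT_ACK
        return None  # busy: preemption refused, Linux must retry later

def linux_preempt(core):
    """Linux side: on ack, add the core to SMP scheduling and take over the
    PWM/PECI peripherals; otherwise leave the core with the RTOS."""
    if core.handle_ipi(IPI_PREEMPT_REQUEST) == IPI_PREEMPT_ACK:
        return {"core_in_smp": True, "owns_pwm_peci": True}
    return {"core_in_smp": False, "owns_pwm_peci": False}
```

The key design point mirrored here is that the RTOS, not Linux, decides whether preemption proceeds: the busy path returns no ack, so Linux never takes a core that still has real-time work mapped to it.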
In the solution provided in step S202, the first operating system and the second operating system may be, but are not limited to, two different operating systems applied in the same processor on a chip. For example, the first operating system may be, but is not limited to being, a non-real-time operating system and the second operating system may be, but is not limited to being, a real-time operating system; alternatively, the first operating system may be a real-time operating system and the second operating system a non-real-time operating system. Services with different requirements can be processed according to the different characteristics (such as the real-time performance of the operation response) of the two operating systems. For example, the first operating system may be, but is not limited to, a real-time operating system, whose operation response is highly real-time but whose processing capability may be weak, and which may process a small number of services with high real-time requirements; the second operating system may be, but is not limited to, a non-real-time operating system, whose operation response is less real-time but whose processing capability may be strong, and which may process a large number of services with low real-time requirements.
Alternatively, in the present embodiment, the real-time operating system (RTOS) may include, but is not limited to, FreeRTOS and RTLinux (Real-Time Linux, a real-time operating system based on Linux), and the non-real-time operating system may include, but is not limited to, Contiki (a small, open-source, highly portable multi-tasking operating system), HeliOS, Linux, and the like.
Alternatively, in this embodiment, the processor may include a plurality of processing resources, and the processing resources of the processor may be, but are not limited to being, allocated to the first operating system and the second operating system to obtain the first processing resource and the second processing resource. For example, take a processor whose processing resources include processing resource A, processing resource B, processing resource C, processing resource D, and processing resource E. The processing resources may be allocated equally: processing resource A and processing resource B are allocated to the first operating system, and processing resource C and processing resource D are allocated to the second operating system, so that the first processing resource is processing resources A and B and the second processing resource is processing resources C and D. Alternatively, the processing resources may be allocated adaptively: processing resource A is allocated to the first operating system, which has a smaller traffic volume, and processing resources B, C, and D are allocated to the second operating system, which has a larger traffic volume, so that the first processing resource is processing resource A and the second processing resource is processing resources B, C, and D.
Alternatively, in this embodiment, the target processing resource may be, but is not limited to, a processing resource that the second operating system needs to occupy in addition to the second processing resource, and the target processing resource needed by the second operating system may be, but is not limited to being, determined by detecting how the second operating system uses the second processing resource. For example, take the case where N processing resources are allocated to the second operating system and the second operating system is executing a task that needs to occupy M processing resources: if M is greater than N, it may be determined that the second operating system needs the target processing resource, and if the second processing resources of the second operating system are all idle, the target processing resource may be, but is not limited to, M − N processing resources.
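As a minimal illustration of this shortfall calculation (the function name and the `busy` parameter are assumptions added for the sketch, not terms from this embodiment):

```python
def target_resource_count(allocated, required, busy=0):
    """Number of extra processing resources (the 'target processing
    resource') the second operating system must borrow: the shortfall
    between what the running task requires and what remains free of the
    second operating system's own allocation of `allocated` resources."""
    free = allocated - busy
    return max(required - free, 0)
```

With all N = 4 allocated resources idle and a task needing M = 6, the target is M − N = 2; when some of the allocation is already busy, the shortfall grows accordingly.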
In an alternative embodiment, services and processing resources may be allocated to the respective operating systems in the following manner, but not limited thereto:
distributing a group of services to be allocated to corresponding operating systems in the embedded system according to a resource dynamic allocation rule, wherein the resource dynamic allocation rule comprises dynamically allocating resources according to at least one of the following: service response speed, service resource occupancy rate, service coupling degree, and service importance, and wherein the embedded system comprises a first operating system and a second operating system, the first operating system and the second operating system run on a processor, and the response speed of the first operating system is higher than that of the second operating system;
Determining a resource allocation result corresponding to the set of services to be allocated, wherein the resource allocation result is used for indicating processing resources corresponding to each service to be allocated in the set of services to be allocated in processing resources of the processor, and the processing resources of the processor comprise a processor core;
and distributing the processing resources of the processor to the first operating system and the second operating system according to the operating system corresponding to each service to be distributed and the resource distribution result.
During the operation of the processor, a group of services to be allocated, i.e., the services to be allocated to the first operating system and the second operating system, may be acquired. Since different services to be allocated may differ in dimensions such as response speed, occupancy rate of service resources, degree of coupling with other services, and importance, a resource dynamic allocation rule may be preconfigured. The resource dynamic allocation rule may include a rule for allocating each service to a corresponding operating system, so that the processing resources of that operating system execute the services allocated to it. Optionally, the resource dynamic allocation rule may include dynamically allocating resources according to at least one of: service response speed, service resource occupancy rate, service coupling degree, and service importance. Corresponding priorities may be set for the different allocation rules, for example, from high to low: service importance, service coupling degree, service response speed, and service resource occupancy rate. According to the resource dynamic allocation rule, a group of services to be allocated (or tasks to be allocated; different services to be allocated may correspond to different processes) may be allocated to corresponding operating systems in the embedded system to obtain a service allocation result.
Alternatively, based on the constraint on response time, the first operating system may be an operating system with an explicit, fixed time constraint, within which all processing (task scheduling) needs to be done, otherwise the system goes into error; it may be a real-time operating system (Real Time Operating System, RTOS for short), e.g., FreeRTOS, RTLinux, etc., or a real-time operating system in another embedded system. The second operating system does not have this feature: it generally adopts a fair task scheduling algorithm, and when the number of threads/processes increases, CPU time must be shared and task scheduling has uncertainty, so it may be called a non-real-time operating system, e.g., Contiki, HeliOS, Linux (in full, GNU/Linux, a family of freely distributable Unix-like operating systems), or a non-real-time operating system in another embedded system, where the Linux system is a multi-user, multi-task, multi-CPU operating system based on POSIX (Portable Operating System Interface).
Accordingly, the service allocated to the first operating system is typically a real-time service, i.e., a service that needs to be scheduled within a specified time, requiring the processor to process it quickly enough so that the processing result can control the production process or respond to the processing system within the specified time. As a typical scenario, the control of a robot in industrial control is a real-time service: the system needs to take measures in time once misoperation of the robot is detected, otherwise serious consequences may occur. The service allocated to the second operating system is typically a non-real-time service, i.e., a service that is insensitive to scheduling time and has a certain tolerance for scheduling delay, for example, reading the data of a temperature sensor in the server.
It should be noted that, when external events or data are generated, the real-time operating system can accept and process them fast enough, the processing result can control the production process or respond to the processing system within a specified time, and the operating system schedules all available resources to complete real-time services and controls all real-time services to run in a coordinated and consistent manner.
After each service to be allocated has been allocated to a corresponding operating system, corresponding processing resources can be allocated to each service according to the service allocation result, yielding a resource allocation result corresponding to the group of services to be allocated. When allocating processing resources for the services, the processing resources of the first operating system can be allocated to the services allocated to the first operating system, and the processing resources of the second operating system can be allocated to the services allocated to the second operating system; meanwhile, in consideration of load balancing, when unallocated processing resources exist, they may also be allocated to part of the services.
Although the processing resources of the processor could be dynamically allocated in units of time slices, frequently switching the operating system to which a processing resource belongs, together with service processing times that are not necessarily integer multiples of a time slice, would prolong the response time of part of the services.
According to the operating system corresponding to each service to be allocated and the resource allocation result, the processing resources of the processor can be allocated to the first operating system and the second operating system. Alternatively, each processing resource of the processor may be allocated to the operating system corresponding to it, where the correspondence may be determined based on the correspondence between the processing resources and the services to be allocated and the correspondence between the services to be allocated and the operating systems.
Alternatively, allocating the processing resources of the processor to the first operating system and the second operating system may be performed by a resource adaptive scheduling module (e.g., a core adaptive scheduling module), which may be a software module running on the first operating system or the second operating system. For example, when running on the second operating system, the resource adaptive scheduling module may be implemented in software in the Linux system and may perform the actual scheduling action on the processing resources of the processor (e.g., the processor hard-core resources) according to the output of the service management module and the output of the resource dynamic allocation module. For example, through the resource scheduling of the core adaptive scheduling module, M of the (M+N) cores are scheduled to the real-time operating system, and N cores are scheduled to the non-real-time operating system.
For example, heterogeneous operating systems can run on different hard cores of the same processor, so that the whole processor system has parallel processing capability for real-time and non-real-time services; meanwhile, the processor hard-core resources (e.g., processor cores) occupied by the different operating systems are adaptively adjusted, so that the utilization rate of the processor resources is remarkably improved. Here, heterogeneous means that the operating systems running in the same multi-core processor of the embedded system are of different types, and multi-system means that multiple operating systems run on the same multi-core processor of the embedded system, running simultaneously in the time dimension.
Optionally, the above process further includes: and generating a rule structure body by reading the rule configuration file, wherein the rule structure body is used for recording the dynamic allocation rule of the resource.
The resource dynamic allocation rule may be configured based on a rule configuration file, and a rule structure for recording the resource dynamic allocation rule may be generated from the read rule configuration file. The rule configuration file may be a load balancing policy file (a load_balance_policy file), which may be used to configure the classification method of the various running services (or processes), the evaluation principle of the real-time level, and the like. The resource dynamic allocation rule can be configured by different parameters in the load balancing policy file; one example of a load balancing policy configuration file is as follows:
classification_kinds = 2 // a value of 1 means that processes are classified according to important/non-important attributes; otherwise, processes are classified according to the preset classification method (real-time and non-real-time);
real_time_grade_evaluation = 2 // a value of 1 means that the average CPU occupancy over the past statistic_minutes is used as the principle for rating the real-time level of a process; otherwise, the preset priority is used as the rating principle;
statistic_minutes = 5 // the statistical window (in minutes) over which the average occupancy of each process is computed; valid when real_time_grade_evaluation is 1.
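Reading such a policy file into a rule structure can be sketched as follows; the function name is illustrative, and the rule structure is modeled here simply as a dictionary of integer-valued parameters, matching the key = value // comment layout of the example above:

```python
def parse_policy(text):
    """Parse a load-balancing policy file of 'key = value' lines with
    optional '//' comments into a dict serving as the rule structure."""
    rules = {}
    for line in text.splitlines():
        line = line.split("//", 1)[0].strip()  # strip inline comments
        if not line:
            continue
        key, _, value = line.partition("=")
        rules[key.strip()] = int(value.strip())
    return rules
```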
Alternatively, the resource dynamic allocation rules may be stored in a load balancing policy module, where the load balancing policy module may be a software module running under the first operating system or the second operating system (e.g., a software module running under the Linux system). It can provide policy guidance for the service management module, including the classification method of the various services (or processes) running in the system, the evaluation principle of the real-time level, and so on. The service management module can divide and manage the services in the system according to the real-time level, and further guide the resource adaptive scheduling module to reallocate the processor resources. For example, it may perform the actual classification of services based on the output of the load balancing policy module, yielding a list of real-time services and a list of non-real-time services.
It should be noted that the classification method and the real-time level evaluation principle are open: a user can define a method or principle, the rules on which the service management module performs service management can be dynamically configured, and further rules can be set on the basis of the existing rules. The service management module can be provided with multiple rules with the same function, but without contradiction between them; that is, the rule currently in use among rules with the same function can be determined based on rule selection conditions such as the configuration time of the rule and the priority of the rule, thereby avoiding contradiction between rules. The above-mentioned configuration file load_balance_policy.config describes one possible case, in which the classification_kinds variable indicates the specific classification criterion (for example, according to the importance or the real-time performance of the service) and the classification categories (for example, important service and general service, or real-time service and non-real-time service), and the real_time_grade_evaluation variable indicates the real-time evaluation criterion (which may be the average CPU occupancy over the past statistic_minutes or a preset service priority); the real-time level type is customized by the user and may be defined as high, normal, and low, or subdivided further.
The output of the load balancing policy module is the configured classification method, real-time level evaluation principle, and so on. In a software implementation, the output may be a specific configuration file (such as a load_balance_policy file) or a structure variable, and the file or structure variable can finally be accessed by the service management module to obtain the specific load balancing policy.
By reading the rule configuration file, the rule structure body is generated to record the resource dynamic allocation rule, so that convenience of information configuration can be realized.
Optionally, the above process further includes: acquiring a rule updating configuration file through an external interface of the second operating system, wherein the rule updating configuration file is used for updating the configured resource dynamic allocation rule; the rule structure is updated by using the rule update configuration file to update the resource dynamic allocation rule recorded by the rule structure.
The rule structures may be in a fixed format, i.e. not allowed to be modified during the operation of the embedded system, or may be in a flexibly configurable format, i.e. may be configured to be altered by a configuration file of a specific format. In this embodiment, a rule update configuration file may be obtained, where the rule update configuration file is used to update the configured resource dynamic allocation rule; the rule structure may be updated using the rule update profile, thereby updating the resource dynamic allocation rules recorded by the rule structure.
When updating the rule structure using the rule update profile, a new rule structure may be generated directly from the rule update profile, and the new rule structure may be used to replace an existing rule structure, or the parameter value of the rule parameter indicated by the rule update profile may be used to update the parameter value of the corresponding rule parameter in the rule structure.
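The two update options just described (wholesale replacement versus updating only the named parameters) can be sketched as follows, again treating the rule structure as a simple mapping; the function name and the `replace` flag are illustrative assumptions:

```python
def update_rules(rules, update, replace=False):
    """Apply a rule-update configuration to the rule structure: either
    generate a new structure that replaces the existing one, or update
    only the parameter values that the update configuration names."""
    if replace:
        return dict(update)   # new structure replaces the existing one
    merged = dict(rules)
    merged.update(update)     # only the named parameters change
    return merged
```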
Optionally, the configuration file in the specific format may be read through an external interface of the first operating system or the second operating system; in consideration of the volume of services to be processed, the second operating system may be mainly responsible for the dynamic scheduling of the resources of the embedded system. Accordingly, when the rule update configuration file is acquired, it can be acquired through an external interface of the second operating system.
For example, the load balancing policy module may be in a fixed format, or may be configured through an external interface of the Linux system; for example, a configuration file in a specific format (load_balance_policy.config) may be defined, and configuration changes are made by reading and writing the file.
The external interface may be a network interface, an SPI (Serial Peripheral Interface) controller interface, a UART (Universal Asynchronous Receiver/Transmitter) serial port, or the like, as long as it provides a path for acquiring data from the outside. Different implementations exist for the hardware used to read the file and the specific file location: for example, the configuration file can be loaded from a Web (World Wide Web) interface through the network interface; the configuration file can be read from the SPI Flash (flash memory) of the board through the SPI controller; or the configuration file can be obtained from a serial data transceiver software tool on another PC (Personal Computer) through the UART serial port.
By the embodiment, the flexibility of the dynamic allocation rule configuration of the resource can be improved by acquiring the rule update configuration file and updating the rule structure body by using the acquired rule update configuration file.
Alternatively, a group of services to be allocated may be allocated to the corresponding operating systems in the embedded system according to the resource dynamic allocation rule in the following manner, but not limited thereto: allocating the services to be allocated whose service response speed requirement is greater than or equal to a set response speed threshold to the first operating system, and allocating the services to be allocated whose service response speed requirement is less than the set response speed threshold to the second operating system.
When allocating the services to be allocated, each service can be allocated to the corresponding operating system based on its service response speed requirement. The service response speed can be used to evaluate the real-time level of a service: the higher the required response speed, the more sensitive the service is to the scheduling time and response speed of the operating system, and the higher its real-time level. A service with a high response speed requirement needs to be processed by the operating system quickly enough so that the processing result can control the production process or respond to the processing system within the specified time, while a service with a low response speed requirement has a certain tolerance for scheduling delay.
Services to be allocated whose response speed requirement is greater than or equal to the set response speed threshold are sensitive to the scheduling time and response speed of the operating system, so such services may be allocated to the first operating system (e.g., real-time services may be allocated to the real-time operating system). Services to be allocated whose response speed requirement is less than the set response speed threshold are insensitive to response speed and scheduling time, so such services may be allocated to the second operating system (e.g., non-real-time services may be allocated to the non-real-time operating system). Here, the service response speed requirement may be indicated by an indication parameter of the service response speed, and the set response speed threshold may be a millisecond-level or second-level response speed threshold, for example, 100 ms, 200 ms, or 1 s, which is not limited in this embodiment.
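Expressing the requirement as a response deadline in milliseconds (so a stricter response speed requirement means a smaller number), the split could be sketched as follows; the function name, the use of deadlines as the indication parameter, the sample service names, and the 100 ms default threshold are all assumptions for illustration:

```python
def split_by_deadline(services, threshold_ms=100):
    """services maps service name -> required response deadline in ms.
    Deadlines at or under the threshold (i.e. a response speed requirement
    at or above the set threshold) go to the first (real-time) operating
    system; the rest go to the second (non-real-time) operating system."""
    first_os, second_os = [], []
    for name, deadline_ms in services.items():
        (first_os if deadline_ms <= threshold_ms else second_os).append(name)
    return first_os, second_os
```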
Optionally, when the group of services to be allocated is allocated to the corresponding operating systems in the embedded system, a first service list corresponding to the first operating system and a second service list corresponding to the second operating system may be output, where the first service list records the services allocated to the first operating system and the second service list records the services allocated to the second operating system. That is, the service allocation result includes the first service list and the second service list, and the output lists may be used in the dynamic scheduling process of the processing resources of the processor.
For example, the services of the system are classified to obtain a real-time service list and a non-real-time service list. Assuming a total of 20 services, the real-time services are service 1 and service 2, and the non-real-time services are services 3 to 20.
Here, the service management module may classify the services currently to be executed. When the BMC system runs for the first time, since all the services the system is to run are known to it, the service management module classifies the services once according to the output of the load balancing policy module; after classification, the different services are allocated to the different operating systems (the RTOS system and the Linux system) for execution. In the subsequent operation process, if the number of service processes changes (for example, some processes are suspended or new processes are started), the service management module can continue the division, dividing and managing the existing services in real time according to the load balancing policy. The service management module can be a resident process in the Linux system, running all the time and managing and dividing the currently running processes.
In this way, by allocating the services to be allocated to the corresponding operating systems according to the service response speed requirements, the timeliness of the response of services that are sensitive to scheduling time can be ensured.
Alternatively, a group of services to be allocated may be allocated to the corresponding operating systems in the embedded system according to the resource dynamic allocation rule in the following manner, but not limited thereto: allocating the services to be allocated whose service resource occupancy rate is less than a first occupancy rate threshold to the first operating system, and allocating the services to be allocated whose service resource occupancy rate is greater than or equal to the first occupancy rate threshold to the second operating system.
When allocating the services to be allocated, each service can be allocated to the corresponding operating system based on its service resource occupancy rate. The service resource occupancy rate may be the average occupancy of processing resources by the service per unit time (for example, the CPU occupancy per minute). Since a high or low occupancy rate affects the response speed of the service itself and of subsequent services, the real-time level of a service can be estimated from its occupancy rate: the higher the occupancy rate, the greater its influence on the scheduling time and response speed of the operating system and the lower the real-time level; the lower the occupancy rate, the smaller that influence and the higher the real-time level.
For the service to be allocated, the service resource occupancy rate of which is smaller than the first occupancy rate threshold value, the influence on the scheduling time and the response speed of the operating system is small, and the service to be allocated can be allocated to the first operating system. For the service to be allocated, the service resource occupancy rate of which is greater than or equal to the first occupancy rate threshold, the influence on the scheduling time and the response speed of the operating system is greater, so that the service to be allocated can be allocated to the second operating system. Here, the first occupancy threshold may be configured as desired, which may be 10%, 15%, 20% or other threshold, while the first occupancy threshold may be dynamically adjusted.
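Using the statistic_minutes-style averaging described earlier, the occupancy-based split could be sketched as follows; the function names, the per-minute sample representation, the sample service names, and the 15% default threshold are assumptions for illustration:

```python
def average_occupancy(samples):
    """Average CPU occupancy of a service over the statistics window,
    e.g. one sample per minute for statistic_minutes minutes."""
    return sum(samples) / len(samples)

def split_by_occupancy(services, threshold=0.15):
    """services maps service name -> list of occupancy samples (fractions
    of one core). Average occupancy below the first occupancy rate
    threshold -> first operating system; otherwise -> second."""
    first_os, second_os = [], []
    for name, samples in services.items():
        dest = first_os if average_occupancy(samples) < threshold else second_os
        dest.append(name)
    return first_os, second_os
```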
By the embodiment, the service to be allocated is allocated to the corresponding operation system according to the service resource occupancy rate, so that the timeliness of service response with low service resource occupancy rate can be ensured.
Optionally, a set of services to be allocated may be allocated to a corresponding operating system in the embedded system according to a dynamic allocation rule of resources in at least one of the following ways:
distributing, to the first operating system, the service to be distributed in the group of services to be distributed whose service coupling degree with the allocated service of the first operating system is greater than or equal to a first coupling degree threshold value;
and distributing, to the second operating system, the service to be distributed in the group of services to be distributed whose service coupling degree with the allocated service of the second operating system is greater than or equal to a second coupling degree threshold value.
When allocating a service, it may be assigned to the corresponding operating system based on its service coupling degree. The service coupling degree represents the degree of association between the service to be allocated and the services already allocated to each operating system. If a service to be allocated is highly coupled with the allocated services of one operating system, it is not suitable to allocate it to the other operating system. The service can therefore be allocated to the corresponding operating system based on its coupling degree with the allocated services of each operating system.
Alternatively, the service coupling degree may be evaluated through the association between the inputs and outputs of services, and represented by coupling degree levels. If there is no relationship between the input and output of two services, the coupling degree level is low (or another level indicating no association between the services). If the execution of one service depends on the output of another application (the service cannot start without that output as its input), the coupling degree level between the services is high. If the execution of one service uses the output of another application but that output does not block its normal execution (the output is only needed when the service reaches the corresponding operation, and that operation is not a core operation), the coupling degree level between the services is medium. The service coupling degree may also be represented by a numerical value: it may be evaluated through one or more coupling degree conditions (for example, the association between inputs and outputs), and the numerical value corresponding to the satisfied condition is taken as the value of the service coupling degree.
If the group of services to be allocated contains a service whose coupling degree with the allocated services of the first operating system is greater than or equal to the first coupling degree threshold, that service may be allocated to the first operating system; if it contains a service whose coupling degree with the allocated services of the second operating system is greater than or equal to the second coupling degree threshold, that service may be allocated to the second operating system.
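The coupling-based rule can be sketched as follows (a hypothetical Python illustration; the per-OS coupling values and both thresholds are assumed inputs, however they are evaluated):

```python
def allocate_by_coupling(pending, coupling, first_threshold, second_threshold):
    """pending: list of service names.
    coupling: dict name -> {"first": degree, "second": degree}, the coupling
    degree of the service with the allocated services of each operating system.
    A service goes to the operating system whose threshold its coupling meets."""
    allocation = {}
    for name in pending:
        if coupling[name]["first"] >= first_threshold:
            allocation[name] = "first"   # tightly coupled with first-OS services
        elif coupling[name]["second"] >= second_threshold:
            allocation[name] = "second"  # tightly coupled with second-OS services
        # otherwise the service is left for another allocation rule
    return allocation
```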
For example, in addition to generating the real-time service list and the non-real-time service list, the service management module is also responsible for service decoupling evaluation and management, that is, it finds, among all real-time services, those that can be handed over independently to the real-time operating system for running, so that the hardware resource dynamic allocation module can reallocate the processor resources. A service that cannot be handed over independently to the real-time operating system and that is highly coupled with non-real-time services may be allocated to the non-real-time operating system.
Here, some services have real-time requirements but interact very frequently with other non-real-time services in the system (that is, the service coupling degree is high); in this case, to improve the overall data interaction efficiency, such services are allocated to the non-real-time operating system. A real-time service that is relatively independent only needs to be assigned to the real-time operating system; this process constitutes the decoupling operation. The criterion for judging the independence of a service is not unique: it may be the degree of association between services, or another indicator reflecting the relationship between them.
The reallocation policy is open. One possible policy is as follows: when the system runs for the first time, the processor cores are distributed in proportion to the number of services assigned by the service management module to the real-time and non-real-time operating systems; during subsequent operation, the resource distribution is adjusted according to the core resource occupancy rate of each system in the dual system. From this point of view, the reallocation process and the core preemption and release process cooperate with each other.
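The first-run proportional split mentioned above can be sketched as follows (a hypothetical Python illustration; the function name, the rounding choice, and the guarantee of at least one real-time core are assumptions, not stated in the specification):

```python
def initial_core_split(total_cores, n_realtime_services, n_non_realtime_services):
    """First-run policy: split processor cores in proportion to the number of
    services the service management module assigned to each operating system.
    Returns (real-time cores, non-real-time cores)."""
    total_services = n_realtime_services + n_non_realtime_services
    # Proportional share for the real-time OS, kept to at least one core.
    rt_cores = max(1, round(total_cores * n_realtime_services / total_services))
    return rt_cores, total_cores - rt_cores
```

During later operation this split would be revised from the measured core occupancy rates, in cooperation with the core preemption and release process.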
According to this embodiment, each service to be allocated is allocated to the corresponding operating system according to its service coupling degree, which ensures the accuracy of service processing for multiple services with a high mutual coupling degree.
Alternatively, a group of services to be allocated may be allocated to the corresponding operating system in the embedded system according to the resource dynamic allocation rule in the following manner (but not limited to it):
allocating a service to be allocated that contains sensitive information, in the group of services to be allocated, to a target operating system, wherein the target operating system is the one of the first operating system and the second operating system that has a low interaction frequency with the user.
In this embodiment, a service to be allocated that contains sensitive data (for example, sensitive information such as a password, or an important service not intended to be exposed to the user) may be allocated to a target operating system and isolated through security protection at the hard-core level. The target operating system is, among the first operating system and the second operating system, the one with a low interaction frequency with the user, or the one with a fast response speed, for example, the first operating system.
For example, the service processing module is responsible for further hard-core-level security isolation of system services, that is, classifying important sensitive services (those not intended to be exposed to the user) as real-time services, so that these services are ultimately offloaded from the non-real-time operating system to the real-time operating system, achieving the effect of security protection. The different services divided by the service processing module may be organized, in software, in the form of a structure. By designing a secure space between heterogeneous operating systems, sensitive services are offloaded from the non-real-time operating system to the real-time operating system, achieving the goal of hard-core-level security protection. Here, sensitive services refer to security-related services involving user privacy, such as user passwords and identity information.
Here, hard-core level means that services are isolated at the core level of the processor: sensitive services are allocated to the real-time operating system, which occupies cores different from those of the non-real-time operating system, so the isolation is at the core level. Compared with the non-real-time operating system, the real-time operating system interacts with the user less frequently and less deeply, so it is difficult for the user to "detect" the sensitive data generated by the services running on it. For the upper-layer applications, services such as user identity authentication management and security encryption belong to these important sensitive services; they are forcibly classified as real-time services by the service management module and run in the real-time operating system when hardware resources are subsequently allocated dynamically, thereby achieving security isolation.
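The forced classification of sensitive services can be sketched as follows (a hypothetical Python illustration; the keyword list and names are illustrative assumptions, and a real system would classify by service metadata rather than by name matching):

```python
# Illustrative markers for privacy/security-related services (assumed, not
# from the specification).
SENSITIVE_KEYWORDS = {"password", "identity", "auth", "encrypt"}

def classify_sensitive(service_names):
    """Force services touching sensitive data onto the real-time OS so they
    are isolated at the processor-core ('hard core') level; everything else
    stays eligible for the non-real-time OS."""
    placement = {}
    for name in service_names:
        sensitive = any(keyword in name for keyword in SENSITIVE_KEYWORDS)
        placement[name] = "realtime" if sensitive else "non-realtime"
    return placement
```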
By this embodiment, the services to be allocated that contain sensitive information are allocated to the operating system with a low interaction frequency with the user, so that hard-core-level security isolation can be applied to system services, improving the security of service execution.
Alternatively, the resource allocation result corresponding to a set of traffic to be allocated may be determined, but is not limited to, in the following manner:
and generating a mapping table of the service to be allocated and the processing resources of the processor according to the allocation result of the service to be allocated and combining the resource utilization condition of the processing resources of the first operating system and the resource utilization condition of the processing resources of the second operating system.
In this embodiment, the allocation result of a group of services to be allocated indicates the correspondence between the services and the operating systems. A service allocated to an operating system is generally executed using the processing resources of that operating system; if the number of services allocated to an operating system is too large and some processing resources are currently unallocated, those unallocated processing resources may also be assigned to its services. Therefore, according to the allocation result of the group of services to be allocated, combined with the resource utilization of the processing resources of the first operating system and of the second operating system, a mapping table between the services to be allocated and the processing resources of the processor can be generated to indicate the processing resource allocated to each service.
Here, each service to be allocated has a mapping relationship with only one processor core, while the same processor core may have mapping relationships with multiple services to be allocated: different services can map to the same processor core by occupying different time slices of it. At any given moment, a processor core is occupied by only one service, that is, it executes only one service. The time slices in which different services of an operating system occupy the same processor resource may be determined by allocation time, service response speed requirements, or other criteria.
For example, the resource dynamic allocation module dynamically adjusts the processor resources according to the output result of the service management module to form a mapping table of different services and actual hardware resources, and optimizes the deployment structure of different hardware resources under heterogeneous operating systems so as to achieve the purpose of improving the utilization rate of the hardware resources of the whole system. The dynamic allocation process of the resources is managed and configured by software in the second operating system.
Taking an eight-core processor (cores 1-8) as an example: the processor cores already scheduled to the first operating system include core 1, and the cores already scheduled to the second operating system include cores 2, 3 and 4. There are 6 services to be allocated: the real-time services are service 1 and service 2, and the non-real-time services are services 3 to 6. The corresponding processor cores are allocated as follows: core 1 for service 1, core 5 for service 2, core 2 for service 3, core 3 for service 4, core 4 for service 5, and core 6 for service 6.
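One way to sketch the generation of such a mapping table (a hypothetical Python illustration; the pool-then-free-list strategy is an assumption, and real allocation would also weigh time slices and occupancy):

```python
def build_mapping_table(allocation, scheduled, free_cores):
    """allocation: dict service -> OS ('first'/'second'), in allocation order.
    scheduled: dict OS -> list of cores already scheduled to that OS.
    free_cores: cores not yet scheduled to either OS.
    Returns service -> core, handing out an OS's own cores first and drawing
    from the free pool when they run out. (Services may later share a core
    via time slices; a core runs one service at a time.)"""
    pools = {os_name: list(cores) for os_name, cores in scheduled.items()}
    free = list(free_cores)
    table = {}
    for service, os_name in allocation.items():
        # Prefer a core the OS already owns; otherwise take an unallocated one.
        core = pools[os_name].pop(0) if pools[os_name] else free.pop(0)
        table[service] = core
    return table
```

Run on the eight-core example above, this reproduces the stated mapping: service 2 overflows onto free core 5, and service 6 onto free core 6.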
According to the embodiment, based on the corresponding relation between the service and the operating system, the dynamic allocation of the processing resources is performed by combining the use conditions of the processing resources of different operating systems, so that the rationality of the allocation of the processing resources can be ensured.
Alternatively, the processing resources of the processor may be allocated to the first operating system and the second operating system, according to the operating system corresponding to each service to be allocated and the resource allocation result, in the following manner (but not limited to it): when it is determined from the resource allocation result that an unallocated processing resource of the processor has a corresponding service to be allocated, the unallocated processing resource is allocated to the operating system to which that service is allocated.
That is, when performing processing resource allocation, if an unallocated processing resource of the processor has a corresponding service to be allocated (the unallocated processing resource has been assigned to that service), the unallocated processing resource may be allocated to the operating system to which that service belongs.
Optionally, the resource adaptive scheduling module may complete an actual scheduling action for the processing resources of the processor according to a result of the dynamic allocation of the hardware resources. The resource adaptive scheduling module schedules a portion of the processor cores to execute traffic allocated to a first operating system, such as M cores of core group 1, and schedules the remaining processor cores to execute traffic allocated to a second operating system, such as N cores of core group 2.
Taking the foregoing eight-core processor as an example, according to the service allocation result and the resource allocation result, the unallocated core 5 (which carries real-time service 2) may be allocated to the first operating system, and the unallocated core 6 (which carries non-real-time service 6) may be allocated to the second operating system, for example a Linux system. The entire scheduling process may be dominated by the second operating system.
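This scheduling step can be sketched as follows (a hypothetical Python illustration with illustrative service names; an unscheduled core inherits the operating system of the service mapped onto it):

```python
def schedule_free_cores(mapping_table, allocation, scheduled):
    """mapping_table: service -> core (from the resource allocation result).
    allocation: service -> OS ('first'/'second').
    scheduled: OS -> list of cores it already owns.
    Returns extra cores each OS should receive: any core in the mapping table
    not yet owned by an OS goes to the OS of the service mapped onto it."""
    owned = {core for cores in scheduled.values() for core in cores}
    extra = {}
    for service, core in mapping_table.items():
        if core not in owned:
            extra.setdefault(allocation[service], []).append(core)
    return extra
```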
According to the embodiment, the unallocated processor resources are scheduled to the corresponding operating systems based on the resource allocation result, so that the utilization rate of the processor resources can be improved.
In one exemplary embodiment, the target processing resource may be, but is not limited to, determined by the second operating system in the following manner: monitoring whether the second processing resource meets the operation of the service on the second operating system or not through the second operating system; and under the condition that the second processing resource is determined not to meet the running of the business on the second operating system, estimating the resource information of the target processing resource through the second operating system.
Optionally, in this embodiment, the second operating system may be, but is not limited to, used to run multiple services, and running the services on the second operating system may be, but is not limited to, requiring multiple processing resources, such as: storage resources (memory, flash memory, hard disk, etc.), peripheral resources (peripheral interfaces, etc.), and processor interrupt resources (interrupt numbers, interrupt instructions, interrupt requests, etc.), among others.
Alternatively, in this embodiment, whether the second processing resource of the second operating system satisfies the running of the services on the second operating system may be determined (but not limited to) by comparing the resources required by those services with the second processing resource. For example, suppose the second processing resource includes M processing resources capable of running services, while N processing resources are required for running the services on the second operating system. When the processing resources capable of running services are greater than or equal to the resources required (M is greater than or equal to N), it may be determined that the second processing resource satisfies the running of the services on the second operating system. Conversely, when the processing resources capable of running services are smaller than the resources required (M is smaller than N), it may be determined that the second processing resource does not satisfy the running of the services on the second operating system.
Alternatively, in this embodiment, the above-mentioned target processing resource may be, but not limited to, a processing resource required by the second processing resource to satisfy the operation of the service on the second operating system, and the resource information of the target processing resource may be, but not limited to, the number of processing resources required by the second processing resource to satisfy the operation of the service on the second operating system, the type of the processing resource, and so on.
In one exemplary embodiment, whether the second processing resource satisfies the running of the services on the second operating system may be monitored by the second operating system in at least one of the following ways:
monitoring, by the second operating system, whether the remaining storage resources in the second processing resource are greater than a storage threshold, wherein the second processing resource does not satisfy the running of the services on the second operating system if the remaining storage resources are less than or equal to the storage threshold;
monitoring, by the second operating system, whether a service on the second operating system uses a reference peripheral resource other than the peripheral resources in the second processing resource, wherein the second processing resource does not satisfy the running of the services on the second operating system if such a reference peripheral resource is used;
monitoring, by the second operating system, whether a service on the second operating system uses a reference processor interrupt resource other than the processor interrupt resources in the second processing resource, wherein the second processing resource does not satisfy the running of the services on the second operating system if such a reference processor interrupt resource is used.
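The three monitoring checks can be sketched together as follows (a hypothetical Python illustration; the parameter names and the list-of-types return value are assumptions):

```python
def monitor_second_os(remaining_storage, storage_threshold,
                      used_peripherals, owned_peripherals,
                      used_interrupts, owned_interrupts):
    """Returns the resource types for which the second OS's current allocation
    no longer satisfies its services; an empty list means all checks pass."""
    lacking = []
    # Check 1: remaining storage must stay above the storage threshold.
    if remaining_storage <= storage_threshold:
        lacking.append("storage")
    # Check 2: services must not need peripherals outside the owned set.
    if set(used_peripherals) - set(owned_peripherals):
        lacking.append("peripheral")
    # Check 3: services must not need interrupts outside the owned set.
    if set(used_interrupts) - set(owned_interrupts):
        lacking.append("interrupt")
    return lacking
```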
Alternatively, in this embodiment, the storage resources may be (but are not limited to) resources comprising multiple storage devices, such as memory resources and disk resources. If the storage resources allocated to an operating system cannot meet its running requirements, the operating system may count the additionally required storage resources and preempt or negotiate them from the processing resources allocated to the other operating system, thereby achieving a reasonable reallocation of storage resources.
Alternatively, in this embodiment, the remaining storage resources may be, but are not limited to, unoccupied storage resources in the second processing resource of the second operating system.
Optionally, in this embodiment, the storage threshold may be (but is not limited to be) predetermined. It indicates a reasonable range of storage resource occupation in the second processing resource, and may be determined in advance according to the influence of the storage resources on the usage of the second operating system. For example, with the second processing resource allocated to the second operating system in advance, it may be observed that the second operating system works normally as long as its unoccupied storage resources are greater than or equal to a certain range, and the boundary value of that range may then be taken as the storage threshold.
Alternatively, in this embodiment, the peripheral resources may include (but are not limited to) input devices and output devices. Input devices include, for example, a keyboard, a mouse, a scanner, a digital camera, a voice input system, a handwriting input system, and an IC card input system (an IC, or integrated circuit chip, places an integrated circuit formed by a large number of microelectronic components such as transistors, resistors and capacitors on a plastic substrate to form a chip card). Output devices include, for example, display systems, various printers and plotters, floppy disk and hard disk memories, external storage devices, and, in multimedia devices, optical disk drives, sound cards, speakers, video cards and televisions. Peripheral resources may also (but are not limited to) include peripheral interfaces that allow chip control, such as I2C, RTC (Real-Time Clock), GPIO, PECI and PWM.
Alternatively, in this embodiment, the above-mentioned reference peripheral resource may be, but is not limited to, a peripheral resource that is required to run a service on the second operating system, in addition to a peripheral resource included in the second processing resource. If the peripheral resources allocated by the operating system cannot meet the running requirements, the operating system can count the peripheral resources additionally required and preempt or coordinate the peripheral resources from the processing resources allocated for other operating systems. Thereby realizing reasonable reassignment of peripheral resources.
Alternatively, in this embodiment, the processor interrupt resources may be (but are not limited to) resources needed for handling events in an operating system, which may include interrupt execution, the services executed to handle events, interrupt numbers, and the like. If the processor interrupt resources allocated to an operating system cannot meet its running requirements, the operating system may count the additionally required processor interrupt resources and preempt or negotiate them from the resources allocated to the other operating system, thereby achieving a reasonable reallocation of processor interrupt resources.
In one exemplary embodiment, the resource information of the target processing resource may be estimated by the second operating system, but is not limited to, in the following manner: determining, by the second operating system, a resource type of the target processing resource, wherein the resource type includes at least one of: storage resources, peripheral resources and processor interrupt resources; and estimating the resource quantity corresponding to each resource type through the second operating system.
Optionally, in this embodiment, the resource types of the target processing resource may include (but are not limited to) one or more kinds of processing resource. Taking the case where the resource types comprise storage resources, peripheral resources and processor interrupt resources as an example, the resource types of the target processing resource may include all three, any two of them, or any one of them. The second operating system first determines which resource types need to be preempted, and then estimates the amount of resources of each of those types, so as to preempt the corresponding processing resources.
Alternatively, in this embodiment, the resource types of the target processing resource and the amount of resources for each type may be estimated from the second processing resources already used in the second operating system and the processing resources required for running the services on it. For example, if running the services on the second operating system requires M peripheral resources while only N (with M greater than N) peripheral resources are unused in the second operating system, the resource type of the target processing resource may be determined to be a peripheral resource, and the corresponding amount of resources may be estimated as M-N.
In one exemplary embodiment, the amount of resources corresponding to each resource type may be estimated by the second operating system in the following manner, but is not limited to: estimating, by the second operating system, a target storage amount to be occupied in the target processing resource, in a case where the resource type includes the storage resource; estimating, by the second operating system, a peripheral identifier and/or a number of peripherals of a reference peripheral resource to be occupied, in the case that the resource type includes the peripheral resource; and in the case that the resource type comprises the processor interrupt resource, estimating the interrupt quantity of the reference processor interrupt resource to be occupied by the second operating system.
Alternatively, in this embodiment, for storage resources, the target storage amount that needs to be occupied may be estimated as the amount of resources corresponding to the storage resources. Alternatively, the different types of storage resources may be distinguished, and the storage amount that needs to be occupied for each storage resource type may be estimated as the amount of resources for that type.
Alternatively, in this embodiment, for peripheral resources, the number of peripherals that need to be occupied may be estimated as the amount of resources corresponding to the peripheral resources. Alternatively, it may be estimated which specific peripheral resources need to be occupied, and the resulting peripheral identifiers may be used as the amount of resources corresponding to the peripheral resources.
Alternatively, in this embodiment, for processor interrupt resources, the number of interrupts that need to be occupied may be estimated as the amount of resources corresponding to the processor interrupt resources. Alternatively, it may be estimated which specific processor interrupt resources need to be occupied, and the resulting processor interrupt identifiers may be used as the amount of resources corresponding to the processor interrupt resources.
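The per-type estimation can be sketched as follows (a hypothetical Python illustration; the dict layout with 'storage' in bytes and identifier sets for peripherals and interrupts is an assumption):

```python
def estimate_target_resources(required, owned):
    """required/owned: dicts with keys 'storage' (amount), 'peripherals' (set
    of identifiers), 'interrupts' (set of interrupt numbers).
    Returns what must be preempted from the first OS, per resource type;
    types already satisfied are omitted."""
    need = {}
    # Storage: estimate the additional amount to be occupied.
    extra_storage = required["storage"] - owned["storage"]
    if extra_storage > 0:
        need["storage"] = extra_storage
    # Peripherals: estimate the specific identifiers still missing.
    missing_peripherals = required["peripherals"] - owned["peripherals"]
    if missing_peripherals:
        need["peripherals"] = missing_peripherals
    # Interrupts: estimate the specific interrupt numbers still missing.
    missing_interrupts = required["interrupts"] - owned["interrupts"]
    if missing_interrupts:
        need["interrupts"] = missing_interrupts
    return need
```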
In an alternative embodiment, a process for determining the dynamic configuration of resources is provided. FIG. 4 is a flowchart of a process for determining the dynamic configuration of resources according to an embodiment of the present application. As shown in FIG. 4, the inter-system resource occupation may be estimated (but not limited to) as follows:
While the first operating system and the second operating system are running stably, the usage of the storage resources, peripheral resources and processor interrupt resources in the second operating system is read, and it is judged, according to this usage, whether these resources can satisfy the running of the services on the second operating system. When they cannot, the resource types and resource amounts of the target processing resources required for running the services on the second operating system are estimated, and a resource occupation process toward the first operating system is executed according to the resource information of the target processing resources.
Optionally, in this embodiment, the foregoing process of determining dynamic configuration of resources may be performed by configuring a single process on a chip, or may be performed by configuring corresponding processes for different resource types to be monitored, respectively, without interfering with each other.
Alternatively, in the present embodiment, if monitoring of various resource types is performed using a single process, one resource type may be monitored in turn in each cycle of the monitoring program.
In the technical solution provided in step S204, the first operating system releases the target processing resource for use by the second operating system. The first operating system first finds, among the first processing resources, processing resources meeting the requirement of the target processing resource, and then performs a release operation on them. Different release operations may be performed for different types of processing resources, such as unbinding a processing resource or setting its use status to "currently prohibited".
Alternatively, in this embodiment, the first operating system may (but is not limited to) release the target processing resources according to the resource types and resource amounts indicated by the second operating system. For example, the second operating system may explicitly indicate the target processing resources it wishes to preempt (storage resources with memory addresses from A to B, peripheral C, peripheral D, and interrupt numbers E and F), and the first operating system releases exactly those from the first processing resources allocated to it. Alternatively, the second operating system may indicate only the form of the target processing resources it wishes to preempt (a storage resource of size G, two peripherals and two interrupt numbers); the first operating system then finds processing resources meeting the requirement among the first processing resources, for example the storage addresses from A to B, peripheral C, peripheral D and interrupt numbers E and F, and releases them.
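The release operation can be sketched as follows (a hypothetical Python toy model; the class, the status strings, and the resource identifiers are illustrative assumptions):

```python
class FirstOS:
    """Toy model of the first OS releasing processing resources on request."""

    def __init__(self, resources):
        # resources: dict resource id -> usage status, e.g. "in_use"
        self.resources = dict(resources)

    def release(self, requested_ids):
        """Release each requested resource the first OS actually holds,
        e.g. by unbinding it / marking it prohibited for local use.
        Returns the list of resources actually released."""
        released = []
        for resource_id in requested_ids:
            if resource_id in self.resources:
                self.resources[resource_id] = "released"
                released.append(resource_id)
        return released
```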
Optionally, in this embodiment, the operating systems may (but are not limited to) interact for the release and occupation of processing resources through instruction transmission, inter-core communication, and similar mechanisms.
In one exemplary embodiment, the target processing resource may be released from the first processing resource by the first operating system in the following manner: sending a first interrupt request to the first operating system through the second operating system, wherein the first interrupt request is used for indicating to preempt the target processing resource; releasing the target processing resource from the first processing resource by the first operating system; and sending a second interrupt request to the second operating system through the first operating system, wherein the second interrupt request is used for indicating that the target processing resource is released.
Alternatively, in this embodiment, the manner of transmitting the interrupt request between the operating systems may be, but not limited to, a software manner (such as transmitting instructions, signals, etc. based on a protocol), or a hardware manner (such as transmitting instructions, signals, etc. based on a hardware device).
Alternatively, in this embodiment, the second operating system may, but is not limited to, actively request the first operating system for the target processing resource that needs to be preempted. The first operating system responds to the request sent by the second operating system to release the target processing resource, and the release result is notified to the second operating system through the interrupt request.
Optionally, in this embodiment, the first interrupt request is used to indicate the preemption of the target processing resource, and the information of the target processing resource may be carried in the first interrupt request. Alternatively, the information of the target processing resource may be stored in a location agreed by both parties; the first interrupt request then only notifies the first operating system of the resource preemption event, and the first operating system obtains from the agreed location the specific processing resources it needs to release.
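The two-interrupt handshake can be sketched as follows (a hypothetical Python illustration; the log tuples stand in for real inter-core interrupts, and the release callback stands in for the first OS's release logic):

```python
def preemption_handshake(first_os_release, target_resources):
    """Sketch of the handshake: the second OS raises a 'preempt' interrupt
    carrying the target resource info, the first OS releases the resources,
    then raises a 'released' interrupt back to the second OS.
    first_os_release: callable taking the target list, returning what was freed."""
    log = []
    # First interrupt request: second OS -> first OS, indicating preemption.
    log.append(("irq_to_first_os", "preempt", target_resources))
    released = first_os_release(target_resources)  # first OS frees resources
    # Second interrupt request: first OS -> second OS, resources released.
    log.append(("irq_to_second_os", "released", released))
    return log
```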
During the running of the operating systems, service data may be exchanged. This interaction may be implemented by combining a storage space with interrupt requests: data is transferred between the operating systems through the storage space, and instruction notifications between the operating systems are delivered through interrupt requests. For example: service data generated while the first operating system runs on the processor is acquired; the service data is stored in a storage space on the processor; and an interrupt request is sent to the second operating system, where the interrupt request is used to request the second operating system to read the service data from the storage space, and the second operating system reads the service data from the storage space in response to the interrupt request.
Optionally, in this embodiment, the first operating system stores the service data generated during its running on the processor into a storage space on the processor and notifies the second operating system through an interrupt request; the second operating system then reads the service data from the storage space, thereby implementing the interaction of the service data.
Alternatively, in this embodiment, the service data exchanged between the operating systems may be, but is not limited to, any data that needs to be transmitted between the systems while the operating systems run their services, such as process data of a service, result data of a service, and so on.
Alternatively, in this embodiment, the storage space on the processor may be, but is not limited to, a dedicated storage location configured for the interaction process between operating systems, which may be referred to as a shared memory. The shared memory may be, but is not limited to, allocated per operating system, that is, each operating system corresponds to its own dedicated section of shared memory.
The information (such as a storage address) of the shared memory corresponding to the first operating system may be carried in an interrupt request for requesting the second operating system to read the service data from the storage space, where the second operating system responds to the interrupt request to read the service data from the shared memory indicated by the interrupt request.
In this embodiment, the interrupt requests may be transmitted between systems by means of a software protocol, or may be transferred through a hardware module. Taking the form of hardware module mailbox to transmit interrupt request as an example, a mailbox channel can be established between the first operating system and the second operating system, service data is read and written through the storage space, and interrupt request is transmitted through the mailbox channel.
In an alternative embodiment, a method of inter-core communication is provided. The method comprises the following steps:
in step a, the first operating system sends the target data (which may be the service data) to the target virtual channel (which may be the storage space) in the processor memory.
Optionally, the first operating system and the second operating system may each be a real-time or non-real-time operating system, and each may be a single-core or multi-core operating system. The target data is the data to be sent, and the target virtual channel is a section of free storage space in the memory. The first operating system sending the target data to the target virtual channel in the processor memory means that the CPU core of the first operating system writes the data to be sent into the target virtual channel.
Step b, sending an interrupt notification message (which may be an interrupt request as described above) to the second operating system.
Optionally, the CPU core of the first operating system sends an interrupt notification message to the CPU core of the second operating system, where the interrupt notification message may carry an address of the target virtual channel and is used to notify the second operating system to obtain target data from the target virtual channel, and the interrupt notification message may be triggered by software or hardware.
And c, the second operating system responds to the interrupt notification message to acquire target data from the target virtual channel in the memory.
Optionally, the CPU core of the second operating system responds to the interrupt notification message, analyzes the address of the target virtual channel from the interrupt notification message, locates the target virtual channel in the memory according to the analyzed address, and obtains the target data from the target virtual channel, so as to realize the data interaction between the first operating system and the second operating system.
Through the steps, when a plurality of operating systems running on the processor need to mutually transmit data, the first operating system for transmitting the data transmits the target data to the target virtual channel in the memory of the processor, and transmits an interrupt notification message to the second operating system, and the second operating system for receiving the data responds to the interrupt notification message to acquire the target data from the target virtual channel, so that the problems that resources are wasted in the inter-core communication process and the dependence on the operating system is strong are solved, and the effects of reducing the waste of resources in the inter-core communication process and the dependence on the operating system are achieved.
In one exemplary embodiment, the memory includes a data storage area and a metadata storage area, the data storage area is divided into a plurality of storage units, each storage unit is used for storing service data, and the metadata storage area is used for storing the size and occupied state of each storage unit of the data storage area.
Optionally, the target virtual channel is formed from one or more storage units of the data storage area. The metadata storage area may be divided into storage slices equal in number to the storage units, each storage slice recording the size and the occupied state of one storage unit. The size of a storage unit may be represented by its first and last addresses, or by its first address and its length; the occupied state is either occupied or unoccupied and may be represented by the value of an idle flag.
In one exemplary embodiment, the first operating system sending the target data to the target virtual channel in the processor memory includes: the first operating system reads the record in the metadata storage area, and determines at least one storage unit which is in an idle state and has a total space greater than or equal to the length of target data in the data storage area according to the read record to obtain a target virtual channel; and setting the state of at least one storage unit corresponding to the target virtual channel in the metadata storage area as an occupied state, and storing the target data in the target virtual channel.
It should be noted that, to ensure that the target data can be written into the memory contiguously, the target virtual channel must be free and have a storage space whose length is greater than or equal to that of the target data. Because the memory is divided into a metadata storage area and a data storage area, the occupied state of each storage unit recorded in the metadata storage area can be read to find storage units that are in the free state and can meet the data storage requirement.
For example, if all storage units are of equal size and the length of the target data is greater than the length of one storage unit, the number of storage units required is determined from the length of the target data, and that many consecutive storage units in the idle state, sufficient to meet the data storage requirement, are found to form the target virtual channel.
For another example, with storage units of equal size, the data storage area may have combined the storage units in advance into a plurality of virtual channels of different sizes, each virtual channel being formed from one or more storage units. The occupied state of each virtual channel recorded in the metadata storage area can then be read to find a virtual channel in the idle state whose length is greater than or equal to the length of the target data, namely the target virtual channel. It should be noted that, when the system software needs to apply for shared memory space, it determines whether the length of the data to be sent is greater than the maximum length of data a virtual channel can store; if so, the system software can send the data in multiple transmissions, ensuring that the length of each transmission is less than or equal to the maximum length a virtual channel can store, thereby keeping communication smooth.
In one exemplary embodiment, the second operating system responding to the interrupt notification message, and acquiring the target data from the target virtual channel in the memory includes: the second operating system reads the record in the metadata storage area and determines a target virtual channel according to the read record; and acquiring target data from at least one storage unit corresponding to the target virtual channel, and setting the state of the at least one storage unit to be an idle state.
That is, after the second operating system extracts the target data from the storage unit corresponding to the target virtual channel, in order not to affect the use of the target virtual channel by other systems or tasks, the state of the storage unit corresponding to the target virtual channel is set to an idle state.
In one exemplary embodiment, the first operating system sending the target data to the target virtual channel in the processor memory includes: the method comprises the steps that a driving layer of a first operating system receives target data, and a virtual channel in an idle state is determined in a memory to obtain a target virtual channel; and setting the state of the target virtual channel to be an occupied state, and storing the target data into the target virtual channel.
Optionally, both the real-time operating system and the non-real-time operating system have a driving layer. After the driving layer receives the target data to be sent, it invokes an interface to search the memory for the target virtual channel. To prevent other systems from applying to use the target virtual channel while the data is being written, once the target virtual channel is found its state is set to the occupied state, and the target data is then written into it.
In an exemplary embodiment, in a case where the first operating system includes an application layer provided with a human-machine interaction interface, before the driving layer of the first operating system determines a virtual channel in the idle state in the memory, the method further includes: the application layer of the first operating system receives, through the human-machine interaction interface, data to be sent input by a user, encapsulates the data to be sent in a preset format to obtain the target data, and calls a data writing function to transfer the target data to the driving layer through a preset communication interface, where the preset communication interface is arranged on the driving layer.
Optionally, the application layer fills the data to be sent according to a preset format to obtain the target data, and a device file ipidiv is generated under the system /dev path. When the application layer needs to read or write data through the driving layer, it can open the device file /dev/ipidiv using the system's open function and then send the target data from the application layer to the driving layer using the system's write function. The driving layer then places the data in a target virtual channel in the shared memory and triggers an interrupt to notify the second operating system to fetch the data.
In one exemplary embodiment, the second operating system responding to the interrupt notification message, and acquiring the target data from the target virtual channel in the memory includes: the second operating system triggers an interrupt processing function based on the interrupt notification message, determines a target virtual channel from the memory through the interrupt processing function, and acquires target data from the target virtual channel.
In one exemplary embodiment, determining a target virtual channel from memory by an interrupt handling function and retrieving target data from the target virtual channel includes: and calling a target task through the interrupt processing function, determining a target virtual channel from the memory by the target task, and acquiring target data from the target virtual channel.
Optionally, the interrupt processing function sends a task notification to wake up a target task responsible for data extraction, the target task searches for a target virtual channel in the shared memory through the calling interface, and then reads target data from the target virtual channel and performs data analysis.
In one exemplary embodiment, in a case where the second operating system includes an application layer and a function identifier indicating a target function is stored in the memory, determining the target virtual channel from the memory through the interrupt processing function and acquiring the target data from the target virtual channel includes: determining the function identifier and the target virtual channel from the memory through the interrupt processing function, and sending address information of the target virtual channel to a target application program matched with the function identifier, where the target application program is an application program in the application layer; the target application program calls a data reading function to transfer the address information to the driving layer through a preset communication interface, and the driving layer acquires the target data from the target virtual channel and transfers it to the target application program, where the preset communication interface is arranged on the driving layer; and the target application program processes the target data according to a processing function matched with the function identifier so as to execute the target function.
Optionally, after the second operating system receives the interrupt notification message, the application layer invokes the corresponding interrupt processing function to find the target virtual channel in the memory and thus obtain its address information, and a device file ipidiv is generated under the system /dev path. When the application layer needs to read or write data through the driving layer, it can open the device file /dev/ipidiv using the system's open function and then read the target data in the target virtual channel using the system's read function; that is, the driving layer finds the corresponding target data in the shared memory according to the address information of the target virtual channel and returns the target data and its length to the application layer. In an exemplary embodiment, the state of the target virtual channel is then set to idle.
It should be noted that different application programs of the application layer can realize different functions with the target data, and the memory stores function identifiers indicating the target functions that an application program realizes through the target data. Alternatively, the function identifiers may be NetFn and Cmd; NetFn, Cmd, and the application program PID can be registered with the driver when the system is initialized, so the driving layer can find the PID of the application program according to the received NetFn and Cmd and send the data to the corresponding application program according to the PID.
For example, NetFn=1, Cmd=1 indicates that "hello world" is sent between the first operating system and the second operating system. An array is initialized at system startup with three columns: the first column is NetFn, the second column is Cmd, and the third column is the processing function corresponding to that NetFn and Cmd, denoted XxCmdHandler. For example, when the second operating system receives the message sent by the first operating system, NetFn and Cmd are obtained from the message; if NetFn=1 and Cmd=1, the processing function HelloCmdHandler corresponding to "hello world" is executed to complete the corresponding function.
In an exemplary embodiment, the data storage area includes a plurality of memory channels, each formed from one or more storage units, and the metadata storage area stores a plurality of records, each recording the metadata of one memory channel. The metadata of each memory channel at least includes the channel ID of the memory channel, the size of the memory channel, and the occupied state of the memory channel. The first operating system reading the record in the metadata storage area and determining, according to the read record, at least one storage unit in the data storage area that is in the idle state and whose total space is greater than or equal to the length of the target data to obtain the target virtual channel includes: traversing the records stored in the metadata storage area, and judging whether there is a first target record indicating that a memory channel is in the idle state and that the size of the memory channel is greater than or equal to the length of the target data; and, when the first target record exists, determining the memory channel indicated by the channel ID recorded in the first target record as the target virtual channel.
It should be noted that the data storage area may be divided into n virtual memory channels, each of which may differ in size; that is, the sizes of the n virtual channels may be, in order, 2^0*m, 2^1*m, 2^2*m, 2^3*m, ..., 2^(n-1)*m, where m is the size of one storage unit. The following structure is set up as the metadata managing a memory channel:
typedef struct {
    uint32_t Flag;
    uint16_t ChannelId;
    uint8_t  SrcId;
    uint8_t  NetFn;
    uint8_t  Cmd;
    uint32_t Len;
    uint32_t ChannelSize;
    uint8_t  *pData;
    uint8_t  CheckSum;
} IpiHeader_T;
Here, uint32_t Flag characterizes the state of the memory channel: for example, 0xA5A5A5A5 indicates that the channel is not empty, and any other value indicates that it is empty. uint16_t ChannelId represents the channel ID; uint8_t SrcId represents the ID of the source CPU, i.e., the CPU that writes data into the memory channel; uint8_t NetFn and uint8_t Cmd are functional parameters; uint32_t Len is the length of the data stored in the memory channel; uint32_t ChannelSize represents the size of the memory channel; uint8_t *pData points to the first address of the memory channel; and uint8_t CheckSum is a checksum. When the first operating system needs to send data, it computes a checksum value over the data to be sent using a checksum algorithm and sends the value to the second operating system. When the second operating system receives the data and the checksum value, it computes a checksum over the received data using the same algorithm and compares the computed value with the received one: if they are consistent the received data is valid, otherwise it is invalid. Each virtual memory channel corresponds to one structure record; the structure records are stored in sequence, in order of increasing channel ID, at the start of the shared memory. The structure records are initialized after the system powers on: Flag is initialized to 0 to indicate that the channel is empty, ChannelId is initialized in sequence to 0, 1, 2, ..., n-1, ChannelSize is initialized to the size of the corresponding virtual memory channel, and pData is initialized to point to the first address of the corresponding virtual memory channel.
In one exemplary embodiment, when determining the target virtual channel, the first operating system uses the interface GetEmptyChannel to search all memory channels, according to the size of the target data to be sent, for a virtual channel satisfying the following two conditions: the idle Flag in the channel structure IpiHeader_T is not equal to 0xA5A5A5A5 (i.e., the channel is in the idle state), and the ChannelSize in the channel structure is greater than or equal to the size of the target data (i.e., the memory size can meet the storage requirement of the target data). After a target virtual channel satisfying these conditions is found, the state of the channel is set to non-empty, that is, the idle Flag in the channel structure is set to 0xA5A5A5A5, and the target data is then copied into the target virtual channel.
In an exemplary embodiment, in the case where a memory channel is occupied, the metadata of the memory channel further includes the ID of the source CPU core of the target data and the ID of the destination CPU core of the target data. The second operating system reading the record in the metadata storage area and determining the target virtual channel according to the read record includes: traversing the records stored in the metadata storage area, and judging whether there is a second target record indicating that a memory channel is in the occupied state, that the ID of the destination CPU core is the ID of the CPU core of the second operating system, and that the ID of the source CPU core is not the ID of the CPU core of the second operating system; and, when the second target record exists, determining the memory channel indicated by the channel ID recorded in the second target record as the target virtual channel.
That is, the target virtual channel is the virtual channel among all channels satisfying the following three conditions: first, the idle Flag in the channel structure IpiHeader_T is equal to 0xA5A5A5A5 (i.e., the channel is in the occupied state); second, the TargetId in the channel structure is equal to the ID of the current CPU (i.e., the destination CPU of the target data is the CPU of the second operating system); third, the SrcId in the channel structure is not equal to the ID of the current CPU (i.e., the target data was not sent by the CPU of the second operating system).
If a Flag were a single bit that was originally 0 and suddenly flipped to 1, the system would read the Flag and wrongly conclude that the channel is not empty, causing a communication abnormality. In this embodiment, the idle Flag is therefore set to a multi-bit special value such as 0xA5A5A5A5; since the probability that multiple bits mutate simultaneously into exactly that special value is far smaller than the probability of a one-bit mutation, the influence of storage-medium bit flips on the Flag value can be prevented, thereby improving the security of communication.
In an exemplary embodiment, the metadata storage area stores a mapping table, the mapping table has a plurality of records, each record is used for recording an occupied state of a storage unit, the first operating system reads the record in the metadata storage area, determines at least one storage unit in an idle state in the data storage area according to the read record, and the total space is greater than or equal to the length of target data, and obtaining the target virtual channel includes: determining the preset number of storage units to be occupied by target data; scanning each record from the initial position of the mapping table in turn; under the condition that a continuous preset number of target records are scanned, determining continuous storage units indicated by the preset number of target records, wherein the target records represent the storage units in an idle state; the contiguous memory locations are determined to be the target virtual channel.
It should be noted that, to facilitate data storage and extraction, the operating system needs to occupy consecutive storage units in the memory when transferring service data, so the number of storage units must first be determined in the memory application instruction. Since the memory space of each storage unit is the same, the preset number of consecutive storage units needed can be calculated from the size of the required memory and is denoted as number.
Optionally, the first operating system traverses the records from an index position in the mapping table, where the index position may be the start position of the mapping table. It queries each record in the mapping table in sequence from the start position and determines whether there is a run of consecutive records indicating free memory pages whose count is greater than or equal to number. When such records exist, the corresponding consecutive storage units in the processor are determined through the correspondence between records and memory pages, and those consecutive storage units are determined as the target virtual channel into which the data is written.
In an exemplary embodiment, the interrupt notification message includes the first address of the consecutive storage units and the preset number. The second operating system reading the record in the metadata storage area and determining the target virtual channel according to the read record includes: scanning each record in turn from the start position of the mapping table; and, when a record carrying the first address of the consecutive storage units is scanned, determining the storage unit indicated by the scanned address, together with the preset number minus one consecutive storage units that follow it, as the target virtual channel.
Optionally, the consecutive storage units are a run of storage units whose count equals number, and each record in the mapping table also records the first address of its corresponding storage unit. When the second operating system scans a record in the mapping table whose address equals the first address of the consecutive storage units, the first address of the target virtual channel has been found: the storage unit indicated by that first address, together with the number-1 consecutive storage units after it, forms the target virtual channel. The second operating system then acquires the data in the target virtual channel, completing the data interaction with the first operating system.
In an exemplary embodiment, the scanned consecutive target records are recorded by a counter, and in the process of sequentially scanning each record from the initial position of the mapping table according to the number of storage units, the counter is controlled to be incremented in the case of current scanning to the target record, and the counter is controlled to be cleared in the case of current scanning to the non-target record.
Optionally, whether a run of the preset number of consecutive target records exists, i.e., whether there are the preset number of consecutive free storage units, is judged using the relationship between the value of the counter and the number of storage units required. Optionally, the value of the counter is denoted cntr: if a scanned storage unit is empty, cntr is incremented by 1; if a scanned storage unit is not empty, the accumulated count cntr of consecutive free storage units is cleared, and the search for consecutive free storage units continues from the position after that storage unit. When cntr equals number, a run of consecutive free storage units meeting the memory requirement has been found. If cntr never reaches number after the complete mapping table has been scanned, this dynamic memory application fails: there is no run of the preset number of consecutive storage units.
In an exemplary embodiment, before the first operating system reads the record in the metadata storage area, and determines at least one storage unit in the data storage area in an idle state and having a total space greater than or equal to the length of the target data according to the read record, the method further includes: the method comprises the steps that a first operating system sends a memory application instruction and performs locking operation on a memory of a processor, wherein the memory application instruction is used for applying for using the memory of the processor; and under the condition that the memory is successfully locked, reading the record in the mapping table.
Optionally, the memory application instruction is an instruction sent by an operating system running on the processor to apply to use the memory of the processor. It should be noted that, to prevent application conflicts caused by multiple operating systems applying to use the processor memory at the same time, when an operating system sends the memory application instruction it performs a locking operation on the memory of the processor, and only after locking succeeds can it apply to use the memory. The locking operation makes the memory application exclusive: after the current operating system locks successfully, as long as the lock is not released, other systems cannot obtain the right to apply to use the processor memory.
In one exemplary embodiment, performing a locking operation on a memory of a processor includes: judging whether the memory is in a locked state currently, wherein the locked state represents that the memory is in a state of being applied for use; under the condition that the memory is not in a locked state at present, locking operation is carried out on the memory; under the condition that the memory is in a locked state currently, determining that the memory is failed to be locked, and applying for locking the memory of the processor again after a preset time period until the memory is successfully locked, or until the number of times of applying for locking is larger than the preset number of times.
Before the processor runs, the metadata storage area and the data storage area in the processor need to be initialized, and optionally, the record stored in the mapping table in the metadata storage area is initialized, and the memory management information is initialized.
Before applying for the memory operation, the following configuration is performed on the memory management information:
typedef struct {
    uint32_t MemReady;
    uint32_t MemLock;
} MallocMemInfo_T;
The member variable MemReady of the structure MallocMemInfo_T indicates whether the shared memory has been initialized: a MemReady value of 0xA5A5A5A5 indicates that the initialization operation is completed, so memory can be dynamically applied for and released normally. The member variable MemLock of the structure MallocMemInfo_T characterizes whether the memory is locked.
Optionally, if the variable MemLock reads 0, no system or task is applying for the memory at this time, that is, the memory is not currently in the locked state. If the variable MemLock is 0xA5A5A5A5, a system or task is applying for the memory, the current locking attempt fails, and other applicants can only apply after that application completes and the lock is released.
In an exemplary embodiment, if locking the memory fails, the application is made again after waiting a preset time period, until the memory is successfully locked; for example, the preset time period may be 100 microseconds.
In an exemplary embodiment, if the locking application fails and the number of repeated applications exceeds the preset number, this indicates that the memory of the processor cannot be allocated for the time being, and the application operation is stopped. For example, the preset number may be 3: in the case that the number of locking applications is greater than 3, a message that the memory is currently unavailable may be returned to the operating system that sent the application.
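The lock-then-retry flow described above can be sketched in C. The 0xA5A5A5A5 lock value, the 100-microsecond wait, and the limit of 3 retries follow the text; the function names and the simulated lock word are illustrative, and a real shared-memory lock would need an atomic compare-and-swap rather than the plain read-then-write shown here:

```c
#include <stdint.h>

#define MEM_LOCKED     0xA5A5A5A5u
#define MEM_UNLOCKED   0u
#define RETRY_LIMIT    3
#define RETRY_DELAY_US 100

static volatile uint32_t mem_lock = MEM_UNLOCKED; /* shared lock word */

/* One locking attempt: succeeds only if the memory is not already locked. */
static int try_lock(void)
{
    if (mem_lock == MEM_UNLOCKED) {
        mem_lock = MEM_LOCKED;
        return 1;
    }
    return 0; /* another system or task holds the lock */
}

/* Initial attempt plus up to RETRY_LIMIT retries, waiting between tries. */
static int lock_with_retry(void)
{
    for (int attempt = 0; attempt <= RETRY_LIMIT; ++attempt) {
        if (try_lock())
            return 1;
        /* usleep(RETRY_DELAY_US); -- wait before the next application */
    }
    return 0; /* memory not allocatable for now; caller reports failure */
}
```

On success the caller proceeds to scan the mapping table; on failure it would return a "memory currently unavailable" message to the applying operating system.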
Optionally, after a target virtual channel available to the first operating system is found in the memory space of the processor, the first operating system stores the target data to be transmitted into the corresponding target virtual channel. In an exemplary embodiment, the occupied state of the memory space of the processor is updated according to the data written by the first operating system, that is, the target continuous memory space is changed from the unoccupied state to the occupied state, and the lock on the memory is released so that other systems or tasks can apply for the memory.
In an exemplary embodiment, the method further comprises: and releasing the locking of the memory under the condition that the continuous preset number of target records are not scanned.
Optionally, if, after the records in the mapping table have been scanned, a preset number of consecutive storage units in the idle state cannot be detected, there are not enough free memory pages in the memory of the processor for the first operating system to use; the dynamic memory application fails this time, and the lock on the memory is released.
In one exemplary embodiment, the interrupt notification message is sent to the second operating system by way of a software interrupt.
In one exemplary embodiment, sending the interrupt notification message to the second operating system by way of a software interrupt includes: writing an interrupt number and an ID of a CPU core of the second operating system into a preset register of the processor, and generating an interrupt notification message based on the interrupt number and the ID of the CPU core of the second operating system.
Optionally, a soft interrupt is an interrupt generated by software; software may send an interrupt to the CPU core executing it, or to other CPU cores. The preset register may be the GICD_SGIR register: software writes the SGI (Software Generated Interrupt) interrupt number and the destination CPU ID into the GICD_SGIR register to generate a software interrupt, where the SGI interrupt number is a soft interrupt number reserved for inter-core communication.
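A minimal sketch of the GICD_SGIR write just described. The bit layout (TargetListFilter in bits [25:24], CPUTargetList in bits [23:16], SGI interrupt ID in bits [3:0]) is assumed from the GICv2 architecture; the register address is platform-specific, so a plain variable stands in for the memory-mapped register here, and all names are illustrative:

```c
#include <stdint.h>

static uint32_t gicd_sgir_sim; /* stand-in for the GICD_SGIR register */

static uint32_t encode_sgi(uint32_t sgi_id, uint32_t target_cpu)
{
    uint32_t filter = 0u; /* 0b00: route to the CPUs in CPUTargetList */
    return (filter << 24) | ((1u << target_cpu) << 16) | (sgi_id & 0xFu);
}

static void send_sgi(uint32_t sgi_id, uint32_t target_cpu)
{
    /* On real hardware: *(volatile uint32_t *)GICD_SGIR_ADDR = encode_sgi(...); */
    gicd_sgir_sim = encode_sgi(sgi_id, target_cpu);
}
```

For example, `send_sgi(8, 2)` would raise inter-core interrupt number 8 (the first entry of the 8-15 range mentioned below) on CPU core 2.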
In order to be maximally compatible with the existing resource allocation scheme in the multi-core heterogeneous operating system, the inter-core interrupt vector table is represented by the numbers 8-15 (8 interrupts in total). In the case where the first operating system is an RTOS and the second operating system is a Linux operating system, one possible allocation scheme of the vector table is shown in Table 1:
TABLE 1
(The table is provided as an image in the original publication; it lists the allocation of the inter-core interrupt numbers 8-15 between the two operating systems.)
In one exemplary embodiment, the interrupt notification message is sent to the second operating system by way of a hardware interrupt.
Optionally, the hardware interrupt refers to an interrupt generated by a hardware device, and may be a private peripheral interrupt or a shared peripheral interrupt. It should be noted that a hard interrupt is introduced by hardware outside the CPU and arrives randomly, whereas a soft interrupt is introduced by an interrupt instruction executed by software running on the CPU and is therefore predetermined. This embodiment does not limit the manner in which the interrupt notification message is generated.
In an alternative embodiment, a way of sharing memory is provided. The method comprises the following steps:
step 101, receiving a memory application instruction, and executing a locking operation on a memory of a processor, where the memory application instruction is used for applying for using the memory of the processor.
Optionally, the memory application instruction is an instruction sent by an operating system running on the processor to apply for use of the processor's memory. It should be noted that, in order to prevent multiple operating systems from applying for the memory of the processor at the same time and causing an application conflict, a locking operation is performed on the memory of the processor when the operating system sends the memory application instruction, and the application for use of the memory can proceed only after locking succeeds. The locking operation makes the memory application exclusive: after the current operating system locks the memory successfully, other operating systems cannot apply for the right to use the memory of the processor until the lock is released.
In the method for sharing a memory provided in the embodiments of the present application, before performing a locking operation on a memory of a processor, the method further includes: judging whether the memory is in a locked state currently, wherein the locked state represents that the memory is in a state of being applied for use; and executing locking operation on the memory under the condition that the memory is not in the locked state currently.
Optionally, because multiple systems or multiple tasks can cause application conflicts when applying for use of the memory at the same time, the memory of the processor can only be locked by one system or task in the same time period, and therefore, the current operating system can only execute the locking operation on the memory under the condition that the current memory is detected not to be in the locked state.
Optionally, whether the memory is in the locked state is determined by judging whether a preset variable stored in the memory equals a preset value. If the preset variable does not equal the preset value, the memory is not in the locked state, no other system or task is applying for the memory space, and locking succeeds; otherwise, if the preset variable equals the preset value, the memory is in the locked state at the current moment, another system or task besides this operating system is applying for the memory space, and locking fails.
In the method for sharing the memory, after judging whether the memory is in the locked state currently, the method further comprises: under the condition that the memory is in a locked state currently, determining that the memory is failed to be locked; under the condition that the locking of the memory fails, the memory of the processor is applied to be locked again after the preset time period until the memory is successfully locked, or until the number of times of applying the locking is larger than the preset number of times.
Optionally, if locking the memory fails, the application is made again after waiting a preset time period, until the memory is successfully locked; for example, the preset time period may be 100 microseconds.
In an exemplary embodiment, if the locking application fails and the number of repeated applications exceeds the preset number, this indicates that the memory of the processor cannot be allocated for the time being, and the application operation is stopped. For example, the preset number may be 3: in the case that the number of locking applications is greater than 3, a message that the memory is currently unavailable may be returned to the operating system that sent the application.
Step 102, under the condition that the memory is successfully locked, the occupied state of the memory is read, and whether an idle target memory space exists in the memory or not is judged according to the occupied state of the memory, wherein the size of the target memory space is larger than or equal to the size of the memory applied by the memory application instruction.
After the locking application succeeds, the operating system applies for memory in the processor. Optionally, the information recording the occupied state of the memory is scanned to judge whether a target memory space exists, that is, whether the processor contains a continuous memory space that is in the unoccupied state and can meet the memory use requirement, where meeting the memory use requirement means that the size of the memory space is greater than or equal to the size of the memory applied for by the operating system.
It should be noted that, when applying for memory, a discontinuous memory space may also be used: a pointer is appended at the end of each minimum memory block, pointing to the minimum memory block obtained by the next application, and when reading and writing, data access across blocks is achieved according to the storage addresses and the pointers. This embodiment does not limit the form of the target memory space.
Step 103, under the condition that the target memory space exists in the memory, feeding back the address information of the target memory space to the sending end of the memory application instruction, updating the occupied state of the memory, and releasing the locking of the memory.
The transmitting end refers to the operating system that sends the memory application instruction. It should be noted that, when communicating between cores, the operating systems use the shared memory to send and receive data, and during sending and receiving they access the data through the address returned for the applied memory; therefore, the address information of the applied memory space needs to be determined.
Optionally, after a target memory space available for the operating system exists in the memory space of the processor, address information of the target continuous space is sent to the operating system, and the operating system stores data to be transmitted into the corresponding memory space according to the address information.
In one exemplary embodiment, the occupied state of the memory space of the processor is updated according to the data writing situation of the operating system, that is, the target memory space is changed from the unoccupied state to the occupied state, and the locking operation before the memory is dynamically applied is released, so that other operating systems can apply for using the memory space of the processor.
Through the steps: receiving a memory application instruction and executing locking operation on a memory of a processor, wherein the memory application instruction is used for applying for using the memory of the processor; under the condition that the memory is successfully locked, the occupied state of the memory is read, and whether an idle target memory space exists in the memory or not is judged according to the occupied state of the memory, wherein the size of the target memory space is larger than or equal to the size of the memory applied by the memory application instruction; under the condition that a target memory space exists in the memory, address information of the target memory space is fed back to a sending end of a memory application instruction, the occupied state of the memory is updated, and locking of the memory is released, so that the problems of low use efficiency, poor flexibility and excessive dependence on an operating system of a plurality of cores are solved, and the effects of improving the flexibility and the use efficiency of the shared memory and reducing the dependence on the operating system are achieved.
In the method for sharing the memory, the memory comprises a metadata storage area and a data storage area, the data storage area is used for storing service data, the metadata storage area is stored with a mapping table, the mapping table is used for recording the occupied state of the data storage area, reading the occupied state of the memory, and judging whether the memory has an idle target memory space according to the occupied state of the memory comprises the following steps: and reading the record in the mapping table from the metadata storage area, and judging whether the target memory space exists in the data storage area according to the record in the mapping table.
And querying the occupied state of the memory by querying the records in the mapping table, optionally, acquiring a metadata storage area stored in the processor, identifying the mapping table in the metadata storage area, and reading the occupied state of the data storage area by traversing the records in the mapping table to judge whether a continuous memory space which is in an idle state and meets the use requirement of the memory exists in the data storage area.
In the method for sharing memory provided in the embodiment of the present application, the data storage area is formed by a plurality of memory pages, the mapping table includes a plurality of records, each record is used for recording an occupied state of one memory page, reading the record in the mapping table from the metadata storage area, and determining whether a target memory space exists in the data storage area according to the record in the mapping table includes: determining the preset number of memory pages of a memory application instruction application; scanning each record from the initial position of the mapping table in turn; and under the condition that a continuous preset number of target records are scanned, determining that a target memory space exists in the memory, wherein the target records indicate that a memory page is in an idle state.
It should be noted that the data storage area is divided into a plurality of allocation units of the same memory size, each recorded as a memory page. For example, if the memory space of the data storage area is A bytes and each divided allocation unit is B bytes, the data storage area contains A/B memory pages in total. A record in the mapping table, that is, a memory page record, records the occupied state of one memory page, and the number of memory page records in the mapping table is the same as the number of memory pages in the data storage area.
The data storage area is a dynamically allocated memory block area, and the metadata storage area includes a dynamically allocated memory mapping table area. The mapping table area is divided into the same number of records as the number of memory pages into which the data storage area is divided; these records are denoted memory page records, and together they form the mapping table. There is a one-to-one correspondence between the memory page records in the mapping table and the memory pages in the data storage area, and each memory page record represents the allocation state of its corresponding memory page, that is, whether the memory page is occupied.
Optionally, because the service data coordinated between the operating systems needs to occupy consecutive memory pages in the processor, the preset number of memory pages requested by the memory application instruction is determined first. Because the memory space of each memory page is the same, the preset number of required consecutive memory pages can be calculated from the size of the required memory; this count is denoted number.
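The page layout and the computation of number can be sketched as follows. The concrete sizes (a 4096-byte data area split into 64-byte pages) and all names are illustrative stand-ins for the A and B of the text:

```c
#include <stdint.h>

#define DATA_AREA_SIZE 4096u /* "A" bytes */
#define PAGE_SIZE      64u   /* "B" bytes */
#define PAGE_COUNT     (DATA_AREA_SIZE / PAGE_SIZE) /* A/B memory pages */

/* Mapping table: one record per memory page, 0 = free, 1 = occupied. */
static uint8_t page_map[PAGE_COUNT];

/* "number" in the text: pages needed for a request of `size` bytes,
 * rounded up to whole pages. */
static uint32_t pages_needed(uint32_t size)
{
    return (size + PAGE_SIZE - 1u) / PAGE_SIZE;
}
```

For instance, a 65-byte request under these assumed sizes needs two consecutive pages.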
In an exemplary embodiment, after the mapping table in the metadata storage area of the processor is obtained, the memory page records are traversed from an index position in the mapping table, where the index position may be the start position of the mapping table. Each memory page record is queried in turn from the start position to judge whether there are number or more consecutive memory page records indicating free memory pages; if records meeting this condition exist, it is determined, through the correspondence between memory page records and memory pages, that a target memory space exists in the processor.
In the method for sharing a memory provided in the embodiments of the present application, after each record is scanned sequentially from an initial position of a mapping table, the method further includes: and under the condition that all records in the mapping table are scanned and no continuous target records with preset number exist, determining that no target memory space exists in the memory.
Optionally, starting from the start position of the mapping table, the memory page records are queried to judge whether a continuous run of free memory pages of length greater than or equal to number exists; if no such run of the preset number of consecutive free memory pages is found after the whole mapping table has been scanned, no target memory space exists.
In the method for sharing the memory provided in the embodiment of the present application, the number of scanned target records is recorded by a counter, and in the process of scanning each record sequentially from the initial position of the mapping table, the counter is controlled to be incremented when the target record is currently scanned, and the counter is controlled to be cleared when the non-target record is currently scanned, wherein the non-target record indicates that the memory page is in an occupied state.
Optionally, whether a continuous preset number of target records exists, that is, whether a target memory space exists, is judged using the relationship between the value of the counter and the number of required memory pages. Optionally, the value of the counter is denoted cntr. If a scanned memory page is free, cntr is incremented by 1; if a scanned memory page is not free, the accumulated count cntr of consecutive free memory pages is cleared, and the search for consecutive free memory pages continues from the address after that memory page. When cntr equals number, a run of consecutive free memory pages meeting the memory requirement has been found; if cntr remains smaller than number after the complete mapping table has been scanned, the dynamic memory application fails and no target memory space exists.
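The counter-based scan just described can be sketched in C: walk the mapping table from the front, increment cntr on a free record, clear it on an occupied one, and succeed once cntr reaches number. The 16-entry table and all names are illustrative:

```c
#include <stdint.h>

#define N_PAGES 16
static uint8_t scan_map[N_PAGES]; /* 0 = free, 1 = occupied */

/* Returns the index of the first page of a run of `number` consecutive
 * free pages, or -1 if the whole table is scanned without success. */
static int find_free_run(uint32_t number)
{
    uint32_t cntr = 0; /* consecutive free pages seen so far */
    int offset = 0;    /* index of the first record of the current run */

    for (int i = 0; i < N_PAGES; ++i) {
        if (scan_map[i] == 0) {
            if (cntr == 0)
                offset = i;        /* remember where the run starts */
            if (++cntr == number)
                return offset;     /* run long enough: application succeeds */
        } else {
            cntr = 0;              /* run broken: clear the counter */
        }
    }
    return -1; /* dynamic application fails: no target memory space */
}
```

The returned index would then be converted to the first address fed back to the applying operating system.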
In the method for sharing a memory provided in the embodiment of the present application, when the initial position is the last position in the mapping table, feeding back the address information of the target memory space to the sending end of the memory application instruction includes: and determining the last scanned target record in the continuous preset number of target records, and feeding back the head address of the memory page indicated by the last scanned target record to the transmitting end.
Optionally, when the mapping table is scanned, scanning may start either from the first position of the mapping table or from the last position. When scanning starts from the last position, once the counter value cntr is greater than or equal to the preset number, the first address of the memory page indicated by the last scanned target record is taken as the first address of the whole run of consecutive memory pages for the current memory application instruction, and the state of these memory pages is set to non-free in their memory page records.
In one exemplary embodiment, the address is fed back to the operating system that issued the memory application instruction, and the operating system performs a data writing operation on the memory according to the address information.
In the method for sharing a memory provided in the embodiment of the present application, the initial position is the first position in the mapping table, and feeding back address information of the target memory space to the sending end of the memory application instruction includes: and determining the first scanned target record in the continuous preset number of target records, and feeding back the first address of the memory page indicated by the first scanned target record to the transmitting end.
Optionally, when the scanning mode is scanning from the first position of the mapping table, under the condition that the numerical value cntr displayed by the counter is greater than or equal to the preset number, the address recorded by the scanned first memory page is used as the first address, the first address is sent to the operating system sending the memory application instruction, and the operating system performs the data writing operation on the memory according to the address information.
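In either scanning direction, the first address fed back reduces to an offset from the base of the data storage area: the index of the run's first page times the page size. A minimal sketch with illustrative names:

```c
#include <stdint.h>

/* First address handed back to the applying operating system. */
static uintptr_t page_first_addr(uintptr_t data_base, uint32_t page_index,
                                 uint32_t page_size)
{
    return data_base + (uintptr_t)page_index * (uintptr_t)page_size;
}
```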
In the method for sharing the memory provided in the embodiment of the present application, during the process of scanning each record sequentially from the initial position of the mapping table, the first target record in the scanned continuous target records is stored through a preset variable.
Optionally, the preset variable is a variable used to store the address information of the starting position of the run in the mapping table, denoted offset. Each time a free, consecutive memory page is scanned, the counter value cntr is incremented by 1; when cntr is greater than or equal to the preset number, the address information currently stored in offset is used as the address of the first target record.
In the method for sharing a memory provided in the embodiment of the present application, after reading an occupied state of a memory and determining whether an idle target memory space exists in the memory according to the occupied state of the memory, the method further includes: and releasing the locking of the memory under the condition that no idle target memory space exists in the memory.
Optionally, if, after the memory page records in the mapping table have been scanned, no preset number of consecutive free memory pages is detected, that is, no target memory space exists, there are not enough free memory pages in the memory of the processor for the operating system to use; the dynamic memory application fails, and the lock on the memory is released.
In the method for sharing a memory provided in the embodiment of the present application, the memory includes a metadata storage area and a data storage area, the data storage area is used for storing service data, the metadata storage area stores memory management information, and determining whether the memory is currently in a locked state includes: reading the memory management information stored in the metadata storage area, and judging whether the memory management information contains preset information, wherein the preset information indicates that the memory is in the locked state; determining that the memory is currently in the locked state in the case that the memory management information contains the preset information; and determining that the memory is not currently in the locked state in the case that the memory management information does not contain the preset information.
Judging whether the memory of the processor is in a locked state or not by using the memory management information in the metadata storage area, and optionally, when the memory management information of the metadata storage area is acquired, judging whether the memory management information contains preset information or not by using the memory management information, wherein the preset information is used for representing whether the memory is in the locked state or not; if the memory management information does not contain preset information, the current memory is in an unlocked state, otherwise, the current memory is in a locked state.
In the method for sharing a memory provided in the embodiment of the present application, the memory management information includes first field information and second field information, where the first field information is used to describe whether the memory is in the locked state, and the second field information is used to describe whether initialization of the memory is complete. Before receiving the memory application instruction, the method further includes: initializing the first field information and the second field information stored in the metadata storage area.
Before the embedded system operates, the metadata storage area and the data storage area in the processor need to be initialized, and optionally, the memory page record stored by the mapping table in the metadata storage area is initialized, and the memory management information is initialized.
Optionally, the memory management information is composed of the first field information and the second field information: the first field information characterizes whether the memory is locked, and the second field information characterizes whether initialization is complete. Before memory application operations are performed, the memory management information is configured as follows:
typedef struct {
uint32_t MemReady;
uint32_t MemLock;
}MallocMemInfo_T;
the member variable MemLock (first field information) of the struct MallocMemInfo_T characterizes whether the shared memory is locked, and the member variable MemReady (second field information) of the struct MallocMemInfo_T characterizes whether the shared memory has been initialized. A MemLock value of 0 indicates that no system or task is applying for the memory, that is, the shared memory is not locked; a MemLock value of 0xA5A5A5A5 indicates that a system or task is applying for the memory, and other systems or tasks may apply only after that application completes. A MemReady value of 0xA5A5A5A5 indicates that the initialization operation has been completed and the memory can be dynamically applied for and released normally.
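Initializing and querying this management header can be sketched as follows. The struct repeats the document's MallocMemInfo_T and the magic values follow the text; the function names are illustrative:

```c
#include <stdint.h>

#define MEM_READY_MAGIC 0xA5A5A5A5u /* MemReady: initialization complete */
#define MEM_LOCK_MAGIC  0xA5A5A5A5u /* MemLock: a system/task holds the lock */

typedef struct {
    uint32_t MemReady;
    uint32_t MemLock;
} MallocMemInfo_T;

static MallocMemInfo_T g_meminfo;

/* Performed once before any memory application: unlocked and ready. */
static void meminfo_init(MallocMemInfo_T *info)
{
    info->MemLock  = 0u;
    info->MemReady = MEM_READY_MAGIC;
}

static int mem_is_locked(const MallocMemInfo_T *info)
{
    return info->MemLock == MEM_LOCK_MAGIC;
}

static int mem_is_ready(const MallocMemInfo_T *info)
{
    return info->MemReady == MEM_READY_MAGIC;
}
```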
In the method for sharing a memory provided in the embodiment of the present application, updating the occupied state of the memory includes: and changing the state of the memory page corresponding to the target memory space recorded in the mapping table into an occupied state.
Optionally, in the case that the operating system needs to occupy the target memory space, the address information of the memory pages of the target memory space is identified, and the corresponding memory page records in the mapping table area of the metadata storage area are updated from the unoccupied state to the occupied state according to the correspondence between memory pages and memory page records.
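Updating the mapping table after a successful application (and releasing pages later) then amounts to flipping the records of the allocated run; a sketch under an assumed one-byte-per-record layout, with illustrative names:

```c
#include <stdint.h>

#define MAP_PAGES 16
static uint8_t map_records[MAP_PAGES]; /* 0 = unoccupied, 1 = occupied */

/* Set the `count` page records starting at `first` to `state`:
 * 1 when the pages are allocated, 0 when they are released. */
static void set_page_records(uint8_t *map, uint32_t first, uint32_t count,
                             uint8_t state)
{
    for (uint32_t i = 0; i < count; ++i)
        map[first + i] = state;
}
```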
In an alternative embodiment, a method of communication between operating systems is provided. The method comprises the following steps:
step 201, receiving a memory application instruction of a first operating system, and performing a locking operation on a memory of a processor, where the memory application instruction is used for applying for using the memory of the processor;
it should be noted that, in order to prevent multiple operating systems from applying for the memory space of the processor at the same time and causing an application conflict, a locking operation is performed on the memory of the processor when the first operating system sends the memory application instruction, and the memory can be applied for only after locking succeeds.
Optionally, whether locking succeeds is determined by judging whether a preset variable stored in the memory equals a preset value. If the preset variable does not equal the preset value, no other system or task is applying for the memory space, and locking succeeds; otherwise, if the preset variable equals the preset value, at the current moment another system or task besides this operating system is applying for the memory space, and locking fails.
Step 202, under the condition that the memory is successfully locked, reading the occupied state of the memory, and judging whether an idle target memory space exists in the memory according to the occupied state of the memory, wherein the size of the target memory space is larger than or equal to the size of the memory applied by the memory application instruction;
Optionally, when the application for locking is successful, according to a memory application instruction sent by the operating system, information for recording the occupied state of the memory is scanned to determine whether a target memory space exists, that is, whether a continuous memory space in an unoccupied state exists in the processor, and in an exemplary embodiment, whether the size of the continuous memory space in the unoccupied state is greater than or equal to the size of the memory applied by the operating system is determined, so as to obtain a determination result.
Step 203, feeding back the address information of the target memory space to the first operating system, updating the occupied state of the memory, and releasing the locking of the memory under the condition that the target memory space exists in the memory;
optionally, after the judging result indicates that the memory space of the processor has a target memory space available for the operating system, address information of the target continuous space is sent to the operating system, and the operating system stores data to be transmitted into the corresponding memory space according to the address information.
Further, the occupied state of the memory space of the processor is updated according to the data writing condition of the operating system, namely, the target memory space is changed from the unoccupied state to the occupied state, and the locking operation before the memory is dynamically applied is released.
Step 204, responding to the storage operation of the first operating system, storing the target data into the target memory space, and sending the address information of the continuous memory space to the second operating system;
optionally, after the memory application succeeds, the first operating system stores the target data to be transferred into the applied target memory space, and sends the address information of the target memory space to the second operating system that cooperates with it, so as to notify the second operating system to acquire the data.
Step 205, receiving an acquisition instruction sent by the second operating system based on the address information, and sending the target data stored in the target memory space to the second operating system.
Optionally, after the second operating system receives the address information of the target memory space, the second operating system sends a data acquisition instruction, and the embedded system receives the instruction and sends the target data stored in the target memory space to the second operating system.
Through the steps above: a memory application instruction of the first operating system is received and a locking operation is performed on the memory of the processor, where the memory application instruction applies to use the memory of the processor; when the memory is locked successfully, the occupied state of the memory is read, and whether a free target memory space exists in the memory is determined according to that occupied state, where the size of the target memory space is greater than or equal to the size applied for by the memory application instruction; when the target memory space exists in the memory, its address information is fed back to the sender of the memory application instruction, the occupied state of the memory is updated, and the lock on the memory is released; in response to the storage operation of the first operating system, the target data is stored into the target memory space and the address information of the target memory space is sent to the second operating system; an acquisition instruction sent by the second operating system based on the address information is received, and the target data stored in the target memory space is sent to the second operating system. This solves the problems of low usage efficiency, poor flexibility, and excessive dependence of multiple cores on the operating system, and achieves the effects of improving the flexibility and usage efficiency of the shared memory and reducing dependence on the operating system.
In one exemplary embodiment, in the case that the first operating system performs data read and write operations using physical addresses and the second operating system performs data read and write operations using virtual addresses, the second operating system converts address information of the target memory space into virtual addresses, accesses the memory using the virtual addresses, and reads the target data from the target memory space.
Because the shared memory is used for inter-core communication, data is sent and received through addresses returned by the dynamic memory application, but different systems may use different address schemes. For example, with a real-time operating system as the first operating system and a non-real-time operating system as the second operating system, the shared memory can be accessed directly with a physical address in the real-time operating system, while in the non-real-time operating system it cannot be accessed directly with a physical address: a mapped virtual address is required. After the second operating system receives the address information of the target memory space, it converts the address information through an address offset, maps it to a virtual address, and operates according to that virtual address. Optionally, under the non-real-time operating system the shared memory has a virtual base address vBase (the real physical address of the shared memory is assumed to be 0x96000000); under the real-time operating system the shared memory has a physical base address pBase (i.e., 0x96000000).
The address returned by a dynamic memory application in the non-real-time operating system is a virtual address vData, and in the non-real-time operating system Offset = vData - vBase. When data is transmitted from the non-real-time operating system to the real-time operating system, the real-time operating system accesses the dynamically applied shared memory using the address pData = pBase + Offset.
The address returned by a dynamic memory application in the real-time operating system is a physical address pData, and in the real-time operating system Offset = pData - pBase. When data is transmitted from the real-time operating system to the non-real-time operating system, the non-real-time operating system accesses the dynamically applied shared memory using the address vData = vBase + Offset.
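The offset-based conversion in the two directions above can be sketched in C as follows; the function names and the fixed physical base are illustrative, and only the arithmetic (Offset = address - base on one side, base + Offset on the other) follows the text.

```c
#include <stdint.h>

/* Assumed physical base of the shared memory (0x96000000 in the example). */
#define P_BASE 0x96000000UL

/* Non-real-time side: convert the virtual address vData returned by the
 * dynamic application into the physical address the real-time side uses.
 * Offset = vData - vBase; pData = pBase + Offset. */
static inline uintptr_t virt_to_phys(uintptr_t vData, uintptr_t vBase)
{
    uintptr_t offset = vData - vBase;
    return P_BASE + offset;
}

/* Real-time side: convert the physical address pData into the virtual
 * address the non-real-time side uses.
 * Offset = pData - pBase; vData = vBase + Offset. */
static inline uintptr_t phys_to_virt(uintptr_t pData, uintptr_t vBase)
{
    uintptr_t offset = pData - P_BASE;
    return vBase + offset;
}
```

Each side only needs its own base address and the offset carried with the data, so no page-table information has to cross the system boundary.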
In an exemplary embodiment, the memory includes a metadata storage area and a data storage area, the data storage area is formed by a plurality of memory pages, each memory page is used for storing service data, the metadata storage area stores a mapping table, the mapping table includes a plurality of records, each record is used for recording an occupied state of one memory page, reading the occupied state of the memory, and determining whether an idle target memory space exists in the memory according to the occupied state of the memory includes: determining the preset number of memory pages of a memory application instruction application; scanning each record from the initial position of the mapping table in turn; and under the condition that a continuous preset number of target records are scanned, determining that a target memory space exists in the memory, wherein the target records indicate that a memory page is in an idle state.
Optionally, the metadata storage area stored in the processor is obtained and the mapping table in the metadata storage area is identified. Each memory page record is traversed from an index position in the mapping table and queried in turn, to determine whether there are records corresponding to a number of consecutive free memory pages greater than or equal to the preset number. If such records exist, the existence of the target memory space in the processor is determined through the correspondence between the memory page records and the memory pages.
In an exemplary embodiment, in the case that the initial position is the last position in the mapping table, feeding back the address information of the target memory space to the sending end of the memory application instruction includes: and determining the last scanned target record in the continuous preset number of target records, and feeding back the head address of the memory page indicated by the last scanned target record to the transmitting end.
Optionally, when the mapping table is scanned, scanning may start either from the first position of the mapping table or from its last position. When scanning starts from the last position, the memory pages corresponding to the scanned records are marked as occupied, and the first address of the memory page indicated by the last scanned record is used as the first address of the whole continuous memory block for the current memory application instruction. In one exemplary embodiment, this address is fed back to the operating system that issued the memory application instruction, and that operating system performs the data writing operation on the memory according to the address information.
This embodiment also provides a method for sharing the memory, including the following steps. Before an operating system issues a memory application instruction, a locking operation must be applied for, to prevent application conflicts caused by multiple operating systems applying for the memory space of the processor at the same time, and whether locking succeeds is then judged. If the judgment result shows that locking the dynamically applied memory succeeds, the number of continuous memory pages to be allocated is calculated from the memory size in the issued memory application instruction and denoted nmemb. If the judgment result shows that locking fails, the application is reissued after waiting for a period of time (which may be 100 microseconds) until it succeeds; if the number of failed locking attempts exceeds a preset number (which may be three), the memory application is abandoned.
In an exemplary embodiment, after the lock is applied successfully, the metadata storage area of the processor is initialized and the last position of the mapping table is denoted offset. The number of required continuous memory pages is calculated from the space size in the memory application instruction and denoted nmemb, and a counter recording the number of scanned pages is denoted cmemb. The mapping table of the metadata storage area in the processor is obtained and the whole table is scanned starting from the offset position, searching for continuous empty memory pages through the correspondence between the memory page records stored in the mapping table and the memory pages in the data storage area. If the currently scanned memory page is occupied, offset = offset - cmemb is performed, the accumulated count cmemb of continuous empty pages is reset, and the search for continuous empty pages continues from the new offset position. If the scanned memory page is empty, i.e., in an idle state, the counter cmemb is increased by 1 and offset = offset - 1, and the next memory page is judged, until cmemb equals nmemb, i.e., when the counter equals the required number of pages, a continuous run of memory pages meeting the requirement has been found.
In an exemplary embodiment, the memory pages meeting the requirement are marked as occupied in the corresponding mapping table records, the first address of the last found memory page is used as the first address of the whole continuous memory block of the dynamic application, the lock on the dynamically applied memory is released, and the dynamic memory application succeeds.
If the value of offset becomes smaller than 0 while scanning the whole mapping table, no memory pages meeting the requirement can be provided to the operating system; the lock on the dynamically applied memory is released, and the dynamic memory application fails.
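A simplified sketch of the backward scan described above, assuming a one-byte-per-page mapping table; the name find_contiguous_pages and the return convention are hypothetical, and the bookkeeping is reduced to a single counter that resets when an occupied page breaks the run.

```c
#include <stdint.h>
#include <stddef.h>

#define PAGE_FREE     0
#define PAGE_OCCUPIED 1

/* Scan the mapping table from its last record toward the first, looking
 * for nmemb consecutive free pages.  Returns the index of the last record
 * scanned in the run (the lowest index, i.e. the first address of the
 * block), or -1 if offset falls below 0 without finding a run. */
static int find_contiguous_pages(const uint8_t *map, size_t total, size_t nmemb)
{
    size_t cmemb = 0;               /* counter of consecutive free pages  */
    long offset = (long)total - 1;  /* start from the last record         */

    while (offset >= 0) {
        if (map[offset] == PAGE_FREE) {
            cmemb++;
            if (cmemb == nmemb)
                return (int)offset; /* block starts at the last scanned page */
        } else {
            cmemb = 0;              /* run broken: restart the count      */
        }
        offset--;
    }
    return -1;                      /* offset < 0: application fails      */
}
```

Because the scan moves backward, the last page found has the lowest index, which is why the text uses it as the first address of the whole continuous block.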
In addition, the size can be adjusted dynamically when the space is found to be insufficient after it has been applied for. Specifically, an updated memory application instruction can be issued and the locking operation performed on the memory again. If locking succeeds and the memory space required by the updated instruction has increased, whether the required memory space exists immediately after the already applied target continuous memory is judged; if the memory space required by the updated instruction has decreased, the surplus memory space is released.
According to this embodiment, by dividing the memory into multiple storage areas and using the index position, memory is applied for dynamically according to the size of the space actually needed and released after use is complete, and the size can be adjusted dynamically when the applied space is found to be insufficient, which achieves the effects of improving the flexibility and usage efficiency of the shared memory.
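The dynamic resizing described above can be sketched on the same one-byte-per-page mapping-table model; resize_pages and its arguments are hypothetical, and the lock described in the text is assumed to be held by the caller.

```c
#include <stdint.h>
#include <stddef.h>

/* Resize an existing allocation of old_n pages starting at 'start'.
 * Growing succeeds only if the pages immediately following the block are
 * free; shrinking releases the tail pages.  Returns 0 on success, -1 on
 * failure.  The mapping-table lock is assumed to be held. */
static int resize_pages(uint8_t *map, size_t total, size_t start,
                        size_t old_n, size_t new_n)
{
    if (new_n == old_n)
        return 0;
    if (new_n < old_n) {                       /* shrink: free the tail   */
        for (size_t i = start + new_n; i < start + old_n; i++)
            map[i] = 0;
        return 0;
    }
    if (start + new_n > total)                 /* grow past end of memory */
        return -1;
    for (size_t i = start + old_n; i < start + new_n; i++)
        if (map[i] != 0)                       /* following pages in use  */
            return -1;
    for (size_t i = start + old_n; i < start + new_n; i++)
        map[i] = 1;                            /* mark newly occupied     */
    return 0;
}
```

This matches the text's rule: growing checks the space directly after the applied target continuous memory, while shrinking simply releases the surplus pages.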
In one exemplary embodiment, a first interrupt request may be sent to the first operating system through the second operating system by, but is not limited to, the following: storing the resource information of the target processing resource into a shared memory on the chip through the second operating system; and sending the first interrupt request to the first operating system through the second operating system, wherein the first interrupt request is used for indicating to preempt the target processing resource indicated by the resource information stored in the shared memory.
Alternatively, in this embodiment, interrupt requests may be transmitted between the operating systems by, but not limited to, using interrupts in combination with the shared memory. An operating system stores the actual content to be exchanged into the shared memory on the chip, and notifies the other party through an interrupt request to acquire the specific content from the shared memory.
Optionally, in this embodiment, the shared memory on the chip may be, but is not limited to, a memory space accessible by both the first operating system and the second operating system, and the two operating systems may, but are not limited to, exchange information through it. For example, the resource information of the target processing resource, including the resource type and the amount of resources required by the second operating system, may be, but is not limited to, stored in the shared memory on the chip, and the first operating system may obtain the resource information of the target processing resource by accessing the shared memory on the chip.
Optionally, in this embodiment, an example of a resource information format written into the shared memory is provided. Table 2 is an example of a format of resource information written to shared memory according to an embodiment of the present application, as shown in table 2, the following format of resource information may be written to shared memory:
the field Type is 8 bytes in size, 0 represents a memory resource, 1 represents a peripheral resource, and 2 represents a processor interrupt resource.
The field MemSize is 8 bytes in size and represents the size of the storage resource that needs to be preempted.
The field DevName is 16 bytes in size and represents the peripheral identification of the peripheral resource that needs to be preempted (which may be, but is not limited to, the name of the peripheral resource).
The field DevNum has a size of 8 bytes and represents the number of peripherals that need to preempt the peripheral resources.
The field IntNum is 8 bytes in size and represents the number of interrupts that need to preempt the processor interrupt resource.
TABLE 2

Field    Size      Description
Type     8 bytes   Resource type: 0 memory resource, 1 peripheral resource, 2 processor interrupt resource
MemSize  8 bytes   Size of the storage resource that needs to be preempted
DevName  16 bytes  Peripheral identification of the peripheral resource that needs to be preempted
DevNum   8 bytes   Number of peripherals of the peripheral resource that need to be preempted
IntNum   8 bytes   Number of interrupts of the processor interrupt resource that need to be preempted
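A C structure matching the field sizes described for Table 2 might look as follows; the struct and field names are illustrative, and the layout assumes the usual 8-byte alignment for 64-bit integers, under which the record occupies 48 bytes with no padding.

```c
#include <stdint.h>

/* Resource information record written to the shared memory, laid out to
 * match the field sizes given for Table 2 (names are hypothetical). */
struct resource_info {
    uint64_t type;         /* 0: memory, 1: peripheral, 2: processor interrupt */
    uint64_t mem_size;     /* size of the storage resource to preempt          */
    char     dev_name[16]; /* identifier of the peripheral to preempt          */
    uint64_t dev_num;      /* number of peripherals to preempt                 */
    uint64_t int_num;      /* number of processor interrupts to preempt        */
};

/* 8 + 8 + 16 + 8 + 8 = 48 bytes under the assumed alignment. */
_Static_assert(sizeof(struct resource_info) == 48,
               "unexpected padding in resource_info");
```

Fixing the layout like this matters because both operating systems read the same bytes from the shared memory, so any implicit padding would shift the fields between the two sides.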
In one exemplary embodiment, the target processing resource may be released from the first processing resource by the first operating system in the following manner: reading the resource information from the shared memory by the first operating system in response to the first interrupt request; releasing, by the first operating system, the target processing resource satisfying the resource information from the first processing resource.
Alternatively, in this embodiment, the first interrupt request may, but is not limited to, instruct the first operating system to access the shared memory, and the first operating system may, but is not limited to, read, by accessing the shared memory, resource information including a target processing resource of a resource type and a resource amount required by the second operating system.
In one exemplary embodiment, the target processing resource may be released from the first processing resource by the first operating system in the following manner: determining, by the first operating system, whether the target processing resource is currently being used; releasing the target processing resource under the condition that the target processing resource is not used currently; suspending a reference service that is currently using the target processing resource in a case where the target processing resource is currently used; releasing the target processing resource from the first processing resource.
Alternatively, in the present embodiment, the target processing resource required by the second operating system may be, but is not limited to, a processing resource being used in the first operating system. Alternatively, the target processing resources required by the second operating system may be, but are not limited to, processing resources in the first operating system that are in an idle state. Alternatively, the target processing resources required by the second operating system may include, but are not limited to, a plurality of processing resources, wherein the target processing resources in the first operating system may be, but are not limited to, processing resources that are in an idle state in part and processing resources that are in use in part.
Alternatively, in this embodiment, the reference service may be, but is not limited to, a service in the first operating system that occupies the target processing resource required by the second operating system; by suspending (rather than terminating) the reference service that is currently using the target processing resource, the reference service can later resume normal operation.
In one exemplary embodiment, after the target processing resource is released from the first processing resource, the reference service operation may be restored, but is not limited to, by: detecting whether processing resources except the target processing resource in the first processing resource meet the operation requirement of the reference service or not; and under the condition that the operation requirement of the reference service is met, restoring the operation of the reference service by using the processing resources except the target processing resource in the first processing resources.
Alternatively, in this embodiment, the reference service whose execution was suspended may be resumed using processing resources other than the target processing resource in the first operating system. For example, after the target processing resource is released from the first processing resource, the first processing resource may, but is not limited to, still include a number of idle processing resources, and the suspended reference service may, but is not limited to, continue running on those idle processing resources.
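The suspend/release/resume decision flow above can be reduced to a schematic sketch, with the actual suspend and resume actions replaced by flags; all names here are hypothetical.

```c
#include <stdbool.h>

/* Hypothetical state for one target processing resource tracked by the
 * first operating system. */
struct resource_state {
    bool in_use;        /* is a reference service using the resource?     */
    bool svc_suspended; /* set when the reference service gets suspended  */
    bool svc_resumed;   /* set when the service resumes after the release */
    bool remaining_ok;  /* do remaining resources meet the service need?  */
};

/* Release the target resource: suspend its user (if any), free the
 * resource, then resume the user only if the remaining resources still
 * satisfy its running requirement. */
static void release_target(struct resource_state *r)
{
    bool had_user = r->in_use;
    if (had_user)
        r->svc_suspended = true;    /* pause the reference service        */
    r->in_use = false;              /* release the target resource        */
    if (had_user && r->remaining_ok)
        r->svc_resumed = true;      /* resume on the remaining resources  */
}
```

The key property is that the release always succeeds, while the resume is conditional on the check of the remaining resources, mirroring steps S504-2 through S504-4.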
In the solution provided in step S206, the second operating system may, but is not limited to, implement obtaining the target processing resource by adding the target processing resource released by the first operating system to the second processing resource.
In one exemplary embodiment, the target processing resource may be added to the second processing resource by the second operating system in the following manner: initializing the target processing resource by the second operating system; and adding the initialized target processing resource into the second processing resource through the second operating system.
Optionally, in this embodiment, the second operating system initializes the preempted target processing resource and then adds it to the second processing resource allocated to the second operating system, for use by its running services.
Optionally, in this embodiment, the second operating system may further clear related information (such as resource information) in the shared memory, so as to release the resources of the shared memory.
In an alternative embodiment, a process for interaction of dynamic configuration of intersystem resources is provided. FIG. 5 is a schematic diagram of an interaction process of dynamic configuration of resources between systems according to an embodiment of the present application, and as shown in FIG. 5, taking a first operating system as an RTOS system and a second operating system as a Linux system as an example, the interaction of dynamic configuration of resources between systems may be accomplished, but is not limited to, by:
Step S502: under the condition that the Linux system needs to occupy the resources of the RTOS system, the Linux system can initiate inter-core interrupt to the RTOS system and write information (resource information of target processing resources) such as types, sizes and numbers of the resources needing to be preempted into the shared memory.
Step S504: in the event that the RTOS system detects an inter-core interrupt initiated by the Linux system, the RTOS system may, but is not limited to, release resources by:
step S504-1: and reading information such as the type, the size, the quantity and the like of the resources to be preempted from the shared memory, and judging whether the resources to be preempted are being used by a certain application.
Step S504-2: in the case where the resource requiring preemption is being used by a certain application, the application using the resource requiring preemption is suspended.
Step S504-3: and releasing the resources which need to be preempted.
Step S504-4: detecting whether the residual resources in the RTOS system can resume the running of the suspended application, and if so, resuming the running of the application.
Step S506: in the case where the RTOS system completes the release of resources, the Linux system may be informed that the resources have been released by, but not limited to, initiating an inter-core interrupt to the Linux system.
Step S508: when the Linux system detects the inter-core interrupt initiated by the RTOS system, the Linux system can initialize and then use the resources, and clear the resource information of the target processing resources in the shared memory.
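Steps S502 through S508 can be modelled schematically as follows, with the inter-core interrupts reduced to direct function calls and the shared memory to a plain struct; every name is illustrative.

```c
#include <string.h>
#include <stdbool.h>

/* Schematic model of the shared memory used in the handshake. */
struct shm {
    char req[32];       /* resource information written by Linux (S502) */
    bool released;      /* set by the RTOS once resources are freed     */
};

/* S504/S506: RTOS-side handler for the inter-core interrupt.  Reads the
 * request, releases the resources (elided), and signals completion. */
static void rtos_ipi_handler(struct shm *m)
{
    /* ... read m->req, suspend users, release resources ... */
    m->released = true;                        /* S506: notify Linux     */
}

/* Linux-side driver for the whole handshake. */
static bool linux_preempt(struct shm *m, const char *info)
{
    strncpy(m->req, info, sizeof(m->req) - 1); /* S502: write info       */
    rtos_ipi_handler(m);                       /* S502: send the IPI     */
    if (!m->released)                          /* S508: wait for release */
        return false;
    memset(m->req, 0, sizeof(m->req));         /* S508: clear shared mem */
    return true;                               /* resources now usable   */
}
```

In a real system the two sides run on different cores and the "calls" are mailbox or software-generated interrupts, but the ordering of writes to the shared memory relative to the interrupts is exactly the one shown here.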
In one exemplary embodiment, the inter-system resource occupancy may be, but is not limited to, the following: the first operating system is guided to start; and guiding the second operating system to start.
Optionally, in this embodiment, the first operating system and the second operating system may be started in sequence. The first operating system may start faster and more simply than the second operating system; once started first, it may run services that satisfy conditions required by the second operating system or that accelerate its startup, so that the multiple systems can start and run services more efficiently and rapidly.
Such as: after the first operating system is guided to start, the first operating system can run the service (such as fan running, parameter control and other services) capable of controlling the environmental parameters of the chip to meet the starting requirement of the second operating system, so that the environmental parameters of the chip can rapidly reach the environment of the starting operation of the second operating system, and the starting efficiency and the operating efficiency of the operating system are improved.
Alternatively, in this embodiment, the first operating system may be, but is not limited to being, booted by a boot program of the first operating system, and the second operating system may be, but is not limited to being, booted by a boot program of the second operating system. Alternatively, both may be booted by the same boot program.
In one exemplary embodiment, the first operating system may be booted up, but is not limited to, in the following manner: the chip is started to be electrified, and a first processor core distributed for the first operating system in the processor is awakened by the processor; and executing a bootstrap program of the first operating system through the first processor core to guide the first operating system to start.
Alternatively, in this embodiment, the first processor core of the first operating system may be determined according to, but not limited to, a processor core that the processor where the first operating system is located has, for example: the processor in which the first operating system is located may, but is not limited to, include a plurality of processor cores (processor core 0 through processor core N), and one or more of the plurality of processor cores (such as processor core 0) may, but is not limited to, be allocated to the first operating system as the first processor core of the first operating system.
Alternatively, in this embodiment, the boot program of the first operating system may be, but is not limited to, a specific memory space stored on the chip and dedicated to booting the first operating system.
Alternatively, in this embodiment, the first processor core of the first operating system may, but is not limited to, execute the boot program of the first operating system to boot the first operating system.
In one exemplary embodiment, the first operating system may be booted by the first processor core executing a boot program of the first operating system in the following manner: executing, by the first processor core, a secondary program loader, wherein a boot program of the first operating system includes the secondary program loader; and loading the first operating system through the secondary program loader.
Alternatively, in this embodiment, the boot program of the first operating system may include, but is not limited to including, the secondary program loader, and the first processor core may load the first operating system by, but is not limited to, executing the secondary program loader (Second Program Loader, SPL).
In one exemplary embodiment, the second operating system may be booted up, but is not limited to, in the following manner: waking up a second processor core allocated for the second operating system by the secondary program loader; and executing a bootstrap program of the second operating system through the second processor core to guide the second operating system to start.
Alternatively, in this embodiment, the second processor core of the second operating system may be determined according to, but not limited to, a processor core of a processor where the second operating system is located, for example: the processor in which the second operating system is located may, but is not limited to, include a plurality of processor cores (processor core 0 through processor core N), and one or more of the plurality of processor cores (processor core 1 through processor core N) may, but is not limited to, be allocated to the second operating system as the second processor core of the second operating system.
Alternatively, in this embodiment, the second processor core of the second operating system may be, but is not limited to, awakened according to the secondary program loader, such as: after loading the first operating system using the secondary program loader is completed, the second processor core of the second operating system may be awakened by the secondary program loader, but is not limited to. Alternatively, during loading of the first operating system using the secondary program loader, the second processor core of the second operating system may be awakened by the secondary program loader, but is not limited to.
Alternatively, in this embodiment, the second operating system may be booted by, but not limited to, having the second processor core execute the boot program of the second operating system.
In one exemplary embodiment, the second operating system may be booted by the second processor core executing a boot program of the second operating system in the following manner: executing, by the second processor core, a generic bootloader, wherein a boot program of the second operating system includes the generic bootloader; and loading the second operating system through the universal boot loader.
Alternatively, in the present embodiment, the second processor core may load the second operating system by, but not limited to, executing a generic bootloader, which may include, but is not limited to, U-Boot (Universal Boot Loader).
In one exemplary embodiment, the secondary program loader may be executed by the first processor core in the following manner, but is not limited to: performing a secure boot check on the code of the secondary program loader through a boot memory on the chip; and executing the secondary program loader through the first processor core under the condition that the checking result is normal.
Alternatively, in this embodiment, the boot program of the operating system may, but is not limited to, include the secondary program loader, and the boot memory may be used to verify the code of the secondary program loader included in the boot program. For example, the secondary program loader of the first operating system (which may be, but is not limited to, the SPL) may be obtained from the boot program of the first operating system, and its code may be verified by the boot memory of the first operating system (which may be, but is not limited to, the BootROM).
Alternatively, in this embodiment, the process by which the boot memory performs the secure boot check on the code of the secondary program loader may be, but is not limited to, as follows: the boot memory reads the code and the verification code of the secondary program loader, computes an operation value from the code of the secondary program loader using an agreed operation (such as a hash operation), and compares the operation value with the read verification code; if the two are consistent, the check result is normal, and if they are inconsistent, the check result is abnormal.
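The check described above (read the code and the stored verification value, compute the agreed operation, compare) can be sketched as follows; the text does not fix the operation, so FNV-1a stands in for it here purely for illustration, and both function names are hypothetical.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdbool.h>

/* Stand-in for the agreed operation over the loader code.  A real secure
 * boot would use a cryptographic hash and a signature, not FNV-1a. */
static uint64_t agreed_digest(const uint8_t *code, size_t len)
{
    uint64_t h = 1469598103934665603ULL;       /* FNV-1a offset basis */
    for (size_t i = 0; i < len; i++) {
        h ^= code[i];
        h *= 1099511628211ULL;                 /* FNV-1a prime        */
    }
    return h;
}

/* Returns true (check result normal) when the computed value matches the
 * stored verification code; the loader is executed only in that case. */
static bool secure_boot_check(const uint8_t *code, size_t len,
                              uint64_t stored_check)
{
    return agreed_digest(code, len) == stored_check;
}
```

The same pattern repeats at the next stage, where the secondary program loader checks the generic bootloader before handing control to it.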
Optionally, in this embodiment, the secondary program loader may also perform a secure boot check on the code of the generic bootloader: the secondary program loader reads the code and the verification code of the generic bootloader, computes an operation value from the code using an agreed operation (for example a hash operation, which may be the same as or different from the operation used by the boot memory to check the secondary program loader), and compares the operation value with the read verification code; if the two are consistent, the check result is normal, and if they are inconsistent, the check result is abnormal. If the check result is normal, the second operating system is loaded through the generic bootloader.
In one exemplary embodiment, an example of a first operating system and a second operating system boot is provided. Taking the first processor core as CPU-0 and the second processor cores as CPU-1 through CPU-N as examples, the first operating system and the second operating system may be started up by, but not limited to, the following ways: starting and powering up the chip; waking up a first processor core CPU-0 of a first operating system in the processor; executing a boot program of a first operating system using a first processor core CPU-0 may be, but is not limited to, a secondary program loader; performing a secure boot check on the code of the secondary program loader through a boot memory (which may be, but is not limited to, bootROM) on the chip; the checking result is normal, and a second-level program loader (which can be but is not limited to SPL) is executed by the first processor core to load a first operating system; waking up a second processor core CPU-1 to CPU-N of a second operating system by a second-level program loader; a generic bootloader (which may be, but is not limited to, a U-Boot) is executed by the second processor core to load the second operating system.
In an alternative embodiment, a process for dynamic occupancy of resources between systems is provided. FIG. 6 is a flowchart of dynamic occupation of resources between systems according to an embodiment, as shown in FIG. 6, taking a first operating system as an RTOS system and a second operating system as a Linux system as an example, the dynamic occupation of resources between systems may be implemented by, but not limited to, the following ways:
First, after the chip is powered up, the RTOS image and the Linux image are loaded and run in sequence. While the RTOS and Linux systems run stably, the operating condition of the applications in the Linux system is monitored. If it is detected that the resources in the Linux system (including storage resources, peripheral resources and processor interrupt resources) cannot meet the operation requirements of the applications in the Linux system, the Linux system is considered to satisfy the condition for dynamic occupation of resources. The Linux system then sends an inter-core interrupt to the RTOS system and writes the resource information to be occupied to the designated address of the shared memory.
When the RTOS system detects the inter-core interrupt sent by the Linux system, it reads the resource information to be occupied from the shared memory and releases the corresponding resources.
After the RTOS system releases the resources successfully, it sends an inter-core interrupt to the Linux system.
When the Linux system detects the inter-core interrupt sent by the RTOS system, it initializes the occupied resources and starts to use them.
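The handshake described above can be modeled as a minimal single-address-space sketch in C. The structure layout, the flag variables standing in for the inter-core interrupts, and every function name are assumptions for illustration only, not part of this application.

```c
#include <stdint.h>

enum res_type { RES_MEMORY, RES_PERIPHERAL, RES_IRQ };

struct resource_request {            /* written at the designated address */
    enum res_type type;
    uint32_t amount;
};

struct shared_mem {
    struct resource_request req;
    volatile int ipi_to_rtos;        /* stands in for the inter-core IRQ */
    volatile int ipi_to_linux;
};

/* Linux side: publish the request, then "send" the interrupt. */
void linux_request_resource(struct shared_mem *shm, enum res_type t,
                            uint32_t n) {
    shm->req.type = t;
    shm->req.amount = n;
    shm->ipi_to_rtos = 1;
}

/* RTOS side: on interrupt, read the request, release the resource from
 * its own pool, then notify Linux. `rtos_free` models the RTOS pool. */
int rtos_handle_ipi(struct shared_mem *shm, uint32_t *rtos_free) {
    if (!shm->ipi_to_rtos)
        return -1;                   /* nothing pending */
    shm->ipi_to_rtos = 0;
    if (shm->req.amount > *rtos_free)
        return -1;                   /* cannot satisfy the request */
    *rtos_free -= shm->req.amount;   /* release from the first resource */
    shm->ipi_to_linux = 1;           /* release succeeded: interrupt back */
    return 0;
}

/* Linux side: on the reply interrupt, initialize and start using it. */
int linux_handle_ipi(struct shared_mem *shm, uint32_t *linux_owned) {
    if (!shm->ipi_to_linux)
        return -1;
    shm->ipi_to_linux = 0;
    *linux_owned += shm->req.amount; /* add to the second resource pool */
    return 0;
}
```

In a real system the two flags would be hardware inter-processor interrupts and the request structure would live at the agreed shared-memory address.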
From the description of the above embodiments, it will be clear to a person skilled in the art that the method according to the above embodiments may be implemented by software plus the necessary general hardware platform, or by hardware, but in many cases the former is the preferred implementation. Based on such understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disk) and comprising several instructions for causing a terminal device (which may be a mobile phone, a computer, a server, a network device, or the like) to perform the methods described in the embodiments of the present application.
The present embodiment also provides an inter-system resource occupation device, which is used to implement the foregoing embodiments and preferred implementations; what has already been described is not repeated here. As used below, the term "module" may be a combination of software and/or hardware that implements a predetermined function. Although the devices described in the following embodiments are preferably implemented in software, implementation in hardware, or a combination of software and hardware, is also possible and contemplated.
FIG. 7 is a block diagram of an inter-system resource occupation device according to an embodiment of the present application. The device is applied to a chip, where a first operating system and a second operating system run in the same processor on the chip. As shown in FIG. 7, the device includes:
a determining module 72 configured to determine, by the second operating system, a target processing resource, wherein the processing resources of the processor include a first processing resource and a second processing resource, the first processing resource being allocated for use by the first operating system, the second processing resource being allocated for use by the second operating system;
a release module 74 for releasing the target processing resource from the first processing resource by the first operating system;
an adding module 76 for adding the target processing resource to the second processing resource by the second operating system.
Through the above device, the first operating system and the second operating system run in the same processor on the chip, the processing resources of the processor include the first processing resource and the second processing resource, the first processing resource is allocated to the first operating system, and the second processing resource is allocated to the second operating system. One operating system determines the target processing resource that it needs to preempt, the other operating system releases the target processing resource from its own processing resources, and the preempting operating system then adds the target processing resource to the processing resources it occupies, so that the operating systems can coordinate the scheduling of processing resources according to their own processing requirements. That is, by releasing the target processing resource required by the second operating system from the first processing resource to the second processing resource, the resources allocated to the first operating system and the second operating system are dynamically adjusted according to the application requirements of the second operating system, so that the resources in the processor can be adjusted reasonably and dynamically. Therefore, the technical problem of poor adaptability of resource allocation between systems can be solved, and the technical effect of improving the adaptability of resource allocation between systems is achieved.
In one exemplary embodiment, the determining module includes:
a monitoring unit, configured to monitor, through the second operating system, whether the second processing resource meets the operation of the service on the second operating system; and
an estimation unit, configured to estimate, through the second operating system, the resource information of the target processing resource when it is determined that the second processing resource does not meet the operation of the service on the second operating system.
In an exemplary embodiment, the monitoring unit is configured to at least one of: monitoring, by the second operating system, whether remaining storage resources in the second processing resources are greater than a storage threshold, where the second processing resources do not satisfy operation of a service on the second operating system if the remaining storage resources are less than or equal to the storage threshold; monitoring, by the second operating system, whether a service on the second operating system uses a reference peripheral resource other than a peripheral resource in the second processing resource, wherein the second processing resource does not satisfy operation of the service on the second operating system if the service on the second operating system uses the reference peripheral resource; monitoring, by the second operating system, whether the service on the second operating system uses a reference processor interrupt resource other than the processor interrupt resource in the second processing resource, wherein the second processing resource does not satisfy the operation of the service on the second operating system if the service on the second operating system uses the reference processor interrupt resource.
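The three monitoring conditions above can be sketched as a single predicate. The structure fields and the idea of a configurable storage threshold are illustrative assumptions; a real monitor would query the operating system's own accounting.

```c
#include <stdbool.h>
#include <stdint.h>

/* Illustrative state mirroring the three checks described above. */
struct second_resource_state {
    uint64_t remaining_storage;      /* bytes left in the second pool */
    uint64_t storage_threshold;
    bool uses_foreign_peripheral;    /* service touches a peripheral
                                        outside the second pool */
    bool uses_foreign_irq;           /* service needs an interrupt
                                        outside the second pool */
};

/* Returns true when the second processing resource still satisfies the
 * service; any one failed check triggers dynamic resource occupation. */
bool second_resource_satisfies(const struct second_resource_state *s) {
    if (s->remaining_storage <= s->storage_threshold)
        return false;                /* storage check failed */
    if (s->uses_foreign_peripheral)
        return false;                /* peripheral check failed */
    if (s->uses_foreign_irq)
        return false;                /* interrupt check failed */
    return true;
}
```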
In an exemplary embodiment, the estimation unit is configured to: determine, by the second operating system, a resource type of the target processing resource, where the resource type includes at least one of: storage resources, peripheral resources and processor interrupt resources; and estimate, by the second operating system, the resource quantity corresponding to each resource type.
In an exemplary embodiment, the estimation unit is further configured to: estimate, by the second operating system, a target storage amount to be occupied in the target processing resource when the resource type includes the storage resource; estimate, by the second operating system, a peripheral identifier and/or a number of peripherals of the reference peripheral resource to be occupied when the resource type includes the peripheral resource; and estimate, by the second operating system, the number of interrupts of the reference processor interrupt resource to be occupied when the resource type includes the processor interrupt resource.
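One possible layout for the estimated resource information follows: a type mask plus per-type amounts, matching the three resource types above. Every field name, the bitmask encoding, and the fixed-size peripheral list are assumptions for illustration, not a format defined by this application.

```c
#include <stdint.h>

#define RES_TYPE_STORAGE    (1u << 0)
#define RES_TYPE_PERIPHERAL (1u << 1)
#define RES_TYPE_IRQ        (1u << 2)

/* Sketch of the estimated resource information to be written to the
 * shared memory: which types are requested and how much of each. */
struct resource_info {
    uint32_t type_mask;          /* which resource types are requested */
    uint64_t storage_bytes;      /* target storage amount to occupy */
    uint32_t peripheral_ids[4];  /* identifiers of reference peripherals */
    uint32_t peripheral_count;   /* number of peripherals to occupy */
    uint32_t irq_count;          /* number of reference interrupts */
};

/* Build a storage-only request, e.g. when a service needs more memory. */
struct resource_info estimate_storage_request(uint64_t bytes) {
    struct resource_info info = {0};
    info.type_mask = RES_TYPE_STORAGE;
    info.storage_bytes = bytes;
    return info;
}
```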
In one exemplary embodiment, the release module includes:
the sending unit is used for sending a first interrupt request to the first operating system through the second operating system, wherein the first interrupt request is used for indicating to preempt the target processing resource;
a releasing unit, configured to release, by the first operating system, the target processing resource from the first processing resource;
and the sending unit is used for sending a second interrupt request to the second operating system through the first operating system, wherein the second interrupt request is used for indicating that the target processing resource is released.
In an exemplary embodiment, the sending unit is configured to: store, through the second operating system, the resource information of the target processing resource into a shared memory on the chip; and send the first interrupt request to the first operating system through the second operating system, where the first interrupt request is used to indicate preemption of the target processing resource indicated by the resource information stored in the shared memory.
In an exemplary embodiment, the sending unit is further configured to: read, by the first operating system, the resource information from the shared memory in response to the first interrupt request; and release, by the first operating system, the target processing resource satisfying the resource information from the first processing resource.
In an exemplary embodiment, the release unit is configured to: determine, by the first operating system, whether the target processing resource is currently being used; release the target processing resource when the target processing resource is not currently used; and, when the target processing resource is currently used, suspend the reference service that is currently using the target processing resource and release the target processing resource from the first processing resource.
In an exemplary embodiment, the apparatus further comprises:
the detection module is used for detecting whether the processing resources other than the target processing resource in the first processing resource meet the operation requirement of the reference service;
and the recovery module is used for recovering the operation of the reference service by using the processing resources except the target processing resource in the first processing resources under the condition that the operation requirement of the reference service is met.
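The suspend-release-recover behavior described by the release unit, detection module and recovery module can be sketched as follows. The pool and service structures and all names are illustrative assumptions.

```c
#include <stdbool.h>
#include <stdint.h>

struct ref_service {
    bool suspended;
    uint32_t required;           /* resources the service needs to run */
};

struct first_pool {
    uint32_t total;              /* the first processing resource */
};

/* Release `amount` from the first pool; if the target resource is
 * currently used, suspend the reference service first. */
void release_target(struct first_pool *p, struct ref_service *svc,
                    uint32_t amount, bool target_in_use) {
    if (target_in_use) {
        svc->suspended = true;   /* suspend the reference service */
    }
    p->total -= amount;          /* release from the first resource */
}

/* Recovery: resume the service only when the remaining resources
 * still meet its operation requirement. */
void try_resume(struct first_pool *p, struct ref_service *svc) {
    if (svc->suspended && p->total >= svc->required) {
        svc->suspended = false;  /* recover operation of the service */
    }
}
```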
In one exemplary embodiment, the adding module includes:
an initializing unit, configured to initialize the target processing resource through the second operating system;
and the adding unit is used for adding the initialized target processing resource into the second processing resource through the second operating system.
In an exemplary embodiment, the apparatus further comprises:
the first guiding module is used for guiding the first operating system to start;
and the second guiding module is used for guiding the second operating system to start.
In one exemplary embodiment, the first guiding module is configured to: start and power up the chip, and wake up, by the processor, a first processor core allocated to the first operating system in the processor; execute, by the first processor core, a secondary program loader, where a boot program of the first operating system includes the secondary program loader; and load the first operating system through the secondary program loader;
a second guiding module, configured to: wake up, by the secondary program loader, a second processor core allocated to the second operating system; and execute, by the second processor core, a bootstrap program of the second operating system to boot the second operating system.
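The division of labor between the two guiding modules can be summarized as an order-of-operations sketch: BootROM check, SPL loads the first operating system and wakes the second cores, then the general boot loader loads the second operating system. Every function here is an illustrative stand-in, not an API of any real boot loader.

```c
#include <stdbool.h>

enum boot_stage { STAGE_OFF, STAGE_SPL_VERIFIED, STAGE_FIRST_OS,
                  STAGE_CORES_AWAKE, STAGE_SECOND_OS };

struct board { enum boot_stage stage; };

bool bootrom_check_spl(struct board *b) {      /* secure boot check */
    b->stage = STAGE_SPL_VERIFIED;
    return true;
}

void spl_load_first_os(struct board *b) {      /* SPL on CPU-0 */
    b->stage = STAGE_FIRST_OS;
}

void spl_wake_second_cores(struct board *b) {  /* wake CPU-1..CPU-N */
    b->stage = STAGE_CORES_AWAKE;
}

void uboot_load_second_os(struct board *b) {   /* bootloader on CPU-1..N */
    b->stage = STAGE_SECOND_OS;
}

/* Run the whole sequence; returns the final stage reached. */
enum boot_stage boot(struct board *b) {
    if (!bootrom_check_spl(b))
        return b->stage;                       /* abnormal check: stop */
    spl_load_first_os(b);
    spl_wake_second_cores(b);
    uboot_load_second_os(b);
    return b->stage;
}
```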
It should be noted that each of the above modules may be implemented by software or hardware, and for the latter, it may be implemented by, but not limited to: the modules are all located in the same processor; alternatively, the above modules may be located in different processors in any combination.
Embodiments of the present application also provide a chip, where the chip includes at least one of programmable logic circuits and executable instructions, and the chip is run in an electronic device, for implementing the steps in any of the method embodiments described above.
The embodiment of the application also provides a BMC chip, where the BMC chip may include: a storage unit, and a processing unit connected to the storage unit. The storage unit is configured to store a program, and the processing unit is configured to run the program to perform the steps in any of the method embodiments described above.
The embodiment of the application also provides a motherboard, wherein the motherboard comprises: at least one processor; at least one memory for storing at least one program; the at least one program, when executed by the at least one processor, causes the at least one processor to perform the steps of any of the method embodiments described above.
The embodiment of the application also provides a server, which includes a processor, a communication interface, a memory and a communication bus, where the processor, the communication interface and the memory communicate with each other through the communication bus; the memory is used to store a computer program; and the processor is used to implement the steps in any of the method embodiments described above when executing the program stored in the memory, so as to achieve the same technical effects.
The communication bus of the server may be a PCI (Peripheral Component Interconnect) bus, an EISA (Extended Industry Standard Architecture) bus, or the like. The communication bus may be divided into an address bus, a data bus, a control bus, and the like. The communication interface is used for communication between the server and other devices.
The memory may include RAM (random access memory) or NVM (non-volatile memory), such as at least one magnetic disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor. The processor may be a general-purpose processor, including a CPU (central processing unit), an NP (network processor), and the like; it may also be a DSP (digital signal processor), an ASIC (application-specific integrated circuit), an FPGA (field-programmable gate array) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
The server is characterized by high scalability and high stability. Since an enterprise network cannot remain unchanged for a long time, a server without a certain degree of scalability would hinder the later development of the enterprise and affect its use, so scalability is the most basic characteristic; only with high scalability can the server be better utilized later. Scalability includes not only hardware scalability but also software scalability. Because the functions of a server are far more complex than those of an ordinary computer, both the hardware configuration and the software configuration are important; to realize more functions, comprehensive software support is indispensable.
In addition, since the server needs to process a large amount of data to support the continuous operation of services, high stability is another important characteristic of the server; if the server cannot operate stably and transmit data reliably, service development will be greatly affected.
According to the above scheme, reasonable scheduling and occupation of resources between the operating systems are realized, so that the operation of each operating system can be dynamically adjusted through resource scheduling and occupation. Whether software resources or hardware resources are expanded, the server can meet the operation requirements of each operating system through resource scheduling and occupation, which improves the scalability of the server. In addition, through dynamic and reasonable occupation of resources, resource usage can better match the performance of each operating system, which improves the stability of the server.
Embodiments of the present application also provide a computer readable storage medium having a computer program stored therein, wherein the computer program is arranged to perform the steps of any of the method embodiments described above when run.
In one exemplary embodiment, the computer readable storage medium may include, but is not limited to: a USB flash drive, a Read-Only Memory (ROM), a Random Access Memory (RAM), a removable hard disk, a magnetic disk, an optical disk, or other media capable of storing a computer program.
Embodiments of the present application also provide an electronic device comprising a memory having stored therein a computer program and a processor arranged to run the computer program to perform the steps of any of the method embodiments described above.
In an exemplary embodiment, the electronic device may further include a transmission device connected to the processor, and an input/output device connected to the processor.
Specific examples in this embodiment may refer to the examples described in the foregoing embodiments and the exemplary implementation, and this embodiment is not described herein.
It will be appreciated by those skilled in the art that the modules or steps of the present application described above may be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed across a network of multiple computing devices; they may be implemented in program code executable by computing devices, so that they may be stored in a storage device and executed by computing devices; in some cases, the steps shown or described may be performed in a different order than that shown or described herein; or they may be fabricated separately as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present application is not limited to any specific combination of hardware and software.
The foregoing description is only of the preferred embodiments of the present application and is not intended to limit the same, but rather, various modifications and variations may be made by those skilled in the art. Any modification, equivalent replacement, improvement, etc. made within the principles of the present application should be included in the protection scope of the present application.

Claims (20)

1. An inter-system resource occupation method, wherein the method is applied to a chip, and a first operating system and a second operating system run in the same processor on the chip, and the method comprises:
determining, by the second operating system, a target processing resource, wherein the processing resources of the processor include a first processing resource and a second processing resource, the first processing resource being allocated for use by the first operating system, the second processing resource being allocated for use by the second operating system;
releasing the target processing resource from the first processing resource by the first operating system;
and adding the target processing resource to the second processing resource through the second operating system.
2. The method of claim 1, wherein the determining, by the second operating system, a target processing resource comprises:
Monitoring whether the second processing resource meets the operation of the service on the second operating system or not through the second operating system;
and under the condition that the second processing resource is determined not to meet the running of the business on the second operating system, estimating the resource information of the target processing resource through the second operating system.
3. The method of claim 2, wherein monitoring, by the second operating system, whether the second processing resource satisfies the operation of the service on the second operating system comprises at least one of:
monitoring, by the second operating system, whether remaining storage resources in the second processing resources are greater than a storage threshold, where the second processing resources do not satisfy operation of a service on the second operating system if the remaining storage resources are less than or equal to the storage threshold;
monitoring, by the second operating system, whether a service on the second operating system uses a reference peripheral resource other than a peripheral resource in the second processing resource, wherein the second processing resource does not satisfy operation of the service on the second operating system if the service on the second operating system uses the reference peripheral resource;
Monitoring, by the second operating system, whether the service on the second operating system uses a reference processor interrupt resource other than the processor interrupt resource in the second processing resource, wherein the second processing resource does not satisfy the operation of the service on the second operating system if the service on the second operating system uses the reference processor interrupt resource.
4. The method of claim 2, wherein said estimating, by the second operating system, resource information of the target processing resource comprises:
determining, by the second operating system, a resource type of the target processing resource, wherein the resource type includes at least one of: storage resources, peripheral resources and processor interrupt resources;
and estimating the resource quantity corresponding to each resource type through the second operating system.
5. The method of claim 4, wherein estimating, by the second operating system, an amount of resources corresponding to each resource type, comprises:
estimating, by the second operating system, a target storage amount to be occupied in the target processing resource, in a case where the resource type includes the storage resource;
Estimating, by the second operating system, a peripheral identifier and/or a number of peripherals of a reference peripheral resource to be occupied, in the case that the resource type includes the peripheral resource;
and in the case that the resource type comprises the processor interrupt resource, estimating the interrupt quantity of the reference processor interrupt resource to be occupied by the second operating system.
6. The method of claim 1, wherein the releasing, by the first operating system, the target processing resource from the first processing resource comprises:
sending a first interrupt request to the first operating system through the second operating system, wherein the first interrupt request is used for indicating to preempt the target processing resource;
releasing the target processing resource from the first processing resource by the first operating system;
and sending a second interrupt request to the second operating system through the first operating system, wherein the second interrupt request is used for indicating that the target processing resource is released.
7. The method of claim 6, wherein the sending, by the second operating system, a first interrupt request to the first operating system comprises:
Storing the resource information of the target processing resource into a shared memory on the chip through the second operating system;
and sending the first interrupt request to the first operating system through the second operating system, wherein the first interrupt request is used for indicating to preempt the target processing resource indicated by the resource information stored in the shared memory.
8. The method of claim 7, wherein the releasing, by the first operating system, the target processing resource from the first processing resource comprises:
reading the resource information from the shared memory by the first operating system in response to the first interrupt request;
releasing, by the first operating system, the target processing resource satisfying the resource information from the first processing resource.
9. The method of claim 6, wherein the releasing, by the first operating system, the target processing resource from the first processing resource comprises:
determining, by the first operating system, whether the target processing resource is currently being used;
releasing the target processing resource under the condition that the target processing resource is not used currently;
Suspending a reference service that is currently using the target processing resource in a case where the target processing resource is currently used; releasing the target processing resource from the first processing resource.
10. The method of claim 9, wherein after said releasing the target processing resource from the first processing resource, the method further comprises:
detecting whether processing resources except the target processing resource in the first processing resource meet the operation requirement of the reference service or not;
and under the condition that the operation requirement of the reference service is met, restoring the operation of the reference service by using the processing resources except the target processing resource in the first processing resources.
11. The method of claim 1, wherein the adding, by the second operating system, the target processing resource to the second processing resource comprises:
initializing the target processing resource by the second operating system;
and adding the initialized target processing resource into the second processing resource through the second operating system.
12. The method according to claim 1, wherein the method further comprises:
The first operating system is guided to start;
and guiding the second operating system to start.
13. The method of claim 12, wherein:
the booting the first operating system to boot includes: starting and powering up the chip, and waking up, by the processor, a first processor core allocated to the first operating system in the processor; executing, by the first processor core, a secondary program loader, wherein a boot program of the first operating system includes the secondary program loader; and loading the first operating system through the secondary program loader;
the booting the second operating system to boot includes: waking up a second processor core allocated for the second operating system by the secondary program loader; and executing a bootstrap program of the second operating system through the second processor core to guide the second operating system to start.
14. An inter-system resource occupation device, wherein the device is applied to a chip, and a first operating system and a second operating system run in the same processor on the chip, and the device comprises:
a determining module configured to determine, by the second operating system, a target processing resource, where the processing resources of the processor include a first processing resource and a second processing resource, the first processing resource being allocated for use by the first operating system, the second processing resource being allocated for use by the second operating system;
A releasing module, configured to release, by the first operating system, the target processing resource from the first processing resource;
and the adding module is used for adding the target processing resource into the second processing resource through the second operating system.
15. A chip comprising at least one of programmable logic circuitry and executable instructions, the chip operating in an electronic device for implementing the method of any one of claims 1 to 13.
16. A BMC chip, comprising: a storage unit for storing a program and a processing unit connected to the storage unit for running the program to perform the method according to any one of claims 1 to 13.
17. A motherboard, comprising:
at least one processor;
at least one memory for storing at least one program;
the at least one program, when executed by the at least one processor, causes the at least one processor to implement the method of any one of claims 1 to 13.
18. A server, comprising a processor, a communication interface, a memory and a communication bus, wherein the processor, the communication interface and the memory communicate with each other through the communication bus;
A memory for storing a computer program;
a processor for implementing the method of any one of claims 1 to 13 when executing a program stored on a memory.
19. A computer readable storage medium, characterized in that a computer program is stored in the computer readable storage medium, wherein the computer program, when being executed by a processor, implements the steps of the method according to any of the claims 1 to 13.
20. An electronic device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, characterized in that the processor implements the steps of the method of any one of claims 1 to 13 when the computer program is executed.
CN202310536663.4A 2023-05-12 2023-05-12 Method and device for occupying resources among systems, storage medium and electronic device Active CN116257364B (en)


Publications (2)

Publication Number Publication Date
CN116257364A true CN116257364A (en) 2023-06-13
CN116257364B CN116257364B (en) 2023-08-04


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116483013A (en) * 2023-06-19 2023-07-25 成都实时技术股份有限公司 High-speed signal acquisition system and method based on multichannel collector
CN116501507A (en) * 2023-06-28 2023-07-28 北京紫光芯能科技有限公司 Method for interrupt processing, interrupt control module, processor, and storage medium
CN117149441A (en) * 2023-10-27 2023-12-01 南京齐芯半导体有限公司 Task scheduling optimization method applied to IoT
CN117707796A (en) * 2024-02-06 2024-03-15 苏州元脑智能科技有限公司 Resource management method, device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104714843A (en) * 2013-12-17 2015-06-17 华为技术有限公司 Method and device supporting multiple processors through multi-kernel operating system living examples
CN115470000A (en) * 2022-08-22 2022-12-13 华为技术有限公司 Resource allocation method and device and carrier

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116483013A (en) * 2023-06-19 2023-07-25 成都实时技术股份有限公司 High-speed signal acquisition system and method based on multichannel collector
CN116483013B (en) * 2023-06-19 2023-09-05 成都实时技术股份有限公司 High-speed signal acquisition system and method based on multichannel collector
CN116501507A (en) * 2023-06-28 2023-07-28 北京紫光芯能科技有限公司 Method for interrupt processing, interrupt control module, processor, and storage medium
CN116501507B (en) * 2023-06-28 2023-10-24 北京紫光芯能科技有限公司 Method for interrupt processing, interrupt control module, processor, and storage medium
CN117149441A (en) * 2023-10-27 2023-12-01 南京齐芯半导体有限公司 Task scheduling optimization method applied to IoT
CN117149441B (en) * 2023-10-27 2024-01-05 南京齐芯半导体有限公司 Task scheduling optimization method applied to IoT
CN117707796A (en) * 2024-02-06 2024-03-15 苏州元脑智能科技有限公司 Resource management method, device, electronic equipment and storage medium
CN117707796B (en) * 2024-02-06 2024-04-09 苏州元脑智能科技有限公司 Resource management method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN116257364B (en) 2023-08-04

Similar Documents

Publication Publication Date Title
CN116257364B (en) Method and device for occupying resources among systems, storage medium and electronic device
EP3414662B1 (en) Virtualizing sensors
US11799952B2 (en) Computing resource discovery and allocation
CN116244229B (en) Access method and device of hardware controller, storage medium and electronic equipment
US9430411B2 (en) Method and system for communicating with non-volatile memory
CN116243995B (en) Communication method, communication device, computer readable storage medium, and electronic apparatus
CN116302617B (en) Method for sharing memory, communication method, embedded system and electronic equipment
CN116830082A (en) Startup control method and device of embedded system, storage medium and electronic equipment
CN116868167A (en) Operation control method and device of operating system, embedded system and chip
JP2005056391A (en) Method and system for balancing workload of computing environment
KR102285749B1 (en) System on chip having semaphore function and emplementing method thereof
US9390033B2 (en) Method and system for communicating with non-volatile memory via multiple data paths
CN116627520B (en) System operation method of baseboard management controller and baseboard management controller
CN116541227B (en) Fault diagnosis method and device, storage medium, electronic device and BMC chip
CN115185880B (en) Data storage method and device
CN112860387A (en) Distributed task scheduling method and device, computer equipment and storage medium
US9377968B2 (en) Method and system for using templates to communicate with non-volatile memory
CN116868170A (en) Operation method and device of embedded system, embedded system and chip
CN116257471A (en) Service processing method and device
CN116302141A (en) Serial port switching method, chip and serial port switching system
CN116848519A (en) Method and device for generating hardware interface signal and electronic equipment
CN113076189B (en) Data processing system with multiple data paths and virtual electronic device constructed using multiple data paths
CN115002840A (en) Equipment data transmission method and device, electronic equipment and storage medium
West et al. Real-Time USB Networking and Device I/O
CN113485789A (en) Resource allocation method and device and computer architecture

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant