CN110083469B - Method and system for organizing and running unified kernel by heterogeneous hardware - Google Patents



Publication number
CN110083469B
Authority
CN
China
Prior art keywords
architecture
memory
kernel
code
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910391228.0A
Other languages
Chinese (zh)
Other versions
CN110083469A (en)
Inventor
肖银皓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong University of Business Studies
Original Assignee
Guangdong University of Business Studies
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong University of Business Studies
Priority to CN201910391228.0A
Publication of CN110083469A
Application granted
Publication of CN110083469B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 13/00 Interconnection of, or transfer of information or other signals between, memories, input/output devices or central processing units
    • G06F 13/14 Handling requests for interconnection or transfer
    • G06F 13/16 Handling requests for interconnection or transfer for access to memory bus
    • G06F 13/1605 Handling requests for interconnection or transfer for access to memory bus based on arbitration
    • G06F 13/1652 Handling requests for interconnection or transfer for access to memory bus based on arbitration in a multiprocessor architecture
    • G06F 13/1663 Access to shared memory
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 8/00 Arrangements for software engineering
    • G06F 8/40 Transformation of program code
    • G06F 8/41 Compilation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/54 Interprogram communication
    • G06F 9/545 Interprogram communication where tasks reside in different layers, e.g. user- and kernel-space

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Stored Programmes (AREA)

Abstract

The invention discloses a method and system for heterogeneous hardware to organize and run a unified kernel. When the kernel binary is linked, each architecture's code segment and architecture-specific data segment are linked to distinct regions of memory, while the architecture-independent data segments are all linked to one and the same memory region, so that architecture-independent kernel objects can be shared and used directly. Multiple architectures can thus truly use the same kernel: most of the data segment is shared while the code segments remain separate, so all architectures can use the same security policy, and policy changes are synchronized automatically without communication. Architecture-independent kernel objects are shared directly between architectures, and all kernel objects can be managed directly on all architectures without inter-kernel communication.

Description

Method and system for organizing and running unified kernel by heterogeneous hardware
Technical Field
The invention relates to the technical field of embedded systems and operating systems, and in particular to a method and system for heterogeneous hardware to organize and run a unified kernel; it is applicable to software-stack integration across different hardware instruction sets.
Background
With the rise of the Internet of Things and the growing data-processing demands placed on sensor nodes, integrating the computing power of different architectures has become critically important. For example, in an MCU + DSP system the MCU manages the system's business logic while the DSP is dedicated to computation, achieving a 1+1 > 2 effect. If a kernel is to run on such a platform, it must integrate the computing power of both processor subsystems to deliver the computation the application requires.
Existing integration methods can be divided into two categories:
The first approach runs two different operating systems on the two architectures. For example, Linux runs on an MCU (microcontroller) or MPU (microprocessor) while FreeRTOS runs on a DSP (digital signal processor), and the two different kernels then communicate by sending inter-core interrupts to each other and through shared memory.
The second approach runs the same operating system on both architectures, but as two kernel instances compiled separately for the two architectures and then organized into a multi-kernel operating system. Communication daemons run in user mode on both architectures so that the two kernels can communicate with each other; Barrelfish and Popcorn Linux are examples.
Existing heterogeneous-processor integration methods cannot cope with the increasingly tight cooperation between heterogeneous processor subsystems that the Internet of Things demands. Their main defects are as follows:
(1) The first method requires running two kernels, one a large kernel and the other a small real-time kernel. Only the large kernel has a complete inter-process protection model and information-security guarantees; the small system does not. If the real-time kernel has a security vulnerability or crashes, it may corrupt the large kernel's memory, so the security of the whole system is poor.
(2) In the second method, the two kernels cannot communicate with each other directly; kernel services communicate in user mode, so inter-kernel communication must pass through user mode. Some improvements on the second method allow kernel-mode communication between the kernels, but the two kernels must still exchange large amounts of data through message queues to keep their state synchronized, which greatly reduces inter-kernel communication efficiency and increases the data-synchronization burden.
(3) Both methods share a common problem: kernel resources cannot be shared between the two kernels, or can be shared only with difficulty. Even where sharing is possible, it is hard to make the two kernels manage the memory under the same security policy, and once their security policies for a given region of memory disagree, the information in that region may leak.
(4) In both methods, inter-kernel communication is very complicated, which greatly reduces the practicality of the heterogeneous scheme.
(5) Some operating systems can run on multiple architectures, but only by running multiple kernels at the same time.
Disclosure of Invention
To solve the above problems, the present disclosure provides a method and system for heterogeneous hardware to organize and run a unified kernel, in which each architecture's code segment and architecture-specific data segment are linked to different regions of memory, and the architecture-independent data segments are linked to one and the same memory region when the binary is linked, so that architecture-independent kernel objects are shared and used directly.
To achieve the above object, according to one aspect of the present disclosure, there is provided a method for heterogeneous hardware to organize and run a unified kernel, comprising the following steps:
Step 1: when compiling the kernel code, compile the kernel's code segments separately for each architecture of the different architecture types;
Step 2: when linking the binary, link each architecture type's code segment and that architecture's architecture-specific data segment to different regions of memory;
Step 3: when linking the binary, link the architecture-independent data segments to one and the same memory region;
Step 4: pack each architecture's code segment, the architecture-specific data segments, and the architecture-independent data segments into a single binary.
Further, in step 1, when compiling the kernel code, the code segments are compiled for each architecture of the different architecture types as follows. Because the source code consists of the kernel code proper plus a hardware abstraction layer for each architecture, the compiler divides it into code segments and data segments: the kernel code is compiled into the architecture-independent data segment and a code segment, and the hardware abstraction layer code is compiled into the architecture-specific data segment and a code segment (the code segments produced from the kernel code and from the hardware abstraction layer code are both architecture-specific, so there is no substantive difference between them in this respect). In other words, to compile for each architecture of each architecture type, all code is first classified by architecture type, and the code belonging to each architecture is then passed directly to the compiler for that architecture type.
The architecture types include, but are not limited to: MCU architecture (microcontroller), MPU architecture (microprocessor), DSP architecture (digital signal processor), CPU architecture (central processing unit), SoC architecture (system on chip), SOPC architecture (programmable system on chip), PLD architecture (programmable logic device), and FPGA architecture (field-programmable gate array).
The data segments are divided into architecture-specific (architecture-dependent) data segments and architecture-independent (architecture-agnostic) data segments. The architecture-specific data segments hold the variables defined in the hardware abstraction layer; the architecture-independent data segment holds the variables defined in the hardware-independent files of the kernel.
Further, in step 2, the code segment and architecture-specific data segment of each architecture type are linked to different memory regions as follows: when an architecture of one type starts up (the other architectures remain halted), the entire kernel is loaded into memory. First, the architecture-specific data segments are copied to mutually non-overlapping locations in memory; next, the architecture-specific code segments are copied to mutually non-overlapping locations; finally, the architecture-independent data segment is copied into memory, and execution jumps to the kernel entry point.
The architecture that started first then starts all the other architectures, and each of them jumps to the entry point of its own architecture-specific code segment and runs from there.
When the linker links the binary, each architecture type's code segment and architecture-specific data segment are linked in turn to different regions of memory. That is, the region holding each architecture's code segment and architecture-specific data segment forms one sub-region per architecture, and these sub-regions are mutually independent in memory.
For example, if an architecture of some type maintains several of its architecture-specific registers through its own logic, the data for that logic must be stored in a memory region dedicated to that architecture, because other architectures cannot use it: such data are architecture-specific global variables. All kernel objects, by contrast, reside in the shared region.
The kernel on each architecture type accesses its own architecture-specific data segment; these segments are not shared between architectures.
Further, in step 3, when linking the binary, the architecture-independent data segments are linked to a single memory region as follows: all architecture-independent data segments are placed in the same block of memory, which serves as the shared data-segment region. As a result, every such variable is located at the same memory address on every architecture, and the data segment is shared among all the code segments.
The kernel on each architecture type accesses the same architecture-independent data segment, which contains, among other things, the kernel objects. Kernel objects fall into two classes. Architecture-specific kernel objects (including but not limited to page tables and thread contexts) can, once created, be used only by the designated architecture. Architecture-independent kernel objects (including but not limited to processes) can be used freely by architectures of any type. For example, a process may run simultaneously on several architecture types, with each type running threads compiled for it; the threads' code segments differ, but their data segments are shared.
Furthermore, the architectures of the different types share a unified security model. Because the kernel objects that record the memory security policy are shared as well, a memory-security-policy change executed on one architecture is immediately visible to every other architecture without any inter-core communication. Likewise, a kernel-object state change executed on one architecture is immediately visible to the others, with no explicit communication required.
Further, in step 4, each architecture's code segment, the architecture-specific data segments, and the architecture-independent data segment are packed into the binary in no particular order; that is, the objects may be assembled into an ELF image or into any other format, as long as they can be loaded correctly.
The invention also provides a system in which heterogeneous hardware organizes and runs a unified kernel, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor; when the processor executes the computer program, it operates as the following units:
a code-segment compiling unit, for compiling the kernel code's code segments for each architecture of the different architecture types when the kernel code is compiled;
an architecture-independent storage unit, for linking each architecture type's code segment and architecture-specific data segment to different regions of memory when the binary is linked;
an architecture-shared storage unit, for linking the architecture-independent data segments to one and the same memory region when the binary is linked;
and a binary packing unit, for packing each architecture's code segment, the architecture-specific data segments, and the architecture-independent data segments into a single binary.
Beneficial effects of the present disclosure: the invention provides a method and system for heterogeneous hardware to organize and run a unified kernel. The disclosed technique lets multiple architectures truly use the same kernel (most of the data segments are shared, while each architecture's code segment is independent), so the architectures can use the same security policy, and the security policies are synchronized automatically without communication. Architecture-independent kernel objects are shared directly between architectures, and all kernel objects can be managed directly on all architectures without inter-kernel communication.
Drawings
The foregoing and other features of the present disclosure will become more apparent from the detailed description of embodiments given in conjunction with the drawings, in which like reference characters denote the same or similar elements throughout. The drawings described below are merely examples of the present disclosure; other drawings can be derived from them by those skilled in the art without inventive effort.
FIG. 1 is a flow diagram illustrating a method for heterogeneous hardware organizations to run a unified kernel;
FIG. 2 is a diagram of a heterogeneous hardware organization running a unified kernel system.
Detailed Description
The conception, specific structure, and technical effects of the present disclosure are described clearly and completely below in conjunction with the embodiments and the accompanying drawings, so that its objects, aspects, and effects may be fully understood. Note that, absent conflict, the embodiments of the present application and the features in those embodiments may be combined with one another.
Fig. 1 is a flowchart of a method for heterogeneous hardware to organize and run a unified kernel according to the present disclosure. A method according to an embodiment of the disclosure is described below with reference to Fig. 1.
The disclosure provides a method for organizing and running unified kernels by heterogeneous hardware, which specifically comprises the following steps:
Step 1: when compiling the kernel code, compile the kernel's code segments separately for each architecture of the different architecture types;
Step 2: when linking the binary, link each architecture type's code segment and that architecture's architecture-specific data segment to different regions of memory;
Step 3: when linking the binary, link the architecture-independent data segments to one and the same memory region;
Step 4: pack each architecture's code segment, the architecture-specific data segments, and the architecture-independent data segments into a single binary.
A preferred embodiment: at startup of an architecture of one type (the other architectures remain halted), the entire kernel is loaded into memory. First, the architecture-specific data segments are copied to mutually non-overlapping locations in memory; next, the architecture-specific code segments are copied to mutually non-overlapping locations; finally, the architecture-independent data segment is copied into memory, and execution jumps to the kernel entry point. At the kernel entry point, the CPU that started first brings up all the other CPUs and has each of them jump to the entry point of its own architecture-specific code segment.
Preferably, architecture-specific kernel objects can be created on an architecture of one type on behalf of an architecture of another type. Although architecture-specific kernel objects (including but not limited to page tables and thread contexts) cannot be used on architectures of other types, their basic operations, including but not limited to creation, deletion, and initialization, can still be performed there.
Preferably, a very brief embodiment is presented here, illustrating how the method configures protection domains according to the user's needs. The hardware comprises two architectures: architecture A (two processor cores, A1 and A2), which excels at logic processing, and architecture B (four processor cores, B1-B4), which excels at numerical computation. The software comprises a kernel loader As that runs on architecture A, a code segment Ac that runs on architecture A, a code segment Bc that runs on architecture B, a data segment Ad exclusive to architecture A, a data segment Bd exclusive to architecture B, and a shared data segment Sd.
First, As, running on A1, loads Ac, Bc, Ad, Bd, and Sd to the appropriate locations determined when the kernel was compiled and linked; A2 and B1-B4 all remain inactive during this step.
As then passes kernel control to the code segment Ac running on A1. Ac, running on A1, initializes A2 and B1 and makes A2 jump to the appropriate entry point in Ac. The kernel boot flow on processors A1 and A2 then finishes, and after some initialization, execution jumps to user mode and continues there.
Next, A1 makes B1 jump to the appropriate entry point in Bc, and Bc, running on B1, initializes B2-B4, which all jump to their appropriate entry points in Bc. At this point the kernel boot flow on processors B1-B4 finishes, and after some initialization, execution jumps to user mode and continues there.
Thereafter, a process S running on architecture A creates a process P, a page table Ap for architecture A, a page table Bp for architecture B, threads At1 and At2 responsible for management tasks on architecture A, and threads Bt1-Bt4 responsible for computing tasks on architecture B. All of these kernel objects are located in the shared data segment Sd, so architecture B observes the changes in real time.
Finally, when process P is scheduled to run, At1 and At2 run on A1 and A2, and Bt1-Bt4 run on B1-B4; architecture A uses page table Ap and architecture B uses page table Bp. Throughout, architecture B concentrates on computation while architecture A concentrates on kernel-object management. Because kernel memory is shared, any change to a kernel object is mutually observable, which greatly reduces the cost of cooperation between the different architectures and greatly increases flexibility.
An embodiment of the present disclosure provides a unified kernel system for heterogeneous hardware organization and operation. Fig. 2 is a diagram of the unified kernel system for heterogeneous hardware organization and operation according to the present disclosure. The system of this embodiment comprises: a processor, a memory, and a computer program stored in the memory and executable on the processor, the processor executing the computer program to operate the following units:
a code segment compiling unit for compiling, when compiling the kernel code, a code segment of the kernel code for each architecture among the different architecture types;
an architecture-independent storage unit for linking, when linking the binary, the code segment of each architecture type and the data segments associated with that architecture to different storage areas of the memory;
an architecture-shared storage unit for linking, when linking the binary, the data segments unrelated to any architecture to the same storage area of the memory; and
a binary packing unit for packing the code segment of each architecture, the architecture-related data segments, and the architecture-unrelated data segments into one binary.
The unified kernel system for heterogeneous hardware organization can run on computing devices such as desktop computers, notebooks, palmtop computers, and cloud servers. The runnable system may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that this example is merely illustrative and does not constitute a limitation of the system; the system may include more or fewer components than those listed, combine certain components, or use different components. For example, it may further include input and output devices, network access devices, buses, and the like.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, discrete gate or transistor logic, discrete hardware components, etc. A general-purpose processor may be a microprocessor or any conventional processor. The processor is the control center of the runnable system and uses various interfaces and lines to connect the parts of the whole system.
The memory may be used to store the computer programs and/or modules, and the processor implements the various functions of the system by running or executing the computer programs and/or modules stored in the memory and by calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required by at least one function (such as a sound playing function or an image playing function); the data storage area may store data created according to use of the device (such as audio data or a phonebook). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
While the present disclosure has been described in considerable detail with reference to certain illustrative embodiments thereof, it is not intended to be limited to any such details or embodiments; rather, it is to be construed, with reference to the appended claims, as covering the full intended scope of the disclosure in view of the prior art. Furthermore, the foregoing describes the disclosure in terms of embodiments foreseen by the inventor for which an enabling description was available, notwithstanding that insubstantial modifications of the disclosure not presently foreseen may nonetheless represent equivalents thereto.

Claims (6)

1. A method for heterogeneous hardware organization to run a unified kernel, the method comprising the steps of:
step 1, compiling, when compiling the kernel code, a code segment of the kernel code for each architecture among the different architecture types;
step 2, linking, when the linker links the binary, the code segment of each architecture type and the data segments related to that architecture into different storage areas in the memory in sequence; that is, across the different architectures, the storage area holding each architecture's code segment and architecture-related data segments is treated as a whole as a sub-storage area, and the sub-storage areas are mutually independent in the memory;
step 3, linking, when linking the binary, the data segments unrelated to any architecture to the same storage area of the memory; and
step 4, packaging the code segment of each architecture, the architecture-related data segments, and the architecture-unrelated data segments into one binary.
2. The method for heterogeneous hardware organization to run a unified kernel as claimed in claim 1, wherein in step 1, the method for compiling the code segments for each architecture among the different architecture types when compiling the kernel code is: when compiling the kernel code, the compiler divides the program into code segments and data segments according to their content and compiles each architecture of each architecture type independently; all code segments are classified by architecture type in advance, and the code segments corresponding to an architecture are then passed directly to the compiler for that architecture type to be compiled.
3. The method of claim 1, wherein the architecture types include but are not limited to: MCU architecture, MPU architecture, DSP architecture, CPU architecture, SoC architecture, SOPC architecture, PLD architecture, and FPGA architecture.
4. The method for heterogeneous hardware organization to run a unified kernel as claimed in claim 1, wherein in step 2, the method for linking the code segment of each architecture type and the data segments related to that architecture to different storage areas of the memory is as follows: when one architecture among the architecture types is started, the whole kernel is loaded into the memory; first, the architecture-related data segments are copied to mutually non-overlapping positions in the memory; then, the architecture-related code segments are copied to mutually non-overlapping positions in the memory; finally, the architecture-unrelated data segments are copied to the memory, and execution jumps to the entry point of the kernel; all other architectures are then started by the already-started architecture, and each of them jumps to the entry point of the code segment related to its own architecture; when the linker links the binary, the code segment of each architecture type and the data segments related to that architecture are linked to different storage areas in the memory in sequence, that is, across the different architectures, the storage area holding each architecture's code segment and architecture-related data segments is treated as a whole as a sub-storage area, and the sub-storage areas are mutually independent in the memory.
5. The method for heterogeneous hardware organization to run a unified kernel according to claim 1, wherein in step 3, the method for linking the architecture-unrelated data segments to the same storage area of the memory when linking the binary is: when the binary is linked, all architecture-unrelated data segments are stored in the same storage area of the memory, which serves as the storage area of the shared data segment, so that the corresponding variables on different architectures are actually located at the same memory addresses and the data segment is shared among all code segments.
6. A heterogeneous hardware organization running a unified kernel system, the system comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, the processor executing the computer program to operate the following units:
a code segment compiling unit for compiling, when compiling the kernel code, a code segment of the kernel code for each architecture among the different architecture types;
an architecture-independent storage unit for linking, when the linker links the binary, the code segment of each architecture type and the data segments related to that architecture into different storage areas in the memory in sequence, that is, across the different architectures, the storage area holding each architecture's code segment and architecture-related data segments is treated as a whole as a sub-storage area, and the sub-storage areas are mutually independent in the memory;
an architecture-shared storage unit for linking, when linking the binary, the data segments unrelated to any architecture to the same storage area of the memory; and
a binary packing unit for packing the code segment of each architecture, the architecture-related data segments, and the architecture-unrelated data segments into one binary.
CN201910391228.0A 2019-05-11 2019-05-11 Method and system for organizing and running unified kernel by heterogeneous hardware Active CN110083469B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910391228.0A CN110083469B (en) 2019-05-11 2019-05-11 Method and system for organizing and running unified kernel by heterogeneous hardware


Publications (2)

Publication Number Publication Date
CN110083469A CN110083469A (en) 2019-08-02
CN110083469B true CN110083469B (en) 2021-06-04

Family

ID=67419686

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910391228.0A Active CN110083469B (en) 2019-05-11 2019-05-11 Method and system for organizing and running unified kernel by heterogeneous hardware

Country Status (1)

Country Link
CN (1) CN110083469B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114490033B (en) * 2021-12-27 2024-05-03 华东师范大学 Unified performance modeling and adaptability changing method and device for diversified calculation forces

Citations (12)

Publication number Priority date Publication date Assignee Title
CN101095103A (en) * 2004-03-26 2007-12-26 爱特梅尔股份有限公司 Dual-processor complex domain floating-point dsp system on chip
CN101196922A (en) * 2007-12-24 2008-06-11 北京深思洛克数据保护中心 Information safety equipment and its file memory and access method
CN101477458A (en) * 2008-12-15 2009-07-08 浙江大学 Hardware thread execution method based on processor and FPGA mixed structure
CN101963918A (en) * 2010-10-26 2011-02-02 上海交通大学 Method for realizing virtual execution environment of central processing unit (CPU)/graphics processing unit (GPU) heterogeneous platform
CN103473059A (en) * 2013-09-11 2013-12-25 江苏中科梦兰电子科技有限公司 General purpose operating system capable of supporting multiple system structures
CN103562870A (en) * 2011-05-11 2014-02-05 超威半导体公司 Automatic load balancing for heterogeneous cores
CN103678131A (en) * 2013-12-18 2014-03-26 哈尔滨工业大学 Software failure injection and analysis system of multi-core processor
CN104050137A (en) * 2013-03-13 2014-09-17 华为技术有限公司 Method and device for operating inner cores in heterogeneous operation system
CN104407852A (en) * 2014-11-05 2015-03-11 中国航天科技集团公司第九研究院第七七一研究所 Code isolation-based construction method for embedded software and calling method for embedded software
CN105718287A (en) * 2016-01-20 2016-06-29 中南大学 Program streaming execution method for intelligent terminal
CN106201636A (en) * 2016-08-11 2016-12-07 中国电子科技集团公司第二十九研究所 A kind of DSP off-chip code dynamic loading method and device
CN109460369A (en) * 2017-09-06 2019-03-12 忆锐公司 Accelerator based on flash memory and the calculating equipment including the accelerator

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
US8917279B2 (en) * 2011-01-24 2014-12-23 Nec Laboratories America, Inc. Method and system to dynamically bind and unbind applications on a general purpose graphics processing unit
US8683243B2 (en) * 2011-03-11 2014-03-25 Intel Corporation Dynamic core selection for heterogeneous multi-core systems


Non-Patent Citations (1)

Title
Design and Implementation of an OpenMP-to-OpenCL Automatic Code Conversion Tool; Wang Yanyan; China Master's Theses Full-text Database, Information Science and Technology; 2015-08-15; Vol. 2015, No. 8; full text *

Also Published As

Publication number Publication date
CN110083469A (en) 2019-08-02

Similar Documents

Publication Publication Date Title
US9983891B1 (en) Systems and methods for distributing configuration templates with application containers
US10331468B2 (en) Techniques for routing service chain flow packets between virtual machines
US9971593B2 (en) Interactive content development
WO2020006910A1 (en) Business componentization development method and apparatus, computer device, and storage medium
CN108062252B (en) Information interaction method, object management method, device and system
US9183032B2 (en) Method and system for migration of multi-tier virtual application across different clouds hypervisor platforms
US10846101B2 (en) Method and system for starting up application
US10416979B2 (en) Package installation on a host file system using a container
US20140351811A1 (en) Datacenter application packages with hardware accelerators
US10833955B2 (en) Dynamic delivery of software functions
CN107015995B (en) Method and device for modifying mirror image file
CN110650347B (en) Multimedia data processing method and device
JP2014516191A (en) System and method for monitoring virtual partitions
CN110162344B (en) Isolation current limiting method and device, computer equipment and readable storage medium
US20170228190A1 (en) Method and system providing file system for an electronic device comprising a composite memory device
CN111736922B (en) Plug-in calling method and device, electronic equipment and storage medium
US9058494B2 (en) Method, apparatus, system, and computer readable medium to provide secure operation
US20210158131A1 (en) Hierarchical partitioning of operators
US9058576B2 (en) Multiple project areas in a development environment
CN113010265A (en) Pod scheduling method, scheduler, memory plug-in and system
US11467946B1 (en) Breakpoints in neural network accelerator
CN110083469B (en) Method and system for organizing and running unified kernel by heterogeneous hardware
CN113296891A (en) Multi-scene knowledge graph processing method and device based on platform
CN110955415B (en) Method for projecting multi-platform service adaptation
CN111459573A (en) Method and device for starting intelligent contract execution environment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20210127

Address after: No.21, Luntou Road, Guangzhou, Guangdong 510000

Applicant after: GUANGDONG University OF FINANCE & ECONOMICS

Address before: Room 2703, block 3, xingxinghuayuan, 17 Yingyin Road, Chancheng District, Foshan City, Guangdong Province, 528000

Applicant before: Xiao Yinhao

GR01 Patent grant