WO2022198586A1 - Method of providing application executable by a plurality of heterogeneous processor architectures and related devices - Google Patents
- Publication number: WO2022198586A1 (PCT/CN2021/083059)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords: visa, binary, computing device, level, processing unit
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/40—Transformation of program code
- G06F8/41—Compilation
- G06F8/44—Encoding
- G06F8/447—Target code generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING; COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F8/00—Arrangements for software engineering
- G06F8/40—Transformation of program code
- G06F8/41—Compilation
- G06F8/44—Encoding
- G06F8/443—Optimisation
Abstract
A method of providing an application executable by a plurality of heterogeneous processor architectures and related devices. Each processor architecture in the plurality of processor architectures has an associated instruction set architecture (ISA). The method comprises generating from application source code a core virtual instruction set architecture (VISA) binary for a general-purpose central processing unit (CPU) architecture. A plurality of extension VISA binaries are also generated from the application source code. Each extension VISA binary in the plurality of extension VISA binaries is configured for a domain-specific processing unit architecture in a plurality of domain-specific processing unit architectures. A serialized VISA binary is generated based on the core VISA binary and each extension VISA binary. The serialized VISA binary comprising the core VISA binary and the plurality of extension VISA binaries is sent to a second computing device for execution thereon.
Description
The present disclosure relates to software compatibility, and more specifically, to a method of providing an application executable by a plurality of heterogeneous processor architectures and related devices.
A heterogeneous processor architecture (also known as a heterogeneous chipset architecture) aims to accommodate general-purpose computing via a general-purpose central processing unit (CPU) as well as computing in a range of application domains (e.g., graphics, digital signal processing, machine learning, etc.) via domain-specific processing units (also referred to as processing units). Domain-specific processing units allow efficient execution of the computation kernels of a particular domain. Examples of domain-specific processing units include graphics processing units (GPUs), digital signal processing units (DSPs), neural processing units (NPUs), network processing units, and tensor processing units (TPUs).
Both general-purpose CPUs and domain-specific processing units can be designed into broadly defined architecture types with an instruction set architecture (ISA) for each processor. For each architecture type, multiple generations of processors can be designed and implemented, and mostly share the ISA of the respective architecture type. However, each generation of processors can add and remove some instructions from the ISA of the respective architecture type compared to previous generations. Moreover, each generation of a respective architecture type can have multiple implementations each with a different micro-architecture.
There are two significant issues with heterogeneous processor architectures. The first issue is the incompatibility of heterogeneous processor architectures, both across different architecture types within each domain and across different generations within each architecture type. The lack of compatibility means that all existing software must be ported onto a new processor ISA with the introduction of a new architecture type (with a completely new ISA) or a new generation within each architecture type (with changes in the ISA compared to the previous generations of the same type). This requires application source code to be re-compiled to produce binaries for the new processor, which is difficult and impractical in many cases, particularly for third-party applications and services that are deployed in a public cloud.
The second issue is the performance loss of heterogeneous processor architectures. Even if software binaries compiled for each particular processor generation can run as-is on other processor generations (i.e., a fully compatible ISA between the generations), the software binaries may not fully benefit from the architectural features of each generation (e.g., memory hierarchy, pipeline structure, branch prediction, and/or special instructions). There is also an optimization opportunity loss in that development-time optimizations/specializations are not possible, thereby negating the possibility of profile-guided optimizations and dynamic specializations.
For at least the reasons above, there is a need for methods and devices for supporting heterogeneous processor architectures having improved compatibility and/or performance.
Summary
The present disclosure provides a method of providing an application executable by a plurality of heterogeneous processor architectures and related devices. The methods and devices of the present disclosure improve compatibility and/or performance of heterogeneous processor architectures both across different architecture types within each domain and across different generations within each architecture type.
To generate an application executable by a plurality of processor architectures, the application is first compiled from source code into an intermediary software binary, referred to as a virtual instruction set architecture (VISA) binary. A binary is a file that is interpreted by a special-purpose program or a hardware processor that understands in advance how it is formatted. A binary is not in an externally identifiable format that can be interpreted by any program. The VISA binary is executable in a VISA, described more fully below. The VISA binary is deployed on a target computing device. The term “deployment” is intended to encompass all necessary processes involved in getting the application running properly on the target computing device. The VISA binary on the target computing device is then re-compiled into a native ISA binary based on an ISA of the processor architecture of the processing unit of the target computing device.
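For illustration only, the two-phase flow above can be sketched as follows. The function names and the string-based "binary" format are hypothetical inventions for this sketch; the disclosure contemplates real IR formats such as LLVM bitcode rather than tagged strings:

```python
# Hypothetical sketch of the two-phase compilation flow described above.
# Names (compile_to_visa, lower_to_native) and the tagged-string "format"
# are illustrative only, not the disclosed implementation.

def compile_to_visa(source_code: str, visa: str = "core") -> bytes:
    """Development-time: compile source into a hardware-agnostic VISA binary."""
    # A real implementation would emit an IR such as LLVM bitcode;
    # here we simply tag the payload with the VISA it targets.
    return f"VISA[{visa}]:{source_code}".encode()

def lower_to_native(visa_binary: bytes, target_isa: str) -> bytes:
    """Deployment-time: re-compile the VISA binary for the target's native ISA."""
    return visa_binary.replace(b"VISA", f"NATIVE[{target_isa}]".encode(), 1)

visa_bin = compile_to_visa("int main() { return 0; }")
native_bin = lower_to_native(visa_bin, "armv9")
```

The key property the sketch captures is that the first stage knows nothing about the target, while the second stage is parameterized by the actual ISA of the deployment device.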
A plurality of VISAs may be provided, which are hierarchical in nature, from more generic to more specific. In some embodiments, the plurality of VISAs comprises a core VISA for a general-purpose CPU architecture, such as a scalar-based CPU architecture. The set of instructions for the core VISA provides a base intermediate representation (IR). The plurality of VISAs may also comprise one or more extension VISAs that extend support from the CPU computing domain to other domains. Each extension VISA is a domain-specific extension of the core VISA for a domain-specific processing unit architecture. Each extension VISA defines an extension to the base intermediate representation of the core VISA. There may be a plurality of domain-specific extension VISAs for a plurality of domain-specific processing unit architectures. The domain-specific extension VISAs may be for domain-specific processing unit architectures comprising one or more GPU architectures, one or more DSP architectures, one or more NPU architectures, one or more network processing unit architectures, one or more TPU architectures, or any combination thereof. The core VISA is typically based on a scalar-based CPU architecture, with a vector-based CPU architecture VISA extension being provided, possibly along with other VISA extensions for other CPU architectures. Alternatively, the core VISA may be based on a different CPU architecture, such as a vector-based CPU architecture with VISA extensions to scalar-based and/or other CPU architectures. The plurality of VISAs may also comprise one or more lower-level VISAs, the lower-level VISAs being adapted for more specific targets than higher-level VISAs, such as the core VISA or a higher-level extension VISA, which are more generic. The lower-level VISAs may be provided for one or both of the core and extension VISAs.
In other words, an extension VISA can itself comprise multiple levels. Regardless of the level or number of included extensions, VISA binaries are constructed similarly, for example, based on an IR in the LLVM bitcode format.
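The hierarchical relationship among the core VISA, extension VISAs, and lower-level VISAs described above can be modeled as follows. This Python sketch is an illustrative assumption: the class, VISA names, and instruction mnemonics are invented, and the actual binaries would be IR (e.g., LLVM bitcode), not Python objects:

```python
# Illustrative model of a hierarchical VISA. Each extension VISA inherits
# the base IR of the VISA it extends, so every level remains lowerable to
# the core VISA. Names and mnemonics here are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Visa:
    name: str
    parent: "Visa | None" = None  # the more generic VISA this one extends
    instructions: set = field(default_factory=set)

    def full_instruction_set(self) -> set:
        """An extension VISA comprises its own instructions plus everything
        inherited from the more generic VISAs above it in the hierarchy."""
        base = self.parent.full_instruction_set() if self.parent else set()
        return base | self.instructions

core = Visa("core-cpu", instructions={"add", "load", "store", "br"})
gpu_ext = Visa("gpu-ext", parent=core, instructions={"vadd", "texld"})
gpu_low = Visa("gpu-gen3", parent=gpu_ext, instructions={"wave.shuffle"})
```

Note how a lower-level VISA (`gpu-gen3`) still contains the core instructions, mirroring the statement that extension VISA binaries remain compatible with the general-purpose CPU.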
The VISA allows the compilation of application source code for a plurality of processors in a heterogeneous processor architecture by operating as an intermediate representation between the application source code and the native binary of the physical processor ISA. The VISA has a base intermediate representation for general-purpose CPU processors and multiple domain-specific VISA extensions for processors other than general-purpose CPUs, allowing the source code to be compiled for a heterogeneous processor architecture. The set of instructions of the VISA is hardware-agnostic. The base intermediate representation is used when the target processor is a general-purpose CPU. A domain-specific VISA extension is used when the processor is in another processing domain. The same set of VISA binaries can be used by multiple generations of the same architecture type and multiple architecture types within a domain.
The present disclosure also provides a method of flexible, multi-stage compilation, in which optimizations may be partly performed on a development system in advance, and partly performed on the target computing device (or deployment system) just-in-time or just ahead-of-time, prior to the application being invoked or launched. The optimizations performed in advance on a development system are typically hardware-agnostic, thereby maximizing compatibility of the code. In contrast, the later optimizations performed on the deployment system are typically hardware-aware optimizations that can exploit knowledge of characteristics of the processing unit of the target computing device and its ISA, thereby maximizing the efficiency and/or optimization of the code. The “remote” compilation performed on a development system in advance may itself be performed in stages, one for each VISA binary included in the serialized VISA binary, including the core and extension VISA binaries at various levels of a VISA hierarchy. The extension VISA binaries provide additional ISA features which the compiler can leverage, e.g., vector instructions, matrix instructions, etc., and therefore present new opportunities for optimizations. Local compilation and optimizations performed on the deployment system are performed only for the selected VISA binary to be used by the target device. Dynamic code specialization and optimization on the target device can also be performed using runtime data collected by the target device during the running of the application.
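The split between hardware-agnostic remote compilation and hardware-aware local compilation described above can be sketched as follows. The pass names, target ISAs, and list-based IR are illustrative assumptions, not the disclosed pass pipeline:

```python
# Hypothetical split of optimization passes between the development system
# (hardware-agnostic) and the deployment system (hardware-aware).

AGNOSTIC_PASSES = ["dead-code-elim", "constant-fold", "inline"]
AWARE_PASSES = {  # selected per target ISA; names are invented examples
    "armv9": ["sve2-vectorize", "a510-sched"],
    "x86-64": ["avx512-vectorize", "uop-sched"],
}

def remote_compile(ir: list) -> list:
    """Development-time stage: run only passes that are valid for every
    possible target, maximizing compatibility of the VISA binary."""
    return ir + [("opt", p) for p in AGNOSTIC_PASSES]

def local_compile(ir: list, target_isa: str) -> list:
    """Deployment-time stage: exploit knowledge of the actual processing
    unit and its ISA, applied only to the selected VISA binary."""
    return ir + [("opt", p) for p in AWARE_PASSES.get(target_isa, [])]

ir = [("func", "main")]
deployed = local_compile(remote_compile(ir), "armv9")
```

The same remotely optimized IR can be locally compiled for any target; only the second stage differs per device.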
The methods of the present disclosure provide increased portability and compatibility of applications, both across architecture types and across generations within the same type. The same VISA binary can be lowered to multiple generations of the same architecture type, or potentially to multiple architecture types within a domain. The methods may be used for heterogeneous and/or hierarchical VISAs. By supporting a heterogeneous chipset architecture in which there is a VISA for a core, scalar-based, general-purpose architecture along with multiple domain-specific extension VISAs, both the CPU and domain-specific accelerators (GPU, DSP, neural processor, network processor, etc.) are supported. By supporting hierarchical VISAs, VISA binaries for all targets can be lowered to the general-purpose, core VISA. The methods also allow software reuse by allowing architecture-independent optimization passes to be reused across architectures. The methods also enable a systemic approach to guide compiler optimizations, enable early code verification, mitigate against hardware side-channel attacks and future hardware bugs, and allow dynamic code specialization and optimization on the target device using deployment-time and execution-time (e.g., runtime) data collected by the target device during the running of the application. The methods of the present disclosure also protect the intellectual property of innovators in that source code need not be shared by application developers and programmers, and the details of domain-specific ISA innovations are abstracted out.
In accordance with a first embodiment of a first aspect of the present disclosure, there is provided a method for deployment of an application executable by a plurality of processor architectures. Each processor architecture in the plurality of processor architectures having an associated instruction set architecture (ISA) . The method comprises: at a first computing device, generating from application source code a core VISA binary for a general-purpose central processing unit (CPU) architecture. A plurality of extension VISA binaries are also generated from the application source code. Each extension VISA binary in the plurality of extension VISA binaries is configured for a domain-specific processing unit architecture in a plurality of domain-specific processing unit architectures. A serialized VISA binary is generated based on the core VISA binary and each extension VISA binary. The serialized VISA binary comprising the core VISA binary and the plurality of extension VISA binaries is sent to a second computing device for execution thereon.
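A minimal sketch of packing the core VISA binary and each extension VISA binary into a single serialized VISA binary is shown below. This length-prefixed container layout is an assumption for illustration only; the disclosure's actual serialized format (illustrated in FIG. 10) is not reproduced here:

```python
# Illustrative serializer for a serialized VISA binary containing the core
# VISA binary and a plurality of extension VISA binaries. The layout
# (count, then per-entry name-length/payload-length headers) is assumed.
import struct

def serialize(binaries: dict) -> bytes:
    """Pack {visa_name: visa_binary} into one blob."""
    out = [struct.pack("<I", len(binaries))]          # number of entries
    for name, payload in binaries.items():
        tag = name.encode()
        out.append(struct.pack("<II", len(tag), len(payload)))
        out.append(tag)
        out.append(payload)
    return b"".join(out)

def deserialize(blob: bytes) -> dict:
    """Recover the individual VISA binaries on the second computing device."""
    (count,), off = struct.unpack_from("<I", blob), 4
    result = {}
    for _ in range(count):
        nlen, plen = struct.unpack_from("<II", blob, off)
        off += 8
        name = blob[off:off + nlen].decode(); off += nlen
        result[name] = blob[off:off + plen]; off += plen
    return result

packed = serialize({"core": b"\x01\x02", "gpu-ext": b"\x03"})
```

On the second computing device, `deserialize` would let the runtime pick one entry (e.g., `"gpu-ext"` or the `"core"` fallback) before native compilation.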
In some examples of the first embodiment of the first aspect, the generating of the core VISA binary comprises compiling.
In some examples of the first embodiment of the first aspect, the generating of the plurality of extension VISA binaries comprises compiling.
In some examples of the first embodiment of the first aspect, the second computing device selects a VISA binary from the serialized VISA binary and generates a native binary based on the selected VISA binary and an instruction set architecture (ISA) of a processing unit of the second computing device for execution of the application.
In some examples of the first embodiment of the first aspect, the generating of the native binary comprises compiling.
In some examples of the first embodiment of the first aspect, the second computing device optimizes the selected VISA binary based on the ISA of the processing unit of the second computing device. The term “optimize” is used herein to refer to an improvement in at least one feature or aspect of the VISA binary or the resultant deployed application. Optimization comprises changing the underlying code to realize the improvement. Optimization does not necessarily indicate improvement according to every metric, nor does it necessarily indicate that any improvement or set of improvements is the best possible.
In some examples of the first embodiment of the first aspect, the plurality of domain-specific processing unit architectures comprise one or more graphics processing unit (GPU) architectures, one or more digital signal processing unit (DSP) architectures, one or more neural processing unit (NPU) architectures, one or more network processing unit architectures, one or more tensor processing unit (TPU) architectures, or any combination thereof.
In some examples of the first embodiment of the first aspect, the core VISA is a highest-level VISA in a VISA hierarchy comprising the highest-level VISA and one or more lower-level VISAs within the CPU domain, and the method further comprises, at the first computing device, generating from the application source code a lower-level VISA binary for each lower-level VISA in the VISA hierarchy, wherein the serialized VISA binary is generated based on the highest-level VISA binary, each lower-level VISA binary, and each extension VISA binary.
In some examples of the first embodiment of the first aspect, the method further comprises: at the first computing device, performing optimizations for each VISA binary in the serialized VISA binary.
In some examples of the first embodiment of the first aspect, the optimizations for each VISA binary in the serialized VISA binary are hardware-agnostic optimizations that are independent of the processing unit of the second computing device and the ISA of the processing unit of the second computing device.
In some examples of the first embodiment of the first aspect, the method further comprises: at the second computing device, selecting a VISA binary from the serialized VISA binary, generating a native binary based on the selected VISA binary and an ISA of a processing unit of the second computing device for execution of the application, and performing optimizations on the selected VISA binary.
In some examples of the first embodiment of the first aspect, the optimizations on the selected VISA binary are hardware-aware optimizations that are dependent on the processing unit of the second computing device and the ISA of the processing unit of the second computing device.
In some examples of the first embodiment of the first aspect, the method further comprises: at the second computing device, collecting data about runtime behavior of the application, collecting data about hot spots of the application, and reoptimizing the selected VISA binary based on the runtime behavior of the application and hot spots of the application.
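The deployment-time collection of runtime behavior and hot-spot data described above can be sketched as follows. The counter-based instrumentation and the threshold value are illustrative assumptions; a real runtime would use sampling or hardware performance counters:

```python
# Sketch of execution-time hot-spot collection on the second computing
# device, used to trigger reoptimization of the selected VISA binary.
# The threshold and function names are hypothetical.
from collections import Counter

HOT_THRESHOLD = 100  # assumed cutoff for "hot" code
call_counts = Counter()

def record_call(function_name: str) -> None:
    """Runtime instrumentation: count how often each function executes."""
    call_counts[function_name] += 1

def hot_spots() -> list:
    """Functions whose runtime behavior justifies reoptimizing the
    selected VISA binary with hardware-aware passes."""
    return [f for f, n in call_counts.items() if n >= HOT_THRESHOLD]

for _ in range(150):          # simulate a frequently executed kernel
    record_call("decode_frame")
record_call("init")           # cold, one-shot code
```

Only `decode_frame` would be fed back into reoptimization; cold code such as `init` is left as-is.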
In accordance with a second embodiment of the first aspect of the present disclosure, there is provided a method for deployment of an application executable by a plurality of processor architectures. Each processor architecture in the plurality of processor architectures having an associated instruction set architecture (ISA). The method comprises: at a first computing device, generating from application source code a core VISA binary for a general-purpose central processing unit (CPU) architecture. The core VISA is a highest-level VISA in a VISA hierarchy comprising the highest-level VISA and one or more lower-level VISAs within the CPU domain. A lower-level VISA binary is also generated from the application source code for each lower-level VISA in the VISA hierarchy. A serialized VISA binary is generated based on the highest-level VISA binary and each lower-level VISA binary. The serialized VISA binary comprising the highest-level VISA binary and each lower-level VISA binary is sent to a second computing device for execution thereon.
In some examples of the second embodiment of the first aspect, the method further comprises: at the first computing device, performing optimizations for each VISA binary in the serialized VISA binary.
In some examples of the second embodiment of the first aspect, the optimizations for each VISA binary in the serialized VISA binary are hardware-agnostic optimizations that are independent of the processing unit of the second computing device and the ISA of the processing unit of the second computing device.
In some examples of the second embodiment of the first aspect, the method further comprises: at the second computing device, selecting a VISA binary from the serialized VISA binary, generating a native binary based on the selected VISA binary and an ISA of a processing unit of the second computing device for execution of the application, and performing optimizations on the selected VISA binary.
In some examples of the second embodiment of the first aspect, the optimizations on the selected VISA binary are hardware-aware optimizations that are dependent on the processing unit of the second computing device and the ISA of the processing unit of the second computing device.
In some examples of the second embodiment of the first aspect, the method further comprises: at the second computing device, collecting data about runtime behavior of the application, collecting data about hot spots of the application, and reoptimizing the selected VISA binary based on the runtime behavior of the application and hot spots of the application.
In accordance with another aspect of the present disclosure, there is provided a computing device comprising a processor and a memory. The memory having tangibly stored thereon executable instructions for execution by the processor. The executable instructions, in response to execution by the processor, cause the computing device to perform the methods described above and herein.
In accordance with a further aspect of the present disclosure, there is provided a non-transitory machine-readable medium having tangibly stored thereon executable instructions for execution by a processor of a computing device. The executable instructions, in response to execution by the processor, cause the computing device to perform the methods described above and herein.
Other aspects and features of the present disclosure will become apparent to those of ordinary skill in the art upon review of the following description of specific implementations of the application in conjunction with the accompanying figures.
Brief Description of the Figures
FIG. 1 is a schematic diagram of an example hierarchy of heterogeneous processor architectures.
FIG. 2 is a block diagram of an example computing device suitable for practicing the teachings of the present disclosure.
FIG. 3 is a flowchart of a method for deployment of an application executable by a plurality of processor architectures in accordance with a first embodiment of the present disclosure.
FIG. 4 is a flowchart of a method for deployment of an application executable by a plurality of processor architectures in accordance with a second embodiment of the present disclosure.
FIG. 5 is a schematic diagram of an example VISA hierarchy.
FIG. 6A is a schematic block diagram of the multi-stage compilation of a high-level VISA in accordance with one example of the present disclosure.
FIG. 6B is a schematic block diagram of the multi-stage compilation of a low-level VISA in accordance with one example of the present disclosure.
FIG. 7 is a block diagram of a compilation method showing operations performed on an application provider device and a target device in accordance with one embodiment of the present disclosure.
FIG. 8 is a flowchart of a method for deployment of an application executable by a plurality of processor architectures in accordance with a third embodiment of the present disclosure.
FIG. 9 is a block diagram of a method for continuous and profile guided optimization in accordance with one embodiment of the present disclosure.
FIG. 10 illustrates the format of a serialized VISA in accordance with one embodiment of the present disclosure.
Detailed Description of Example Embodiments
The present disclosure is made with reference to the accompanying drawings, in which embodiments are shown. However, many different embodiments may be used, and thus the description should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this application will be thorough and complete. Wherever possible, the same reference numbers are used in the drawings and the following description to refer to the same elements, and prime notation is used to indicate similar elements, operations or steps in alternative embodiments. Separate boxes or illustrated separation of functional elements of illustrated systems and devices does not necessarily require physical separation of such functions, as communication between such elements may occur by way of messaging, function calls, shared memory space, and so on, without any such physical separation. As such, functions need not be implemented in physically or logically separated platforms, although they are illustrated separately for ease of explanation herein. Different devices may have different designs, such that although some devices implement some functions in fixed function hardware, other devices may implement such functions in a programmable processor with code obtained from a machine-readable medium. Lastly, elements referred to in the singular may be plural and vice versa, except where indicated otherwise either explicitly or inherently by context. The terms “processor” and “processing unit” are used interchangeably in the present disclosure.
The present disclosure provides a method of providing an application executable by a plurality of heterogeneous processor architectures and related devices. FIG. 1 illustrates an example heterogeneous processor architecture. The heterogeneous processor architecture comprises processors across the CPU, NPU, DSP and GPU domains, with one or more architecture types in each domain, and one or more generations of each architecture type.
Reference is first made to FIG. 2 which illustrates a computing device 102 suitable for practicing the teachings of the present disclosure. The computing device 102 may be part of a deployment system or development system. The computing device 102 includes a central processing unit (CPU) 104 which controls the overall operation of the computing device 102. The computing device 102 may include one or more domain-specific processing units (not shown), for example, for offloading certain computing tasks. Examples of domain-specific processing units comprise a graphics processing unit (GPU), a digital signal processing unit (DSP), a neural processing unit (NPU), a network processing unit, and a tensor processing unit (TPU). Other types of processing units that may be included are an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA), a reduced instruction set circuit (RISC), and a logic circuit.
The processing unit 104 is coupled to a plurality of components via a communication bus (not shown) which provides a communication path between the components and the processing unit 104. The processing unit 104 is coupled to Random Access Memory (RAM) 108, Read Only Memory (ROM) 110, and persistent (non-volatile) memory 112 such as flash memory, one or more input devices 120, one or more output devices 122, and a communication subsystem 130. The memory 112 of the computing device 102 stores data and instructions for execution by the processing unit 104.
The communication subsystem 130 includes one or more wireless transceivers for exchanging radio frequency signals with wireless networks. The communication subsystem 130 may also include a wireline transceiver for wireline communications with wired networks. The wireless transceivers may include one or a combination of Bluetooth transceiver or other short-range wireless transceiver, a Wi-Fi or other wireless local area network (WLAN) transceiver for communicating with a WLAN via a WLAN access point (AP) , or a wireless wide area network (WWAN) transceiver such as a cellular transceiver for communicating with a radio access network (e.g., cellular network) . The cellular transceiver may communicate with any one of a plurality of fixed transceiver base stations of the cellular network within its geographic coverage area. The wireless transceivers may include a multi-band cellular transceiver that supports multiple radio frequency bands. Other types of short-range wireless communication include near field communication (NFC) , IEEE 802.15.3a (also referred to as UltraWideband (UWB) ) , Z-Wave, ZigBee, ANT/ANT+ or infrared (e.g., Infrared Data Association (IrDA) communication) .
The computing device 102 may be used to provide a compiler for generating an application executable by a plurality of processor architectures, each processor architecture in the plurality of processor architectures having an associated instruction set architecture (ISA) , using a virtual instruction set architecture (VISA) . The VISA comprises two components: (1) a virtual instruction set; and (2) a virtual architecture (also referred to as logical architecture) . The virtual instruction set is an intermediate representation (IR) between the source code and a native binary for execution by a destination processing unit of a destination computing device compatible with the virtual architecture.
The virtual architecture of the VISA is comprised of a plurality of components comprising: (i) parallel architecture; (ii) memory; (iii) native data types and addressing; and (iv) abstract Application Binary Interface (ABI). The parallel architecture component comprises logical compute cores, a hardware threading model, and synchronization primitives. The memory component comprises registers, address spaces, memory hierarchy (caches), prefetching mechanism, dataflow model (encoding dataflow to help extract parallelism), memory consistency, and alignment restrictions. The native data types and addressing component comprises addressing modes, scalar types (integer and floating point), and aggregates (vectors, structs, etc.).
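Purely for illustration, the four virtual-architecture components above can be grouped as follows; the field values are invented examples rather than a normative definition of any VISA:

```python
# Illustrative grouping of the four virtual-architecture components.
# All concrete values below are hypothetical examples.
from dataclasses import dataclass

@dataclass
class VirtualArchitecture:
    parallel: dict      # logical compute cores, threading model, sync primitives
    memory: dict        # registers, address spaces, hierarchy, consistency
    data_types: dict    # addressing modes, scalar types, aggregates
    abi: dict           # abstract Application Binary Interface

va = VirtualArchitecture(
    parallel={"logical_cores": 64, "sync": ["barrier", "atomic"]},
    memory={"address_spaces": ["global", "shared"], "alignment": 16},
    data_types={"scalars": ["i32", "f64"], "aggregates": ["vector", "struct"]},
    abi={"calling_convention": "virtual-ccc"},
)
```

Because every field describes a logical rather than physical resource, the same `VirtualArchitecture` description can be lowered onto any compatible physical processor.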
FIG. 3 is a flowchart of a method 300 for deployment of an application executable by a plurality of processor architectures in accordance with one embodiment of the present disclosure. At least parts of the method 300 are carried out by software executed by the processing unit 104 of a first or second computing device 102. The first and second computing devices 102 are both provided with software for running a VISA runtime environment for providing a VISA, as well as a compiler. The compiler of the first computing device 102 is configured to compile source code into a VISA binary. The compiler of the second computing device 102 is configured to (re)compile a VISA binary into a native ISA binary.
At operation 302, source code of an application is received by the first computing device 102. The language of the application source code may vary. The first computing device 102 may be an application server. The application source code may be stored in an application database (not shown) . The application server may host an application store or other application delivery service in which a plurality of applications is available for download to host computing devices. Alternatively, the first computing device 102 may be an application provider, such as an application developer, who provides the application to an application server.
At operation 304, the first computing device 102 generates from the application source code a core VISA binary for a general-purpose CPU architecture. The core VISA binary is typically generated by compiling.
At operation 306, a plurality of extension VISA binaries is generated from the application source code. The plurality of extension VISA binaries is typically generated by compiling. Each extension VISA binary in the plurality of extension VISA binaries is configured for a domain-specific processing unit architecture in a plurality of domain-specific processing unit architectures, and provides the same functionality as the core VISA binary as well as additional functionality for a respective domain-specific processing unit architecture. Because the VISA extensions are extensions to the base IR of the core VISA, extension VISA binaries are compatible with general-purpose CPUs as well, and the VISA binaries for all targets may be lowered to the core VISA of the general-purpose CPU.
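Operations 304 and 306 can be sketched as follows. This is a minimal illustration of the data flow only; the function and target names (`compile_to_visa`, `"core-cpu"`, `"gpu"`, `"npu"`) are hypothetical and stand in for the real compiler of the first computing device 102.

```python
# Hypothetical sketch of operations 304-306: generating a core VISA
# binary plus per-domain extension VISA binaries from one source tree.

def compile_to_visa(source_code, target):
    # Placeholder for the real compiler; here we just tag the source
    # with its target so the data flow is visible.
    return {"target": target, "code": source_code}

def generate_visa_binaries(source_code, domain_targets):
    # Operation 304: core VISA binary for a general-purpose CPU.
    core = compile_to_visa(source_code, "core-cpu")
    # Operation 306: one extension VISA binary per domain-specific
    # processing-unit architecture (e.g., GPU, NPU), each providing
    # the core functionality plus domain-specific extensions.
    extensions = {t: compile_to_visa(source_code, t) for t in domain_targets}
    return core, extensions

core, exts = generate_visa_binaries("int main(){}", ["gpu", "npu"])
assert core["target"] == "core-cpu"
assert set(exts) == {"gpu", "npu"}
```

In this sketch, every extension binary is built from the same source as the core binary, mirroring the statement that each extension provides the same functionality as the core plus additions.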
At operation 308, the core VISA binary and the plurality of extension VISA binaries are provided to the second computing device 102 for execution thereon, for example, by downloading from the first computing device 102 via a wireless or wired communication channel.
At operation 310, one of the core VISA binary and the plurality of extension VISA binaries is selected by the second computing device 102 based on an ISA of a processor architecture of the processing unit of the second computing device 102 to be used for execution of the application. When more than one execution platform is available on the second computing device 102 (e.g., the target device), such as a CPU, GPU, etc., the execution platform to be used and the corresponding processor architecture of the processing unit are selected based on the capabilities of the available execution platforms and the corresponding utilization by the application. Device profiles which define the capabilities of the available execution platforms and utilization may be provided and used by a runtime scheduler to optimize the allocation of tasks (e.g., applications) to the available execution platforms of computing devices. Device profile creation and management and the runtime scheduling algorithm are beyond the scope of the present disclosure.
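The selection in operation 310 can be sketched as a lookup with a guaranteed fallback. The names and keys below are hypothetical; the one property taken from the disclosure is that the core VISA binary is executable on any general-purpose CPU and therefore always serves as the fallback.

```python
# Hypothetical sketch of operation 310: selecting a VISA binary based
# on the ISA of the processing unit chosen for execution.

def select_visa_binary(binaries, platform_isa):
    # Prefer the extension binary matching the chosen platform's ISA;
    # fall back to the core binary, which every target can lower to.
    return binaries.get(platform_isa, binaries["core-cpu"])

binaries = {"core-cpu": "core.visa", "gpu": "gpu.visa"}
assert select_visa_binary(binaries, "gpu") == "gpu.visa"
assert select_visa_binary(binaries, "npu") == "core.visa"  # fallback
```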
At operation 312, a native ISA binary based on the ISA of the processor architecture of the processing unit of the destination computing device is generated from the selected VISA binary. The native ISA binary is typically generated by re-compiling.
Although in the above example the first computing device 102 is provided with application source code from which the VISA binaries are generated, it is contemplated that the VISA binaries may be provided to the first computing device 102 by another computing device (e.g., belonging to a software developer) in other embodiments, in which case the first computing device 102 merely distributes the applications in the form of VISA binaries by providing an application store or other application delivery service.
FIG. 4 is a flowchart of a method 400 for deployment of an application executable by a plurality of processor architectures in accordance with a second embodiment of the present disclosure. At least parts of the method 400 are carried out by software executed by the processing unit 104 of the computing device 102. The method 400 is similar to the method 300 in several respects. However, unlike the method 300, the method 400 supports hierarchical VISAs within a domain. The VISA hierarchy comprises higher-level VISAs and lower-level VISAs. Higher-level VISAs allow for a wider range of target compatibility. Lower-level VISAs offer improved performance for more specific targets and reduced compilation overhead on the target computing device. Reference is made to FIG. 5, which illustrates a VISA hierarchy 500 having two levels within the CPU domain. The highest level of the VISA hierarchy 500 comprises a core CPU VISA 502. The lower level of the VISA hierarchy 500 comprises a Cloud CPU VISA 504 adapted for cloud computing, a terminal CPU VISA 506 adapted for mobile terminals such as smartphones, and an Internet of Things (IoT) CPU VISA 508 adapted for IoT computing.
At operation 402, source code of an application is received by a first computing device 102. The language of the application source code may vary.
At operation 404, the first computing device 102 generates from the application source code a highest-level VISA binary in a VISA hierarchy for a particular processing domain, such as a CPU VISA for a general-purpose CPU architecture. The highest-level VISA is typically generated by compiling.
At operation 406, the first computing device 102 generates from the application source code a lower-level VISA binary in the VISA hierarchy, typically by compiling.
At operation 409, first compiler optimizations are optionally performed for each VISA binary. The first compiler optimizations are performed at each VISA level based on the available information in the VISA virtual architecture. For example, optimization may be based on information ranging from high-level information such as pointer size and calling conventions, to low-level information such as execution unit availability, register file size, cache hierarchy parameters, instruction latencies, etc. An application may be converted from a higher-level VISA to a lower-level VISA by performing only those optimizations that apply to the target VISA level.
At operation 408, it is determined whether other lower-level VISAs remain in the VISA hierarchy. If other lower-level VISAs remain, processing proceeds to operation 410 at which the first computing device 102 generates from the application source code a next lower-level VISA binary in the VISA hierarchy, typically by compiling.
If no other lower-level VISAs remain, processing proceeds to operation 414 at which a serialized VISA binary is generated by serializing the lower-level VISA binaries so that each lower-level binary contains the serialized VISAs from all of its respective ancestors in the VISA hierarchy (e.g., from its parents, grandparents, etc. in the VISA hierarchy) . The serialization allows higher-level VISAs to be used on the destination computing device.
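The ancestor-carrying serialization of operation 414 can be sketched as a recursive walk over the hierarchy. All names here are hypothetical; the sketched invariant comes from the text: each lower-level entry carries the serialized VISAs of all of its ancestors, so a destination device can fall back to a higher, more portable level.

```python
# Hypothetical sketch of operation 414: serializing the VISA hierarchy
# so each lower-level binary carries the VISAs of all its ancestors.

def serialize_hierarchy(node, ancestors=()):
    # Each entry packages a level's own binary together with every
    # ancestor binary (parents, grandparents, etc.).
    entry = {"level": node["level"],
             "chain": list(ancestors) + [node["binary"]]}
    result = [entry]
    for child in node.get("children", []):
        result += serialize_hierarchy(child, entry["chain"])
    return result

# Two-level CPU-domain hierarchy as in FIG. 5 (binaries hypothetical).
hierarchy = {"level": "core-cpu", "binary": "core.visa", "children": [
    {"level": "cloud-cpu", "binary": "cloud.visa"},
    {"level": "iot-cpu", "binary": "iot.visa"},
]}
serialized = serialize_hierarchy(hierarchy)
```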
At operation 416, the VISA binaries are provided to a second computing device 102 for execution thereon, for example, by downloading from the first computing device 102 via a wireless or wired communication channel.
At operation 418, one of the VISA binaries is selected by the second computing device 102 based on an ISA of a processor architecture of the processing unit of the second computing device 102 to be used for execution of the application.
At operation 420, second compiler optimizations are optionally performed for the selected VISA binary. The second compiler optimizations are based on the available information in the ISA of the processor architecture of the processing unit of the second computing device 102.
At operation 422, a native ISA binary based on the ISA of the processor architecture of the processing unit of the destination computing device is generated from the selected VISA binary. The native ISA binary is typically generated by re-compiling.
Although in the above example a two-level VISA hierarchy is described, the teachings of the present disclosure are not limited to a two-level VISA hierarchy. The teachings of the present disclosure may be adapted to a VISA hierarchy having three or more levels, each level having one or more VISAs. The levels of the VISA hierarchy may be based on application (e.g., Cloud, mobile device, server, IoT, etc. ) , type, generation, or other suitable hierarchical division. The number of levels depends on the application domain and/or the deployment environment (e.g., cloud, mobile device, server, IoT, etc. ) .
FIG. 6A and 6B illustrate multi-stage compilation at high-level and low-level VISAs in a VISA hierarchy, indicated by references 600 and 650, respectively, in accordance with example embodiments of the present disclosure. Referring to the high-level embodiment of FIG. 6A, a base IR 602 for general-purpose computing is generated from the source code 601 of the application via a VISA runtime environment on a first computing device 102. Next, a highest-level VISA binary 606 is generated using a VISA code generator (compiler) 604 of the first computing device 102. Lower-level VISA binaries are then created by the VISA code generator (compiler) 604 until a VISA binary has been created for each level of the VISA hierarchy. A linker and inter-procedural optimizer 608 performs a number of optimizations on each hierarchical VISA binary using VISA libraries (not shown) and binary libraries (not shown). The linker and inter-procedural optimizer 608 is responsible for linking together multiple VISA object files (each created from one source file) into one combined VISA binary (the "serialized VISA binary"). During this process, the linker and inter-procedural optimizer 608 may optionally perform inter-procedural optimizations on the serialized VISA binary, such as inter-procedural constant propagation, function specialization, inlining, and function reordering, so long as such optimizations do not depend on physical hardware details.
The binary libraries are used for symbol resolution only. A serialized VISA binary 610 is then generated by the compiler of the first computing device 102 from the hierarchical VISA binaries. The VISA serialization may be implemented on top of the LLVM bitcode format, which features efficient fixed and variable bitrate encoding of mostly small numbers and text. The serialized VISA 610 can be embedded in a popular object format, such as Executable and Linkable Format (ELF) or Common Object File Format (COFF), resulting in a "fat binary", which allows the application to be provided to destination computing devices in VISA binary form as well as executable machine code form.
FIG. 10 illustrates the format of a serialized VISA in accordance with one embodiment of the present disclosure. The serialized VISA format is based on the LLVM bitcode format and includes additional and/or extended block and record types comprising: VARCH_BLOCK (an addition) that contains records for various virtual architecture-related information, and permits undefined records so that later compiler stages may provide values for such records or assume conservative defaults; OPTIMIZATIONS_BLOCK (an addition) comprising an OPTS_PERFORMED record (a mandatory bitmap-type record that indicates which optimizations have already been performed) and optional key-value records for optimizations to propagate information to later compiler stages; and METADATA_BLOCK (an extension of an existing block) that supports custom pragmas and attributes for, e.g., profiled hot values, profiled block frequencies, and known runtime constants. The serialization format is extended to support multiple MODULE_BLOCKs in one container. This allows multiple versions of the VISA of a program (at different abstraction levels) to be encoded simultaneously, e.g., Cloud-CPU, Hi16xx, Linx, etc.
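The container layout of FIG. 10 can be sketched as nested records. The field names and values below are illustrative assumptions (the real format is bitcode, not a dictionary); the sketch shows only the block relationships described above: one MODULE_BLOCK per abstraction level, a VARCH_BLOCK that may leave records undefined, and an OPTS_PERFORMED bitmap consulted by later compiler stages.

```python
# Hypothetical sketch of the serialized-VISA container of FIG. 10.

container = {
    # Multiple MODULE_BLOCKs: one per VISA abstraction level.
    "MODULE_BLOCKS": [
        {
            "name": "Cloud-CPU",
            # VARCH_BLOCK: virtual-architecture records; None marks an
            # undefined record a later stage may fill in or default.
            "VARCH_BLOCK": {"pointer_size": 64, "vector_width": None},
            # OPTIMIZATIONS_BLOCK: mandatory bitmap of optimizations
            # already performed, plus optional key-value records.
            "OPTIMIZATIONS_BLOCK": {"OPTS_PERFORMED": 0b0110,
                                    "inline_threshold": 225},
            # METADATA_BLOCK extension: custom pragmas/attributes.
            "METADATA_BLOCK": {"profiled_hot_values": {}},
        },
    ],
}

def opts_already_done(module, bit):
    # A later compiler stage checks the bitmap to avoid re-running a
    # pass that an earlier stage already performed.
    return bool(module["OPTIMIZATIONS_BLOCK"]["OPTS_PERFORMED"] & bit)

assert opts_already_done(container["MODULE_BLOCKS"][0], 0b0010)
```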
The serialized VISA binary 610 is sent to a second computing device 102 where it is processed by a local compiler. The local compiler is configured to operate on VISA code at different levels of abstraction represented by the different VISA levels. As noted above, a higher-level VISA representation is more portable because hardware-aware optimizations are deferred until closer to deployment time, which may obviate problems arising from architecture replacement. The serialized VISA binary 610 is received by a program loader 612 of the local operating system of the target computing device, which selects a VISA binary from the serialized VISA binary 610 based on the ISA of the second computing device 102. When the corresponding application is launched or invoked, the operating system creates a thread for its execution and launches the program loader 612, passing it the application executable. The program loader 612 opens the executable, links it with the required dynamically linked libraries, and passes control to the main function inside the executable. The program loader 612 has the ability to recognize a fat binary, select an appropriate VISA embedded therein, and invoke the local compiler to convert the VISA into the physical ISA of the second computing device.
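The fat-binary handling of the program loader 612 can be sketched as follows. All identifiers are hypothetical; the two behaviors taken from the text are (i) recognizing whether an executable carries embedded VISAs at all, and (ii) selecting the VISA matching the local ISA, with the highest (most portable) level as fallback, before invoking the local compiler.

```python
# Hypothetical sketch of the program loader 612 handling a fat binary.

def load_and_launch(executable, local_isa, compile_fn):
    if "visas" in executable:
        # Fat binary: pick the VISA matching the local ISA, falling
        # back to the highest-level (most portable) VISA, then invoke
        # the local compiler to produce native code.
        visa = executable["visas"].get(local_isa,
                                       executable["visas"]["highest"])
        return compile_fn(visa, local_isa)
    # Plain executable: already native machine code.
    return executable["machine_code"]

fat = {"visas": {"x86-64": "x86.visa", "highest": "core.visa"}}
native = load_and_launch(fat, "x86-64", lambda v, isa: "native:" + v)
assert native == "native:x86.visa"
```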
A series of hardware-aware optimization operations may then be performed. The hardware-aware optimizations exploit knowledge of characteristics of the physical hardware and its ISA. Examples of hardware-aware optimizations include cache-aware loop optimizations 614, vector size-aware (SLP) vectorization 616, and resource-aware parallelization 618. Inter-procedural optimization (IPO) 620 is then optionally performed utilizing the system resources of the second computing device 102, any VISA extensions, and VISA libraries 622. The IPO 620 is different from the optimization of the linker and inter-procedural optimizer 608 as it can link and inline the VISA libraries available on the second computing device, giving it more information about the whole program. In addition, because the local compiler is aware of the physical hardware and ISA, the IPO techniques can exploit this knowledge to generate more efficient code. Next, partial evaluation 624, confirming the completeness and readiness of the optimized VISA binary, is performed, and the optimized VISA binary is output. Partial evaluation involves the evaluation of instructions that can be performed by the compiler statically at compile time independent of the values of the input parameters that are only known at runtime. In the context of compiler optimizations, a common implementation of partial evaluation generates specializable optimized code templates in the program executable. When runtime information becomes available, e.g., certain constants are submitted by the user, such information can be used to specialize the code templates to create more efficient code than the generic, unspecialized code. Lastly, a native ISA binary 640 is generated from the selected VISA binary based on the ISA of the processor architecture of the processing unit of the second computing device 102. The native ISA binary is typically generated by re-compiling.
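The template-then-specialize pattern of partial evaluation described above can be sketched with a toy example. This is not the disclosed implementation; it merely illustrates, with hypothetical names, how a generic code template is specialized once a runtime constant becomes known.

```python
# Hypothetical sketch of partial evaluation: a generic template is
# kept in the executable, then specialized when a runtime constant
# (here, the exponent n) becomes available.

def make_power_template():
    def generic(x, n):
        # Generic code path: n is unknown until runtime.
        r = 1
        for _ in range(n):
            r *= x
        return r

    def specialize(n):
        # Once n is known, bake it into a specialized version; a real
        # compiler would emit straight-line code for the fixed n.
        def specialized(x):
            r = 1
            for _ in range(n):
                r *= x
            return r
        return specialized

    return generic, specialize

generic, specialize = make_power_template()
cube = specialize(3)  # specialization for the runtime constant n=3
assert cube(2) == generic(2, 3) == 8
```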
Referring to the low-level embodiment of FIG. 6B, a base IR 602 for general-purpose computing is generated from the source code 601 of the application via a VISA runtime environment on a first computing device 102. Next, a highest-level VISA binary 606 is generated using a VISA code generator (compiler) 604 of the first computing device 102. Lower-level VISA binaries are then created by the VISA code generator (compiler) 604 until a VISA binary has been created for each level of the VISA hierarchy. A series of hardware-agnostic optimizations may then be performed on the hierarchical VISA binaries. Examples of hardware-agnostic optimizations include generic loop optimization 628, loop vectorization 630 to transform loops into vectors for faster processing, generic parallelization 632 to parallelize sections of the intermediate VISA binary for faster performance, and linker and inter-procedural optimization 608 linking against the system resources of the target device. The hardware-agnostic optimizations are weaker versions of the late-stage optimizations that can exploit hardware knowledge (such as operations 614, 616, 618). Together, FIG. 6A and FIG. 6B illustrate how the hierarchical VISA specification allows stages of the compiler to be divided with flexibility between the developer's computing device and the user's computing device. The actual division of the stages is determined by the use case (e.g., Cloud, mobile device, server, IoT, etc.) and resource availability on the computing devices. This allows the generation of a lower-level VISA representation for a certain class of CPU that is more specialized and on which, therefore, more accurate optimizations can be performed. This makes generational differences of an architecture transparent to programmers. A serialized VISA binary 610 is then generated by the compiler of the first computing device 102 from the hierarchical VISA binaries.
The serialized VISA binary 610 is sent to a second computing device 102 where it is processed by a local compiler. The serialized VISA binary 610 is received by a program loader 612 of the local operating system, which selects a VISA binary from the serialized VISA binary 610 based on the ISA of the second computing device 102. Inter-procedural optimization (IPO) 620 is then performed utilizing the system resources of the second computing device 102, any VISA extensions, and VISA libraries 622. Next, partial evaluation 624, confirming the completeness and readiness of the optimized VISA binary, is performed, and the optimized VISA binary is output. Lastly, a native ISA binary 640 is generated from the selected VISA binary based on the ISA of the processor architecture of the processing unit of the second computing device 102. The native ISA binary is typically generated by re-compiling.
The provision of a VISA hierarchy and multi-stage compilation allows the compiler on the destination computing device and the VISA runtime environment to be adapted for and deployed in different product scenarios, with different trade-offs between portability/(re)targetability, performance, and compilation overhead on the target computing device. For example, in server or cloud deployments where high performance and portability (e.g., between ARM and x86 servers) are more important, a higher-level VISA with fewer development system-side compilation stages can help achieve improved performance by increasing the deployment system-side compilation budget. The high-level VISA is produced by performing minimal translation from source code to VISA binary, leaving expensive and target-specific optimizations (e.g., cache-aware loop optimizations, vectorization, parallelization) to the deployment system-side compilation phase. For mobile deployments where resource constraints place an upper bound on the deployment system-side compilation budget, producing a lower-level VISA by performing more of the generic optimizations on the development side, leaving only critical specializations (e.g., instruction selection, code generation) to be done during deployment system-side compilation, is important for achieving good performance on the target device economically. An extremely low-level VISA, with only sensitive ISA extensions abstracted, achieves IP protection without the disadvantages of multi-stage compilation on the target computing device.
Referring now to FIG. 7, a compilation method 700 showing operations performed on an application provider device (e.g., developer computing device) and a target device (e.g., end user computing device) in accordance with one embodiment of the present disclosure will be described. FIG. 7 is an expanded view of FIG. 6A (or FIG. 6B). A developer may provide source code in one of several different programming languages, referred to as source code 702 and source code 703. Advantageously, this allows developers to select a desired programming language rather than being bound to a specific programming language. Thus, the approach described above can be applied for any given programming language. The source code of the application may be received on a computing device 102 of the application provider (e.g., developer computing device). Language frontends 704, 706 are used to convert the source code into a respective base intermediate representation 602 for general-purpose computing. A portable optimizer 708 and VISA code generator 604 are used to generate a VISA binary 606 for each VISA level of the VISA hierarchy and for each source code version of the application. The portable optimizer 708 is a part of the compiler that performs the portable or hardware-agnostic optimizations (e.g., operations 626, 630, 632). The VISA binaries 606 may be optimized by the linker and inter-procedural optimizer 608 using VISA libraries 622 and binary libraries 710. The binary libraries 710 are used for symbol resolution only. Although not shown, other VISA optimizations such as those described above in connection with FIG. 6B may be performed. A serialized VISA binary 610 is then generated by the compiler of the developer computing device 102 from the hierarchical VISA binaries. The VISA libraries 622 may be embedded as fat libraries in the serialized VISA binary 610. Details concerning the serialized VISA binary 610 are provided above.
The serialized VISA binary 610 is sent to a target computing device 102 for processing by a local compiler 714. The serialized VISA binary 610 is loaded into memory by a program loader 612 for compilation and execution. An installation manager (not shown) invokes the compiler 714 to translate the VISA binary into executable machine code (a native object) for the target computing device 102. This is known as "install-time" compilation. Alternatively, the functions of the program loader and installation manager may be combined: the program loader 612 recognizes the serialized VISA binary 610 and invokes the compiler 714 to translate the serialized VISA binary 610. This is known as "just-ahead-of-time" compilation, and the compiler is sometimes referred to as a just-ahead-of-time compiler 714. Heuristics may be employed to decide the optimal time for triggering the compilation. For example, applications (programs) that run rarely may not be worthwhile to specialize.
Once triggered, the compiler 714 extracts the appropriate VISA 716 from the serialized VISA binary 610 based on the ISA of the second computing device 102 and, depending on the available hierarchical VISA versions, constructs a suitable optimization pipeline (not shown), e.g., more optimizations for a higher-level VISA, fewer optimizations for a lower-level VISA, and specializes the VISA for the target computing device 102. If the execution environment on the target computing device 102 has previously installed VISA-enabled libraries, the VISAs of the installed VISA-enabled libraries can be inlined into the program by the compiler 714 for improved performance.
The VISA binary 716 is compiled into a native object 720 of the target device using a native code generator 718. The native object 720 is linked to the resources of the target computing device 102 using a native linker 722. The native linker 722 uses binary libraries 710 to generate executable code 724, such as a native ISA binary ready for launching 726. Depending on the use case, a persistent code cache 728 is used to store the executable code in memory to reduce overhead and improve performance. The executable code is version-controlled in the persistent code cache 728 such that the previously compiled code can be launched directly by the program loader 612 on subsequent program launches without compiling again. It is possible to roll back to an older version of the compiled code using a tool (not shown) for managing the code cache, provided to the user as part of the installation manager.
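The versioned behavior of the persistent code cache 728 can be sketched as follows. The class and method names are hypothetical; the sketch captures only the two properties stated above: subsequent launches reuse the latest compiled artifact without recompiling, and a management tool can roll back to an older version.

```python
# Hypothetical sketch of the persistent code cache 728: versioned
# storage of compiled executables.

class CodeCache:
    def __init__(self):
        self._versions = {}  # app name -> list of compiled artifacts

    def store(self, app, artifact):
        # Each successful (re)compilation appends a new version.
        self._versions.setdefault(app, []).append(artifact)

    def latest(self, app):
        # The program loader launches this directly on later runs,
        # skipping recompilation.
        versions = self._versions.get(app)
        return versions[-1] if versions else None

    def rollback(self, app):
        # Drop the newest version, exposing the previous one.
        if self._versions.get(app):
            self._versions[app].pop()

cache = CodeCache()
cache.store("app", "v1.bin")
cache.store("app", "v2.bin")
assert cache.latest("app") == "v2.bin"
cache.rollback("app")
assert cache.latest("app") == "v1.bin"
```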
FIG. 8 is a flowchart of a method 800 for deployment of an application executable by a plurality of processor architectures in accordance with a third embodiment of the present disclosure. At least parts of the method 800 are carried out by software executed by the processing unit 104 of the computing device 102. The method 800 is similar to the methods 300 and 400 in several respects. However, unlike the methods 300 and 400, the method 800 performs continuous and profile-guided optimizations. After the native ISA binary is generated, the application is run on the second computing device 102 (operation 802). Profile data is obtained from the running (e.g., operating or executing) application (operation 804). The profile data from the running application is used to further optimize the VISA binary (operation 806), which can be used to generate a new native ISA binary for use the next time the application is run. A detailed example is described below. However, many suitable just-in-time (JIT) compilation or just-ahead-of-time (AoT) compilation techniques may be adapted for use. As will be understood by persons skilled in the art, JIT compilation happens at start-up as well as during the execution of a program, whereas AoT compilation happens prior to the execution of the application but after the exact hardware/software configuration of the execution platform of the target device is known.
Referring to FIG. 9, a method 900 for continuous and profile-guided optimization in accordance with one embodiment of the present disclosure will be described. The method 900 is similar to the method 700 on the target computing device 200. However, the compilation process is further extended with two profilers: an instrumentation-based profiler 910 and a lightweight sampling profiler 920. The instrumentation-based profiler 910 collects data and information about the runtime behavior of the application using instrumentation code inserted into the source code during compilation. This information can be used in subsequent recompilations of the source code for better optimization.
The lightweight sampling profiler 920 is a profiler thread that periodically samples the running program and collects data and information about hot spots of the application, such as function hotness, block call frequencies, call graphs, hardware event statistics, etc. A hot spot is a region of a computer program where a high proportion of executed instructions occur or where most time is spent during the program's execution, which are not necessarily the same because some instructions are faster than others. This information can similarly be used to help better optimize the binary code, at lower precision but without the runtime overhead incurred by instrumentation. A continuous optimizing compiler 904 uses heuristics to choose when to initiate recompilations using the collected profile information, and writes the new version of the executable code 724 into the persistent code cache 728.
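The aggregation step of the sampling profiler can be sketched in a few lines: periodic samples of the running program are tallied into per-function hotness counts, with no instrumentation of the target code. The function and sample names are hypothetical.

```python
# Hypothetical sketch of the lightweight sampling profiler 920:
# periodic call-stack samples aggregated into function hotness.

from collections import Counter

def hotness_from_samples(samples):
    # Each sample records the function executing at sample time; the
    # functions seen most often are the program's hot spots.
    return Counter(samples)

samples = ["matmul", "matmul", "io_wait", "matmul", "parse"]
hot = hotness_from_samples(samples)
assert hot.most_common(1)[0] == ("matmul", 3)
```

Because hotness is estimated from samples rather than inserted counters, precision is lower than with the instrumentation-based profiler 910, but the profiled program runs at full speed.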
The VISA binary 716 is extracted from a serialized VISA binary 610 by the continuous optimizing compiler 904. The VISA binary 716 is transformed into a native object 720 by the native code generator 718, and the native linker 722 makes use of binary libraries 710 to generate executable code 724 from which a program 726 is launched. Unlike the method 700, the continuous optimizing compiler 904 regularly or continuously optimizes the VISA binary 716. The instrumentation-based profiler 910 receives data from the running program 726 and sends data to the continuous optimizing compiler 904, which uses the data to improve the VISA binary 716 during recompilation.
The lightweight sampling profiler 920 receives data from the program loader 612, which launches compiler and profiler threads, and periodically records call stacks of all threads and sends sampling profile data to a compilation queue 930. The sampling-based profiling does not require instrumentation of the target code. The compilation queue 930 heuristically queues hot functions for re-optimization in order of benefit/cost ratio and feeds the continuous optimizing compiler 904, which uses the profile data to guide re-optimization. The continuous optimizing compiler 904 wakes up only when permitted by resource policy, e.g., idle-time-only, %-application, time.
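The benefit/cost ordering of the compilation queue 930 can be sketched with a priority queue. The class, function names, and the benefit/cost numbers are hypothetical; the one property taken from the text is that candidates are dequeued in descending benefit/cost ratio.

```python
# Hypothetical sketch of the compilation queue 930: hot functions are
# queued for re-optimization in order of benefit/cost ratio.

import heapq

class CompilationQueue:
    def __init__(self):
        self._heap = []

    def enqueue(self, func, benefit, cost):
        # heapq is a min-heap, so negate the ratio to pop the highest
        # benefit/cost candidate first.
        heapq.heappush(self._heap, (-benefit / cost, func))

    def next_candidate(self):
        # The continuous optimizing compiler drains this when the
        # resource policy permits it to run.
        return heapq.heappop(self._heap)[1] if self._heap else None

q = CompilationQueue()
q.enqueue("matmul", benefit=90, cost=3)  # ratio 30
q.enqueue("parse", benefit=40, cost=2)   # ratio 20
assert q.next_candidate() == "matmul"
```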
The just-in-time compilation from VISA to executable machine code adapts and optimizes the program for the available hardware features and microarchitecture characteristics of the execution environment. Moreover, the continuous program optimization exploits knowledge about the target computing device workload and the runtime behavior of the program to achieve an optimal tuning of the program. The continuous and profile-guided optimizations create a closed-loop method for continuously improving the VISA binary based on profiling data.
General
The steps and/or operations in the flowcharts and drawings described herein are for purposes of example only. There may be many variations to these steps and/or operations without departing from the teachings of the present disclosure. For instance, the steps may be performed in a differing order, or steps may be added, deleted, or modified, as appropriate.
The coding of software for carrying out the above-described methods is within the scope of a person of ordinary skill in the art having regard to the present disclosure. Machine-readable code executable by one or more processors of one or more respective devices to perform the above-described methods may be stored in a machine-readable medium such as the memory of the data manager. The terms "software" and "firmware" are interchangeable within the present disclosure and comprise any computer program stored in memory for execution by a processor, comprising Random Access Memory (RAM) memory, Read Only Memory (ROM) memory, EPROM memory, electrically EPROM (EEPROM) memory, and non-volatile RAM (NVRAM) memory. The above memory types are examples only, and are thus not limiting as to the types of memory usable for storage of a computer program.
All values and sub-ranges within disclosed ranges are also disclosed. Also, although the systems, devices and processes disclosed and shown herein may comprise a specific plurality of elements, the systems, devices and assemblies may be modified to comprise additional or fewer of such elements. Although several example embodiments are described herein, modifications, adaptations, and other implementations are possible. For example, substitutions, additions, or modifications may be made to the elements illustrated in the drawings, and the example methods described herein may be modified by substituting, reordering, or adding steps to the disclosed methods.
Features from one or more of the above-described embodiments may be selected to create alternate embodiments comprised of a subcombination of features which may not be explicitly described above. In addition, features from one or more of the above-described embodiments may be selected and combined to create alternate embodiments comprised of a combination of features which may not be explicitly described above. Features suitable for such combinations and subcombinations would be readily apparent to persons skilled in the art upon review of the present disclosure as a whole.
In addition, numerous specific details are set forth to provide a thorough understanding of the example embodiments described herein. It will, however, be understood by those of ordinary skill in the art that the example embodiments described herein may be practiced without these specific details. Furthermore, well-known methods, procedures, and elements have not been described in detail so as not to obscure the example embodiments described herein. The subject matter described herein and in the recited claims intends to cover and embrace all suitable changes in technology.
Although the present disclosure is described at least in part in terms of methods, a person of ordinary skill in the art will understand that the present disclosure is also directed to the various elements for performing at least some of the aspects and features of the described methods, be it by way of hardware, software or a combination thereof. Accordingly, the technical solution of the present disclosure may be embodied in a non-volatile or non-transitory machine-readable medium (e.g., optical disk, flash memory, etc. ) having stored thereon executable instructions tangibly stored thereon that enable a processing device to execute examples of the methods disclosed herein.
The term “database” may refer to either a body of data, a relational database management system (RDBMS) , or to both. As used herein, a database may comprise any collection of data comprising hierarchical databases, relational databases, flat file databases, object-relational databases, object oriented databases, and any other structured collection of records or data that is stored in a computer system. The above examples are example only, and thus are not intended to limit in any way the definition and/or meaning of the terms "processor" or “database” .
The present disclosure may be embodied in other specific forms without departing from the subject matter of the claims. The described example embodiments are to be considered in all respects as being only illustrative and not restrictive. The scope of the present disclosure is, therefore, indicated by the appended claims rather than by the foregoing description. The scope of the claims should not be limited by the embodiments set forth in the examples, but should be given the broadest interpretation consistent with the description as a whole.
Claims (23)
- A method for deployment of an application executable by a plurality of processor architectures, each processor architecture in the plurality of processor architectures having an associated instruction set architecture (ISA), the method comprising:
  at a first computing device:
  generating from application source code a core virtual instruction set architecture (VISA) binary for a general-purpose central processing unit (CPU) architecture;
  generating from the application source code a plurality of extension VISA binaries, each extension VISA binary in the plurality of extension VISA binaries being configured for a domain-specific processing unit architecture in a plurality of domain-specific processing unit architectures;
  generating a serialized VISA binary based on the core VISA binary and each extension VISA binary; and
  sending the serialized VISA binary comprising the core VISA binary and the plurality of extension VISA binaries to a second computing device for execution thereon.
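For illustration only (not part of the claims), the serialization step of claim 1 can be sketched as packing the core VISA binary and each extension VISA binary into one deployable blob. The container layout here (a length-prefixed JSON header followed by raw sections) and all names are hypothetical assumptions; the claim does not prescribe any particular format.

```python
import json
import struct

def serialize_visa_binaries(core: bytes, extensions: dict) -> bytes:
    """Pack a core VISA binary and named extension VISA binaries into one blob.

    Hypothetical layout: 4-byte big-endian header length, a JSON header
    listing each section's name and size, then the raw section bytes.
    """
    sections = [("core", core)] + sorted(extensions.items())
    header = json.dumps([[name, len(data)] for name, data in sections]).encode()
    body = b"".join(data for _, data in sections)
    return struct.pack(">I", len(header)) + header + body

def deserialize_visa_binaries(blob: bytes) -> dict:
    """Recover the named VISA binaries from a serialized blob."""
    header_len = struct.unpack(">I", blob[:4])[0]
    entries = json.loads(blob[4 : 4 + header_len])
    out, offset = {}, 4 + header_len
    for name, length in entries:
        out[name] = blob[offset : offset + length]
        offset += length
    return out
```

The second computing device would deserialize the blob on receipt and pick the section matching its hardware.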
- The method of claim 1, wherein the generating the core VISA binary comprises compiling.
- The method of claim 2, wherein the generating the plurality of extension VISA binaries comprises compiling.
- The method of claim 1, wherein the second computing device selects a VISA binary from the serialized VISA binary and generates a native binary based on the selected VISA binary and an instruction set architecture (ISA) of a processing unit of the second computing device for execution of the application.
- The method of claim 4, wherein the generating the native binary comprises compiling.
- The method of claim 4, wherein the second computing device optimizes the selected VISA binary based on the ISA of the processing unit of the second computing device.
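The selection step in claims 4 through 6 can be sketched as follows (illustrative only; section names and the preference-by-order policy are assumptions, not claim language): the second computing device prefers a domain-specific extension VISA binary when it has a matching processing unit, and falls back to the core CPU binary otherwise.

```python
def select_visa_binary(sections: dict, device_units: list):
    """Pick the VISA binary matching the device's processing units.

    `sections` maps section names (e.g. "core", "gpu", "npu") to VISA
    binaries; `device_units` lists the device's units in preference order.
    Falls back to the core CPU binary when no extension matches.
    """
    for unit in device_units:
        if unit in sections:
            return unit, sections[unit]
    return "core", sections["core"]
```

The chosen VISA binary would then be compiled to a native binary for the device's actual ISA.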
- The method of claim 1, wherein the plurality of domain-specific processing unit architectures comprise one or more graphics processing unit (GPU) architectures, one or more digital signal processing unit (DSP) architectures, one or more neural processing unit (NPU) architectures, one or more network processing unit architectures, one or more tensor processing unit (TPU) architectures, or any combination thereof.
- The method of claim 1, wherein the core VISA is a highest-level VISA in a VISA hierarchy comprising the highest-level VISA and one or more lower-level VISAs within the CPU domain, the method further comprising:
  at the first computing device:
  generating from the application source code a lower-level VISA binary for each lower-level VISA in the VISA hierarchy;
  wherein the serialized VISA binary is generated based on the highest-level VISA binary, each lower-level VISA binary, and each extension VISA binary.
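The VISA hierarchy of claim 8 can be illustrated with a minimal sketch (the integer-level encoding is an assumption for illustration; the claim does not define how levels are identified): the device picks the most hardware-specific level it supports, falling back to the portable highest-level VISA.

```python
def select_hierarchy_binary(level_binaries: dict, device_max_level: int):
    """Choose the most specific VISA level the device supports.

    `level_binaries` maps a VISA level to its binary, where level 0 is the
    highest-level (most portable) core VISA and larger numbers are
    lower-level, more CPU-specific VISAs. Level 0 is assumed to always be
    present, so any device can fall back to it.
    """
    supported = [lvl for lvl in level_binaries if lvl <= device_max_level]
    chosen = max(supported) if supported else 0
    return chosen, level_binaries[chosen]
```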
- The method of claim 8, further comprising:
  at the first computing device:
  performing optimizations for each VISA binary in the serialized VISA binary.
- The method of claim 9, wherein the optimizations for each VISA binary in the serialized VISA binary are hardware-agnostic optimizations that are independent of the processing unit of the second computing device and the ISA of the processing unit of the second computing device.
- The method of claim 10, further comprising:
  at the second computing device:
  selecting a VISA binary from the serialized VISA binary and generating a native binary based on the selected VISA binary and an ISA of a processing unit of the second computing device for execution of the application; and
  performing optimizations on the selected VISA binary.
- The method of claim 11, wherein the optimizations on the selected VISA binary are hardware-aware optimizations that are dependent on the processing unit of the second computing device and the ISA of the processing unit of the second computing device.
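Claims 10 and 12 split the optimization work into hardware-agnostic passes on the first (build) device and hardware-aware passes on the second (target) device. A toy sketch of that split (pass and ISA names are hypothetical; the claims do not enumerate specific optimizations):

```python
# Hypothetical pass names for illustration only.
HARDWARE_AGNOSTIC_PASSES = ["constant-folding", "dead-code-elimination", "inlining"]
HARDWARE_AWARE_PASSES = {"avx512": ["vectorize-512"], "neon": ["vectorize-neon"]}

def build_time_passes() -> list:
    """Passes the first device may run: independent of any target ISA."""
    return list(HARDWARE_AGNOSTIC_PASSES)

def install_time_passes(target_isa: str) -> list:
    """Passes the second device runs: dependent on its own ISA."""
    return HARDWARE_AWARE_PASSES.get(target_isa, [])
```

The point of the split is that ISA-independent work is done once at build time, while ISA-dependent tuning is deferred until the actual processing unit is known.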
- The method of claim 11, further comprising:
  at the second computing device:
  collecting data about runtime behavior of the application;
  collecting data about hot spots of the application; and
  reoptimizing the selected VISA binary based on the runtime behavior of the application and hot spots of the application.
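The hot-spot collection in claim 13 can be sketched as sample-based profiling (the sampling representation and threshold policy are illustrative assumptions, not part of the claims): functions that dominate the samples are candidates for reoptimization at a higher optimization level.

```python
from collections import Counter

def find_hot_spots(samples: list, threshold: float = 0.1) -> list:
    """Identify hot functions from sampled runtime data.

    `samples` is a list of function names observed by a periodic sampler.
    A function is a hot spot when it accounts for at least `threshold` of
    all samples; the reoptimizer can then recompile just those functions.
    """
    counts = Counter(samples)
    total = len(samples)
    return [fn for fn, n in counts.most_common() if n / total >= threshold]
```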
- A method for deployment of an application executable by a plurality of processor architectures, each processor architecture in the plurality of processor architectures having an associated instruction set architecture (ISA), the method comprising:
  at a first computing device:
  generating from application source code a core virtual instruction set architecture (VISA) binary for a general-purpose central processing unit (CPU) architecture, wherein the core VISA is a highest-level VISA in a VISA hierarchy comprising the highest-level VISA and one or more lower-level VISAs within the CPU domain;
  generating from the application source code a lower-level VISA binary for each lower-level VISA in the VISA hierarchy;
  generating a serialized VISA binary based on the highest-level VISA binary and each lower-level VISA binary; and
  sending the serialized VISA binary comprising the highest-level VISA binary and each lower-level VISA binary to a second computing device for execution thereon.
- The method of claim 14, further comprising:
  at the first computing device:
  performing optimizations for each VISA binary in the serialized VISA binary.
- The method of claim 15, wherein the optimizations for each VISA binary in the serialized VISA binary are hardware-agnostic optimizations that are independent of the processing unit of the second computing device and the ISA of the processing unit of the second computing device.
- The method of claim 15, further comprising:
  at the second computing device:
  selecting a VISA binary from the serialized VISA binary and generating a native binary based on the selected VISA binary and an ISA of a processing unit of the second computing device for execution of the application; and
  performing optimizations on the selected VISA binary.
- The method of claim 17, wherein the optimizations on the selected VISA binary are hardware-aware optimizations that are dependent on the processing unit of the second computing device and the ISA of the processing unit of the second computing device.
- The method of claim 15, further comprising:
  at the second computing device:
  collecting data about runtime behavior of the application;
  collecting data about hot spots of the application; and
  reoptimizing the selected VISA binary based on the runtime behavior of the application and hot spots of the application.
- A computing device, comprising:
  a processor configured to:
  generate from application source code a core virtual instruction set architecture (VISA) binary for a general-purpose central processing unit (CPU) architecture;
  generate from the application source code a plurality of extension VISA binaries, each extension VISA binary in the plurality of extension VISA binaries being configured for a domain-specific processing unit architecture in a plurality of domain-specific processing unit architectures;
  generate a serialized VISA binary based on the core VISA binary and each extension VISA binary; and
  send the serialized VISA binary comprising the core VISA binary and the plurality of extension VISA binaries to a second computing device for execution thereon.
- A computing device, comprising:
  a processor configured to:
  generate from application source code a core virtual instruction set architecture (VISA) binary for a general-purpose central processing unit (CPU) architecture, wherein the core VISA is a highest-level VISA in a VISA hierarchy comprising the highest-level VISA and one or more lower-level VISAs within the CPU domain;
  generate from the application source code a lower-level VISA binary for each lower-level VISA in the VISA hierarchy;
  generate a serialized VISA binary based on the highest-level VISA binary and each lower-level VISA binary; and
  send the serialized VISA binary comprising the highest-level VISA binary and each lower-level VISA binary to a second computing device for execution thereon.
- A non-transitory machine-readable medium having tangibly stored thereon executable instructions for execution by a processor of a computing device, wherein the executable instructions, in response to execution by the processor, cause the computing device to:
  generate from application source code a core virtual instruction set architecture (VISA) binary for a general-purpose central processing unit (CPU) architecture;
  generate from the application source code a plurality of extension VISA binaries, each extension VISA binary in the plurality of extension VISA binaries being configured for a domain-specific processing unit architecture in a plurality of domain-specific processing unit architectures;
  generate a serialized VISA binary based on the core VISA binary and each extension VISA binary; and
  send the serialized VISA binary comprising the core VISA binary and the plurality of extension VISA binaries to a second computing device for execution thereon.
- A non-transitory machine-readable medium having tangibly stored thereon executable instructions for execution by a processor of a computing device, wherein the executable instructions, in response to execution by the processor, cause the computing device to:
  generate from application source code a core virtual instruction set architecture (VISA) binary for a general-purpose central processing unit (CPU) architecture, wherein the core VISA is a highest-level VISA in a VISA hierarchy comprising the highest-level VISA and one or more lower-level VISAs within the CPU domain;
  generate from the application source code a lower-level VISA binary for each lower-level VISA in the VISA hierarchy;
  generate a serialized VISA binary based on the highest-level VISA binary and each lower-level VISA binary; and
  send the serialized VISA binary comprising the highest-level VISA binary and each lower-level VISA binary to a second computing device for execution thereon.
Priority Applications (1)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| PCT/CN2021/083059 WO2022198586A1 (en) | 2021-03-25 | 2021-03-25 | Method of providing application executable by a plurality of heterogeneous processor architectures and related devices |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2022198586A1 | 2022-09-29 |
Family
ID=83396255
Country Status (1)

| Country | Link |
|---|---|
| WO (1) | WO2022198586A1 (en) |
Citations (3)

| Publication number | Priority date | Publication date | Assignee | Title |
|---|---|---|---|---|
| US20070266380A1 * | 2006-05-09 | 2007-11-15 | International Business Machines Corporation | Extensible markup language (XML) performance optimization on a multi-core central processing unit (CPU) through core assignment |
| US20130332349A1 * | 2012-06-07 | 2013-12-12 | Bank of America | ATM for use with cash bill payment |
| US9740464B2 * | 2014-05-30 | 2017-08-22 | Apple Inc. | Unified intermediate representation |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 21932211; Country of ref document: EP; Kind code of ref document: A1 |