WO2024055262A1 - Programming statements in embedded domain specific language


Info

Publication number
WO2024055262A1
Authority
WIPO (PCT)
Application number
PCT/CN2022/119150
Inventors
Zhibin Li
Liyang Ling
Xinghong Chen
Original Assignee
Intel Corporation

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 Arrangements for software engineering
    • G06F8/40 Transformation of program code
    • G06F8/41 Compilation
    • G06F8/51 Source to source

Definitions

  • MLIR: Multi-Level Intermediate Representation
  • eDSL: embedded domain specific language
  • C++: a mature host programming language
  • Fig. 1a shows a block diagram of an example of an apparatus or device for compiling code comprising programming statements in an embedded domain specific language, and of a computer system comprising such an apparatus or device;
  • Fig. 1b shows a flow chart of an example of a method for compiling code comprising programming statements in an embedded domain specific language;
  • Fig. 2 illustrates a workflow structure of an eDSL to MLIR compiler flow
  • Fig. 3a shows a code piece comprising eDSL programming statements and host language programming statements
  • Fig. 3b shows a code piece comprising transformed programming statements in an intermediate representation;
  • Fig. 4a shows an example of a BodyOp of a loop in the eDSL
  • Fig. 4b shows an example of a BodyOp of a loop in the IR
  • Figs. 5a and 5b show an example of a bare BodyOp in the eDSL and of the resulting IR
  • Figs. 6a and 6b show an example of a BodyOp of a switch case statement in the eDSL and of the resulting IR;
  • Figs. 7a and 7b show an example of a BodyOp of an if statement in the eDSL and of the resulting IR;
  • Fig. 8 shows a schematic diagram of how the BodyOp operation is lowered through the levels of the MLIR.
  • the terms “operating” , “executing” , or “running” as they pertain to software or firmware in relation to a system, device, platform, or resource are used interchangeably and can refer to software or firmware stored in one or more computer-readable storage media accessible by the system, device, platform, or resource, even though the instructions contained in the software or firmware are not actively being executed by the system, device, platform, or resource.
  • Fig. 1a shows a block diagram of an example of an apparatus 10 or device 10 for compiling code comprising programming statements in an embedded domain specific language.
  • the apparatus 10 comprises circuitry that is configured to provide the functionality of the apparatus 10.
  • the apparatus 10 of Figs. 1a and 1b comprises interface circuitry 12, processing circuitry 14 and (optional) storage circuitry 16.
  • the processing circuitry 14 may be coupled with the interface circuitry 12 and with the storage circuitry 16.
  • the processing circuitry 14 may be configured to provide the functionality of the apparatus, in conjunction with the interface circuitry 12 (for exchanging information, e.g., with other components inside or outside a computer system 100 comprising the apparatus or device 10) and the storage circuitry 16 (for storing information, such as machine-readable instructions).
  • the device 10 may comprise means that is/are configured to provide the functionality of the device 10.
  • the components of the device 10 are defined as component means, which may correspond to, or be implemented by, the respective structural components of the apparatus 10.
  • the device 10 of Figs. 1a and 1b comprises means for processing 14, which may correspond to or be implemented by the processing circuitry 14, means for communicating 12, which may correspond to or be implemented by the interface circuitry 12, and (optional) means for storing information 16, which may correspond to or be implemented by the storage circuitry 16.
  • the functionality of the processing circuitry 14 or means for processing 14 may be implemented by the processing circuitry 14 or means for processing 14 executing machine-readable instructions.
  • any feature ascribed to the processing circuitry 14 or means for processing 14 may be defined by one or more instructions of a plurality of machine-readable instructions.
  • the apparatus 10 or device 10 may comprise the machine-readable instructions, e.g., within the storage circuitry 16 or means for storing information 16.
  • the processing circuitry 14 or means for processing 14 is configured to obtain code comprising a first set of programming statements in an embedded domain-specific programming language and a second set of programming statements in a second programming language (e.g., via the interface circuitry 12 or means for communicating 12, or from the storage circuitry 16 or means for storing information 16).
  • the first set of programming statements comprises one or more pre-defined programming statements encapsulating a block of programming statements.
  • the processing circuitry 14 or means for processing 14 may be configured to identify the one or more pre-defined programming statements within the first set of programming statements encapsulating a block of programming statements.
  • the processing circuitry 14 or means for processing 14 is configured to compile the first set of programming statements to generate a first set of transformed programming statements according to an intermediate representation, with the encapsulated programming statements being represented as a function call to a function comprising transformed programming statements corresponding to the encapsulated programming statements.
  • the processing circuitry 14 or means for processing 14 is configured to compile the second set of programming statements (e.g., to obtain a second set of transformed programming statements according to an intermediate representation).
  • the first set of programming statements is represented by the first set of transformed programming statements during the compilation of the code defined by the second set of programming statements.
  • Fig. 1a further shows an example of a computer system 100 comprising such an apparatus or device.
  • Fig. 1b shows a flow chart of an example of a corresponding method for compiling code comprising programming statements in an embedded domain specific language.
  • the method comprises obtaining 110 the code comprising the first set of programming statements in the embedded domain-specific programming language and the second set of programming statements in the second programming language.
  • the method may comprise identifying 120 the one or more pre-defined programming statements within the first set of programming statements encapsulating the block of programming statements.
  • the method comprises compiling 130 the first set of programming statements to generate the first set of transformed programming statements according to the intermediate representation.
  • the method comprises compiling 140 the second set of programming statements (e.g., to obtain the second set of transformed programming statements according to an intermediate representation).
  • the method may be performed by the computer system 100, e.g., by the apparatus 10 or device 10 of the computer system 100.
  • MLIR (Multi-Level Intermediate Representation) may be used to compile code that contains programming statements in different programming languages and/or dialects.
  • the code may comprise programming statements in a first programming language or dialect (of a programming language), programming statements in a second programming language or dialect, programming statements in a third dialect or programming language etc.
  • the compiler passes through the different levels, which are based on the different dialects, with the respective programming statements written in the dialect or programming language represented by a level being converted (i.e., translated) into an intermediate representation and being “lowered” (i.e., inserted as intermediate representation) into the next-lower level.
  • the programming statements in the eDSL language or dialect are converted into an IR and lowered into the code further comprising programming statements in the affine dialect (a dialect that is focused on complex data structures as used in machine-learning) and programming statements in the standard dialect (e.g., C++).
  • the programming statements in the different dialects or programming languages may be distinguished using namespaces, e.g., by including a level-specific prefix before the respective programming statements, such as EDSL::op (with op being the operation), AFFINE::op, STD::op etc., or edsl.op, affine.op and std.op.
  • the programming statements in the standard dialect may be included without a prefix, while the programming statements in the eDSL dialect are included with the EDSL:: namespace prefix.
  • the one or more pre-defined programming statements may be defined in a namespace of the embedded domain-specific programming language.
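  • As a minimal, hypothetical C++ sketch (the type and function names below are invented for illustration and are not taken from the patent), a namespace can serve as such a level-specific prefix, letting the compiler distinguish eDSL statements from plain host-language (standard dialect) code:

        // Minimal sketch, assuming a namespace is used to mark eDSL statements.
        namespace EDSL {
        struct Tensor { int value = 0; };                  // hypothetical eDSL type

        // Hypothetical eDSL operation; the EDSL:: qualifier acts as the
        // level-specific prefix mentioned above (EDSL::op).
        inline Tensor add_one(Tensor t) { t.value += 1; return t; }
        } // namespace EDSL

        int main() {
            EDSL::Tensor a;                 // eDSL type, identified by its namespace
            a = EDSL::add_one(a);           // eDSL operation
            int host_counter = a.value;     // plain host-language (standard dialect) code
            return host_counter;
        }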
  • MLIR is predominantly used in the context of machine learning and heterogeneous computing (i.e., computing on different computing devices, such as Central Processing Units (CPUs), Graphics Processing Units (GPUs) and other purpose-built accelerator cards).
  • both the first set of programming statements and the second set of programming statements may be based on the same common programming language (e.g., C++ or Python), with the first set of programming statements being based on a first dialect (i.e., the eDSL dialect) of the common programming language and the second set of programming statements being based on a second dialect (e.g., the standard dialect) of the common programming language.
  • different programming languages may be intermixed, i.e., the first and second programming language might not share a common programming language.
  • An eDSL is, as the name suggests, a domain-specific language that is embedded in a host language.
  • a domain-specific language is a programming language that is specific to a domain of applications, such as machine-learning, data analytics, scientific computing or simulation, in contrast to general-purpose programming languages, such as the C++ standard dialect, C#, F#, Python or Swift. If the domain-specific language is embedded, it relies on a host language. Typically, embedded domain-specific languages are implemented as libraries that are built on top of the respective host language, and which use major aspects of the host language while adding some additional language elements that are useful in the particular application domain being addressed by the domain-specific language.
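  • To illustrate the "library on top of the host language" idea, the following hedged C++ sketch shows a toy tensor type whose operators are overloaded so that eDSL-style expressions can be written directly in host code; the names (edsl_demo, Tensor) are invented for illustration and do not reflect the patent's actual eDSL:

        #include <vector>

        namespace edsl_demo {

        struct Tensor {
            std::vector<float> data;

            // Operator overloading is a typical embedding mechanism: the host
            // compiler parses the expression, the library defines its meaning.
            Tensor operator+(float scalar) const {
                Tensor result = *this;
                for (float &v : result.data) v += scalar;
                return result;
            }
        };

        } // namespace edsl_demo

        int main() {
            edsl_demo::Tensor A{{1.0f, 2.0f, 3.0f}};
            edsl_demo::Tensor B = A + 1.0f;  // eDSL-style statement inside host code
            return static_cast<int>(B.data[0]);
        }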
  • the embedded domain specific language may correspond to a library or framework that may be based on the second programming language (with the second programming language being the host (or common) language of the first programming language) .
  • the code comprises both the first set of programming statements (in the eDSL) and the second set of programming statements (in the second programming language, which may usually be the host language of the eDSL).
  • these programming statements are intermixed, i.e., the code may comprise at least one file comprising both programming statements of the first set and of the second set of programming statements.
  • the first set of programming statements and the second set of programming statements may be intermixed within the code.
  • the use of an eDSL can lead to additional overhead during the multi-level compilation of the code comprising both the eDSL programming statements and the host programming language statements.
  • groups of programming statements that are encapsulated in a block can lead to additional overhead during the multi-level compilation.
  • loops may be unrolled during the MLIR-based compilation, leading to vastly increased overhead if the compiler includes the eDSL-based instructions within the loop numerous times after the loop is unrolled.
  • Such scenarios can bloat the compiled computer program or lead to errors or crashes during compilation.
  • the proposed concept seeks to avoid such scenarios by defining programming statements that define a block of code to be treated as a contiguous block during compilation.
  • in the following, these programming statements are also denoted “BodyOp”.
  • in more general terms, the term “one or more pre-defined programming statements” is used.
  • the one or more pre-defined programming statements encapsulate a block of programming statements within the code, with the block of programming statements being the “Body” referred to by the “BodyOp” statements.
  • the encapsulated programming statements may (all) be programming statements of the first set of programming statements, i.e., eDSL programming statements.
  • while the programming statements of the encapsulated block of programming statements are treated individually (e.g., when lowering their intermediate representation into the next-lower level of, e.g., the second set of programming statements), they are kept grouped (with respect to the control flow being modeled during the compilation) as they are lowered during the MLIR-based compilation. This is done by transforming the one or more pre-defined programming statements into a call to a function comprising the encapsulated programming statements during compilation of the first set of programming statements.
  • the processing circuitry is configured to compile the first set of programming statements such that the encapsulated programming statements are represented as a function call to a function comprising transformed programming statements corresponding to the encapsulated programming statements (according to the respective intermediate representation being used).
  • the programming statements of the first set of programming statements may all be transformed into the intermediate representation.
  • the programming statements that are encapsulated by the one or more pre-defined programming statements may be kept grouped as they are lowered through the levels, by moving them into a separate function (comprising transformed programming statements) that is called via a function call.
  • the proposed mechanism is based on transforming the one or more pre-defined programming statements into the function call for the function comprising the transformed programming statements corresponding to the encapsulated programming statements. Therefore, before or during compilation, the one or more pre-defined programming statements are identified by the processing circuitry (e.g., by parsing the code, and/or as part of the compilation of a specific construct, such as a lambda function).
  • the one or more pre-defined programming statements may comprise one or more pre-defined keywords or braces that are used to encapsulate the programming statements as a block.
  • these one or more pre-defined programming statements may be generic for a plurality of different types of encapsulated blocks of code, e.g., for two or more of a for loop block, a while loop block, a switch case block and an if block.
  • in Figs. 4a to 7a, examples are given of how such blocks can be defined.
  • the one or more pre-defined programming statements may be delimited from the programming statements being encapsulated by the one or more pre-defined programming statements.
  • in the example of Fig. 4a, the one or more pre-defined programming statements may include the “EDSL::BodyOp loopBody (” and “) };” parts and the following call to the loopBody lambda function (a lambda function is an anonymous function that is defined inline with other code), while the “return A+1” programming statement is being encapsulated by the loopBody lambda function (with the argument “[](Tensor A)” being necessary to define the function call, so that the argument can be considered to be encapsulated by the loopBody lambda function).
  • the one or more pre-defined instructions may comprise one or more instructions for defining a lambda function.
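  • The following C++ sketch shows one possible shape of such pre-defined programming statements; since Fig. 4a itself is not reproduced here, the class layout and signatures are assumptions for illustration only:

        #include <functional>
        #include <utility>

        namespace EDSL {

        struct Tensor {
            float value = 0.0f;
            Tensor operator+(float s) const { return Tensor{value + s}; }
        };

        // Hypothetical BodyOp wrapper: the pre-defined statement that records
        // which programming statements are encapsulated (as a lambda).
        class BodyOp {
        public:
            explicit BodyOp(std::function<Tensor(Tensor)> body)
                : body_(std::move(body)) {}
            Tensor operator()(Tensor A) const { return body_(A); }
        private:
            std::function<Tensor(Tensor)> body_;
        };

        } // namespace EDSL

        int main() {
            // The lambda argument list "[](Tensor A)" and its "return A + 1" body
            // correspond to the encapsulated block discussed above.
            EDSL::BodyOp loopBody([](EDSL::Tensor A) { return A + 1.0f; });

            EDSL::Tensor t;
            t = loopBody(t);  // the following call to the loopBody lambda function
            return static_cast<int>(t.value);
        }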
  • the one or more pre-defined programming statements may be generic for a plurality of different types of encapsulated blocks of code. While the function receives different names in the examples of Figs. 4a to 7a (loopBody in Fig. 4a, bodyFunc in Fig. 5a, bodyFuncA to bodyFuncC in Fig. 6a and ifBody in Fig. 7a), the remainder of the one or more pre-defined programming statements may be the same (i.e., generic) in each case.
  • the one or more pre-defined programming statements may comprise at least one first programming statement defining the programming statements being encapsulated (i.e., the part that stays the same) and at least one second programming statement defining an operation to be performed with the programming statements being encapsulated (i.e., the part that is different depending on the type of block being defined).
  • the at least one first programming statement defines which programming statements are encapsulated and is therefore transformed into a function call and a corresponding function comprising the transformed programming statements corresponding to the encapsulated programming statements.
  • the at least one second programming statement is translated into corresponding programming statements that define the processing to be applied to the block of code in the second programming language, thus lowering the operation to be performed into the second programming language.
  • the processing circuitry may be configured to translate the at least one second programming statement into at least one corresponding programming statement in the second programming language, and to compile the at least one corresponding programming statement in the second programming language together with the second set of programming statements.
  • the method may comprise translating 135 the at least one second programming statement into at least one corresponding programming statement in the second programming language and compiling 140 the at least one corresponding programming statement in the second programming language together with the second set of programming statements.
  • a second programming statement defining loop (or switch case, or if) properties of the block of encapsulated programming statements may be translated into a corresponding programming statement (e.g., a programming statement defining a loop, a programming statement defining a switch case construct or a programming statement defining an if clause) in the second programming language, for the compilation of the second set of programming statements.
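  • A hedged C++ sketch of this split (all names are invented; a real eDSL would record the operation for the compiler rather than execute it directly): the lambda plays the role of the at least one first programming statement defining the encapsulated block, while the hypothetical for_range call carries the loop properties that would be translated into a plain host-language loop:

        #include <functional>

        namespace sketch {

        struct Tensor { float value = 0.0f; };

        using Body = std::function<Tensor(Tensor)>;

        // Hypothetical "second programming statement": loop properties plus the
        // block to which they apply.
        Tensor for_range(int begin, int end, const Body &body, Tensor A) {
            for (int i = begin; i < end; ++i) {  // corresponding host-language loop
                A = body(A);
            }
            return A;
        }

        } // namespace sketch

        int main() {
            sketch::Body loopBody = [](sketch::Tensor A) {
                A.value += 1.0f;  // stands in for the encapsulated eDSL statements
                return A;
            };
            sketch::Tensor t = sketch::for_range(0, 8, loopBody, sketch::Tensor{});
            return static_cast<int>(t.value);
        }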
  • compilation is performed iteratively, with the compilation of the second set of programming statements following the compilation of the first set of programming statements (however, some interaction between the compilation passes may occur as the compiler attempts to perform various optimizations).
  • the result of the compilation of the first set of programming statements is lowered into the code for the compilation of the second set of programming statements, with the first set of programming statements being represented by the first set of transformed programming statements during the compilation of the code defined by the second set of programming statements (and, optionally, the at least one corresponding programming statement in the second programming language that is translated from the at least one second programming statement).
  • the processing circuitry is configured to compile the second set of programming statements, together with the elements (e.g., IR and the corresponding programming statement) that are being lowered down from a higher level, to obtain the second set of transformed programming statements according to an intermediate representation.
  • the compilation may then continue with the next-lower level (e.g., the SCF (Structured Control Flow) level as shown in Fig. 2).
  • the processing circuitry may be configured to continue compiling the code using the first set of transformed programming statements and the second set of transformed programming statements. Accordingly, as further shown in Fig. 1b, the method may comprise continuing compiling 150 the code using the first set of transformed programming statements and the second set of transformed programming statements.
  • the interface circuitry 12 or means for communicating 12 may correspond to one or more inputs and/or outputs for receiving and/or transmitting information, which may be in digital (bit) values according to a specified code, within a module, between modules or between modules of different entities.
  • the interface circuitry 12 or means for communicating 12 may comprise circuitry configured to receive and/or transmit information.
  • the processing circuitry 14 or means for processing 14 may be implemented using one or more processing units, one or more processing devices, any means for processing, such as a processor, a computer or a programmable hardware component being operable with accordingly adapted software.
  • the described function of the processing circuitry 14 or means for processing may as well be implemented in software, which is then executed on one or more programmable hardware components.
  • Such hardware components may comprise a general-purpose processor, a Digital Signal Processor (DSP), a micro-controller, etc.
  • the storage circuitry 16 or means for storing information 16 may comprise at least one element of the group of a computer readable storage medium, such as a magnetic or optical storage medium, e.g., a hard disk drive, a flash memory, Floppy-Disk, Random Access Memory (RAM) , Programmable Read Only Memory (PROM) , Erasable Programmable Read Only Memory (EPROM) , an Electronically Erasable Programmable Read Only Memory (EEPROM) , or a network storage.
  • the computer system 100 may be a workstation computer system (e.g., a workstation computer system being used for scientific computation) or a server computer system, i.e., a computer system being used to serve functionality, such as the computer program, to one or more client computers.
  • more details and aspects of the apparatus, device, method, and computer program for compiling code comprising programming statements in an embedded domain specific language and of the computer system are mentioned in connection with the proposed concept or one or more examples described above or below (e.g., Figs. 2 to 8).
  • the apparatus, device, method, and computer program for compiling code comprising programming statements in an embedded domain specific language and of the computer system may comprise one or more additional optional features corresponding to one or more aspects of the proposed concept, or one or more examples described above or below.
  • Various examples of the present disclosure relate to a concept for improving a (C++) embedded domain-specific language (eDSL) control flow.
  • Fig. 2 illustrates a workflow structure of an eDSL to MLIR compiler flow.
  • the eDSL language 210 is shown, which, in this example, comprises C++ Tensor eDSL, Python Tensor eDSL, devito (an affine dialect) and Psyclone (another affine dialect) .
  • the MLIR flow 220 is shown, where the code in the eDSL dialect (e.g., C++ Tensor eDSL and/or Python Tensor eDSL) is translated, by the Low-Level Virtual Machine (LLVM) compiler, and the resulting IR of the eDSL dialect is lowered (i.e., incorporated) into the code in the affine dialect (e.g., devito and/or psyclone) and again translated by the LLVM compiler, with the resulting IR being lowered into the code in the standard dialect and again translated by the LLVM compiler, and with the resulting IR being lowered into the code in the SCF dialect (structured control flow dialect) and again translated and compiled by the LLVM compiler.
  • the operations specified in the respective dialects are all combined by the LLVM compiler (by lowering the respective intermediate representations into the next-lower level in the multi-level intermediate representation) and compiled for the respective target hardware 230.
  • In the frontend, eDSL is embedded in programming languages. As shown in Fig. 2, an eDSL dialect can be used as entry into the MLIR framework, with the eDSL dialect getting lowered to the MLIR built-in dialects and custom dialects. Finally, the eDSL, together with its host language, is translated to LLVM and run on different devices (hardware 230 in Fig. 2, with the hardware potentially comprising different computing devices such as a CPU (Central Processing Unit), GPU (Graphics Processing Unit) or VPU (Video Processing Unit or Vector Processing Unit)).
  • eDSL flow control might not be as flexible as some common programming languages, especially when adopted together with its host language, such as C++. As it is hard to override control flow operations of the host language, like for, while in C++, programmers tend to mix the host and eDSL language, which may be detrimental with respect to flow control.
  • a mixed code block that contains an eDSL control flow is listed in Fig. 3a.
  • Fig. 3a shows a code piece comprising eDSL programming statements and host language programming statements.
  • the C++ loop control statement for and its loop body are used.
  • there is a C++ eDSL statement A + 1; in which all operators are overridden (as defined by the eDSL) .
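  • The following toy C++ sketch (not the code of Fig. 3a, which is not reproduced here) illustrates why this pattern unrolls: the overloaded operator records an IR operation every time it is evaluated, so a host for loop records the eDSL body once per iteration:

        #include <cstdio>
        #include <string>
        #include <vector>

        namespace toy {

        std::vector<std::string> ir;  // "IR" recorded by the toy eDSL

        struct Tensor {
            std::string name = "A";
            Tensor operator+(int) const {
                ir.push_back("edsl.add(" + name + ", 1)");  // one op per evaluation
                return *this;
            }
        };

        }  // namespace toy

        int main() {
            toy::Tensor A;
            for (int i = 0; i < 10; ++i) {  // host-language loop, parsed by C++
                A = A + 1;                  // eDSL statement with overridden operator
            }
            // The recorded IR now contains ten copies of the same operation.
            std::printf("recorded ops: %zu\n", toy::ir.size());
            return 0;
        }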
  • the resulting IR after the first lowering passes is shown in Fig. 3b.
  • Fig. 3b shows a code piece comprising transformed programming statements in an intermediate representation.
  • the C++ for loop with eDSL loop content is (directly) fully unrolled. If the loop body is complex, each pass incurs a large effort due to the duplicated IR. This may result in a significant degradation of compile time performance.
  • the mixed code comprising eDSL and host language is usually parsed by the host language parser. While this works reasonably well if a piece of code is encapsulated in a control flow block, it usually does not if such code is part of a loop body. In general, the loop will be fully unrolled, and the loop body IR (generated based on the eDSL programming statements) will be repeated as many times as specified by the loop. This behavior significantly increases the compilation overhead, such that the compiler may crash during the compilation stage.
  • developers may avoid mixing host code (C++) and eDSL code together in a loop body. Instead, a group of operations used in a loop can be wrapped and moved to a separate operation (e.g., via a function call). This group of operations can then be regarded as an individual programming statement in parsing and high-level passes. When it is passed to a suitable stage, those looped operations can be converted into an intrinsic loop function or be decomposed into other operations.
  • this workaround requires eDSL developers to enumerate all commonly used loop scenarios and build operations for them one by one, which is effort-consuming and not generically applicable. An eDSL defined in this way might not be considered user-friendly and may be too domain-specific to be used.
  • the BodyOp (body operation) modifies the behavior of the eDSL control flow operations.
  • Control flow content is wrapped by the BodyOp to maintain a simplified high-level IR.
  • when lowering down to low-level IR, the BodyOp may be transformed to a function call inside MLIR loop statements or customized loop statements, so that no redundant IR might degrade the compilation performance.
  • the BodyOp does not refer to a specific operation to be added in a certain eDSL, but to a methodology to give MLIR-based eDSL an optimized control flow.
  • the eDSL allows developers to write statements with C++ or eDSL actions without too much regard to compilation performance. By avoiding redundant IR, the BodyOp concept may avoid extra compilation overhead.
  • the improved control flow can make eDSL easier to extend and maintain. Developers can design more complex nested conditional statements.
  • the proposed concept introduces a new operation, BodyOp, which is adapted to the MLIR flow, to manage control flow blocks.
  • eDSL users can use a block from the host language to define the operations being placed in the BodyOp.
  • in C++, a lambda function can be used.
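  • A hedged sketch of the library side (an assumed implementation, not the patent's): the BodyOp stores the lambda instead of evaluating it while the IR is built, so the block is recorded once as a single abstract operation rather than being expanded by the host parser into every loop iteration:

        #include <functional>
        #include <string>
        #include <utility>
        #include <vector>

        namespace toy {

        struct Tensor { std::string name = "A"; };

        struct Op { std::string text; };
        std::vector<Op> ir;  // toy IR under construction

        class BodyOp {
        public:
            explicit BodyOp(std::function<Tensor(Tensor)> body)
                : body_(std::move(body)) {
                // Record a single abstract operation that references the block.
                ir.push_back({"edsl.body_op @loopBody"});
            }
            const std::function<Tensor(Tensor)> &body() const { return body_; }
        private:
            std::function<Tensor(Tensor)> body_;
        };

        }  // namespace toy

        int main() {
            // The lambda defines the block; it is captured, not executed here, so
            // only one "edsl.body_op" entry exists regardless of how the loop is
            // later lowered.
            toy::BodyOp loopBody([](toy::Tensor A) { return A; });
            return static_cast<int>(toy::ir.size());  // 1
        }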
  • the code in Fig. 3a can be rewritten as shown in Fig. 4a after employing the BodyOp operation in the eDSL.
  • Fig. 4a shows an example of a BodyOp of a loop in the eDSL.
  • the code to be looped is nested inside the EDSL::BodyOp loopBody (... {...}) statement.
  • this code is just one possible form of a C++ eDSL, given as an example. Developers can design similar structures following the proposed concept, adapting the concept to the respective host language.
  • the BodyOp may be designed to contain any possible block on top of MLIR.
  • the BodyOp might thus not be just another code block that is recognized by the host language but not by the eDSL. In other words, this block will not even be unrolled, because it is no longer parsed by the host language.
  • the block may be treated as a whole and is opaque to the rest of the code. It might only take parameters ([](Tensor A) in the example given in Fig. 4a) and return the required results (A+1 in the example of Fig. 4a).
  • the host and eDSL language were mixed, which may cause problems when determining the control flow.
  • the loop was unrolled, causing every statement in the loop body to be repeated many times during IR lowering. This issue might not occur with the proposed BodyOp concept.
  • the BodyOp may be transformed into a function call that is nested in the corresponding operation (loop operation in Figs. 4a and 4b), as shown in the code shown in Fig. 4b.
  • Fig. 4b shows an example of a BodyOp of a loop in the IR.
  • the code shown in Fig. 4b is a new version of the code shown in Fig. 3b, this time using the BodyOp.
  • the compilation flow will mix/match different IR representations in a common loop. If the different IR are included into one abstract operation (BodyOp), they may be translated into a function call during the MLIR lowering pass. This can avoid duplication of loop content and make IR passing more effective.
  • Figs. 5a and 5b show an example of a bare (i.e., multi-purpose) BodyOp in the eDSL (Fig. 5a) and of the resulting IR (Fig. 5b) .
  • Figs. 6a and 6b show an example of a BodyOp of a switch case statement in the eDSL (Fig. 6a) and of the resulting IR (Fig. 6b) .
  • Figs. 7a and 7b show an example of a BodyOp of an if statement in the eDSL (Fig. 7a) and of the resulting IR (Fig. 7b) .
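  • The same generic wrapper idea carries over to conditional blocks; the hedged C++ sketch below (the actual code of Figs. 6a/7a is not reproduced, so the names and the if_then helper are assumptions) shows a BodyOp-style block attached to a hypothetical eDSL if operation:

        #include <functional>

        namespace sketch {

        struct Tensor { float value = 0.0f; };

        using Body = std::function<Tensor(Tensor)>;

        // Hypothetical eDSL "if" operation: the condition is the block-type-specific
        // part, while the BodyOp-style block is the generic part.
        Tensor if_then(bool condition, const Body &ifBody, Tensor A) {
            return condition ? ifBody(A) : A;
        }

        }  // namespace sketch

        int main() {
            sketch::Body ifBody = [](sketch::Tensor A) {
                A.value += 1.0f;  // encapsulated eDSL statements
                return A;
            };
            sketch::Tensor t = sketch::if_then(true, ifBody, sketch::Tensor{});
            return static_cast<int>(t.value);
        }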
  • Fig. 8 shows a schematic diagram of how the BodyOp operation is lowered through the levels of the MLIR.
  • Fig. 8 shows the C++ tensor eDSL 810.
  • Two versions of the desired loop 812 are shown: the version 814 shown in Fig. 3a, and the BodyOp version 816 shown in Fig. 4a.
  • the MLIR 820 is shown.
  • the BodyOp version 816 is translated in the EDSL dialect to a loop body operation 824, which comprises a function call 826 to the block of code comprised in the BodyOp.
  • the statements defined in the eDSL loop body are extracted into a separate function operation, carrying the original parameters, iteration counter and loop variables. At the original location of the loop op, it is replaced by an MLIR component loop operation, based on the stage of the lowering pipeline. This function call is then used by the LLVM compiler 822 when compiling the code.
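  • A rough C++ rendering of this lowering result (assumed for illustration; the actual MLIR of Fig. 4b is not reproduced here): the encapsulated statements end up in a separate function that carries the original parameter and the iteration counter, while the original location only holds the loop and the call:

        #include <array>

        namespace lowered {

        using Tensor = std::array<float, 4>;

        // Extracted body function: original parameter plus iteration counter.
        Tensor loop_body_fn(Tensor A, int iv) {
            (void)iv;                      // loop variable available if needed
            for (float &v : A) v += 1.0f;  // transformed encapsulated statements
            return A;
        }

        Tensor lowered_loop(Tensor A, int trip_count) {
            for (int iv = 0; iv < trip_count; ++iv) {  // component loop operation
                A = loop_body_fn(A, iv);               // the function call
            }
            return A;
        }

        }  // namespace lowered

        int main() {
            lowered::Tensor t{};
            t = lowered::lowered_loop(t, 10);
            return static_cast<int>(t[0]);
        }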
  • as the BodyOp separates the control flow blocks from their upper-level controlling code, these two parts of the code are lowered separately.
  • the loop operation with the BodyOp may be transformed to a forOp or parallelOp of the Affine Dialect.
  • the loop with the BodyOp may be transformed to a for operation of the SCF Dialect, the OMP (OpenMP) Dialect or customized Dialects.
  • the sub-block in the BodyOp (containing the eDSL statements) also gets lowered by the same pipeline, and so does its nested control flow.
  • the proposed concept provides a methodology to improve the MLIR-based eDSL control flow.
  • the developer can import the BodyOp into their eDSL to separate eDSL code and its control flow blocks from parsing, so that redundant IR generation can be avoided, and IR compilation can gain better performance.
  • the proposed operation may be illustrated in the documentation of the respective eDSL, including its usage, its Op form in the dialect and the conversion passes.
  • the BodyOp concept may comprise one or more additional optional features corresponding to one or more aspects of the proposed concept, or one or more examples described above or below.
  • An example (e.g., example 1) relates to an apparatus (10) comprising interface circuitry (12) , machine-readable instructions and processing circuitry (14) to execute the machine-readable instructions to obtain code comprising a first set of programming statements in an embedded domain-specific programming language and a second set of programming statements in a second programming language, the first set of programming statements comprising one or more pre-defined programming statements encapsulating a block of programming statements.
  • the machine-readable instructions comprise instructions to compile the first set of programming statements to generate a first set of transformed programming statements according to an intermediate representation, with the encapsulated programming statements being represented as a function call to a function comprising transformed programming statements corresponding to the encapsulated programming statements.
  • the machine-readable instructions comprise instructions to compile the second set of programming statements, with the first set of programming statements being represented by the first set of transformed programming statements during the compilation of the second set of programming statements.
  • Another example relates to a previously described example (e.g., example 1) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are generic for a plurality of different types of encapsulated blocks of code.
  • Another example (e.g., example 3) relates to a previously described example (e.g., example 2) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are generic for two or more of a for loop block, a while loop block, a switch case block and an if block.
  • Another example (e.g., example 4) relates to a previously described example (e.g., one of the examples 1 to 3) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements comprise at least one first programming statement defining the programming statements being encapsulated and at least one second programming statement defining an operation to be performed with the programming statements being encapsulated.
  • Another example (e.g., example 5) relates to a previously described example (e.g., example 4) or to any of the examples described herein, further comprising that the machine-readable instructions comprise instructions to translate the at least one second programming statement into at least one corresponding programming statement in the second programming language, and to compile the at least one corresponding programming statement in the second programming language together with the second set of programming statements.
  • Another example (e.g., example 6) relates to a previously described example (e.g., one of the examples 1 to 5) or to any of the examples described herein, further comprising that the machine-readable instructions comprise instructions to continue compiling the code using the first set of transformed programming statements and the second set of transformed programming statements.
  • Another example (e.g., example 7) relates to a previously described example (e.g., one of the examples 1 to 6) or to any of the examples described herein, further comprising that the compilation of the first set of programming statements and of the second set of programming statements is part of a multi-layer intermediate representation-based compilation of the code.
  • Another example (e.g., example 8) relates to a previously described example (e.g., one of the examples 1 to 7) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are defined in a namespace of the embedded domain-specific programming language.
  • Another example (e.g., example 9) relates to a previously described example (e.g., one of the examples 1 to 8) or to any of the examples described herein, further comprising that the encapsulated programming statements are programming statements of the first set of programming statements.
  • Another example (e.g., example 10) relates to a previously described example (e.g., one of the examples 1 to 9) or to any of the examples described herein, further comprising that the first set of programming statements and the second set of programming statements are intermixed within the code.
  • Another example relates to a previously described example (e.g., one of the examples 1 to 10) or to any of the examples described herein, further comprising that the embedded domain specific language corresponds to a library or framework that is based on the second programming language.
  • An example (e.g., example 12) relates to a computer system (100) comprising the apparatus (10) according to one of the examples 1 to 11 (or according to any other example) .
  • An example (e.g., example 13) relates to an apparatus (10) comprising processing circuitry (14) configured to obtain code comprising a first set of programming statements in an embedded domain-specific programming language and a second set of programming statements in a second programming language, the first set of programming statements comprising one or more pre-defined programming statements encapsulating a block of programming statements.
  • the processing circuitry is configured to compile the first set of programming statements to generate a first set of transformed programming statements according to an intermediate representation, with the encapsulated programming statements being represented as a function call to a function comprising transformed programming statements corresponding to the encapsulated programming statements.
  • the processing circuitry is configured to compile the second set of programming statements, with the first set of programming statements being represented by the first set of transformed programming statements during the compilation of the second set of programming statements.
  • Another example relates to a previously described example (e.g., example 13) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are generic for a plurality of different types of encapsulated blocks of code.
  • Another example relates to a previously described example (e.g., example 14) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are generic for two or more of a for loop block, a while loop block, a switch case block and an if block.
  • Another example relates to a previously described example (e.g., one of the examples 13 to 15) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements comprise at least one first programming statement defining the programming statements being encapsulated and at least one second programming statement defining an operation to be performed with the programming statements being encapsulated.
  • Another example relates to a previously described example (e.g., example 16) or to any of the examples described herein, further comprising that the processing circuitry is configured to translate the at least one second programming statement into at least one corresponding programming statement in the second programming language, and to compile the at least one corresponding programming statement in the second programming language together with the second set of programming statements.
  • Another example relates to a previously described example (e.g., one of the examples 13 to 17) or to any of the examples described herein, further comprising that the processing circuitry is configured to continue compiling the code using the first set of transformed programming statements and the second set of transformed programming statements.
  • Another example relates to a previously described example (e.g., one of the examples 13 to 18) or to any of the examples described herein, further comprising that the compilation of the first set of programming statements and of the second set of programming statements is part of a multi-layer intermediate representation-based compilation of the code.
  • Another example relates to a previously described example (e.g., one of the examples 13 to 19) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are defined in a namespace of the embedded domain-specific programming language.
  • Another example (e.g., example 21) relates to a previously described example (e.g., one of the examples 13 to 20) or to any of the examples described herein, further comprising that the encapsulated programming statements are programming statements of the first set of programming statements.
  • Another example relates to a previously described example (e.g., one of the examples 13 to 21) or to any of the examples described herein, further comprising that the first set of programming statements and the second set of programming statements are intermixed within the code.
  • Another example relates to a previously described example (e.g., one of the examples 13 to 22) or to any of the examples described herein, further comprising that the embedded domain specific language corresponds to a library or framework that is based on the second programming language.
  • An example (e.g., example 24) relates to a computer system (100) comprising the apparatus (10) according to one of the examples 13 to 23 (or according to any other example) .
  • An example (e.g., example 25) relates to a device (10) comprising means for processing (14) configured to obtain code comprising a first set of programming statements in an embedded domain-specific programming language and a second set of programming statements in a second programming language, the first set of programming statements comprising one or more pre-defined programming statements encapsulating a block of programming statements.
  • the means for processing is configured to compile the first set of programming statements to generate a first set of transformed programming statements according to an intermediate representation, with the encapsulated programming statements being represented as a function call to a function comprising transformed programming statements corresponding to the encapsulated programming statements.
  • the means for processing is configured to compile the second set of programming statements, with the first set of programming statements being represented by the first set of transformed programming statements during the compilation of the second set of programming statements.
  • Another example relates to a previously described example (e.g., example 25) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are generic for a plurality of different types of encapsulated blocks of code.
  • Another example relates to a previously described example (e.g., example 26) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are generic for two or more of a for loop block, a while loop block, a switch case block and an if block.
  • Another example relates to a previously described example (e.g., one of the examples 25 to 27) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements comprise at least one first programming statement defining the programming statements being encapsulated and at least one second programming statement defining an operation to be performed with the programming statements being encapsulated.
  • Another example relates to a previously described example (e.g., example 28) or to any of the examples described herein, further comprising that the means for processing is configured to translate the at least one second programming statement into at least one corresponding programming statement in the second programming language, and to compile the at least one corresponding programming statement in the second programming language together with the second set of programming statements.
  • Another example relates to a previously described example (e.g., one of the examples 25 to 29) or to any of the examples described herein, further comprising that the means for processing is configured to continue compiling the code using the first set of transformed programming statements and the second set of transformed programming statements.
  • Another example relates to a previously described example (e.g., one of the examples 25 to 30) or to any of the examples described herein, further comprising that the compilation of the first set of programming statements and of the second set of programming statements is part of a multi-layer intermediate representation-based compilation of the code.
  • Another example relates to a previously described example (e.g., one of the examples 25 to 31) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are defined in a namespace of the embedded domain-specific programming language.
  • Another example (e.g., example 33) relates to a previously described example (e.g., one of the examples 25 to 32) or to any of the examples described herein, further comprising that the encapsulated programming statements are programming statements of the first set of programming statements.
  • Another example relates to a previously described example (e.g., one of the examples 25 to 33) or to any of the examples described herein, further comprising that the first set of programming statements and the second set of programming statements are intermixed within the code.
  • Another example (e.g., example 35) relates to a previously described example (e.g., one of the examples 25 to 34) or to any of the examples described herein, further comprising that the embedded domain specific language corresponds to a library or framework that is based on the second programming language.
  • An example (e.g., example 36) relates to a computer system (100) comprising the device (10) according to one of the examples 25 to 35 (or according to any other example) .
  • An example (e.g., example 37) relates to a method comprising obtaining (110) code comprising a first set of programming statements in an embedded domain-specific programming language and a second set of programming statements in a second programming language, the first set of programming statements comprising one or more pre-defined programming statements encapsulating a block of programming statements.
  • the method comprises compiling (130) the first set of programming statements to generate a first set of transformed programming statements according to an intermediate representation, with the encapsulated programming statements being represented as a function call to a function comprising transformed programming statements corresponding to the encapsulated programming statements.
  • the method comprises compiling (140) the second set of programming statements, with the first set of programming statements being represented by the first set of transformed programming statements during the compilation of the code defined by the second set of programming statements.
  • Another example relates to a previously described example (e.g., example 37) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are generic for a plurality of different types of encapsulated blocks of code.
  • Another example relates to a previously described example (e.g., example 38) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are generic for two or more of a for loop block, a while loop block, a switch case block and an if block.
  • Another example relates to a previously described example (e.g., one of the examples 37 to 39) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements comprise at least one first programming statement defining the programming statements being encapsulated and at least one second programming statement defining an operation to be performed with the programming statements being encapsulated.
  • Another example relates to a previously described example (e.g., example 40) or to any of the examples described herein, further comprising that the method comprises translating (135) the at least one second programming statement into at least one corresponding programming statement in the second programming language, and compiling (140) the at least one corresponding programming statement in the second programming language together with the second set of programming statements.
  • Another example relates to a previously described example (e.g., one of the examples 37 to 41) or to any of the examples described herein, further comprising that the method comprises continuing compiling (150) the code using the first set of transformed programming statements and the second set of transformed programming statements.
  • Another example relates to a previously described example (e.g., one of the examples 37 to 42) or to any of the examples described herein, further comprising that the compilation of the first set of programming statements and of the second set of programming statements is part of a multi-layer intermediate representation-based compilation of the code.
  • Another example relates to a previously described example (e.g., one of the examples 37 to 43) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are defined in a namespace of the embedded domain-specific programming language.
  • Another example (e.g., example 45) relates to a previously described example (e.g., one of the examples 37 to 44) or to any of the examples described herein, further comprising that the encapsulated programming statements are programming statements of the first set of programming statements.
  • Another example relates to a previously described example (e.g., one of the examples 37 to 45) or to any of the examples described herein, further comprising that the first set of programming statements and the second set of programming statements are inter-mixed within the code.
  • Another example relates to a previously described example (e.g., one of the examples 37 to 46) or to any of the examples described herein, further comprising that the embedded domain specific language corresponds to a library or framework that is based on the second programming language.
  • An example (e.g., example 48) relates to a computer system (100) configured to perform the method according to one of the examples 37 to 47 (or according to any other example).
  • An example (e.g., example 49) relates to a non-transitory machine-readable storage medium including program code, when executed, to cause a machine to perform the method of one of the examples 37 to 47.
  • An example (e.g., example 50) relates to a computer program having a program code for performing the method of one of the examples 37 to 47 when the computer program is executed on a computer, a processor, or a programmable hardware component.
  • An example (e.g., example 51) relates to a machine-readable storage including machine-readable instructions, when executed, to implement a method or realize an apparatus as claimed in any pending claim or shown in any example.
  • Examples may further be or relate to a (computer) program including a program code to execute one or more of the above methods when the program is executed on a computer, processor or other programmable hardware component.
  • steps, operations or processes of different ones of the methods described above may also be executed by programmed computers, processors or other programmable hardware components.
  • Examples may also cover program storage devices, such as digital data storage media, which are machine-, processor- or computer-readable and encode and/or contain machine-executable, processor-executable or computer-executable programs and instructions.
  • Program storage devices may include or be digital storage devices, magnetic storage media such as magnetic disks and magnetic tapes, hard disk drives, or optically readable digital data storage media, for example.
  • Other examples may also include computers, processors, control units, (field) programmable logic arrays ((F)PLAs), (field) programmable gate arrays ((F)PGAs), graphics processor units (GPUs), application-specific integrated circuits (ASICs), integrated circuits (ICs) or system-on-a-chip (SoC) systems programmed to execute the steps of the methods described above.
  • aspects described in relation to a device or system should also be understood as a description of the corresponding method.
  • a block, device or functional aspect of the device or system may correspond to a feature, such as a method step, of the corresponding method.
  • aspects described in relation to a method shall also be understood as a description of a corresponding block, a corresponding element, a property or a functional feature of a corresponding device or a corresponding system.
  • module refers to logic that may be implemented in a hardware component or device, software or firmware running on a processing unit, or a combination thereof, to perform one or more operations consistent with the present disclosure.
  • Software and firmware may be embodied as instructions and/or data stored on non-transitory computer-readable storage media.
  • The term “circuitry” can comprise, singly or in any combination, non-programmable (hardwired) circuitry, programmable circuitry such as processing units, state machine circuitry, and/or firmware that stores instructions executable by programmable circuitry.
  • Modules described herein may, collectively or individually, be embodied as circuitry that forms a part of a computing system. Thus, any of the modules can be implemented as circuitry.
  • A computing system referred to as being programmed to perform a method can be programmed to perform the method via software, hardware, firmware, or combinations thereof.
  • Any of the disclosed methods can be implemented as computer-executable instructions or a computer program product. Such instructions can cause a computing system or one or more processing units capable of executing computer-executable instructions to perform any of the disclosed methods.
  • The term “computer” refers to any computing system or device described or mentioned herein.
  • The term “computer-executable instruction” refers to instructions that can be executed by any computing system or device described or mentioned herein.
  • The computer-executable instructions can be part of, for example, an operating system of the computing system, an application stored locally to the computing system, or a remote application accessible to the computing system (e.g., via a web browser). Any of the methods described herein can be performed by computer-executable instructions performed by a single computing system or by one or more networked computing systems operating in a network environment. Computer-executable instructions and updates to the computer-executable instructions can be downloaded to a computing system from a remote server.
  • Implementation of the disclosed technologies is not limited to any specific computer language or program.
  • The disclosed technologies can be implemented by software written in C++, C#, Java, Perl, Python, JavaScript, Adobe Flash, assembly language, or any other programming language.
  • The disclosed technologies are not limited to any particular computer system or type of hardware.
  • Any of the software-based examples can be uploaded, downloaded, or remotely accessed through a suitable communication means.
  • Suitable communication means include, for example, the Internet, the World Wide Web, an intranet, cable (including fiber optic cable), magnetic communications, electromagnetic communications (including RF, microwave, ultrasonic, and infrared communications), electronic communications, or other such communication means.
  • The disclosed methods, apparatuses, and systems are not to be construed as limiting in any way. Instead, the present disclosure is directed toward all novel and nonobvious features and aspects of the various disclosed examples, alone and in various combinations and subcombinations with one another.
  • The disclosed methods, apparatuses, and systems are not limited to any specific aspect or feature or combination thereof, nor do the disclosed examples require that any one or more specific advantages be present or problems be solved.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Devices For Executing Special Programs (AREA)

Abstract

An apparatus for compiling code comprising programming statements in an embedded domain specific language is configured to obtain code comprising a first set of programming statements in an embedded domain-specific programming language and a second set of programming statements in a second programming language, the first set of programming statements comprising one or more pre-defined programming statements encapsulating a block of programming statements. The apparatus is configured to compile the first set of programming statements to generate a first set of transformed programming statements according to an intermediate representation, the encapsulated programming statements being represented as a function call to a function comprising transformed programming statements corresponding to the encapsulated programming statements. The apparatus is configured to compile the second set of programming statements, the first set of programming statements being represented by the first set of transformed programming statements during the compilation of the second set of programming statements.

Description

Apparatus, Device, Method, and Computer Program for Compiling Code Comprising Programming Statements in an Embedded Domain Specific Language
Background
MLIR (Multi-Level Intermediate Representation (IR) ) is a popular framework that is useful in accelerating the compiler development process. Compilers that utilize this framework can benefit from its pre-defined so-called Dialects as well as all related utilities (e.g., basic build-ing blocks and passes for translating one IR into another) .
An eDSL (embedded domain specific language) is a language defined to solve problems in a certain domain and is usually embedded in a mature host programming language, such as C++.
Brief description of the Figures
Some examples of apparatuses and/or methods will be described in the following by way of example only, and with reference to the accompanying figures, in which
Fig. 1a shows a block diagram of an example of an apparatus or device for compiling code comprising programming statements in an embedded domain specific language, and of a computer system comprising such an apparatus or device;
Fig. 1b shows a flow chart of an example of a method for compiling code comprising pro-gramming statements in an embedded domain specific language;
Fig. 2 illustrates a workflow structure of an eDSL to MLIR compiler flow;
Fig. 3a shows a code piece comprising eDSL programming statements and host language programming statements;
Fig. 3b shows a code piece comprising transformed programming statements in an intermedi-ate representation;
Fig. 4a shows an example of a BodyOp of a loop in the eDSL;
Fig. 4b shows an example of a BodyOp of a loop in the IR;
Figs. 5a and 5b show an example of a bare BodyOp in the eDSL and of the resulting IR;
Figs. 6a and 6b show an example of a BodyOp of a switch case statement in the eDSL and of the resulting IR;
Figs. 7a and 7b show an example of a BodyOp of an if statement in the eDSL and of the resulting IR;
Fig. 8 shows a schematic diagram of how the BodyOp operation is lowered through the lev-els of the MLIR.
Detailed Description
Some examples are now described in more detail with reference to the enclosed figures. However, other possible examples are not limited to the features of these embodiments described in detail. Other examples may include modifications of the features as well as equivalents and alternatives to the features. Furthermore, the terminology used herein to describe certain examples should not be restrictive of further possible examples.
Throughout the description of the figures same or similar reference numerals refer to same or similar elements and/or features, which may be identical or implemented in a modified form while providing the same or a similar function. The thickness of lines, layers and/or areas in the figures may also be exaggerated for clarification.
When two elements A and B are combined using an “or” , this is to be understood as disclosing all possible combinations, i.e., only A, only B as well as A and B, unless expressly defined otherwise in the individual case. As an alternative wording for the same combinations, "at least one of A and B" or "A and/or B" may be used. This applies equivalently to combinations of more than two elements.
If a singular form, such as “a” , “an” and “the” is used and the use of only a single element is not defined as mandatory either explicitly or implicitly, further examples may also use several elements to implement the same function. If a function is described below as implemented using multiple elements, further examples may implement the same function using a single element or a single processing entity. It is further understood that the terms "include" , "in-cluding" , "comprise" and/or "comprising" , when used, describe the presence of the specified features, integers, steps, operations, processes, elements, components and/or a group thereof, but do not exclude the presence or addition of one or more other features, integers, steps, operations, processes, elements, components and/or a group thereof.
In the following description, specific details are set forth, but examples of the technologies described herein may be practiced without these specific details. Well-known circuits, structures, and techniques have not been shown in detail to avoid obscuring an understanding of this description. “An example/example,” “various examples/examples,” “some examples/examples,” and the like may include features, structures, or characteristics, but not every example necessarily includes the particular features, structures, or characteristics.
Some examples may have some, all, or none of the features described for other examples. “First,” “second,” “third,” and the like describe a common element and indicate different instances of like elements being referred to. Such adjectives do not imply that the elements so described must be in a given sequence, either temporally or spatially, in ranking, or in any other manner. “Connected” may indicate elements are in direct physical or electrical contact with each other and “coupled” may indicate elements co-operate or interact with each other, but they may or may not be in direct physical or electrical contact.
As used herein, the terms “operating” , “executing” , or “running” as they pertain to software or firmware in relation to a system, device, platform, or resource are used interchangeably and can refer to software or firmware stored in one or more computer-readable storage media accessible by the system, device, platform, or resource, even though the instructions contained in the software or firmware are not actively being executed by the system, device, platform, or resource.
The description may use the phrases “in an example/example,” “in examples/examples,” “in some examples/examples,” and/or “in various examples/examples,” each of which may refer to one or more of the same or different examples. Furthermore, the terms “comprising,” “including,” “having,” and the like, as used with respect to examples of the present disclosure, are synonymous.
Fig. 1a shows a block diagram of an example of an apparatus 10 or device 10 for compiling code comprising programming statements in an embedded domain specific language. The apparatus 10 comprises circuitry that is configured to provide the functionality of the appa-ratus 10. For example, the apparatus 10 of Figs. 1a and 1b comprises interface circuitry 12, processing circuitry 14 and (optional) storage circuitry 16. For example, the processing cir-cuitry 14 may be coupled with the interface circuitry 12 and with the storage circuitry 16. For example, the processing circuitry 14 may be configured to provide the functionality of the apparatus, in conjunction with the interface circuitry 12 (for exchanging information, e.g., with other components inside or outside a computer system 100 comprising the apparatus or device 10) and the storage circuitry (for storing information, such as machine-readable in-structions) 16. Likewise, the device 10 may comprise means that is/are configured to provide the functionality of the device 10. The components of the device 10 are defined as component means, which may correspond to, or implemented by, the respective structural components of the apparatus 10. For example, the device 10 of Figs. 1a and 1b comprises means for pro-cessing 14, which may correspond to or be implemented by the processing circuitry 14, means for communicating 12, which may correspond to or be implemented by the interface circuitry 12, and (optional) means for storing information 16, which may correspond to or be imple-mented by the storage circuitry 16. In general, the functionality of the processing circuitry 14 or means for processing 14 may be implemented by the processing circuitry 14 or means for processing 14 executing machine-readable instructions. Accordingly, any feature ascribed to the processing circuitry 14 or means for processing 14 may be defined by one or more in-structions of a plurality of machine-readable instructions. The apparatus 10 or device 10 may comprise the machine-readable instructions, e.g., within the storage circuitry 16 or means for storing information 16.
The processing circuitry 14 or means for processing 14 is configured to obtain code compris-ing a first set of programming statements in an embedded domain-specific programming lan-guage and a second set of programming statements in a second programming language (e.g., via the interface circuitry 12 or means for communicating 12 or from the storage circuitry 16 or means for storing information 16) . The first set of programming statements comprises one  or more pre-defined programming statements encapsulating a block of programming state-ments. For example, the processing circuitry 14 or means for processing 14 may be configured to identify the one or more pre-defined programming statements within the first set of pro-gramming statements encapsulating a block of programming statements. The processing cir-cuitry 14 or means for processing 14 is configured to compile the first set of programming statements to generate a first set of transformed programming statements according to an in-termediate representation, with the encapsulated programming statements being represented as a function call to a function comprising transformed programming statements correspond-ing to the encapsulated programming statements. The processing circuitry 14 or means for processing 14 is configured to compile the second set of programming statements (e.g., to obtain a second set of transformed programming statements according to an intermediate rep-resentation) . The first set of programming statements are represented by the first set of trans-formed programming statements during the compilation of the code defined by the second set of programming statements.
Fig. 1a further shows an example of a computer system 100 comprising such an apparatus or device.
Fig. 1b shows a flow chart of an example of a corresponding method for compiling code comprising programming statements in an embedded domain specific language. The method comprises obtaining 110 the code comprising the first set of programming statements in the embedded domain-specific programming language and the second set of programming state-ments in the second programming language. For example, the method may comprise identi-fying 120 the one or more pre-defined programming statements within the first set of pro-gramming statements encapsulating the block of programming statements. The method com-prises compiling 130 the first set of programming statements to generate the first set of trans-formed programming statements according to the intermediate representation. The method comprises compiling 140 the second set of programming statements (e.g., to obtain the second set of transformed programming statements according to an intermediate representation) . For example, the method may be performed by the computer system 100, e.g., by the apparatus 10 or device 10 of the computer system 100.
In the following, the features of the apparatus 10, device 10, method and computer program will be introduced in more detail with respect to the apparatus 10. Features introduced in  connection with the apparatus 10 may likewise be included in the corresponding device 10, method and computer program.
Various examples of the proposed concept relate to the compilation of code in an embedded domain-specific language (eDSL), e.g., in the context of a Multi-Level Intermediate Representation-based compilation approach. For example, the compilation of the first set of programming statements and of the second set of programming statements may be part of an MLIR-based compilation of the code. As the name Multi-Level Intermediate Representation (MLIR) already indicates, an MLIR-based compilation combines multiple levels of code, with the programming statements within the different levels of code being transformed into Intermediate Representations of these programming statements, and with the compiler, in the end, compiling the IRs that have passed through the different levels of the multi-level stack of code. For example, as shown in Fig. 2, MLIR may be used to compile code that contains programming statements in different programming languages and/or dialects. For example, the code may comprise programming statements in a first programming language or dialect (of a programming language), programming statements in a second programming language or dialect, programming statements in a third dialect or programming language etc. During compilation, the compiler passes through the different levels being based on the different dialects, with the respective programming statements written in the dialect or programming language represented by that level being converted (i.e., translated) into an intermediate representation and being “lowered” (i.e., inserted as intermediate representation) into the next-lower level. For example, in the example shown in Fig. 2, the programming statements in the eDSL language or dialect are converted into an IR and lowered into the code further comprising programming statements in the affine dialect (a dialect that is focused on complex data structures as used in machine learning) and programming statements in the standard dialect (e.g., C++). The programming statements in the different dialects or programming languages may be distinguished using namespaces, e.g., by including a level-specific prefix before the respective programming statements, such as EDSL::op (with op being the operation), AFFINE::op, STD::op etc. or edsl.op, affine.op and std.op. In the examples given in Figs. 3a to 7b, the programming statements in the standard dialect are included without a prefix, and the programming statements in the eDSL dialect are included with the EDSL:: namespace prefix. Accordingly, the one or more pre-defined programming statements may be defined in a namespace of the embedded domain-specific programming language. MLIR is predominantly used in the context of machine learning and heterogeneous computing (i.e., computing on different computing devices, such as Central Processing Units (CPUs), Graphics Processing Units (GPUs) and other purpose-built accelerator cards).
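As a purely illustrative sketch of the namespace-based distinction described above, the following C++ fragment uses hypothetical EDSL and STD namespaces and a placeholder op function; it is not the actual dialect API, only an illustration of how a level-specific prefix disambiguates otherwise identically named operations.

```cpp
// Illustrative sketch only: hypothetical namespaces standing in for dialect prefixes.
#include <iostream>

namespace EDSL {
// Hypothetical eDSL-dialect operation.
inline int op(int x) { return x + 1; }
} // namespace EDSL

namespace STD {
// Hypothetical standard-dialect operation with the same name.
inline int op(int x) { return x * 2; }
} // namespace STD

int main() {
    // The namespace prefix indicates which dialect (level) an operation belongs to.
    std::cout << EDSL::op(3) << " " << STD::op(3) << "\n"; // prints "4 6"
    return 0;
}
```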
In the proposed concept, at least two different levels are distinguished – a first level that is based on the first set of programming statements in the eDSL, and a second level that is based on the second set of programming statements. In most cases, both the first set of programming statements and the second set of programming statements may be based on the same common programming language (e.g., C++ or Python), with the first set of programming statements being based on a first dialect (i.e., the eDSL dialect) of the common programming language and the second set of programming statements being based on a second dialect (e.g., the standard dialect) of the common programming language. Alternatively, different programming languages may be intermixed, i.e., the first and second programming language might not share a common programming language.
An eDSL is, as the name suggests, a domain-specific language that is embedded in a host language. A domain-specific language is a programming language that is specific to a domain of applications, such as machine-learning, data analytics, scientific computing, simulation, in contrast to general-purpose programming languages, such as the C++ standard dialect, C#, F#, Python or Swift. If the domain-specific language is embedded, it relies on a host language. Typically, embedded domain-specific languages are implemented as libraries that are built on top of the respective host language, and which use major aspects of the host language while adding some additional language elements that are useful in the particular application domain being addressed by the domain-specific language. For example, if the eDSL is focused on the application domain “machine learning” , the eDSL may comprise additional instructions and data types that are useful for machine learning, such as Tensor data types and matrix multi-plication functions. Accordingly, the embedded domain specific language may correspond to a library or framework that may be based on the second programming language (with the second programming language being the host (or common) language of the first programming language) .
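For illustration only, an eDSL library of the kind described above might add a domain-specific type on top of the C++ host language roughly as follows; the Tensor class and its single overloaded operator are hypothetical stand-ins, not the actual eDSL.

```cpp
// Minimal sketch (all names hypothetical): an eDSL realized as a C++ library that
// adds a domain-specific type on top of the host language.
#include <iostream>
#include <vector>

namespace EDSL {

// Hypothetical 1-D tensor type with an overloaded operator, as an eDSL library
// for the machine-learning domain might provide.
class Tensor {
public:
    explicit Tensor(std::vector<float> data) : data_(std::move(data)) {}

    // Element-wise addition of a scalar, returning a new Tensor.
    Tensor operator+(float s) const {
        std::vector<float> out(data_);
        for (float &v : out) v += s;
        return Tensor(out);
    }

    const std::vector<float> &data() const { return data_; }

private:
    std::vector<float> data_;
};

} // namespace EDSL

int main() {
    EDSL::Tensor a({1.0f, 2.0f, 3.0f});
    EDSL::Tensor b = a + 1.0f;  // host syntax, domain-specific semantics
    for (float v : b.data()) std::cout << v << " ";  // prints "2 3 4"
    std::cout << "\n";
    return 0;
}
```

The host-language syntax (the overloaded operator) is reused, while the domain-specific semantics come from the library, which is what makes the language “embedded”.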
In the proposed concept, the code comprises both the first set of programming statements (in the eDSL) and the second set of programming statements (in the second programming language, which may usually be the host language of the eDSL). In various examples, these programming statements are intermixed, i.e., the code may comprise at least one file comprising both programming statements of the first set and of the second set of programming statements. In other words, the first set of programming statements and the second set of programming statements may be intermixed within the code.
In the context of MLIR, the use of an eDSL can lead to additional overhead during the multi-level compilation of code comprising both the eDSL programming statements and the host programming language statements. In particular, groups of programming statements that are encapsulated in a block (e.g., within a for/while loop, an if block, or a switch case block) can cause such overhead. For example, as shown in connection with Figs. 3a and 3b, loops may be unrolled during the MLIR-based compilation, leading to vastly increased overhead if the compiler includes the eDSL-based instructions within the loop numerous times after the loop unroll. Such scenarios can bloat the compiled computer program or lead to errors or crashes during compilation.
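One plausible way to picture this effect, assuming the eDSL builds its IR by recording overloaded operator calls while the host program runs (an assumption made here only for illustration, not a statement about the actual implementation), is the following sketch: the host for loop is invisible to the eDSL, so the recorded IR simply contains one copy of the body per iteration.

```cpp
// Minimal sketch (hypothetical names) of why a host-language loop around eDSL
// statements gets "unrolled" in the generated IR: the eDSL only records the
// operations it sees, so each host iteration appends another copy of the body.
#include <iostream>
#include <string>
#include <vector>

namespace EDSL {

// Hypothetical trace of eDSL operations, standing in for the intermediate representation.
inline std::vector<std::string> ir;

struct Tensor {
    // The overloaded operator does not execute eagerly; it records an IR entry.
    Tensor &operator+=(int) {
        ir.push_back("edsl.add");
        return *this;
    }
};

} // namespace EDSL

int main() {
    EDSL::Tensor A;
    // Host (C++) loop: invisible to the eDSL, so the body is traced five times.
    for (int i = 0; i < 5; ++i) {
        A += 1;  // eDSL statement
    }
    std::cout << "traced ops: " << EDSL::ir.size() << "\n";  // prints 5: the loop is unrolled
    return 0;
}
```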
The proposed concept seeks to avoid such scenarios by defining programming statements that define a block of code to be treated as a contiguous block during compilation. In the context of Figs. 2 to 8, these programming statements are also denoted “BodyOp”, while, in the context of Figs. 1a to 1b, the term “one or more pre-defined programming statements” is used. The one or more pre-defined programming statements encapsulate a block of programming statements within the code, with the block of programming statements being the “Body” referred to by the “BodyOp” statements. For example, the encapsulated programming statements may (all) be programming statements of the first set of programming statements, i.e., eDSL programming statements. Instead of treating the programming statements of the encapsulated block individually (e.g., when lowering their intermediate representation into the next-lower level of, e.g., the second set of programming statements), they are kept grouped (with respect to the control flow being modeled during the compilation) as they are lowered during the MLIR-based compilation. This is done by transforming the one or more pre-defined programming statements into a call to a function comprising the encapsulated programming statements during compilation of the first set of programming statements. In other words, the processing circuitry is configured to compile the first set of programming statements such that the encapsulated programming statements are represented as a function call to a function comprising transformed programming statements corresponding to the encapsulated programming statements (according to the respective intermediate representation being used). In other words, the programming statements of the first set of programming statements may all be transformed into the intermediate representation. However, the programming statements that are encapsulated by the one or more pre-defined programming statements may be kept grouped as they are lowered through the levels, by moving them into a separate function (comprising transformed programming statements) that is called via a function call.
The proposed mechanism is based on transforming the one or more pre-defined programming statements into the function call for the function comprising the transformed programming statements corresponding to the encapsulated programming statements. Therefore, before or during compilation, the one or more pre-defined programming statements are identified by the processing circuitry (e.g., by parsing the code, and/or as part of the compilation of a spe-cific construct, such as a lambda function) . The one or more pre-defined programming state-ments may comprise one or more pre-defined keywords or braces that are used to encapsulate the programming statements as a block. For example, these one or more pre-defined program-ming statements may be generic for a plurality of different types of encapsulated blocks of code, e.g., for two or more of a for loop block, a while loop block, a switch case block and an if block. In connection with Figs. 4a to 7a, examples are given on how such blocks can be defined.
In general, the one or more pre-defined programming statements may be delimited from the programming statements being encapsulated by the one or more pre-defined programming statements. For example, in the example shown in Fig. 4a, the one or more pre-defined programming statements may include the “EDSL::BodyOp loopBody (” and “) };” parts and the following call to the loopBody lambda function (a lambda function is an anonymous function that is defined inline with other code), while the “return A+1” programming statement is being encapsulated by the loopBody lambda function (with the argument “[] (Tensor A)” being necessary to define the function call, so that the argument can be considered to be encapsulated by the loopBody lambda function). The lambda function is then called in the subsequent programming statement (“EDSL::Tensor C = loopBody (A).niter (5);”, with niter referring to the number of iterations of the loop). For example, the one or more pre-defined instructions may comprise one or more instructions for defining a lambda function.
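The surface syntax described for Fig. 4a could be approximated by the following self-contained sketch. The BodyOp, Tensor, Invocation and niter names and signatures are hypothetical stand-ins inferred from the description, not the actual eDSL API; in particular, niter here simply applies the body repeatedly, where the real eDSL would instead record a single loop operation in the IR.

```cpp
// Minimal sketch (hypothetical API, inferred from the description of Fig. 4a):
// a BodyOp wraps the loop body in a lambda so it can be treated as one unit.
#include <functional>
#include <iostream>

namespace EDSL {

struct Tensor {
    int value = 0;
    Tensor operator+(int s) const { return Tensor{value + s}; }
};

// Hypothetical BodyOp: stores the encapsulated block as a callable.
class BodyOp {
public:
    explicit BodyOp(std::function<Tensor(Tensor)> body) : body_(std::move(body)) {}

    // Calling the BodyOp yields a handle that still knows the body...
    struct Invocation {
        std::function<Tensor(Tensor)> body;
        Tensor arg;
        // ...so that loop properties such as the iteration count can be attached
        // and later lowered as a single loop operation instead of being unrolled.
        Tensor niter(int n) const {
            Tensor t = arg;
            for (int i = 0; i < n; ++i) t = body(t);  // stands in for one IR loop op
            return t;
        }
    };

    Invocation operator()(Tensor arg) const { return Invocation{body_, arg}; }

private:
    std::function<Tensor(Tensor)> body_;
};

} // namespace EDSL

int main() {
    // Usage following the pattern described above (names are illustrative only).
    EDSL::BodyOp loopBody([](EDSL::Tensor A) { return A + 1; });
    EDSL::Tensor A{0};
    EDSL::Tensor C = loopBody(A).niter(5);
    std::cout << C.value << "\n";  // prints 5
    return 0;
}
```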
In general, as outlined above, the one or more pre-defined programming statements may be generic for a plurality of different types of encapsulated blocks of code. While the function receives different names in the examples of Figs. 4a to 7a (loopBody in Fig. 4a, bodyFunc in Fig. 5a, bodyFuncA – bodyFuncC in Fig. 6a and ifBody in Fig. 7a), the remainder of the one or more pre-defined programming statements may be the same (i.e., generic) in each case. In other words, the one or more pre-defined programming statements may comprise at least one first programming statement defining the programming statements being encapsulated (i.e., the part that stays the same) and at least one second programming statement defining an operation to be performed with the programming statements being encapsulated (i.e., the part that is different depending on the type of block being defined). As outlined above, the at least one first programming statement defines which programming statements are encapsulated and is therefore transformed into a function call and a corresponding function comprising the transformed programming statements corresponding to the encapsulated programming statements. The at least one second programming statement, however, is translated into corresponding programming statements that define the processing to be applied to the block of code in the second programming language, thus lowering the operation to be performed into the second programming language. In other words, the processing circuitry may be configured to translate the at least one second programming statement into at least one corresponding programming statement in the second programming language, and to compile the at least one corresponding programming statement in the second programming language together with the second set of programming statements. Accordingly, as further shown in Fig. 1b, the method may comprise translating 135 the at least one second programming statement into at least one corresponding programming statement in the second programming language and compiling 140 the at least one corresponding programming statement in the second programming language together with the second set of programming statements. For example, a second programming statement defining loop (or switch case, or if) properties of the block of encapsulated programming statements may be translated into a corresponding programming statement (e.g., a programming statement defining a loop, a programming statement defining a switch case construct or a programming statement defining an if clause) in the second programming language, for the compilation of the second set of programming statements. These tasks may be part of the compilation of the first set of programming statements.
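Conceptually, the result of this translation (all names hypothetical, shown here as plain C++ rather than an actual IR) is a separate function generated from the encapsulated block plus a loop in the second programming language that calls it:

```cpp
// Minimal sketch (illustrative only) of the translation described above: the
// "second" programming statement (the loop property niter) becomes a host-language
// loop, while the encapsulated block becomes a separate function that is called.
#include <iostream>

namespace lowered {

// Function generated from the encapsulated block (hypothetical name).
int loop_body_fn(int a) { return a + 1; }

// Corresponding programming statement in the second language: a plain loop that
// repeatedly calls the generated function, instead of unrolled copies of the body.
int run_loop(int a, int niter) {
    for (int i = 0; i < niter; ++i) {
        a = loop_body_fn(a);
    }
    return a;
}

} // namespace lowered

int main() {
    std::cout << lowered::run_loop(0, 5) << "\n";  // prints 5
    return 0;
}
```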
In MLIR, compilation is performed iteratively, with the compilation of the second set of programming statements following the compilation of the first set of programming statements (however, some interaction between the compilation passes may occur as the compiler attempts to perform various optimizations). The result of the compilation of the first set of programming statements is lowered into the code for the compilation of the second set of programming statements, with the first set of programming statements being represented by the first set of transformed programming statements during the compilation of the code defined by the second set of programming statements (and, optionally, the at least one corresponding programming statement in the second programming language that is translated from the at least one second programming statement). The processing circuitry is configured to compile the second set of programming statements, together with the elements (e.g., the IR and the corresponding programming statement) that are being lowered down from a higher level, to obtain the second set of transformed programming statements according to an intermediate representation. The compilation may then continue with the next-lower level (e.g., the SCF (Structured Control Flow) level as shown in Fig. 2). In other words, the processing circuitry may be configured to continue compiling the code using the first set of transformed programming statements and the second set of transformed programming statements. Accordingly, as further shown in Fig. 1b, the method may comprise continuing compiling 150 the code using the first set of transformed programming statements and the second set of transformed programming statements.
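As an illustration of this level-by-level lowering, the following toy pipeline passes a list of IR strings through two passes; the strings and the pass behavior are purely illustrative and do not correspond to real MLIR dialect syntax.

```cpp
// Minimal sketch (illustrative only, all names hypothetical) of pass-by-pass
// lowering: each level consumes the IR produced so far and rewrites its own
// statements before handing the result to the next, lower level.
#include <functional>
#include <iostream>
#include <string>
#include <vector>

using IR = std::vector<std::string>;
using Pass = std::function<IR(const IR &)>;

// Runs the levels in order; the output of one pass is the input of the next.
IR lower(IR ir, const std::vector<Pass> &pipeline) {
    for (const Pass &pass : pipeline) {
        ir = pass(ir);
    }
    return ir;
}

int main() {
    IR ir = {"edsl.loop_body", "std.print"};

    std::vector<Pass> pipeline = {
        // eDSL level: represent the encapsulated body as a call to a function.
        [](const IR &in) {
            IR out;
            for (const std::string &op : in)
                out.push_back(op == "edsl.loop_body" ? "affine.for { call @body }" : op);
            return out;
        },
        // Lower level: lower the affine loop further while keeping the call intact.
        [](const IR &in) {
            IR out;
            for (const std::string &op : in)
                out.push_back(op == "affine.for { call @body }" ? "scf.for { call @body }" : op);
            return out;
        },
    };

    for (const std::string &op : lower(ir, pipeline)) std::cout << op << "\n";
    return 0;
}
```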
The interface circuitry 12 or means for communicating 12 may correspond to one or more inputs and/or outputs for receiving and/or transmitting information, which may be in digital (bit) values according to a specified code, within a module, between modules or between modules of different entities. For example, the interface circuitry 12 or means for communi-cating 12 may comprise circuitry configured to receive and/or transmit information.
For example, the processing circuitry 14 or means for processing 14 may be implemented using one or more processing units, one or more processing devices, any means for processing, such as a processor, a computer or a programmable hardware component being operable with accordingly adapted software. In other words, the described function of the processing cir-cuitry 14 or means for processing may as well be implemented in software, which is then executed on one or more programmable hardware components. Such hardware components may comprise a general-purpose processor, a Digital Signal Processor (DSP) , a micro-con-troller, etc.
For example, the storage circuitry 16 or means for storing information 16 may comprise at least one element of the group of a computer readable storage medium, such as a magnetic or optical storage medium, e.g., a hard disk drive, a flash memory, Floppy-Disk, Random Access Memory (RAM) , Programmable Read Only Memory (PROM) , Erasable Programmable Read Only Memory (EPROM) , an Electronically Erasable Programmable Read Only Memory (EEPROM) , or a network storage.
For example, the computer system 100 may be a workstation computer system (e.g., a workstation computer system being used for scientific computation) or a server computer system, i.e., a computer system being used to serve functionality, such as the computer program, to one or more client computers.
More details and aspects of the apparatus, device, method, and computer program for com-piling code comprising programming statements in an embedded domain specific language and of the computer system are mentioned in connection with the proposed concept or one or more examples described above or below (e.g., Figs. 2 to 8) . The apparatus, device, method, and computer program for compiling code comprising programming statements in an embed-ded domain specific language and of the computer system may comprise one or more addi-tional optional features corresponding to one or more aspects of the proposed concept, or one or more examples described above or below.
Various examples of the present disclosure relate to a concept for improving a (C++) embedded domain-specific language (eDSL) control flow.
In general, there is only limited support for flow control in MLIR-based eDSL frameworks. Fig. 2 illustrates a workflow structure of an eDSL to MLIR compiler flow. On the left in Fig. 2, the eDSL language 210 is shown, which, in this example, comprises C++ Tensor eDSL, Python Tensor eDSL, devito (an affine dialect) and Psyclone (another affine dialect). In the middle, the MLIR flow 220 is shown, where the code in the eDSL dialect (e.g., C++ Tensor eDSL and/or Python Tensor eDSL) is translated, by the Low-Level Virtual Machine (LLVM) compiler, and the resulting IR of the eDSL dialect is lowered (i.e., incorporated) into the code in the affine dialect (e.g., devito and/or Psyclone) and again translated by the LLVM compiler, with the resulting IR being lowered into the code in the standard dialect and again translated by the LLVM compiler, and with the resulting IR being lowered into the code in the SCF dialect (structured control flow dialect) and again translated and compiled by the LLVM compiler. In the end, the operations specified in the respective dialects are all combined by the LLVM compiler (by lowering the respective intermediate representations into the next-lower level in the multi-level intermediate representation) and compiled for the respective target hardware 230.
In the frontend, the eDSL is embedded in programming languages. As shown in Fig. 2, an eDSL dialect can be used as entry into the MLIR framework, with the eDSL dialect getting lowered to the MLIR built-in dialects and custom dialects. Finally, the eDSL, together with its host language, is translated to LLVM and run on different devices (hardware 230 in Fig. 2, with the hardware potentially comprising different computing devices such as a CPU (Central Processing Unit), GPU (Graphics Processing Unit) or VPU (Video Processing Unit or Vector Processing Unit)).
However, the usage of eDSL flow control might not be as flexible as that of some common programming languages, especially when adopted together with its host language, such as C++. As it is hard to override control flow operations of the host language, like for and while in C++, programmers tend to mix the host and eDSL language, which may be detrimental with respect to flow control.
For example, a mixed code block that contains an eDSL control flow is listed in Fig. 3a. Fig. 3a shows a code piece comprising eDSL programming statements and host language programming statements. In this piece of code, the C++ loop control statement for and its loop body are used. In addition, there is a C++ eDSL statement A += 1; in which all operators are overridden (as defined by the eDSL). The resulting IR after the first lowering passes is shown in Fig. 3b. Fig. 3b shows a code piece comprising transformed programming statements in an intermediate representation.
As shown in Fig. 3b, the C++ for loop with eDSL loop content is (directly) fully unrolled. If there is a complex loop body, each pass incurs a large effort due to the duplicated IR. This may result in a significant degradation of compile-time performance.
In general compilation stages, operations and operators in statements will be parsed into an AST (Abstract Syntax Tree). This is acceptable for the pure host language or eDSL when those statements exist in a sub-block of the control flow, such as a loop body or conditional blocks, as this section (i.e., sub-block) of code can be parsed by the corresponding parser. However, in many cases, developers prefer to combine host language and eDSL operations to reduce development effort.
Since the eDSL-oriented parser cannot (and does not need to) cover all kinds of expressions (i.e., programming statements), but merely the eDSL expressions, the mixed code comprising eDSL and host language is usually parsed by the host language parser. While this works reasonably well if a piece of code is encapsulated in a control flow block, it usually does not if such code is part of a loop body. In general, the loop will be fully unrolled, and the loop body IR (generated based on the eDSL programming statements) will be repeated as many times as specified by the loop. This behavior significantly increases the compilation overhead, such that the compiler may crash during the compilation stage.
As a workaround, developers may avoid mixing host code (C++) and eDSL code together in a loop body. Instead, a group of operations used in a loop can be wrapped and moved to a separate operation (e.g., via a function call). This group of operations can then be regarded as an individual programming statement in parsing and high-level passing. When it is passed to a suitable stage, those looped operations can be converted into an intrinsic loop function or be decomposed into other operations. However, this workaround requires eDSL developers to cover all commonly used loop scenarios and build operations for them one by one, which is effort-consuming and not generically applicable. An eDSL defined in this way might not be considered user-friendly and may be too domain-specific to be used.
In the proposed concept, a special operation, further denoted BodyOp (BodyOperation), is used to improve the eDSL control flow in a top-down manner. In the frontend, at the eDSL level, the BodyOp (e.g., the one or more pre-defined programming statements) modifies the behavior of the eDSL control flow operations. Control flow content is wrapped by the BodyOp to maintain a simplified high-level IR.
When lowering down to low-level IR, the BodyOp may be transformed to a function call inside MLIR loop statements or customized loop statements, so that no redundant IR might degrade the compilation performance. In various examples, the BodyOp does not refer to a  specific operation to be added in a certain eDSL, but to a methodology to give MLIR-based eDSL an optimized control flow.
In the proposed BodyOp concept, the eDSL allows developers to write statements with C++ or eDSL actions without too much regard to compilation performance. By avoiding redundant IR, the BodyOp concept may avoid extra compilation overhead. On the other hand, the improved control flow can make the eDSL easier to extend and maintain. Developers can design more complex nested conditional statements.
To improve the host language (e.g., C++) eDSL control flow, the proposed concept introduces a new operation – BodyOp, which is adapted to the MLIR flow, to manage control flow blocks.
eDSL users can use a block from the host language to define the operations being placed in the BodyOp. To take C++ as an example, a lambda function can be used. To give an example, the code in Fig. 3a can be re-written as shown in Fig. 4a after employing the BodyOp operation in the eDSL. Fig. 4a shows an example of a BodyOp of a loop in the eDSL. Instead of the for-construct used in the code of Fig. 3a, the code to be looped is nested inside the EDSL::BodyOp loopBody (... {...}) statement. However, this code is just one possible form of the C++ eDSL, given as an example. Developers can design similar structures following the proposed concept, adapting the concept to the respective host language.
The BodyOp may be designed to contain any possible blocks upon the base of MLIR. However, the BodyOp is not just another code block that is recognized not by the eDSL but by the host language. In other words, this block will not even be unrolled, because it is no longer parsed by the host language. The block may be treated as a whole and is opaque to the rest of the code. It might only take parameters ([] (Tensor A) in the example given in Fig. 4a) and return the required results (A+1 in the example of Fig. 4a).
In the example given in Figs. 3a and 3b, the host and eDSL language were mixed, which may cause problems when determining the control flow. As a result, the loop was unrolled, causing every statement in the loop body to be repeated many times during IR lowering. This issue might not occur with the proposed BodyOp concept. During the conversion of the high-level dialects, the BodyOp may be transformed into a function call that is nested in the corresponding operation (the loop operation in Figs. 4a and 4b), as shown in Fig. 4b. Fig. 4b shows an example of a BodyOp of a loop in the IR. The code shown in Fig. 4b is a new version of the code shown in Fig. 3b, this time using the BodyOp.
Following the characteristics of the MLIR framework, the compilation flow will mix/match different IR representations in a common loop. If the different IRs are included in one abstract operation (BodyOp), they may be translated into a function call during the MLIR lowering pass. This can avoid duplication of loop content and make IR passing more effective.
However, the proposed concept is not limited to loops. Figs. 5a and 5b show an example of a bare (i.e., multi-purpose) BodyOp in the eDSL (Fig. 5a) and of the resulting IR (Fig. 5b) . Figs. 6a and 6b show an example of a BodyOp of a switch case statement in the eDSL (Fig. 6a) and of the resulting IR (Fig. 6b) . Figs. 7a and 7b show an example of a BodyOp of an if statement in the eDSL (Fig. 7a) and of the resulting IR (Fig. 7b) .
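Following the same pattern as the loop sketch above, a BodyOp attached to an if statement (mirroring the ifBody named for Fig. 7a, with an invented cond property) might look as follows; the API is again an illustrative assumption rather than the eDSL's actual interface.

```cpp
// Minimal sketch (hypothetical API): the same kind of encapsulation applied to an
// if statement; only the attached property differs from the loop case.
#include <functional>
#include <iostream>

namespace EDSL {

struct Tensor {
    int value = 0;
};

// Hypothetical generic BodyOp: stores the encapsulated block as a callable.
struct BodyOp {
    std::function<Tensor(Tensor)> body;

    struct Invocation {
        std::function<Tensor(Tensor)> body;
        Tensor arg;
        // The "if" property: the encapsulated block runs only when cond holds,
        // which would be lowered to a single conditional around a function call.
        Tensor cond(bool c) const { return c ? body(arg) : arg; }
    };

    Invocation operator()(Tensor arg) const { return Invocation{body, arg}; }
};

} // namespace EDSL

int main() {
    EDSL::BodyOp ifBody{[](EDSL::Tensor A) { return EDSL::Tensor{A.value + 10}; }};
    EDSL::Tensor A{1};
    EDSL::Tensor B = ifBody(A).cond(A.value > 0);
    std::cout << B.value << "\n";  // prints 11
    return 0;
}
```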
Fig. 8 shows a schematic diagram of how the BodyOp operation is lowered through the levels of the MLIR. On the left, Fig. 8 shows the C++ tensor eDSL 810. Two versions of the desired loop 812 are shown – the version 814 shown in Fig. 3a, and the BodyOp version 816 shown in Fig. 4a. On the right, the MLIR 820 is shown. The BodyOp version 816 is translated in the EDSL dialect to a loop body operation 824, which comprises a function call 826 to the block of code comprised in the BodyOp. The statements defined in the eDSL loop body are extracted into a separate function operation, carrying its original parameters, iteration counter and loop variables. At the loop op's original location, it is replaced by an MLIR component loop operation based on the stage of the lowering pipeline. This function call is then used by the LLVM compiler 822 when compiling the code.
Since the BodyOp separates the control flow blocks from their upper-level controlling code, these two parts of the code are lowered separately. For example, in the high-level lowering, the loop operation with the BodyOp may be transformed to a forOp or parallelOp of the Affine Dialect. When this lowering occurs at the low level, the loop with the BodyOp may be transformed to a for operation of the SCF Dialect, the OMP (OpenMP) Dialect or customized Dialects. Meanwhile, the sub-block in the BodyOp (containing the eDSL statements) also gets lowered by the same pipeline, and so does its nested control flow.
The proposed concept provides a methodology to improve the MLIR-based eDSL control flow. Using the proposed concept, the developer can import the BodyOp into their eDSL to separate eDSL code and its control flow blocks from parsing, so that redundant IR generation can be avoided, and IR compilation can gain better performance.
According to the proposed concept, the proposed operation (BodyOp) may be illustrated in the documentation of the respective eDSL, including its usage, the Op form in dialect and conversion passes.
More details and aspects of the BodyOp concept are mentioned in connection with the pro-posed concept or one or more examples described above or below (e.g., Figs. 1a to 1b) . The BodyOp concept may comprise one or more additional optional features corresponding to one or more aspects of the proposed concept, or one or more examples described above or below.
In the following, some examples of the proposed concept are shown:
An example (e.g., example 1) relates to an apparatus (10) comprising interface circuitry (12) , machine-readable instructions and processing circuitry (14) to execute the machine-readable instructions to obtain code comprising a first set of programming statements in an embedded domain-specific programming language and a second set of programming statements in a second programming language, the first set of programming statements comprising one or more pre-defined programming statements encapsulating a block of programming statements. The machine-readable instructions comprise instructions to compile the first set of program-ming statements to generate a first set of transformed programming statements according to an intermediate representation, with the encapsulated programming statements being repre-sented as a function call to a function comprising transformed programming statements cor-responding to the encapsulated programming statements. The machine-readable instructions comprise instructions to compile the second set of programming statements, with the first set of programming statements being represented by the first set of transformed programming statements during the compilation of the second set of programming statements.
Another example (e.g., example 2) relates to a previously described example (e.g., example 1) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are generic for a plurality of different types of encapsulated blocks of code.
Another example (e.g., example 3) relates to a previously described example (e.g., example 2) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are generic for two or more of a for loop block, a while loop block, a switch case block and an if block.
Another example (e.g., example 4) relates to a previously described example (e.g., one of the examples 1 to 3) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements comprise at least one first programming state-ment defining the programming statements being encapsulated and at least one second pro-gramming statement defining an operation to be performed with the programming statements being encapsulated.
Another example (e.g., example 5) relates to a previously described example (e.g., example 4) or to any of the examples described herein, further comprising that the machine-readable instructions comprise instructions to translate the at least one second programming statement into at least one corresponding programming statement in the second programming language, and to compile the at least one corresponding programming statement in the second program-ming language together with the second set of programming statements.
Another example (e.g., example 6) relates to a previously described example (e.g., one of the examples 1 to 5) or to any of the examples described herein, further comprising that the ma-chine-readable instructions comprise instructions to continue compiling the code using the first set of transformed programming statements and the second set of transformed program-ming statements.
Another example (e.g., example 7) relates to a previously described example (e.g., one of the examples 1 to 6) or to any of the examples described herein, further comprising that the compilation of the first set of programming statements and of the second set of programming statements is part of a multi-layer intermediate representation-based compilation of the code.
Another example (e.g., example 8) relates to a previously described example (e.g., one of the examples 1 to 7) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are defined in a namespace of the embedded domain-specific programming language.
Another example (e.g., example 9) relates to a previously described example (e.g., one of the examples 1 to 8) or to any of the examples described herein, further comprising that the en-capsulated programming statements are programming statements of the first set of program-ming statements.
Another example (e.g., example 10) relates to a previously described example (e.g., one of the examples 1 to 9) or to any of the examples described herein, further comprising that the first set of programming statements and the second set of programming statements are inter-mixed within the code.
Another example (e.g., example 11) relates to a previously described example (e.g., one of the examples 1 to 10) or to any of the examples described herein, further comprising that the embedded domain specific language corresponds to a library or framework that is based on the second programming language.
An example (e.g., example 12) relates to a computer system (100) comprising the apparatus (10) according to one of the examples 1 to 11 (or according to any other example) .
An example (e.g., example 13) relates to an apparatus (10) comprising processing circuitry (14) configured to obtain code comprising a first set of programming statements in an embed-ded domain-specific programming language and a second set of programming statements in a second programming language, the first set of programming statements comprising one or more pre-defined programming statements encapsulating a block of programming statements. The processing circuitry is configured to compile the first set of programming statements to generate a first set of transformed programming statements according to an intermediate rep-resentation, with the encapsulated programming statements being represented as a function call to a function comprising transformed programming statements corresponding to the en-capsulated programming statements. The processing circuitry is configured to compile the second set of programming statements, with the first set of programming statements being  represented by the first set of transformed programming statements during the compilation of the second set of programming statements.
Another example (e.g., example 14) relates to a previously described example (e.g., example 13) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are generic for a plurality of different types of encapsulated blocks of code.
Another example (e.g., example 15) relates to a previously described example (e.g., example 14) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are generic for two or more of a for loop block, a while loop block, a switch case block and an if block.
Another example (e.g., example 16) relates to a previously described example (e.g., one of the examples 13 to 15) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements comprise at least one first programming statement defining the programming statements being encapsulated and at least one second programming statement defining an operation to be performed with the programming state-ments being encapsulated.
Another example (e.g., example 17) relates to a previously described example (e.g., example 16) or to any of the examples described herein, further comprising that the processing circuitry is configured to translate the at least one second programming statement into at least one corresponding programming statement in the second programming language, and to compile the at least one corresponding programming statement in the second programming language together with the second set of programming statements.
Another example (e.g., example 18) relates to a previously described example (e.g., one of the examples 13 to 17) or to any of the examples described herein, further comprising that the processing circuitry is configured to continue compiling the code using the first set of trans-formed programming statements and the second set of transformed programming statements.
Another example (e.g., example 19) relates to a previously described example (e.g., one of the examples 13 to 18) or to any of the examples described herein, further comprising that the compilation of the first set of programming statements and of the second set of programming statements is part of a multi-layer intermediate representation-based compilation of the code.
Another example (e.g., example 20) relates to a previously described example (e.g., one of the examples 13 to 19) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are defined in a namespace of the embedded domain-specific programming language.
Another example (e.g., example 21) relates to a previously described example (e.g., one of the examples 13 to 20) or to any of the examples described herein, further comprising that the encapsulated programming statements are programming statements of the first set of pro-gramming statements.
Another example (e.g., example 22) relates to a previously described example (e.g., one of the examples 13 to 21) or to any of the examples described herein, further comprising that the first set of programming statements and the second set of programming statements are inter-mixed within the code.
Another example (e.g., example 23) relates to a previously described example (e.g., one of the examples 13 to 22) or to any of the examples described herein, further comprising that the embedded domain specific language corresponds to a library or framework that is based on the second programming language.
An example (e.g., example 24) relates to a computer system (100) comprising the apparatus (10) according to one of the examples 13 to 23 (or according to any other example) .
An example (e.g., example 25) relates to a device (10) comprising means for processing (14) configured to obtain code comprising a first set of programming statements in an embedded domain-specific programming language and a second set of programming statements in a second programming language, the first set of programming statements comprising one or more pre-defined programming statements encapsulating a block of programming statements. The means for processing is configured to compile the first set of programming statements to  generate a first set of transformed programming statements according to an intermediate rep-resentation, with the encapsulated programming statements being represented as a function call to a function comprising transformed programming statements corresponding to the en-capsulated programming statements. The means for processing is configured to compile the second set of programming statements, with the first set of programming statements being represented by the first set of transformed programming statements during the compilation of the second set of programming statements.
Another example (e.g., example 26) relates to a previously described example (e.g., example 25) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are generic for a plurality of different types of encapsulated blocks of code.
Another example (e.g., example 27) relates to a previously described example (e.g., example 26) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are generic for two or more of a for loop block, a while loop block, a switch case block and an if block.
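For illustration only, the following C++ sketch shows one way such a generic, block-type-agnostic construct might be expressed in a C++-hosted eDSL. The names used here (body, For, While, If) and the direct host-side execution are assumptions made for this sketch rather than the API of the application; an actual eDSL front end would typically record the encapsulated block for intermediate-representation generation instead of running it.

#include <functional>
#include <iostream>

// Hypothetical generic wrapper: one pre-defined construct, several block types.
class body {
public:
    explicit body(std::function<void(int)> block) : block_(std::move(block)) {}

    // Drive the encapsulated block as a counted for loop.
    void For(int begin, int end, int step) const {
        for (int i = begin; i < end; i += step) block_(i);
    }

    // Drive the same block as a while loop with a caller-supplied condition.
    void While(const std::function<bool()>& cond) const {
        while (cond()) block_(0);
    }

    // Drive the same block as an if statement.
    void If(bool cond) const {
        if (cond) block_(0);
    }

private:
    std::function<void(int)> block_;
};

int main() {
    body b([](int i) { std::cout << "iteration " << i << "\n"; });
    b.For(0, 3, 1);                       // for loop block
    b.If(true);                           // if block
    int n = 2;
    b.While([&n] { return n-- > 0; });    // while loop block
    return 0;
}

The point of the sketch is that a single pre-defined construct captures the block once and can then be driven as several different block types.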
Another example (e.g., example 28) relates to a previously described example (e.g., one of the examples 25 to 27) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements comprise at least one first programming statement defining the programming statements being encapsulated and at least one second programming statement defining an operation to be performed with the programming statements being encapsulated.
Another example (e.g., example 29) relates to a previously described example (e.g., example 28) or to any of the examples described herein, further comprising that the means for processing is configured to translate the at least one second programming statement into at least one corresponding programming statement in the second programming language, and to compile the at least one corresponding programming statement in the second programming language together with the second set of programming statements.
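For illustration only, the following self-contained C++ sketch separates the two roles described in these examples under assumed names: body stands for the first pre-defined programming statement and records the encapsulated block under a handle (a stand-in for the transformed, outlined function), while For stands for the second pre-defined programming statement and is written as ordinary host-language control flow around a call through that handle, so it can be compiled together with the surrounding host code. None of these names or signatures are taken from the application.

#include <functional>
#include <iostream>
#include <string>
#include <unordered_map>
#include <vector>

// Stand-in for the set of transformed (outlined) blocks produced by the eDSL side.
std::unordered_map<std::string, std::function<void(int)>> outlined_functions;

// First pre-defined statement: capture the encapsulated block under a handle.
std::string body(const std::string& name, std::function<void(int)> block) {
    outlined_functions[name] = std::move(block);
    return name;
}

// Second pre-defined statement (the operation): lowered here to an ordinary
// host-language loop that invokes the captured block through its handle, so
// the host compiler compiles it together with the surrounding host code.
void For(const std::string& handle, int begin, int end, int step) {
    const auto& fn = outlined_functions.at(handle);
    for (int i = begin; i < end; i += step) fn(i);
}

int main() {
    std::vector<int> out;
    const std::string handle =
        body("loop_body_0", [&out](int i) { out.push_back(i * i); });
    For(handle, 0, 4, 1);   // operation expressed as host-language control flow
    for (int v : out) std::cout << v << ' ';
    std::cout << '\n';
    return 0;
}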
Another example (e.g., example 30) relates to a previously described example (e.g., one of the examples 25 to 29) or to any of the examples described herein, further comprising that the means for processing is configured to continue compiling the code using the first set of transformed programming statements and the second set of transformed programming statements.
Another example (e.g., example 31) relates to a previously described example (e.g., one of the examples 25 to 30) or to any of the examples described herein, further comprising that the compilation of the first set of programming statements and of the second set of programming statements is part of a multi-layer intermediate representation-based compilation of the code.
Another example (e.g., example 32) relates to a previously described example (e.g., one of the examples 25 to 31) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are defined in a namespace of the embedded domain-specific programming language.
Another example (e.g., example 33) relates to a previously described example (e.g., one of the examples 25 to 32) or to any of the examples described herein, further comprising that the encapsulated programming statements are programming statements of the first set of programming statements.
Another example (e.g., example 34) relates to a previously described example (e.g., one of the examples 25 to 33) or to any of the examples described herein, further comprising that the first set of programming statements and the second set of programming statements are intermixed within the code.
Another example (e.g., example 35) relates to a previously described example (e.g., one of the examples 25 to 34) or to any of the examples described herein, further comprising that the embedded domain specific language corresponds to a library or framework that is based on the second programming language.
An example (e.g., example 36) relates to a computer system (100) comprising the device (10) according to one of the examples 25 to 35 (or according to any other example) .
An example (e.g., example 37) relates to a method comprising obtaining (110) code comprising a first set of programming statements in an embedded domain-specific programming language and a second set of programming statements in a second programming language, the first set of programming statements comprising one or more pre-defined programming statements encapsulating a block of programming statements. The method comprises compiling (130) the first set of programming statements to generate a first set of transformed programming statements according to an intermediate representation, with the encapsulated programming statements being represented as a function call to a function comprising transformed programming statements corresponding to the encapsulated programming statements. The method comprises compiling (140) the second set of programming statements, with the first set of programming statements being represented by the first set of transformed programming statements during the compilation of the code defined by the second set of programming statements.
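For illustration only, the following C++ sketch shows the general shape of this transformation: the encapsulated block is emitted as a separate function in a simplified, invented textual IR, and a call to that function is emitted at the block's original position. The IR syntax, the function name body_op_0 and the helper outline are hypothetical and do not reproduce the MLIR dialects or passes of the application.

#include <iostream>
#include <sstream>
#include <string>
#include <utility>
#include <vector>

// One already-transformed statement in the (hypothetical) textual IR.
struct Statement { std::string text; };

// The block of statements encapsulated by the pre-defined eDSL construct.
struct EncapsulatedBlock { std::vector<Statement> stmts; };

// Outline the block into a function definition and return the definition text
// together with the call that replaces the block at its original location.
std::pair<std::string, std::string> outline(const EncapsulatedBlock& block,
                                            const std::string& name) {
    std::ostringstream def;
    def << "func @" << name << "(%i: index) {\n";
    for (const Statement& s : block.stmts) def << "  " << s.text << "\n";
    def << "  return\n}\n";
    const std::string call = "call @" + name + "(%i) : (index) -> ()";
    return {def.str(), call};
}

int main() {
    EncapsulatedBlock block{{{"%0 = load %buf[%i]"}, {"store %0, %out[%i]"}}};
    const auto result = outline(block, "body_op_0");
    std::cout << result.first
              << "// emitted at the block's original position:\n"
              << result.second << "\n";
    return 0;
}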
Another example (e.g., example 38) relates to a previously described example (e.g., example 37) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are generic for a plurality of different types of encapsulated blocks of code.
Another example (e.g., example 39) relates to a previously described example (e.g., example 38) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are generic for two or more of a for loop block, a while loop block, a switch case block and an if block.
Another example (e.g., example 40) relates to a previously described example (e.g., one of the examples 37 to 39) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements comprise at least one first programming statement defining the programming statements being encapsulated and at least one second programming statement defining an operation to be performed with the programming statements being encapsulated.
Another example (e.g., example 41) relates to a previously described example (e.g., example 40) or to any of the examples described herein, further comprising that the method comprises translating (135) the at least one second programming statement into at least one corresponding programming statement in the second programming language, and compiling (140) the at least one corresponding programming statement in the second programming language together with the second set of programming statements.
Another example (e.g., example 42) relates to a previously described example (e.g., one of the examples 37 to 41) or to any of the examples described herein, further comprising that the method comprises continuing compiling (150) the code using the first set of transformed programming statements and the second set of transformed programming statements.
Another example (e.g., example 43) relates to a previously described example (e.g., one of the examples 37 to 42) or to any of the examples described herein, further comprising that the compilation of the first set of programming statements and of the second set of programming statements is part of a multi-layer intermediate representation-based compilation of the code.
Another example (e.g., example 44) relates to a previously described example (e.g., one of the examples 37 to 43) or to any of the examples described herein, further comprising that the one or more pre-defined programming statements are defined in a namespace of the embedded domain-specific programming language.
Another example (e.g., example 45) relates to a previously described example (e.g., one of the examples 37 to 44) or to any of the examples described herein, further comprising that the encapsulated programming statements are programming statements of the first set of programming statements.
Another example (e.g., example 46) relates to a previously described example (e.g., one of the examples 37 to 45) or to any of the examples described herein, further comprising that the first set of programming statements and the second set of programming statements are intermixed within the code.
Another example (e.g., example 47) relates to a previously described example (e.g., one of the examples 37 to 46) or to any of the examples described herein, further comprising that the  embedded domain specific language corresponds to a library or framework that is based on the second programming language.
An example (e.g., example 48) relates to a computer system (100) configured to perform the method according to one of the examples 37 to 47 (or according to any other example) .
An example (e.g., example 49) relates to a non-transitory machine-readable storage medium including program code, when executed, to cause a machine to perform the method of one of the examples 37 to 47.
An example (e.g., example 50) relates to a computer program having a program code for performing the method of one of the examples 37 to 47 when the computer program is executed on a computer, a processor, or a programmable hardware component.
An example (e.g., example 51) relates to a machine-readable storage including machine readable instructions, when executed, to implement a method or realize an apparatus as claimed in any pending claim or shown in any example.
The aspects and features described in relation to a particular one of the previous examples may also be combined with one or more of the further examples to replace an identical or similar feature of that further example or to additionally introduce the features into the further example.
Examples may further be or relate to a (computer) program including a program code to execute one or more of the above methods when the program is executed on a computer, processor or other programmable hardware component. Thus, steps, operations or processes of different ones of the methods described above may also be executed by programmed computers, processors or other programmable hardware components. Examples may also cover program storage devices, such as digital data storage media, which are machine-, processor- or computer-readable and encode and/or contain machine-executable, processor-executable or computer-executable programs and instructions. Program storage devices may include or be digital storage devices, magnetic storage media such as magnetic disks and magnetic tapes, hard disk drives, or optically readable digital data storage media, for example. Other examples may also include computers, processors, control units, (field) programmable logic arrays ((F)PLAs), (field) programmable gate arrays ((F)PGAs), graphics processor units (GPU), application-specific integrated circuits (ASICs), integrated circuits (ICs) or system-on-a-chip (SoCs) systems programmed to execute the steps of the methods described above.
It is further understood that the disclosure of several steps, processes, operations or functions disclosed in the description or claims shall not be construed to imply that these operations are necessarily dependent on the order described, unless explicitly stated in the individual case or necessary for technical reasons. Therefore, the previous description does not limit the execution of several steps or functions to a certain order. Furthermore, in further examples, a single step, function, process or operation may include and/or be broken up into several sub-steps, -functions, -processes or -operations.
If some aspects have been described in relation to a device or system, these aspects should also be understood as a description of the corresponding method. For example, a block, device or functional aspect of the device or system may correspond to a feature, such as a method step, of the corresponding method. Accordingly, aspects described in relation to a method shall also be understood as a description of a corresponding block, a corresponding element, a property or a functional feature of a corresponding device or a corresponding system.
As used herein, the term “module” refers to logic that may be implemented in a hardware component or device, software or firmware running on a processing unit, or a combination thereof, to perform one or more operations consistent with the present disclosure. Software and firmware may be embodied as instructions and/or data stored on non-transitory computer-readable storage media. As used herein, the term “circuitry” can comprise, singly or in any combination, non-programmable (hardwired) circuitry, programmable circuitry such as processing units, state machine circuitry, and/or firmware that stores instructions executable by programmable circuitry. Modules described herein may, collectively or individually, be embodied as circuitry that forms a part of a computing system. Thus, any of the modules can be implemented as circuitry. A computing system referred to as being programmed to perform a method can be programmed to perform the method via software, hardware, firmware, or combinations thereof.
Any of the disclosed methods (or a portion thereof) can be implemented as computer-executable instructions or a computer program product. Such instructions can cause a computing system or one or more processing units capable of executing computer-executable instructions to perform any of the disclosed methods. As used herein, the term “computer” refers to any computing system or device described or mentioned herein. Thus, the term “computer-executable instruction” refers to instructions that can be executed by any computing system or device described or mentioned herein.
The computer-executable instructions can be part of, for example, an operating system of the computing system, an application stored locally to the computing system, or a remote application accessible to the computing system (e.g., via a web browser). Any of the methods described herein can be performed by computer-executable instructions performed by a single computing system or by one or more networked computing systems operating in a network environment. Computer-executable instructions and updates to the computer-executable instructions can be downloaded to a computing system from a remote server.
Further, it is to be understood that implementation of the disclosed technologies is not limited to any specific computer language or program. For instance, the disclosed technologies can be implemented by software written in C++, C#, Java, Perl, Python, JavaScript, Adobe Flash, assembly language, or any other programming language. Likewise, the disclosed technologies are not limited to any particular computer system or type of hardware.
Furthermore, any of the software-based examples (comprising, for example, computer-executable instructions for causing a computer to perform any of the disclosed methods) can be uploaded, downloaded, or remotely accessed through a suitable communication means. Such suitable communication means include, for example, the Internet, the World Wide Web, an intranet, cable (including fiber optic cable), magnetic communications, electromagnetic communications (including RF, microwave, ultrasonic, and infrared communications), electronic communications, or other such communication means.
The disclosed methods, apparatuses, and systems are not to be construed as limiting in any way. Instead, the present disclosure is directed toward all novel and nonobvious features and aspects of the various disclosed examples, alone and in various combinations and subcombinations with one another. The disclosed methods, apparatuses, and systems are not limited to any specific aspect or feature or combination thereof, nor do the disclosed examples require that any one or more specific advantages be present or problems be solved.
Theories of operation, scientific principles, or other theoretical descriptions presented herein in reference to the apparatuses or methods of this disclosure have been provided for the purposes of better understanding and are not intended to be limiting in scope. The apparatuses and methods in the appended claims are not limited to those apparatuses and methods that function in the manner described by such theories of operation.
The following claims are hereby incorporated in the detailed description, wherein each claim may stand on its own as a separate example. It should also be noted that although in the claims a dependent claim refers to a particular combination with one or more other claims, other examples may also include a combination of the dependent claim with the subject matter of any other dependent or independent claim. Such combinations are hereby explicitly proposed, unless it is stated in the individual case that a particular combination is not intended. Furthermore, features of a claim should also be included for any other independent claim, even if that claim is not directly defined as dependent on that other independent claim.

Claims (19)

  1. An apparatus (10) comprising processing circuitry (14) configured to:
    obtain code comprising a first set of programming statements in an embedded domain-specific programming language and a second set of programming statements in a second programming language, the first set of programming statements comprising one or more pre-defined programming statements encapsulating a block of programming statements;
    compile the first set of programming statements to generate a first set of transformed programming statements according to an intermediate representation, with the encapsulated programming statements being represented as a function call to a function comprising transformed programming statements corresponding to the encapsulated programming statements; and
    compile the second set of programming statements, with the first set of programming statements being represented by the first set of transformed programming statements during the compilation of the second set of programming statements.
  2. The apparatus according to claim 1, wherein the one or more pre-defined programming statements are generic for a plurality of different types of encapsulated blocks of code.
  3. The apparatus according to claim 2, wherein the one or more pre-defined programming statements are generic for two or more of a for loop block, a while loop block, a switch case block and an if block.
  4. The apparatus according to claim 1, wherein the one or more pre-defined programming statements comprise at least one first programming statement defining the programming statements being encapsulated and at least one second programming statement defining an operation to be performed with the programming statements being encapsulated.
  5. The apparatus according to claim 4, wherein the processing circuitry is configured to translate the at least one second programming statement into at least one corresponding programming statement in the second programming language, and to compile the at least one corresponding programming statement in the second programming language together with the second set of programming statements.
  6. The apparatus according to claim 1, wherein the processing circuitry is configured to continue compiling the code using the first set of transformed programming statements and the second set of transformed programming statements.
  7. The apparatus according to claim 1, wherein the compilation of the first set of programming statements and of the second set of programming statements is part of a multi-layer intermediate representation-based compilation of the code.
  8. The apparatus according to claim 1, wherein the one or more pre-defined programming statements are defined in a namespace of the embedded domain-specific programming language.
  9. The apparatus according to claim 1, wherein the encapsulated programming statements are programming statements of the first set of programming statements.
  10. The apparatus according to claim 1, wherein the first set of programming statements and the second set of programming statements are intermixed within the code.
  11. The apparatus according to claim 1, wherein the embedded domain specific language corresponds to a library or framework that is based on the second programming language.
  12. A computer system (100) comprising the apparatus (10) according to one of the claims 1 to 11.
  13. An apparatus (10) comprising interface circuitry (12) , machine-readable instructions and processing circuitry (14) to execute the machine-readable instructions to:
    obtain code comprising a first set of programming statements in an embedded domain-specific programming language and a second set of programming statements in a second programming language, the first set of programming statements comprising one or more pre-defined programming statements encapsulating a block of programming statements;
    compile the first set of programming statements to generate a first set of transformed programming statements according to an intermediate representation, with the encapsulated programming statements being represented as a function call to a function comprising transformed programming statements corresponding to the encapsulated programming statements; and
    compile the second set of programming statements, with the first set of programming statements being represented by the first set of transformed programming statements during the compilation of the second set of programming statements.
  14. A device (10) comprising means for processing (14) configured to:
    obtain code comprising a first set of programming statements in an embedded domain-specific programming language and a second set of programming statements in a second programming language, the first set of programming statements comprising one or more pre-defined programming statements encapsulating a block of programming statements;
    compile the first set of programming statements to generate a first set of transformed programming statements according to an intermediate representation, with the encapsulated programming statements being represented as a function call to a function comprising transformed programming statements corresponding to the encapsulated programming statements; and
    compile the second set of programming statements, with the first set of programming statements being represented by the first set of transformed programming statements during the compilation of the second set of programming statements.
  15. A method comprising:
    obtaining (110) code comprising a first set of programming statements in an embedded domain-specific programming language and a second set of programming statements in a second programming language, the first set of programming statements comprising one or more pre-defined programming statements encapsulating a block of programming statements;
    compiling (130) the first set of programming statements to generate a first set of transformed programming statements according to an intermediate representation, with the encapsulated programming statements being represented as a function call to a function comprising transformed programming statements corresponding to the encapsulated programming statements; and
    compiling (140) the second set of programming statements, with the first set of programming statements being represented by the first set of transformed programming statements during the compilation of the code defined by the second set of programming statements.
  16. The method according to claim 15, wherein the one or more pre-defined programming statements comprise at least one first programming statement defining the programming statements being encapsulated and at least one second programming statement defining an operation to be performed with the programming statements being encapsulated, the method comprising translating (135) the at least one second programming statement into at least one corresponding programming statement in the second programming language, and compiling (140) the at least one corresponding programming statement in the second programming language together with the second set of programming statements.
  17. The method according to claim 15, wherein the method comprises continuing compiling (150) the code using the first set of transformed programming statements and the second set of transformed programming statements.
  18. A non-transitory machine-readable storage medium including program code, when executed, to cause a machine to perform the method of one of the claims 15 to 17.
  19. A computer program having a program code for performing the method of one of the claims 15 to 17 when the computer program is executed on a computer, a processor, or a programmable hardware component.

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/119150 WO2024055262A1 (en) 2022-09-15 2022-09-15 Programming statements in embedded domain specific language

Publications (1)

Publication Number Publication Date
WO2024055262A1 true WO2024055262A1 (en) 2024-03-21

Family

ID=90273976

Country Status (1)

Country Link
WO (1) WO2024055262A1 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140189664A1 (en) * 2009-10-20 2014-07-03 Russell WAYNE Guenthner METHOD FOR ENABLING COMPILATION OF A COBOL SOURCE PROGRAM UTILIZING A TWO-STAGE COMPILATION PROCESS, THE COBOL SOURCE PROGRAM INCLUDING A MIX OF COBOL, C++ or JAVA STATEMENTS, AND OPTIONAL OPENMP DIRECTIVES
CN113608748A (en) * 2021-07-19 2021-11-05 上海浦东发展银行股份有限公司 Data processing method, device and equipment for converting C language into Java language
CN114461221A (en) * 2022-01-27 2022-05-10 北京奕斯伟计算技术有限公司 Compiling method, compiling device, electronic device, and storage medium
CN114625372A (en) * 2022-03-10 2022-06-14 平安科技(深圳)有限公司 Automatic component compiling method and device, computer equipment and storage medium
CN114924750A (en) * 2022-06-09 2022-08-19 爱驰汽车(上海)有限公司 Vehicle-mounted application software generation method, device, equipment and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22958454

Country of ref document: EP

Kind code of ref document: A1