EP2009547A2 - Method and apparatus for parallel XSL transformation with low contention and load balancing - Google Patents

Method and apparatus for parallel XSL transformation with low contention and load balancing

Info

Publication number
EP2009547A2
Authority
EP
European Patent Office
Prior art keywords
tasks
execution
stack
xsl
code
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP08251060A
Other languages
German (de)
English (en)
Other versions
EP2009547A3 (fr)
Inventor
Yuanbao Sun
Qi Zhang
Tianyou Li
Udi Kalekin
Howard Tsoi
Brendon Cahoon
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intel Corp
Original Assignee
Intel Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Intel Corp filed Critical Intel Corp
Publication of EP2009547A2
Publication of EP2009547A3

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/10 - Text processing
    • G06F40/12 - Use of codes for handling textual entities
    • G06F40/151 - Transformation
    • G06F40/154 - Tree transformation for tree-structured or markup documents, e.g. XSLT, XSL-FO or stylesheets
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00 - Arrangements for software engineering
    • G06F8/40 - Transformation of program code
    • G06F8/41 - Compilation
    • G06F8/45 - Exploiting coarse grain parallelism in compilation, i.e. parallelism between groups of instructions
    • G06F8/456 - Parallelism detection
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F40/00 - Handling natural language data
    • G06F40/10 - Text processing
    • G06F40/12 - Use of codes for handling textual entities
    • G06F40/131 - Fragmentation of text files, e.g. creating reusable text-blocks; Linking to fragments, e.g. using XInclude; Namespaces
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/48 - Program initiating; Program switching, e.g. by interrupt
    • G06F9/4806 - Task transfer initiation or dispatching
    • G06F9/4843 - Task transfer initiation or dispatching by program, e.g. task dispatcher, supervisor, operating system
    • G06F9/4881 - Scheduling strategies for dispatcher, e.g. round robin, multi-level priority queues
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505 - Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00 - Arrangements for program control, e.g. control units
    • G06F9/06 - Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46 - Multiprogramming arrangements
    • G06F9/50 - Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5083 - Techniques for rebalancing the load in a distributed system

Definitions

  • XML Stylesheet Language Transformation has become one of the most popular languages for processing and/or transforming XML documents in various application domains.
  • Extensible stylesheet language transformation is a language for transforming Extensible Markup Language (XML) documents into other documents.
  • An XSLT processor typically requires as inputs an Extensible Stylesheet Language (XSL) document and an input XML document.
  • an XSLT processor may transform the input XML document into another document.
  • the format of the resulting output document may be in XML or another format.
  • the resulting document may be formatted according to hypertext markup language (HTML) or it may be a plain text document.
  • XSLT typically does not enforce any execution order; that is, the instructions performed by an XSLT processor during the processing of an input XML document may be performed in an arbitrary order. However, executing XSLT may be costly in terms of time, memory and computing resources.
  • a data process is here, and generally, considered to be a self-consistent sequence of acts or operations on data leading to a desired result.
  • These include physical manipulations of physical quantities.
  • these quantities take the form of electrical or magnetic signals capable of being stored, transferred, combined, compared, and otherwise manipulated. It has proven convenient at times, principally for reasons of common usage, to refer to these signals as bits, values, elements, symbols, characters, terms, numbers or the like. It should be understood, however, that all of these and similar terms are to be associated with the appropriate physical quantities and are merely convenient labels applied to these quantities.
  • Embodiments of the present invention may include apparatuses for performing the operations herein.
  • This apparatus may be specially constructed for the desired purposes, or it may comprise a general purpose computer selectively activated or reconfigured by a computer program stored in the computer.
  • A computer program may be stored in a computer readable storage medium, such as, but not limited to, any type of disk including floppy disks, optical disks, CD-ROMs, magneto-optical disks, read-only memories (ROMs), random access memories (RAMs), electrically programmable read-only memories (EPROMs), electrically erasable and programmable read only memories (EEPROMs), magnetic or optical cards, or any other type of media suitable for storing electronic instructions, and capable of being coupled to a computer system bus.
  • parallel XSLT transformation may reduce time and memory consumption as well as possibly increase utilization of computing resources.
  • Fig. 1 shows an exemplary flow according to embodiments of the invention.
  • XSL document 110 may be used as input to XSLT compiler 120.
  • Compiler 120 may produce executable computer code 130 based on XSL document 110.
  • XML document 140 may be used as input to execution 150 which may be an executable instantiation of code 130.
  • Output document 160 may be the output of execution 150.
  • execution 150 may comprise a plurality of execution modules.
  • compiler 120 may parse input XSL document 110 and may further identify instructions, or groups of instructions, that may be combined together and executed separately from other instructions, for example, as a single task.
  • a task may comprise a set of instructions which may be executed separately, and independently from other instructions comprising document 110. Tasks may be executed simultaneously, in parallel, by multiple execution modules, e.g., by multiple threads of execution, or multiple, suitable, hardware modules.
  • Compiler 120 may further insert executable code into code 130 to transform instructions, or groups of instructions into separate tasks.
  • compiler 120 may detect or identify such separable or autonomous instructions and insert code into code 130 to transform these autonomous instructions into separate tasks.
  • Autonomous instructions may be instructions that do not rely on variables defined outside the code of those instructions, and may further have no flow dependency on other instructions.
  • An example of flow dependency may be the dependence of an instruction on output, or execution, of another instruction.
  • For example, instruction X may require as input the output of instruction Y. Executing instructions X and Y as independent tasks may result in both tasks, and consequently both instructions, being executed at the same time, for example, by two different threads running at the same time, or by two different hardware modules executing instructions X and Y simultaneously.
  • Because the output of instruction Y may be incomplete or unavailable before execution of instruction Y is completed, instruction X may be provided with invalid or incorrect input. Accordingly, instructions X and Y may be considered to be dependent, and not autonomous with respect to each other. It will be noted that instructions X and Y may still be autonomous with respect to other instructions.
  • Another example is a variable dependency: instruction A relies on a variable, for example C, that may be modified by a previous instruction, for example instruction B. In such a case, if instructions A and B were transformed into two separate tasks executed independently, for example by two separate execution modules, execution of instruction A might be provided with an incorrect value of variable C.
  • Accordingly, a mechanism is provided to ensure that dependent instructions are executed in a suitable order.
  • For example, embodiments of the present invention may require that instruction Y be executed before instruction X, or that the task executing instruction B complete its execution before the task executing instruction A begins execution.
  • Examples of autonomous instructions may be XSL instructions, such as but not limited to xsl:for-each and xsl:apply-templates, which may iterate over nodes in a node-set or node sequence of an XML document, and may further perform some instructions on each node. Because these instructions may be independent of each other, they may be transformed into tasks that may be executed independently, and possibly simultaneously.
  • compiler 120 may parse or examine document 110, locate instructions, and may further check these instructions for characteristics such as but not limited to, flow dependencies and/or variable dependencies.
  • Compiler 120 may group one or more instructions into a single task. For example, if dependencies are identified between several instructions, those instructions may be grouped into a single task; in the case of a number of instructions using the same variables, compiler 120 may group those instructions, together with the variables' definitions, into one task.
  • tasks may be nested.
  • a task may be created from within another task.
  • For example, an xsl:apply-templates instruction appearing inside an xsl:for-each instruction may yield tasks that are created from within the tasks created for the enclosing xsl:for-each instruction.
  • compiler 120 may also create continuation tasks.
  • A continuation task may perform actions such as, but not limited to, releasing allocated memory, manipulating a heap, releasing pointers, or any other, possibly sequential, actions which may be required.
  • For example, memory may be allocated for a template when an xsl:apply-templates instruction is first encountered, and context may need to be saved as well.
  • The xsl:apply-templates construct may be transformed into multiple tasks, which may in turn be executed by different execution modules; however, the allocated memory may need to be released and the saved context restored. Accordingly, such actions may be performed by a continuation task which is executed after the tasks implementing the xsl:apply-templates construct have terminated.
  • a global continuation task may be created for executing instructions which were not grouped into any task as well as other actions required.
  • a global continuation task may perform actions such as, for example, freeing memory allocated, restoring context, releasing pointers, and/or restoring a heap, as well as possibly executing instructions which were not grouped into any task.
  • Compiler 120 may elect to leave one or more instructions in a global continuation task, for example, light-weight instructions for which the overhead of task creation may be relatively high.
  • the global continuation task may be the last task to execute.
  • transforming an XML document may be performed by a plurality of execution modules.
  • The number of execution modules may be any suitable number, chosen, for example, to provide scalability.
  • the number of threads may be the number of processors of a multi-processor platform, or it may be any suitable number, for example, a number suitable for a specific multi-tasking operating system environment.
  • the code produced by compiler 120 may be embedded in hardware, in which case, the number of execution hardware modules may be chosen according to suitable considerations.
  • an execution module may own or otherwise be associated with, a task stack.
  • a task stack may contain one or a plurality of tasks to be executed.
  • An execution module may place tasks for execution in its stack; for example, tasks created by an execution module may be placed in the stack associated with that execution module.
  • An execution module may retrieve tasks from a stack.
  • an execution module may retrieve tasks from a stack associated with it and execute them.
  • an execution module may retrieve tasks from a task stack of another execution module.
  • An idle execution module may scan the stacks of other execution modules and, based on such scanning, may retrieve tasks to execute.
  • the decision of which stack to retrieve tasks from may be made, for example, based on a stack containing more than a predefined number of tasks, or another parameter.
  • In such a case, the execution module may retrieve one or more tasks from that other execution module's stack; for example, half of the tasks may be retrieved.
  • the execution module may further place the retrieved tasks in its own stack, and further, retrieve these tasks from its stack and execute them.
  • Enabling execution modules, in particular idle execution modules, to retrieve tasks from stacks of other execution modules may enable load-balanced execution, since the load of executing tasks may be shared by, or balanced across, a plurality of execution modules.
  • When an execution module executes code for the creation of tasks, it may create multiple tasks and a continuation task associated therewith. The execution module may further place the continuation task, and the tasks created, in its stack in reverse order. For example, an xsl:for-each construct which iterates over N nodes may yield N tasks. In such a case, an execution module may create N tasks, each possibly implementing an iteration of the xsl:for-each construct, as well as a continuation task. The continuation task may be placed first in the stack, followed by the first task, then the second task, and so on, with the Nth task placed last in the stack.
  • When an execution module retrieves tasks from its task stack, it may retrieve the last task placed in the stack first, e.g., in the example above, the Nth task may be retrieved first, possibly followed by the (N-1)th task, and so on.
  • the continuation task may be retrieved and executed last or after the multiple associated tasks. For example, in the case of iterative tasks, e.g., xsl:for-each and xsl:apply-templates, the continuation task may be retrieved and executed after all multiple associated tasks comprising the iterations have been executed.
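  • As a concrete illustration of the stack discipline described in the items above, the following Java sketch shows one way a per-module task stack could place a continuation task and its N associated tasks in reverse order, so that the owner pops the Nth task first and the continuation task last. The Task, ContinuationTask and TaskStack names, and the use of a deque, are assumptions made for illustration and are not taken from the patent.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Illustrative sketch only: the Task interface, ContinuationTask class and the
// deque-based stack are assumptions for illustration, not structures defined by
// the patent. Synchronization with stealing modules is omitted here for brevity.
interface Task {
    void run();
}

final class ContinuationTask implements Task {
    private final Runnable cleanup;   // e.g. release allocated memory, restore saved context

    ContinuationTask(Runnable cleanup) {
        this.cleanup = cleanup;
    }

    public void run() {
        cleanup.run();
    }
}

final class TaskStack {
    // The owning execution module pushes and pops at the head of the deque;
    // other modules may take work from the tail (see the stealing sketch below).
    final Deque<Task> deque = new ArrayDeque<>();

    // Place a continuation task and its N associated tasks in reverse order:
    // the continuation is pushed first, so the owner pops it last, after the
    // Nth, (N-1)th, ... and finally the 1st iteration task.
    void pushIterationTasks(List<Task> iterationTasks, ContinuationTask continuation) {
        deque.addFirst(continuation);
        for (Task t : iterationTasks) {
            deque.addFirst(t);        // task i ends up above task i-1 and above the continuation
        }
    }

    // Owner retrieval: the last task placed is retrieved first.
    Task popForOwner() {
        return deque.pollFirst();
    }
}
```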
  • an execution module may refrain from retrieving certain tasks from stacks associated with other execution modules. For example, in some embodiments of the invention, an execution module may refrain from taking a continuation task from the stack of another execution module, thereby ensuring that execution of continuation tasks may remain for execution by the execution module that created them. Leaving execution of continuation tasks to the execution module that created them may serve to reduce execution overhead and increase execution locality.
  • a continuation task may have context associated with it in the form of, for example, initialized variables, initialized pointers, allocated memory and the like. Allowing execution modules to retrieve continuation tasks may entail copying of context, which may be costly.
  • Conversely, refraining from retrieving continuation tasks created by other execution modules may increase locality of code execution, which may be desirable in order to increase processor cache hits, thereby increasing efficiency by reuse of variables, data, and/or instructions stored in the processor cache.
  • A counter may be associated with a continuation task, where the value of the counter may reflect the number of tasks that need to be executed before the continuation task may be executed. This counter may be initialized with the number of associated tasks upon creation of the continuation task and its associated tasks, and may be decremented for each associated task executed. In some embodiments of the invention, an execution module may verify that the counter value is zero before executing the continuation task.
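  • A minimal sketch of such a counter, assuming a Java AtomicInteger: the counter is initialized to the number of associated tasks, each completing task decrements it, and the continuation body runs only when it reaches zero. The class and method names are illustrative.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Illustrative sketch: a pending-task counter guarding a continuation task.
// It is initialized to the number of associated tasks and decremented as each
// one completes; the continuation body runs only when the counter reaches zero.
final class GuardedContinuation {
    private final AtomicInteger pending;
    private final Runnable continuationBody;

    GuardedContinuation(int associatedTaskCount, Runnable continuationBody) {
        this.pending = new AtomicInteger(associatedTaskCount);
        this.continuationBody = continuationBody;
    }

    // Called by whichever execution module finishes an associated task.
    void onAssociatedTaskCompleted() {
        if (pending.decrementAndGet() == 0) {
            continuationBody.run();   // all associated tasks have completed
        }
    }
}
```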
  • an execution module may retrieve more than one task from a stack of another execution module.
  • An execution module may retrieve a consecutive set of tasks, for example half of the tasks in another execution module's stack, and may further place the retrieved tasks in its own stack for execution.
  • retrieving a set of consecutive tasks may serve to increase execution code locality, and hence, efficiency, for example, due to the fact that multiple consecutive tasks retrieved may call for the same code to be executed, possibly increasing processor cache hits.
  • An execution module retrieving tasks from another execution module's stack may retrieve tasks from the bottom of the stack, namely, the tasks which may otherwise be executed last by the execution module that owns the stack. Retrieving tasks from the bottom of the stack may increase code locality for the execution module that owns the stack, since adjacent tasks in the stack may be likely to share the same execution code, and since the owner of the stack may be executing tasks from the top of the stack. In addition, retrieving multiple tasks may reduce the number of times execution modules need to retrieve tasks from stacks of other execution modules, thus possibly reducing the overhead associated with moving tasks from stack to stack. Retrieving multiple tasks may also decrease contention, since fewer retrieve attempts make execution modules less likely to compete for the same tasks.
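  • The sketch below, reusing the Task and ContinuationTask types from the earlier sketch, shows one possible way an idle module could take roughly half of another module's tasks from the bottom of that module's stack while leaving continuation tasks to their creator. The coarse per-stack lock is a simplification; the patent does not prescribe a particular synchronization scheme.

```java
import java.util.Deque;
import java.util.Iterator;

// Illustrative sketch of multiple-task stealing: walk the victim's stack from
// the bottom (the tasks its owner would execute last), skip continuation tasks,
// and move roughly half of the tasks to the thief's stack. The thief's stack is
// assumed to be accessed only by the thief at this point.
final class WorkStealing {
    static int stealHalf(Deque<Task> victim, Deque<Task> thief) {
        int stolen = 0;
        synchronized (victim) {                           // simplified locking
            int toTake = victim.size() / 2;
            Iterator<Task> fromBottom = victim.descendingIterator();
            while (stolen < toTake && fromBottom.hasNext()) {
                Task t = fromBottom.next();
                if (t instanceof ContinuationTask) {
                    continue;                             // continuations stay with their creator
                }
                fromBottom.remove();
                thief.addLast(t);                         // queue behind the thief's existing work
                stolen++;
            }
        }
        return stolen;
    }
}
```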
  • Such a hierarchy and task stack may be the result of an execution module executing code which calls for the creation of multiple tasks, for example, an xsl:for-each construct which iterates over n nodes.
  • the execution module may create n tasks A(1) to A(n) and a continuation task, A(cnt).
  • The execution module may further place tasks A(1) to A(n), and the continuation task A(cnt), in its task stack.
  • the execution module may further retrieve task A(1) and execute it.
  • task A(1) may contain an xsl:for-each construct which iterates over m nodes as well as code calling for the creation of m tasks implementing the xsl:for-each construct.
  • the execution module may create m tasks B(1) to B(m) and a continuation task B(cnt).
  • The execution module may further place tasks B(1) to B(m), and the continuation task B(cnt), in its task stack.
  • the execution module may further retrieve task B(1) and begin to execute it.
  • Task B(1) may contain code calling for the creation of another task as shown by task C(1) and its continuation task C(cnt).
  • Fig. 2A shows how created tasks under the above scenario may be placed in a task stack.
  • tasks may be placed in a stack in reverse order, namely, tasks created last may be executed first, and further, continuation tasks may be executed after all other tasks with which they may be associated have been extracted from the stack.
  • The execution module owning the stack may retrieve tasks from the top of the stack, while other execution modules may retrieve tasks from the bottom of the stack.
  • each XSLT instruction is executed in an implicit dynamic context. That context may include the context node, parameter and variable bindings, namespaces in scope and so on, as well as implementation-specific context information.
  • When an execution module creates a set of tasks, it may not need to copy the context information. Instead, the execution module may create a reference to the context and encapsulate this reference into the task. The context may be copied if another execution module retrieves the task. If the creating execution module is the one executing the task, the context need not be copied, insofar as the creating execution module may already have this context in its memory.
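  • A small sketch of this context handling, under the assumption of a Context object representing the dynamic context: a task stores only a reference to its creator's context and copies it just before execution only when the executing module differs from the creating module. All names here are illustrative.

```java
import java.util.function.Consumer;

// Illustrative sketch: a task carries a reference to its creator's dynamic
// context (context node, variable bindings, in-scope namespaces, ...); the
// context is copied only when a different execution module executes the task.
final class Context {
    // Dynamic context fields are elided for brevity.
    static Context copyOf(Context original) {
        return new Context();          // placeholder; a real copy would duplicate the fields
    }
}

final class ContextSensitiveTask {
    private final Object creatorModule;    // identity of the creating execution module
    private final Context creatorContext;  // reference only: nothing is copied at creation time
    private final Consumer<Context> body;  // the work to perform in that context

    ContextSensitiveTask(Object creatorModule, Context creatorContext, Consumer<Context> body) {
        this.creatorModule = creatorModule;
        this.creatorContext = creatorContext;
        this.body = body;
    }

    void runOn(Object executingModule) {
        Context context = (executingModule == creatorModule)
                ? creatorContext                    // same module: the context is already local
                : Context.copyOf(creatorContext);   // retrieved by another module: copy before use
        body.accept(context);
    }
}
```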
  • execution of XSLT instructions may depend on the content of, for example, XPath and/or variables.
  • A variable's content, as well as an XPath expression, may be computed by a sequence of XSLT instructions that may, in turn, contain complex instructions, as well as calls to the operating system, such as "document()" to open a file.
  • Such calls and computations may suspend the execution of an execution module. For example, accessing an external device may suspend execution until the access operation is complete.
  • compiler 120 may detect such scenarios. Compiler 120 may create separate tasks for instructions which may suspend execution and may further create a special synchronized continuation task.
  • a synchronized continuation task may depend on variables or XPath which may be computed by other tasks.
  • Before a synchronized continuation task is executed, an associated counter may be checked. This counter may be decremented for each task associated with the synchronized continuation task that completes execution, and when the counter value reaches zero, the synchronized continuation task may be executed.
  • Parallel transformation of an XML document as described above may require output serialization.
  • the output of multiple execution modules may need to be combined together in order to construct output document 160.
  • Combining multiple outputs of multiple execution modules may entail ordering the outputs, for example, according to input document 110.
  • each execution module may have output objects associated with it.
  • An execution module may designate an output object as the current output object and may further direct its output to the current output object.
  • An execution module retrieving tasks from another execution module's stack may create a copy of the other execution module's current output object, and may further link the newly created output object to the current output object of the execution module owning the stack from which tasks were retrieved.
  • the execution module may further designate the newly created output object as its current output object and direct output to it.
  • tasks may be nested within tasks, such that when an execution module retrieves tasks from another execution module's stack, it may determine whether the tasks retrieved are in the same level of nesting as the tasks executed by the execution module owning the stack or by another execution module that may have also retrieved tasks from that stack. If the nesting level is not the same, the execution module may create a task barrier. A task barrier may be used in order to group output of nesting levels.
  • a serialization process may comprise traversing the output objects list according to the links between them, and collecting the data associated with them.
  • the task barriers may be used by a serialization process in order to identify the output of nesting levels.
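  • The following sketch illustrates one way the linked output objects and task barriers described above could be represented and serialized: each node is either an output object holding buffered output or a task barrier marking a nesting-level boundary, and serialization walks the links in order, collecting the buffered data. The flat singly linked representation is a simplification of the structure shown in Fig. 3A to Fig. 3C.

```java
// Illustrative sketch: output objects and task barriers as a linked list that a
// serialization pass traverses in link order, collecting the buffered output.
abstract class OutputNode {
    OutputNode next;                                // link to the following node
}

final class TaskBarrier extends OutputNode { }      // delimits the output of a nesting level

final class OutputObject extends OutputNode {
    final StringBuilder data = new StringBuilder();

    void write(String fragment) {                   // an execution module directs its output here
        data.append(fragment);
    }
}

final class Serializer {
    static String serialize(OutputNode head) {
        StringBuilder out = new StringBuilder();
        for (OutputNode node = head; node != null; node = node.next) {
            if (node instanceof OutputObject) {
                out.append(((OutputObject) node).data);
            }
            // TaskBarrier nodes carry no data; they only group the output of a nesting level.
        }
        return out.toString();
    }
}
```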
  • Fig. 3A, Fig. 3B and Fig. 3C show an example of output objects and task barriers created according to some embodiments of the present invention.
  • execution module 310 may have a stack containing tasks.
  • Output object 310A may be the current output object of execution module 310.
  • Execution module 320 may have retrieved tasks from the stack of execution module 310.
  • Execution module 320 may have created task barrier 310B.
  • Execution module 320 may have further copied output object 310A to output object 320A and may have further designated output object 320A as its current output object.
  • Execution module 320 may have further linked output object 320A to output object 310A and to task barrier 310B.
  • execution module 330 may have retrieved tasks from the stack of execution module 310. Execution module 330 may have further copied output object 310A to output object 330A and may have further designated output object 330A as its current output object. Execution module 330 may have uncoupled the link between output object 320A and output object 310A. Execution module 330 may have further linked output object 330A to output object 310A and linked output object 330A to output object 320A.
  • Execution module 340 may have retrieved tasks from the stack of execution module 310. Execution module 340 may have detected that the nesting level of the tasks it retrieved is different from the nesting level of the tasks retrieved by execution module 320 and execution module 330. Accordingly, execution module 340 may have created task barrier 310C. Execution module 340 may have copied output object 310A to output object 340A. Execution module 340 may have further uncoupled the link between output object 330A and output object 310A. Execution module 340 may have further linked output object 310A to output object 340A, linked output object 340A to task barrier 310C, and linked task barrier 310C to output object 330A.
  • Fig. 4A shows exemplary pseudo code implementing the main loop of an execution module.
  • An execution module may continue to retrieve tasks from its own stack, and if the execution module's stack is empty, it may scan other execution modules' stacks. If tasks are found in another execution module's stack, the execution module may retrieve some of them, place them in its own stack and execute them. It should be noted that not all procedures or details are shown by the pseudo code depicted in Fig. 4A. In addition, it should be noted that although a single task may be retrieved by the pseudo code shown, the number of tasks retrieved may be predefined or dynamically computed by an execution module.
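  • A sketch of such a main loop is given below, reusing the Task type and the stealHalf() helper from the earlier sketches: the module drains its own stack and, when the stack is empty, scans the other modules' stacks and steals some of their work. Termination detection is deliberately simplified and is an assumption, since the pseudo code of Fig. 4A is not reproduced on this page.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.List;

// Illustrative sketch of an execution module's main loop: execute tasks from the
// module's own stack and, when it is empty, scan the other modules' stacks and
// steal some of their tasks. Termination handling is deliberately simplified.
final class ExecutionModule implements Runnable {
    final Deque<Task> stack = new ArrayDeque<>();
    private final List<ExecutionModule> allModules;

    ExecutionModule(List<ExecutionModule> allModules) {
        this.allModules = allModules;
    }

    public void run() {
        while (true) {
            Task task;
            synchronized (stack) {                   // simplified locking, matching stealHalf()
                task = stack.pollFirst();            // own stack first, top of stack
            }
            if (task != null) {
                task.run();
                continue;
            }
            boolean stoleWork = false;
            for (ExecutionModule other : allModules) {   // scan the other modules' stacks
                if (other != this && WorkStealing.stealHalf(other.stack, stack) > 0) {
                    stoleWork = true;
                    break;
                }
            }
            if (!stoleWork) {
                return;                              // no local work and nothing to steal: stop
            }
        }
    }
}
```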
  • Fig. 4B shows exemplary pseudo code implementing retrieval of tasks from another execution module's stack.
  • Fig. 4C shows exemplary pseudo code implementing creation of multiple tasks from an xsl:apply-templates construct or an xsl:for-each construct, as well as execution of the tasks created.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computational Linguistics (AREA)
  • General Health & Medical Sciences (AREA)
  • Devices For Executing Special Programs (AREA)
  • Multi Processors (AREA)
EP08251060A 2007-06-26 2008-03-26 Method and apparatus for parallel XSL transformation with low contention and load balancing Withdrawn EP2009547A3 (fr)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US11/819,217 US20090007115A1 (en) 2007-06-26 2007-06-26 Method and apparatus for parallel XSL transformation with low contention and load balancing

Publications (2)

Publication Number Publication Date
EP2009547A2 true EP2009547A2 (fr) 2008-12-31
EP2009547A3 EP2009547A3 (fr) 2012-10-31

Family

ID=39790143

Family Applications (1)

Application Number Title Priority Date Filing Date
EP08251060A Withdrawn EP2009547A3 (fr) 2007-06-26 2008-03-26 Procédé et appareil pour la transformation XSL parallèle à faible contention et équilibrage de charge

Country Status (3)

Country Link
US (1) US20090007115A1 (fr)
EP (1) EP2009547A3 (fr)
CN (1) CN101350007B (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11494867B2 (en) * 2019-06-21 2022-11-08 Intel Corporation Asynchronous execution mechanism

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090094606A1 (en) * 2007-10-04 2009-04-09 National Chung Cheng University Method for fast XSL transformation on multithreaded environment
CN102004631A (zh) * 2010-10-19 2011-04-06 北京红旗中文贰仟软件技术有限公司 Method and device for processing information documents
CN102622334B (zh) * 2012-04-20 2014-04-16 北京信息科技大学 Method and device for parallel XSLT transformation in a multithreaded environment
US9098558B2 (en) * 2013-04-01 2015-08-04 Oracle International Corporation Enhanced flexibility for users to transform XML data to a desired format
US10860347B1 (en) 2016-06-27 2020-12-08 Amazon Technologies, Inc. Virtual machine with multiple content processes

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7590644B2 (en) * 1999-12-21 2009-09-15 International Business Machines Corporation Method and apparatus of streaming data transformation using code generator and translator
US6772413B2 (en) * 1999-12-21 2004-08-03 Datapower Technology, Inc. Method and apparatus of data exchange using runtime code generator and translator
US6708331B1 (en) * 2000-05-03 2004-03-16 Leon Schwartz Method for automatic parallelization of software
US6874141B1 (en) * 2000-06-29 2005-03-29 Microsoft Corporation Method of compiling schema mapping
JP2002116917A (ja) * 2000-10-05 2002-04-19 Fujitsu Ltd Compiler for compiling a source program written in an object-oriented programming language
US20050086584A1 (en) * 2001-07-09 2005-04-21 Microsoft Corporation XSL transform
GB2381340A (en) * 2001-10-27 2003-04-30 Hewlett Packard Co Document generation in a distributed information network
US6996781B1 (en) * 2001-10-31 2006-02-07 Qcorps Residential, Inc. System and method for generating XSL transformation documents
US6908034B2 (en) * 2001-12-17 2005-06-21 Zih Corp. XML system
US7502996B2 (en) * 2002-02-21 2009-03-10 Bea Systems, Inc. System and method for fast XSL transformation
US20040123280A1 (en) * 2002-12-19 2004-06-24 Doshi Gautam B. Dependence compensation for sparse computations
US7209925B2 (en) * 2003-08-25 2007-04-24 International Business Machines Corporation Method, system, and article of manufacture for parallel processing and serial loading of hierarchical data
US7458022B2 (en) * 2003-10-22 2008-11-25 Intel Corporation Hardware/software partition for high performance structured data transformation
US7614052B2 (en) * 2004-01-09 2009-11-03 Nexaweb Technologies Inc. System and method for developing and deploying computer applications over a network
CN100557601C (zh) * 2004-12-29 2009-11-04 复旦大学 Method for fast execution of extensible stylesheet language transformation
US20060265712A1 (en) * 2005-05-18 2006-11-23 Docomo Communications Laboratories Usa, Inc. Methods for supporting intra-document parallelism in XSLT processing on devices with multiple processors
JP4899476B2 (ja) * 2005-12-28 2012-03-21 富士通株式会社 Division program, linking program, and information processing method
US20080127146A1 (en) * 2006-09-06 2008-05-29 Shih-Wei Liao System and method for generating object code for map-reduce idioms in multiprocessor systems
GB2443277B (en) * 2006-10-24 2011-05-18 Advanced Risc Mach Ltd Performing diagnostics operations upon an asymmetric multiprocessor apparatus
US8516459B2 (en) * 2008-03-25 2013-08-20 Intel Corporation XSLT-specific XJIT compiler

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KELLY P M ET AL: "Distributed, Parallel Web Service Orchestration Using XSLT", E-SCIENCE AND GRID COMPUTING, FIRST INTERNATIONAL CONFERENCE ON PITTSBURG, PA, USA 05-08 DEC. 2005, PISCATAWAY, NJ, USA,IEEE, 5 December 2005 (2005-12-05), XP010874753, DOI: 10.1109/E-SCIENCE.2005.34 ISBN: 978-0-7695-2448-1 *
World Wide Web Consortium (W3C): "XSL Transformations (XSLT) Version 2.0 - W3C Working Draft 4 April 2005", 4 April 2005 (2005-04-04), pages 1-180, XP055028208, Retrieved from the Internet: URL:http://www.w3.org/TR/2005/WD-xslt20-20050404/ [retrieved on 2012-05-25] *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11494867B2 (en) * 2019-06-21 2022-11-08 Intel Corporation Asynchronous execution mechanism

Also Published As

Publication number Publication date
CN101350007A (zh) 2009-01-21
US20090007115A1 (en) 2009-01-01
CN101350007B (zh) 2011-12-14
EP2009547A3 (fr) 2012-10-31

Similar Documents

Publication Publication Date Title
Abadi et al. Tensorflow: Large-scale machine learning on heterogeneous distributed systems
Vázquez et al. A new approach for sparse matrix vector product on NVIDIA GPUs
CN101535950B (zh) Method and system for transaction application ordering and contention management
US6446258B1 (en) Interactive instruction scheduling and block ordering
US20130226944A1 (en) Format independent data transformation
CN101681292B (zh) Method for parallelizing a sequential framework using transactions
EP2815313B1 (fr) Rasterization of computer shading systems
US10970130B2 (en) Composable and cancelable dataflow continuation passing
US8843920B2 (en) Systems and methods for deferring software implementation decisions until load time
EP2009547A2 (fr) Method and apparatus for parallel XSL transformation with low contention and load balancing
Habermaier et al. On the correctness of the SIMT execution model of GPUs
US20090204953A1 (en) Transforming data structures between different programming languages
Nicolau Loop quantization: A generalized loop unwinding technique
US8490115B2 (en) Ambient state for asynchronous methods
Gijsbers et al. An efficient scalable runtime system for macro data flow processing using S-Net
Pieper et al. Structured stream parallelism for rust
Henrio et al. Active objects with deterministic behaviour
US8527962B2 (en) Promotion of a child procedure in heterogeneous architecture software
Blindell et al. Synthesizing code for GPGPUs from abstract formal models
Utture et al. Efficient lock‐step synchronization in task‐parallel languages
Adkisson et al. A shell-like model for general purpose programming
Baudisch et al. Reducing the communication of message-passing systems synthesized from synchronous programs
Bock Parallel spreadsheet evaluation and dynamic cycle detection
Abbas et al. LEARN. NET WITH PROGRAMMING (3-in-1): Covers. NET using C#, Visual Basic ASP. NET
Pereira Evaluation of Classical Data Structures in the Java Collections Framework

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20080326

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA MK RS

RIC1 Information provided on ipc code assigned before grant

Ipc: G06F 17/22 20060101ALI20120604BHEP

Ipc: G06F 9/45 20060101AFI20120604BHEP

Ipc: G06F 9/50 20060101ALI20120604BHEP

Ipc: G06F 9/48 20060101ALI20120604BHEP

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA MK RS

RIC1 Information provided on ipc code assigned before grant

Ipc: G06F 9/50 20060101ALI20120921BHEP

Ipc: G06F 17/22 20060101ALI20120921BHEP

Ipc: G06F 9/48 20060101ALI20120921BHEP

Ipc: G06F 9/45 20060101AFI20120921BHEP

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20130520