WO2019134084A1 - Code execution method, apparatus, terminal device and computer-readable storage medium (代码执行方法、装置、终端设备及计算机可读存储介质) - Google Patents


Info

Publication number
WO2019134084A1
WO2019134084A1 (PCT/CN2018/071304; CN2018071304W)
Authority
WO
WIPO (PCT)
Prior art keywords
current
code
executed
processing
task
Prior art date
Application number
PCT/CN2018/071304
Other languages
English (en)
French (fr)
Inventor
刘二谋
Original Assignee
深圳市天软科技开发有限公司
Priority date
Filing date
Publication date
Application filed by 深圳市天软科技开发有限公司
Priority to PCT/CN2018/071304 (WO2019134084A1)
Priority to US16/959,815 (US11372633B2)
Publication of WO2019134084A1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/40Transformation of program code
    • G06F8/41Compilation
    • G06F8/45Exploiting coarse grain parallelism in compilation, i.e. parallelism between groups of instructions
    • G06F8/451Code distribution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/505Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering the load
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5061Partitioning or combining of resources
    • G06F9/5072Grid computing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/52Program synchronisation; Mutual exclusion, e.g. by means of semaphores
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F2209/00Indexing scheme relating to G06F9/00
    • G06F2209/50Indexing scheme relating to G06F9/50
    • G06F2209/5021Priority

Definitions

  • the present application belongs to the field of computer technologies, and in particular, to a code execution method, apparatus, terminal device, and computer readable storage medium.
  • the embodiments of the present application provide a code execution method, apparatus, terminal device, and computer readable storage medium, to solve the prior-art problem that processing code involving a large number of complex operations often takes considerable time and suffers from low processing efficiency.
  • a first aspect of the embodiments of the present application provides a code execution method, which may include:
  • the to-be-processed task indicated by the current code statement to be executed is distributed to a preset grid computing system for multi-thread parallel processing.
  • the grid computing system includes more than two computing nodes;
  • the method may further include:
  • if the current processing state of the input variable is the completed state, the step of determining whether a preset parallel processing identifier exists in the current code statement to be executed is performed.
  • the distributing the to-be-processed task indicated by the current to-be-executed code statement to the preset grid computing system for multi-thread parallel processing may include:
  • the to-be-processed task is distributed to the computing node with the largest number of processable tasks for processing.
  • the distributing the to-be-processed task indicated by the currently-executed code statement to the preset grid computing system for multi-thread parallel processing may further include:
  • the priority is positively correlated with the number of processable tasks and positively correlated with the reliability.
  • code execution method may further include:
  • the task that is not processed by the computing node whose current processing state is abnormal is transferred to the computing node whose current processing state is normal to continue processing.
  • a second aspect of the embodiments of the present application provides a code execution apparatus, which may include:
  • a parallel processing identifier determining module configured to determine, in the execution of the target code, whether a preset parallel processing identifier exists in the currently executed code statement;
  • a task distribution module configured to: if the parallel processing identifier exists in the current code statement to be executed, distribute the to-be-processed task indicated by that statement to a preset grid computing system for multi-thread parallel processing, where the grid computing system includes more than two computing nodes;
  • the current to-be-executed statement determining module is configured to determine a next code statement after the current code statement to be executed as a new current code statement to be executed.
  • code execution device may further include:
  • a grid computing type judging module configured to determine whether there is an input variable of a preset grid computing type in the current code statement to be executed, where the grid computing type is a data type that requires multi-thread parallel processing in the grid computing system;
  • a variable processing state obtaining module configured to acquire the current processing state of the input variable if there is an input variable of the grid computing type in the current code statement to be executed.
  • the task distribution module may include:
  • a current task number obtaining unit configured to respectively acquire current task numbers of each computing node in the grid computing system
  • a processable number calculation unit configured to calculate, according to a current task number of each of the computing nodes and a preset task number threshold of each of the computing nodes, a number of processable tasks of the respective computing nodes;
  • a first distribution unit configured to distribute the to-be-processed task to the computing node with the largest number of processable tasks for processing.
  • the task distribution module may further include:
  • a processing record obtaining module configured to separately acquire historical task processing records of each of the computing nodes in a preset statistical time period;
  • a duration calculation module configured to calculate, according to the historical task processing records, a first duration during which the processing state of each computing node is normal and a second duration during which the processing state is abnormal;
  • a reliability calculation module configured to calculate the reliability of each of the computing nodes according to the first duration and the second duration, the reliability being positively correlated with the first duration and negatively correlated with the second duration;
  • a priority calculation module configured to calculate the priority of each of the computing nodes according to the number of processable tasks and the reliability, the priority being positively correlated with the number of processable tasks and positively correlated with the reliability;
  • a second distribution unit configured to distribute the to-be-processed task to the computing node with the highest priority for processing.
  • code execution device may further include:
  • a node processing state obtaining module configured to acquire a current processing state of each computing node in the grid computing system
  • a task transfer module configured to, if there is a computing node whose current processing state is abnormal in the grid computing system, transfer the unprocessed tasks of that computing node to a computing node whose current processing state is normal to continue processing.
  • a third aspect of the embodiments of the present application provides a code execution terminal device including a memory, a processor, and a computer program stored in the memory and operable on the processor, wherein the processor implements the steps of any of the above code execution methods when executing the computer program.
  • a fourth aspect of the embodiments of the present application provides a computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of any of the above code execution methods.
  • the embodiment of the present application has the beneficial effects that: the embodiment of the present application provides an identifier for performing parallel processing, and the user may use the parallel processing identifier to identify a code statement that needs to be processed in parallel.
  • if the parallel processing identifier exists in the current code statement to be executed, the to-be-processed task indicated by that statement is distributed to the preset grid computing system for multi-thread parallel processing.
  • while that statement is being processed, execution can proceed to the next code statement.
  • in this way, the computing power of each computing node in the grid computing system can be fully utilized and tasks processed in parallel, greatly shortening code execution time and improving processing efficiency.
  • FIG. 1 is a schematic flowchart of a code execution method according to Embodiment 1 of the present application.
  • FIG. 2 is a schematic flow chart of a task distribution process
  • FIG. 3 is a schematic flowchart of a code execution method according to Embodiment 2 of the present application.
  • FIG. 4 is a schematic diagram of a specific example of parallel grid computing
  • FIG. 5 is a schematic block diagram of a code execution apparatus according to Embodiment 3 of the present application.
  • FIG. 6 is a schematic block diagram of a code execution terminal device provided by an embodiment of the present application.
  • This application builds support for parallel grid computing into the kernel of the language and provides operators or keywords for parallel grid computing, so that code can be designated to run in parallel; the data of the variables used in the designated code, together with the parallel code itself, automatically forms a parallel grid computing task.
  • Embodiment 1:
  • As shown in FIG. 1, which is a schematic flowchart of a code execution method provided by an embodiment of the present application, the method may include:
  • Step S101 During the execution of the target code, determine whether there is a preset parallel processing identifier in the current code statement to be executed.
  • the parallel processing identifier may be composed of specified symbols and/or keywords, for example, the symbol "#" may be selected as the parallel processing identifier.
  • If the parallel processing identifier does not exist in the current code statement to be executed, step S102 and step S104 are performed; if the parallel processing identifier exists in the current code statement to be executed, the statement needs to be processed in parallel, and step S103 and step S104 are performed.
  • Step S102 executing the current code statement to be executed according to a serial processing manner.
  • in the serial processing manner, the next code statement is executed only after the current code statement to be executed has finished executing and returned its result.
  • Step S103 Distribute the to-be-processed task indicated by the current code statement to be executed to a preset grid computing system for multi-thread parallel processing.
  • the task distribution and the return of results may be performed by a preset task distributor, which may be implemented as a single machine with multiple threads or as multiple machines performing parallel grid computation over a network.
  • the task distributor needs to maintain load balancing across the computing nodes during task distribution. For example, the current task number of each computing node in the grid computing system may be obtained; the number of processable tasks of each node is then calculated from its current task number and its preset task number threshold; finally, the to-be-processed task is distributed to the computing node with the largest number of processable tasks for processing.
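  • As an illustration only (the patent specifies no concrete algorithm or language), the capacity-based rule above might be sketched in Python as follows; the node names, task counts, and thresholds are invented for the example.

```python
# Hypothetical sketch of capacity-based task distribution:
# a node's processable-task count is its threshold minus its current load,
# and the task goes to the node with the largest remaining capacity.

def processable_tasks(current_tasks, threshold):
    """Number of additional tasks a node can still accept."""
    return threshold - current_tasks

def pick_node(nodes):
    """nodes maps node name -> (current_task_count, task_number_threshold)."""
    return max(nodes, key=lambda name: processable_tasks(*nodes[name]))

nodes = {
    "node-a": (8, 10),  # 2 slots free
    "node-b": (3, 12),  # 9 slots free
    "node-c": (5, 6),   # 1 slot free
}
print(pick_node(nodes))  # -> node-b
```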
  • the algorithm shown in FIG. 2 can be used to implement load balancing during task distribution:
  • Step S1031 respectively acquiring the current task number of each computing node in the grid computing system.
  • the current number of tasks includes the number of tasks being processed and the number of tasks waiting to be processed.
  • Step S1032 Calculate the number of processable tasks of each computing node according to its current task number and its preset task number threshold.
  • the task number threshold may be determined by the computing power of each computing node, the two being positively correlated: the faster the CPU of a computing node, the higher its task number threshold; the slower the CPU, the lower the threshold.
  • the number of processable tasks of a computing node may be the difference between its task number threshold and its current task number.
  • Step S1033 Obtain historical task processing records of the computing nodes in a preset statistical time period.
  • the statistical time period can be set according to the actual situation.
  • for example, the historical processing records of the previous month, the previous week, or the previous day can be used for the statistics, which is not specifically limited in this embodiment.
  • Step S1034 Calculate, according to the historical task processing records, a first duration during which the processing state of each computing node is normal and a second duration during which the processing state is abnormal.
  • an abnormal processing state means that the task distributor has received abnormal feedback from a computing node or that the computing node has timed out without responding.
  • Step S1035 Calculate reliability of each computing node according to the first duration and the second duration.
  • the reliability is used to characterize the stability of the computing node in the calculation process.
  • the reliability of a computing node is positively correlated with its first duration and negatively correlated with its second duration.
  • the reliability may be calculated as the ratio of the first duration to the total duration, where the total duration is the sum of the first duration and the second duration.
  • Step S1036 Calculate the priority of each computing node according to its number of processable tasks and its reliability.
  • the priority is positively correlated with the number of processable tasks and positively correlated with the reliability.
  • for example, a first coefficient corresponding to the number of processable tasks and a second coefficient corresponding to the reliability may be set; the first product of the number of processable tasks and the first coefficient, and the second product of the reliability and the second coefficient, are calculated, and then their sum; the greater the sum, the higher the priority.
  • Step S1037 Distribute the to-be-processed task to the computing node with the highest priority for processing.
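  • A minimal sketch of steps S1033 to S1037, assuming the ratio-based reliability and the weighted-sum priority described above; the coefficients and node statistics are invented for demonstration.

```python
# Hedged illustration: reliability = normal time / total time (S1035);
# priority = k1 * processable_tasks + k2 * reliability (S1036);
# the task goes to the highest-priority node (S1037).

def reliability(first_duration, second_duration):
    total = first_duration + second_duration
    return first_duration / total if total else 0.0

def priority(processable_tasks, rel, k1=1.0, k2=10.0):
    # Positively correlated with both inputs, as the description requires.
    return k1 * processable_tasks + k2 * rel

# name -> (processable task count, hours in normal state, hours in abnormal state)
nodes = {
    "node-a": (4, 95.0, 5.0),   # very stable, less spare capacity
    "node-b": (6, 60.0, 40.0),  # more capacity, less reliable
}
best = max(nodes, key=lambda n: priority(nodes[n][0],
                                         reliability(nodes[n][1], nodes[n][2])))
print(best)  # -> node-a
```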
  • Step S104 Determine the next code statement after the current code statement to be executed as a new current code statement to be executed.
  • step S101 After determining the new current code statement to be executed, the process returns to step S101 until the execution of the target code is completed.
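  • The overall loop S101 to S104 can be sketched as follows, assuming (purely for illustration) that statements are strings, that "#" is the parallel processing identifier, and that a thread pool stands in for the grid computing system and its task distributor.

```python
from concurrent.futures import ThreadPoolExecutor

grid = ThreadPoolExecutor(max_workers=4)  # stand-in for the grid system

def run(statements, execute):
    futures = []
    for stmt in statements:              # S104: move on to the next statement
        if stmt.startswith("#"):         # S101: parallel identifier present?
            # S103: distribute the task and continue without waiting
            futures.append(grid.submit(execute, stmt[1:]))
        else:
            execute(stmt)                # S102: serial execution
    return [f.result() for f in futures]

results = run(["#heavy(1)", "light(2)", "#heavy(3)"],
              execute=lambda s: "done:" + s)
print(results)  # -> ['done:heavy(1)', 'done:heavy(3)']
```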
  • the application can be applied to large-scale distributed computing, implemented in the interpreter of a scripting language or the compiler of a compiled language, and used for parallel computing on a single machine as well as parallel grid computing across multiple machines.
  • for a scripting language, the source code of the script and the required variables can be submitted directly as a task.
  • for a compiled language, the code that needs to run in parallel can be compiled into dispatchable binary code and submitted together with its data as a task, or a JIT mode can be used to submit the code to be compiled together with its data as a task.
  • a unified function source center is provided for parallel grid computing: the subroutines called in parallel grid computing are obtained through this unified source center, and when their source code changes, the modules performing parallel grid computing are notified in time.
  • development is very convenient: from the developer's point of view there is no need to understand task decomposition, there is no dependency on a parallel method library, and the approach can be applied to the development of arbitrary, unspecified parallel grid methods.
  • the decomposed grid tasks need no deployment; they can be used at any time, modified at any time, and run at any time.
  • the embodiment of the present application provides an identifier for performing parallel processing.
  • the user may use the parallel processing identifier to identify a code statement that needs to be processed in parallel.
  • if the parallel processing identifier exists in a code statement, the to-be-processed task indicated by that statement is distributed to a preset grid computing system for multi-thread parallel processing, and while that statement is being processed, the next code statement can be executed.
  • when processing code involving a large number of complex operations, the computing power of each computing node in the grid computing system can be fully utilized and the tasks processed in parallel, greatly shortening code execution time and improving processing efficiency.
  • Embodiment 2:
  • As shown in FIG. 3, which is a schematic flowchart of a code execution method provided by an embodiment of the present application, the method may include:
  • Step S301 During the execution of the target code, determine whether there is an input variable of a preset grid computing type in the current code statement to be executed.
  • the grid computing type is a data type that requires multi-thread parallel processing in the grid computing system.
  • a grid computing data type is added to the data types of the language to store the state of a parallel computation.
  • when the language accesses this data type, for example for assignment, operation, or serialization, a blocking wait occurs if the parallel computation has not yet finished.
  • the data records whether the parallel grid computation has ended; once the parallel computation ends, its return value is recorded in the data type.
  • if there is an input variable of the grid computing type in the current code statement to be executed, step S302 and its subsequent steps are performed; if there is no input variable of the grid computing type, step S305 and its subsequent steps are performed.
  • Step S302 Acquire a current processing state of the input variable.
  • the processing state of a grid-computing-type variable includes a completed state and an incomplete state, wherein the incomplete state includes the waiting-for-submission, submitted, and error states.
  • the processing state of a grid-computing-type variable is used to describe the state of the parallel task; in the submitted state it records the information necessary for executing the task, which is used for task termination, resubmission, and the like.
  • Step S303 Determine whether the current processing state of the input variable is an incomplete state.
  • step S304 If the current processing state of the input variable is an incomplete state, step S304 and subsequent steps are performed. If the current processing state of the input variable is a completed state, step S305 and subsequent steps are performed.
  • Step S304 waiting for a preset duration.
  • the duration can be set according to the actual situation, and in particular, it can also be set to 0, that is, no waiting.
  • step S302 After waiting for the duration, the process returns to step S302 until the current processing state of the input variable is the completion state.
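  • The blocking behaviour of steps S302 to S304 can be sketched with a future-like placeholder; the class and method names here are assumptions, and Python's threading.Event supplies the wait.

```python
import threading

class GridValue:
    """Hypothetical grid-computing-type variable: reading its value blocks
    until the parallel task reaches the completed state."""
    def __init__(self):
        self._done = threading.Event()
        self._result = None

    def complete(self, result):
        # Called by the grid system when the parallel task finishes.
        self._result = result
        self._done.set()

    def get(self):
        # S302-S304: wait until the processing state is "completed".
        self._done.wait()
        return self._result

v = GridValue()
threading.Timer(0.05, v.complete, args=(42,)).start()  # simulated grid node
print(v.get())  # -> 42 (blocks ~50 ms first)
```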
  • Step S305 Determine whether there is a preset parallel processing identifier in the current code statement to be executed.
  • if the parallel processing identifier does not exist in the current code statement to be executed, step S306 and step S308 are performed; if the parallel processing identifier exists in the current code statement to be executed, step S307 and step S308 are performed.
  • Step S306 executing the current code statement to be executed according to a serial processing manner.
  • Step S307 Distribute the to-be-processed task indicated by the current code statement to be executed to a preset grid computing system for multi-thread parallel processing.
  • Step S308 determining a next code statement after the current code statement to be executed as a new current code statement to be executed.
  • step S301 After determining the new current code statement to be executed, the process returns to step S301 until the execution of the target code is completed.
  • the steps S305 to S308 are the same as the steps S101 to S104 in the first embodiment. For details, refer to the description in the first embodiment, and the details are not described herein again.
  • in this example, each iteration is independent of the result of the previous grid computation, so the parallel grid computations can be performed independently of one another; a dedicated parallel-for construct is not required, since using the parallel processing identifier makes a traditional for loop do the work of a parallel for.
  • the "#" submits the data contained in the variable i used by subProc, together with the code of the subProc call, as a task to the task distributor, and stores the resulting grid-computing-type value into the array c.
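  • Rendered in Python for illustration (subProc, the loop bounds, and the thread pool standing in for the task distributor are all assumptions), the marked loop behaves like a parallel for: each iteration is submitted as a task and a placeholder is stored in c.

```python
from concurrent.futures import ThreadPoolExecutor

def subProc(i):
    # Hypothetical subroutine whose code and input data form one task.
    return i * i

grid = ThreadPoolExecutor(max_workers=4)  # stand-in for the task distributor

c = []
for i in range(5):
    # The "#" in the source marks this call for parallel submission;
    # here grid.submit plays that role and returns the placeholder.
    c.append(grid.submit(subProc, i))

print([f.result() for f in c])  # -> [0, 1, 4, 9, 16]
```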
  • the code execution method may further include a process of exception handling, allowing a transfer of the task once an error occurs in the execution unit of the task.
  • the current processing state of each computing node in the grid computing system may be obtained; if there is a computing node whose current processing state is abnormal in the grid computing system, the unprocessed tasks of that node are transferred to a computing node whose current processing state is normal to continue processing.
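  • A sketch of this failover rule under an invented node/task model: tasks still pending on an abnormal node are moved to a normal one.

```python
def rebalance(node_states, pending):
    """node_states: name -> 'normal' or 'abnormal';
    pending: name -> list of unfinished tasks on that node.
    Moves the pending tasks of abnormal nodes onto a normal node."""
    normal = [n for n, s in node_states.items() if s == "normal"]
    if not normal:
        raise RuntimeError("no computing node with a normal state")
    target = normal[0]
    for name, state in node_states.items():
        if state == "abnormal" and pending.get(name):
            pending[target].extend(pending[name])
            pending[name] = []
    return pending

states = {"node-a": "abnormal", "node-b": "normal"}
tasks = {"node-a": ["t1", "t2"], "node-b": ["t3"]}
print(rebalance(states, tasks))  # -> {'node-a': [], 'node-b': ['t3', 't1', 't2']}
```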
  • a task management mechanism is also provided: the status of parallel grid task assignment can be viewed on the management side, and the administrator is allowed to terminate tasks.
  • the embodiment of the present application adds automatic blocking waits when parallel grid resources are accessed, and a processing mechanism that does not block when parallel grid resources are not accessed; this design supports parallel grid nesting, and the number of parallel grids is unlimited.
  • Embodiment 3:
  • As shown in FIG. 5, which is a schematic block diagram of a code execution apparatus provided by an embodiment of the present application, the apparatus may include:
  • the parallel processing identifier determining module 501 is configured to determine whether a preset parallel processing identifier exists in the current code statement to be executed during execution of the target code;
  • a task distribution module 502 configured to: if the parallel processing identifier exists in the current code statement to be executed, distribute the to-be-processed task indicated by that statement to a preset grid computing system for multi-thread parallel processing, where the grid computing system includes more than two computing nodes;
  • the current to-be-executed statement determining module 503 is configured to determine a next code statement after the current code statement to be executed as a new current code statement to be executed.
  • code execution device may further include:
  • a grid computing type judging module configured to determine whether there is an input variable of a preset grid computing type in the current code statement to be executed, where the grid computing type is a data type that requires multi-thread parallel processing in the grid computing system;
  • a variable processing state obtaining module configured to acquire the current processing state of the input variable if there is an input variable of the grid computing type in the current code statement to be executed.
  • the task distribution module may include:
  • a current task number obtaining unit configured to respectively acquire current task numbers of each computing node in the grid computing system
  • a processable number calculation unit configured to calculate, according to a current task number of each of the computing nodes and a preset task number threshold of each of the computing nodes, a number of processable tasks of the respective computing nodes;
  • a first distribution unit configured to distribute the to-be-processed task to the computing node with the largest number of processable tasks for processing.
  • the task distribution module may further include:
  • a processing record obtaining module configured to separately acquire historical task processing records of each of the computing nodes in a preset statistical time period;
  • a duration calculation module configured to calculate, according to the historical task processing records, a first duration during which the processing state of each computing node is normal and a second duration during which the processing state is abnormal;
  • a reliability calculation module configured to calculate the reliability of each of the computing nodes according to the first duration and the second duration, the reliability being positively correlated with the first duration and negatively correlated with the second duration;
  • a priority calculation module configured to calculate the priority of each of the computing nodes according to the number of processable tasks and the reliability, the priority being positively correlated with the number of processable tasks and positively correlated with the reliability;
  • a second distribution unit configured to distribute the to-be-processed task to the computing node with the highest priority for processing.
  • code execution device may further include:
  • a node processing state obtaining module configured to acquire a current processing state of each computing node in the grid computing system
  • a task transfer module configured to, if there is a computing node whose current processing state is abnormal in the grid computing system, transfer the unprocessed tasks of that computing node to a computing node whose current processing state is normal to continue processing.
  • FIG. 6 is a schematic block diagram of a code execution terminal device according to an embodiment of the present application.
  • the code execution terminal device 6 of this embodiment includes a processor 60, a memory 61, and a computer program 62 stored in the memory 61 and operable on the processor 60.
  • When the processor 60 executes the computer program 62, the steps in the foregoing code execution method embodiments are implemented, such as steps S101 to S104 shown in FIG. 1.
  • Alternatively, when the processor 60 executes the computer program 62, the functions of the modules/units in the foregoing device embodiments are implemented, such as the functions of modules 501 to 503 shown in FIG. 5.
  • the computer program 62 can be partitioned into one or more modules/units, which are stored in the memory 61 and executed by the processor 60 to complete this application.
  • the one or more modules/units may be a series of computer program instruction segments capable of performing a particular function, the instruction segments being used to describe the execution of the computer program 62 in the code execution terminal device 6.
  • the computer program 62 can be segmented into a parallel processing identifier determination module, a task distribution module, and a current statement to be executed determination module.
  • the code execution terminal device 6 may be a computing device such as a desktop computer, a notebook, a palmtop computer, and a cloud server.
  • the code execution terminal device may include, but is not limited to, the processor 60 and the memory 61. It will be understood by those skilled in the art that FIG. 6 is merely an example of the code execution terminal device 6 and does not constitute a limitation of it; the device may include more or fewer components than those illustrated, may combine certain components, or may have different components. For example, the code execution terminal device 6 may also include input and output devices, network access devices, buses, and the like.
  • the processor 60 can be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc.
  • the general purpose processor may be a microprocessor or the processor or any conventional processor or the like.
  • the memory 61 may be an internal storage unit of the code execution terminal device 6, such as a hard disk or a memory of the code execution terminal device 6.
  • the memory 61 may also be an external storage device of the code execution terminal device 6, for example, a plug-in hard disk, a smart memory card (SMC), a secure digital (SD) card, a flash card, or the like with which the code execution terminal device 6 is equipped.
  • the memory 61 may also include both an internal storage unit of the code execution terminal device 6 and an external storage device.
  • the memory 61 is used to store the computer program and other programs and data required by the code execution terminal device 6.
  • the memory 61 can also be used to temporarily store data that has been output or is about to be output.
  • The division into the functional units and modules described above is given by way of example; in practical applications, the above functions may be assigned to different functional units as needed.
  • That is, the internal structure of the device may be divided into different functional units or modules to perform all or part of the functions described above.
  • The functional units and modules in the embodiments may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit; the integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
  • The specific names of the functional units and modules are only for ease of distinguishing them from one another, and are not intended to limit the scope of protection of the present application.
  • For the specific working processes of the units and modules in the foregoing system, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
  • the disclosed device/terminal device and method may be implemented in other manners.
  • The device/terminal device embodiments described above are merely illustrative; the division into modules or units is only a division by logical function, and in actual implementation there may be other ways of division: for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed.
  • the mutual coupling or direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through some interface, device or unit, and may be in electrical, mechanical or other form.
  • the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
  • each functional unit in each embodiment of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit.
  • the above integrated unit can be implemented in the form of hardware or in the form of a software functional unit.
  • If the integrated modules/units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer readable storage medium. Based on this understanding, the present application implements all or part of the processes in the foregoing method embodiments, which may also be completed by a computer program instructing the related hardware.
  • The computer program may be stored in a computer readable storage medium, and the steps of the various method embodiments described above may be implemented when the program is executed by the processor.
  • The computer program comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form.
  • The computer readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard drive, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), electrical carrier signals, telecommunications signals, and software distribution media.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Devices For Executing Special Programs (AREA)
  • Debugging And Monitoring (AREA)

Abstract

A code execution method, device, terminal apparatus and computer readable storage medium. The method provides an identifier for parallel processing, with which a user can mark in advance the code statements that require parallel processing. During code execution, when the parallel-processing identifier is found in the code statement currently to be executed, the to-be-processed task indicated by that code statement is distributed to a preset grid computing system for multi-threaded parallel processing, and while that statement is being executed, execution can proceed to the next code statement. When processing code involving a large number of complex operations, the computing power of each computing node in the grid computing system can be fully utilized to process tasks in parallel, greatly shortening code execution time and improving processing efficiency.

Description

Code execution method, device, terminal apparatus and computer readable storage medium

Technical Field

The present application belongs to the field of computer technology, and in particular relates to a code execution method, device, terminal apparatus and computer readable storage medium.

Background Art

In the prior art, code is generally executed statement by statement in order: only after the current code statement has finished executing and returned its result will the next code statement be executed. When processing code that involves a large number of complex operations, this serial approach tends to consume a great deal of time, and processing efficiency is low.

Technical Problem

In view of this, embodiments of the present application provide a code execution method, device, terminal apparatus and computer readable storage medium, to solve the prior-art problem that processing code involving a large number of complex operations tends to consume a great deal of time and suffers from low processing efficiency.
Technical Solution

A first aspect of the embodiments of the present application provides a code execution method, which may include:

during execution of target code, determining whether a preset parallel-processing identifier exists in the code statement currently to be executed;

if the parallel-processing identifier exists in the code statement currently to be executed, distributing the to-be-processed task indicated by that code statement to a preset grid computing system for multi-threaded parallel processing, the grid computing system including two or more computing nodes;

determining the next code statement after the code statement currently to be executed as the new code statement currently to be executed, and then returning to the step of determining whether the preset parallel-processing identifier exists in the code statement currently to be executed, until execution of the target code is complete.

Further, before determining whether the preset parallel-processing identifier exists in the code statement currently to be executed, the method may further include:

determining whether the code statement currently to be executed contains an input variable whose data type is a preset grid computing type, the grid computing type being a data type that requires multi-threaded parallel processing in the grid computing system;

if the code statement currently to be executed contains an input variable whose data type is the grid computing type, obtaining the current processing state of the input variable;

if the current processing state of the input variable is an unfinished state, returning to the step of obtaining the current processing state of the input variable until its current processing state is a finished state;

if the current processing state of the input variable is a finished state, performing the step of determining whether the preset parallel-processing identifier exists in the code statement currently to be executed.
Further, distributing the to-be-processed task indicated by the code statement currently to be executed to the preset grid computing system for multi-threaded parallel processing may include:

separately obtaining the current task count of each computing node in the grid computing system;

calculating the processable task count of each computing node according to its current task count and its preset task-count threshold;

distributing the to-be-processed task to the computing node with the largest processable task count for processing.

Further, distributing the to-be-processed task indicated by the code statement currently to be executed to the preset grid computing system for multi-threaded parallel processing may also include:

separately obtaining the historical task processing records of each computing node within a preset statistical period;

counting, from the historical task processing records, a first duration during which each computing node's processing state was normal and a second duration during which its processing state was abnormal;

calculating the reliability of each computing node according to the first duration and the second duration, the reliability being positively correlated with the first duration and negatively correlated with the second duration;

calculating the priority of each computing node according to its processable task count and its reliability, the priority being positively correlated with the processable task count and positively correlated with the reliability;

distributing the to-be-processed task to the computing node with the highest priority for processing.

Further, the code execution method may also include:

obtaining the current processing state of each computing node in the grid computing system;

if a computing node whose current processing state is abnormal exists in the grid computing system, transferring the unfinished tasks of that computing node to a computing node whose current processing state is normal for continued processing.
A second aspect of the embodiments of the present application provides a code execution device, which may include:

a parallel-processing identifier determination module, configured to determine, during execution of target code, whether a preset parallel-processing identifier exists in the code statement currently to be executed;

a task distribution module, configured to, if the parallel-processing identifier exists in the code statement currently to be executed, distribute the to-be-processed task indicated by that code statement to a preset grid computing system for multi-threaded parallel processing, the grid computing system including two or more computing nodes;

a current to-be-executed statement determination module, configured to determine the next code statement after the code statement currently to be executed as the new code statement currently to be executed.

Further, the code execution device may also include:

a grid computing type determination module, configured to determine whether the code statement currently to be executed contains an input variable whose data type is a preset grid computing type, the grid computing type being a data type that requires multi-threaded parallel processing in the grid computing system;

a variable processing state acquisition module, configured to obtain the current processing state of the input variable if the code statement currently to be executed contains an input variable whose data type is the grid computing type.

Further, the task distribution module may include:

a current task count acquisition unit, configured to separately obtain the current task count of each computing node in the grid computing system;

a processable task count calculation unit, configured to calculate the processable task count of each computing node according to its current task count and its preset task-count threshold;

a first distribution unit, configured to distribute the to-be-processed task to the computing node with the largest processable task count for processing.
Further, the task distribution module may also include:

a processing record acquisition module, configured to separately obtain the historical task processing records of each computing node within a preset statistical period;

a duration statistics module, configured to count, from the historical task processing records, a first duration during which each computing node's processing state was normal and a second duration during which its processing state was abnormal;

a reliability calculation module, configured to calculate the reliability of each computing node according to the first duration and the second duration, the reliability being positively correlated with the first duration and negatively correlated with the second duration;

a priority calculation module, configured to calculate the priority of each computing node according to its processable task count and its reliability, the priority being positively correlated with the processable task count and positively correlated with the reliability;

a second distribution unit, configured to distribute the to-be-processed task to the computing node with the highest priority for processing.

Further, the code execution device may also include:

a node processing state acquisition module, configured to obtain the current processing state of each computing node in the grid computing system;

a task transfer module, configured to, if a computing node whose current processing state is abnormal exists in the grid computing system, transfer the unfinished tasks of that computing node to a computing node whose current processing state is normal for continued processing.

A third aspect of the embodiments of the present application provides a code execution terminal apparatus, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of any one of the above code execution methods.

A fourth aspect of the embodiments of the present application provides a computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of any one of the above code execution methods.
Beneficial Effects

Compared with the prior art, the embodiments of the present application have the following beneficial effects: the embodiments provide an identifier for parallel processing, with which a user can mark in advance the code statements that require parallel processing. During code execution, when the parallel-processing identifier is found in the code statement currently to be executed, the to-be-processed task indicated by that code statement is distributed to a preset grid computing system for multi-threaded parallel processing, and while that statement is being executed, execution can proceed to the next code statement. When processing code involving a large number of complex operations, the computing power of each computing node in the grid computing system can be fully utilized to process tasks in parallel, greatly shortening code execution time and improving processing efficiency.
Brief Description of the Drawings

In order to explain the technical solutions in the embodiments of the present application more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.

FIG. 1 is a schematic flowchart of the code execution method provided in Embodiment 1 of the present application;

FIG. 2 is a schematic flowchart of the task distribution process;

FIG. 3 is a schematic flowchart of the code execution method provided in Embodiment 2 of the present application;

FIG. 4 is a schematic diagram of a specific example of parallel grid computing;

FIG. 5 is a schematic block diagram of the code execution device provided in Embodiment 3 of the present application;

FIG. 6 is a schematic block diagram of the code execution terminal apparatus provided in an embodiment of the present application.
Embodiments of the Invention

To make the objects, features and advantages of the present application more obvious and understandable, the technical solutions in the embodiments of the present application are described clearly and completely below with reference to the drawings in the embodiments. Obviously, the embodiments described below are only some, not all, of the embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the protection scope of the present application.

The present application builds support for parallel grid computing into the kernel of the language, providing operators or keywords for parallel grid computing, so that code can be designated to run in parallel, and the data of the variables used in the designated code, together with the parallel code, automatically form a parallel grid computing task.
Embodiment 1:

As shown in FIG. 1, which is a schematic flowchart of a code execution method provided by an embodiment of the present application, the method may include:

Step S101: during execution of target code, determine whether a preset parallel-processing identifier exists in the code statement currently to be executed.

In this embodiment, the parallel-processing identifier may consist of a designated symbol and/or keyword; for example, the symbol "#" may be chosen as the parallel-processing identifier.

If the parallel-processing identifier does not exist in the code statement currently to be executed, no parallel processing of that statement is needed, and steps S102 and S104 are performed; if the parallel-processing identifier does exist in the code statement currently to be executed, parallel processing of that statement is needed, and steps S103 and S104 are performed.

Step S102: execute the code statement currently to be executed in a serial manner.

That is, as in the prior art, only after the code statement currently to be executed has finished executing and returned its result will the next code statement be executed.
Step S103: distribute the to-be-processed task indicated by the code statement currently to be executed to a preset grid computing system for multi-threaded parallel processing.

The grid computing system includes two or more computing nodes. In this embodiment, task distribution and result return may be performed by a preset task distributor, which may be a single-machine multi-threaded implementation or multi-machine parallel network computing.

During task distribution, the task distributor needs to keep the load of the computing nodes balanced. For example, it may separately obtain the current task count of each computing node in the grid computing system, then calculate each node's processable task count from its current task count and its preset task-count threshold, and finally distribute the to-be-processed task to the computing node with the largest processable task count for processing.

Preferably, the algorithm shown in FIG. 2 may be used to achieve load balancing during task distribution:

Step S1031: separately obtain the current task count of each computing node in the grid computing system.

The current task count includes the number of tasks being processed and the number of tasks waiting to be processed.

Step S1032: calculate the processable task count of each computing node from its current task count and its preset task-count threshold.

The task-count threshold may be determined by each computing node's computing capacity, to which it is positively correlated: for example, the faster a node's CPU, the higher its task-count threshold; conversely, the slower a node's CPU, the lower its task-count threshold.

A computing node's processable task count may be the difference between its task-count threshold and its current task count.
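Steps S1031 and S1032 can be sketched as follows; the node records and the concrete numbers are illustrative stand-ins, not part of the embodiment:

```python
# Minimal sketch of steps S1031-S1032: a node's processable task count is its
# preset task-count threshold minus its current task count (running + queued),
# and the task goes to the node with the largest processable count.
def pick_node(nodes):
    """nodes: list of dicts with 'name', 'current' and 'threshold' keys."""
    best = max(nodes, key=lambda n: n["threshold"] - n["current"])
    return best["name"]

nodes = [
    {"name": "node-a", "current": 8, "threshold": 10},  # can take 2 more tasks
    {"name": "node-b", "current": 3, "threshold": 12},  # can take 9 more tasks
    {"name": "node-c", "current": 5, "threshold": 6},   # can take 1 more task
]
print(pick_node(nodes))  # node-b
```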
Step S1033: separately obtain the historical task processing records of each computing node within a preset statistical period.

The statistical period may be set according to actual circumstances; for example, the historical processing records of the previous month, the previous week or the previous day may be used for the statistics, which is not specifically limited in this embodiment.

Step S1034: count, from the historical task processing records, a first duration during which each computing node's processing state was normal and a second duration during which its processing state was abnormal.

An abnormal processing state refers to situations such as the task distributor receiving abnormal feedback from a computing node, or a node timing out without responding.

Summing the periods during which a node's processing state was normal yields that node's first duration; summing the periods during which its processing state was abnormal yields its second duration.

Step S1035: calculate the reliability of each computing node from the first duration and the second duration.

The reliability characterizes a computing node's stability during computation; a node's reliability is positively correlated with its first duration and negatively correlated with its second duration. For example, the ratio of the first duration to the total duration may be used as the reliability, where the total duration is the sum of the first duration and the second duration.

Step S1036: calculate the priority of each computing node from its processable task count and its reliability.

The priority is positively correlated with the processable task count and positively correlated with the reliability. For example, a first coefficient corresponding to the processable task count and a second coefficient corresponding to the reliability may be set; compute the first product of the processable task count and the first coefficient, and the second product of the reliability and the second coefficient, and then compute the sum of the two products: the larger the sum, the higher the priority.

Step S1037: distribute the to-be-processed task to the computing node with the highest priority for processing.

Note that the above is only one feasible task allocation method; other task allocation methods may be chosen according to actual needs, which is not specifically limited in this embodiment.
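The reliability and priority formulas of steps S1033 to S1037 can be sketched as follows; the coefficient values k1 and k2 and the example durations are assumptions of this sketch (the embodiment only requires the stated positive/negative correlations):

```python
# Sketch of steps S1035-S1037: reliability = normal time / total time,
# priority = k1 * processable_count + k2 * reliability.
def reliability(normal_secs, abnormal_secs):
    total = normal_secs + abnormal_secs
    return normal_secs / total if total else 1.0  # no history -> assume reliable

def priority(processable, rel, k1=1.0, k2=10.0):  # k1, k2 are illustrative
    return k1 * processable + k2 * rel

nodes = [
    ("node-a", 2, reliability(3600, 0)),     # few free slots, fully reliable
    ("node-b", 9, reliability(1800, 1800)),  # many free slots, 50% reliable
]
best = max(nodes, key=lambda n: priority(n[1], n[2]))
print(best[0])  # node-b (priority 14.0 beats node-a's 12.0)
```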
Step S104: determine the next code statement after the code statement currently to be executed as the new code statement currently to be executed.

After the new current to-be-executed statement is determined, execution returns to step S101 until the target code has been fully executed.
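The main loop of steps S101 to S104 can be sketched in Python as an analogy; a thread pool stands in for the grid computing system, and representing each statement as a (parallel, fn) pair is an assumption of this sketch, not the embodiment's actual statement format:

```python
from concurrent.futures import ThreadPoolExecutor

def run(statements):
    """statements: list of (parallel, fn) pairs, fn standing in for one code
    statement. Returns the results in statement order."""
    out = []
    with ThreadPoolExecutor() as pool:      # stands in for the grid system
        for parallel, fn in statements:     # S101: examine the current statement
            if parallel:                    # identifier present -> S103: dispatch
                out.append(pool.submit(fn))
            else:                           # S102: execute serially
                out.append(fn())
            # S104: move on to the next statement without waiting on futures
    return [r.result() if hasattr(r, "result") else r for r in out]

print(run([(True, lambda: 1 + 1), (False, lambda: 3), (True, lambda: 4 * 2)]))  # [2, 3, 8]
```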
The present application can be used for large-scale distributed computing, and can be implemented in the interpreter of a scripting language or in the compiler of a compiled language; on a single machine it can be used to implement parallel computing, and on multiple machines it can be applied to multi-machine parallel grid computing. For a scripting language, the script source code and the required variables can be submitted directly as a task; for a compiled language, the code to be parallelized can be compiled into distributable, callable binary code and submitted together with its data as a task, or a JIT mode can be adopted, submitting the code to be compiled together with its data as a task.

This embodiment provides a unified function source-code center: the subroutines called in parallel grid computing can be obtained from the unified source-code center, and when source code changes, the module providing parallel grid computing is notified promptly. With this approach, development of parallel grid computing is very convenient: from the developer's perspective there is no need to understand the tasks or how they are decomposed, and no dependence on a parallel method library; it is applicable to the development of all kinds of unspecified parallel grid methods, the grid-decomposed tasks require no deployment, and code can be developed and used at any time, modified and run at any time.

In summary, the embodiments of the present application provide an identifier for parallel processing, with which a user can mark in advance the code statements that require parallel processing. During code execution, when the parallel-processing identifier is found in the code statement currently to be executed, the to-be-processed task indicated by that code statement is distributed to a preset grid computing system for multi-threaded parallel processing, and while that statement is being executed, execution can proceed to the next code statement. When processing code involving a large number of complex operations, the computing power of each computing node in the grid computing system can be fully utilized to process tasks in parallel, greatly shortening code execution time and improving processing efficiency.
Embodiment 2:

As shown in FIG. 3, which is a schematic flowchart of a code execution method provided by an embodiment of the present application, the method may include:

Step S301: during execution of target code, determine whether the code statement currently to be executed contains an input variable whose data type is a preset grid computing type.

The grid computing type is a data type that requires multi-threaded parallel processing in the grid computing system. This embodiment adds a grid computing data type to the language's data types for storing parallel computations; when the language accesses this data type through operations such as assignment, computation or serialization, it blocks and waits for the parallel grid computation contained in the data to finish. When the parallel computation finishes, the return value of the parallel grid computation is recorded in the data type.

If the code statement currently to be executed contains an input variable whose data type is the grid computing type, step S302 and its subsequent steps are performed; if not, step S305 and its subsequent steps are performed.

Step S302: obtain the current processing state of the input variable.

In this embodiment, the processing state of a grid-computing-type variable includes a finished state and an unfinished state, where the unfinished state includes states such as waiting-for-submission, submitted, and error. The processing state of a grid-computing-type variable describes the state of the parallel task and, in the submitted state, records the information necessary for the task's execution, used for operations such as terminating or resubmitting the task.

Step S303: determine whether the current processing state of the input variable is an unfinished state.

If the current processing state of the input variable is an unfinished state, step S304 and its subsequent steps are performed; if it is a finished state, step S305 and its subsequent steps are performed.

Step S304: wait for a preset duration.

The duration may be set according to actual circumstances; as a special case, it may also be set to 0, i.e. no waiting.

After waiting for the duration, execution returns to step S302, until the current processing state of the input variable is a finished state.
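The polling loop of steps S302 to S304 can be sketched as follows; the GridValue class, its state names and the timeout are stand-ins invented for this sketch (the embodiment's unfinished states include waiting-for-submission, submitted and error):

```python
import time

class GridValue:
    """Stand-in for a grid-computing-type variable."""
    def __init__(self):
        self.state = "submitted"   # an unfinished state
        self.result = None

def wait_until_finished(var, interval=0.01, timeout=1.0):
    """S302-S304: poll the variable's state, sleeping a preset interval,
    until it reports the finished state."""
    deadline = time.monotonic() + timeout
    while var.state != "finished":         # S303: still unfinished?
        if time.monotonic() > deadline:
            raise TimeoutError("grid task did not finish in time")
        time.sleep(interval)               # S304: wait the preset duration
    return var.result                      # execution may now proceed to S305

v = GridValue()
v.state, v.result = "finished", 42         # simulate the grid task completing
print(wait_until_finished(v))  # 42
```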
Step S305: determine whether a preset parallel-processing identifier exists in the code statement currently to be executed.

If the parallel-processing identifier does not exist in the code statement currently to be executed, steps S306 and S308 are performed; if the parallel-processing identifier does exist in the code statement currently to be executed, steps S307 and S308 are performed.

Step S306: execute the code statement currently to be executed in a serial manner.

Step S307: distribute the to-be-processed task indicated by the code statement currently to be executed to the preset grid computing system for multi-threaded parallel processing.

Step S308: determine the next code statement after the code statement currently to be executed as the new code statement currently to be executed.

After the new current to-be-executed statement is determined, execution returns to step S301 until the target code has been fully executed.

The process of steps S305 to S308 is the same as that of steps S101 to S104 in Embodiment 1; for details, refer to the description in Embodiment 1, which is not repeated here.

FIG. 4 shows a specific example of parallel grid computing. In this example, the symbol "#" is used as the parallel-processing identifier, and the method following "#" is subjected to parallel grid computing; as long as the result of a grid computation is not used, the program keeps running on in parallel, until the result of the grid computation is accessed.

Specifically, in forming the array c, each step is independent of the results of the previous grid computations, so the steps can be grid-computed in parallel independently of one another. No special parallel for is needed here: through the use of the parallel-processing identifier, a conventional for accomplishes work similar to that of a parallel for. "#" forms a task from the data contained in the variable i used by subProc together with the code of the subProc call, submits it to the task distributor, and places the grid computing type into the array c. When c is accessed, for example to compute the sum (sum) or the standard deviation (stddev), execution waits for the grid computing tasks contained in c to finish. Since d:=#sum(c) and e:=#stddev(c) both depend directly on c but can run in parallel with each other, a task of sum with the value of c is placed into the grid computing type d, and a task of stddev with the value of c is placed into the grid computing type e; both tasks are submitted to the task distributor and thus run concurrently. When the result of d/e is returned, execution waits for the grid computing types d and e to finish.
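The FIG. 4 example can be re-expressed with Python futures as an analogy: each #subProc(i) becomes an independent future placed in c, and #sum(c) and #stddev(c) become futures that first wait on c's elements and run concurrently with each other. subProc squaring its input and the range of i are invented stand-ins for this sketch:

```python
import statistics
from concurrent.futures import ThreadPoolExecutor

def sub_proc(i):
    return i * i                       # stand-in for the dispatched subroutine

with ThreadPoolExecutor() as pool:
    # the loop body #subProc(i): each iteration is an independent grid task
    c = [pool.submit(sub_proc, i) for i in range(1, 5)]
    values = lambda: [f.result() for f in c]            # accessing c blocks on it
    d = pool.submit(lambda: sum(values()))              # d := #sum(c)
    e = pool.submit(lambda: statistics.stdev(values())) # e := #stddev(c)
    # returning d/e waits for both grid-type values to finish
    print(d.result(), round(e.result(), 3))
```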
Preferably, the code execution method may also include an exception-handling process: once an error occurs in a task's execution unit, the task is allowed to be transferred. Specifically, the current processing state of each computing node in the grid computing system may be obtained; if a computing node whose current processing state is abnormal exists in the grid computing system, the unfinished tasks of that node are transferred to a computing node whose current processing state is normal for continued processing.
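This exception-handling process can be sketched as follows; the node and queue shapes, and the choice of the least-loaded normal node as the transfer target, are illustrative assumptions:

```python
# Sketch of the task-transfer step: unfinished tasks queued on a node whose
# processing state is abnormal are moved to a node whose state is normal.
def transfer_tasks(nodes):
    """nodes: dict name -> {'state': 'normal'|'abnormal', 'queue': [tasks]}."""
    normal = [n for n in nodes.values() if n["state"] == "normal"]
    if not normal:
        raise RuntimeError("no normal computing node available")
    for node in nodes.values():
        if node["state"] == "abnormal" and node["queue"]:
            target = min(normal, key=lambda n: len(n["queue"]))
            target["queue"].extend(node["queue"])   # continue on a normal node
            node["queue"].clear()
    return nodes

nodes = {
    "a": {"state": "abnormal", "queue": ["t1", "t2"]},
    "b": {"state": "normal", "queue": ["t3"]},
}
transfer_tasks(nodes)
print(nodes["b"]["queue"])  # ['t3', 't1', 't2']
```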
A permitted concurrency count may also be set in the task distributor to prevent the number of concurrent tasks from exceeding the limit; the task distributor can then choose to queue or reject tasks beyond the limit.

Preferably, a task management mechanism may also be provided, allowing an administrator to view the state of parallel grid task dispatch at the management end and to perform operations such as terminating tasks.

In summary, on the basis of Embodiment 1, this embodiment adds a processing mechanism in which parallel grid computing automatically blocks and waits on resource access, while execution keeps running in parallel as long as there is no blocking access to a parallel grid resource. This design supports the nesting of parallel grids, and the number of parallel grids is unlimited.
Embodiment 3:

As shown in FIG. 5, which is a schematic block diagram of a code execution device provided by an embodiment of the present application, the device may include:

a parallel-processing identifier determination module 501, configured to determine, during execution of target code, whether a preset parallel-processing identifier exists in the code statement currently to be executed;

a task distribution module 502, configured to, if the parallel-processing identifier exists in the code statement currently to be executed, distribute the to-be-processed task indicated by that code statement to a preset grid computing system for multi-threaded parallel processing, the grid computing system including two or more computing nodes;

a current to-be-executed statement determination module 503, configured to determine the next code statement after the code statement currently to be executed as the new code statement currently to be executed.

Further, the code execution device may also include:

a grid computing type determination module, configured to determine whether the code statement currently to be executed contains an input variable whose data type is a preset grid computing type, the grid computing type being a data type that requires multi-threaded parallel processing in the grid computing system;

a variable processing state acquisition module, configured to obtain the current processing state of the input variable if the code statement currently to be executed contains an input variable whose data type is the grid computing type.

Further, the task distribution module may include:

a current task count acquisition unit, configured to separately obtain the current task count of each computing node in the grid computing system;

a processable task count calculation unit, configured to calculate the processable task count of each computing node according to its current task count and its preset task-count threshold;

a first distribution unit, configured to distribute the to-be-processed task to the computing node with the largest processable task count for processing.

Further, the task distribution module may also include:

a processing record acquisition module, configured to separately obtain the historical task processing records of each computing node within a preset statistical period;

a duration statistics module, configured to count, from the historical task processing records, a first duration during which each computing node's processing state was normal and a second duration during which its processing state was abnormal;

a reliability calculation module, configured to calculate the reliability of each computing node according to the first duration and the second duration, the reliability being positively correlated with the first duration and negatively correlated with the second duration;

a priority calculation module, configured to calculate the priority of each computing node according to its processable task count and its reliability, the priority being positively correlated with the processable task count and positively correlated with the reliability;

a second distribution unit, configured to distribute the to-be-processed task to the computing node with the highest priority for processing.

Further, the code execution device may also include:

a node processing state acquisition module, configured to obtain the current processing state of each computing node in the grid computing system;

a task transfer module, configured to, if a computing node whose current processing state is abnormal exists in the grid computing system, transfer the unfinished tasks of that computing node to a computing node whose current processing state is normal for continued processing.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the system, device and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.

It should be understood that the sequence numbers of the steps in the above embodiments do not imply an order of execution; the execution order of the processes should be determined by their functions and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.

FIG. 6 is a schematic block diagram of the code execution terminal apparatus provided by an embodiment of the present application. As shown in FIG. 6, the code execution terminal device 6 of this embodiment includes: a processor 60, a memory 61, and a computer program 62 stored in the memory 61 and executable on the processor 60. When executing the computer program 62, the processor 60 implements the steps in the above code execution method embodiments, for example steps S101 to S104 shown in FIG. 1. Alternatively, when executing the computer program 62, the processor 60 implements the functions of the modules/units in the above device embodiments, for example the functions of modules 501 to 503 shown in FIG. 5.

Exemplarily, the computer program 62 may be divided into one or more modules/units, which are stored in the memory 61 and executed by the processor 60 to complete the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing particular functions, the instruction segments being used to describe the execution of the computer program 62 in the code execution terminal device 6. For example, the computer program 62 may be divided into a parallel-processing identifier determination module, a task distribution module, and a current to-be-executed statement determination module.

The code execution terminal device 6 may be a computing device such as a desktop computer, a notebook, a palmtop computer or a cloud server. The code execution terminal device may include, but is not limited to, the processor 60 and the memory 61. Those skilled in the art will understand that FIG. 6 is merely an example of the code execution terminal device 6 and does not constitute a limitation on it; the device may include more or fewer components than illustrated, combine certain components, or use different components; for example, the code execution terminal device 6 may also include input/output devices, network access devices, buses, and the like.

The processor 60 may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.

The memory 61 may be an internal storage unit of the code execution terminal device 6, such as a hard disk or memory of the code execution terminal device 6. The memory 61 may also be an external storage device of the code execution terminal device 6, for example a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card with which the code execution terminal device 6 is equipped. Further, the memory 61 may also include both an internal storage unit and an external storage device of the code execution terminal device 6. The memory 61 is used to store the computer program and the other programs and data required by the code execution terminal device 6. The memory 61 may also be used to temporarily store data that has been output or is about to be output.
In the above embodiments, the descriptions of the respective embodiments each have their own emphases; for parts not detailed or recorded in one embodiment, reference may be made to the related descriptions of other embodiments.

Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the particular application and design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present application.

In the embodiments provided by the present application, it should be understood that the disclosed device/terminal apparatus and method may be implemented in other ways. For example, the device/terminal apparatus embodiments described above are merely illustrative; for example, the division into modules or units is only a division by logical function, and in actual implementation there may be other ways of division, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not performed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be in electrical, mechanical or other forms.

The units described as separate components may or may not be physically separate, and components displayed as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiments.

In addition, the functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist physically separately, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of a software functional unit.

If the integrated modules/units are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer readable storage medium. Based on this understanding, the present application implements all or part of the processes in the above embodiment methods, which may also be completed by a computer program instructing the related hardware; the computer program may be stored in a computer readable storage medium, and when executed by a processor, the computer program can implement the steps of the above method embodiments. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, etc. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard drive, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), electrical carrier signals, telecommunications signals, software distribution media, and the like. It should be noted that the content contained in the computer readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in the jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, computer readable media do not include electrical carrier signals and telecommunications signals.

The above embodiments are only intended to illustrate the technical solutions of the present application, not to limit them. Although the present application has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that they may still modify the technical solutions recorded in the foregoing embodiments, or equivalently replace some of the technical features therein; and such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the spirit and scope of the technical solutions of the embodiments of the present application, and shall all be included within the protection scope of the present application.

Claims (10)

  1. A code execution method, characterized by comprising:
    during execution of target code, determining whether a preset parallel-processing identifier exists in the code statement currently to be executed;
    if the parallel-processing identifier exists in the code statement currently to be executed, distributing the to-be-processed task indicated by that code statement to a preset grid computing system for multi-threaded parallel processing, the grid computing system including two or more computing nodes;
    determining the next code statement after the code statement currently to be executed as the new code statement currently to be executed, and then returning to the step of determining whether the preset parallel-processing identifier exists in the code statement currently to be executed, until execution of the target code is complete.
  2. The code execution method according to claim 1, characterized in that, before determining whether the preset parallel-processing identifier exists in the code statement currently to be executed, the method further comprises:
    determining whether the code statement currently to be executed contains an input variable whose data type is a preset grid computing type, the grid computing type being a data type that requires multi-threaded parallel processing in the grid computing system;
    if the code statement currently to be executed contains an input variable whose data type is the grid computing type, obtaining the current processing state of the input variable;
    if the current processing state of the input variable is an unfinished state, returning to the step of obtaining the current processing state of the input variable until its current processing state is a finished state;
    if the current processing state of the input variable is a finished state, performing the step of determining whether the preset parallel-processing identifier exists in the code statement currently to be executed.
  3. The code execution method according to claim 1, characterized in that distributing the to-be-processed task indicated by the code statement currently to be executed to the preset grid computing system for multi-threaded parallel processing comprises:
    separately obtaining the current task count of each computing node in the grid computing system;
    calculating the processable task count of each computing node according to its current task count and its preset task-count threshold;
    distributing the to-be-processed task to the computing node with the largest processable task count for processing.
  4. The code execution method according to claim 3, characterized in that distributing the to-be-processed task indicated by the code statement currently to be executed to the preset grid computing system for multi-threaded parallel processing further comprises:
    separately obtaining the historical task processing records of each computing node within a preset statistical period;
    counting, from the historical task processing records, a first duration during which each computing node's processing state was normal and a second duration during which its processing state was abnormal;
    calculating the reliability of each computing node according to the first duration and the second duration, the reliability being positively correlated with the first duration and negatively correlated with the second duration;
    calculating the priority of each computing node according to its processable task count and its reliability, the priority being positively correlated with the processable task count and positively correlated with the reliability;
    distributing the to-be-processed task to the computing node with the highest priority for processing.
  5. The code execution method according to any one of claims 1 to 4, characterized by further comprising:
    obtaining the current processing state of each computing node in the grid computing system;
    if a computing node whose current processing state is abnormal exists in the grid computing system, transferring the unfinished tasks of that computing node to a computing node whose current processing state is normal for continued processing.
  6. A code execution device, characterized by comprising:
    a parallel-processing identifier determination module, configured to determine, during execution of target code, whether a preset parallel-processing identifier exists in the code statement currently to be executed;
    a task distribution module, configured to, if the parallel-processing identifier exists in the code statement currently to be executed, distribute the to-be-processed task indicated by that code statement to a preset grid computing system for multi-threaded parallel processing, the grid computing system including two or more computing nodes;
    a current to-be-executed statement determination module, configured to determine the next code statement after the code statement currently to be executed as the new code statement currently to be executed.
  7. The code execution device according to claim 6, characterized by further comprising:
    a grid computing type determination module, configured to determine whether the code statement currently to be executed contains an input variable whose data type is a preset grid computing type, the grid computing type being a data type that requires multi-threaded parallel processing in the grid computing system;
    a variable processing state acquisition module, configured to obtain the current processing state of the input variable if the code statement currently to be executed contains an input variable whose data type is the grid computing type.
  8. The code execution device according to claim 6 or 7, characterized in that the task distribution module comprises:
    a current task count acquisition unit, configured to separately obtain the current task count of each computing node in the grid computing system;
    a processable task count calculation unit, configured to calculate the processable task count of each computing node according to its current task count and its preset task-count threshold;
    a first distribution unit, configured to distribute the to-be-processed task to the computing node with the largest processable task count for processing.
  9. A code execution terminal apparatus, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor, when executing the computer program, implements the steps of the code execution method according to any one of claims 1 to 5.
  10. A computer readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the code execution method according to any one of claims 1 to 5.
PCT/CN2018/071304 2018-01-04 2018-01-04 Code execution method, device, terminal apparatus and computer readable storage medium WO2019134084A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/CN2018/071304 WO2019134084A1 (zh) 2018-01-04 2018-01-04 Code execution method, device, terminal apparatus and computer readable storage medium
US16/959,815 US11372633B2 (en) 2018-01-04 2018-01-04 Method, device and terminal apparatus for code execution and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2018/071304 WO2019134084A1 (zh) 2018-01-04 2018-01-04 Code execution method, device, terminal apparatus and computer readable storage medium

Publications (1)

Publication Number Publication Date
WO2019134084A1 true WO2019134084A1 (zh) 2019-07-11

Family

ID=67143506

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/071304 WO2019134084A1 (zh) 2018-01-04 2018-01-04 代码执行方法、装置、终端设备及计算机可读存储介质

Country Status (2)

Country Link
US (1) US11372633B2 (zh)
WO (1) WO2019134084A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113568736A (zh) * 2021-06-24 2021-10-29 阿里巴巴新加坡控股有限公司 数据处理方法及装置

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113918323B (zh) * 2021-09-17 2022-10-21 中标慧安信息技术股份有限公司 边缘计算中高能效的计算任务分配方法和装置

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7370156B1 (en) * 2004-11-04 2008-05-06 Panta Systems, Inc. Unity parallel processing system and method
CN102289347A (zh) * 2010-06-15 2011-12-21 微软公司 使用用户可见事件来指示并行操作
CN103064668A (zh) * 2012-12-17 2013-04-24 山东中创软件商用中间件股份有限公司 文件处理方法及装置
CN103942099A (zh) * 2014-04-30 2014-07-23 广州唯品会网络技术有限公司 基于Hive的并行执行任务方法及装置

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7603404B2 (en) * 2004-12-20 2009-10-13 Sap Ag Grid parallel execution
US8566804B1 (en) * 2009-08-13 2013-10-22 The Mathworks, Inc. Scheduling generated code based on target characteristics
US8990783B1 (en) * 2009-08-13 2015-03-24 The Mathworks, Inc. Scheduling generated code based on target characteristics
US9841958B2 (en) * 2010-12-23 2017-12-12 Microsoft Technology Licensing, Llc. Extensible data parallel semantics
CN102098223B (zh) 2011-02-12 2012-08-29 浪潮(北京)电子信息产业有限公司 节点设备调度方法、装置和系统
US10521272B1 (en) * 2016-03-30 2019-12-31 Amazon Technologies, Inc. Testing in grid computing systems
CN107122246B (zh) 2017-04-27 2020-05-19 中国海洋石油集团有限公司 智能数值模拟作业管理与反馈方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7370156B1 (en) * 2004-11-04 2008-05-06 Panta Systems, Inc. Unity parallel processing system and method
CN102289347A (zh) * 2010-06-15 2011-12-21 微软公司 使用用户可见事件来指示并行操作
CN103064668A (zh) * 2012-12-17 2013-04-24 山东中创软件商用中间件股份有限公司 文件处理方法及装置
CN103942099A (zh) * 2014-04-30 2014-07-23 广州唯品会网络技术有限公司 基于Hive的并行执行任务方法及装置

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113568736A (zh) * 2021-06-24 2021-10-29 阿里巴巴新加坡控股有限公司 数据处理方法及装置

Also Published As

Publication number Publication date
US20210026610A1 (en) 2021-01-28
US11372633B2 (en) 2022-06-28

Similar Documents

Publication Publication Date Title
US11106486B2 (en) Techniques to manage virtual classes for statistical tests
CN108334408B (zh) 代码执行方法、装置、终端设备及计算机可读存储介质
CN108536761A (zh) 报表数据查询方法及服务器
WO2018045753A1 (zh) 用于分布式图计算的方法与设备
CN109067841B (zh) 基于ZooKeeper的服务限流方法、系统、服务器及存储介质
GB2508503A (en) Batch evaluation of remote method calls to an object oriented database
US20170177737A9 (en) Method, Controller, Program, and Data Storage System for Performing Reconciliation Processing
EP3912074B1 (en) Generating a synchronous digital circuit from a source code construct defining a function call
US11113176B2 (en) Generating a debugging network for a synchronous digital circuit during compilation of program source code
US20130198723A1 (en) Mapping high-performance computing applications to platforms
WO2020119188A1 (zh) 一种程序检测方法、装置、设备及可读存储介质
US20210166156A1 (en) Data processing system and data processing method
US8612597B2 (en) Computing scheduling using resource lend and borrow
WO2019134084A1 (zh) 代码执行方法、装置、终端设备及计算机可读存储介质
CN112306713A (zh) 一种任务的并发计算方法及装置、设备、存储介质
CN111144830A (zh) 一种企业级计算资源管理方法、系统和计算机设备
CN110704193B (zh) 一种适合向量处理的多核软件架构的实现方法及装置
WO2023108800A1 (zh) 基于cpu-gpu异构架构的性能分析方法、设备以及存储介质
CN107195144A (zh) 管理支付终端硬件模块的方法、装置及计算机可读存储介质
CN111353766A (zh) 分布式业务系统的业务流程处理系统及方法
Thomasian et al. Queueing network models for parallel processing of task systems
US20120323958A1 (en) Specification of database table relationships for calculation
CN117435367B (zh) 用户行为处理方法、装置、设备、存储介质和程序产品
JP7458512B2 (ja) 分散トランザクション処理方法、端末およびコンピュータ読み取り可能な記憶媒体
CN112907198B (zh) 业务状态流转维护方法、装置及电子设备

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18898337

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13.11.2020)

122 Ep: pct application non-entry in european phase

Ref document number: 18898337

Country of ref document: EP

Kind code of ref document: A1