CN108334408B - Code execution method and device, terminal equipment and computer readable storage medium - Google Patents



Publication number: CN108334408B
Authority: CN (China)
Prior art keywords: current, code, executed, task, processing
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Application number: CN201810007169.8A
Other languages: Chinese (zh)
Other versions: CN108334408A (en)
Inventor: 刘二谋
Current Assignee: Shenzhen Tinysoft Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list)
Original Assignee: Shenzhen Tinysoft Co Ltd
Application filed by Shenzhen Tinysoft Co Ltd filed Critical Shenzhen Tinysoft Co Ltd
Priority to CN201810007169.8A priority Critical patent/CN108334408B/en
Publication of CN108334408A publication Critical patent/CN108334408A/en
Application granted granted Critical
Publication of CN108334408B publication Critical patent/CN108334408B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5005 Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F 9/5027 Allocation of resources, e.g. of the central processing unit [CPU] to service a request, the resource being a machine, e.g. CPUs, Servers, Terminals
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00 Arrangements for program control, e.g. control units
    • G06F 9/06 Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/46 Multiprogramming arrangements
    • G06F 9/50 Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F 9/5061 Partitioning or combining of resources
    • G06F 9/5072 Grid computing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 2209/00 Indexing scheme relating to G06F9/00
    • G06F 2209/50 Indexing scheme relating to G06F9/50
    • G06F 2209/5018 Thread allocation

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Devices For Executing Special Programs (AREA)

Abstract

The present invention belongs to the field of computer technologies, and in particular relates to a code execution method, apparatus, terminal device, and computer-readable storage medium. The invention provides an identifier for parallel processing, with which a user can mark in advance the code statements that need to be processed in parallel. During code execution, when the parallel processing identifier is found in the current code statement to be executed, the task to be processed indicated by that statement is distributed to a preset grid computing system for multi-thread parallel processing, and execution proceeds to the next code statement without waiting. When a large amount of code involving complex operations is processed, the computing power of each computing node in the grid computing system can be fully utilized to process tasks in parallel, which greatly shortens code execution time and improves processing efficiency.

Description

Code execution method and device, terminal equipment and computer readable storage medium
Technical Field
The present invention belongs to the field of computer technologies, and in particular, to a code execution method, apparatus, terminal device, and computer-readable storage medium.
Background
In the prior art, code is generally executed sequentially, in the order of its statements: the next code statement is executed only after the current statement has finished executing and returned a result.
Disclosure of Invention
In view of this, embodiments of the present invention provide a code execution method, a code execution apparatus, a terminal device, and a computer-readable storage medium, to address the prior-art problem that processing a large amount of code involving complex operations often consumes substantial time and yields low processing efficiency.
A first aspect of an embodiment of the present invention provides a code execution method, which may include:
in the execution process of the target code, judging whether a preset parallel processing identifier exists in the current code statement to be executed;
if the parallel processing identifier exists in the current code statement to be executed, distributing a task to be processed indicated by the current code statement to be executed to a preset grid computing system for multi-thread parallel processing, wherein the grid computing system comprises more than two computing nodes;
and determining the next code statement after the current code statement to be executed as a new current code statement to be executed, and then returning to the step of judging whether a preset parallel processing identifier exists in the current code statement to be executed until the target code is executed.
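The three steps of the first aspect can be sketched as a minimal Python simulation. The helper names, the string-prefix test, and the use of a thread pool to stand in for the grid computing system are illustrative assumptions, not part of the claims:

```python
# Minimal sketch of the claimed execution loop: a statement carrying the
# parallel processing identifier "#" is dispatched to the (simulated) grid
# computing system; any other statement is executed serially.
from concurrent.futures import ThreadPoolExecutor

PARALLEL_ID = "#"
grid = ThreadPoolExecutor(max_workers=4)  # stands in for the grid computing system

def run_serial(stmt):
    return f"serial:{stmt}"

def run_on_grid(stmt):
    # distribute the task indicated by the statement; do not wait for the result
    body = stmt.strip().lstrip(PARALLEL_ID).strip()
    return grid.submit(lambda: f"parallel:{body}")

def execute(target_code):
    results = []
    for stmt in target_code:  # "determine the next code statement ... until done"
        if stmt.lstrip().startswith(PARALLEL_ID):
            results.append(run_on_grid(stmt))  # continue to the next statement
        else:
            results.append(run_serial(stmt))
    return results

out = execute(["a = 1", "# heavy(a)", "b = 2"])
```

Note that the future returned for the marked statement is appended without blocking, so `"b = 2"` runs while `heavy(a)` is still in flight, which is the core of the claimed behaviour.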
Further, before determining whether a preset parallel processing identifier exists in the current code statement to be executed, the method may further include:
judging whether an input variable with a data type of a preset grid computing type exists in the current code statement to be executed, wherein the grid computing type is a data type needing to be subjected to multi-thread parallel processing in the grid computing system;
if the data type of the input variable is the grid computing type, acquiring the current processing state of the input variable;
if the current processing state of the input variable is an unfinished state, returning to the step of acquiring the current processing state of the input variable until the current processing state of the input variable is a finished state;
and if the current processing state of the input variable is a finished state, executing the step of judging whether a preset parallel processing identifier exists in the current code statement to be executed.
Further, the distributing the to-be-processed task indicated by the currently-to-be-executed code statement to a preset grid computing system for multi-thread parallel processing may include:
respectively acquiring the current task number of each computing node in the grid computing system;
calculating the processable task number of each computing node according to the current task number of each computing node and a preset task number threshold value of each computing node;
and distributing the task to be processed to the computing node with the largest processable task number for processing.
Further, the distributing the to-be-processed task indicated by the currently-to-be-executed code statement to a preset grid computing system for multi-thread parallel processing may further include:
respectively acquiring historical task processing records of each computing node in a preset statistical time period;
counting a first time length of which the processing state of each computing node is normal and a second time length of which the processing state is abnormal according to the historical task processing record;
calculating the reliability of each computing node according to the first time length and the second time length, wherein the reliability is positively correlated with the first time length and negatively correlated with the second time length;
calculating the priority of each computing node according to the processable task number and the reliability, wherein the priority is positively correlated with the processable task number and the reliability;
and distributing the task to be processed to the computing node with the highest priority for processing.
Further, the code execution method may further include:
acquiring the current processing state of each computing node in the grid computing system;
if the grid computing system has the computing node with the abnormal current processing state, transferring the task which is not processed by the computing node with the abnormal current processing state to the computing node with the normal current processing state for continuous processing.
A second aspect of an embodiment of the present invention provides a code execution apparatus, which may include:
the parallel processing identifier judging module is used for judging whether a preset parallel processing identifier exists in a current code statement to be executed or not in the execution process of the target code;
a task distribution module, configured to distribute, if the parallel processing identifier exists in the current code statement to be executed, a to-be-processed task indicated by the current code statement to be executed to a preset grid computing system for multi-threaded parallel processing, where the grid computing system includes more than two computing nodes;
and the current to-be-executed statement determining module is used for determining the next code statement after the current to-be-executed code statement as the new current to-be-executed code statement.
Further, the code execution apparatus may further include:
the grid computing type judging module is used for judging whether an input variable with a data type of a preset grid computing type exists in the current code statement to be executed, wherein the grid computing type is a data type needing to be subjected to multi-thread parallel processing in the grid computing system;
and the variable processing state acquisition module is used for acquiring the current processing state of the input variable if the data type of the input variable is the grid computing type in the current code statement to be executed.
Further, the task distribution module may include:
a current task number obtaining unit, configured to obtain current task numbers of each computing node in the grid computing system respectively;
the processable task number calculating unit is used for calculating the processable task number of each computing node according to the current task number of each computing node and a preset task number threshold value of each computing node;
and the first distribution unit is used for distributing the tasks to be processed to the computing nodes with the largest number of processable tasks for processing.
Further, the task distribution module may further include:
the processing record acquisition module is used for respectively acquiring the historical task processing records of each computing node in a preset statistical time period;
the time length counting module is used for counting a first time length of which the processing state of each computing node is normal and a second time length of which the processing state is abnormal according to the historical task processing record;
the reliability calculation module is used for calculating the reliability of each calculation node according to the first time length and the second time length, wherein the reliability is positively correlated with the first time length and negatively correlated with the second time length;
a priority calculation module, configured to calculate a priority of each computing node according to the number of processable tasks and the reliability, where the priority is positively correlated with the number of processable tasks and positively correlated with the reliability;
and the second distribution unit is used for distributing the tasks to be processed to the computing nodes with the highest priority for processing.
Further, the code execution apparatus may further include:
a node processing state obtaining module, configured to obtain a current processing state of each computing node in the grid computing system;
and the task transferring module is used for transferring the task which is not processed by the computing node with the abnormal current processing state to the computing node with the normal current processing state for continuous processing if the computing node with the abnormal current processing state exists in the grid computing system.
A third aspect of the embodiments of the present invention provides a code execution terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of any one of the above code execution methods when executing the computer program.
A fourth aspect of the embodiments of the present invention provides a computer-readable storage medium, which stores a computer program, wherein the computer program, when executed by a processor, implements the steps of any one of the above code-executing methods.
Compared with the prior art, the embodiments of the present invention have the following beneficial effects: an identifier for parallel processing is provided, with which a user can mark in advance the code statements that need to be processed in parallel. When the parallel processing identifier is found in the current code statement to be executed during code execution, the task to be processed indicated by that statement is distributed to a preset grid computing system for multi-thread parallel processing, and execution proceeds to the next code statement without waiting. When a large amount of code involving complex operations is processed, the computing capability of each computing node in the grid computing system can be fully utilized to process tasks in parallel, which greatly shortens code execution time and improves processing efficiency.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a schematic flow chart of a code execution method according to an embodiment of the present invention;
FIG. 2 is a schematic flow diagram of a task distribution process;
FIG. 3 is a schematic flow chart of a code execution method according to a second embodiment of the present invention;
FIG. 4 is a diagram of one embodiment of parallel grid computing;
FIG. 5 is a schematic block diagram of a code execution apparatus according to a third embodiment of the present invention;
fig. 6 is a schematic block diagram of a code execution terminal device provided in an embodiment of the present invention.
Detailed Description
In order to make the objects, features and advantages of the present invention more apparent and understandable, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the embodiments described below are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present invention.
The invention builds support for parallel grid computing into the core of the language and provides operators or keywords for parallel grid computing, so that specified code can run in parallel; the specified code, together with the data of the variables it uses, automatically forms a parallel grid computing task.
The first embodiment is as follows:
as shown in fig. 1, which is a schematic flowchart of a code execution method provided in an embodiment of the present invention, the method may include:
step S101, in the execution process of the target code, judging whether a preset parallel processing identifier exists in a current code statement to be executed.
In this embodiment, the parallel processing identifier may be composed of a specified symbol and/or keyword; for example, the symbol "#" may be selected as the parallel processing identifier.
If the parallel processing identifier does not exist in the current code statement to be executed, the statement does not need parallel processing, and steps S102 and S104 are executed; if the identifier does exist, the statement needs parallel processing, and steps S103 and S104 are executed.
And step S102, executing the current code statement to be executed according to a serial processing mode.
That is, according to the processing method in the prior art, only after the execution of the current code statement to be executed is completed and a result is returned, the next code statement is executed continuously.
And step S103, distributing the task to be processed indicated by the current code statement to be executed to a preset grid computing system for multi-thread parallel processing.
The grid computing system comprises more than two computing nodes. In this embodiment, task distribution and result return may be performed by a preset task distributor, which may be implemented as a single machine with multiple threads, or as parallel network computing across multiple machines.
The task distributor needs to maintain load balance across the computing nodes during task distribution. For example, the current task number of each computing node in the grid computing system may be acquired; the processable task number of each computing node is then calculated from its current task number and a preset task number threshold; finally, the task to be processed is distributed to the computing node with the largest processable task number for processing.
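The simple load-balancing rule just described can be sketched as follows; the node names and numbers are hypothetical, and the threshold-minus-current difference follows the definition of the processable task number given later in this embodiment:

```python
# Pick the node with the largest processable task number, where
# processable = task-number threshold - current task number.
def pick_node(nodes):
    # nodes: {name: (current_task_count, task_number_threshold)}
    capacity = {name: thr - cur for name, (cur, thr) in nodes.items()}
    return max(capacity, key=capacity.get)

nodes = {"node-a": (8, 10), "node-b": (3, 12), "node-c": (5, 6)}
best = pick_node(nodes)  # node-b can take 12 - 3 = 9 more tasks
```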
Preferably, an algorithm as shown in fig. 2 can be adopted to realize load balancing in the task distribution process:
and step S1031, respectively obtaining the current task number of each computing node in the grid computing system.
The current task number comprises the number of tasks being processed and the number of tasks waiting to be processed.
Step S1032, calculating the processable task number of each computing node according to the current task number of each computing node and a preset task number threshold of each computing node.
The task number threshold may be determined by the computing capability of each computing node, and the two are positively correlated, for example, the faster the CPU processing speed of the computing node is, the higher the corresponding task number threshold is, and conversely, the slower the CPU processing speed of the computing node is, the lower the corresponding task number threshold is.
The number of tasks that can be processed for a computing node may be the difference between the threshold number of tasks for that computing node and the current number of tasks for that computing node.
And step S1033, respectively acquiring historical task processing records of each computing node in a preset statistical time period.
The statistical time period may be set according to the actual situation, for example, historical processing records of the previous month, the previous week or the previous day may be taken for statistics, which is not specifically limited in this embodiment.
Step S1034, counting a first time length when the processing state of each computing node is normal and a second time length when the processing state is abnormal according to the historical task processing record.
An abnormal processing state means that the task distributor has received an error report from a computing node, or that the node has failed to respond before a timeout.
The first time length of a certain computing node can be obtained by summing the time intervals when the processing state of the computing node is normal, and the second time length of the computing node can be obtained by summing the time intervals when the processing state of the computing node is abnormal.
And step S1035, calculating the reliability of each computing node according to the first time length and the second time length.
The reliability is used for representing the stability of the computing node in the computing process, and the reliability of a certain computing node is positively correlated with the first time length and negatively correlated with the second time length. For example, the reliability may be a ratio of the first duration to a total duration, wherein the total duration is a sum of the first duration and the second duration.
Step S1036, calculating the priority of each computing node according to the number of the tasks which can be processed and the reliability.
The priority is positively correlated with both the processable task number and the reliability. For example, a first coefficient for the processable task number and a second coefficient for the reliability may be set; the priority is then the sum of a first product (the processable task number multiplied by the first coefficient) and a second product (the reliability multiplied by the second coefficient), and the larger this sum, the higher the priority.
And step S1037, distributing the tasks to be processed to the computing nodes with the highest priority for processing.
It should be noted that the above process is only one possible task allocation method, and other task allocation methods may also be selected according to actual needs, which is not specifically limited in this embodiment.
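Under the stated correlations, the FIG. 2 algorithm (steps S1031 to S1037) admits, for example, the following minimal sketch. The ratio form of the reliability and the weighted-sum priority follow the examples in the text; the coefficient values and the node statistics are illustrative assumptions:

```python
# Sketch of steps S1031-S1037: reliability from normal/abnormal durations,
# priority as a weighted sum, distribution to the highest-priority node.
def reliability(normal_s, abnormal_s):
    # ratio of normal duration to total duration (1.0 if no history)
    total = normal_s + abnormal_s
    return normal_s / total if total else 1.0

def priority(processable, rel, w_tasks=1.0, w_rel=10.0):
    # w_tasks / w_rel are the "first coefficient" and "second coefficient";
    # their values here are illustrative, not prescribed by the method.
    return w_tasks * processable + w_rel * rel

def pick_node(stats):
    # stats: {name: (current_tasks, threshold, normal_seconds, abnormal_seconds)}
    scores = {
        name: priority(thr - cur, reliability(ok_s, bad_s))
        for name, (cur, thr, ok_s, bad_s) in stats.items()
    }
    return max(scores, key=scores.get)

stats = {
    "node-a": (2, 10, 3600, 0),     # 8 processable, reliability 1.0 -> score 18.0
    "node-b": (1, 12, 1800, 1800),  # 11 processable, reliability 0.5 -> score 16.0
}
chosen = pick_node(stats)
```

The example also shows why the reliability term matters: node-b has more spare capacity, but its history of abnormal time lets the steadier node-a win.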
And step S104, determining the next code statement after the current code statement to be executed as a new current code statement to be executed.
And after determining a new code statement to be executed currently, returning to execute the step S101 until the target code is executed completely.
The invention can be used for large-scale distributed computation. It can be implemented in the interpreter of a scripting language or in the compiler of a compiled language; it can realize parallel computation on a single machine, and can be applied to multi-machine parallel grid computing. For a scripting language, the script source code and the required variables can be submitted directly as a task; for a compiled language, the code to be parallelized can be compiled into binary code suitable for distributed invocation and submitted as a task together with its data, or the code to be compiled and the data can be submitted together as a task using a JIT approach.
This embodiment provides a unified function source code center: the subroutines called in parallel grid computing are obtained through this center, and when their source code changes, the module providing parallel grid computing is notified in time. Performing parallel grid computing in this way makes development very convenient. From the developer's perspective there is no need to understand the task or how it is decomposed; the method does not depend on a parallel method library; it is suitable for developing all kinds of unspecified parallel grid methods; and grid-decomposed tasks require no deployment: one can develop at any time, use at any time, and modify at any time.
In summary, the embodiments of the present invention provide an identifier for parallel processing: a user can mark in advance the code statements that need parallel processing, and during code execution, when the parallel processing identifier is found in the current code statement to be executed, the task to be processed indicated by that statement is distributed to a preset grid computing system for multi-thread parallel processing.
Example two:
fig. 3 is a schematic flow chart of a code execution method provided in an embodiment of the present invention, where the method may include:
step S301, in the execution process of the target code, judging whether an input variable with a data type being a preset grid computing type exists in a current code statement to be executed.
The grid computing type is a data type which needs to be subjected to multi-thread parallel processing in the grid computing system. A grid computing data type is added to the language's data types to store the results of parallel computation; when the language accesses such data (for assignment, operation, serialization, and so on), the access blocks until the parallel grid computation contained in the data has finished. Once the parallel computation has finished, the return value of the parallel grid computation is recorded in the data type.
If the data type of the input variable is the grid computing type in the current code statement to be executed, executing step S302 and the subsequent steps, and if the data type of the input variable is not the grid computing type in the current code statement to be executed, executing step S305 and the subsequent steps.
And step S302, acquiring the current processing state of the input variable.
In this embodiment, the processing states of a grid computing type variable include a completed state and an incomplete state, where the incomplete state covers pending-commit, committed, and error. The processing state of a grid computing type variable describes the state of the parallel task; the committed state records the necessary information required to execute the task, so that operations such as task termination and task resubmission can be performed.
Step S303, determining whether the current processing state of the input variable is an unfinished state.
If the current processing state of the input variable is an incomplete state, step S304 and the subsequent steps are performed, and if the current processing state of the input variable is a complete state, step S305 and the subsequent steps are performed.
And step S304, waiting for a preset time length.
The duration may be set according to the actual situation, and specifically, may also be set to 0, i.e. without waiting.
After waiting for the time period, the process returns to step S302 until the current processing state of the input variable is the completed state.
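The blocking behaviour of the grid computing type described in steps S302 to S304 can be sketched in Python with a future-backed wrapper. The class shape and method names are assumptions; the embodiment does not prescribe an implementation:

```python
# Sketch of the "grid computing type": a value wrapper whose read access
# blocks until the underlying parallel task has finished. The two states
# loosely model the embodiment's completed/incomplete states.
from concurrent.futures import ThreadPoolExecutor
import time

class GridValue:
    def __init__(self, future):
        self._future = future

    @property
    def state(self):
        return "completed" if self._future.done() else "incomplete"

    def get(self):
        # accessing the value waits for the parallel computation to finish
        return self._future.result()

pool = ThreadPoolExecutor()

def slow_square(x):
    time.sleep(0.05)  # stands in for a long-running grid task
    return x * x

v = GridValue(pool.submit(slow_square, 7))
# ... other statements keep executing here while the task runs ...
result = v.get()  # first access blocks until the task is in the completed state
```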
Step S305, determining whether a preset parallel processing identifier exists in the current code statement to be executed.
If the parallel processing identifier does not exist in the currently to-be-executed code statement, executing step S306 and step S308, and if the parallel processing identifier exists in the currently to-be-executed code statement, executing step S306 and step S307.
And S306, executing the current code statement to be executed according to a serial processing mode.
And step S307, distributing the task to be processed indicated by the current code statement to be executed to a preset grid computing system for multi-thread parallel processing.
Step S308, determining a next code statement after the currently to-be-executed code statement as a new currently to-be-executed code statement.
And after determining a new code statement to be executed currently, returning to execute the step S301 until the target code is executed completely.
The processes of steps S305 to S308 are the same as steps S101 to S104 in the first embodiment, and specific reference may be made to the description in the first embodiment, which is not repeated herein.
Fig. 4 shows a specific example of parallel grid computing: the symbol "#" is used as the parallel processing identifier, and the method following "#" is executed by parallel grid computing. As long as the result of the grid computation is not used, the program continues executing in parallel, until the result of the grid computation is accessed.
Specifically, in the formation of the array c, each step is independent of the results of the previous grid computations, so the tasks can run in parallel independently of one another; no special parallel for is needed here, because through the parallel processing identifier an ordinary for loop does the work of a parallel for. "#" forms a task from the data contained in the variable i used by subProc together with the called code of subProc, submits it to the task distributor, and puts a grid computing type value into the array c. When c is accessed, for example to compute sum or the standard deviation stddev, execution waits for the grid computing tasks contained in c to finish.
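A Python analogue of the FIG. 4 example can make this concrete. Here subProc is modelled by a hypothetical stand-in function and the "#" identifier by explicit futures; the patent's own syntax needs neither:

```python
# Each element of c is an independent grid task submitted inside an ordinary
# loop; sum and the standard deviation are computed only after every task ends.
from concurrent.futures import ThreadPoolExecutor
import statistics

pool = ThreadPoolExecutor()

def sub_proc(i):           # stands in for the subProc called after "#"
    return i * 2.0

c = [pool.submit(sub_proc, i) for i in range(5)]  # tasks run in parallel

values = [f.result() for f in c]  # accessing c waits for all tasks to finish
total = sum(values)               # 0 + 2 + 4 + 6 + 8 = 20.0
stddev = statistics.stdev(values)
```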
Preferably, the code execution method may further include exception handling, which allows a task to be transferred once an error occurs in its execution unit. Specifically, the current processing state of each computing node in the grid computing system may be obtained; if a computing node with an abnormal current processing state exists in the grid computing system, the tasks not yet processed by that node are transferred to computing nodes with a normal current processing state for continued processing.
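The task-transfer step can be sketched as follows. The node table and the round-robin reassignment are illustrative assumptions; the embodiment does not specify how stranded tasks are redistributed among the normal nodes:

```python
# Move unfinished tasks off nodes whose processing state is abnormal onto
# nodes whose processing state is normal.
def transfer_tasks(nodes):
    # nodes: {name: {"state": "normal" | "abnormal", "pending": [task, ...]}}
    healthy = [n for n, d in nodes.items() if d["state"] == "normal"]
    if not healthy:
        return nodes  # nowhere to transfer to
    for name, d in nodes.items():
        if d["state"] == "abnormal" and d["pending"]:
            for i, task in enumerate(d["pending"]):
                # round-robin the stranded tasks over the healthy nodes
                nodes[healthy[i % len(healthy)]]["pending"].append(task)
            d["pending"] = []
    return nodes

cluster = {
    "node-a": {"state": "normal", "pending": ["t1"]},
    "node-b": {"state": "abnormal", "pending": ["t2", "t3"]},
}
cluster = transfer_tasks(cluster)
```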
A concurrency limit can also be set on the task distributor to prevent the number of concurrent tasks from exceeding it; the task distributor can choose to queue or to reject tasks that exceed the limit.
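A minimal sketch of this concurrency limit with queue-or-reject policies; the class shape and policy names are assumptions:

```python
# Task distributor with a concurrency limit: tasks beyond the limit are
# either queued or rejected, depending on the configured policy.
from collections import deque

class Distributor:
    def __init__(self, limit, policy="queue"):  # policy: "queue" or "reject"
        self.limit, self.policy = limit, policy
        self.running, self.waiting = [], deque()

    def submit(self, task):
        if len(self.running) < self.limit:
            self.running.append(task)
            return "running"
        if self.policy == "queue":
            self.waiting.append(task)
            return "queued"
        return "rejected"

d = Distributor(limit=2, policy="queue")
states = [d.submit(t) for t in ("t1", "t2", "t3")]  # third task waits
```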
Preferably, a task management mechanism can further be provided: the state of parallel grid task assignment is checked at a management end, and an administrator is allowed to perform operations such as task termination.
In summary, building on the first embodiment, this embodiment of the present invention adds a processing mechanism in which parallel grid computing automatically blocks and waits only when its resources are accessed; as long as parallel grid resources are not accessed, execution remains parallel without blocking. This design supports nesting of parallel grids, and the number of parallel grids is not limited.
Embodiment three:
As shown in fig. 5, which is a schematic block diagram of a code execution apparatus provided in an embodiment of the present invention, the apparatus may include:
a parallel processing identifier determining module 501, configured to determine whether a preset parallel processing identifier exists in a current code statement to be executed in an execution process of a target code;
a task distributing module 502, configured to, if the parallel processing identifier exists in the current code statement to be executed, distribute a to-be-processed task indicated by the current code statement to be executed to a preset grid computing system for multi-threaded parallel processing, where the grid computing system includes more than two computing nodes;
a current to-be-executed statement determining module 503, configured to determine a next code statement after the current to-be-executed code statement as a new current to-be-executed code statement.
Further, the code execution apparatus may further include:
the grid computing type judging module is used for judging whether an input variable with a data type of a preset grid computing type exists in the code sentence to be executed currently, and the grid computing type is a data type needing to be subjected to multi-thread parallel processing in the grid computing system;
and a variable processing state obtaining module, configured to obtain the current processing state of the input variable if an input variable whose data type is the grid computing type exists in the current code statement to be executed.
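The access check these modules support (poll the input variable's processing state until it reaches the finished state, as in claim 2) can be sketched as follows. Representing the grid-computing-type variable as a future with `done()`/`result()` is an illustrative assumption; the patent does not specify this interface.

```python
# Hedged sketch: block statement execution until an input variable of
# grid-computing type reports a finished processing state.
import time
from concurrent.futures import ThreadPoolExecutor

def wait_until_finished(var, poll_interval=0.01):
    while not var.done():          # current processing state: unfinished
        time.sleep(poll_interval)  # re-acquire the state and check again
    return var.result()            # finished state: value is safe to use

with ThreadPoolExecutor() as ex:
    # a task that takes a moment before its result is ready
    fut = ex.submit(lambda: (time.sleep(0.05), 42)[1])
    value = wait_until_finished(fut)

print(value)  # 42
```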
Further, the task distribution module may include:
a current task number obtaining unit, configured to obtain the current task number of each computing node in the grid computing system;
a processable task number calculating unit, configured to calculate the processable task number of each computing node according to the current task number of each computing node and a preset task number threshold of each computing node;
and a first distribution unit, configured to distribute the task to be processed to the computing node with the largest number of processable tasks for processing.
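This first distribution strategy can be sketched in a few lines: the processable task number is the node's task-number threshold minus its current task number, and the pending task goes to the node where that margin is largest. The dictionary fields are illustrative assumptions.

```python
# Hedged sketch of the first distribution strategy: pick the node with
# the largest processable task count (threshold minus current tasks).
def pick_node(nodes):
    return max(nodes, key=lambda n: n["threshold"] - n["current"])

nodes = [
    {"id": "A", "current": 8, "threshold": 10},  # can take 2 more
    {"id": "B", "current": 3, "threshold": 10},  # can take 7 more
    {"id": "C", "current": 9, "threshold": 12},  # can take 3 more
]
best = pick_node(nodes)
print(best["id"])  # B
```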
Further, the task distribution module may further include:
the processing record acquisition module is used for respectively acquiring the historical task processing records of each computing node in a preset statistical time period;
the time length counting module is used for counting a first time length of which the processing state of each computing node is normal and a second time length of which the processing state is abnormal according to the historical task processing record;
the reliability calculation module is used for calculating the reliability of each calculation node according to the first time length and the second time length, wherein the reliability is positively correlated with the first time length and negatively correlated with the second time length;
a priority calculation module, configured to calculate a priority of each computing node according to the number of processable tasks and the reliability, where the priority is positively correlated with the number of processable tasks and positively correlated with the reliability;
and the second distribution unit is used for distributing the tasks to be processed to the computing nodes with the highest priority for processing.
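One way to realize these correlations is sketched below. The patent does not give concrete formulas, so the ratio-based reliability and the product-based priority are illustrative choices that merely satisfy the stated monotonicity: reliability rises with the first (normal) time length and falls with the second (abnormal) one, and priority rises with both the processable task count and the reliability.

```python
# Hedged sketch of the reliability/priority strategy with assumed formulas.
def reliability(t1, t2):
    # fraction of the statistical period spent in the normal state
    return t1 / (t1 + t2) if (t1 + t2) > 0 else 0.0

def priority(processable, t1, t2):
    # rises with processable task count and with reliability
    return processable * reliability(t1, t2)

nodes = [
    {"id": "A", "processable": 5, "t1": 90, "t2": 10},  # 5 * 0.9 = 4.5
    {"id": "B", "processable": 7, "t1": 50, "t2": 50},  # 7 * 0.5 = 3.5
]
best = max(nodes, key=lambda n: priority(n["processable"], n["t1"], n["t2"]))
print(best["id"])  # A
```

Note how the more reliable node A wins even though node B could take more tasks, which is the trade-off this strategy is meant to capture.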
Further, the code execution apparatus may further include:
a node processing state obtaining module, configured to obtain a current processing state of each computing node in the grid computing system;
and the task transferring module is used for transferring the task which is not processed by the computing node with the abnormal current processing state to the computing node with the normal current processing state for continuous processing if the computing node with the abnormal current processing state exists in the grid computing system.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
It should be understood that the sequence numbers of the steps in the foregoing embodiments do not imply an execution order; the execution order of each process should be determined by its function and internal logic, and should not constitute any limitation on the implementation process of the embodiments of the present invention.
Fig. 6 is a schematic block diagram of a code execution terminal device according to an embodiment of the present invention. As shown in fig. 6, the code execution terminal device 6 of this embodiment includes: a processor 60, a memory 61 and a computer program 62 stored in said memory 61 and executable on said processor 60. The processor 60, when executing the computer program 62, implements the steps in the various code execution method embodiments described above, such as the steps S101 to S104 shown in fig. 2. Alternatively, the processor 60, when executing the computer program 62, implements the functions of each module/unit in the above-mentioned device embodiments, for example, the functions of the modules 501 to 503 shown in fig. 5.
Illustratively, the computer program 62 may be partitioned into one or more modules/units that are stored in the memory 61 and executed by the processor 60 to implement the present invention. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 62 in the code execution terminal device 6. For example, the computer program 62 may be divided into a parallel processing identifier determination module, a task distribution module, and a current to-be-executed statement determination module.
The code execution terminal device 6 may be a desktop computer, a notebook, a palmtop computer, a cloud server, or another computing device. The code execution terminal device may include, but is not limited to, the processor 60 and the memory 61. It will be understood by those skilled in the art that fig. 6 is only an example of the code execution terminal device 6 and does not constitute a limitation of it; the device may include more or fewer components than those shown, or combine some components, or use different components. For example, the code execution terminal device 6 may further include input/output devices, a network access device, a bus, and the like.
The Processor 60 may be a Central Processing Unit (CPU), other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, etc. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 61 may be an internal storage unit of the code execution terminal device 6, such as a hard disk or an internal memory of the code execution terminal device 6. The memory 61 may also be an external storage device of the code execution terminal device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card) provided on the code execution terminal device 6. Further, the memory 61 may include both an internal storage unit and an external storage device of the code execution terminal device 6. The memory 61 is used to store the computer program and the other programs and data required by the code execution terminal device 6, and may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present invention.
In the embodiments provided in the present invention, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implemented, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as independent products, may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods in the embodiments of the present invention may also be implemented by a computer program, which may be stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the method embodiments may be implemented. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, or the like. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals, in accordance with legislation and patent practice.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present invention, and are intended to be included within the scope of the present invention.

Claims (10)

1. A method of code execution, comprising:
in the execution process of the target code, judging whether a preset parallel processing identifier exists in a current code statement to be executed or not;
if the parallel processing identifier exists in the current code statement to be executed, distributing a task to be processed indicated by the current code statement to be executed to a preset grid computing system for multi-thread parallel processing, wherein the grid computing system comprises more than two computing nodes;
and determining the next code statement after the current code statement to be executed as a new current code statement to be executed, and then returning to the step of judging whether a preset parallel processing identifier exists in the current code statement to be executed until the target code is executed.
2. The code execution method according to claim 1, wherein before the judging whether a preset parallel processing identifier exists in the current code statement to be executed, the method further comprises:
judging whether an input variable with a data type of a preset grid computing type exists in the current code statement to be executed, wherein the grid computing type is a data type needing to be subjected to multi-thread parallel processing in the grid computing system;
if the data type of the input variable is the grid computing type, acquiring the current processing state of the input variable;
if the current processing state of the input variable is an unfinished state, returning to the step of acquiring the current processing state of the input variable until the current processing state of the input variable is a finished state;
and if the current processing state of the input variable is a finished state, executing the step of judging whether a preset parallel processing identifier exists in the current code statement to be executed.
3. The code execution method of claim 1, wherein the distributing the to-be-processed task indicated by the currently-to-be-executed code statement to a preset grid computing system for multi-thread parallel processing comprises:
respectively acquiring the current task number of each computing node in the grid computing system;
calculating the processable task number of each computing node according to the current task number of each computing node and a preset task number threshold value of each computing node;
and distributing the tasks to be processed to the computing nodes with the maximum number of the tasks to be processed for processing.
4. The code execution method of claim 3, wherein the distributing the to-be-processed task indicated by the currently-to-be-executed code statement to a preset grid computing system for multi-thread parallel processing further comprises:
respectively acquiring historical task processing records of each computing node in a preset statistical time period;
counting a first time length of which the processing state of each computing node is normal and a second time length of which the processing state is abnormal according to the historical task processing record;
calculating the reliability of each computing node according to the first time length and the second time length, wherein the reliability is positively correlated with the first time length and negatively correlated with the second time length;
calculating the priority of each computing node according to the processable task number and the reliability, wherein the priority is positively correlated with the processable task number and the reliability;
and distributing the task to be processed to the computing node with the highest priority for processing.
5. The code execution method of any one of claims 1 to 4, further comprising:
acquiring the current processing state of each computing node in the grid computing system;
if the grid computing system has the computing node with the abnormal current processing state, transferring the task which is not processed by the computing node with the abnormal current processing state to the computing node with the normal current processing state for continuous processing.
6. A code execution apparatus, comprising:
the parallel processing identifier judging module is used for judging whether a preset parallel processing identifier exists in a current code statement to be executed or not in the execution process of the target code;
a task distribution module, configured to distribute, if the parallel processing identifier exists in the current code statement to be executed, a to-be-processed task indicated by the current code statement to be executed to a preset grid computing system for multi-threaded parallel processing, where the grid computing system includes more than two computing nodes;
and the current to-be-executed statement determining module is used for determining the next code statement after the current to-be-executed code statement as the new current to-be-executed code statement.
7. The code execution apparatus of claim 6, further comprising:
the grid computing type judging module is used for judging whether an input variable whose data type is a preset grid computing type exists in the current code statement to be executed, wherein the grid computing type is a data type that requires multi-threaded parallel processing in the grid computing system;
a variable processing state obtaining module, configured to obtain a current processing state of an input variable if an input variable whose data type is the grid computing type exists in the current code statement to be executed; if the current processing state of the input variable is an unfinished state, returning to the step of acquiring the current processing state of the input variable until the current processing state of the input variable is a finished state;
and if the current processing state of the input variable is a finished state, the parallel processing identifier judging module executes the step of judging whether a preset parallel processing identifier exists in the current code statement to be executed.
8. The code execution device of claim 6 or 7, wherein the task distribution module comprises:
a current task number obtaining unit, configured to obtain current task numbers of each computing node in the grid computing system respectively;
the processable task number calculating unit is used for calculating the processable task number of each computing node according to the current task number of each computing node and a preset task number threshold value of each computing node;
and the first distribution unit is used for distributing the tasks to be processed to the computing nodes with the largest number of processable tasks for processing.
9. A code execution terminal device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the code execution method according to any one of claims 1 to 5 when executing the computer program.
10. A computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the steps of the code execution method according to any one of claims 1 to 5.
CN201810007169.8A 2018-01-04 2018-01-04 Code execution method and device, terminal equipment and computer readable storage medium Active CN108334408B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810007169.8A CN108334408B (en) 2018-01-04 2018-01-04 Code execution method and device, terminal equipment and computer readable storage medium


Publications (2)

Publication Number Publication Date
CN108334408A CN108334408A (en) 2018-07-27
CN108334408B true CN108334408B (en) 2020-10-02

Family

ID=62924741

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810007169.8A Active CN108334408B (en) 2018-01-04 2018-01-04 Code execution method and device, terminal equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN108334408B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020177074A1 (en) * 2019-03-05 2020-09-10 深圳市天软科技开发有限公司 Data extraction method, terminal device and computer readable storage medium
CN110134437B (en) * 2019-05-13 2022-12-16 中国电子科技集团公司第三十八研究所 Software flow optimization method and device
CN111198689B (en) * 2019-12-30 2023-04-28 北京明略软件系统有限公司 Code execution method, device and computer readable storage medium
CN112000909B (en) * 2020-10-29 2021-02-09 南京研利科技有限公司 Method, computing device and storage medium for browser information processing
CN113110329B (en) * 2021-04-14 2023-01-10 深圳赛动智造科技有限公司 Parallel operation control method, device, system and medium based on stem cell preparation
CN115617322A (en) * 2022-09-29 2023-01-17 联通智网科技股份有限公司 Customized script running method and device

Citations (2)

Publication number Priority date Publication date Assignee Title
CN102098223A (en) * 2011-02-12 2011-06-15 浪潮(北京)电子信息产业有限公司 Method, device and system for scheduling node devices
CN103942099A (en) * 2014-04-30 2014-07-23 广州唯品会网络技术有限公司 Parallel task execution method and device based on Hive

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
CN107122246B (en) * 2017-04-27 2020-05-19 中国海洋石油集团有限公司 Intelligent numerical simulation operation management and feedback method

Patent Citations (2)

Publication number Priority date Publication date Assignee Title
CN102098223A (en) * 2011-02-12 2011-06-15 浪潮(北京)电子信息产业有限公司 Method, device and system for scheduling node devices
CN103942099A (en) * 2014-04-30 2014-07-23 广州唯品会网络技术有限公司 Parallel task execution method and device based on Hive

Also Published As

Publication number Publication date
CN108334408A (en) 2018-07-27

Similar Documents

Publication Publication Date Title
CN108334408B (en) Code execution method and device, terminal equipment and computer readable storage medium
US10642642B2 (en) Techniques to manage virtual classes for statistical tests
US8595732B2 (en) Reducing the response time of flexible highly data parallel task by assigning task sets using dynamic combined longest processing time scheme
CN111738446B (en) Scheduling method, device, equipment and medium of deep learning reasoning engine
CN109067841B (en) Service current limiting method, system, server and storage medium based on ZooKeeper
US11907770B2 (en) Method and apparatus for vectorized resource scheduling in distributed computing systems using tensors
CN115880132B (en) Graphics processor, matrix multiplication task processing method, device and storage medium
CN109614227A (en) Task resource concocting method, device, electronic equipment and computer-readable medium
CN109492024A (en) Data processing method, device, computer equipment and storage medium
CN109343972A (en) Task processing method and terminal device
CN115543577B (en) Covariate-based Kubernetes resource scheduling optimization method, storage medium and device
CN115033352A (en) Task scheduling method, device and equipment for multi-core processor and storage medium
CN116302708A (en) Data backup method, device, equipment and storage medium based on load balancing
CN107977504A (en) A kind of asymmetric in-core fuel management computational methods, device and terminal device
US11372633B2 (en) Method, device and terminal apparatus for code execution and computer readable storage medium
CN111324454A (en) Multi-core CPU allocation method and device, electronic equipment and storage medium
Unger Programming languages for computer system simulation
CN116701091A (en) Method, electronic device and computer program product for deriving logs
CN115033374A (en) Task-to-thread matching method of multi-core programmable controller
Sen et al. Predictive price-performance optimization for serverless query processing
CN113010315A (en) Resource allocation method, resource allocation device and computer-readable storage medium
CN111353766A (en) Service process processing system and method of distributed service system
CN115329923A (en) Compiling method for neural network model and related product
CN110929957A (en) Optimization method and device for comprehensive energy system
CN110865877A (en) Task request response method and device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant