EP2193441A2 - Method, system and computer program for scheduling the execution of tasks required by events - Google Patents

Method, system and computer program for scheduling the execution of tasks required by events

Info

Publication number
EP2193441A2
EP2193441A2 (application EP08786903A)
Authority
EP
European Patent Office
Prior art keywords
event
target entity
execution
events
action
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Ceased
Application number
EP08786903A
Other languages
German (de)
English (en)
Inventor
Franco Mossotto
Arcangelo Di Balsamo
Pietro Iannucci
Francesca Pasceri
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Priority to EP08786903A priority Critical patent/EP2193441A2/fr
Publication of EP2193441A2 publication Critical patent/EP2193441A2/fr
Ceased legal-status Critical Current

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/54Interprogram communication
    • G06F9/542Event management; Broadcasting; Multicasting; Notifications
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F9/00Arrangements for program control, e.g. control units
    • G06F9/06Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F9/46Multiprogramming arrangements
    • G06F9/50Allocation of resources, e.g. of the central processing unit [CPU]
    • G06F9/5005Allocation of resources, e.g. of the central processing unit [CPU] to service a request
    • G06F9/5027Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals
    • G06F9/5055Allocation of resources, e.g. of the central processing unit [CPU] to service a request the resource being a machine, e.g. CPUs, Servers, Terminals considering software capabilities, i.e. software resources associated or available to the machine

Definitions

  • the present invention relates to the data processing field. More specifically, the present invention relates to the scheduling of the execution of jobs in a data processing system.
  • Workload schedulers are commonly used to control the execution of large quantities of jobs in a data processing system.
  • An example of a commercial scheduler is the "IBM Tivoli Workload Scheduler (TWS)" by IBM Corporation.
  • the jobs consist of any sort of work units that can be executed in the system.
  • the scheduler is used to control the downloading of configuration files to network devices (in a network configuration management system).
  • Each configuration file is generated dynamically by evaluating corresponding policies, which are formed by one or more rules; each rule includes conditions (for determining how to identify the corresponding network devices in an infrastructure database), actions (for determining how to set desired configuration parameters) and verifications (for determining how to interpret any discrepancy between the rules and the actual configurations of the corresponding network devices during the verification of a network configuration).
  • the scheduler controls the execution of the jobs on multiple workstations from a central scheduling server; the workstation for each job may be either defined statically or selected dynamically when the job is submitted for execution (among all the available ones having the required characteristics).
  • the latter solution allows implementing systems that are easily scalable and highly reliable; moreover, workload balancing techniques may be exploited to optimize the distribution of the jobs on the workstations.
  • the submission of the jobs is controlled according to a predefined workload plan (or simply plan).
  • the plan establishes a flow of execution of the jobs based on temporal constraints (i.e., date and/or time); in addition, the execution of the jobs may also be conditioned on specific dependencies (such as the completion of preceding jobs).
  • the schedulers are completely ineffective in controlling the execution of jobs that are not defined in the plan. This is a problem when the need to execute a job is not known a priori (for example, because it is triggered by the occurrence of a specific event).
  • this document discloses a system for auditing an Information Technology (IT) infrastructure of an enterprise.
  • a server of the system controls the execution of static assessments or dynamic assessments (including sequences of steps defined in corresponding policies) of particular resources of the IT infrastructure.
  • the assessments may be triggered by exploiting a scheduler as usual to provide year, date and time of day information; alternatively, the same assessments may also be triggered by predefined events detected on nodes of the system.
  • each node must monitor all the possible events of interest; the information so obtained is then collected on the server from the different nodes.
  • the present disclosure is aimed at supporting the scheduling of jobs either according to a plan or in response to events.
  • different aspects of the present invention provide a solution as set out in the independent claims.
  • Advantageous embodiments of the invention are described in the dependent claims.
  • an aspect of the invention proposes a method for scheduling execution of jobs on target entities of a data processing system, under the control of a scheduling entity of the system.
  • The method starts with the step of providing a plan, which defines a flow of execution of a set of jobs. The method continues by submitting each job for execution on a selected target entity according to the plan. A set of rules is also provided; each rule defines an action to be executed on an action target entity in response to an event on an event target entity. The method then includes the step of determining the events that are defined for each event target entity in the rules. Each event target entity is then enabled to detect the corresponding events. The execution of each action on the corresponding action target entity is then triggered in response to the detection of the corresponding event.
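The steps above can be sketched as a minimal data model. All identifiers below (rule fields, event names, workstation names) are hypothetical illustrations, not taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class Rule:
    # Each rule links an event on an event target entity
    # to an action on an action target entity.
    event: str          # event to be detected (e.g. "file_created")
    event_target: str   # workstation where the event is detected
    action: str         # action to be executed (e.g. a job submission)
    action_target: str  # workstation where the action is executed

def events_per_target(rules):
    """Determine the events that are defined for each event target."""
    result = {}
    for rule in rules:
        result.setdefault(rule.event_target, set()).add(rule.event)
    return result

rules = [
    Rule("file_created", "ws1", "submit job_backup", "ws2"),
    Rule("job_failed", "ws1", "notify operator", "ws3"),
    Rule("file_created", "ws4", "submit job_scan", "ws4"),
]
per_target = events_per_target(rules)
```

Each event target entity would then be enabled to detect only the events listed for it, as described in the following paragraphs.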
  • the actions may consist of further jobs that are not even defined in the plan.
  • each (event) workstation is enabled to detect the corresponding events by deploying a configuration structure for one or more detection modules running on it.
  • the deployment of the configuration structure is prevented when it is equal to a previous version thereof already available on the workstation.
  • the server receives the notification of each event from each (event) workstation, and then submits the corresponding action for execution on the relevant (action) workstation.
  • a way to further improve the solution is to monitor the rules, so as to perform the operations described above only in response to any change thereof.
  • Another aspect of the invention proposes a computer program for performing the above-described method.
  • a different aspect of the invention proposes a corresponding system.
  • FIG.1 is a schematic block diagram of a data processing system in which the solution according to an embodiment of the invention may be applied,
  • FIG.2 shows the functional blocks of an exemplary computer of the system
  • FIG.3 illustrates the main software components that can be used to implement the solution according to an embodiment of the invention.
  • FIGs.4A-4B show a diagram describing the flow of activities relating to an implementation of the solution according to an embodiment of the invention.
  • the system 100 includes a scheduling server (or simply server) 105, which is used to control the execution of jobs in the system 100; typically, the jobs consist of batch (i.e., non-interactive) applications - such as payroll or cost analysis programs.
  • the jobs are executed under the control of the server 105 on a plurality of target workstations (or simply workstations) 110.
  • the server 105 and the workstations 110 communicate through a network 115 (for example, a LAN).
  • a generic computer of the above-described system (server or workstation) is denoted with 200.
  • the computer 200 is formed by several units that are connected in parallel to a system bus 205 (with a structure that is suitably scaled according to the actual function of the computer 200 in the system).
  • one or more microprocessors (µP) 210 control operation of the computer 200;
  • a RAM 215 is directly used as a working memory by the microprocessors 210, and
  • a ROM 220 stores basic code for a bootstrap of the computer 200.
  • Several peripheral units are clustered around a local bus 225 (by means of respective interfaces).
  • a mass memory consists of one or more hard-disks 230 and drives 235 for reading CD-ROMs 240.
  • the computer 200 includes input units 245 (for example, a keyboard and a mouse), and output units 250 (for example, a monitor and a printer).
  • An adapter 255 is used to connect the computer 200 to the network (not shown in the figure).
  • a bridge unit 260 interfaces the system bus 205 with the local bus 225.
  • Each microprocessor 210 and the bridge unit 260 can operate as master agents requesting an access to the system bus 205 for transmitting information.
  • An arbiter 265 manages the granting of the access with mutual exclusion to the system bus 205.
  • the information is typically stored on the hard-disk and loaded (at least partially) into the working memory of each computer when the programs are running, together with an operating system and other application programs (not shown in the figure).
  • the programs are initially installed onto the hard disk, for example, from CD-ROM.
  • the server 105 runs a scheduler 305 (for example, the above-mentioned TWS) .
  • the scheduler 305 includes a configurator 310 (such as the "Composer" of the TWS), which is used to maintain a workload database 315 (written in a suitable control language - for example, XML-based).
  • the workload database 315 contains a definition of all the workstations that are available to the scheduler 305; for example, each workstation is defined by information for accessing it (such as name, address, and the like), together with its physical/logical characteristics (such as processing power, memory size, operating system, and the like).
  • the workload database 315 also includes a descriptor of each job. The job descriptor specifies the programs to be invoked (with their arguments and environmental variables).
  • the job descriptor indicates the workstations on which the job may be executed - either statically (by their names) or dynamically (by their characteristics).
  • the job descriptor then provides temporal constraints for the execution of the job (such as its run-cycle, like every day, week or month, an earliest time or a latest time for its starting, or a maximum allowable duration).
  • the job descriptor specifies dependencies of the job (i.e., conditions that must be met before the job can start); exemplary dependencies are sequence constraints (such as the successful completion of other jobs), or enabling constraints (such as the entering of a response to a prompt by an operator).
  • each job stream consists of an ordered sequence of (logically related) jobs, which should be run as a single work unit respecting predefined dependencies.
  • job will be used hereinafter to denote either a single job or a job stream.
  • the workload database 315 also stores statistic information relating to previous executions of the jobs (such as a log of their duration, from which a corresponding estimated duration may be inferred).
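As an illustration only, such a job descriptor might be modeled as follows. The real TWS control language is XML-based and its actual field names differ; everything below is an assumption for the sake of the example:

```python
# Hypothetical shape of a job descriptor; the real TWS control language
# is XML-based and its actual field names differ.
job_descriptor = {
    "name": "payroll_nightly",
    "program": "/opt/batch/payroll.sh",          # program to be invoked
    "arguments": ["--month", "current"],
    "environment": {"REGION": "EU"},
    # workstation selection: static (by name) or dynamic (by traits)
    "workstations": {"os": "linux", "min_memory_mb": 4096},
    # temporal constraints
    "run_cycle": "daily",
    "earliest_start": "22:00",
    "latest_start": "23:30",
    "max_duration_min": 90,
    # dependencies: sequence and enabling constraints
    "dependencies": ["job:ledger_close", "prompt:operator_ok"],
}
```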
  • a planner 320 (such as the "Master Domain Manager" of the TWS) is used to create a plan, whose definition is stored in a control file 325 (such as the "Symphony" of the TWS).
  • the plan specifies the flow of execution of a batch of jobs in a specific production period (typically, one day), together with the definition of the required workstations.
  • a new plan is generally created automatically before every production period.
  • the planner 320 processes the information available in the workload database 315 so as to select the jobs to be run and to arrange them in the desired sequence (according to their expected duration, temporal constraints, and dependencies).
  • the planner 320 creates the plan by adding the jobs to be executed (for the next production period) and by removing the preexisting jobs (of the previous production period) that have been completed; in addition, the jobs of the previous production period that did not complete successfully or that are still running or waiting to be run can be maintained in the plan (for their execution during the next production period).
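The carry-forward logic just described can be sketched as follows; the status values and field names are hypothetical, not the patent's wording:

```python
def build_plan(new_jobs, previous_plan):
    """Plan for the next production period: carry forward jobs of the
    previous period that did not complete successfully or are still
    running or waiting, then add the newly selected jobs.
    (Status values are illustrative, not the patent's wording.)"""
    carried = [job for job in previous_plan
               if job["status"] in ("failed", "running", "waiting")]
    return carried + list(new_jobs)

previous = [
    {"name": "jobA", "status": "completed"},  # removed from the plan
    {"name": "jobB", "status": "failed"},     # kept for re-execution
    {"name": "jobC", "status": "running"},    # kept
]
plan = build_plan([{"name": "jobD", "status": "waiting"}], previous)
```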
  • a handler 330 (such as the "Batchman" process of the TWS) starts the plan at the beginning of every production period.
  • the handler 330 submits each job for execution as soon as possible; for this purpose, the handler 330 selects a workstation - among the available ones - having the required characteristics (typically, according to information provided by a load balancer - not shown in the figure).
  • the actual execution of the jobs is managed by a corresponding executor module 335 (such as the "Jobman" process of the TWS); for this purpose, the executor 335 interfaces with an execution agent 340 running on each workstation 110 (only one shown in the figure).
  • the agent 340 enforces the execution of each job in response to a corresponding command received from the executor 335, and returns feedback information relating to the result of its execution (for example, whether the job has been completed successfully, its actual duration, and the like).
  • the feedback information of all the executed jobs is passed by the executor 335 to the handler 330, which enters it into the control file 325 (so as to provide a real-time picture of the current state of all the jobs of the plan).
  • the planner 320 accesses the control file 325 for updating the statistic information relating to the executed jobs in the workload database 315.
  • the scheduler 305 also supports the execution of jobs (or more generally, any other actions) in response to corresponding events.
  • each workstation is enabled to detect only the events of interest - i.e., the ones whose occurrence on the workstation triggers the execution of a corresponding action (for example, by deploying customized configuration files selectively).
  • the scheduler can control the execution of any actions, even when the need for their execution is not known a priori; particularly, this allows submitting jobs that are not defined in the plan.
  • the desired result is achieved with a minimal overhead of the workstations and the server; moreover, no significant increase of the network traffic is brought about.
  • an editor 345 is used to maintain a rule repository 350 (preferably secured by an authentication/authorization mechanism to control any update thereof).
  • Each rule in the repository 350 defines an action to be executed on a corresponding (action) workstation in response to the detection of an event on a corresponding (event) workstation.
  • the actions consist of the submission of a job for its execution; in this respect, it is emphasized that the rule can specify any job, even if it is not included in the plan.
  • Other actions may be supported - for example, an e-mail notification to a user, the turn-on of a workstation, and the like.
  • the events may be detected and the actions may be executed on any computer of the system; for example, the events relating to the change of status of the jobs are detected by the server itself (in this case operating as a workstation as well); moreover, the actions consisting of the submissions of the jobs may be executed on workstations that are defined either statically or dynamically (according to required characteristics).
  • a set of plug-in modules is provided for detecting the events and for executing the actions (different from the submission of the jobs); an example of (event) plug-in may be a file scanner, whereas an example of (action) plug-in may be an e-mail sender.
  • the rule repository 350 is accessed by the planner 320 (so as to add the information required for the detection of the events and the execution of the corresponding actions into the control file 325).
  • An event plug-in database 355 associates each event with the corresponding event plug-in for its detection.
  • a monitor 360 processes the rules in the repository 350 (for example, whenever a change is detected). More specifically, the monitor 360 determines the events that are defined for each workstation in the rules.
  • the monitor 360 then creates a configuration file for each event plug-in associated with these events (as indicated in the event plug-in database 355); the configuration file sets configuration parameters of the event plug-in that enable it to detect the desired event(s).
  • the configuration files of each workstation are then combined into a single configuration archive (for example, in a compressed form).
  • the monitor 360 saves all the configuration archives so obtained into a corresponding repository 365.
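The bundling of per-plug-in configuration files into a compressed archive per workstation might look like the following sketch, using Python's `zipfile`; the file names and contents are assumptions, and the patent does not prescribe any particular archive format:

```python
import io
import zipfile

def build_archive(config_files):
    """Combine the configuration files of one workstation into a single
    compressed archive, returned as raw bytes."""
    buf = io.BytesIO()
    with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
        for name, content in sorted(config_files.items()):
            zf.writestr(name, content)
    return buf.getvalue()

configs = {
    "file_scanner.cfg": "watch=/var/spool/in\nevent=file_created\n",
    "job_monitor.cfg": "event=job_failed\n",
}
archive = build_archive(configs)
# The archive round-trips: its member list matches the input files.
names = zipfile.ZipFile(io.BytesIO(archive)).namelist()
```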
  • the monitor 360 calculates a Cyclic Redundancy Code (CRC) of each configuration archive (by applying a 16- or 32-bit polynomial to it).
  • a configuration table 370 is used to associate each workstation with the corresponding configuration archive and its CRC (under the control of the monitor 360).
  • a deployer 375 transmits each CRC to the corresponding workstation (as indicated in the configuration table 370); for this purpose, the deployer 375 retrieves the required information from the definition of the workstations in the control file 325. With reference to the same workstation 110 as above for the sake of simplicity, this information is received by a controller 380.
  • the controller 380 accesses the current configuration files (denoted with 385) of the (event and/or action) plug-ins that are installed on the workstation; when the received CRC does not match the CRC of the current configuration, the controller 380 downloads the (new) configuration archive from the server 105.
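A minimal sketch of this check, assuming a 32-bit CRC as computed by Python's `zlib.crc32` (the patent allows a 16- or 32-bit polynomial and does not prescribe an implementation):

```python
import zlib

def needs_download(received_crc, current_archive):
    """Return True when the CRC received from the server differs from
    the CRC of the configuration archive currently on the workstation,
    so that the full archive is transferred only when it has changed."""
    return received_crc != zlib.crc32(current_archive)

old_archive = b"event plug-in configuration, previous version"
new_archive = b"event plug-in configuration, new version"
server_crc = zlib.crc32(new_archive)  # CRC sent by the deployer

download_needed = needs_download(server_crc, old_archive)  # changed
skip_ok = not needs_download(server_crc, new_archive)      # unchanged
```

Transmitting only the short CRC, and the full archive only on a mismatch, is what keeps the network traffic low, as the description emphasizes.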
  • the plug-ins 390 interface with the agent 340 for exchanging information with the server 105.
  • the agent 340 notifies the events detected on the workstation 110 to an event collector 391; preferably, the notifications of the events provided by the workstation 110 are encrypted and secured, so as to ensure their confidentiality and integrity.
  • the event collector 391 passes the notifications of the events detected on all the workstations to an event correlator 392.
  • the event correlator 392 accesses the rule repository 350, so as to determine the actions to be executed in response thereto (together with the corresponding workstations) .
  • the event correlator 392 calls the handler 330 (by passing this information) .
  • the handler 330 accesses an action plug-in database 393, which associates each action with the corresponding action plug-in for its execution.
  • the handler 330 then invokes the action plug-in - denoted as a whole with 394 - associated with the action to be executed (as indicated in the action plug-in database 393).
  • Each action plug-in 394 manages the actual execution of the corresponding action on the desired workstations; for this purpose, the action plug-in 394 interfaces with the agent 340 running on each relevant workstation (as shown for the same workstation 110 as above in the figure).
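The dispatch from the action plug-in database to the plug-ins can be sketched as a simple registry; all plug-in and action names below are hypothetical:

```python
# Hypothetical action plug-in registry, mirroring the role of the
# action plug-in database: action type -> plug-in callable.
executed = []

def submit_job(workstation, payload):
    executed.append(("job", workstation, payload))

def send_email(workstation, payload):
    executed.append(("email", workstation, payload))

action_plugin_db = {
    "submit_job": submit_job,
    "send_email": send_email,
}

def execute_action(action_type, workstation, payload):
    # The handler invokes the plug-in associated with the action.
    plugin = action_plugin_db[action_type]  # KeyError if unsupported
    plugin(workstation, payload)

execute_action("submit_job", "ws2", "job_backup")
execute_action("send_email", "ws3", "job_failed on ws1")
```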
  • the action plug-ins 394 may also include modules adapted to perform user notifications (for example, by e-mail).
  • With reference to FIGs.4A-4B, the logic flow of an exemplary process that can be implemented in the above-described system to schedule the execution of jobs is represented with a method 400.
  • the method begins at the black start circle 403 in the swim-lane of the server.
  • the process passes to block 409; in this phase, the definition of the plan (including the specification of the flow of execution of the jobs and the definition of the workstations required for their execution) is generated and then stored into the control file.
  • the flow of activity passes to block 412 when the monitor detects any change in the rules (stored in the corresponding repository).
  • the plan is regenerated and replaced in the control file, so as to add the definition of the workstations where the events are to be detected and the corresponding actions are to be executed.
  • a loop is then performed for processing the rules that have been changed; the loop begins at block 418, wherein every changed rule is identified (starting from the first one).
  • the event plug-in associated with the event specified in the (current) changed rule is extracted from the event plug-in database.
  • this event plug-in is invoked (by passing an indication of the event to be detected); in this way, the configuration file of the event plug-in is generated (with the corresponding configuration parameters properly set so as to enable the event plug-in to detect the desired event).
  • the workstation wherein the event indicated in the rule is to be detected is identified at block 430.
  • the configuration file so obtained is added to the configuration archive of this workstation.
  • the workstation at block 451 requests the new configuration archive from the server.
  • the required new configuration archive is transmitted to the workstation at block 454.
  • Once the new configuration archive has been received by the workstation at block 457, its configuration files are extracted and installed onto the workstation.
  • the method then descends into block 460 in the swim-lane of the server; the same point is also reached directly from block 448 when the new CRC is equal to the old CRC. At this point, a test is made to determine whether all the new configuration archives have been processed. If not, the method returns to block 439 to repeat the same operations described above for another new configuration archive.
  • the method passes from block 466 to block 469; in this phase, the job is submitted for execution on a selected workstation (among the available ones having the required characteristics). In response thereto, the job is executed on the (selected) workstation at block 472 - for the sake of simplicity, represented with the same one as above.
  • the workstation returns feedback information (relating to the result of the execution of the job) to the server. Moving to the swim-lane of the server at block 478, the feedback information is entered into the control file.
  • the flow of activity passes to block 481 whenever a generic (event) workstation - for the sake of simplicity, represented with the same one as above - detects one of the events indicated in the configuration files of its event plug-ins. In response thereto, the workstation notifies the event to the server at block 484.
  • any actions to be executed in response to this event (together with the corresponding workstations) are determined according to the rules extracted from the rule repository.
  • the event correlator may simply evaluate the rules (each one defining the execution of an action in response to an event); moreover, the event correlator may also evaluate relationships among the rules (for example, defining the execution of an action in response to the detection of different events).
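A toy correlator along these lines would fire a rule once all of its required events have been notified. The semantics here are illustrative assumptions (for instance, seen events are never reset, so a satisfied rule keeps firing):

```python
class EventCorrelator:
    """Toy correlator: a rule fires when all of its required events
    have been notified, in any order. Seen events are never reset,
    so a satisfied rule keeps firing; a real correlator would be
    more careful about state and time windows."""
    def __init__(self, rules):
        # rules: {rule_name: (required_events, action)}
        self.rules = rules
        self.seen = set()

    def notify(self, event):
        self.seen.add(event)
        return [action for required, action in self.rules.values()
                if required <= self.seen]

correlator = EventCorrelator({
    "r1": ({"file_created"}, "submit job_scan"),
    "r2": ({"job_failed", "disk_full"}, "notify operator"),
})
first = correlator.notify("job_failed")     # no rule satisfied yet
second = correlator.notify("file_created")  # r1 satisfied
third = correlator.notify("disk_full")      # r1 and r2 satisfied
```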
  • the server submits the execution of each action on the corresponding workstation
  • the handler invokes the corresponding action plug-in (as indicated in the action plug-in database).
  • the server may also send a corresponding notification (for example, with an e-mail to the user of the workstation 110) .
  • the action is executed on the workstation at block 490 (by means of the execution agent or the corresponding action plug-in).
  • the workstation returns feedback information (relating to the result of the execution of the action) to the server.
  • the feedback information is entered into the control file as above.
  • the flow of activity then ends at the concentric white/black stop circles 499.
  • the proposed solution lends itself to be implemented with an equivalent method (by using similar steps, removing some non-essential steps, or adding further optional steps); moreover, the steps may be performed in a different order, concurrently or in an interleaved way (at least in part).
  • the same solution may be applied to any other workload scheduler (or equivalent application).
  • the same solution may be used to schedule the execution of any kind of work units (for example, interactive tasks).
  • the plan may be defined and/or generated in a different way - for example, based on any additional or alternative temporal constraints or dependencies (even based on dynamic relationships among the workstations); in addition, any other criteria may be used for selecting the workstations for the submission of the jobs (for example, according to statistic methods for distributing the execution of the jobs uniformly).
  • the proposed solution may be implemented with any other type of rules (or policies) for defining actions to be executed in response to corresponding events; likewise, the above-described events and actions are merely illustrative, and they are not to be interpreted in a limitative manner.
  • (basic) rules may be combined into (complex) rules with any logical operator (such as OR, AND, and the like), so as to define the execution of actions in response to any combination of events (even on different workstations); likewise, the rules may define the execution of (complex) actions consisting of multiple (basic) actions, even on (complex) entities each one consisting of multiple (basic) workstations - i.e., by aggregating more rules based on the same event.
  • the events may consist of the outcome of other rules; moreover, the actions may also be conditioned by temporal constraints and/or dependencies. Similar considerations apply if the notifications are sent to additional or different users, if they are made by SMS, and the like.
  • the actions may only consist of jobs, of notifications, or of any other predefined type of operations; of course, regenerating the plan to include the information relating to the rules is not strictly necessary.
  • the configuration files and the configuration archives may be replaced with equivalent structures (for example, simply consisting of commands for forcing the desired behavior of the event plug-ins); moreover, the configuration files may be deployed to the relevant workstations in any other way (for example, by exploiting a software distribution infrastructure).
  • the CRC may be of another type (for example, a CRC-4); alternatively, it may be replaced by a simple checksum of the configuration archive, by a hash value, or more generally by any other digest value representing the configuration archive in a far shorter form.
  • a general variant of the proposed solution also allows each (event) workstation to notify each event to the corresponding (action) workstations directly - without passing through the server. For example, this may happen for every event or only when the action is to be executed on the same workstation wherein the corresponding event has been detected.
  • the possibility of forcing the deployment of the desired configuration files on request is within the scope of the present solution.
  • Similar considerations apply if the program (which may be used to implement each embodiment of the invention) is structured in a different way, or if additional modules or functions are provided; likewise, the memory structures may be of other types, or may be replaced with equivalent entities (not necessarily consisting of physical storage media).
  • the program may take any form suitable to be used by or in connection with any data processing system, such as external or resident software, firmware, or microcode (either in object code or in source code - for example, to be compiled or interpreted) .
  • the medium can be any element suitable to contain, store, communicate, propagate, or transfer the program.
  • the medium may be of the electronic, magnetic, optical, electromagnetic, infrared, or semiconductor type; examples of such medium are fixed disks (where the program can be pre-loaded), removable disks, tapes, cards, wires, fibers, wireless connections, networks, broadcast waves, and the like.
  • the solution according to an embodiment of the present invention lends itself to be implemented with a hardware structure (for example, integrated in a chip of semiconductor material), or with a combination of software and hardware. It would be readily apparent that it is also possible to deploy the proposed solution as a service that is accessed through a network (such as the Internet).
  • each computer may include similar elements (such as cache memories temporarily storing the programs or parts thereof to reduce the accesses to the mass memory during execution); in any case, it is possible to replace the computer with any code execution entity (such as a PDA, a mobile phone, and the like), or with a combination thereof (such as a multi-tier server architecture, a grid computing infrastructure, and the like).

Landscapes

  • Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Multi Processors (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
  • Management, Administration, Business Operations System, And Electronic Commerce (AREA)
  • Stored Programmes (AREA)
  • Debugging And Monitoring (AREA)

Abstract

The proposed solution schedules the execution of jobs on target entities (such as workstations) of a data processing system, under the control of a scheduling entity of the system (such as a scheduling server). A corresponding method (400) starts with the step (406-409) of providing a plan that defines an execution flow of a set of jobs. The method continues by submitting (466-475) each job for execution on a target entity selected according to the plan. A set of rules is also provided (412-415); each rule defines an action to be executed on an action target entity in response to an event on an event target entity. The method then includes the step of determining (421) the events that are defined for each event target entity in the rules. Each event target entity is then enabled (424-457) to detect the corresponding events. The execution of each action on the corresponding action target entity is triggered (481-496) in response to the detection of the corresponding event.
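As a rough illustration of the rule-driven flow described in the abstract, the sketch below models rules that map an event detected on an "event target entity" to an action executed on an "action target entity". All names (`Rule`, `Scheduler`, the event and entity labels) and the in-memory dispatch are assumptions for illustration, not the patented implementation.

```python
# Hypothetical sketch: rules pair an event on an event target entity
# with an action on an action target entity; only entities for which
# events are defined are enabled for detection.
from dataclasses import dataclass

@dataclass
class Rule:
    event: str           # event name to watch for (e.g. "file_created")
    event_target: str    # entity where the event is detected
    action: str          # action to run when the event fires
    action_target: str   # entity where the action is executed

class Scheduler:
    def __init__(self, rules):
        self.rules = rules
        self.executed = []   # log of (action, action_target) pairs
        # Determine which events are defined for each event target
        # entity, so only those entities are enabled for detection.
        self.watch = {}
        for r in rules:
            self.watch.setdefault(r.event_target, set()).add(r.event)

    def notify(self, event, entity):
        """Called when an enabled event target entity detects an event."""
        if event not in self.watch.get(entity, ()):
            return  # this entity was not enabled for this event
        for r in self.rules:
            if r.event == event and r.event_target == entity:
                self.executed.append((r.action, r.action_target))

rules = [
    Rule("file_created", "ws1", "run_backup", "ws2"),
    Rule("job_failed", "ws2", "send_alert", "server"),
]
sched = Scheduler(rules)
sched.notify("file_created", "ws1")   # matches the first rule
sched.notify("file_created", "ws2")   # ws2 is not enabled for this event
print(sched.executed)                 # [('run_backup', 'ws2')]
```

The separation between the entity that raises the event and the entity that runs the action mirrors the abstract's distinction between event target entities and action target entities.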
EP08786903A 2007-09-28 2008-08-05 Procédé, système et programme informatique pour programmer l'exécution de tâches requises par des événements Ceased EP2193441A2 (fr)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP08786903A EP2193441A2 (fr) 2007-09-28 2008-08-05 Procédé, système et programme informatique pour programmer l'exécution de tâches requises par des événements

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP07117512 2007-09-28
PCT/EP2008/060295 WO2009040171A2 (fr) 2007-09-28 2008-08-05 Procédé, système et programme informatique pour programmer l'exécution de tâches requises par des événements
EP08786903A EP2193441A2 (fr) 2007-09-28 2008-08-05 Procédé, système et programme informatique pour programmer l'exécution de tâches requises par des événements

Publications (1)

Publication Number Publication Date
EP2193441A2 true EP2193441A2 (fr) 2010-06-09

Family

ID=39870564

Family Applications (1)

Application Number Title Priority Date Filing Date
EP08786903A Ceased EP2193441A2 (fr) 2007-09-28 2008-08-05 Procédé, système et programme informatique pour programmer l'exécution de tâches requises par des événements

Country Status (5)

Country Link
EP (1) EP2193441A2 (fr)
JP (1) JP5695420B2 (fr)
KR (1) KR20100081305A (fr)
CN (1) CN101809538B (fr)
WO (1) WO2009040171A2 (fr)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105190536B (zh) * 2013-02-28 2019-05-31 安提特软件有限责任公司 一种用于验证作业的系统及方法
CN112262352B (zh) * 2018-05-12 2024-04-05 吉奥奎斯特系统公司 多域规划和执行

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997034219A1 (fr) * 1996-03-15 1997-09-18 Netvision, Inc. Systeme et procede de notification et de distribution d'evenements mondiaux dans un environnement informatique reparti
US20020178380A1 (en) * 2001-03-21 2002-11-28 Gold Wire Technology Inc. Network configuration manager

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06149401A (ja) * 1992-11-11 1994-05-27 Chubu Nippon Denki Software Kk 運用スケジュール設定方式
US5790789A (en) * 1996-08-02 1998-08-04 Suarez; Larry Method and architecture for the creation, control and deployment of services within a distributed computer environment
US7444639B2 * 2001-12-20 2008-10-28 Texas Instruments Incorporated Load balanced interrupt handling in an embedded symmetric multiprocessor system
JP2004280422A (ja) * 2003-03-14 2004-10-07 Nec Software Chubu Ltd 分散システム、計算機及び分散システムの自動運転スケジュール生成方法
US7487503B2 (en) * 2004-08-12 2009-02-03 International Business Machines Corporation Scheduling threads in a multiprocessor computer
JP4538736B2 (ja) * 2005-03-30 2010-09-08 日本電気株式会社 ジョブ実行監視システム、ジョブ制御装置、ジョブ実行方法及びジョブ制御プログラム
JP2007058478A (ja) * 2005-08-24 2007-03-08 Hitachi Kokusai Electric Inc 制御内容更新装置

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1997034219A1 (fr) * 1996-03-15 1997-09-18 Netvision, Inc. Systeme et procede de notification et de distribution d'evenements mondiaux dans un environnement informatique reparti
US20020178380A1 (en) * 2001-03-21 2002-11-28 Gold Wire Technology Inc. Network configuration manager

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ROBERTO BALDONI ET AL: "Distributed Event Routing in Publish/Subscribe Communication Systems: a Survey (revised version)", 1 January 2006 (2006-01-01), Dipartimento di Informatica e Sistemistica, Università di Roma la Sapienza, pages 1 - 27, XP055034894, Retrieved from the Internet <URL:http://www.dis.uniroma1.it/~midlab/articoli/BV.pdf> [retrieved on 20120807] *

Also Published As

Publication number Publication date
JP2010541055A (ja) 2010-12-24
JP5695420B2 (ja) 2015-04-08
WO2009040171A3 (fr) 2009-06-18
WO2009040171A2 (fr) 2009-04-02
CN101809538A (zh) 2010-08-18
KR20100081305A (ko) 2010-07-14
CN101809538B (zh) 2013-06-05

Similar Documents

Publication Publication Date Title
US10642599B1 (en) Preemptive deployment in software deployment pipelines
US8166458B2 (en) Method and system for automated distributed software testing
US8863137B2 (en) Systems and methods for automated provisioning of managed computing resources
Tannenbaum et al. Condor: a distributed job scheduler
US8413134B2 (en) Method, system and computer program for installing software products based on package introspection
US8230426B2 (en) Multicore distributed processing system using selection of available workunits based on the comparison of concurrency attributes with the parallel processing characteristics
EP4046017A1 (fr) Systèmes et procédés de planification et d'automatisation de charge de travail parmi des plateformes
EP2008400B1 (fr) Procede, systeme et programme informatique pour la gestion de systeme centralisee sur des points d'extremite d'un systeme de traitement de donnees distribue
US8521865B2 (en) Method and apparatus for populating a software catalog with automated use signature generation
US8621472B2 (en) Job scheduling with optimization of power consumption
US20110119478A1 (en) System and method for providing object triggers
US7966612B2 (en) Method, system and computer program for installing shared software components
US20060075079A1 (en) Distributed computing system installation
JP2005502118A (ja) 完全なエンドツーエンド・ソフトウェア送達プロセス管理のための統合システムおよび方法
US20090158286A1 (en) Facility for scheduling the execution of jobs based on logic predicates
US20090089772A1 (en) Arrangement for scheduling jobs with rules and events
CN114035925A (zh) 一种工作流调度方法、装置、设备及可读存储介质
US20160179570A1 (en) Parallel Computing Without Requiring Antecedent Code Deployment
US20080082982A1 (en) Method, system and computer program for translating resource relationship requirements for jobs into queries on a relational database
WO2009040171A2 (fr) Procédé, système et programme informatique pour programmer l'exécution de tâches requises par des événements
US20060288049A1 (en) Method, System and computer Program for Concurrent File Update
US7480914B2 (en) Restricting resources consumed by ghost agents
Wang et al. Tjosconf: Automatic and safe system environment operations platform
WO2011061034A1 (fr) Procédé et système permettant de programmer des tâches dans un système de traitement de données avec un environnement virtuel
Woon Ahn et al. Mirra: Rule-based resource management for heterogeneous real-time applications running in cloud computing infrastructures

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20100323

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MT NL NO PL PT RO SE SI SK TR

AX Request for extension of the european patent

Extension state: AL BA MK RS

DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20120830

REG Reference to a national code

Ref country code: DE

Ref legal event code: R003

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN REFUSED

18R Application refused

Effective date: 20141127