WO2015071978A1 - Event management program, event management method, and distributed system - Google Patents
- Publication number
- WO2015071978A1 (PCT/JP2013/080688)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- event
- time
- engine
- timer
- query
- Prior art date
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/54—Interprogram communication
- G06F9/542—Event management; Broadcasting; Multicasting; Notifications
Definitions
- the present invention relates to an event management program, an event management method, and a distributed system.
- Stream-type large-scale data processing systems are often used when emphasizing the real-time nature of analysis. For example, a system that collects credit card usage histories in real time and detects credit cards that may have been used illegally, such as those that have been used multiple times in different stores in a short time, can be considered. Further, for example, a system that collects vehicle speed data from sensors provided on a road in real time and predicts where a traffic jam occurs can be considered.
- In CEP (Complex Event Processing), a user creates a program module that describes a pattern of data to be detected and how data matching the pattern should be processed, and the program module is put into an execution state. According to the running program module, data matching the pattern is extracted from the continuously arriving data and processed.
- a program module as described above or an instance at the time of execution of the program module may be referred to as a “query”.
- a stream-type large-scale data processing system may be implemented as a distributed system including a plurality of computers (physical machines).
- By arranging program modules that describe similar processing (for example, copies of the same program module) in a plurality of computers, the data processing described in the program module can be parallelized and the processing capability of the system can be improved.
- a distributed system that synchronizes time (TOD: Time of Day) among a plurality of nodes has been proposed.
- a master node selected from a plurality of nodes broadcasts a TOD packet including the current TOD value to other nodes.
- the other node updates the local TOD value based on the received TOD packet.
- some program modules created by the user use a timer to perform data processing according to the relative time from when a predetermined condition is satisfied.
- For example, a program module may accumulate subsequent data and select it as an analysis target for N seconds (N is a positive integer) after a certain type of data first arrives.
- an object of the present invention is to provide an event management program, an event management method, and a distributed system that make it possible to easily parallelize processing using a timer.
- a program used to control a distributed system that performs distributed processing by a plurality of processes causes a computer to execute the following processing.
- An issuance request (first issuance request) of a timer event generated by a first process executed by the computer among the plurality of processes is acquired.
- an event management method provided by a distributed system that includes a plurality of computers and performs distributed processing by a plurality of processes.
- In the first computer that executes the first process among the plurality of processes, the issuance request of the timer event generated by the first process is acquired.
- the issuance timing information of the timer event issued in response to the issuance request of the timer event is transmitted from the first computer to the second computer that executes the second process among the plurality of processes.
- the issuance timing of the timer event to the second process is controlled based on the issuance timing information of the timer event received from the first computer.
- a distributed system having a first information processing apparatus and a second information processing apparatus that performs distributed processing by a plurality of processes.
- The first information processing apparatus includes: a first control unit that acquires a timer event issuance request generated by a first process executed by the first information processing apparatus among the plurality of processes; and a communication unit that transmits issuance timing information of a timer event, issued in response to the issuance request, to the second information processing apparatus.
- The second information processing apparatus includes a second control unit that, based on the timer event issuance timing information received from the first information processing apparatus, controls the issuance timing of a timer event to a second process executed by the second information processing apparatus among the plurality of processes.
- processing using a timer can be easily parallelized.
- Brief description of the drawings: FIG. 1 shows the distributed system of the first embodiment. FIG. 2 shows the distributed system of the second embodiment. FIG. 3 is a block diagram showing a hardware example of an engine node. FIG. 4 shows an example of the execution order of queries. FIG. 5 shows an example of expansion or reduction of engine nodes. FIG. 6 shows an example of a time-based operator. FIG. 7 shows an implementation example of a time-based operator. FIG. 8 shows an example of timer notification times for each parallelized query. FIG. 9 shows an example of making the arrival time common. FIG. 10 shows an example of a procedure for determining the first arrival time.
- FIG. 1 is a diagram illustrating a distributed system according to the first embodiment.
- the distributed system performs distributed processing of data by a plurality of processes.
- This distributed system includes information processing apparatuses 10 and 20.
- the information processing apparatuses 10 and 20 may be called physical machines, computers, server apparatuses, or the like.
- the process 11 is executed in the information processing apparatus 10 and the process 21 is executed in the information processing apparatus 20.
- Processes 11 and 21 are execution units activated based on a program module describing a similar processing procedure, and are activated from the same copied program module, for example.
- the processes 11 and 21 may be called threads, tasks, jobs, etc., or may be “queries” in the CEP. Data arriving at the distributed system is distributed to the processes 11 and 21. Process 11 and process 21 process different data in parallel according to the same processing procedure.
- the information processing apparatus 10 includes a control unit 12 and a communication unit 13.
- the information processing apparatus 20 includes a control unit 22 and a communication unit 23.
- the control units 12 and 22 are, for example, processors.
- the processor may be a CPU (Central Processing Unit) or a DSP (Digital Signal Processor), and may include an integrated circuit such as an ASIC (Application Specific Integrated Circuit) or an FPGA (Field Programmable Gate Array).
- the processor executes a program stored in a semiconductor memory such as a RAM (Random Access Memory).
- the “processor” may be a set of two or more processors (multiprocessor).
- the communication units 13 and 23 are communication interfaces for communication between a plurality of information processing apparatuses, and may be wired communication interfaces or wireless communication interfaces.
- the control unit 12 acquires a timer event issuance request (first issuance request) generated by the process 11 that requests issuance of a timer event.
- The first issuance request can be generated when the process 11 attempts to perform data processing according to a relative time from a point in time when a predetermined condition is satisfied (for example, when a certain kind of data is first received).
- the control unit 12 manages the timer and issues a timer event to the process 11 at an appropriate timing. By receiving the timer event, the process 11 can recognize the timing at which data processing should be performed.
- the control unit 22 acquires a timer event issuance request (second issuance request) generated by the process 21 that requests issuance of a timer event.
- the control unit 22 manages the timer and issues a timer event to the process 21 at an appropriate timing.
- the information processing apparatus 10 and the information processing apparatus 20 manage timers independently of each other.
- The recognition of when the predetermined condition is satisfied may differ between the process 11 and the process 21.
- The process 11 recognizes the "first reception time" only within the subset of the data arriving at the distributed system that is handled by the process 11.
- Likewise, the process 21 recognizes the "first reception time" within the data handled by the process 21.
- the program module describing the processing procedures of the processes 11 and 21 may be intended to recognize the “first reception time” in the entire data that has arrived at the distributed system. Therefore, the control units 12 and 22 share issuance timing information for specifying the issuance timing of the timer event between the information processing apparatuses 10 and 20 so that the issuance timing of the timer event is aligned.
- the control unit 12 transmits the issuance timing information of the timer event to the information processing apparatus 20 via the communication unit 13.
- the timer event issuance timing information includes, for example, information indicating a reference time (first reference time) recognized by the process 11.
- the first reference time may be a time when the process 11 detects data satisfying a predetermined condition, or may be a time when the control unit 12 acquires the first issue request.
- the first issue request generated by the process 11 may include information indicating the first reference time.
- the control unit 22 monitors issue timing information received from the information processing apparatus 10 via the communication unit 23.
- When receiving the issuance timing information, the control unit 22 controls the issuance of the timer event to the process 21 in consideration of the received issuance timing information (for example, taking into account the first reference time indicated by it).
- the timer event issued to the process 21 is a timer event as a response to the second issue request generated by the process 21, for example.
- the control unit 22 transmits the issue timing information of the timer event to the information processing apparatus 10 via the communication unit 23.
- the timer event issuance timing information includes, for example, a reference time (second reference time) recognized by the process 21.
- the control unit 12 monitors issuance timing information received from the information processing apparatus 20 via the communication unit 13.
- The control unit 12 controls the issuance of the timer event to the process 11 in consideration of the received issuance timing information (for example, taking into account the second reference time indicated by it).
- the timer event issued to the process 11 is, for example, a timer event as a response to the first issue request generated by the process 11.
- For example, the control units 12 and 22 compare the first reference time with the second reference time and decide on a reference time common to the information processing apparatuses 10 and 20. As the common reference time, for example, the smaller (earlier) of the first and second reference times can be adopted. A determination procedure may be performed between the information processing apparatus 10 and the information processing apparatus 20 to decide the common reference time. Each of the control units 12 and 22 then manages its timer independently based on the common reference time. For example, the control units 12 and 22 use the time obtained by adding a waiting time to the common reference time as the timer event issuance timing. As the waiting time, a value designated by the processes 11 and 21 can be used.
- For example, suppose a program module intends to accumulate subsequent data as an analysis target for N seconds after a certain type of data first arrives at the distributed system.
- When the processes 11 and 21 are started based on this program module, the point in time at which each process recognizes that such data has arrived first, and thus generates an issuance request, may differ between the processes 11 and 21.
- However, the reference time of the timer is made common by the control units 12 and 22; it can also be said that the objective first arrival time is shared between them. The control units 12 and 22 can therefore issue a timer event N seconds after the objective first arrival time, as intended by the program module, so that the N-second windows during which the processes 11 and 21 accumulate data to be analyzed are aligned.
- Issuance timing information corresponding to the issuance request generated by the process 11 is transmitted to the information processing apparatus 20 and used for timer management in the information processing apparatus 20.
- Likewise, issuance timing information corresponding to the issuance request generated by the process 21 is transmitted to the information processing apparatus 10 and used for timer management in the information processing apparatus 10.
- the timer reference time can be shared between the information processing apparatuses 10 and 20, and timer management can be performed based on the common reference time.
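The shared-reference-time rule described above can be sketched as follows. This is an illustrative model, not the patent's implementation; the function names and the millisecond timestamps are assumptions. Each node reports the local time at which its process first saw qualifying data, the earliest such time is adopted as the common reference time, and every node issues its timer event at that reference time plus the waiting time.

```python
def common_reference_time(local_reference_times):
    """Adopt the smaller (earlier) of the local reference times as common."""
    return min(local_reference_times)

def timer_issuance_time(local_reference_times, waiting_time):
    """Timer event issuance timing = common reference time + waiting time."""
    return common_reference_time(local_reference_times) + waiting_time

# Process 11 first saw qualifying data at t=1002 ms, process 21 at t=1007 ms.
refs = [1002, 1007]
assert common_reference_time(refs) == 1002
# With a waiting time of N=4000 ms, both nodes fire at the same instant:
assert timer_issuance_time(refs, 4000) == 5002
```

Because both nodes apply the same rule to the same exchanged reference times, they reach the same issuance time without a central clock.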
- FIG. 2 illustrates a distributed system according to the second embodiment.
- the distributed system according to the second embodiment is an information processing system that analyzes a large amount of sensor data received from a sensor device in real time and provides an analysis result to a client device.
- Examples of sensor data include vehicle speed data collected in real time from speedometers installed along a road and credit card usage histories collected in real time from card readers installed in stores.
- Examples of sensor data analysis include a process that predicts where a traffic jam will occur based on collected vehicle speed data and a process that detects unauthorized use of a credit card based on a collected usage history.
- the distributed system according to the second embodiment includes networks 31 and 32, a gateway 33, a sensor device 100, an input adapter node 200, engine nodes 300, 300a, 300b, and 300c, an output adapter node 400, a client device 500, and a manager node 600.
- the engine nodes 300, 300a, 300b, and 300c are examples of the information processing apparatuses 10 and 20 of the system according to the first embodiment.
- the input adapter node 200, the engine nodes 300, 300a, 300b, and 300c, the output adapter node 400, and the manager node 600 are connected to the network 31.
- the sensor device 100 and the client device 500 are connected to the network 32.
- the network 31 and the network 32 are connected to each other via a gateway 33.
- the network 31 may be a LAN (Local Area Network), and the network 32 may be a wide area network such as the Internet.
- the gateway 33 is a network node that connects networks having different protocols. The gateway 33 interconnects the networks 31 and 32 by converting the protocol.
- the sensor device 100 transmits sensor data to the input adapter node 200.
- the input adapter node 200 is a computer that receives sensor data from the sensor device 100.
- the input adapter node 200 converts the received sensor data into data in a format called an event that is processed by the query.
- A query is an event-driven processing entity that performs information processing in response to the arrival of an event.
- a query is an instance that is activated based on a query program written by a user, and may be a process, a thread, a task, a job, an object, or the like.
- a plurality of queries as instances can be started from the same query program.
- Queries are arranged in the engine nodes 300, 300a, 300b, and 300c; one or more queries are arranged in each engine node.
- the input adapter node 200 transmits an event to an engine node (for example, the engine node 300) according to the defined flow.
- Engine nodes 300, 300a, 300b, and 300c are computers that execute queries. As will be described later, a plurality of queries cooperate to perform information processing according to a defined flow.
- the query receives an event from the input adapter node 200 or the preceding query.
- the query executes processing according to the received event, and outputs the event as an execution result.
- Examples of an output event include an event bundling received events accumulated over a certain period, an event newly generated based on a received event, and the like.
- the event output by the query is transmitted to the output adapter node 400 or the subsequent query.
- When the subsequent query is parallelized, a plurality of events are distributed among the parallel subsequent queries. Note that an event may be passed between different queries in the same engine node, or between queries arranged in different engine nodes.
- the engine nodes 300, 300a, and 300b are operating and the engine node 300c is not operating.
- the engine node 300c can be positioned as a spare engine node used when the engine nodes 300, 300a, and 300b have a high load.
- In the second embodiment, the engine nodes 300, 300a, 300b, and 300c in which the queries are arranged are physical computers (physical machines), but they may instead be virtual computers (virtual machines).
- the output adapter node 400 is a computer that receives an event as a final result from an engine node (for example, the engine node 300b) according to a defined flow.
- the received event includes information indicating the analysis result of the sensor data.
- the output adapter node 400 converts the received event into result data in a format that can be referred to by the client device 500, and transmits the result data to the client device 500.
- the client device 500 is a computer that receives the result data indicating the analysis result of the sensor data from the output adapter node 400 and provides the received result data to the user. For example, the client device 500 displays the result data on the display.
- the manager node 600 is a computer that monitors the load of the engine nodes 300, 300a, and 300b, and controls the operation status of each engine node and the arrangement of queries.
- the manager node 600 may add or delete engine nodes to be operated according to the load of the engine nodes 300, 300a, and 300b.
- the manager node 600 may move a query between engine nodes for load distribution.
- the manager node 600 may increase or decrease the number of queries (parallelism) started from the same query program for load distribution.
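A minimal sketch of the kind of threshold-based scaling decision the manager node might make when adjusting query parallelism. The function name, thresholds, and cap are illustrative assumptions, not from the patent:

```python
def adjust_parallelism(current, load, high=0.8, low=0.3, max_parallel=3):
    """Return a new query parallelism based on observed engine-node load."""
    if load >= high and current < max_parallel:
        return current + 1   # expand: copy the query to another engine node
    if load <= low and current > 1:
        return current - 1   # reduce: delete a copied query
    return current           # load within band: leave parallelism unchanged

assert adjust_parallelism(1, 0.9) == 2   # high load triggers expansion
assert adjust_parallelism(3, 0.9) == 3   # capped at max_parallel
assert adjust_parallelism(2, 0.2) == 1   # low load triggers reduction
assert adjust_parallelism(2, 0.5) == 2   # within band: unchanged
```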
- the input adapter node 200 receives sensor data from the sensor device 100, converts the received sensor data into an event, and transmits the event to the engine node 300.
- In the engine node 300, a query that performs the first-stage processing is executed.
- the engine node 300 performs processing according to the received event, and distributes the event as the processing result to the engine nodes 300, 300a, and 300b.
- In the engine nodes 300, 300a, and 300b, queries that perform the second-stage processing, started from the same query program, are executed in parallel.
- Engine nodes 300, 300a, and 300b process different events received from engine node 300 in parallel, and transmit events as processing results to engine node 300b.
- In the engine node 300b, a query that performs the third-stage processing is executed.
- the engine node 300b performs processing according to the received event, and transmits an event as a processing result to the output adapter node 400.
- the output adapter node 400 converts the received event into result data and transmits it to the client device 500.
- FIG. 3 is a block diagram showing a hardware example of the engine node.
- the engine node 300 includes a processor 301, a RAM 302, an HDD (Hard Disk Drive) 303, an image signal processing unit 304, an input signal processing unit 305, a disk drive 306, and a communication interface 307. These units are connected to the bus 308 in the engine node 300.
- the processor 301 is an example of the control units 12 and 22 according to the first embodiment.
- the communication interface 307 is an example of the communication units 13 and 23 according to the first embodiment.
- the processor 301 is a processor including an arithmetic unit that executes program instructions, for example, a CPU.
- the processor 301 loads at least a part of the program and data stored in the HDD 303 into the RAM 302 and executes the program.
- the processor 301 may include a plurality of processor cores, and the engine node 300 may include a plurality of processors.
- the engine node 300 may execute program instructions in parallel using a plurality of processors or a plurality of processor cores.
- a set of two or more processors may be called a “processor”, and the processor 301 may include a dedicated circuit such as an FPGA or an ASIC.
- the RAM 302 is a volatile memory that temporarily stores a program executed by the processor 301 and data referred to by the program.
- the engine node 300 may include a memory of a type other than the RAM, and may include a plurality of volatile memories.
- the HDD 303 is a nonvolatile storage device that stores software programs and data such as an OS (Operating System), firmware, and application software.
- the engine node 300 may include other types of storage devices such as a flash memory, and may include a plurality of nonvolatile storage devices.
- the image signal processing unit 304 outputs an image to the display 41 connected to the engine node 300 in accordance with an instruction from the processor 301.
- As the display 41, a CRT (Cathode Ray Tube) display, a liquid crystal display, or the like can be used.
- the input signal processing unit 305 acquires an input signal from the input device 42 connected to the engine node 300 and notifies the processor 301 of the input signal.
- As the input device 42, a pointing device such as a mouse or a touch panel, a keyboard, or the like can be used.
- the disk drive 306 is a drive device that reads programs and data recorded on the recording medium 43.
- As the recording medium 43, a magnetic disk such as a flexible disk (FD) or HDD, an optical disk such as a CD (Compact Disk) or DVD (Digital Versatile Disk), or a magneto-optical disk (MO) can be used.
- the disk drive 306 stores the program and data read from the recording medium 43 in the RAM 302 or the HDD 303 in accordance with an instruction from the processor 301.
- the communication interface 307 communicates with another information processing apparatus (for example, the engine node 300a) via a network such as the network 31.
- the engine node 300 may not include the disk drive 306, and may not include the image signal processing unit 304 and the input signal processing unit 305 when controlled by a terminal device operated by the user.
- the display 41 and the input device 42 may be formed integrally with the casing of the engine node 300.
- the input adapter node 200, the engine nodes 300a, 300b, and 300c, the output adapter node 400, and the client device 500 can also be realized using the same hardware as the engine node 300.
- FIG. 4 is a diagram illustrating an example of a query execution order.
- the queries 310 and 311 are arranged in the engine node 300.
- the query 312 is arranged in the engine node 300a.
- the queries 313 and 314 are arranged in the engine node 300b.
- the queries 311, 312, and 313 are queries started from the same query program, and execute similar processes in parallel for a plurality of different events.
- queries started from the same query program may be referred to as “same type queries”.
- the query 310, the queries 311, 312, 313, and the query 314 are queries started from different query programs.
- queries started from different query programs may be referred to as “different types of queries”.
- the query 310 performs processing according to the event received from the input adapter node 200 and transmits an event as a processing result to any of the queries 311, 312, and 313. At this time, the query 310 distributes a plurality of generated events to the queries 311, 312, and 313 in order to equalize the loads of the queries 311, 312, and 313.
- the queries 311, 312, and 313 process different events in parallel.
- each of the queries 311, 312, and 313 performs processing according to the event received from the query 310. Since the queries 311, 312, and 313 are started from the same query program, the same processing procedure is executed for different events.
- the queries 311, 312, and 313 transmit processing result events to the query 314.
- the query 314 performs processing according to the event received from the queries 311, 312, and 313, and transmits an event as a processing result to the output adapter node 400.
- each query executes processing defined in the query program when an event arrives, and generates another event.
- Multiple queries can be combined in series.
- a large amount of sensor data that arrives continuously can be analyzed in real time.
- the processing defined in the query program can be distributed and parallelized, and the throughput can be improved.
- the queries 311, 312, and 313 are examples of the processes 11 and 21 according to the first embodiment.
- FIG. 5 is a diagram showing an example of expansion or reduction of engine nodes.
- Suppose that a query arranged in the engine node 300 processes events #1, #2, and #3, which belong to different data ranges (for example, their keys belong to different ranges).
- When the load on the engine node 300 reaches or exceeds a threshold value, suppose that the processing of events #1, #2, and #3 is parallelized using the engine nodes 300a and 300b.
- the query arranged in the engine node 300 is copied to the engine nodes 300a and 300b.
- “Copying” a query means increasing the number of queries of the same type (increasing the degree of parallelism), and can be realized by copying a query program for starting the query and executing it on another engine node. By copying the query, the processes defined in the query program can be executed in parallel by the engine nodes 300, 300a, 300b.
- The engine node 300 processes event #1, the engine node 300a processes event #2, and the engine node 300b processes event #3.
- the engine nodes 300, 300a, 300b can process different events in parallel. Event distribution is performed based on an identifier such as a key included in the event, for example.
- Conversely, the parallelism of the query can also be reduced from 3 to 1.
- the query copied to the engine nodes 300a and 300b is deleted.
- the engine node 300 also processes events belonging to the data range assigned to the engine nodes 300a and 300b.
- the distributed system according to the second embodiment can dynamically change the parallel degree of the query according to the load of the engine nodes 300, 300a, and 300b.
- Increasing the degree of parallelism of a query can be referred to as system expansion, and decreasing the degree of parallelism of a query can be referred to as reduction of the system.
- When the degree of parallelism is changed, the data range assigned to each of the one or more same-type queries is changed.
- the allocation of the data range is determined by, for example, the manager node 600 and notified to the engine nodes 300, 300a, and 300b. Thereby, the event output from the preceding query can be distributed to a plurality of queries of the same type that are executed in parallel.
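The distribution of events to parallel same-type queries by data range, as described above, might look like the following sketch. The key ranges, bounds, and function name are illustrative assumptions; the patent only specifies that distribution is based on an identifier such as a key:

```python
import bisect

def route(key, upper_bounds):
    """Return the index of the same-type query whose key range contains `key`.
    `upper_bounds` holds the exclusive upper limit of every range but the
    last, in ascending order."""
    return bisect.bisect_right(upper_bounds, key)

# Parallelism 3: keys < 100 go to the query on engine node 300,
# keys 100-199 to engine node 300a, and keys >= 200 to engine node 300b.
bounds = [100, 200]
assert route(42, bounds) == 0    # handled by engine node 300
assert route(150, bounds) == 1   # handled by engine node 300a
assert route(250, bounds) == 2   # handled by engine node 300b
```

Changing the parallelism then amounts to recomputing `bounds` and notifying the preceding query, which matches the manager-node role described above.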
- FIG. 6 is a diagram illustrating an example of a time-based operator.
- the query 311 has an operator 311a.
- An operator is an instance of an operator definition described in the query program and is executed within the query.
- Operators include operators for event accumulation, search, and selection; operators for arithmetic, logical, and relational operations; functions; and the like.
- One query may include a plurality of operators. In that case, for example, the relationship between a plurality of operators is defined in a tree structure, and the plurality of operators are called in an order corresponding to the tree structure.
- the time-based operator is executed at a timing when a predetermined time condition is satisfied. For example, the time-based operator is executed at a timing corresponding to a relative time from the reference time. As an example of a time-based operator, one that performs processing after a certain time (for example, 4 seconds) after an event arrives can be considered. This process may be repeated at regular intervals. Note that when the time condition is satisfied and the time-based operator is executed, it can also be said that the time-based operator “fires”.
- the operator 311a included in the query 311 is a time-based operator.
- the query program can be described using an event processing language (EPL: Event Processing Language).
- For example, the query program of the query 311 is described as "select * from InputStream.win:time_batch(4 sec)".
- This query program includes the operator "select * from InputStream" and the operator "win:time_batch(4 sec)".
- The operator "win:time_batch(4 sec)" indicates that the operator fires every 4 seconds after the first arrival of an event.
- The operator "select * from InputStream" indicates that events that have arrived for an instance of this query program are selected and output.
- the time at which the first event arrives after the start of one query based on the query program can be referred to as the “arrival time” for the query.
- The time when the accumulated events are output can be called the "firing time".
- The time from the arrival time to the first firing can be called the "waiting time".
- The interval from one firing to the next (the repetition period of the second and subsequent firings) can be called the "firing interval".
- the engine node 300 in which the query 311 is arranged is provided with a state storage unit 311b corresponding to the query 311.
- the state storage unit 311b stores information indicating the internal state of the query 311.
- the information indicating the internal state includes events temporarily accumulated based on the operator 311a.
- the information indicating the internal state includes information indicating a data range assigned to the query 311 among the queries 311, 312, and 313.
- Suppose the query 311 receives an event whose identifier is Ev#1 as the first event after activation. Thereafter, the query 311 receives the events Ev#2 and Ev#3 within N seconds after receiving the event Ev#1. The events Ev#1, Ev#2, and Ev#3 belong to the data range handled by the query 311 among the events output by the preceding query 310. Meanwhile, the query 311 accumulates the events Ev#1, Ev#2, and Ev#3 in the state storage unit 311b. When N seconds have elapsed since the arrival of the event Ev#1, the operator 311a fires. With the firing of the operator 311a, the query 311 transmits the accumulated events Ev#1, Ev#2, and Ev#3 to the subsequent query 314.
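The batching behavior described above can be sketched as a toy model (not the original implementation; timestamps are in seconds and names are illustrative):

```python
def time_batch(events, window):
    """Group (timestamp, id) events into consecutive batches of `window`
    seconds, anchored at the arrival time of the first event.
    Batch i is output when the operator fires at arrival + (i + 1) * window."""
    if not events:
        return []
    arrival = events[0][0]  # "arrival time" for this query
    batches = {}
    for ts, event_id in events:
        batches.setdefault((ts - arrival) // window, []).append(event_id)
    return [batches.get(i, []) for i in range(max(batches) + 1)]
```

With N = 4 seconds, events Ev#1 to Ev#3 arriving at t = 1, 2, 3 are emitted together at the first firing, while an event at t = 6 waits for the second firing.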
- FIG. 7 is a diagram illustrating an implementation example of a time-based operator.
- the engine node 300 has an engine 320.
- Engine 320 controls one or more queries executed on engine node 300.
- One engine is executed on each engine node, whether the engine node is a physical machine or a virtual machine.
- the engine 320 has a timer function that can control the firing timing of the operator 311a.
- (S1) The first event (event Ev#1) arrives at the query 311.
- (S2) When the event Ev#1 arrives, the operator 311a transmits a timer request to the engine 320.
- the timer request is a request for issuing a timer notification, which indicates the firing timing, to the operator 311a.
- the timer request can include information on the arrival time and waiting time of the event Ev#1. If the timer request asks for timer notifications to be issued repeatedly, it can also include a firing interval.
- the query 311 accumulates events distributed from the preceding query 310 until the operator 311a receives a timer notification from the engine 320.
- the engine 320 calculates the first firing time from the arrival time and waiting time specified in the timer request. Normally, the first firing time is the arrival time plus the waiting time.
- (S3) The engine 320 issues a timer notification to the operator 311a when the calculated firing time is reached. Since the operator 311a is an event-driven processing entity, the timer notification can be implemented as a kind of event. When a firing interval is specified in the timer request, the engine 320 issues a further timer notification to the operator 311a every time the firing interval elapses from the previous firing time.
- (S4) Upon receiving a timer notification from the engine 320, the operator 311a detects that the firing timing has arrived. When the first timer notification is issued, the query 311 outputs the events accumulated since the arrival of the event Ev#1 to the subsequent query 314. When the second timer notification is issued, the query 311 outputs the events accumulated since the previous timer notification to the subsequent query 314.
- the timer request generated by the operator 311a may include information on the firing upper limit count. In that case, timer notifications are issued from the engine 320 to the operator 311a at most that many times. When the query 311 is shut down by an instruction from the user, issuance of timer notifications to the operator 311a stops.
- FIG. 8 is a diagram illustrating an example of timer notification time for each parallelized query.
- the input stream 51 is a virtual input channel that passes the event output from the preceding query 310 to the queries 311, 312, and 313.
- the output stream 52 is a virtual output channel that passes the events output from the queries 311, 312, and 313 to the subsequent query 314.
- the input stream 51 and the output stream 52 are formed by the engines of the engine nodes 300, 300a, and 300b.
- the queries 311 and 312 need only recognize the input stream 51 and the output stream 52, and do not need to be directly aware of the preceding and subsequent queries.
- the engine node 300a includes the query 312 and the state storage unit 312b, and the query 312 includes the operator 312a so as to correspond to the query 311 of the engine node 300.
- the queries 311 and 312 are started from the same query program and process different events received from the input stream 51 in parallel.
- event Ev # 1 is distributed to the query 311 and event Ev # 2 is distributed to the query 312. Assume that the time when the event Ev # 1 arrives at the query 311 is 00:00:01, and the time when the event Ev # 2 arrives at the query 312 is 00:00:02.
- the “arrival time” recognized by the operator 311a is 00:00:01, and the first firing time of the operator 311a is 00:00:05. Therefore, the query 311 outputs the events that arrived at the query 311 between 00:00:01 and 00:00:05 to the output stream 52 when the time becomes 00:00:05.
- the “arrival time” recognized by the operator 312a is 00:00:02, and the first firing time of the operator 312a is 00:00:06. Therefore, the query 312 outputs an event that arrived at the query 312 between 00:00:02 and 00:00:06 to the output stream 52 when the time becomes 00:00:06.
- the operations of the above queries 311 and 312 differ from the non-parallelized case and deviate from the original intent of the query program. If parallelization is not performed, the “arrival time” recognized by the single query is 00:00:01, when the event Ev#1 arrived, so only the events that arrived at the single query between 00:00:01 and 00:00:05 are output to the output stream 52. As described above, if a query program including an operator corresponding to a time-based operator is simply executed in parallel on different engine nodes, the processing result may differ from the case where the query program is not executed in parallel.
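The divergence can be made concrete with a small numeric sketch (values taken from the example above; the function name is illustrative):

```python
WAITING_TIME = 4  # seconds, fixed by the query program

def first_firing(arrival_time):
    # Each engine node computes the firing time from the arrival time it saw.
    return arrival_time + WAITING_TIME

# Naive parallelization: node 300 saw Ev#1 at t=1, node 300a saw Ev#2 at t=2,
# so their operators fire at different times.
parallel_firings = {first_firing(1), first_firing(2)}

# A single, non-parallelized query sees only the earliest arrival.
single_firing = {first_firing(min(1, 2))}
```

The parallel instances fire at t=5 and t=6, whereas the single query fires only at t=5; this is why a common “first arrival time” must be shared among the engine nodes.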
- the distributed system performs a procedure for sharing the “arrival time” that is the reference time for calculating the ignition time among the engine nodes 300, 300a, and 300b.
- the time at which the first event (event Ev#1 in the above example) arrives across the queries 311, 312, and 313 as a whole can be referred to as the “first arrival time”.
- FIG. 9 is a diagram showing an example of common arrival times.
- the engine node 300a has an engine 320a
- the engine node 300b has an engine 320b.
- the query 313 includes an operator 313a so as to correspond to the operators 311a and 312a.
- the current times (real-time clocks) managed by the engine nodes 300, 300a, and 300b using their timer devices are synchronized.
- the operator 311a transmits a timer request to the engine 320 (S2). It is assumed that the arrival time designated by this timer request is 00:00:01. Then, the engine 320 transmits information indicating the arrival time for the query 311 to the engine nodes 300a and 300b.
- the engine nodes 300a and 300b determine “first arrival time” as a reference time for calculating the firing time based on the received arrival time information (S11 and S12). If the queries 312 and 313 have not received an event, 00:00:01 is determined as the first arrival time.
- the engines 320, 320a, and 320b thus share 00:00:01 as the first arrival time for the queries 311, 312, and 313 of the same type.
- Engines 320, 320a, and 320b can manage firing times independently based on a common first arrival time.
- the engine 320 calculates the first firing time 00:00:05 from the first arrival time 00:00:01, and issues a timer notification to the operator 311a at the firing time (S3).
- the engine 320a calculates the first firing time from the first arrival time, and issues a timer notification to the operator 312a at the firing time (S3a).
- the engine 320b calculates the first firing time from the first arrival time, and issues a timer notification to the operator 313a at the firing time (S3b).
- the engines 320, 320a, and 320b determine a single first arrival time for a plurality of queries of the same type. Thereby, the timing of issuing timer notifications to the time-based operators on the engines 320, 320a, and 320b can be made uniform.
- for this purpose, arrival time information for each query is transmitted between the engines 320, 320a, and 320b. In this case, a procedure for determining the first arrival time is performed between the engines 320, 320a, and 320b.
- the procedure for determining the first arrival time will be specifically described with reference to FIGS.
- FIG. 10 is a diagram illustrating an example of a procedure for determining the first arrival time.
- the engine 320a receives a timer request from the operator 312a of the query 312. In this timer request, arrival time 00:00:01 is designated.
- the engine 320a provisionally registers 00:00:01 as a candidate for the first arrival time for the queries 311, 312, and 313. The temporarily registered arrival time can be updated later.
- the engine 320a transmits a confirmation notification to the engines 320, 320b for confirming whether or not the temporarily registered arrival time 00:00:01 is the earliest arrival time.
- This confirmation notification includes information on arrival time 00:00:01.
- the engine 320 receives a timer request from the operator 311a of the query 311. In this timer request, arrival time 00:00:02 is designated. The engine 320 provisionally registers 00:00:02 as a candidate for the first arrival time for the queries 311, 312, and 313. The temporarily registered arrival time can be updated later.
- the engine 320 transmits a confirmation notification to the engines 320a, 320b for confirming whether or not the temporarily registered arrival time 00:00:02 is the earliest arrival time.
- This confirmation notification includes information on arrival time 00:00:02.
- the engine 320a receives the confirmation notification from the engine 320. The engine 320a then determines whether the arrival time specified in the received confirmation notification is earlier (smaller) than the arrival time provisionally registered in the engine 320a. When the arrival time in the confirmation notification is earlier than the provisionally registered arrival time, or when no provisionally registered arrival time exists, the engine 320a returns a permission notification and provisionally registers the arrival time specified in the confirmation notification. Otherwise, that is, when the arrival time in the confirmation notification is later (larger) than or equal to the provisionally registered arrival time, the engine 320a returns a rejection notification. Here, since the arrival time 00:00:02 designated in the confirmation notification is later than the arrival time 00:00:01 provisionally registered in the engine 320a, a rejection notification is transmitted to the engine 320.
- the engine 320b receives a confirmation notification from the engine 320. Then, similar to the engine 320a, the arrival time designated by the received confirmation notification is compared with the temporarily registered arrival time. Here, since there is no arrival time temporarily registered in the engine 320b, the engine 320b temporarily registers the arrival time 00:00:02 designated in the confirmation notification in the engine 320b, and transmits a permission notification to the engine 320.
- the engine 320 receives a permission notification or a rejection notification from each of the engines 320a and 320b as a response to the confirmation notification. When all responses are received, the engine 320 determines whether they include a rejection notification. If even one rejection notification is received, the engine 320 does not set the arrival time of the query 311 as the first arrival time, because another query of the same type received an event earlier than the query 311. On the other hand, if no rejection notification has been received (only permission notifications have been received), the engine 320 determines the provisionally registered arrival time as the first arrival time. Here, the engine 320 has received a rejection notification from the engine 320a, and therefore does not set the arrival time of the query 311 as the first arrival time.
- the engine 320 receives a confirmation notification from the engine 320a. Since the arrival time 00:00:01 designated in the confirmation notification is earlier than the arrival time 00:00:02 provisionally registered in the engine 320, the engine 320 updates its provisionally registered arrival time to 00:00:01 and transmits a permission notification to the engine 320a.
- the engine 320b receives a confirmation notification from the engine 320a.
- since the arrival time 00:00:01 designated in the confirmation notification is earlier than the arrival time 00:00:02 provisionally registered in the engine 320b, the engine 320b updates its provisionally registered arrival time to 00:00:01 and transmits a permission notification to the engine 320a.
- the engine 320a receives permission notifications from the engines 320 and 320b. The engine 320a then determines the arrival time 00:00:01 provisionally registered in the engine 320a as the first arrival time (switching provisional registration to main registration), and transmits a decision notification to the engines 320 and 320b. The engines 320 and 320b that receive the decision notification adopt the provisionally registered arrival time 00:00:01 as the first arrival time (main registration).
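The confirm/permit/reject exchange of FIG. 10 can be sketched as a toy, in-memory model. Class and method names are invented for illustration; real engines exchange these notifications over the network:

```python
class Engine:
    """Toy model of one engine's share of the first-arrival-time procedure."""
    def __init__(self):
        self.provisional = None    # provisionally registered arrival time
        self.first_arrival = None  # decided (main-registered) value

    def on_timer_request(self, arrival):
        # A time-based operator reported the arrival time of its first event.
        self.provisional = arrival

    def on_confirm(self, arrival):
        # Another engine asks whether `arrival` could be the earliest.
        if self.provisional is None or arrival < self.provisional:
            self.provisional = arrival
            return 'permission'
        return 'rejection'  # our provisional value is earlier or equal

    def on_decide(self, arrival):
        self.first_arrival = arrival  # main registration

    def run_confirmation(self, peers):
        replies = [p.on_confirm(self.provisional) for p in peers]
        if all(r == 'permission' for r in replies):
            self.first_arrival = self.provisional
            for p in peers:
                p.on_decide(self.provisional)

# Replaying FIG. 10 sequentially (times in seconds after 00:00:00):
e320, e320a, e320b = Engine(), Engine(), Engine()
e320a.on_timer_request(1)               # query 312: first event at 00:00:01
e320.on_timer_request(2)                # query 311: first event at 00:00:02
e320.run_confirmation([e320a, e320b])   # rejected by e320a (1 < 2)
e320a.run_confirmation([e320, e320b])   # permitted everywhere: decide 1
```

The engine holding arrival time 00:00:02 is rejected, and all engines eventually main-register 00:00:01 as the first arrival time.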
- the above confirmation procedure involves communication overhead. Therefore, if the waiting time specified by the time-based operator (the time from the arrival time to the first firing time) is short, it is possible that the confirmation procedure cannot be completed among the engines 320, 320a, and 320b before the first timer notification is due. Therefore, if the engines 320, 320a, and 320b cannot complete the confirmation procedure by the “specific time”, each engine regards the currently provisionally registered arrival time as the first arrival time and provisionally issues the first timer notification.
- the specific time is a time limit for deciding whether to use the provisionally registered arrival time as the reference time for issuing the first timer notification.
- the specific time may be the temporary firing time calculated based on the currently provisionally registered arrival time (the time obtained by adding the waiting time to that arrival time). Alternatively, the specific time may be a time earlier than the temporary firing time by the “minimum specific time”.
- the engines 320, 320a, and 320b can prepare to issue the first timer notification with a margin, and can ensure the accuracy of the issue timing.
- the minimum specific time may be a predetermined fixed value, or a time calculated based on the communication overhead between the engines 320, 320a, and 320b. After one engine receives a confirmation notification from another engine, accepting the arrival time specified in that confirmation notification as the first arrival time requires a permission notification and a decision notification to be exchanged between the two engines. If the interval between the current time and the temporary firing time is shorter than the expected round-trip time of this communication, it can be determined that the first arrival time cannot be fixed by the temporary firing time. It is therefore conceivable to set the minimum specific time to the expected round-trip time of communication with the engine that transmitted the confirmation notification.
- FIG. 11 is a diagram illustrating an example of a provisional timer notification when a specific time has elapsed.
- assume that the waiting time is 4 seconds and the minimum specific time is 2 seconds. The temporary firing time is then 4 seconds after the provisionally registered arrival time, and the specific time is 2 seconds before the temporary firing time (2 seconds after the provisionally registered arrival time).
- when the engine 320 receives a timer request specifying arrival time 00:00:02 from the operator 311a of the query 311, it provisionally registers arrival time 00:00:02 and transmits a confirmation notification to the engines 320a and 320b. When the engine 320a receives a timer request specifying arrival time 00:00:01 from the operator 312a of the query 312, it provisionally registers arrival time 00:00:01 and transmits a confirmation notification to the engines 320 and 320b.
- the confirmation notification transmitted from the engine 320 has arrived at the engines 320a and 320b before 00:00:03.
- the confirmation notification transmitted from the engine 320a has arrived at the engines 320 and 320b after 00:00:04.
- the engines 320 and 320b first provisionally register arrival time 00:00:02, so their specific time is 00:00:04.
- the engine 320a first provisionally registers arrival time 00:00:01, so its specific time is 00:00:03.
- the first arrival time has not been determined by the specific time. Therefore, at the specific time 00:00:04, the engines 320 and 320b decide to issue the first timer notification at the temporary firing time 00:00:06 determined from their provisionally registered arrival time. Likewise, at the specific time 00:00:03, the engine 320a decides to issue the first timer notification at the temporary firing time 00:00:05. The first timer notifications are thus not unified among the engines 320, 320a, and 320b, because the first arrival time could not be determined in time.
- meanwhile, the confirmation procedure shown in FIG. 10 proceeds. Therefore, after the engines 320, 320a, and 320b issue the first timer notification, the first arrival time 00:00:01 is determined. The engines 320, 320a, and 320b then issue the second and subsequent timer notifications based on the determined first arrival time. If the firing interval (the issuance interval of the second and subsequent timer notifications) is 4 seconds, the engines 320, 320a, and 320b each issue a second timer notification at 00:00:09.
- the engines 320, 320a, and 320b tentatively issue a first timer notification using the temporarily registered arrival time as a reference time. Therefore, it is possible to avoid failing to issue the first timer notification. Further, when the procedure for determining the first arrival time is completed, the engines 320, 320a, and 320b can issue the second and subsequent timer notifications at the same timing.
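The timeline of FIG. 11 can be reproduced with a short calculation (times are seconds after 00:00:00; the constant and function names are illustrative):

```python
WAITING, MIN_SPECIFIC = 4, 2  # waiting time and minimum specific time (FIG. 11)

def temporary_firing_time(provisional_arrival):
    return provisional_arrival + WAITING

def specific_time(provisional_arrival):
    # Deadline for deciding: minimum specific time before the temporary firing.
    return temporary_firing_time(provisional_arrival) - MIN_SPECIFIC

# Engines 320/320b first provisionally register 00:00:02,
# engine 320a provisionally registers 00:00:01.
first_firing_320 = temporary_firing_time(2)   # provisional firing at t=6
first_firing_320a = temporary_firing_time(1)  # provisional firing at t=5

# After 00:00:01 is decided as the first arrival time, second firings align.
second_firing = 1 + WAITING + WAITING         # t=9 on every engine
```

The first notifications diverge (t=5 vs t=6), but once the first arrival time 00:00:01 is decided, every engine issues the second notification at t=9.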
- FIG. 12 is a block diagram illustrating an example of functions of the engine node.
- the engine node 300 includes queries 310 and 311, state storage units 310b and 311b, an engine 320, and a management information storage unit 330.
- the engine 320 includes a timer management unit 340, an event management unit 350, and a communication unit 360.
- the engine 320 can be implemented as a module of a program executed by the processor 301.
- the state storage units 310b and 311b and the management information storage unit 330 can be implemented as storage areas secured in the RAM 302 or the HDD 303.
- the engine nodes 300a and 300b also have the same function as the engine node 300.
- the query 310 has an operator 310a.
- the operator 310a may be a time-based operator or an operator other than the time-based operator.
- the query 311 has an operator 311a that is a time-based operator (not shown in FIG. 12).
- the state storage unit 310b is provided corresponding to the query 310, and stores information indicating the internal state of the query 310.
- the internal state information includes information indicating the temporarily accumulated event and the data range handled by the query 310 (for example, the range of identifiers included in the event distributed to the query 310).
- the state storage unit 311b is provided corresponding to the query 311 and stores information indicating the internal state of the query 311.
- a plurality of queries can be arranged in each engine node, and a state storage unit that stores information of an internal state for each arranged query is provided.
- One engine for each engine node manages a plurality of queries and state storage units on the engine node.
- the management information storage unit 330 stores management information including a shared management table that stores information related to timer requests, an event management table that stores information related to timer notifications, and a routing table that stores information related to event distribution destinations.
- the timer management unit 340 manages the issuance of timer notifications.
- the timer management unit 340 includes a request reception unit 341, a sharing unit 342, and a generation unit 343.
- the request reception unit 341 receives a timer request from a time-based operator of a query arranged in the engine node 300 (for example, the operator 311a of the query 311).
- the timer request includes information such as the arrival time, waiting time, firing interval, and firing upper limit count.
- when the request reception unit 341 receives a timer request, the sharing unit 342 registers the information included in the timer request in the shared management table stored in the management information storage unit 330. At this time, the sharing unit 342 treats the arrival time specified by the timer request as a provisionally registered arrival time. The sharing unit 342 then generates a confirmation notification for the provisionally registered arrival time and transmits it to the engine nodes 300a and 300b via the event management unit 350 and the communication unit 360.
- the transmission destination of the confirmation notification is an engine node in which a query of the same type as the transmission source of the timer request is arranged, and can be searched with reference to the routing table stored in the management information storage unit 330. However, the confirmation notification may be broadcast to the network 31 or may be transmitted to all other engine nodes.
- the sharing unit 342 receives permission notifications or rejection notifications as responses to the confirmation notification from the engine nodes 300a and 300b via the communication unit 360 and the event management unit 350. Based on the received responses, the sharing unit 342 determines whether to finalize the provisionally registered arrival time as the first arrival time. When determining the first arrival time, the sharing unit 342 switches the provisional registration to main registration, and registers the first firing time, firing interval, and firing upper limit count in the event management table stored in the management information storage unit 330. The sharing unit 342 then transmits a decision notification to the engine nodes 300a and 300b.
- when a confirmation notification is received from another engine node, the sharing unit 342 compares the arrival time specified by the confirmation notification with the arrival time provisionally registered in the engine node 300. If the former is earlier (smaller) than the latter, the sharing unit 342 updates the provisionally registered arrival time to the one specified in the confirmation notification and returns a permission notification. Otherwise, the sharing unit 342 returns a rejection notification.
- when a decision notification is received, the sharing unit 342 switches the provisionally registered arrival time to main registration, and registers the first firing time, firing interval, and firing upper limit count in the event management table.
- in order to issue the first timer notification, the sharing unit 342 provisionally registers the first firing time, firing interval, and firing upper limit count in the event management table.
- the sharing unit 342 may accept a change or cancellation of the timer request from the time base operator. In that case, the sharing unit 342 may reflect the change or cancellation in the sharing management table, the event management table, or the like.
- the generation unit 343 refers to the event management table stored in the management information storage unit 330, generates a timer notification at an appropriate timing, and outputs the timer notification to the event management unit 350.
- the event management unit 350 manages event distribution between queries on the engine 320 and the engine node 300 and event distribution between engine nodes.
- the event management unit 350 includes an allocation unit 351 and a delivery unit 352.
- the allocation unit 351 acquires a timer notification from the timer management unit 340 as a kind of event. The allocation unit 351 then refers to the routing table stored in the management information storage unit 330 and determines the time-based operator to which the timer notification is to be distributed. When the allocation unit 351 acquires an event control notification (a confirmation, permission, rejection, or decision notification) from the timer management unit 340 as a kind of event, it refers to the routing table to determine the destination engine node and passes the event control notification to the communication unit 360. The allocation unit 351 also passes event control notifications acquired from the communication unit 360 to the timer management unit 340.
- the allocation unit 351 controls the distribution of events as data passed between the queries 310 to 314.
- when the allocation unit 351 acquires an event as data from the communication unit 360, it refers to the information on the data ranges in charge stored in the routing table and the state storage units 310b and 311b, and determines the query to which the event is distributed.
- when the allocation unit 351 acquires an event as data from the queries 310 and 311, it determines the destination engine node with reference to the routing table and passes the event to the communication unit 360.
- the delivery unit 352 passes the event to the distribution-destination query or time-based operator.
- the communication unit 360 communicates with the engine nodes 300a and 300b via the network 31.
- the communication unit 360 acquires an event control notification or an event as data from the event management unit 350.
- the communication unit 360 transmits the event control notification or event to the engine node designated by the event management unit 350.
- when the communication unit 360 receives an event control notification or an event from another engine node, it passes it to the event management unit 350.
- FIG. 13 is a diagram showing an example of the shared management table.
- the shared management table 331 stores information related to timer requests.
- the shared management table 331 is stored in the management information storage unit 330.
- the shared management table 331 includes items of query name, operator name, arrival time, waiting time, firing interval, firing upper limit count, minimum specific time, and registration type.
- in the query name item, an identifier indicating the type of the query that transmitted the timer request is set.
- the query name can also be called an identifier of the query program.
- the same query name is given to the same type of query started from the same query program.
- in the operator name item, an identifier indicating the type of the operator that transmitted the timer request is set.
- the operator name can also be referred to as an operator identifier included in the query program.
- the same operator name is given to the same type of operators included in the same type of query.
- the arrival time specified in the timer request is set in the arrival time item. This arrival time indicates the time when the first event arrives at the query that transmitted the timer request.
- the waiting time specified by the timer request is set in the waiting time item.
- in the firing interval item, the firing interval designated by the timer request (the issuance interval of the second and subsequent timer notifications) is set.
- in the firing upper limit count item, the upper limit number of firings specified in the timer request (the maximum number of times the timer notification is issued) is set. Since the waiting time, firing interval, and firing upper limit count are described in the query program, operators of the same type designate the same waiting time, firing interval, and firing upper limit count. When a time-based operator requests a timer notification only once, the timer request need not include the firing interval and firing upper limit count.
- the minimum specific time for calculating the specific time is set in the item of the minimum specific time.
- the minimum specific time may be set to zero.
- when the minimum specific time is a positive fixed value, the fixed value may be set in the minimum specific time item.
- when the minimum specific time is a variable value that depends on the communication status or the like, the calculated variable value is set in the minimum specific time item.
- the sharing unit 342 calculates, as the notification delay time, the difference between the time when a confirmation notification is received from another engine node and the arrival time specified by that confirmation notification, and sets twice the notification delay time as the minimum specific time. Note that the minimum specific time is kept less than the waiting time.
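A minimal sketch of this variable minimum specific time (the function name is illustrative, and the cap just below the waiting time is one assumed way to honor the note that it must stay less than the waiting time):

```python
def minimum_specific_time(receive_time, notified_arrival, waiting_time):
    """Twice the observed notification delay, capped below the waiting time."""
    delay = receive_time - notified_arrival  # notification delay time
    return min(2 * delay, waiting_time - 1)
```

For example, a confirmation notification carrying arrival time t=2 and received at t=3 gives a delay of 1 second and a minimum specific time of 2 seconds.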
- FIG. 14 is a diagram illustrating an example of an event management table.
- the event management table 332 stores information related to timer notification.
- the event management table 332 is stored in the management information storage unit 330.
- the event management table 332 includes items of query name, operator name, first firing time, firing interval, firing upper limit count, fired count, and registration type.
- Items registered in the shared management table 331 are copied to the items of query name, operator name, firing interval, and firing upper limit count.
- in the first firing time item, the first firing time calculated from the first arrival time and the waiting time is set.
- in the fired count item, the number of times the generation unit 343 has generated a timer notification is set.
- the generation unit 343 repeatedly generates timer notifications according to the firing interval until the fired count reaches the firing upper limit count.
- in the registration type item, the determination status of the first firing time is set. When the first timer notification is issued provisionally because the first arrival time could not be determined by the specific time, “temporary registration” is set in the registration type item. Otherwise, that is, when the first arrival time has been determined, “main registration” is set in the registration type item.
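The generation logic implied by the event management table can be sketched as follows (field names are illustrative, not the table's actual column names):

```python
def generate_notifications(entry):
    """Yield timer-notification times until the fired count reaches the
    firing upper limit count."""
    fired, t = 0, entry['first_firing_time']
    while fired < entry['firing_upper_limit']:
        yield t                      # one timer notification
        fired += 1                   # tracked as the fired count
        t += entry['firing_interval']
```

An entry with first firing time 5, firing interval 4, and firing upper limit count 3 yields notifications at t=5, 9, 13 and then stops.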
- FIG. 15 is a diagram illustrating an example of a routing table.
- the routing table 333 stores information related to event distribution destinations.
- the routing table 333 is stored in the management information storage unit 330.
- the routing table 333 has items of event name, query name, operator name, and node name.
- Event1 and Event2 indicate events corresponding to data
- TimeEvent1 and TimeEvent2 indicate timer notifications
- TimeEvent_Ctl1 and TimeEvent_Ctl2 indicate event control notifications.
- in the query name item, an identifier indicating the type of the event delivery destination query is set.
- in the operator name item, an identifier indicating the type of the operator receiving the event is set.
- in the node name item, the identifier of the engine node in which the query is arranged is set.
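A lookup over such a routing table can be sketched with illustrative entries (the names below are placeholders, not the values in FIG. 15):

```python
# Toy routing table: each record maps an event name to its destination.
ROUTING = [
    {'event': 'Event1',         'query': 'Q1', 'operator': 'Op1', 'node': 'N300'},
    {'event': 'TimeEvent1',     'query': 'Q1', 'operator': 'Op1', 'node': 'N300'},
    {'event': 'TimeEvent_Ctl1', 'query': 'Q1', 'operator': 'Op1', 'node': 'N300a'},
]

def destinations(event_name):
    """Return (query, operator, node) tuples for the given event name."""
    return [(r['query'], r['operator'], r['node'])
            for r in ROUTING if r['event'] == event_name]
```

Data events, timer notifications, and event control notifications are all resolved through the same table, differing only in their event names.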
- FIG. 16 is a diagram illustrating an example of a timer request.
- a time-based operator such as the operator 311a transmits a timer request 61 to the timer management unit 340.
- the timer request 61 has items of query name, operator name, arrival time, waiting time, firing interval, and firing upper limit count.
- the query name, operator name, arrival time, waiting time, firing interval, and firing upper limit count are registered in the above-described shared management table 331.
- the query name and operator name included in the timer request 61 indicate the transmission source of the timer request 61.
- the arrival time specified by the timer request 61 is the time when the first event arrives at the transmission source of the timer request 61.
- FIG. 17 is a diagram showing an example of event control notification.
- the event control notification 62 is transmitted between the engine nodes 300, 300a, 300b to determine the first arrival time.
- the event control notification 62 has items of query name, operator name, transmission source, control type, arrival time, waiting time, firing interval, and firing upper limit count.
- the query name, operator name, arrival time, waiting time, firing interval, and firing upper limit count are those provisionally registered in the shared management table 331.
- the identifier of the engine node that transmitted the event control notification 62 is set in the transmission source item.
- one of confirmation, permission, rejection, or confirmed is set as the type of the event control notification 62.
- the engine node 300 executes the processes of FIGS. 18 to 21.
- the engine nodes 300a and 300b also execute the same processing as the engine node 300, independently of one another.
- FIG. 18 is a flowchart illustrating an example of transmission side processing for common arrival time.
- the request receiving unit 341 receives the timer request 61.
- the sharing unit 342 refers to the routing table 333 stored in the management information storage unit 330 and searches for another engine node in which a query of the same type as the transmission source of the timer request 61 is arranged. For example, the sharing unit 342 obtains the event name indicating the event control notification (the one with “_Ctl” in FIG. 15) and the node name corresponding to the query name / operator name specified in the timer request 61 from the routing table 333. Extract.
- (S103) The sharing unit 342 determines whether one or more other engine nodes were found in step S102, that is, whether the query that transmitted the timer request 61 is parallelized. If the source query is parallelized, the process proceeds to step S105. If not, the process proceeds to step S104.
- the sharing unit 342 calculates the first firing time from the arrival time and the standby time specified by the timer request 61 (the first firing time is the arrival time plus the standby time). The sharing unit 342 then registers the query name, operator name, firing interval, and firing upper limit count specified in the timer request 61, together with the calculated first firing time, in the event management table 332 stored in the management information storage unit 330 as "main registration". Then, the process ends.
- the sharing unit 342 determines whether a record including the query name and operator name specified in the timer request 61 has already been registered, as main registration or temporary registration, in the shared management table 331 stored in the management information storage unit 330. If such a record is registered, the timer request 61 is discarded and the process ends. If not, the process proceeds to step S106.
- the sharing unit 342 calculates the minimum specific time.
- the minimum specific time used here may be a predetermined fixed value, or may be calculated based on past communication times with the other engine nodes found in step S102.
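One possible reading of this step is sketched below: the margin is either a fixed floor or derived from measured round-trip times to the peers found in step S102. The scaling heuristic and parameter values are our assumptions, not the patent's.

```python
def minimum_specific_time(rtt_history, fixed_floor=0.5, margin=2.0):
    """Hypothetical estimate of the 'minimum specific time': the margin to
    reserve for completing the confirmation procedure with peer nodes.
    Here it is the worst observed round-trip time scaled by a safety factor,
    never below a fixed floor. The patent allows either a fixed value or a
    value derived from past communication times; this heuristic is ours."""
    if not rtt_history:
        return fixed_floor  # no measurements yet: fall back to the fixed value
    return max(fixed_floor, margin * max(rtt_history))
```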
- the sharing unit 342 registers the query name, operator name, arrival time, standby time, firing interval, and firing upper limit count specified in the timer request 61, together with the minimum specific time calculated in step S106, in the shared management table 331 as "temporary registration".
- the sharing unit 342 generates an event control notification (confirmation notification) whose control type is “confirmation”.
- in the confirmation notification, the query name, operator name, arrival time, standby time, firing interval, and firing upper limit count specified in the timer request 61, as well as the node name of the engine node 300, are described.
- the sharing unit 342 transmits a confirmation notification to the other engine nodes searched in step S102 via the event management unit 350 and the communication unit 360.
- the confirmation notification may be broadcast to all engine nodes.
- the sharing unit 342 receives the event control notification from the other engine node to which the confirmation notification is transmitted via the communication unit 360 and the event management unit 350.
- the distribution unit 351 of the event management unit 350 determines whether the event is an event control notification based on the event identifier acquired from the communication unit 360, and passes the event control notification to the sharing unit 342.
- the sharing unit 342 determines whether all of the received event control notifications are permission notifications. If all event control notifications are permission notifications, the process proceeds to step S110. If one or more event control notifications are rejection notifications, the process ends.
- the sharing unit 342 searches the shared management table 331 for the record including the query name and operator name specified in the permission notifications (the record temporarily registered in step S107), and changes the retrieved record from "temporary registration" to "main registration".
- the sharing unit 342 calculates the first firing time from the arrival time and the standby time described in the record of the shared management table 331 that was main-registered in step S110. The sharing unit 342 then registers the query name, operator name, firing interval, and firing upper limit count described in the record, together with the calculated first firing time, in the event management table 332 as "main registration". However, as will be described later, the query name, operator name, and so on may already have been temporarily registered in the event management table 332. In that case, the sharing unit 342 may simply change the corresponding record of the event management table 332 to "main registration".
- the sharing unit 342 generates an event control notification (confirmed notification) whose control type is "confirmed".
- the confirmed notification describes the query name, operator name, arrival time, standby time, firing interval, and firing upper limit count that were main-registered in the shared management table 331 in step S110.
- the sharing unit 342 transmits the confirmed notification to the other engine nodes via the event management unit 350 and the communication unit 360. The notification may instead be broadcast to all engine nodes.
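The transmission-side flow of FIG. 18 (steps S101 to S112) can be condensed into a single function. The data structures and the synchronous `send_confirm` callback below are illustrative simplifications of the engine's asynchronous messaging, not the actual implementation.

```python
def send_side(timer_req, peers, share_table, event_table, send_confirm):
    """Condensed sketch of FIG. 18. `timer_req` is a dict with the timer
    request items; `send_confirm(peer, req)` stands in for sending a
    confirmation notification and returns "permission" or "rejection"."""
    key = (timer_req["query"], timer_req["operator"])
    first_firing = timer_req["arrival_time"] + timer_req["standby_time"]
    if not peers:
        # S103/S104: the source query is not parallelized; register locally.
        event_table[key] = {"first_firing_time": first_firing,
                            "registration": "main"}
        return "registered-locally"
    if key in share_table:
        # S105: a record already exists; discard the duplicate timer request.
        return "discarded"
    # S107: temporarily register the candidate first arrival time.
    share_table[key] = dict(timer_req, registration="temporary")
    # S108/S109: confirm with every peer; one rejection aborts the attempt.
    replies = [send_confirm(p, timer_req) for p in peers]
    if "rejection" in replies:
        return "rejected"
    # S110/S111: promote to main registration and schedule the first firing.
    share_table[key]["registration"] = "main"
    event_table[key] = {"first_firing_time": first_firing,
                        "registration": "main"}
    return "confirmed"  # S112: a "confirmed" notification would now be sent
```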
- FIG. 19 is a flowchart illustrating an example of reception side processing for common arrival time.
- the sharing unit 342 receives an event control notification (confirmation notification) whose control type is “confirmation” from another engine node via the communication unit 360 and the event management unit 350.
- at this time, the distribution unit 351 of the event management unit 350 determines, for example based on the event identifier acquired from the communication unit 360, that the event is an event control notification, and passes it to the sharing unit 342.
- the sharing unit 342 determines whether a record including the query name and operator name specified in the confirmation notification is main-registered in the shared management table 331 stored in the management information storage unit 330. If main-registered, the process proceeds to step S123. If not (temporarily registered or not registered), the process proceeds to step S124.
- the sharing unit 342 generates an event control notification (rejection notification) whose control type is “rejection”.
- in the rejection notification, the query name, operator name, arrival time, standby time, firing interval, and firing upper limit count specified in the confirmation notification, as well as the node name of the engine node 300, are described.
- the sharing unit 342 returns a rejection notification to another engine node that has transmitted the confirmation notification via the event management unit 350 and the communication unit 360. Then, the process ends.
- the sharing unit 342 determines whether a record including the query name and operator name specified in the confirmation notification is provisionally registered in the share management table 331. If temporarily registered, the process proceeds to step S125, and if not temporarily registered (no corresponding record exists), the process proceeds to step S127.
- the sharing unit 342 compares the arrival time specified in the confirmation notification with the arrival time temporarily registered in the shared management table 331. (S126) As a result of the comparison in step S125, the sharing unit 342 determines whether the arrival time specified in the confirmation notification is earlier (smaller) than the arrival time temporarily registered in the shared management table 331. If the former is earlier than the latter, the process proceeds to step S127. Otherwise (if the former is later (larger) than, or the same as, the latter), the process proceeds to step S123.
- the sharing unit 342 calculates the minimum specific time.
- the sharing unit 342 temporarily registers the information related to the received confirmation notification in the shared management table 331. If no temporarily registered record was found in step S124, the sharing unit 342 registers the query name, operator name, arrival time, standby time, firing interval, and firing upper limit count specified in the confirmation notification, together with the minimum specific time calculated in step S127, in the shared management table 331 as "temporary registration". If a temporarily registered record was found in step S124, the sharing unit 342 updates its arrival time to the one specified in the confirmation notification and its minimum specific time to the one calculated in step S127.
- the sharing unit 342 generates an event control notification (permission notification) whose control type is “permitted”.
- in the permission notification, the query name, operator name, arrival time, standby time, firing interval, and firing upper limit count specified in the confirmation notification, as well as the node name of the engine node 300, are described.
- the sharing unit 342 returns a permission notification to the other engine node that has transmitted the confirmation notification via the event management unit 350 and the communication unit 360.
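The reception-side decision of FIG. 19 amounts to: reject if the timer is already determined locally; otherwise permit only a strictly earlier candidate arrival time. A minimal sketch, with illustrative structures:

```python
def receive_confirmation(share_table, notice, min_specific_time):
    """Sketch of FIG. 19 (steps S121 to S129). `notice` carries the items of
    the received confirmation notification as a dict."""
    key = (notice["query"], notice["operator"])
    record = share_table.get(key)
    if record and record["registration"] == "main":
        return "rejection"  # S122 -> S123: already determined on this node
    if record and notice["arrival_time"] >= record["arrival_time"]:
        return "rejection"  # S125/S126: candidate is not strictly earlier
    # S127/S128: adopt the earlier candidate as the temporary registration.
    share_table[key] = dict(notice, registration="temporary",
                            min_specific_time=min_specific_time)
    return "permission"     # S129
```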
- FIG. 20 is a flowchart illustrating an example of processing when a confirmed notification is received.
- the sharing unit 342 receives an event control notification (confirmed notification) whose control type is "confirmed" from another engine node via the communication unit 360 and the event management unit 350.
- at this time, the distribution unit 351 of the event management unit 350 determines, for example based on the event identifier acquired from the communication unit 360, that the event is an event control notification, and passes it to the sharing unit 342.
- the sharing unit 342 searches the shared management table 331 stored in the management information storage unit 330 for a temporarily registered record including the query name and operator name specified in the confirmed notification.
- the sharing unit 342 changes the retrieved record from temporary registration to main registration.
- the sharing unit 342 calculates the first firing time from the arrival time and the standby time of the record main-registered in the shared management table 331 in step S132. The sharing unit 342 then registers the query name, operator name, firing interval, and firing upper limit count registered in the shared management table 331, together with the calculated first firing time, in the event management table 332 as "main registration". However, as will be described later, the query name, operator name, and so on may already have been temporarily registered in the event management table 332. In that case, the sharing unit 342 may simply change the corresponding record of the event management table 332 to "main registration".
- FIG. 21 is a flowchart illustrating an example of processing when a specific time has elapsed.
- the sharing unit 342 searches the shared management table 331 for a temporarily registered record whose specific time the current time has passed.
- (S142) The sharing unit 342 determines whether such a temporarily registered record was found in step S141. If found, the process proceeds to step S143. If not, the process returns to step S141.
- the sharing unit 342 calculates a provisional first firing time from the arrival time and the standby time described in the temporarily registered record of the shared management table 331 found in step S141. The sharing unit 342 then registers the query name, operator name, firing interval, and firing upper limit count temporarily registered in the shared management table 331, together with the calculated provisional first firing time, in the event management table 332 as "temporary registration". Accordingly, the first timer notification is provisionally issued by the generation unit 343 based on the provisional arrival time.
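Per the description, the specific time is the first firing time moved back by the minimum specific time, i.e. specific time = arrival time + standby time - minimum specific time. The FIG. 21 check can then be sketched as follows; the record layout is illustrative.

```python
def due_for_provisional_firing(record, now):
    """Sketch of the FIG. 21 check. The specific time is the first firing
    time moved back by the minimum specific time:
        specific_time = arrival_time + standby_time - min_specific_time
    A temporarily registered record past its specific time is promoted so the
    first timer notification is issued provisionally rather than missed."""
    specific_time = (record["arrival_time"] + record["standby_time"]
                     - record["min_specific_time"])
    return record["registration"] == "temporary" and now >= specific_time
```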
- FIG. 22 is a diagram illustrating a sequence example when confirmation notification is permitted.
- the engine 320 receives the timer request 61 from the operator 311a of the query 311 that is a time-based operator.
- the engine 320 provisionally registers the arrival time designated by the timer request 61 in the shared management table 331 stored in the engine node 300.
- the engine 320 transmits a confirmation notification describing the arrival time and the like temporarily registered in the sharing management table 331 in step S202 to the engine node 300a.
- the engine 320a compares the arrival time specified in the received confirmation notification with the arrival time registered in the shared management table stored in the engine node 300a.
- here, suppose that no arrival time related to the operator 311a is registered in the shared management table of the engine node 300a, or that the arrival time specified in the confirmation notification is earlier than the one temporarily registered in the shared management table of the engine node 300a.
- the engine 320a temporarily registers the arrival time and the like in the sharing management table of the engine node 300a.
- the engine 320a transmits a permission notice to the engine node 300.
- the engine 320 fully registers the arrival time and the like temporarily registered in the sharing management table 331 of the engine node 300 in step S202.
- the engine 320 transmits a confirmed notification to the engine node 300a.
- the engine 320a fully registers the arrival time and the like temporarily registered in the sharing management table of the engine node 300a in step S204. As a result, a common “first arrival time” is determined between the engine nodes 300 and 300a.
- FIG. 23 is a diagram illustrating a sequence example when the confirmation notification is rejected.
- the engine 320 receives the timer request 61 from the operator 311a of the query 311 that is a time-based operator.
- the engine 320 provisionally registers the arrival time designated by the timer request 61 in the shared management table 331 stored in the engine node 300. (S213) The engine 320 transmits to the engine node 300a a confirmation notification describing the arrival time and the like temporarily registered in the sharing management table 331 in step S212.
- the engine 320a compares the arrival time specified in the received confirmation notification with the arrival time registered in the shared management table stored in the engine node 300a.
- here, suppose that the arrival time specified in the confirmation notification is later than, or the same as, the one temporarily registered in the shared management table of the engine node 300a.
- the engine 320a transmits a rejection notification to the engine node 300 without temporarily registering the arrival time or the like in the shared management table of the engine node 300a.
- the arrival time designated by the operator 311a is thus not adopted as the common "first arrival time"; instead, the arrival time designated by another operator is adopted.
- FIG. 24 is a diagram illustrating an example of management when an engine node is added.
- when the manager node 600 determines that the load on the engine nodes 300, 300a, and 300b is high, it sets up the engine node 300c as a spare engine node in which queries can be arranged. For example, the manager node 600 causes the engine node 300c to start an engine for managing one or more queries.
- the manager node 600 can copy the query 311 arranged in the engine node 300 to the engine node 300c or move it to the engine node 300c. By copying the query 311 and increasing the number of queries of the same type as the query 311 (increasing the degree of parallelism), the load on the query 311 can be reduced. Further, by moving the query 311, the load on the engine node 300 can be reduced.
- the copy of the query 311 can be realized by, for example, copying a query program from the engine node 300 to the engine node 300c and starting the query 315 in the engine node 300c based on the copied query program.
- the query 311 on the engine node 300 and the query 315 on the engine node 300c are executed in parallel.
- the movement of the query 311 can be realized, for example, by stopping the query 311, moving the query program from the engine node 300 to the engine node 300c, and starting the query 311 on the engine node 300c based on the moved query program.
- a query 315 of the same type as the query 311 is placed in the engine node 300c by copying the query 311 of the engine node 300.
- the manager node 600 instructs the engine 320 of the engine node 300 that is the copy source and the engine of the engine node 300c that is the copy destination to copy (S301).
- Engine node 300 transmits a query program describing the processing of query 311 to engine node 300c in response to an instruction from manager node 600.
- the engine node 300c activates the query 315 including the operator 315a based on the query program received from the engine node 300. Further, the engine node 300c secures a state storage unit 315b that stores information on the internal state of the query 315 (S302).
- the engine node 300 extracts information about the query 311 from the shared management table 331 and the event management table 332 stored in the management information storage unit 330, and transmits the information to the engine node 300c.
- thereby, the engine node 300c is notified of the determined first arrival time, standby time, firing interval, firing upper limit count, and the like.
- the engine node 300c secures a management information storage unit 330c that stores the shared management table and the event management table, and registers the information received from the engine node 300 in the shared management table and the event management table.
- thus, the engine node 300c, added afterwards, can share the first arrival time determined among the engine nodes 300, 300a, and 300b. Therefore, a timer notification is issued to the operator 315a of the query 315 at the same timing as to the operator 311a of the query 311 (S303).
- the manager node 600 updates the routing table of each node and the internal state information of the queries 310 to 315 so that the event output from the query 310 is also distributed to the query 315.
- the manager node 600 registers, in the routing tables of the input adapter node 200 and the engine nodes 300, 300a, 300b, and 300c, the fact that a query of the same type as the queries 311, 312, and 313 exists in the engine node 300c.
- the manager node 600 reviews the data ranges handled by the queries 311, 312, and 313 so that the load is distributed, and updates the internal state information of the queries 311, 312, and 313.
- the first arrival time and the like can be notified from the engine node 300 to the engine node 300c even when the query 311 is moved.
- the engine node 300c can issue a timer notification to the operator 315a at an appropriate timing by receiving a notification of the first arrival time and the like from another engine node.
- in the above example, the engine node 300 notifies the first arrival time and the like. However, at least one of the engine nodes in which a query of the same type as the query 315 is arranged, such as the engine nodes 300a and 300b, may give the notification instead. The engine node that notifies the first arrival time and the like may also be designated by the manager node 600.
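The state transfer of steps S302/S303 can be pictured as copying the main-registered timer record for the copied query from the source node to the added node, so that both nodes later fire at the same instants. The function and table layout below are illustrative assumptions, not the patent's implementation.

```python
def transfer_timer_state(src_share_table, dst_share_table, query, operator):
    """Sketch of notifying an added engine node of the determined first
    arrival time: the copy-source node extracts the main-registered record
    for the copied query, and the destination registers an identical copy."""
    record = src_share_table[(query, operator)]
    # Only a determined (main-registered) first arrival time is shared.
    assert record["registration"] == "main"
    dst_share_table[(query, operator)] = dict(record)
    return record["arrival_time"]
```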
- according to the distributed system of the second embodiment, the arrival time specified by the timer request 61 is notified from the engine node 300, in which the query 311 is arranged, to the other engine nodes.
- a common “first arrival time” is determined between the engine nodes 300, 300a, and 300b. Then, using the common “first arrival time” as a reference time, the timing of issuing a timer notification in each of the engine nodes 300, 300a, and 300b is managed.
- the timer notification issuance timing to the queries 311, 312, and 313 can be aligned.
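The alignment can be seen arithmetically: once all nodes agree on a common first arrival time, each derives an identical firing schedule. This sketch assumes, for simplicity, that firings occur exactly at first arrival + standby + k x interval.

```python
def firing_schedule(first_arrival, standby, interval, limit):
    """Firing times derived from a common first arrival time: every node that
    computes this schedule fires its parallelized query copy at the same
    instants, up to the firing upper limit count."""
    return [first_arrival + standby + k * interval for k in range(limit)]
```

Two nodes holding the same (first arrival, standby, interval, limit) tuple necessarily produce the same list, which is why the confirmation procedure only needs to agree on the first arrival time.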
- as a result, the processing result is consistent with the case where parallelization is not performed, and the event processing intended by the query program can be realized. Therefore, event processing can be smoothly parallelized, and the performance of the distributed system can be improved.
- in addition, a specific time is set, and when the specific time has passed, the first timer notification is provisionally issued, at the judgment of each engine node, based on the temporarily registered arrival time. Therefore, even if the communication delay between the engine nodes 300, 300a, and 300b is large and the confirmation procedure takes time, failing to issue the first timer notification can be avoided.
- when the query 311 is copied or moved to the added engine node 300c, the determined first arrival time is notified to the engine node 300c. Therefore, the added engine node 300c can issue timer notifications at the same timing as the engine nodes 300, 300a, and 300b.
- the system according to the first embodiment can be realized by causing the information processing apparatuses 10 and 20 to execute a program.
- the information processing according to the second embodiment can be realized by causing the input adapter node 200, the engine nodes 300, 300a, 300b, and 300c, the output adapter node 400, the client device 500, and the manager node 600 to execute programs.
- a program can be recorded on a computer-readable recording medium (for example, the recording medium 43).
- a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like can be used as the recording medium.
- Magnetic disks include FD and HDD.
- Optical disks include CD, CD-R (Recordable) / RW (Rewritable), DVD, and DVD-R / RW.
- to distribute the program, for example, a portable recording medium on which the program is recorded is provided.
- the computer stores a program recorded in a portable recording medium in a storage device (for example, HDD 303), reads the program from the storage device, and executes the program.
- the program read from the portable recording medium may be directly executed.
- at least a part of the information processing described above can be realized by an electronic circuit such as a DSP, an ASIC, or a PLD (Programmable Logic Device).
Description
The above and other objects, features, and advantages of the present invention will become apparent from the following description taken in conjunction with the accompanying drawings, which illustrate preferred embodiments of the present invention by way of example.
[First Embodiment]
FIG. 1 illustrates a distributed system according to the first embodiment.
FIG. 2 illustrates a distributed system according to the second embodiment. The distributed system of the second embodiment is an information processing system that analyzes, in real time, a large amount of sensor data received from sensor devices and provides the analysis results to a client device. Examples of sensor data include vehicle speed data collected in real time from speedometers installed on roads, and credit card usage histories collected in real time from card readers installed in stores. Examples of sensor data analysis include predicting where traffic congestion will occur based on the collected vehicle speed data, and detecting fraudulent use of credit cards based on the collected credit card usage histories.
Note that the engine node 300 may omit the disk drive 306 and, when controlled from a terminal device operated by a user, may also omit the image signal processing unit 304 and the input signal processing unit 305. The display 41 and the input device 42 may be formed integrally with the housing of the engine node 300. The input adapter node 200, the engine nodes 300a, 300b, and 300c, the output adapter node 400, and the client device 500 can also be realized using hardware similar to that of the engine node 300.
FIG. 6 illustrates an example of a time-based operator. The query 311 has an operator 311a. An operator is an instance of an operator described in a query program and is executed within a query. Operators include operators corresponding to event collection, retrieval, and selection; arithmetic, logical, and relational operators; and functions. A single query may contain multiple operators. In that case, for example, the relationships among the operators are defined as a tree structure, and the operators are invoked in an order according to the tree structure.
FIG. 7 illustrates an implementation example of a time-based operator. The engine node 300 has an engine 320. The engine 320 controls one or more queries executed on the engine node 300. One engine runs on each engine node, whether the node is a physical machine or a virtual machine. As described below, the engine 320 has a timer function capable of controlling the firing timing of the operator 311a.
(S2) Triggered by the arrival of event Ev#1, the operator 311a transmits a timer request to the engine 320. The timer request asks the engine to issue, to the operator 311a, a timer notification indicating the firing timing. The timer request may include information such as the arrival time of event Ev#1 and a standby time. When the timer request asks for timer notifications to be issued repeatedly, it may include a firing interval. The query 311 accumulates the events distributed from the preceding query 310 until the operator 311a receives a timer notification from the engine 320.
FIG. 8 illustrates an example of timer notification times for parallelized queries. The input stream 51 is a virtual input channel that passes events output by the preceding query 310 to the queries 311, 312, and 313. The output stream 52 is a virtual output channel that passes events output from the queries 311, 312, and 313 to the subsequent query 314. The input stream 51 and the output stream 52 are formed by the engines of the engine nodes 300, 300a, and 300b. The queries 311 and 312 only need to recognize the input stream 51 and the output stream 52, and need not be directly aware of the preceding or subsequent queries.
(S31) The engine 320a receives a timer request from the operator 312a of the query 312. This timer request specifies the arrival time 00:00:01. The engine 320a provisionally registers 00:00:01 as a candidate for the first arrival time for the queries 311, 312, and 313. The provisionally registered arrival time may be updated later.
However, the above confirmation procedure involves communication overhead. Therefore, when the standby time specified by the time-based operator (the time from the arrival time to the first firing time) is short, the engines 320, 320a, and 320b may be unable to complete the confirmation procedure before the first timer notification. Thus, when the confirmation procedure cannot be completed by the "specific time", each of the engines 320, 320a, and 320b provisionally issues the first timer notification on the assumption that the currently provisionally registered arrival time is the first arrival time.
FIG. 12 is a block diagram illustrating an example of the functions of an engine node. The engine node 300 has queries 310 and 311, state storage units 310b and 311b, an engine 320, and a management information storage unit 330. The engine 320 has a timer management unit 340, an event management unit 350, and a communication unit 360. The engine 320 can be implemented as a module of a program executed by the processor 301. The state storage units 310b and 311b and the management information storage unit 330 can be implemented as storage areas secured in the RAM 302 or the HDD 303. The engine nodes 300a and 300b have functions similar to those of the engine node 300.
The request receiving unit 341 accepts timer requests from time-based operators of the queries arranged in the engine node 300 (for example, the operator 311a of the query 311). A timer request includes information such as the arrival time, standby time, firing interval, and firing upper limit count.
The event management unit 350 manages the delivery of events between the engine 320 and the queries on the engine node 300, and the delivery of events between engine nodes. The event management unit 350 has a distribution unit 351 and a delivery unit 352.
The communication unit 360 communicates with the engine nodes 300a and 300b via the network 31. Upon acquiring an event control notification or a data event from the event management unit 350, the communication unit 360 transmits it to the engine node designated by the event management unit 350. Upon receiving an event control notification or a data event from the engine node 300a or 300b, the communication unit 360 passes it to the event management unit 350.
FIG. 14 illustrates an example of the event management table. The event management table 332 stores information about timer notifications. The event management table 332 is stored in the management information storage unit 330. The event management table 332 has items of query name, operator name, first firing time, firing interval, firing upper limit count, fired count, and registration type.
(S141) The sharing unit 342 searches the shared management table 331 for a temporarily registered record whose specific time the current time has passed. The specific time is the time obtained by going back the minimum specific time from the first firing time that would result from using the temporarily registered arrival time as the reference time. Specifically, it can be calculated as: specific time = arrival time + standby time - minimum specific time.
11, 21 Process
12, 22 Control unit
13, 23 Communication unit
Claims (9)
- A program used for controlling a distributed system that performs distributed processing by a plurality of processes, the program causing a computer to execute a process comprising:
acquiring a first issuance request for a timer event, generated by a first process executed on the computer among the plurality of processes;
receiving, from another computer that executes a second process among the plurality of processes, issuance timing information of a timer event issued by the other computer in response to a second issuance request for a timer event generated by the second process; and
controlling, based on the received issuance timing information of the timer event, the issuance timing of a timer event to the first process in response to the first issuance request for the timer event.
- The event management program according to claim 1, wherein: a first reference time used to specify the issuance timing of a timer event is specified based on the first issuance request for the timer event; the issuance timing information of the timer event includes information indicating a second reference time used to specify the issuance timing of a timer event; and a common reference time between the computer and the other computer is determined from the first and second reference times, and the issuance timing of a timer event to the first process is controlled based on the common reference time.
- The event management program according to claim 2, wherein the common reference time is provisionally determined according to the acquisition status of the first issuance request for the timer event and the reception status of the issuance timing information of the timer event, and is confirmed through a confirmation procedure with the other computer, and wherein, when the confirmation procedure is not completed by the provisional issuance timing specified by the provisionally determined common reference time, or by a predetermined time before the provisional issuance timing, a timer event is issued to the first process at the provisional issuance timing.
- The event management program according to claim 3, wherein the predetermined time is calculated based on an index value indicating the communication quality between the computer and the other computer.
- The event management program according to any one of claims 2 to 4, wherein, when a third process is started on a further computer different from the computer and the other computer, the common reference time is notified to the further computer.
- The event management program according to any one of claims 2 to 5, wherein the first reference time is a time at which the first process accepted data satisfying a predetermined condition, and the second reference time is a time at which the second process accepted data satisfying the predetermined condition.
- The event management program according to any one of claims 1 to 6, wherein other issuance timing information of a timer event issued in response to the first issuance request for the timer event is transmitted to the other computer, and in the other computer, the issuance timing of a timer event to the second process in response to the second issuance request for the timer event is controlled based on the other issuance timing information.
- An event management method performed by a distributed system that includes a plurality of computers and performs distributed processing by a plurality of processes, the method comprising:
acquiring, in a first computer that executes a first process among the plurality of processes, an issuance request for a timer event generated by the first process;
transmitting issuance timing information of a timer event, issued in response to the issuance request for the timer event, from the first computer to a second computer that executes a second process among the plurality of processes; and
controlling, in the second computer, the issuance timing of a timer event to the second process based on the issuance timing information of the timer event received from the first computer.
- A distributed system that performs distributed processing by a plurality of processes, comprising a first information processing apparatus and a second information processing apparatus, wherein:
the first information processing apparatus includes a first control unit that acquires an issuance request for a timer event generated by a first process executed on the first information processing apparatus among the plurality of processes, and a communication unit that transmits issuance timing information of a timer event, issued in response to the issuance request for the timer event, to the second information processing apparatus; and
the second information processing apparatus includes a second control unit that controls, based on the issuance timing information of the timer event received from the first information processing apparatus, the issuance timing of a timer event to a second process executed on the second information processing apparatus among the plurality of processes.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015547324A JP6056986B2 (ja) | 2013-11-13 | 2013-11-13 | イベント管理プログラム、イベント管理方法および分散システム |
PCT/JP2013/080688 WO2015071978A1 (ja) | 2013-11-13 | 2013-11-13 | イベント管理プログラム、イベント管理方法および分散システム |
EP13897409.2A EP3070606B1 (en) | 2013-11-13 | 2013-11-13 | Event management program, event management method, and distributed system |
US15/144,385 US9733997B2 (en) | 2013-11-13 | 2016-05-02 | Event management method and distributed system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2013/080688 WO2015071978A1 (ja) | 2013-11-13 | 2013-11-13 | イベント管理プログラム、イベント管理方法および分散システム |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/144,385 Continuation US9733997B2 (en) | 2013-11-13 | 2016-05-02 | Event management method and distributed system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2015071978A1 true WO2015071978A1 (ja) | 2015-05-21 |
Family
ID=53056947
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2013/080688 WO2015071978A1 (ja) | 2013-11-13 | 2013-11-13 | イベント管理プログラム、イベント管理方法および分散システム |
Country Status (4)
Country | Link |
---|---|
US (1) | US9733997B2 (ja) |
EP (1) | EP3070606B1 (ja) |
JP (1) | JP6056986B2 (ja) |
WO (1) | WO2015071978A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9811378B1 (en) | 2016-04-27 | 2017-11-07 | Fujitsu Limited | Information processing device, complex event processing method, and computer readable storage medium |
JP2019512973A (ja) * | 2016-03-23 | 2019-05-16 | フォグホーン システムズ, インコーポレイテッドFoghorn Systems, Inc. | リアルタイムデータフロープログラミングのための効率的な状態機械 |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019028547A1 (en) * | 2017-08-08 | 2019-02-14 | Crypto4A Technologies Inc. | METHOD AND SYSTEM FOR DEPLOYING AND EXECUTING EXECUTABLE CODE BY SECURE MACHINE |
KR102026301B1 (ko) * | 2017-12-29 | 2019-09-27 | 주식회사 포스코아이씨티 | 데이터 유실 방지 기능을 구비한 분산 병렬 처리 시스템 및 방법 |
CN112306657A (zh) * | 2020-10-30 | 2021-02-02 | 上海二三四五网络科技有限公司 | 一种基于优先级排序实现多个事件的线性倒计时的控制方法及装置 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000259429A (ja) * | 1999-03-11 | 2000-09-22 | Mitsubishi Electric Corp | Timer management device and method |
JP2001297071A (ja) | 2000-02-29 | 2001-10-26 | International Business Machines Corp (IBM) | Accurate distributed system time |
JP2009187567A (ja) * | 2000-07-31 | 2009-08-20 | Toshiba Corp | Agent system |
WO2012032572A1 (ja) * | 2010-09-08 | 2012-03-15 | Hitachi, Ltd. | Computer |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0766345B2 (ja) | 1992-12-11 | 1995-07-19 | NEC Corp | Event monitoring scheme for application programs |
JP3334695B2 (ja) | 1999-10-29 | 2002-10-15 | NEC Corp | Cluster-type computer system |
US6512990B1 (en) | 2000-01-05 | 2003-01-28 | Agilent Technologies, Inc. | Distributed trigger node |
JP2001282754A (ja) | 2000-04-03 | 2001-10-12 | NEC Engineering Ltd | Status monitoring system, status monitoring method, and recording medium therefor |
US8099452B2 (en) * | 2006-09-05 | 2012-01-17 | Microsoft Corporation | Event stream conditioning |
JP2009087190A (ja) | 2007-10-02 | 2009-04-23 | NEC Corp | Stream data analysis acceleration device, method, and program |
TW201216656A (en) * | 2010-10-01 | 2012-04-16 | Interdigital Patent Holdings | Method and apparatus for media session sharing and group synchronization of multi media streams |
US9116220B2 (en) * | 2010-12-27 | 2015-08-25 | Microsoft Technology Licensing, Llc | Time synchronizing sensor continuous and state data signals between nodes across a network |
US9497722B2 (en) * | 2011-05-02 | 2016-11-15 | Ziva Corp. | Distributed co-operating nodes using time reversal |
US9170603B2 (en) * | 2012-03-16 | 2015-10-27 | Tektronix, Inc. | Time-correlation of data |
2013
- 2013-11-13 EP: application EP13897409.2A, patent EP3070606B1 (en), status: Active
- 2013-11-13 JP: application JP2015547324, patent JP6056986B2 (ja), status: Active
- 2013-11-13 WO: application PCT/JP2013/080688, patent WO2015071978A1 (ja), status: Application Filing

2016
- 2016-05-02 US: application US15/144,385, patent US9733997B2 (en), status: Active
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2000259429A (ja) * | 1999-03-11 | 2000-09-22 | Mitsubishi Electric Corp | Timer management device and method |
JP2001297071A (ja) | 2000-02-29 | 2001-10-26 | International Business Machines Corp (IBM) | Accurate distributed system time |
JP2009187567A (ja) * | 2000-07-31 | 2009-08-20 | Toshiba Corp | Agent system |
WO2012032572A1 (ja) * | 2010-09-08 | 2012-03-15 | Hitachi, Ltd. | Computer |
Non-Patent Citations (1)
Title |
---|
See also references of EP3070606A4 |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2019512973A (ja) * | 2016-03-23 | 2019-05-16 | Foghorn Systems, Inc. | Efficient state machines for real-time dataflow programming |
JP7019589B2 (ja) | Efficient state machines for real-time dataflow programming |
US9811378B1 (en) | 2016-04-27 | 2017-11-07 | Fujitsu Limited | Information processing device, complex event processing method, and computer readable storage medium |
Also Published As
Publication number | Publication date |
---|---|
JPWO2015071978A1 (ja) | 2017-03-09 |
EP3070606B1 (en) | 2022-03-16 |
US9733997B2 (en) | 2017-08-15 |
EP3070606A4 (en) | 2016-11-30 |
EP3070606A1 (en) | 2016-09-21 |
US20160246656A1 (en) | 2016-08-25 |
JP6056986B2 (ja) | 2017-01-11 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6056986B2 (ja) | Event management program, event management method, and distributed system | |
US11494380B2 (en) | Management of distributed computing framework components in a data fabric service system | |
US11341131B2 (en) | Query scheduling based on a query-resource allocation and resource availability | |
US11921672B2 (en) | Query execution at a remote heterogeneous data store of a data fabric service | |
US11580107B2 (en) | Bucket data distribution for exporting data to worker nodes | |
US11593377B2 (en) | Assigning processing tasks in a data intake and query system | |
US11663227B2 (en) | Generating a subquery for a distinct data intake and query system | |
US20200050607A1 (en) | Reassigning processing tasks to an external storage system | |
US20190258635A1 (en) | Determining Records Generated by a Processing Task of a Query | |
US20190258636A1 (en) | Record expansion and reduction based on a processing task in a data intake and query system | |
US20190258637A1 (en) | Partitioning and reducing records at ingest of a worker node | |
JP5888336B2 (ja) | Data processing method, distributed processing system, and program | |
US10133779B2 (en) | Query hint management for a database management system | |
US10812322B2 (en) | Systems and methods for real time streaming | |
Gong et al. | RT-DBSCAN: real-time parallel clustering of spatio-temporal data using spark-streaming | |
Ottenwälder et al. | Recep: Selection-based reuse for distributed complex event processing | |
US11188532B2 (en) | Successive database record filtering on disparate database types | |
JP2009087190A (ja) | Stream data analysis acceleration device, method, and program | |
US10970143B1 (en) | Event action management mechanism | |
Davoudian et al. | A workload-adaptive streaming partitioner for distributed graph stores | |
JP7192645B2 (ja) | Information processing device, distributed processing system, and distributed processing program | |
JP2014071495A (ja) | Data management method, information processing device, and program | |
Li et al. | Client-side service for recommending rewarding routes to mobile crowdsourcing workers | |
US9870404B2 (en) | Computer system, data management method, and recording medium storing program | |
JP6155861B2 (ja) | Data management method, data management program, data management system, and data management device |
Legal Events
Code | Title | Description |
---|---|---|
121 | EP: the EPO has been informed by WIPO that EP was designated in this application | Ref document number: 13897409; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 2015547324; Country of ref document: JP; Kind code of ref document: A |
REEP | Request for entry into the European phase | Ref document number: 2013897409; Country of ref document: EP |
WWE | WIPO information: entry into national phase | Ref document number: 2013897409; Country of ref document: EP |
NENP | Non-entry into the national phase | Ref country code: DE |