WO2010041515A1 - System for accessing shared data by a plurality of application servers - Google Patents
System for accessing shared data by a plurality of application servers
- Publication number
- WO2010041515A1 (PCT/JP2009/064316)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- mode
- centralized
- distributed
- transition
- shared data
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L67/00—Network arrangements or protocols for supporting network services or applications
- H04L67/01—Protocols
- H04L67/10—Protocols in which an application is distributed across nodes in the network
- H04L67/1097—Protocols in which an application is distributed across nodes in the network for distributed storage of data in networks, e.g. transport arrangements for network file system [NFS], storage area networks [SAN] or network attached storage [NAS]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/20—Information retrieval; Database structures therefor; File system structures therefor of structured data, e.g. relational data
- G06F16/23—Updating
- G06F16/2308—Concurrency control
- G06F16/2336—Pessimistic concurrency control approaches, e.g. locking or multiple versions without time stamps
- G06F16/2343—Locking methods, e.g. distributed locking or locking implementation details
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
- G06F15/161—Computing infrastructure, e.g. computer clusters, blade chassis or hardware partitioning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F15/00—Digital computers in general; Data processing equipment in general
- G06F15/16—Combinations of two or more digital computers each having at least an arithmetic unit, a program unit and a register, e.g. for a simultaneous processing of several programs
- G06F15/163—Interprocessor communication
- G06F15/167—Interprocessor communication using a common memory, e.g. mailbox
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F16/00—Information retrieval; Database structures therefor; File system structures therefor
- G06F16/10—File systems; File servers
- G06F16/17—Details of further file system functions
- G06F16/176—Support for shared access to files; File sharing support
- G06F16/1767—Concurrency control, e.g. optimistic or pessimistic approaches
- G06F16/1774—Locking methods, e.g. locking methods for file systems allowing shared and concurrent access to files
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/46—Multiprogramming arrangements
- G06F9/52—Program synchronisation; Mutual exclusion, e.g. by means of semaphores
Definitions
- The present invention relates to a system in which a plurality of application servers access shared data, as well as to an application server in the system, a program, and a method.
- A system is known that includes a database server storing a database and a plurality of application servers that each access the database. Such a system can reduce the load on the database server by having the application servers cache database reference results.
- In a system in which application servers cache database reference results, lock control must be performed among the application servers to prevent references to cached data that is inconsistent with the database.
- As lock control methods, a distributed lock method in which each application server manages locks individually and a centralized lock method in which a lock server or the like manages locks centrally are known, for example.
- Hereinafter, lock control by the distributed lock method is referred to as cache mode, and lock control by the centralized lock method is referred to as database mode.
- In a system to which the cache mode is applied, an application server acquires a locally managed reference lock before referring to the database. When updating the database in such a system, the application server first acquires an exclusive lock managed by every other application server. In a system to which the database mode is applied, the application server acquires a reference lock or an exclusive lock managed by the lock server before referring to or updating the database.
- In the cache mode, the latency for acquiring a reference lock is short, but an exclusive lock must be acquired from each of the other application servers, which complicates processing.
- The database mode is simple because an exclusive lock need only be acquired from the single lock server, but the latency for acquiring a reference lock is long. Therefore, it is preferable to apply the cache mode to a system that runs a reference-heavy application and the database mode to a system that runs an update-heavy application.
- For example, in a system that implements banking operations, the database is referred to more often than it is updated during the daytime, while batch updates are performed at night, when customer use is relatively low.
- If the cache mode is applied to such a system, operating efficiency is good during the daytime, when references are relatively frequent, but deteriorates at night, when batch updates are performed.
- Conversely, if the database mode is applied, operating efficiency is good at night, when batch updates are performed, but deteriorates during the daytime, when references are relatively frequent. Therefore, in a system running an application whose updates are concentrated in a specific time period, it is difficult to achieve good operating efficiency at all times with either mode alone.
- An object of the present invention is therefore to provide a system, an application server, a program, and a method that can solve the above problems. This object is achieved by the combinations of features described in the independent claims.
- The dependent claims define further advantageous specific examples of the present invention.
- According to an aspect of the present invention, there is provided a system including a plurality of application servers that access shared data, and a centralized management unit that centrally manages locking of the shared data by each of the application servers, wherein each application server includes a distributed management unit that manages locking of the shared data by that application server, and a selection unit that selects either a distributed mode, in which locks are acquired from the distributed management unit, or a centralized mode, in which locks are acquired from the centralized management unit.
- Also provided are an application server in the system, and a program and a method for causing a computer to function as the application server.
- FIG. 1 shows the configuration of an information processing system 10 according to an embodiment of the present invention.
- FIG. 2 shows the configuration of each of the plurality of application servers 30.
- FIG. 3 shows an example of a schema defining the data structure of the shared data (an ITEM table).
- FIG. 4 shows an example of a reference query for referring to values in the ITEM table shown in FIG. 3.
- FIG. 5 shows an example of data cached by the cache unit 56.
- FIG. 6 shows an example of the modes selected by the selection unit 60.
- FIG. 7 shows an example of the conditions for transitioning between modes.
- FIG. 8 shows, for each mode, whether the cache may be referenced, whether the database may be referenced, and whether the database may be updated.
- FIG. 9 shows an example of a processing flow by one application server 30 (A1) and the other application servers 30 (A2 to An) in the information processing system 10.
- FIG. 10 shows a flow for determining the mode of a newly added application server 30.
- FIG. 11 shows an exemplary hardware configuration of a computer 1900 according to an embodiment of the present invention.
- FIG. 1 shows the configuration of the information processing system 10 according to the present embodiment.
- The information processing system 10 includes a database server 20, a plurality of application servers 30, and a centralized management unit 40.
- The database server 20 stores the shared data.
- In the present embodiment, the shared data is a table in a database.
- Each of the plurality of application servers 30 performs the information processing described in an application program by executing that program.
- Each of the plurality of application servers 30 accesses the shared data stored in the database server 20 via a network according to the description of the application program. That is, each application server 30 refers to and updates the shared data.
- The centralized management unit 40 centrally manages the locking of the shared data by each of the plurality of application servers 30.
- In the present embodiment, the centralized management unit 40 manages locks per record of the shared data.
- When the centralized management unit 40 receives a reference-lock acquisition request for a record from one application server 30, it grants the reference lock to that application server on the condition that no other application server 30 holds an exclusive lock on the record.
- When the centralized management unit 40 receives an exclusive-lock acquisition request for a record from one application server 30, it grants the exclusive lock to that application server on the condition that no other application server 30 holds a reference lock or an exclusive lock on the record.
- Thereby, the plurality of application servers 30 can refer to and update the shared data without inconsistency.
- Note that the database server 20 and the centralized management unit 40 may be managed by the same system.
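The per-record granting rules above can be sketched as follows. This is an illustrative, non-concurrent model only, under the assumption of one lock table keyed by record; the class and method names are hypothetical and do not appear in the patent.

```python
# Hypothetical sketch of the centralized per-record lock-granting rules:
# a reference lock is granted unless another server holds an exclusive lock,
# and an exclusive lock is granted only when no other server holds any lock.

class CentralizedLockManager:
    def __init__(self):
        self.readers = {}   # record id -> set of server ids holding reference locks
        self.writer = {}    # record id -> server id holding the exclusive lock

    def try_acquire_reference(self, record, server):
        # Grant only if no *other* server holds an exclusive lock on the record.
        holder = self.writer.get(record)
        if holder is not None and holder != server:
            return False
        self.readers.setdefault(record, set()).add(server)
        return True

    def try_acquire_exclusive(self, record, server):
        # Grant only if no other server holds a reference or exclusive lock.
        others_reading = self.readers.get(record, set()) - {server}
        holder = self.writer.get(record)
        if others_reading or (holder is not None and holder != server):
            return False
        self.writer[record] = server
        return True

    def release(self, record, server):
        self.readers.get(record, set()).discard(server)
        if self.writer.get(record) == server:
            del self.writer[record]
```

A real implementation would additionally need queuing or retry for refused requests and thread-safe state; the sketch shows only the grant conditions.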
- FIG. 2 shows the configuration of each of the plurality of application servers 30.
- Each of the plurality of application servers 30 includes an execution unit 52, an access control unit 54, a cache unit 56, a distributed management unit 58, and a selection unit 60.
- Such an application server 30 is realized by a computer executing a program.
- The execution unit 52 performs the information processing provided by the application program.
- For example, the execution unit 52 executes processing according to a given request and returns the processing result as a response.
- When executing processing that refers to or updates the shared data, the execution unit 52 issues a reference request or an update request to the database server 20 via the access control unit 54.
- For example, the execution unit 52 may issue a reference request or an update request written in SQL (Structured Query Language).
- The access control unit 54 transmits a reference or update request for the shared data issued by the execution unit 52 to the database server 20 via the network. The access control unit 54 then obtains the processing result corresponding to the transmitted request and returns it to the execution unit 52.
- When a transaction accessing the database server 20 is started, the access control unit 54 acquires a lock from the centralized management unit 40 or the distributed management unit 58 via the selection unit 60. More specifically, the access control unit 54 acquires an exclusive lock when starting a transaction that includes an update request (hereinafter, an update transaction), and acquires a reference lock when starting a transaction that does not include an update request (hereinafter, a reference transaction). The access control unit 54 does not access the shared data if the lock cannot be acquired.
- Here, a transaction is an inseparable unit that groups a plurality of operations exchanged with the database server 20. If the database server 20 is, for example, an SQL server, a transaction is the series of operations from "Begin" to "Commit" or "Rollback".
- The cache unit 56 caches the shared data referred to by the access control unit 54. The cache unit 56 may invalidate the cached shared data when the transaction ends.
- The distributed management unit 58 manages the locking of the shared data by its own application server 30. In the present embodiment, the distributed management unit 58 manages locks per record of the shared data.
- When the distributed management unit 58 receives a reference-lock acquisition request for a record from the access control unit 54, it grants the reference lock on the condition that the access control unit 54 of no other application server 30 holds an exclusive lock on the record. When the distributed management unit 58 receives an exclusive-lock acquisition request for a record from the access control unit 54, it queries each of the other application servers 30 and grants the exclusive lock on the condition that none of them holds a reference lock or an exclusive lock on the record. Thereby, the distributed management unit 58 allows the shared data to be referred to and updated without inconsistency with the other application servers 30.
- The selection unit 60 selects either the distributed mode, in which locks are acquired from the distributed management unit 58, or the centralized mode, in which locks are acquired from the centralized management unit 40.
- In the distributed mode, the selection unit 60 forwards lock acquisition requests from the access control unit 54 to the distributed management unit 58, causing the access control unit 54 to acquire locks from the distributed management unit 58.
- In the centralized mode, the selection unit 60 forwards lock acquisition requests from the access control unit 54 to the centralized management unit 40 via the network, causing the access control unit 54 to acquire locks from the centralized management unit 40.
- The selection unit 60 communicates with the selection units 60 of the other application servers 30.
- The selection unit 60 transitions to the centralized mode on the condition that at least one of the plurality of application servers 30 updates the shared data.
- The selection unit 60 transitions to the distributed mode on the condition that none of the plurality of application servers 30 is updating the shared data.
- In the distributed mode, the access control unit 54 may permit references to the shared data and prohibit updates. In the centralized mode, the access control unit 54 may permit both references and updates of the shared data.
- Because such an application server 30 acquires an exclusive lock from the centralized management unit 40 when updating the shared data, it can eliminate the exchanges with the other application servers that exclusive locking would otherwise require.
- Moreover, because the application server 30 acquires reference locks from the distributed management unit 58, the latency for acquiring a reference lock is short. The application server 30 can therefore perform lock control efficiently.
- FIG. 3 shows an example of a schema that defines the data structure of shared data (ITEM table).
- FIG. 4 shows an example of a reference query for referring to a value from the ITEM table shown in FIG.
- FIG. 5 shows an example of data cached by the cache unit 56.
- The cache unit 56 stores the results of referring to the shared data in the database server 20 using reference queries written in SQL.
- For example, the database server 20 stores, as shared data, an ITEM table with the schema shown in FIG. 3.
- When the access control unit 54 issues the reference query of FIG. 4 to the database server 20, it can obtain a query result such as that shown in FIG. 5.
- The cache unit 56 caches the query result obtained by the access control unit 54, as shown in FIG. 5.
- When the access control unit 54 again receives from the execution unit 52 a reference request for all or part of the data shown in FIG. 5, it obtains the shared data from the cached data and returns it to the execution unit 52 as the query result, instead of issuing a reference query to the database server 20. Thereby, the access control unit 54 can reduce the load on the database server 20 and further shorten the latency for referring to the shared data.
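The caching behavior described above can be sketched as follows. The `QueryCache` class, its method names, and the stand-in data are hypothetical illustrations, not part of the patent; the database access is represented by a plain callable.

```python
# Hypothetical sketch of the cache unit 56: reference results are cached per
# query string, and repeated reference requests are served from the cache
# instead of being re-issued to the database server.

class QueryCache:
    def __init__(self, run_query):
        self.run_query = run_query   # callable that actually hits the database
        self.results = {}            # query string -> cached result rows
        self.db_calls = 0

    def reference(self, query):
        if query not in self.results:
            self.db_calls += 1
            self.results[query] = self.run_query(query)
        return self.results[query]

    def invalidate(self):
        # e.g. at the end of a transaction, or on an update notification
        self.results.clear()

# Usage with a stand-in for the ITEM table of FIG. 3:
ITEM = {1: ("pen", 100), 2: ("note", 80)}
cache = QueryCache(lambda q: sorted(ITEM.items()))
first = cache.reference("SELECT * FROM ITEM")
second = cache.reference("SELECT * FROM ITEM")   # served from the cache
```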
- FIG. 6 shows an example of the mode selected by the selection unit 60.
- FIG. 7 shows an example of the transition condition when the mode is changed.
- The selection unit 60 selects one of five modes: the distributed mode; a centralized-transition mode for transitioning from the distributed mode to the centralized mode; a centralized-reference mode and a centralized-update mode, both of which are centralized modes; and a distributed-transition mode for transitioning from the centralized mode back to the distributed mode.
- The selection unit 60 transitions from the distributed mode to the centralized-transition mode when the shared data is to be updated in the distributed mode; for example, it may make this transition when execution of an update transaction is started. The selection unit 60 also transitions from the distributed mode to the centralized-transition mode on the condition that at least one other application server 30 is in the centralized-transition mode. Thereby, when any one application server 30 updates the shared data (for example, when execution of an update transaction is started), all of the application servers 30 can transition from the distributed mode to the centralized-transition mode.
- In the centralized-transition mode, the selection unit 60 transitions to the centralized-reference mode on the condition that all of the application servers 30 are in the centralized-transition mode, the centralized-reference mode, or the centralized-update mode. Accordingly, the plurality of application servers 30 can transition to the centralized mode (the centralized-reference mode or the centralized-update mode) on the condition that all of them have transitioned from the distributed mode to the centralized-transition mode. Note that the application servers 30 may transition from the centralized-transition mode to the centralized-reference mode in synchronization with one another.
- When executing an update transaction, the selection unit 60 transitions from the centralized-reference mode to the centralized-update mode.
- When the update is finished, the selection unit 60 transitions from the centralized-update mode back to the centralized-reference mode.
- For example, the selection unit 60 may transition from the centralized-update mode to the centralized-reference mode when execution of all update transactions has completed.
- In the centralized-reference mode, the selection unit 60 transitions to the distributed-transition mode on the condition that all of the application servers 30 are in the centralized-reference mode or the centralized-update mode, or that at least one application server 30 is in the distributed-transition mode. The selection unit 60 may also transition from the centralized-reference mode to the distributed-transition mode on the condition that a certain period has elapsed since the transition to the centralized-reference mode. Thus, each application server 30 can transition to the distributed-transition mode when the shared data is no longer being updated in the centralized mode.
- In the distributed-transition mode, the selection unit 60 transitions to the distributed mode on the condition that all of the application servers 30 are in the distributed-transition mode, the distributed mode, or the centralized-transition mode. Accordingly, the plurality of application servers 30 can transition to the distributed mode on the condition that all of them have transitioned from the centralized mode to the distributed-transition mode. Note that the application servers 30 may transition from the distributed-transition mode to the distributed mode in synchronization with one another.
- Alternatively, in the distributed-transition mode, the selection unit 60 may transition back to the centralized-reference mode on the condition that all of the application servers 30 are in the distributed-transition mode, the centralized-reference mode, or the centralized-update mode.
- Thereby, an application server 30 can transition from the distributed-transition mode to the centralized-update mode via the centralized-reference mode.
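The transition rules described above can be summarized in a simplified sketch. The function below models only the peer-state conditions; the update-transaction trigger and the fixed-period timeout of the centralized-reference mode are passed in as flags, and all names are hypothetical, so this is a sketch of the main transitions rather than the patent's complete rule set.

```python
# Simplified, hypothetical sketch of the five-mode transition rules.
DISTRIBUTED = "distributed"
TO_CENTRAL = "centralized-transition"
CENTRAL_REF = "centralized-reference"
CENTRAL_UPD = "centralized-update"
TO_DISTRIBUTED = "distributed-transition"

def next_mode(mode, peer_modes, starting_update=False, updates_done=False,
              ref_timeout=False):
    all_modes = set(peer_modes) | {mode}
    if mode == DISTRIBUTED:
        # Head toward the centralized mode when this server starts an update
        # transaction, or when any peer has already started the transition.
        if starting_update or TO_CENTRAL in peer_modes:
            return TO_CENTRAL
    elif mode == TO_CENTRAL:
        # Enter the centralized mode once every server has left the distributed mode.
        if all_modes <= {TO_CENTRAL, CENTRAL_REF, CENTRAL_UPD}:
            return CENTRAL_REF
    elif mode == CENTRAL_REF:
        if starting_update:
            return CENTRAL_UPD
        # The fixed-period condition is modeled by the ref_timeout flag.
        if ref_timeout or TO_DISTRIBUTED in peer_modes:
            return TO_DISTRIBUTED
    elif mode == CENTRAL_UPD:
        if updates_done:
            return CENTRAL_REF
    elif mode == TO_DISTRIBUTED:
        # Return to the distributed mode once every server has left the centralized mode.
        if all_modes <= {TO_DISTRIBUTED, DISTRIBUTED}:
            return DISTRIBUTED
    return mode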
- FIG. 8 shows an example of whether the cache can be referenced, whether the database can be referenced, and whether the database can be updated in each mode.
- the selection unit 60 acquires a lock from the distribution management unit 58 in the distribution mode.
- the selection unit 60 acquires a lock from the central management unit 40 in the centralized transition mode, the centralized reference mode, the centralized update mode, and the distributed transition mode.
- At the transition from the distributed mode to the centralized-transition mode, the selection unit 60 acquires a lock from the centralized management unit 40 and then releases the lock it acquired from the distributed management unit 58.
- Conversely, at the transition from the distributed-transition mode to the distributed mode, the selection unit 60 acquires a lock from the distributed management unit 58 and then releases the lock it acquired from the centralized management unit 40. Thereby, the selection unit 60 can avoid inconsistency in the shared data when switching the source from which it acquires locks.
- In the distributed mode, the access control unit 54 permits references both to the shared data cached in the cache unit 56 and to the shared data stored in the database server 20. That is, in the distributed mode, the access control unit 54 refers to the shared data through the cache unit 56. Thereby, the access control unit 54 can reduce the load on the database server 20 and speed up access to the shared data. Furthermore, in the distributed mode, the access control unit 54 prohibits updates to the shared data stored in the database server 20. Thereby, it can simplify the distributed lock control by eliminating the process of acquiring exclusive locks from the other application servers 30.
- In the centralized-update mode, the access control unit 54 prohibits references to the shared data cached in the cache unit 56 and permits references to the shared data stored in the database server 20. That is, in the centralized-update mode, the access control unit 54 refers to the shared data without using the cache unit 56. Furthermore, in the centralized-update mode, the access control unit 54 permits updates to the shared data stored in the database server 20. By prohibiting cache access in the centralized-update mode, the access control unit 54 can prevent the cached data from becoming inconsistent.
- In the centralized-transition mode, the centralized-reference mode, and the distributed-transition mode, the access control unit 54 prohibits references to the shared data cached in the cache unit 56 and permits references to the shared data stored in the database server 20. That is, in these modes, the access control unit 54 refers to the shared data without using the cache unit 56. Furthermore, in these modes, the access control unit 54 prohibits updates to the shared data stored in the database server 20. By prohibiting cache access during the transitions from the distributed mode to the centralized-update mode and from the centralized-update mode back to the distributed mode, the access control unit 54 can prevent the cached data from becoming inconsistent.
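The per-mode access rules of FIG. 8, as described above, can be restated as a lookup table. The mode strings and helper functions below are illustrative names, not from the patent.

```python
# Per-mode access rules of FIG. 8:
# mode -> (may read cache, may read database, may update database)
PERMISSIONS = {
    "distributed":            (True,  True,  False),
    "centralized-transition": (False, True,  False),
    "centralized-reference":  (False, True,  False),
    "centralized-update":     (False, True,  True),
    "distributed-transition": (False, True,  False),
}

def may_read_cache(mode):
    return PERMISSIONS[mode][0]

def may_read_db(mode):
    return PERMISSIONS[mode][1]

def may_update_db(mode):
    return PERMISSIONS[mode][2]
```

The table makes the two invariants visible: the database may be read in every mode, while the cache is readable only in the distributed mode and the database is updatable only in the centralized-update mode.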
- The selection unit 60 may invalidate all of the shared data cached in the cache unit 56 at the transition from the distributed-transition mode to the distributed mode. Alternatively, at that transition, the selection unit 60 may receive notifications of the data updated by any of the application servers 30 and selectively invalidate only that data among the shared data cached in the cache unit 56. Either way, the selection unit 60 can eliminate inconsistency between the shared data cached in the cache unit 56 and the shared data stored in the database server 20.
- FIG. 9 shows an example of a processing flow by one application server 30 (A1) and a plurality of other application servers 30 (A2 to An) in the information processing system 10.
- When the application servers 30 (A1 to An) are in the distributed mode and one application server 30 (A1) starts executing an update transaction, that application server 30 (A1) and the other application servers 30 (A2 to An) operate according to the flow of FIG. 9.
- First, the application server 30 (A1) transitions to the centralized-transition mode (S101, S102, S103).
- Each of the other application servers 30 (A2 to An) receives the notification (S103A) from the application server 30 (A1) and recognizes that the application server 30 (A1) is in the centralized-transition mode; on that condition, each of them also transitions to the centralized-transition mode.
- As a result, all of the application servers 30 (A1 to An) enter the centralized-transition mode.
- Each of the application servers 30 (A1 to An) receives the notifications (S205A) from the other application servers 30, recognizes that all of the application servers 30 (A1 to An) are in the centralized-transition mode (S106, S206), and transitions to the centralized-reference mode (S107, S207).
- Note that the application servers 30 (A1 to An) may transition from the centralized-transition mode to the centralized-reference mode in synchronization with one another.
- Each of the other application servers 30 (A2 to An) transitions to the distributed-transition mode (S213) after a certain period has elapsed since its transition to the centralized-reference mode (S212).
- Meanwhile, the application server 30 (A1) transitions from the centralized-reference mode to the centralized-update mode (S108) and updates the shared data (S109). When all of its update transactions have completed (S110), the application server 30 (A1) transitions from the centralized-update mode back to the centralized-reference mode (S111), and then transitions to the distributed-transition mode (S113) after a certain period has elapsed since that transition (S112). As a result, all of the application servers 30 (A1 to An) are in the distributed-transition mode.
- Each of the application servers 30 (A1 to An) receives the notifications (S113A, S213A) from the other application servers 30, recognizes that all of the application servers 30 (A1 to An) are in the distributed-transition mode (S114, S214), and transitions to the distributed mode (S115, S215).
- Note that the application servers 30 (A1 to An) may transition from the distributed-transition mode to the distributed mode in synchronization with one another.
- In this way, when an update transaction is started in any one application server 30 while in the distributed mode, each of the application servers 30 can transition from the distributed mode to the centralized-reference mode via the centralized-transition mode. The one application server 30 can then transition from the centralized-reference mode to the centralized-update mode and execute the update. When the update in that application server 30 has completed, each of the application servers 30 can transition from the centralized-reference mode back to the distributed mode via the distributed-transition mode.
- FIG. 10 shows a flow for determining the mode of a new application server 30 when it is added to the information processing system 10.
- A new application server 30 can be added to the information processing system 10.
- The selection unit 60 of an application server 30 newly added to the information processing system 10 selects its mode according to the decision flow shown in FIG. 10.
- First, the selection unit 60 determines whether at least one other application server 30 is in the centralized-transition mode (S301).
- The selection unit 60 transitions to the centralized-transition mode on the condition that at least one other application server 30 is in the centralized-transition mode (Yes in S301) (S302).
- When no other application server 30 is in the centralized-transition mode (No in S301), the selection unit 60 next determines whether at least one other application server 30 is in the distributed mode (S303). The selection unit 60 transitions to the distributed mode on the condition that no other application server 30 is in the centralized-transition mode and at least one other application server 30 is in the distributed mode (Yes in S303) (S304).
- Otherwise, the selection unit 60 determines whether at least one other application server 30 is in the distributed-transition mode (S305).
- The selection unit 60 transitions to the distributed-transition mode on the condition that no other application server 30 is in the centralized-transition mode or the distributed mode and at least one other application server 30 is in the distributed-transition mode (Yes in S305) (S306).
- The selection unit 60 transitions to the centralized-reference mode on the condition that no other application server 30 is in the centralized-transition mode, the distributed mode, or the distributed-transition mode (No in S305) (S307).
- Thereby, the newly added application server 30 can access the shared data while maintaining consistency with the other application servers 30.
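The decision flow of FIG. 10 described above can be sketched as a single priority check over the modes of the existing servers; the function name and mode strings are hypothetical.

```python
# Hypothetical sketch of the FIG. 10 decision flow: a newly added application
# server chooses its initial mode from the modes of the existing servers.
def initial_mode(other_modes):
    if "centralized-transition" in other_modes:   # Yes in S301 -> S302
        return "centralized-transition"
    if "distributed" in other_modes:              # Yes in S303 -> S304
        return "distributed"
    if "distributed-transition" in other_modes:   # Yes in S305 -> S306
        return "distributed-transition"
    return "centralized-reference"                # No in S305 -> S307
```

The checks are ordered so that the new server always joins the most conservative transition already in progress, which is what keeps it consistent with the existing servers.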
- FIG. 11 shows an example of a hardware configuration of a computer 1900 according to this embodiment.
- The computer 1900 according to this embodiment includes a CPU peripheral section having a CPU 2000, a RAM 2020, a graphic controller 2075, and a display device 2080 interconnected by a host controller 2082; an input/output section having a communication interface 2030, a hard disk drive 2040, and a CD-ROM drive 2060 connected to the host controller 2082 by an input/output controller 2084; and a legacy input/output section having a ROM 2010, a flexible disk drive 2050, and an input/output chip 2070 connected to the input/output controller 2084.
- the host controller 2082 connects the RAM 2020 to the CPU 2000 and the graphic controller 2075 that access the RAM 2020 at a high transfer rate.
- the CPU 2000 operates based on programs stored in the ROM 2010 and the RAM 2020 and controls each unit.
- the graphic controller 2075 acquires image data generated by the CPU 2000 or the like on a frame buffer provided in the RAM 2020 and displays it on the display device 2080.
- the graphic controller 2075 may include a frame buffer for storing image data generated by the CPU 2000 or the like.
- the input / output controller 2084 connects the host controller 2082 to the communication interface 2030, the hard disk drive 2040, and the CD-ROM drive 2060, which are relatively high-speed input / output devices.
- the communication interface 2030 communicates with other devices via a network.
- the hard disk drive 2040 stores programs and data used by the CPU 2000 in the computer 1900.
- the CD-ROM drive 2060 reads a program or data from the CD-ROM 2095 and provides it to the hard disk drive 2040 via the RAM 2020.
- the ROM 2010, the flexible disk drive 2050, and the input / output chip 2070, which are relatively low-speed input / output devices, are connected to the input / output controller 2084.
- the ROM 2010 stores a boot program that the computer 1900 executes at startup and / or a program that depends on the hardware of the computer 1900.
- the flexible disk drive 2050 reads a program or data from the flexible disk 2090 and provides it to the hard disk drive 2040 via the RAM 2020.
- the input / output chip 2070 connects the flexible disk drive 2050 to the input / output controller 2084, and also connects various input / output devices to the input / output controller 2084 via, for example, a parallel port, a serial port, a keyboard port, a mouse port, and the like.
- the program provided to the hard disk drive 2040 via the RAM 2020 is stored in a recording medium such as the flexible disk 2090, the CD-ROM 2095, or an IC card and provided by the user.
- the program is read from the recording medium, installed in the hard disk drive 2040 in the computer 1900 via the RAM 2020, and executed by the CPU 2000.
- the program installed in the computer 1900 and causing the computer 1900 to function as the application server 30 includes an execution module, an access control module, a cache module, a distributed management module, and a selection module. These programs or modules work with the CPU 2000 or the like to cause the computer 1900 to function as the execution unit 52, the access control unit 54, the cache unit 56, the distribution management unit 58, and the selection unit 60, respectively.
- the information processing described in these programs is read by the computer 1900 and functions as the execution unit 52, the access control unit 54, the cache unit 56, the distribution management unit 58, and the selection unit 60, which are specific means in which the software and the various hardware resources described above cooperate. With these specific means, a specific application server 30 according to the intended use is constructed.
- the CPU 2000 executes a communication program loaded on the RAM 2020 and, based on the processing content described in the communication program, instructs the communication interface 2030 to perform communication processing.
- the communication interface 2030 reads transmission data stored in a transmission buffer area or the like provided on a storage device such as the RAM 2020, the hard disk drive 2040, the flexible disk 2090, or the CD-ROM 2095, and sends it to the network, or writes reception data received from the network into a reception buffer area or the like provided on the storage device.
- the communication interface 2030 may transfer transmission / reception data to / from the storage device by a DMA (direct memory access) method. Alternatively, the CPU 2000 may transfer the transmission / reception data by reading the data from the storage device or the communication interface 2030 as the transfer source and writing the data to the communication interface 2030 or the storage device as the transfer destination.
- the CPU 2000 reads all or a necessary portion of a file or database stored in an external storage device such as the hard disk drive 2040, the CD-ROM drive 2060 (CD-ROM 2095), or the flexible disk drive 2050 (flexible disk 2090) into the RAM 2020 by DMA transfer or the like, and performs various processes on the data on the RAM 2020. Then, the CPU 2000 writes the processed data back to the external storage device by DMA transfer or the like.
- the RAM 2020 and the external storage device are collectively referred to as a memory, a storage unit, or a storage device.
- the CPU 2000 can also hold a part of the RAM 2020 in a cache memory and perform reading and writing on the cache memory. Even in such a form, the cache memory bears a part of the function of the RAM 2020; therefore, in the present embodiment, the cache memory is also regarded as included in the RAM 2020, the memory, and / or the storage device unless otherwise indicated.
- the CPU 2000 performs, on the data read from the RAM 2020, the various processes described in the present embodiment that are specified by the instruction sequence of the program, such as various operations, information processing, condition determination, and information search / replacement, and writes the result back to the RAM 2020. For example, when performing condition determination, the CPU 2000 determines whether each of the various variables shown in the present embodiment satisfies a condition such as larger than, smaller than, not less than, not more than, or equal to another variable or constant; when the condition is satisfied (or not satisfied), the program branches to a different instruction sequence or calls a subroutine.
- the CPU 2000 can search for information stored in a file or database in the storage device. For example, when a plurality of entries in which an attribute value of a second attribute is associated with an attribute value of a first attribute are stored in the storage device, the CPU 2000 searches the plurality of entries stored in the storage device for an entry whose attribute value of the first attribute matches a specified condition, and reads the attribute value of the second attribute stored in that entry, thereby obtaining the attribute value of the second attribute associated with the first attribute that satisfies the predetermined condition.
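The attribute search described above can be sketched as follows, assuming each entry is a (first attribute, second attribute) pair; the names `entries` and `lookup_second` are illustrative, not from the embodiment:

```python
def lookup_second(entries, matches):
    """Return the second-attribute values of every entry whose
    first attribute satisfies the given condition."""
    return [second for first, second in entries if matches(first)]

# Entries associating a second attribute (a number) with a first
# attribute (a key); search for entries whose key equals "apple".
entries = [("apple", 100), ("banana", 200), ("apple", 300)]
print(lookup_second(entries, lambda k: k == "apple"))  # → [100, 300]
```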
- the program or module shown above may be stored in an external recording medium.
- an optical recording medium such as DVD or CD
- a magneto-optical recording medium such as MO
- a tape medium, a semiconductor memory such as an IC card, and the like
- a storage device such as a hard disk or RAM provided in a server system connected to a dedicated communication network or the Internet may be used as a recording medium, and the program may be provided to the computer 1900 via the network.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Software Systems (AREA)
- General Physics & Mathematics (AREA)
- Databases & Information Systems (AREA)
- Data Mining & Analysis (AREA)
- Computer Hardware Design (AREA)
- Signal Processing (AREA)
- Computer Networks & Wireless Communication (AREA)
- Mathematical Physics (AREA)
- Information Retrieval, Db Structures And Fs Structures Therefor (AREA)
Abstract
Description
20 Database server
30 Application server
40 Centralized management unit
52 Execution unit
54 Access control unit
56 Cache unit
58 Distribution management unit
60 Selection unit
1900 Computer
2000 CPU
2010 ROM
2020 RAM
2030 Communication interface
2040 Hard disk drive
2050 Flexible disk drive
2060 CD-ROM drive
2070 Input / output chip
2075 Graphic controller
2080 Display device
2082 Host controller
2084 Input / output controller
2090 Flexible disk
2095 CD-ROM
Claims (16)
- A system comprising: a plurality of application servers that access shared data; and a centralized management unit that centrally manages locking of the shared data by each of the plurality of application servers; wherein each of the plurality of application servers has: a distributed management unit that manages locking of the shared data by that application server; and a selection unit that selects either a distributed mode, in which a lock is acquired from the distributed management unit, or a centralized mode, in which a lock is acquired from the centralized management unit.
- The system according to claim 1, further comprising an access control unit that prohibits updating of the shared data in the distributed mode.
- The system according to claim 2, wherein the selection unit transitions to the distributed mode on condition that none of the plurality of application servers is updating the shared data, and transitions to the centralized mode on condition that at least one of the plurality of application servers updates the shared data.
- The system according to claim 3, wherein the selection unit: transitions from the distributed mode to a centralized transition mode when updating the shared data in the distributed mode; transitions from the distributed mode to the centralized transition mode on condition that, in the distributed mode, at least one other application server is in the centralized transition mode; and transitions from the centralized transition mode to the centralized mode on condition that, in the centralized transition mode, all of the application servers are in either the centralized transition mode or the centralized mode.
- The system according to claim 4, wherein the selection unit acquires a lock from the centralized management unit in the centralized transition mode, and the access control unit prohibits updating of the shared data in the centralized transition mode.
- The system according to claim 5, wherein the selection unit: transitions from the centralized transition mode to a centralized reference mode, which is one form of the centralized mode, on condition that, in the centralized transition mode, all of the application servers are in either the centralized transition mode or the centralized mode; transitions from the centralized reference mode to a centralized update mode, which is another form of the centralized mode, when updating the shared data in the centralized reference mode; and transitions from the centralized update mode back to the centralized reference mode when updating of the shared data is finished in the centralized update mode.
- The system according to claim 6, wherein the access control unit prohibits updating of the shared data in the centralized reference mode.
- The system according to claim 7, wherein the selection unit: transitions from the centralized reference mode to a distributed transition mode on condition that, in the centralized reference mode, all of the application servers are in either the centralized reference mode or the centralized update mode, or at least one of the application servers is in the distributed transition mode; and transitions from the distributed transition mode to the distributed mode on condition that, in the distributed transition mode, all of the application servers are in any of the distributed transition mode, the distributed mode, or the centralized transition mode.
- The system according to claim 8, wherein the selection unit acquires a lock from the centralized management unit in the distributed transition mode, and the access control unit prohibits updating of the shared data in the distributed transition mode.
- The system according to claim 9, wherein a new application server can be added to the system, and the selection unit of the newly added application server: transitions to the centralized transition mode on condition that at least one other application server is in the centralized transition mode; transitions to the distributed mode on condition that none of the other application servers is in the centralized transition mode and at least one other application server is in the distributed mode; transitions to the distributed transition mode on condition that none of the other application servers is in either the centralized transition mode or the distributed mode and at least one other application server is in the distributed transition mode; and transitions to the centralized reference mode on condition that none of the other application servers is in any of the centralized transition mode, the distributed mode, or the distributed transition mode.
- The system according to any one of claims 4 to 7, wherein, in the transition from the distributed mode to the centralized transition mode, the selection unit acquires a lock from the centralized management unit and releases the lock acquired from the distributed management unit.
- The system according to any one of claims 1 to 11, wherein each of the plurality of application servers further has: an access control unit that accesses the shared data; and a cache unit that caches the shared data; wherein the access control unit refers to the shared data using the cache unit in the distributed mode, and refers to the shared data without using the cache unit in the centralized mode; and the selection unit invalidates the shared data cached in the cache unit upon transition to the distributed mode.
- A system comprising: a plurality of application servers that access shared data; and a centralized management unit that centrally manages locking of the shared data by each of the plurality of application servers; wherein each of the plurality of application servers has: a distributed management unit that manages locking of the shared data by that application server; and a selection unit that selects either a distributed mode, in which a lock is acquired from the distributed management unit, or any one of a centralized transition mode, a centralized reference mode, a centralized update mode, and a distributed transition mode, in which a lock is acquired from the centralized management unit; and wherein the selection unit: transitions from the distributed mode to the centralized transition mode when updating the shared data in the distributed mode; transitions from the distributed mode to the centralized transition mode on condition that, in the distributed mode, at least one other application server is in the centralized transition mode; transitions from the centralized transition mode to the centralized reference mode on condition that, in the centralized transition mode, all of the application servers are in any of the centralized transition mode, the centralized reference mode, or the centralized update mode; transitions from the centralized reference mode to the centralized update mode when updating the shared data in the centralized reference mode; transitions from the centralized update mode back to the centralized reference mode when updating of the shared data is finished in the centralized update mode; transitions from the centralized reference mode to the distributed transition mode on condition that, in the centralized reference mode, all of the application servers are in either the centralized reference mode or the centralized update mode, or at least one of the application servers is in the distributed transition mode; and transitions from the distributed transition mode to the distributed mode on condition that, in the distributed transition mode, all of the application servers are in any of the distributed transition mode, the distributed mode, or the centralized transition mode.
- An application server in a system comprising a plurality of application servers that access shared data and a centralized management unit that centrally manages locking of the shared data by each of the plurality of application servers, the application server having: a distributed management unit that manages locking of the shared data by the application server; and a selection unit that selects either a distributed mode, in which a lock is acquired from the distributed management unit, or a centralized mode, in which a lock is acquired from the centralized management unit.
- A program for causing a computer to function as the application server in a system comprising a plurality of application servers that access shared data and a centralized management unit that centrally manages locking of the shared data by each of the plurality of application servers, the program causing the computer to function as: a distributed management unit that manages locking of the shared data by the application server; and a selection unit that selects either a distributed mode, in which a lock is acquired from the distributed management unit, or a centralized mode, in which a lock is acquired from the centralized management unit.
- A method of causing a computer to function as the application server in a system comprising a plurality of application servers that access shared data and a centralized management unit that centrally manages locking of the shared data by each of the plurality of application servers, the method comprising: a step of causing the computer to function as a distributed management unit that manages locking of the shared data by the application server; and a step of causing the computer to function as a selection unit that selects either a distributed mode, in which a lock is acquired from the distributed management unit, or a centralized mode, in which a lock is acquired from the centralized management unit.
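The mode-transition rules recited in the claims above can be summarized as a single transition function. The following is a minimal sketch under assumed names (`Mode`, `next_mode`); it illustrates the rules of claim 13 and is not part of the claimed subject matter:

```python
from enum import Enum, auto

class Mode(Enum):
    DIST = auto()        # distributed mode
    DIST_TRANS = auto()  # distributed transition mode
    CENT_TRANS = auto()  # centralized transition mode
    CENT_REF = auto()    # centralized reference mode
    CENT_UPD = auto()    # centralized update mode

def next_mode(current, all_modes, wants_update=False, update_done=False):
    """Next mode of one server, given the modes of all servers.
    Returns the current mode when no transition condition holds."""
    if current is Mode.DIST:
        # Move toward centralized when this server wants to update,
        # or when any server has already started the transition.
        if wants_update or Mode.CENT_TRANS in all_modes:
            return Mode.CENT_TRANS
    elif current is Mode.CENT_TRANS:
        # Enter centralized reference once every server has left
        # the distributed side.
        if all(m in (Mode.CENT_TRANS, Mode.CENT_REF, Mode.CENT_UPD)
               for m in all_modes):
            return Mode.CENT_REF
    elif current is Mode.CENT_REF:
        if wants_update:
            return Mode.CENT_UPD
        # Move back toward distributed when all servers are merely
        # referencing/updating, or one already started the transition.
        if (all(m in (Mode.CENT_REF, Mode.CENT_UPD) for m in all_modes)
                or Mode.DIST_TRANS in all_modes):
            return Mode.DIST_TRANS
    elif current is Mode.CENT_UPD:
        if update_done:
            return Mode.CENT_REF
    elif current is Mode.DIST_TRANS:
        # Enter distributed once every server has left the
        # centralized reference/update states.
        if all(m in (Mode.DIST_TRANS, Mode.DIST, Mode.CENT_TRANS)
               for m in all_modes):
            return Mode.DIST
    return current
```

The sketch makes the symmetry of the protocol visible: the two transition modes act as barriers, so no server reads through its cache (distributed side) while another server is still updating through the centralized lock manager.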
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP09819051.5A EP2352090B1 (en) | 2008-10-06 | 2009-08-13 | System accessing shared data by a plurality of application servers |
JP2010532855A JP5213077B2 (ja) | 2008-10-06 | 2009-08-13 | 複数のアプリケーションサーバにより共有データをアクセスするシステム |
CN200980138187.9A CN102165420B (zh) | 2008-10-06 | 2009-08-13 | 通过多个应用程序服务器访问共享数据的系统 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2008259926 | 2008-10-06 | ||
JP2008-259926 | 2008-10-06 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010041515A1 true WO2010041515A1 (ja) | 2010-04-15 |
Family
ID=42100471
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2009/064316 WO2010041515A1 (ja) | 2008-10-06 | 2009-08-13 | 複数のアプリケーションサーバにより共有データをアクセスするシステム |
Country Status (6)
Country | Link |
---|---|
US (2) | US8589438B2 (ja) |
EP (1) | EP2352090B1 (ja) |
JP (1) | JP5213077B2 (ja) |
KR (1) | KR20110066940A (ja) |
CN (1) | CN102165420B (ja) |
WO (1) | WO2010041515A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013046883A1 (ja) * | 2011-09-30 | 2013-04-04 | インターナショナル・ビジネス・マシーンズ・コーポレーション | トランザクション処理システム、方法及びプログラム |
Families Citing this family (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP5213077B2 (ja) | 2008-10-06 | 2013-06-19 | インターナショナル・ビジネス・マシーンズ・コーポレーション | 複数のアプリケーションサーバにより共有データをアクセスするシステム |
CN102238203A (zh) * | 2010-04-23 | 2011-11-09 | 中兴通讯股份有限公司 | 一种实现物联网业务的方法及系统 |
US8620998B2 (en) * | 2010-09-11 | 2013-12-31 | Steelcloud, Inc. | Mobile application deployment for distributed computing environments |
US8484649B2 (en) | 2011-01-05 | 2013-07-09 | International Business Machines Corporation | Amortizing costs of shared scans |
US9088569B2 (en) * | 2011-05-12 | 2015-07-21 | International Business Machines Corporation | Managing access to a shared resource using client access credentials |
GB2503266A (en) * | 2012-06-21 | 2013-12-25 | Ibm | Sharing aggregated cache hit and miss data in a storage area network |
US20140280347A1 (en) * | 2013-03-14 | 2014-09-18 | Konica Minolta Laboratory U.S.A., Inc. | Managing Digital Files with Shared Locks |
KR101645163B1 (ko) * | 2014-11-14 | 2016-08-03 | 주식회사 인프라웨어 | 분산 시스템에서의 데이터베이스 동기화 방법 |
CN111868707A (zh) * | 2018-03-13 | 2020-10-30 | 谷歌有限责任公司 | 在关系数据库的主键中包括事务提交时间戳 |
US11176121B2 (en) * | 2019-05-28 | 2021-11-16 | International Business Machines Corporation | Global transaction serialization |
US11032361B1 (en) * | 2020-07-14 | 2021-06-08 | Coupang Corp. | Systems and methods of balancing network load for ultra high server availability |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS62140159A (ja) * | 1985-12-16 | 1987-06-23 | Hitachi Ltd | 共用データ管理方法 |
JPH08202567A (ja) * | 1995-01-25 | 1996-08-09 | Hitachi Ltd | システム間ロック処理方法 |
JP2005534081A (ja) * | 2001-09-21 | 2005-11-10 | ポリサーブ・インコーポレーテッド | 共有ストレージを備えたマルチノード環境のためのシステムおよび方法 |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6516351B2 (en) * | 1997-12-05 | 2003-02-04 | Network Appliance, Inc. | Enforcing uniform file-locking for diverse file-locking protocols |
US7200623B2 (en) * | 1998-11-24 | 2007-04-03 | Oracle International Corp. | Methods to perform disk writes in a distributed shared disk system needing consistency across failures |
US7406473B1 (en) * | 2002-01-30 | 2008-07-29 | Red Hat, Inc. | Distributed file system using disk servers, lock servers and file servers |
US7240058B2 (en) * | 2002-03-01 | 2007-07-03 | Sun Microsystems, Inc. | Lock mechanism for a distributed data system |
US6732171B2 (en) * | 2002-05-31 | 2004-05-04 | Lefthand Networks, Inc. | Distributed network storage system with virtualization |
US20050289143A1 (en) * | 2004-06-23 | 2005-12-29 | Exanet Ltd. | Method for managing lock resources in a distributed storage system |
US20060271930A1 (en) * | 2005-05-25 | 2006-11-30 | Letizi Orion D | Clustered object state using synthetic transactions |
US8103642B2 (en) * | 2006-02-03 | 2012-01-24 | Oracle International Corporation | Adaptive region locking |
US20080243847A1 (en) * | 2007-04-02 | 2008-10-02 | Microsoft Corporation | Separating central locking services from distributed data fulfillment services in a storage system |
JP5213077B2 (ja) | 2008-10-06 | 2013-06-19 | インターナショナル・ビジネス・マシーンズ・コーポレーション | 複数のアプリケーションサーバにより共有データをアクセスするシステム |
-
2009
- 2009-08-13 JP JP2010532855A patent/JP5213077B2/ja active Active
- 2009-08-13 EP EP09819051.5A patent/EP2352090B1/en active Active
- 2009-08-13 WO PCT/JP2009/064316 patent/WO2010041515A1/ja active Application Filing
- 2009-08-13 CN CN200980138187.9A patent/CN102165420B/zh active Active
- 2009-08-13 KR KR20117008669A patent/KR20110066940A/ko not_active Application Discontinuation
- 2009-10-01 US US12/571,496 patent/US8589438B2/en not_active Expired - Fee Related
-
2013
- 2013-11-18 US US14/082,371 patent/US9031923B2/en active Active
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS62140159A (ja) * | 1985-12-16 | 1987-06-23 | Hitachi Ltd | 共用データ管理方法 |
JPH08202567A (ja) * | 1995-01-25 | 1996-08-09 | Hitachi Ltd | システム間ロック処理方法 |
JP2005534081A (ja) * | 2001-09-21 | 2005-11-10 | ポリサーブ・インコーポレーテッド | 共有ストレージを備えたマルチノード環境のためのシステムおよび方法 |
Non-Patent Citations (1)
Title |
---|
See also references of EP2352090A4 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2013046883A1 (ja) * | 2011-09-30 | 2013-04-04 | インターナショナル・ビジネス・マシーンズ・コーポレーション | トランザクション処理システム、方法及びプログラム |
GB2511222A (en) * | 2011-09-30 | 2014-08-27 | Ibm | Transaction processing system, method and program |
US8930323B2 (en) | 2011-09-30 | 2015-01-06 | International Business Machines Corporation | Transaction processing system, method, and program |
JPWO2013046883A1 (ja) * | 2011-09-30 | 2015-03-26 | インターナショナル・ビジネス・マシーンズ・コーポレーションInternational Business Machines Corporation | トランザクション処理システム、方法及びプログラム |
Also Published As
Publication number | Publication date |
---|---|
JPWO2010041515A1 (ja) | 2012-03-08 |
JP5213077B2 (ja) | 2013-06-19 |
KR20110066940A (ko) | 2011-06-17 |
EP2352090B1 (en) | 2019-09-25 |
US20100106697A1 (en) | 2010-04-29 |
US9031923B2 (en) | 2015-05-12 |
US8589438B2 (en) | 2013-11-19 |
CN102165420B (zh) | 2014-07-16 |
US20140082127A1 (en) | 2014-03-20 |
EP2352090A4 (en) | 2015-05-06 |
CN102165420A (zh) | 2011-08-24 |
EP2352090A1 (en) | 2011-08-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5213077B2 (ja) | 複数のアプリケーションサーバにより共有データをアクセスするシステム | |
US7421562B2 (en) | Database system providing methodology for extended memory support | |
EP3170106B1 (en) | High throughput data modifications using blind update operations | |
CN1755635B (zh) | 事务型存储器访问的混合硬件软件实现 | |
US8108587B2 (en) | Free-space reduction in cached database pages | |
US7831772B2 (en) | System and methodology providing multiple heterogeneous buffer caches | |
US20180210908A1 (en) | Accessing data entities | |
JP2505939B2 (ja) | デ―タのキャストアウトを制御する方法 | |
US9632944B2 (en) | Enhanced transactional cache | |
US11157466B2 (en) | Data templates associated with non-relational database systems | |
JP2003006036A (ja) | クラスタ化したアプリケーションサーバおよびデータベース構造を持つWebシステム | |
WO2022095366A1 (zh) | 基于Redis的数据读取方法、装置、设备及可读存储介质 | |
US11880318B2 (en) | Local page writes via pre-staging buffers for resilient buffer pool extensions | |
CN112540982A (zh) | 具有可更新逻辑表指针的虚拟数据库表 | |
US20060224949A1 (en) | Exclusion control method and information processing apparatus | |
JP2009265840A (ja) | データベースのキャッシュシステム | |
CN103353891A (zh) | 数据库管理系统及其数据处理方法 | |
CN110019113B (zh) | 一种数据库的业务处理方法及数据库服务器 | |
US11940994B2 (en) | Mechanisms for maintaining chains without locks | |
Pollack et al. | Indexing Memory-Optimized Tables | |
WO2023075910A1 (en) | Local page writes via pre-staging buffers for resilient buffer pool extensions | |
JP2002063055A (ja) | 書き込み遅延データベース管理方式及びシステム | |
Fritchey et al. | Memory-Optimized OLTP Tables and Procedures | |
CN116257519A (zh) | 一种数据读写的方法、装置、计算机设备及存储介质 | |
JPH0683702A (ja) | データ転送及びデータ除去のための制御方法並びにコンピュータ・システム |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 200980138187.9 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 09819051 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2010532855 Country of ref document: JP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
ENP | Entry into the national phase |
Ref document number: 20117008669 Country of ref document: KR Kind code of ref document: A |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2953/CHENP/2011 Country of ref document: IN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2009819051 Country of ref document: EP |