JP2003076671A - Fault containment and error handling in partitioned system with shared resources - Google Patents

Fault containment and error handling in partitioned system with shared resources

Info

Publication number
JP2003076671A
JP2003076671A (application JP2002190699A)
Authority
JP
Japan
Prior art keywords
domain
resource
manager
allocated
fault
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
JP2002190699A
Other languages
Japanese (ja)
Other versions
JP4213415B2 (en)
Inventor
Jeremy J Farrell
Kazunori Masuyama
Sudheer Miryala
Hitoshi Oi
N Conway Patrick
Takeshi Shimizu
Yasushi Umezawa
Original Assignee
Fujitsu Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority to US60/301,969 (provisional)
Priority to US10/150,618 (granted as US7380001B2)
Application filed by Fujitsu Ltd
Publication of JP2003076671A
Application granted
Publication of JP4213415B2
Legal status: Active
Anticipated expiration

Abstract

(57) [Summary] [PROBLEM] To provide a system and method for containing errors within a domain and performing error handling in a partitioned computer system. [SOLUTION] A computer system includes a system manager having read and write access to a resource definition table. In the event of a fault in a domain, the system manager quiesces the system, identifies the allocated resources associated with the faulty domain, identifies the non-faulty domains, and brings the non-faulty domains out of the quiescent state, thereby containing the fault within the faulty domain. The system manager further deallocates the resources allocated to the faulty domain so that the surviving domains can use them, and handles the error within the faulty domain.

Description

Detailed Description of the Invention

[0001]

BACKGROUND OF THE INVENTION 1. Field of the Invention The present invention relates generally to the partitioning of a computer system into domains, and more particularly to fault containment and error handling in partitioned computer systems having shared resources.

[0002]

Multi-node computer systems are often partitioned into domains, each domain functioning as an independent machine with its own address space. Partitioning allows computer system resources to be allocated effectively to different tasks. Domains in a partitioned computer system can dynamically share resources. When a critical packet-processing fault occurs in a domain, however, the system cannot continue processing, and the shared resources are left in an intermediate state. To restart, the faulty domain and all of the shared resources must be reset; consequently, all domains must be reset, even if the other domains are working correctly.

[0003]

One approach to error containment and recovery in a partitioned system is to dedicate resources to each domain, so that if one domain fails, the non-faulty domains are unaffected. However, performing error containment and recovery with dedicated per-domain resources requires more resources than sharing them, because the amount of resources must match the maximum demand of every domain in the system.

[0004]

Therefore, it would be desirable to provide a mechanism by which a system contains an error within a faulty domain so that the other, non-faulty domains are not affected.

[0005]

The present invention is a system and method for fault containment and error handling in a logically partitioned computer system having multiple computer nodes coupled by an interconnect.

[0006]

The system includes at least one resource that is dynamically shared by some or all of the domains. A resource definition table stores status information for each resource, e.g., whether the resource is allocated to a domain, and also maintains the association between each resource and the domain to which it is allocated.

[0007]

The system further includes a system manager having read and write access to the resource definition table. In the event of a packet-processing fault in a domain, the system manager quiesces the system, suspending the issue of new packets. The system manager monitors the status information of the shared resources, for example to identify allocated resources left in an intermediate state. Using the domain identifier stored in the resource definition table, the system manager detects the faulty domain associated with each such allocated resource. It also detects one or more non-faulty domains that have no allocated resources in the resource definition table. The system manager then brings the non-faulty domains out of the quiescent state so that they resume operation, thereby containing the error within the faulty domain. Finally, the system manager handles the error in the faulty domain, for example by deallocating the resources it had allocated so that other domains can use them in the future, and by resetting the faulty domain. As a result, the fault is contained within the faulty domain, and the non-faulty domains continue to operate without being reset.

[0008]

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT Referring to FIG. 1, there is shown a block diagram of a multi-node computer system 100 partitioned into multiple domains. Each of the domains 131, 135, and 137 shown in FIG. 1 includes a plurality of nodes, that is, a central processing unit (CPU) node 105, a memory node 110, and an input/output (I/O) node 115, which are connected via an interconnect 120. CPU node 105 may be a conventional processor, such as an Intel or Intel-compatible Pentium class or better processor, a Sun SPARC class or better processor, or an IBM/Motorola PowerPC class or better processor. The I/O node 115 is a conventional I/O system, such as a storage device, an input device, or a peripheral device. The memory node 110 is a conventional memory system, such as a dynamic random access memory system or a static random access memory system. Each node may be implemented on a separate computer chip, computer board, or stand-alone unit. CPU node 105, memory node 110, and I/O node 115 communicate with each other via interconnect 120 using packets. Interconnect 120 may be, for example, a conventional global interconnect, or may include a router. Each domain 131, 135, and 137 has a local domain register that controls the state of that domain; domain register 145 is shown in FIG. 1 as an example. Each local domain register preferably includes various different types of local registers, such as control registers, status registers, and error-logging registers (not shown).

System 100 further includes one or more shared resources 130 that are dynamically used by at least one domain within system 100. System 100 also includes a resource definition table 155 that stores the state of each resource and the association between a resource and the domain to which it is allocated, even after the resource is no longer assigned to that domain. The resource definition table 155 may be implemented as a register array with address-decoding logic that permits entries to be read and written, or as a static RAM array with separate read and write ports. The resource definition table 155 is described in more detail below with reference to FIGS. 2 to 5.

The system 100 further includes an external agent, called the system manager 140, which is connected to the interconnect 120. In the preferred embodiment, the system manager 140 has read and write access to the resource definition table 155. This access helps the system manager 140 identify allocated resources that are in an intermediate state, and by using the domain ID, the system manager 140 identifies the faulty domain associated with each allocated resource. The system manager 140 maintains a list of all domains in the system 100 and a list of faulty domains. It can therefore also identify domains that have no allocated resources in the resource definition table 155 and are not faulty.

System manager 140 has read and write access to one or more local domain registers, e.g., domain register 145. This access allows the system manager 140 to monitor and control the state of each individual domain, for example suspending domains 131, 135, and 137 as part of a reconfiguration process. In the event of a hardware fault within a domain, the interconnect 120 deadlocks, and the domain is deadlocked with it. In a conventional computer system, resources are shared between domains, so a deadlocked domain can cause the operation of other domains to fail. Because the system manager 140 has read and write access to local domain registers such as register 145, it can reset the domain state of a deadlocked domain. The system manager 140 operates independently of the hardware and software running on any individual domain, and is therefore immune to hardware or software faults in any individual domain within computer system 100. The system manager 140 may be implemented in hardware, software, firmware, or a combination thereof, and may be part of a system controller (not shown) having a control interface (not shown) for a system administrator.

Referring to FIG. 2, there is shown the resource definition table 155, which tracks the status of outstanding transactions in the system 100. The resource definition table 155 shown in FIG. 2 contains eight entries, but it should be noted that it may contain any number of entries. A shared resource entry 40 is allocated to a domain when a packet is sent from a node to the interconnect 120; its state information is updated as further processing is performed, and the entry is deallocated when the series of packet transactions completes. The resource definition table 155 preferably includes fields for a valid bit 10, a domain ID 20, and a resource entry 30. The valid bit field 10 holds a value indicating whether the resource is allocated to a domain. In one embodiment of the present invention, the valid bit field 10 is "1" when the resource is allocated and "0" when it is deallocated. The domain ID field 20 identifies the domain to which the resource is allocated. The domain ID 20 allows the system 100 to maintain the association between a resource and its corresponding domain, so that in the event of a fault within the system 100, the system manager 140 can identify the one or more unaffected domains. As shown in FIG. 2, resources 0 and 1 are allocated to domain 0, resource 2 is allocated to domain 3, and resources 4 and 7 are allocated to domain 2.
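
The table of FIG. 2 can be modeled in software as a rough illustration; the patent describes a hardware register array, so the entry layout, field names, and helper function here are assumptions for clarity only.

```python
# Software model of the resource definition table 155 in the state of
# FIG. 2: eight entries, each with a valid bit (field 10) and a domain
# ID (field 20). The dictionary layout is an illustrative assumption.

FIG2_TABLE = [
    {"valid": 1, "domain": 0},  # resource 0 -> domain 0
    {"valid": 1, "domain": 0},  # resource 1 -> domain 0
    {"valid": 1, "domain": 3},  # resource 2 -> domain 3
    {"valid": 0, "domain": 0},  # resource 3 unallocated
    {"valid": 1, "domain": 2},  # resource 4 -> domain 2
    {"valid": 0, "domain": 0},  # resource 5 unallocated
    {"valid": 0, "domain": 0},  # resource 6 unallocated
    {"valid": 1, "domain": 2},  # resource 7 -> domain 2
]

def resources_of(table, domain):
    """Return the resource numbers currently allocated to `domain`."""
    return [i for i, e in enumerate(table)
            if e["valid"] and e["domain"] == domain]
```

With this model, `resources_of(FIG2_TABLE, 2)` yields resources 4 and 7, matching the allocation described for FIG. 2.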

FIG. 3 shows the resource deallocation process in the resource definition table 155. For example, resource 4 is deallocated when its series of packet transactions completes, and the valid bit field 10 for resource 4 is cleared from 1 to 0. It should be noted that the domain ID field 20 retains the value from when resource 4 was allocated to domain 2; this information is useful for identifying which domain last used resource 4.
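
A minimal sketch of this deallocation step, under the assumption (not from the patent) that an entry is a pair of fields:

```python
# Deallocation as in FIG. 3: clear the valid bit (field 10) but leave
# the domain ID (field 20) untouched, so the last user remains visible.
# The entry layout is an illustrative assumption, not the patent's format.

entry = {"valid": 1, "domain": 2}   # resource 4, allocated to domain 2

def deallocate(entry):
    entry["valid"] = 0              # 1 -> 0: resource becomes free
    # entry["domain"] is deliberately retained for later diagnosis

deallocate(entry)
last_user = entry["domain"]         # still 2: domain 2 last used resource 4
```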

FIG. 4 shows the process of selecting a resource for allocation. To select a resource, a priority encoder (not shown) decodes the valid bits 10 of all resources in the resource definition table 155 and selects the lowest-numbered unallocated resource. In FIG. 4, resource 3 is the lowest-numbered unallocated resource. Resource 3 is allocated when a packet is sent from a node, e.g., CPU node 105, to the interconnect 120, and the entry holds the state of the packet transaction. The status information of the shared resource is updated as further processing is performed.
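
A software analogue of the priority encoder is simply the first index whose valid bit is clear; the function name and behavior on a full table are assumptions for illustration:

```python
# Priority-encoder sketch: scan the valid bits in order and pick the
# lowest-numbered entry whose bit is clear (i.e., unallocated).

def select_free(valid_bits):
    for i, bit in enumerate(valid_bits):
        if bit == 0:
            return i        # lowest-numbered unallocated resource
    return None             # table full; the requester must wait or retry

# With resources 0-2, 4, and 7 allocated, resource 3 is selected.
```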

As shown in FIG. 5, the resource definition table 155 records the allocation of resource 3 by domain 1. Once a domain has allocated a resource, only that domain or the system manager 140 is permitted to modify or deallocate the resource. In the example shown, only domain 1 or the system manager 140 may modify or deallocate resource 3. This allows the system 100 to maintain resource isolation. Resource isolation is enforced by checking the domain ID of every message that accesses the resource definition table 155. If a message originates from a domain other than the one recorded in the domain ID field 20 of the resource being modified, this is an error condition that must be logged and reported to the system manager 140.
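
The isolation rule above can be sketched as a simple access check; the sentinel value for the system manager and the error-log shape are assumptions, not part of the patent:

```python
# Resource-isolation check: a message may modify an entry only if it
# comes from the owning domain or from the system manager. A mismatch
# is an error condition to be logged and reported.

SYSTEM_MANAGER = "system_manager"   # assumed sentinel for the manager

def check_access(entry, requester, error_log):
    """Return True if `requester` may modify `entry`; otherwise record
    the violation (requester, owner) for the system manager."""
    if requester == SYSTEM_MANAGER or requester == entry["domain"]:
        return True
    error_log.append((requester, entry["domain"]))  # log and report
    return False
```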

FIG. 6 is a flowchart showing a method of error containment and recovery in a logically partitioned system having shared resources. The process starts 10 when a packet-processing fault occurs within a domain and the domain deadlocks. The system manager 140 suspends 20 all nodes in all domains of the system 100: no new transactions are accepted, and all outstanding transactions in all domains are allowed to run to completion.

The system manager 140 preferably uses a mechanism called a "bus lock" to put the system 100 into the quiescent state. A bus lock is issued when a node, e.g., CPU node 105, needs to lock all resources in the partitioned system. The system manager 140 broadcasts a lock-acquire request to each node in all domains. Each node of the system 100 that receives the request stops issuing new processor requests to the system 100, guarantees sufficient resources to complete its outstanding requests, and waits for replies to all of them. Each node then sends its reply to the lock-acquire request to the system manager 140. When replies have been received from all nodes, the system 100 has drained all outstanding requests and enters the quiescent state.
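
The bus-lock protocol can be sketched as follows; the node representation and function name are illustrative assumptions, since the patent describes hardware behavior rather than an API:

```python
# Bus-lock quiesce sketch: every node stops issuing new requests and
# replies once its outstanding requests have drained; a node stuck on a
# packet-processing error never replies, which the manager detects by
# timeout.

def quiesce(nodes):
    """nodes: dict name -> {'outstanding': int, 'stuck': bool}.
    Returns (replies, fully_quiesced)."""
    replies = []
    for name, node in nodes.items():
        node["accepting_new"] = False      # stop issuing new requests
        if not node["stuck"]:
            node["outstanding"] = 0        # drain requests to completion
            replies.append(name)           # reply to the lock-acquire
    return replies, len(replies) == len(nodes)
```

If `fully_quiesced` is False, at least one node failed to reply, which corresponds to the timeout case described next.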

If a request cannot be completed because of a packet-processing error, no reply to the lock-acquire request is received from that particular node. This situation is detected simply by a timeout in the system manager 140. When the timeout expires, the system manager 140 examines 30 the resource definition table 155 to identify allocated resources left in an intermediate state. Using the domain ID, the system manager 140 detects 40 the faulty domain associated with each such allocated resource. It also detects 50 one or more non-faulty domains that have no allocated resources in the resource definition table. For example, in the state shown in FIG. 2, domain 1 is a non-faulty domain with no allocated resources. When the system manager 140 has identified a non-faulty domain, it ends 60 the quiescent state of that domain: for example, it issues a lock-release request to all nodes in all non-faulty domains, allowing them to resume issuing new requests to the interconnect 120. This allows the system manager 140 to contain the fault within the faulty domain without having to restart the non-faulty domains.
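
Steps 30 to 60 can be sketched end to end, assuming, as the text implies, that every entry still valid after the drain belongs to a faulty domain; the data layout and names are illustrative assumptions:

```python
# Fault-containment sketch: after the quiesce times out, scan the table
# for entries stuck in an intermediate state (still valid); their domain
# IDs name the faulty domains, and every other domain is released.

def contain_fault(table, all_domains):
    faulty = {e["domain"] for e in table if e["valid"]}    # steps 30-40
    healthy = [d for d in all_domains if d not in faulty]  # step 50
    released = list(healthy)   # step 60: lock-release to healthy domains
    return faulty, released
```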

The system manager 140 then handles the error in the faulty domain, for example by deallocating 70 the resources associated with the faulty domain so that the non-faulty domains can use them. As described above with reference to FIG. 3, if domain 2 is the faulty domain and resource 4 is allocated to it, the system manager 140 deallocates resource 4 by clearing the valid bit field 10 of the resource definition table 155, changing its value from "1" to "0" so that the resource can be used by another, non-faulty domain. It should be noted that the domain ID field 20 retains the value from when resource 4 was allocated; the system manager 140 uses this information to identify that domain 2 last used resource 4.

[0020]

In accordance with the preferred embodiment of the present invention, the system manager 140 selectively resets, via channel 165, the hardware state in the deadlocked domain by reinitializing or rebooting the faulty domain. When the faulty domain has been reset, the process ends 90. As a result, the fault is contained within the faulty domain, the non-faulty domains continue to operate without being reset, and the faulty domain is reset.

Supplementary notes
(Supplementary note 1) A partitioned computer system that contains a packet-processing fault within a faulty domain for handling, comprising: a resource definition table storing the state of at least one allocated resource dynamically shared by at least one domain, each resource being associated with a domain ID that identifies the domain to which it is allocated; and a system manager having write and read access to the resource definition table, the system manager using the domain ID to identify an allocated resource and the faulty domain associated with the allocated resource.
(Supplementary note 2) The system according to Supplementary note 1, further comprising a plurality of computer nodes connected via an interconnect, the system manager further being capable of suspending each node of each domain.
(Supplementary note 3) The system according to Supplementary note 2, wherein the system manager identifies at least one non-faulty domain and ends the suspension of the at least one non-faulty domain.
(Supplementary note 4) The system according to Supplementary note 1, wherein the system manager deallocates the allocated resource associated with the faulty domain by changing the state of the resource indicated in the resource definition table.
(Supplementary note 5) The system according to Supplementary note 1, wherein each resource of the resource definition table is associated with a valid bit having a value that indicates whether the resource is allocated.
(Supplementary note 6) The system according to Supplementary note 5, wherein a valid bit of 0 indicates that the resource is deallocated.
(Supplementary note 7) The system according to Supplementary note 5, wherein a valid bit of 1 indicates that the resource is allocated.
(Supplementary note 8) A method of containing a packet-processing fault within a faulty domain in a computer system having at least two domains, each domain having a plurality of computer nodes, the method comprising: putting each node of each domain into a suspended state in response to the packet-processing fault; identifying an allocated resource in a resource definition table; identifying the faulty domain associated with the allocated resource in the resource definition table; identifying at least one non-faulty domain having no allocated resource in the resource definition table; ending the suspension of the non-faulty domain; and deallocating the allocated resource associated with the faulty domain in the resource definition table.
(Supplementary note 9) The method according to Supplementary note 8, further comprising resetting the faulty domain.
(Supplementary note 10) The method according to Supplementary note 8, wherein the step of resetting the faulty domain further comprises changing the state of the faulty domain.
(Supplementary note 11) The method according to Supplementary note 8, wherein the step of entering the suspended state includes issuing a lock-acquire request to each node of each domain.
(Supplementary note 12) The method according to Supplementary note 8, wherein the step of ending the suspension includes issuing a lock-release request to each node of each domain.
(Supplementary note 13) The system according to Supplementary note 2, wherein a computer node is a CPU node.
(Supplementary note 14) The system according to Supplementary note 2, wherein a computer node is an I/O node.
(Supplementary note 15) The system according to Supplementary note 2, wherein a computer node is a memory node.
(Supplementary note 16) The system according to Supplementary note 1, wherein the system manager is implemented in hardware.
(Supplementary note 17) The system according to Supplementary note 1, wherein the system manager is implemented in software.
(Supplementary note 18) The system according to Supplementary note 1, wherein the system manager is implemented in software on a computer outside the system.

[Brief description of drawings]

FIG. 1 is a block diagram of the overall architecture of a multi-node computer system of the present invention.

FIG. 2 is a block diagram of a resource definition table according to the embodiment of FIG. 1.

FIG. 3 is a block diagram illustrating a resource deallocation process in the resource definition table of FIG. 2.

FIG. 4 is a block diagram illustrating the process of selecting the lowest-numbered resource for allocation in the resource definition table of FIG. 2.

FIG. 5 is a block diagram showing a resource definition table that traces allocation of resource 3 by domain 1.

FIG. 6 is a flowchart of a method performed by the embodiment of FIG. 1.

Continuation of front page: (51) Int.Cl.7 identification code FI theme code (reference): G06F 15/16 640; G06F 15/16 640A. (72) Inventors: Yasushi Umezawa, 20875 Valley Green Drive, No. 51, California 95014, USA; Jeremy J. Farrell, 1030 Patricia Court, Campbell, California 95008, USA; Sudheer Miryala, 5725 West Walbrook Drive, San Jose, California 95129, USA; Takeshi Shimizu, 310 Elan Village Lane, No. 113, San Jose, California 95134, USA; Hitoshi Oi, 777 Glades Road, Boca Raton, Florida 33431, USA; Patrick N. Conway, 973 Dolores, Los Altos, California 94024, USA. F-terms (reference): 5B045 BB28 BB32 HH01 HH04 JJ02 JJ07 JJ13; 5B098 HH01 JJ03

Claims (10)

[Claims]
1. A partitioned computer system that contains a packet-processing fault within a faulty domain for handling, comprising: a resource definition table storing the state of at least one allocated resource dynamically shared by at least one domain, each resource being associated with a domain ID that identifies the domain to which it is allocated; and a system manager having write and read access to the resource definition table, the system manager using the domain ID to identify an allocated resource and the faulty domain associated with the allocated resource.
2. The system of claim 1, further comprising a plurality of computer nodes connected via an interconnect, the system manager further being capable of suspending each node of each domain.
3. The system of claim 2, wherein the system manager identifies at least one non-faulty domain and ends the suspension of the at least one non-faulty domain.
4. The system of claim 1, wherein the system manager deallocates the allocated resource associated with the faulty domain by changing the state of the resource indicated in the resource definition table.
5. The system of claim 1, wherein each resource of the resource definition table is associated with a valid bit having a value that indicates whether the resource is allocated.
6. The system of claim 5, wherein a valid bit of 0 indicates that the resource is deallocated.
7. The system of claim 5, wherein a valid bit of 1 indicates that the resource is allocated.
8. A method of containing a packet-processing fault within a faulty domain in a computer system partitioned into at least two domains, each domain having a plurality of computer nodes, the method comprising: putting each node of each domain into a suspended state in response to the packet-processing fault; identifying an allocated resource in a resource definition table; identifying the faulty domain associated with the allocated resource in the resource definition table; identifying at least one non-faulty domain having no allocated resource in the resource definition table; ending the suspension of the non-faulty domain; and deallocating the allocated resource associated with the faulty domain in the resource definition table.
9. The method of claim 8, further comprising resetting the fault domain.
10. The method of claim 8, wherein resetting the fault domain further comprises changing a state of the fault domain.
JP2002190699A 2001-05-17 2002-06-28 Error suppression and error handling in partitioned systems with shared resources Active JP4213415B2 (en)

Priority Applications (4)

Application Number Priority Date Filing Date Title
US30196901P 2001-06-29 2001-06-29
US60/301969 2001-06-29
US10/150618 2002-05-17
US10/150,618 US7380001B2 (en) 2001-05-17 2002-05-17 Fault containment and error handling in a partitioned system with shared resources

Publications (2)

Publication Number Publication Date
JP2003076671A (en) 2003-03-14
JP4213415B2 JP4213415B2 (en) 2009-01-21

Family

ID=26847856

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2002190699A Active JP4213415B2 (en) 2001-05-17 2002-06-28 Error suppression and error handling in partitioned systems with shared resources

Country Status (1)

Country Link
JP (1) JP4213415B2 (en)


Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008165556A (en) * 2006-12-28 2008-07-17 Hitachi Ltd Computer system and chip set therefor
JP4723470B2 (en) * 2006-12-28 2011-07-13 株式会社日立製作所 Computer system and its chipset
WO2008120383A1 (en) * 2007-03-29 2008-10-09 Fujitsu Limited Information processor and fault processing method
JP4495248B2 (en) * 2007-03-29 2010-06-30 富士通株式会社 Information processing apparatus and failure processing method
JPWO2008120383A1 (en) * 2007-03-29 2010-07-15 富士通株式会社 Information processing apparatus and failure processing method
US7930599B2 (en) 2007-03-29 2011-04-19 Fujitsu Limited Information processing apparatus and fault processing method
WO2009147716A1 (en) * 2008-06-02 2009-12-10 富士通株式会社 Data processing system, data processing method, and data processing program
JP5212471B2 (en) * 2008-06-02 2013-06-19 富士通株式会社 Data processing system, data processing method, and data processing program
US8806276B2 (en) 2008-06-02 2014-08-12 Fujitsu Limited Control system for driving a data processing apparatus
US9483502B2 (en) 2012-08-09 2016-11-01 Fujitsu Limited Computational processing device including request holding units each provided for each type of commands, information processing device including request holding units each provided for each type of commands, and method of controlling information processing device
JP5930046B2 (en) * 2012-08-17 2016-06-08 富士通株式会社 Information processing apparatus and control method of information processing apparatus

Also Published As

Publication number Publication date
JP4213415B2 (en) 2009-01-21


Legal Events

Date Code Title Description
A621 Written request for application examination

Free format text: JAPANESE INTERMEDIATE CODE: A621

Effective date: 20050520

A977 Report on retrieval

Free format text: JAPANESE INTERMEDIATE CODE: A971007

Effective date: 20070308

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20070327

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20070528

A131 Notification of reasons for refusal

Free format text: JAPANESE INTERMEDIATE CODE: A131

Effective date: 20071106

A521 Written amendment

Free format text: JAPANESE INTERMEDIATE CODE: A523

Effective date: 20080107

TRDD Decision of grant or rejection written
A01 Written decision to grant a patent or to grant a registration (utility model)

Free format text: JAPANESE INTERMEDIATE CODE: A01

Effective date: 20080930


A61 First payment of annual fees (during grant procedure)

Free format text: JAPANESE INTERMEDIATE CODE: A61

Effective date: 20081030

R150 Certificate of patent or registration of utility model

Free format text: JAPANESE INTERMEDIATE CODE: R150

FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20111107

Year of fee payment: 3


FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20121107

Year of fee payment: 4


FPAY Renewal fee payment (event date is renewal date of database)

Free format text: PAYMENT UNTIL: 20131107

Year of fee payment: 5