CN101043460A - Apparatus and method for realizing single stream forwarding of multi-network processing unit - Google Patents
- Publication number
- CN101043460A CN101043460A CN200710098010.3A CN200710098010A CN101043460A CN 101043460 A CN101043460 A CN 101043460A CN 200710098010 A CN200710098010 A CN 200710098010A CN 101043460 A CN101043460 A CN 101043460A
- Authority
- CN
- China
- Prior art keywords
- data
- network processing
- processing unit
- single stream
- unit
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04L—TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
- H04L65/00—Network arrangements, protocols or services for supporting real-time applications in data packet communication
- H04L65/60—Network streaming of media packets
- H04L65/61—Network streaming of media packets for supporting one-way streaming services, e.g. Internet radio
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Data Exchanges In Wide-Area Networks (AREA)
Abstract
The disclosed apparatus for single-stream forwarding across multiple network processing units comprises: a plurality of network processing units; a data-receiving management unit that distributes received data among the network processing units; and a data-transmitting management unit that recombines all the processed data into a single stream for transmission. By using multiple NPs in parallel, the invention improves forwarding performance while saving development cost and investment.
Description
Technical field
The present invention relates to the field of data communications, and in particular to an apparatus and method for implementing single-stream forwarding across multiple network processing units.
Background technology
With the rapid growth of broadband networks and the continual arrival of new applications, even the data communications equipment with the highest processing capability in the prior art faces ever-increasing traffic pressure. Existing forwarding functions are implemented mainly by a CP (Control Processor), an NP (Network Processor), and TM (Traffic Management)/Fabric (switching-fabric) devices. The NP implements the core forwarding function of the equipment and may be realized by a single bidirectional NP unit, as shown in Figure 1. On the user side, the ingress interface unit performs physical-layer processing of the line signal, serial-to-parallel conversion, deframing, and part of the layer-2 overhead processing; the traffic then enters the NP unit, where protocol messages requiring further handling are exchanged between the NP and the CP, and finally reaches the network side through the traffic-management/switching-fabric unit. Traffic from the network side arrives at the NP through the traffic-management/switching-fabric unit and is then handled by the egress interface unit, which performs physical-layer processing of the line signal, parallel-to-serial conversion, framing, and part of the layer-2 overhead processing before delivering it to the user side.
The NP function may also be realized by two unidirectional NP units, as shown in Figure 2: in the upstream direction, the ingress interface unit performs physical-layer processing of the line signal, serial-to-parallel conversion, deframing, and part of the layer-2 overhead processing, after which the traffic enters the upstream NP unit and then reaches the network side through the traffic-management/switching-fabric unit. Network-side traffic enters the downstream NP unit through the traffic-management/switching-fabric unit and is then handled by the egress interface unit (physical-layer processing, parallel-to-serial conversion, framing, part of the layer-2 overhead processing) before being sent to the user side.
If traffic grows, an NP of the existing capability can no longer meet demand, and under the present forwarding architecture a higher-performance NP is the only way to handle the larger traffic volume. Although a high-performance NP could be developed to replace the existing lower-capability NP, doing so entails a huge R&D investment and wastes existing resources. Moreover, the next rise in traffic volume forces yet another costly hardware upgrade, and the new high-performance NP soon fails to keep up with rapidly growing traffic, ultimately raising the network operator's CapEx (Capital Expense).
Summary of the invention
Embodiments of the invention provide a method and apparatus for high-performance single-stream forwarding with multiple NPs, so as to reduce the system-upgrade cost caused by growing network traffic.
An embodiment of the invention provides an apparatus for implementing single-stream forwarding across multiple network processing units, comprising a plurality of network processing units, a data-receiving management unit, and a data-transmitting management unit;
the data-receiving management unit distributes received data to the plurality of network processing units for processing;
the data-transmitting management unit recombines the data processed by the plurality of network processing units into a single stream and sends it.
An embodiment of the invention also provides a method for implementing single-stream forwarding across multiple network processing units, comprising:
distributing received data to a plurality of network processing units for processing; and
recombining the data processed by the plurality of network processing units into a single stream and sending it.
In the embodiments of the invention, the received data are distributed to a plurality of network processing units for processing, and the processed data are reordered and recombined into a single stream for transmission, thereby achieving multi-NP single-stream forwarding. The forwarding capability of the equipment is raised on the basis of multiple existing NPs, without developing a new high-performance NP, which saves R&D investment and development cost.
Description of drawings
Fig. 1 shows a prior-art system in which one bidirectional NP unit implements the NP function;
Fig. 2 shows a prior-art system in which two unidirectional NP units implement the NP function;
Fig. 3 is a flowchart of a method for multi-NP single-stream forwarding according to an embodiment of the invention;
Fig. 4 is a structural diagram of an apparatus for multi-NP single-stream forwarding according to an embodiment of the invention;
Fig. 5 is a structural diagram of the data-receiving management unit in an embodiment of the invention;
Fig. 6 is a structural diagram of the data-transmitting management unit in an embodiment of the invention;
Fig. 7 is a structural diagram of the network-processor backup control subunit in the data-receiving management unit of an embodiment of the invention;
Fig. 8 is a structural diagram of the network-processor backup control subunit in the data-transmitting management unit of an embodiment of the invention.
Embodiment
An embodiment of the invention provides a method for implementing single-stream forwarding across multiple network processing units which, as shown in Figure 3, comprises the following steps:
Step s301: distribute the single-stream data received from the user side or the network side to a plurality of network processing units for processing. Specifically, the received packets are ordered and assigned ordering identifiers, and the identified data are then sent to the respective network processing units. Because each NP forwarding engine has its own packet-processing flow, its own memory-access pattern during table lookups, and its own PCB (Printed Circuit Board) trace lengths, splitting the single stream across the NPs introduces differential delay. Therefore, before the data enter the NPs, sequence-number bytes are prepended to each packet, and the packet is then handed to an NP for processing.
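The sequence-tagging step can be illustrated with a minimal Python sketch (not part of the patent; a fixed 4-byte big-endian tag and the function name are illustrative assumptions):

```python
import itertools
import struct

def tag_packets(packets):
    """Prepend a monotonically increasing sequence number to each packet
    so the egress side can restore the original order after the single
    stream fans out across several NPs (assumed 4-byte big-endian tag)."""
    counter = itertools.count()
    for pkt in packets:
        seq = next(counter) & 0xFFFFFFFF  # wrap at 32 bits
        yield struct.pack(">I", seq) + pkt
```

In hardware this tagging would be done by the ingress logic at line rate; the sketch only shows the framing convention.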
To conserve network resources and raise NP utilization, load balancing, congestion control, and similar functions may also be applied to the received data. In addition, a backup network processing unit may be added to the system; when a trigger condition is met, for example a working network processing unit fails, the data are switched from the working network processing unit to the backup network processing unit.
Step s302: recombine the data processed by the plurality of network processing units into a single stream and send it. Specifically, the packets processed by the network processing units are reordered according to the ordering identifiers and merged into a single stream for transmission; the sequence numbers, added by the symmetric logic in the ingress direction, are terminated and stripped here.
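The egress-side reordering can be sketched as follows (a Python illustration, not the patent's implementation; it assumes the 4-byte tag convention and buffers out-of-order arrivals in a heap):

```python
import heapq
import struct

def merge_streams(tagged_packets):
    """Strip the assumed 4-byte sequence tag and emit packets in
    sequence order, buffering any packet that arrives ahead of its turn."""
    heap = []       # min-heap keyed on sequence number
    expected = 0    # next sequence number to release
    out = []
    for pkt in tagged_packets:
        seq = struct.unpack(">I", pkt[:4])[0]
        heapq.heappush(heap, (seq, pkt[4:]))
        # release every packet that is now in order
        while heap and heap[0][0] == expected:
            out.append(heapq.heappop(heap)[1])
            expected += 1
    return out
```

A real device would bound the reorder buffer and time out missing sequence numbers; the sketch omits those details.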
In the above embodiments, a congestion-control policy and algorithm are needed when the total system traffic exceeds the combined processing capability of the NPs, or when a fault in some NP prevents line-rate processing and data congestion occurs. For example, with an FQ (Fair Queueing) algorithm, n NPs correspond to n queues; the state of the buffer in front of each NP is polled periodically, and if the buffer of one NP (or of all NPs) overflows, that NP (or all NPs) is deemed faulty and the cause of congestion, so the traffic originally assigned to it is discarded until the corresponding buffer no longer overflows.
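The per-NP drop policy described above can be modelled with a short Python sketch (class and field names are hypothetical; buffer depth stands in for the hardware buffer in front of each NP):

```python
from collections import deque

class NPDispatcher:
    """One bounded queue per NP; when an NP's queue is full, packets
    destined for it are dropped until the backlog drains, mirroring
    the simple overflow-triggered drop policy described above."""
    def __init__(self, n_nps, depth):
        self.queues = [deque() for _ in range(n_nps)]
        self.depth = depth
        self.dropped = 0

    def enqueue(self, np_index, pkt):
        q = self.queues[np_index]
        if len(q) >= self.depth:   # buffer would overflow -> congested
            self.dropped += 1
            return False
        q.append(pkt)
        return True
```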
In the above embodiments, a load-balancing algorithm may be used to distribute the traffic among the NPs in a reasonable, balanced way according to each service and each NP's processing capability, making maximal use of the NP processing resources. Candidate algorithms include round-robin, weighted round-robin, random, weighted random, and capability-based balancing. In this embodiment every NP has identical performance and identical hardware/software configuration, so a combination of round-robin and capability-based balancing is appropriate. Concretely, packets arriving at the interface are assigned to the NPs in round-robin fashion: packet P1 enters NP1, packet P2 enters NP2, ..., packet Pn enters NPn, so the packet-assignment rate is balanced against each NP's processing rate. At the same time, because packet lengths vary, the remaining free space in each NP's transmit-side buffer is also considered when balancing: when packets Pn+1, Pn+2, ... are distributed, the NP among NP1 to NPn whose buffer has the most remaining space is selected, achieving optimal load balancing.
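One way to combine the two criteria is sketched below (a hypothetical policy illustration in Python, not the patent's exact algorithm): the round-robin choice stands unless another NP currently has strictly more free transmit-buffer space.

```python
def pick_np(round_robin_next, free_space):
    """Choose an NP index: default to round-robin, but divert to the NP
    with the most free transmit-buffer space when that NP has strictly
    more room than the round-robin target (illustrative policy)."""
    n = len(free_space)
    rr = round_robin_next % n
    best = max(range(n), key=lambda i: free_space[i])
    return best if free_space[best] > free_space[rr] else rr
```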
In the above embodiments, the data-bus bandwidth at the user side and at the TM/Fabric side, where the total traffic flows, is the largest, whereas the data-bus bandwidth on each NP side is much smaller; the two sides therefore often use different bus standards. Where such a difference exists, an interface-conversion function provides a seamless data path, for example converting the high-rate SERDES bus of the user or network side to a lower-rate interface suited to an NP, such as SPI-4.2 (System Packet Interface Level 4 Phase 2) or XAUI (10-Gigabit Attachment Unit Interface).
In addition, the congestion-control function described above is usually placed at the aggregate-traffic ingress rather than at each NP's ingress, for two reasons. First, a router's IP congestion-avoidance algorithms discard packets per connection according to some policy, but once the traffic is balanced across the NPs a single connection is spread over n streams, so weighted congestion-avoidance algorithms such as WFQ (Weighted Fair Queuing) and WRED (Weighted Random Early Detection) cannot be applied accurately inside each NP. Second, the embodiment relies on packet sequence numbers to guarantee strict ordering, and the sequence numbers are added at the aggregate-traffic point; dropping packets at individual NPs would leave gaps and make the sequence numbers discontinuous.
In addition, core routers sit on critical network nodes, where reliability is a key metric. High reliability is currently achieved with techniques such as physical-link backup and board-level backup of important boards; as the core module on a router's most important interface boards, the NP forwarding module should likewise provide module-level n+1 backup. In embodiments of the invention, the network-processor backup control function makes n+1 backup of the NP module easy to realize. For example, if the normal single-stream traffic requires n NPs, one backup NP unit is added. NP faults are detected by monitoring the overflow state of the dedicated FIFO (First In First Out) buffer in front of each NP: if an overflow is detected at some moment, the system activates the backup function and redirects the traffic destined for that NP to the backup NP.
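The n+1 switchover logic can be modelled compactly (a Python sketch under the assumption that FIFO-overflow detection is reported as a per-NP flag; names are illustrative):

```python
class BackupSwitch:
    """n+1 redundancy: n working NPs plus one shared backup NP.
    A working NP is declared failed when the FIFO feeding it overflows;
    its traffic is then steered to the backup NP.  Overflow detection
    itself is hardware-side and modelled here as a reported event."""
    def __init__(self, n_working):
        self.failed = [False] * n_working
        self.backup = n_working        # index of the spare NP

    def report_overflow(self, np_index):
        self.failed[np_index] = True   # fault detected on this NP

    def route(self, np_index):
        return self.backup if self.failed[np_index] else np_index
```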
In the foregoing embodiments, packet ordering and reordering are mandatory functions, whereas congestion control, load balancing, interface conversion, and backup control are optional; an implementation may provide all of them, or only one or several.
An embodiment of the invention provides an apparatus for implementing single-stream forwarding across multiple network processing units, comprising n NPs (n ≥ 2) that are full peers of one another, with dedicated logic added before and after the NPs; this logic may be implemented in an FPGA (Field Programmable Gate Array) or an ASIC (Application Specific Integrated Circuit). As shown in Figure 4, the apparatus comprises: a plurality of network processing units (which may be divided into upstream/downstream network processing units), a data-receiving management unit 100, a data-transmitting management unit 200, and a switching fabric 300 with a traffic-management function. The data-receiving management unit 100 distributes the single-stream data received from the user side among the network processing units, and the data-transmitting management unit 200 recombines the data processed by the network processing units into a single stream and sends it to the switching fabric 300; alternatively, the data-receiving management unit 100 distributes data received from the switching fabric 300 among the network processing units, and the data-transmitting management unit 200 recombines the processed data into a single stream and sends it to the user side. The network processing units, the data-receiving management unit 100, and the data-transmitting management unit 200 are connected to a control processing unit, which controls and manages them.
Referring to Figure 5, the data-receiving management unit 100 further comprises: a packet-ordering subunit 130, which orders the received packets, assigns ordering identifiers, and sends the identified data to the network processing units respectively; a network-processor backup control subunit 140, which, when a trigger condition is met (a working network processing unit fails or its load overflows), switches the data from that network processing unit to the backup network processing unit; a load-balancing subunit 120, which load-balances the received data and, through the packet-ordering subunit 130, assigns them evenly to the network processing units; a congestion-control subunit 110, which applies congestion control to the data before they enter the load-balancing subunit 120; and an interface-conversion subunit 150, which performs interface conversion on the received data. The interconnection of the subunits shown is only one application example and is not limited to Figure 5; for instance, when the load-balancing subunit 120 is absent, the packet-ordering subunit 130 connects directly to the congestion-control subunit 110.
Referring to Figure 6, the data-transmitting management unit 200 further comprises: a packet-reordering subunit 230, which reorders the packets processed by the network processing units according to the ordering identifiers and merges them into a single stream for transmission; a network-processor backup control subunit 220, which, when a trigger condition is met (a working network processing unit fails or its load overflows), switches the data from that network processing unit to the backup network processing unit; and an interface-conversion subunit 210, which performs interface conversion on the data processed by the network processing units.
When the backup NP unit is added, the traffic of every working NP is also wired to the backup NP, under switch control; the switches open or close according to the NP fault-detection result. The operating principles of the backup control subunit 140 in Fig. 5 and the backup control subunit 220 in Fig. 6 are shown in Fig. 7 and Fig. 8 respectively: working NPs NP1 through NPn are each connected to the backup NP through a switch. The switchover time between a working NP and the backup NP is determined mainly by the fault-detection time and is on the order of microseconds. The network-processor backup control subunit may, of course, be present in only the data-receiving management unit 100 or only the data-transmitting management unit 200.
Embodiments of the invention thus conveniently provide n+1 backup of the NP module, improve system reliability, save the R&D investment and development cost of a high-performance NP, extend the life cycle of existing NPs, and allow fast release to market, shortening time to market.
The above discloses only several specific embodiments of the present invention; the invention is not limited thereto, and any variation conceivable by a person skilled in the art shall fall within the protection scope of the present invention.
Claims (13)
1. An apparatus for implementing single-stream forwarding across multiple network processing units, characterized by comprising a plurality of network processing units, a data-receiving management unit, and a data-transmitting management unit;
the data-receiving management unit distributes received data to the plurality of network processing units for processing;
the data-transmitting management unit recombines the data processed by the plurality of network processing units into a single stream and sends it.
2. The apparatus for implementing single-stream forwarding across multiple network processing units according to claim 1, characterized in that
the data-receiving management unit further comprises a packet-ordering subunit, which orders the received data, assigns ordering identifiers, and sends the identified data to the plurality of network processing units respectively;
the data-transmitting management unit further comprises a packet-reordering subunit, which reorders the data processed by the plurality of network processing units according to the ordering identifiers and merges them into a single stream for transmission.
3. The apparatus according to claim 2, characterized in that the apparatus further comprises a backup network processing unit, and the data-receiving management unit and the data-transmitting management unit each further comprise a network-processor backup control subunit;
the network-processor backup control subunit, when a trigger condition is met, switches the data processed by a network processing unit from that network processing unit to the backup network processing unit.
4. The apparatus according to claim 2, characterized in that the data-receiving management unit further comprises a load-balancing subunit, which load-balances the received data and, through the packet-ordering subunit, assigns the received data evenly to the plurality of network processing units.
5. The apparatus according to claim 4, characterized in that the data-receiving management unit further comprises a congestion-control subunit, which applies congestion control to the data before they enter the load-balancing subunit.
6. The apparatus according to claim 2, characterized in that the data-receiving management unit further comprises an interface-conversion subunit, which performs interface conversion on the received data; and the data-transmitting management unit further comprises an interface-conversion subunit, which performs interface conversion on the data processed by the plurality of network processing units.
7. The apparatus according to any one of claims 1 to 6, characterized in that the data-receiving management unit receives user-side single-stream data and the data-transmitting management unit sends single-stream data to the network side; or the data-receiving management unit receives network-side single-stream data and the data-transmitting management unit sends single-stream data to the user side.
8. A method for implementing single-stream forwarding across multiple network processing units, characterized by comprising:
distributing received data to a plurality of network processing units for processing; and
recombining the data processed by the plurality of network processing units into a single stream and sending it.
9. The method according to claim 8, characterized in that
distributing the received data to the plurality of network processing units for processing specifically comprises: ordering the received data, assigning ordering identifiers, and sending the identified data to the plurality of network processing units respectively for processing; and
recombining the data processed by the plurality of network processing units into a single stream and sending it specifically comprises: reordering the data processed by the plurality of network processing units according to the ordering identifiers and merging them into a single stream for transmission.
10. The method according to claim 9, characterized in that, when a trigger condition is met, the data processed by a network processing unit are switched from that network processing unit to a backup network processing unit.
11. The method according to claim 9, characterized in that, before distributing the received data to the plurality of network processing units, the method further comprises: load-balancing the received data.
12. The method according to claim 9, characterized in that, before distributing the received data to the plurality of network processing units, the method further comprises: applying congestion control to the received data.
13. The method according to claim 9, characterized in that, before distributing the received data to the plurality of network processing units, the method further comprises: performing interface conversion on the received data; and before recombining the data processed by the plurality of network processing units into a single stream, the method further comprises: performing interface conversion on the data processed by the plurality of network processing units.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN200710098010.3A CN101043460B (en) | 2007-04-24 | 2007-04-24 | Apparatus and method for realizing single stream forwarding of multi-network processing unit |
PCT/CN2008/070733 WO2008128464A1 (en) | 2007-04-24 | 2008-04-17 | Device and method for realizing transferring single stream by multiple network processing units |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN200710098010.3A CN101043460B (en) | 2007-04-24 | 2007-04-24 | Apparatus and method for realizing single stream forwarding of multi-network processing unit |
Publications (2)
Publication Number | Publication Date |
---|---|
CN101043460A true CN101043460A (en) | 2007-09-26 |
CN101043460B CN101043460B (en) | 2010-07-07 |
Family
ID=38808664
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN200710098010.3A Expired - Fee Related CN101043460B (en) | 2007-04-24 | 2007-04-24 | Apparatus and method for realizing single stream forwarding of multi-network processing unit |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN101043460B (en) |
WO (1) | WO2008128464A1 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008128464A1 (en) * | 2007-04-24 | 2008-10-30 | Huawei Technologies Co., Ltd. | Device and method for realizing transferring single stream by multiple network processing units |
US8391706B2 (en) | 2008-03-24 | 2013-03-05 | Nec Corporation | Optical signal division transmission system, optical transmitter, optical receiver, and optical signal division transmission method |
CN111245627A (en) * | 2020-01-15 | 2020-06-05 | 湖南高速铁路职业技术学院 | Communication terminal device and communication method |
CN113067778A (en) * | 2021-06-04 | 2021-07-02 | 新华三半导体技术有限公司 | Flow management method and flow management chip |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7430223B2 (en) * | 2002-08-28 | 2008-09-30 | Advanced Micro Devices, Inc. | Wireless interface |
CN100486189C (en) * | 2003-01-03 | 2009-05-06 | 华为技术有限公司 | Router |
CN100579065C (en) * | 2006-09-30 | 2010-01-06 | 华为技术有限公司 | Transmission method and device for high speed data flow and data exchange device |
CN101043460B (en) * | 2007-04-24 | 2010-07-07 | 华为技术有限公司 | Apparatus and method for realizing single stream forwarding of multi-network processing unit |
- 2007-04-24: application CN200710098010.3A filed in China; granted as patent CN101043460B, now not in force (Expired - Fee Related)
- 2008-04-17: international application PCT/CN2008/070733 filed (published as WO2008128464A1)
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2008128464A1 (en) * | 2007-04-24 | 2008-10-30 | Huawei Technologies Co., Ltd. | Device and method for realizing transferring single stream by multiple network processing units |
US8391706B2 (en) | 2008-03-24 | 2013-03-05 | Nec Corporation | Optical signal division transmission system, optical transmitter, optical receiver, and optical signal division transmission method |
CN111245627A (en) * | 2020-01-15 | 2020-06-05 | 湖南高速铁路职业技术学院 | Communication terminal device and communication method |
CN111245627B (en) * | 2020-01-15 | 2022-05-13 | 湖南高速铁路职业技术学院 | Communication terminal device and communication method |
CN113067778A (en) * | 2021-06-04 | 2021-07-02 | 新华三半导体技术有限公司 | Flow management method and flow management chip |
Also Published As
Publication number | Publication date |
---|---|
CN101043460B (en) | 2010-07-07 |
WO2008128464A1 (en) | 2008-10-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US8472312B1 (en) | Stacked network switch using resilient packet ring communication protocol | |
US7295519B2 (en) | Method of quality of service based flow control within a distributed switch fabric network | |
US7221647B2 (en) | Packet communication apparatus and controlling method thereof | |
US7756029B2 (en) | Dynamic load balancing for layer-2 link aggregation | |
US7274660B2 (en) | Method of flow control | |
EP2671352B1 (en) | System and method for aggregating and estimating the bandwidth of multiple network interfaces | |
US8605752B2 (en) | Communication apparatus, communication method, and computer program | |
US20060098573A1 (en) | System and method for the virtual aggregation of network links | |
CN1825836A (en) | System and method for avoiding network apparatus jamming | |
WO2014068426A1 (en) | A method for dynamic load balancing of network flows on lag interfaces | |
CN101668005A (en) | Data transmission accelerating engine method based on multiple access passages of transmitting end | |
WO2013016971A1 (en) | Method and device for sending and receiving data packet in packet switched network | |
CN101060533A (en) | A method, system and device for improving the reliability of VGMP protocol | |
CN101043460B (en) | Apparatus and method for realizing single stream forwarding of multi-network processing unit | |
Huang et al. | Tuning high flow concurrency for MPTCP in data center networks | |
WO2011147257A1 (en) | Method for group-based multicast with non-uniform receivers | |
CN111224888A (en) | Method for sending message and message forwarding equipment | |
CN101051957A (en) | Dynamically regulating method and device for link state and bundled link state | |
Hari et al. | An architecture for packet-striping protocols | |
CN102209028A (en) | Flow control device and method for CPU (Central Processing Unit) | |
US20210092058A1 (en) | Transmission of high-throughput streams through a network using packet fragmentation and port aggregation | |
JPH10224356A (en) | Network system and its load control method | |
CN1571354A (en) | A method for implementing link aggregation | |
WO2022147792A1 (en) | Switching system, switching network and switching node | |
EP3972209A1 (en) | Method for processing network congestion, and related apparatus |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
C14 | Grant of patent or utility model | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee | |
Granted publication date: 20100707 Termination date: 20180424 |