CN103220230B - Dynamic shared buffering method supporting interleaved packet storage - Google Patents

Dynamic shared buffering method supporting interleaved packet storage

Info

Publication number
CN103220230B
CN103220230B (application CN201310176871.4A)
Authority
CN
China
Prior art keywords
message
slice
packet slice
register item
current message
Prior art date
Legal status
Active
Application number
CN201310176871.4A
Other languages
Chinese (zh)
Other versions
CN103220230A (en)
Inventor
徐炜遐
肖灿文
王永庆
刘路
张鹤颖
沈胜宇
张磊
戴艺
伍楠
曹继军
高蕾
Current Assignee
National University of Defense Technology
Original Assignee
National University of Defense Technology
Priority date
Filing date
Publication date
Application filed by National University of Defense Technology filed Critical National University of Defense Technology
Priority to CN201310176871.4A priority Critical patent/CN103220230B/en
Publication of CN103220230A publication Critical patent/CN103220230A/en
Application granted granted Critical
Publication of CN103220230B publication Critical patent/CN103220230B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Data Exchanges In Wide-Area Networks (AREA)

Abstract

The invention discloses a dynamic shared buffering method supporting interleaved packet storage. The implementation steps are as follows: 1) a buffer and a register bank are set up in advance; 2) the type of a newly arrived packet slice is judged, a buffer address is allocated according to the type, a register entry is allocated or updated, the next-slice address pointer of the previous slice of the same packet in the buffer is modified, and the new slice is written to the allocated buffer address; if the register bank is not empty, the next step is performed; 3) a current slice is selected from the buffer and its type is judged; according to the type of the current slice, the node communicates with the downstream node to apply flow control to the current slice, and if the current slice satisfies the flow-control condition, the next step is performed; 4) the current slice is arbitrated; if arbitration succeeds, the current slice is output and the corresponding register entry is updated; if arbitration fails, return to step 3). The invention has the advantages of good data transmission performance, high resource utilization and good versatility.

Description

Dynamic shared buffering method supporting interleaved packet storage
Technical field
The present invention relates to packet buffering methods in the field of interconnection communication, and in particular to a dynamic shared buffering method supporting interleaved packet storage.
Background art
Packet buffering is a key technology in communication systems. As shown in Figure 1, a packet slice is the basic unit in which a packet is transmitted through the network. Each packet consists of multiple slices of three types: the head slice (also called the packet header), body slices, and the tail slice (also called the packet tail). The head slice marks the beginning of a packet and carries its routing information; the tail slice marks the end of a packet. The packet buffering schemes in common use today are first-in-first-out (FIFO) buffering and dynamically allocated multi-queue (DAMQ) buffering organized per output port. Both schemes store packets strictly in sequence and do not support interleaving between packets, which leads to head-of-line (HOL) blocking and therefore to poor data transmission performance and low resource utilization in the communication system.
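For illustration only (this sketch is not part of the patent text), the slice format described above can be modelled in C roughly as follows; the type name, field names and widths are assumptions made for the example:

/* Illustrative model of a packet slice. The patent only requires that a slice
 * carry its type and that the head slice carry the routing information of its
 * packet; everything else here is an assumption of the sketch. */
typedef enum { SLICE_HEAD, SLICE_BODY, SLICE_TAIL } slice_type_t;

typedef struct {
    slice_type_t type;        /* head, body or tail slice                  */
    unsigned     routing;     /* routing information; meaningful for head  */
    unsigned     payload[8];  /* slice payload; width is an assumption     */
} packet_slice_t;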
Summary of the invention
The technical problem to be solved by the present invention is to provide a dynamic shared buffering method supporting interleaved packet storage that offers good data transmission performance, high resource utilization and good versatility.
In order to solve the above technical problem, the technical solution adopted by the present invention is as follows:
A dynamic shared buffering method supporting interleaved packet storage, whose implementation steps are as follows:
1) A buffer for storing incoming packet slices and a register bank for storing the routing information of incoming packets are set up in advance. Each packet-slice entry cached in said buffer comprises the slice itself and a next-slice address pointer pointing to the next slice of the same packet; the register entries cached in said register bank correspond one-to-one to the packets in the buffer that have not yet been completely output, and each register entry comprises the routing information of its packet and a pending-output slice pointer pointing to the next slice of that packet to be output. When the current node receives a new packet slice sent by the upstream node, it proceeds to the next step;
2) The current node judges the type of said new packet slice. If the new slice is a head slice, the current node allocates a buffer address for it in the buffer, allocates a register entry for it in the register bank, writes the routing information carried by the new slice together with the buffer address into said register entry, and writes the new slice to said buffer address. If the new slice is a body slice or a tail slice, the current node allocates a buffer address for it in the buffer, determines the register entry corresponding to the new slice, finds the previous slice of the same packet in the buffer according to the pending-output slice pointer of that register entry, modifies the next-slice address pointer of said previous slice to point to the buffer address allocated to the new slice, and writes the new slice to that buffer address. Meanwhile, the node periodically checks whether said register bank is empty; if it is empty, the node keeps waiting; if the register bank is not empty, it proceeds to step 3);
3) The current node selects a packet slice from said buffer as the current slice and judges its type. If the current slice is a head slice, the current node obtains the number of packet slices that can still be stored in the downstream node's buffer and the number of free register entries in the downstream node's register bank; if the number of storable slices in the downstream buffer is greater than or equal to 1 and the number of free register entries in the downstream register bank is greater than or equal to 1, it proceeds to step 4). If the current slice is a body slice or a tail slice, the current node obtains the number of free register entries in the downstream node's register bank; if that number is greater than or equal to 1, it proceeds to step 4);
4) The current node arbitrates the current slice. If the current slice wins arbitration, the node first judges its type: if the current slice is a tail slice, the register entry corresponding to the current slice is deleted; if the current slice is a head slice or a body slice, the buffer address of the next slice of the same packet is obtained from the next-slice address pointer of the current slice, and the pending-output slice pointer of the register entry corresponding to the current slice is modified to point to said buffer address; the current slice is then output over the network to the downstream node. If the current slice loses arbitration, the node waits for the next cycle and re-executes step 4);
In said step 4), the current node arbitrates the current slice as follows. Initially, port 0 is given the highest priority. Requests from the different input ports are arbitrated: if the input port with the highest current priority has a request, that port is served, and the port following it is set as the highest-priority port for the next round of arbitration; if the input port with the highest current priority has no request, the following ports are scanned in order until a port with a request is served, and the port following the served port is set as the highest-priority port for the next round. If a port under arbitration has several packet requests, the slice that arrived first is selected for output according to the first-in-first-out principle.
As a further improvement of the above technical solution of the present invention: each register entry further comprises an upstream-node register entry number field and an upstream-node input port number field. In said step 4), before the current node outputs the current slice over the network to the downstream node, the current node also obtains the register entry number of the register entry corresponding to the current slice and the number of the input port through which the current slice entered the current node, and sends this register entry number and input port number together with the current slice when outputting it to the downstream node. In said step 2), when the current node writes the routing information carried by the new slice and the buffer address into said register entry, the current node also obtains the register entry number and input port number sent by the upstream node, writes that register entry number into the upstream-node register entry number field of the register entry, and writes that input port number into the upstream-node input port number field of the allocated register entry. In said step 2), when the current node determines the register entry corresponding to the new slice, it first obtains the register entry number and input port number that were sent together with the new slice by the upstream node, then compares that register entry number with the upstream-node register entry number of each register entry in the register bank and compares that input port number with the upstream-node input port number field of each register entry in the register bank; the register entry for which both match is taken as the register entry corresponding to the new slice.
The present invention has the following advantages. Packets are stored and arbitrated in units of packet slices: the slices of different packets are buffered together in the shared buffer, and the routing information and pending-output slice pointer of each register entry, together with the next-slice address pointer of each slice entry, are used to interleave the slices of different packets and to manage their flow control. The invention can therefore interleave and transmit packets on a link, thoroughly solving the head-of-line blocking problem, greatly improving the performance of the communication system and the utilization of its resources, and effectively suppressing head-of-line (HOL) blocking during packet transmission. Moreover, the invention is not tied to a particular hardware platform. It thus has the advantages of good data transmission performance, high resource utilization and good versatility.
Brief description of the drawings
Fig. 1 is a schematic diagram of the packet slice structure used in prior-art slice-based transmission.
Fig. 2 is a schematic diagram of the basic implementation flow of the embodiment of the present invention.
Fig. 3 is a schematic diagram of the dynamic shared buffer structure of the embodiment of the present invention.
Detailed description of the embodiments
As shown in Fig. 2, the implementation steps of the dynamic shared buffering method supporting interleaved packet storage of this embodiment are as follows:
1) A buffer for storing incoming packet slices and a register bank for storing the routing information of incoming packets are set up in advance. Each packet-slice entry cached in the buffer comprises the slice itself and a next-slice address pointer pointing to the next slice of the same packet; the register entries cached in the register bank correspond one-to-one to the packets in the buffer that have not yet been completely output, and each register entry comprises the routing information of its packet and a pending-output slice pointer pointing to the next slice of that packet to be output. When the current node receives a new packet slice sent by the upstream node, it proceeds to the next step;
2) The current node judges the type of the new packet slice. If the new slice is a head slice, the current node allocates a buffer address for it in the buffer, allocates a register entry for it in the register bank, writes the routing information carried by the new slice together with the buffer address into the register entry, and writes the new slice to the buffer address. If the new slice is a body slice or a tail slice, the current node allocates a buffer address for it in the buffer, determines the register entry corresponding to the new slice, finds the previous slice of the same packet in the buffer according to the pending-output slice pointer of that register entry, modifies the next-slice address pointer of the previous slice to point to the buffer address allocated to the new slice, and writes the new slice to that buffer address. Meanwhile, the node periodically checks whether the register bank is empty; if it is empty, the node keeps waiting; if the register bank is not empty, it proceeds to step 3) (an illustrative code sketch of this arrival handling is given after the step list below);
3) The current node selects a packet slice from the buffer as the current slice and judges its type. If the current slice is a head slice, the current node obtains the number of packet slices that can still be stored in the downstream node's buffer and the number of free register entries in the downstream node's register bank; if the number of storable slices in the downstream buffer is greater than or equal to 1 and the number of free register entries in the downstream register bank is greater than or equal to 1, it proceeds to step 4). If the current slice is a body slice or a tail slice, the current node obtains the number of free register entries in the downstream node's register bank; if that number is greater than or equal to 1, it proceeds to step 4);
4) The current node arbitrates the current slice. If the current slice wins arbitration, the node first judges its type: if the current slice is a tail slice, the register entry corresponding to the current slice is deleted; if the current slice is a head slice or a body slice, the buffer address of the next slice of the same packet is obtained from the next-slice address pointer of the current slice, and the pending-output slice pointer of the register entry corresponding to the current slice is modified to point to that buffer address; the current slice is then output over the network to the downstream node. If the current slice loses arbitration, the node waits for the next cycle and re-executes step 4). (Illustrative sketches of the flow-control check and of this output handling are also given below.)
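As an illustration only, and not as a definitive implementation of the patented method, the following C sketch shows one way the buffer entries, register entries and the slice-arrival handling of steps 1) and 2) could be organised. The sizes, the helper names (alloc_buffer_entry, alloc_register_entry, find_register_entry) and the omitted error handling are assumptions of the sketch; packet_slice_t is the slice structure sketched in the background section.

#define BUF_SIZE 64            /* number of slice entries in the buffer (assumption)  */
#define REG_SIZE 16            /* number of entries in the register bank (assumption) */
#define NIL      (-1)          /* null value for slice address pointers               */

/* One buffer entry: the slice itself plus the next-slice address pointer. */
typedef struct {
    packet_slice_t slice;
    int            next_slice; /* next slice of the same packet, or NIL               */
    int            in_use;
} buf_entry_t;

/* One register entry: routing information of a packet not yet completely output,
 * the pending-output slice pointer, and the upstream-node fields of claim 2.   */
typedef struct {
    int      in_use;
    unsigned routing;          /* routing information of the packet                   */
    int      pending_slice;    /* buffer address of the next slice to be output       */
    int      up_reg_no;        /* upstream-node register entry number                 */
    int      up_port;          /* upstream-node input port number                     */
} reg_entry_t;

static buf_entry_t buffer[BUF_SIZE];
static reg_entry_t regbank[REG_SIZE];

static int alloc_buffer_entry(void)          /* returns a free buffer address, or NIL */
{
    for (int i = 0; i < BUF_SIZE; i++)
        if (!buffer[i].in_use) { buffer[i].in_use = 1; return i; }
    return NIL;
}

static int alloc_register_entry(void)        /* returns a free register entry, or NIL */
{
    for (int i = 0; i < REG_SIZE; i++)
        if (!regbank[i].in_use) { regbank[i].in_use = 1; return i; }
    return NIL;
}

int find_register_entry(int up_reg_no, int up_port);   /* sketched further below */

/* Step 2: handle a new slice arriving from the upstream node. up_reg_no and
 * up_port are the values sent along with the slice (claim 2). Flow control is
 * assumed to guarantee that the allocations succeed, and at least one slice of
 * the packet is assumed to still be buffered when a body/tail slice is linked. */
void on_slice_arrival(const packet_slice_t *s, int up_reg_no, int up_port)
{
    int addr = alloc_buffer_entry();                 /* allocate a buffer address      */
    buffer[addr].slice      = *s;
    buffer[addr].next_slice = NIL;

    if (s->type == SLICE_HEAD) {
        int r = alloc_register_entry();              /* allocate a register entry      */
        regbank[r].routing       = s->routing;       /* routing information of packet  */
        regbank[r].pending_slice = addr;             /* first slice still to be output */
        regbank[r].up_reg_no     = up_reg_no;
        regbank[r].up_port       = up_port;
    } else {                                         /* body or tail slice             */
        int r    = find_register_entry(up_reg_no, up_port);
        int prev = regbank[r].pending_slice;
        while (buffer[prev].next_slice != NIL)       /* walk to the last stored slice  */
            prev = buffer[prev].next_slice;
        buffer[prev].next_slice = addr;              /* link the new slice behind it   */
    }
}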
Referring to steps 1) to 4) above, this embodiment stores and arbitrates packets in units of packet slices: the slices of different packets are buffered together in the shared buffer, and the routing information and pending-output slice pointer of each register entry, together with the next-slice address pointer of each slice entry, are used to interleave the slices of different packets and to manage their flow control. Packets can therefore be interleaved and transmitted on a link, thoroughly solving the head-of-line blocking problem, greatly improving the performance of the communication system and the utilization of its resources, and effectively suppressing head-of-line (HOL) blocking during packet transmission. Moreover, the storage structure of this embodiment is not tied to a particular hardware platform. It thus has the advantages of good data transmission performance, high resource utilization and good versatility.
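Continuing the same illustrative sketch (again an assumption-laden illustration rather than the definitive implementation), the flow-control check of step 3) and the register update of step 4) could look as follows; how the node learns the downstream occupancy counters (for example via credits) is outside the sketch, and freeing the buffer entry on output is an assumption.

/* Step 3: flow-control check for the selected current slice. A head slice needs
 * at least one free buffer entry and one free register entry downstream; a body
 * or tail slice needs at least one free register entry downstream.             */
int flow_control_ok(const packet_slice_t *cur,
                    int downstream_free_slices, int downstream_free_regs)
{
    if (cur->type == SLICE_HEAD)
        return downstream_free_slices >= 1 && downstream_free_regs >= 1;
    return downstream_free_regs >= 1;            /* body or tail slice           */
}

/* Step 4: update after the current slice (buffer address addr, belonging to
 * register entry r) has won arbitration and is being sent downstream.          */
void on_slice_output(int addr, int r)
{
    if (buffer[addr].slice.type == SLICE_TAIL) {
        regbank[r].in_use = 0;                   /* delete the register entry    */
    } else {                                     /* head or body slice           */
        regbank[r].pending_slice = buffer[addr].next_slice;  /* advance pointer  */
    }
    buffer[addr].in_use = 0;                     /* release the buffer address   */
    /* the slice, together with r and the input port number (claim 2), would now
     * be transmitted over the link to the downstream node                       */
}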
As shown in Fig. 3, the dynamic shared buffer structure set up in advance in this embodiment comprises two storage blocks: one is the buffer for storing incoming packet slices, and the other is the register bank for storing the routing information of incoming packets. Each busy register entry in the register bank corresponds to one packet in the buffer that has not yet been completely output. Besides the routing information of its packet, each register entry also stores the buffer address of the next slice of that packet to be output, i.e. the pending-output slice pointer. Referring to Fig. 3, R1, R2, Rp and Rk are four busy register entries in the register bank: the packet corresponding to R1 is packet 1, which has not yet been completely output from the buffer; the packet corresponding to R2 is packet 2, not yet completely output; the packet corresponding to Rp is packet p, not yet completely output; and the packet corresponding to Rk is packet k, not yet completely output. The slices of packet 1 stored in the buffer comprise the head slice B1 and the body slices B2 and B5; the slices of packet 2 comprise the head slice B3 and the tail slice B6; the slices of packet p comprise the head slice Bn and so on; the slices of packet k comprise the head slice B9 and the body slices B8 and B7. Taking packet 1 as an example, the arrows drawn to the right of B1, B2 and B5 are next-slice address pointers, each pointing to the next slice of the same packet: the next-slice address pointer of head slice B1 points to body slice B2, and that of body slice B2 points to body slice B5; since body slice B5 has no next slice, its next-slice address pointer is null.
In this embodiment, each register entry further comprises an upstream-node register entry number field and an upstream-node input port number field. In step 4), before the current node outputs the current slice over the network to the downstream node, the current node also obtains the register entry number of the register entry corresponding to the current slice and the number of the input port through which the current slice entered the current node, and sends this register entry number and input port number together with the current slice when outputting it to the downstream node. In step 2), when the current node writes the routing information carried by the new slice and the buffer address into the register entry, the current node also obtains the register entry number and input port number sent by the upstream node, writes that register entry number into the upstream-node register entry number field of the register entry, and writes that input port number into the upstream-node input port number field of the allocated register entry. In step 2), when the current node determines the register entry corresponding to the new slice, it first obtains the register entry number and input port number that were sent together with the new slice by the upstream node, then compares that register entry number with the upstream-node register entry number of each register entry in the register bank and compares that input port number with the upstream-node input port number field of each register entry in the register bank; the register entry for which both match is taken as the register entry corresponding to the new slice. In addition, other means of identifying the register entry corresponding to a new slice may be adopted as required; for example, a global packet-slice identifier can equally achieve this.
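A minimal sketch of the matching just described, under the same assumptions as the earlier sketches; matching by a global slice identifier, mentioned above as an alternative, would simply replace the two compared fields.

/* Locate the register entry whose upstream-node register entry number and
 * upstream-node input port number both match the values sent along with the
 * new body or tail slice. Returns NIL if no busy entry matches.              */
int find_register_entry(int up_reg_no, int up_port)
{
    for (int r = 0; r < REG_SIZE; r++)
        if (regbank[r].in_use &&
            regbank[r].up_reg_no == up_reg_no &&
            regbank[r].up_port   == up_port)
            return r;
    return NIL;
}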
In this embodiment, in step 4) the current node arbitrates the current slice as follows. Initially, port 0 is given the highest priority. Requests from the different input ports are arbitrated: if the input port with the highest current priority has a request, that port is served, and the port following it is set as the highest-priority port for the next round of arbitration; if the input port with the highest current priority has no request, the following ports are scanned in order until a port with a request is served, and the port following the served port is set as the highest-priority port for the next round. If a port under arbitration has several packet requests, the slice that arrived first is selected for output according to the first-in-first-out principle.
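The round-robin arbitration described above can be sketched as follows; the number of ports and the request representation are assumptions, and per-port FIFO selection of the oldest slice is assumed to happen outside this function.

#define NPORTS 8                           /* number of input ports (assumption)        */
static int rr_highest = 0;                 /* port 0 has the highest priority initially */

/* Returns the input port whose request is granted this cycle, or -1 if no port
 * has a request. Scanning starts at the current highest-priority port; the port
 * after the granted one becomes the highest-priority port for the next round.  */
int arbitrate(const int request[NPORTS])
{
    for (int i = 0; i < NPORTS; i++) {
        int port = (rr_highest + i) % NPORTS;
        if (request[port]) {
            rr_highest = (port + 1) % NPORTS;
            return port;
        }
    }
    return -1;                             /* no requests this cycle */
}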
The above is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above embodiment; all technical solutions falling under the concept of the present invention belong to its protection scope. It should be pointed out that, for those of ordinary skill in the art, several improvements and modifications made without departing from the principles of the present invention shall also be regarded as falling within the protection scope of the present invention.

Claims (2)

1. A dynamic shared buffering method supporting interleaved packet storage, characterized in that the implementation steps are as follows:
1) A buffer for storing incoming packet slices and a register bank for storing the routing information of incoming packets are set up in advance. Each packet-slice entry cached in said buffer comprises the slice itself and a next-slice address pointer pointing to the next slice of the same packet; the register entries cached in said register bank correspond one-to-one to the packets in the buffer that have not yet been completely output, and each register entry comprises the routing information of its packet and a pending-output slice pointer pointing to the next slice of that packet to be output. When the current node receives a new packet slice sent by the upstream node, it proceeds to the next step;
2) The current node judges the type of said new packet slice. If the new slice is a head slice, the current node allocates a buffer address for it in the buffer, allocates a register entry for it in the register bank, writes the routing information carried by the new slice together with the buffer address into said register entry, and writes the new slice to said buffer address. If the new slice is a body slice or a tail slice, the current node allocates a buffer address for it in the buffer, determines the register entry corresponding to the new slice, finds the previous slice of the same packet in the buffer according to the pending-output slice pointer of that register entry, modifies the next-slice address pointer of said previous slice to point to the buffer address allocated to the new slice, and writes the new slice to that buffer address. Meanwhile, the node periodically checks whether said register bank is empty; if it is empty, the node keeps waiting; if the register bank is not empty, it proceeds to step 3);
3) The current node selects a packet slice from said buffer as the current slice and judges its type. If the current slice is a head slice, the current node obtains the number of packet slices that can still be stored in the downstream node's buffer and the number of free register entries in the downstream node's register bank; if the number of storable slices in the downstream buffer is greater than or equal to 1 and the number of free register entries in the downstream register bank is greater than or equal to 1, it proceeds to step 4). If the current slice is a body slice or a tail slice, the current node obtains the number of free register entries in the downstream node's register bank; if that number is greater than or equal to 1, it proceeds to step 4);
4) The current node arbitrates the current slice. If the current slice wins arbitration, the node first judges its type: if the current slice is a tail slice, the register entry corresponding to the current slice is deleted; if the current slice is a head slice or a body slice, the buffer address of the next slice of the same packet is obtained from the next-slice address pointer of the current slice, and the pending-output slice pointer of the register entry corresponding to the current slice is modified to point to said buffer address; the current slice is then output over the network to the downstream node. If the current slice loses arbitration, the node waits for the next cycle and re-executes step 4);
In said step 4), the current node arbitrates the current slice as follows. Initially, port 0 is given the highest priority. Requests from the different input ports are arbitrated: if the input port with the highest current priority has a request, that port is served, and the port following it is set as the highest-priority port for the next round of arbitration; if the input port with the highest current priority has no request, the following ports are scanned in order until a port with a request is served, and the port following the served port is set as the highest-priority port for the next round. If a port under arbitration has several packet requests, the slice that arrived first is selected for output according to the first-in-first-out principle.
2. The dynamic shared buffering method supporting interleaved packet storage according to claim 1, characterized in that: each register entry further comprises an upstream-node register entry number field and an upstream-node input port number field. In said step 4), before the current node outputs the current slice over the network to the downstream node, the current node also obtains the register entry number of the register entry corresponding to the current slice and the number of the input port through which the current slice entered the current node, and sends this register entry number and input port number together with the current slice when outputting it to the downstream node. In said step 2), when the current node writes the routing information carried by the new slice and the buffer address into said register entry, the current node also obtains the register entry number and input port number sent by the upstream node, writes that register entry number into the upstream-node register entry number field of the register entry, and writes that input port number into the upstream-node input port number field of the allocated register entry. In said step 2), when the current node determines the register entry corresponding to the new slice, it first obtains the register entry number and input port number that were sent together with the new slice by the upstream node, then compares that register entry number with the upstream-node register entry number of each register entry in the register bank and compares that input port number with the upstream-node input port number field of each register entry in the register bank; the register entry for which both match is taken as the register entry corresponding to the new slice.
CN201310176871.4A 2013-05-14 2013-05-14 Dynamic shared buffering method supporting interleaved packet storage Active CN103220230B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310176871.4A CN103220230B (en) 2013-05-14 2013-05-14 Dynamic shared buffering method supporting interleaved packet storage

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310176871.4A CN103220230B (en) 2013-05-14 2013-05-14 Dynamic shared buffering method supporting interleaved packet storage

Publications (2)

Publication Number Publication Date
CN103220230A CN103220230A (en) 2013-07-24
CN103220230B true CN103220230B (en) 2015-10-28

Family

ID=48817708

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310176871.4A Active CN103220230B (en) 2013-05-14 2013-05-14 Dynamic shared buffering method supporting interleaved packet storage

Country Status (1)

Country Link
CN (1) CN103220230B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104639444B (en) * 2013-11-11 2019-10-15 南京中兴软件有限责任公司 The method for releasing and device of multicast caching
CN104104617B (en) * 2014-08-07 2017-10-17 曙光信息产业(北京)有限公司 A kind of message referee method and device
CN105162724B (en) 2015-07-30 2018-06-26 华为技术有限公司 A kind of data are joined the team and go out group method and queue management unit
CN111226465B (en) * 2017-10-11 2023-03-24 日本电气株式会社 UE configuration and update with network slice selection policy
CN110247970B (en) * 2019-06-17 2021-12-24 中国人民解放军国防科技大学 Dynamic sharing buffer device for interconnected chips

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101001210A (en) * 2006-12-21 2007-07-18 华为技术有限公司 Implementing device, method and network equipment and chip of output queue
CN101232456A (en) * 2008-01-25 2008-07-30 浙江大学 Distributed type testing on-chip network router

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7366092B2 (en) * 2003-10-14 2008-04-29 Broadcom Corporation Hash and route hardware with parallel routing scheme

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101001210A (en) * 2006-12-21 2007-07-18 华为技术有限公司 Implementing device, method and network equipment and chip of output queue
CN101232456A (en) * 2008-01-25 2008-07-30 浙江大学 Distributed type testing on-chip network router

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"一种新型缓冲结构——支持无死锁的完全自适应路由";肖灿文等;《上海交通大学学报》;20130128;第47卷(第1期);正文第13-17页 *
"交叉流水线多队列缓冲报文转发技术";毛席龙等;《计算机工程与应用》;20070201;第43卷(第4期);正文第150-152、163页 *

Also Published As

Publication number Publication date
CN103220230A (en) 2013-07-24

Similar Documents

Publication Publication Date Title
CN103220230B (en) Dynamic shared buffering method supporting interleaved packet storage
US20160132541A1 (en) Efficient implementations for mapreduce systems
US10476697B2 (en) Network-on-chip, data transmission method, and first switching node
KR101172103B1 (en) Method for transmitting data in messages via a communications link of a communications system and communications module, subscriber of a communications system and associated communications system
CN102521201A (en) Multi-core DSP (digital signal processor) system-on-chip and data transmission method
CN102130833A (en) Memory management method and system of traffic management chip chain tables of high-speed router
CN102378971B (en) Method for reading data and memory controller
CN106537858B (en) A kind of method and apparatus of queue management
JP2006101525A (en) Network-on-chip half automatic transmission architecture for data flow application
CN105183662A (en) Cache consistency protocol-free distributed sharing on-chip storage framework
CN103077132A (en) Cache processing method and protocol processor cache control unit
US20170364291A1 (en) System and Method for Implementing Hierarchical Distributed-Linked Lists for Network Devices
CN101848135A (en) Management method and management device for statistical data of chip
CN102916902A (en) Method and device for storing data
CN102821045A (en) Method and device for copying multicast message
CN101309194A (en) SPI4.2 bus bridging implementing method and SPI4.2 bus bridging device
CN102984089A (en) Method and device for traffic management and scheduling
CN113157465A (en) Message sending method and device based on pointer linked list
CN105530153A (en) Slave device communication method in network, communication network, master device and slave device
CN104991745A (en) Data writing method and system of storage system
CN110413536B (en) High-speed parallel NandFlash storage device with multiple data formats
CN104899105A (en) Interprocess communication method
CN104714832A (en) Buffer management method used for airborne data network asynchronous data interaction area
CN103442091A (en) Data transmission method and device
CN108632148B (en) Device and method for learning MAC address based on pre-reading mode

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant