CN104199782A - GPU memory access method - Google Patents

GPU memory access method

Info

Publication number
CN104199782A
Authority
CN
China
Prior art keywords
access
memory
address
internal memory
access request
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410419711.2A
Other languages
Chinese (zh)
Other versions
CN104199782B (en)
Inventor
吴明晖
裴玉龙
陈天洲
李颂元
孟静磊
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University City College ZUCC
Original Assignee
Zhejiang University City College ZUCC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University City College ZUCC filed Critical Zhejiang University City College ZUCC
Priority to CN201410419711.2A priority Critical patent/CN104199782B/en
Publication of CN104199782A publication Critical patent/CN104199782A/en
Application granted granted Critical
Publication of CN104199782B publication Critical patent/CN104199782B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Landscapes

  • Dram (AREA)
  • Memory System Of A Hierarchy Structure (AREA)

Abstract

The invention discloses a GPU memory access method. Memory access requests issued by a stream processor are coalesced; the stream processor sends each coalesced request to the corresponding memories; each memory splits the coalesced requests it receives and reads out the data; the data read in each memory is assembled into data blocks and returned to the stream processor; and the stream processor unpacks and stores the returned data blocks. By coalescing access requests whose addresses are separated by an identical stride, the method improves memory access efficiency, hides memory latency, and improves the overall performance of the GPU. It can be used in combination with existing methods, further improving program performance.

Description

A GPU memory access method
Technical field
The present invention relates to the field of GPU architecture and of memory access method design for GPUs, and in particular to a memory access method under a GPU architecture.
Background technology
The hardware structure of a GPU differs greatly from that of a CPU: a GPU is made up of memories and stream processors. A GPU is in effect an array of processor cores. Each stream processor contains multiple cores, and a GPU device contains one or more stream processors, which gives the processor scalability. If more stream processors are added to a device, the GPU can process more tasks at the same time; and for a single task with sufficient parallelism, the GPU can complete it faster.
A GPU uses high-speed memory with stable bandwidth, but, like all memory, it suffers from serious access latency. Accessing memory in coalesced mode can hide this latency to some extent. The original form of coalescing merges the accesses of all threads to a contiguous, aligned memory block. If the threads access memory contiguously and one-to-one, the access addresses of all threads can be merged, and a single access request serves them all. Suppose each thread accesses a 4-byte block; memory accesses can then be coalesced per warp, so that one memory access fetches 32 × 4 = 128 bytes of data. Coalescing supports transaction sizes of 32, 64, and 128 bytes, corresponding to each thread in the warp reading data in units of 1, 2, and 4 bytes respectively, but only on the condition that the accesses are contiguous and aligned on 32-byte boundaries.
Summary of the invention
To solve the problems in the background art, the object of the present invention is to provide a GPU memory access method that improves memory access efficiency, hides memory latency, and improves the overall performance of the GPU.
The technical solution adopted by the present invention to solve its technical problem is as follows:
1) coalescing the memory access requests issued by the stream processor;
2) the stream processor sending the coalesced access requests to the corresponding memories;
3) splitting the coalesced access requests in the memories and reading out the data;
4) assembling the read data in the memories into data blocks and returning them to the stream processor;
5) the stream processor unpacking and storing the returned data blocks.
In step 1), coalescing the requests issued by the stream processor specifically comprises:
1.1) placing the request addresses issued by the stream processors of the GPU's cores into an array;
1.2) sorting the access addresses in the array in ascending order, with each address in the array appearing only once; then, in ascending order, fusing runs of addresses separated by an identical stride into single access requests.
Step 2) specifically comprises:
2.1) the stream processor determining, from the access addresses, the memory numbers to which a coalesced access request must be sent, and sending the coalesced request to the memories with those numbers.
In step 2.1), the determination from the access addresses is made as follows: all access addresses of a given coalesced request are expanded into an array, and each expanded address is taken modulo the number of memories to obtain the number of a memory to which the request must be sent.
Step 3) specifically comprises:
3.1) upon receiving a coalesced access request from the stream processor, the memory restoring it to the multiple un-coalesced access requests;
3.3) sending the un-coalesced access requests to the corresponding memory blocks and reading the required data.
In step 3.1), the coalesced access request is restored to multiple un-coalesced requests as follows: all access addresses of the coalesced request are expanded into an array, forming multiple un-coalesced access requests, each address constituting one access request.
In step 3.3), the un-coalesced access requests are sent to the corresponding memory blocks as follows: each access address is taken modulo the number of memories to obtain the number of the memory that should serve the request; if that number equals the current memory's number, the request is sent to the corresponding memory block; if it does not, the request is ignored.
Step 4) specifically comprises:
4.1) the memory placing all data read from the memory blocks into a buffer;
4.2) prefixing each piece of data with the coalesced access request it corresponds to and the current memory number, yielding a data block;
4.3) sending each data block back to the stream processor that issued the corresponding request.
Step 5) specifically comprises:
5.1) receiving the data blocks sent back from the memories, and computing the address of each byte in a data block from the coalesced access request and the memory number carried in the block;
5.2) finally storing the data according to the address of each byte in the data block.
In step 5.1), the address of each byte in the data block is computed as follows: all access addresses of the coalesced request carried in the data block are expanded into an array; each expanded address is taken modulo the number of memories, and the result is compared with the memory number in the data block; if the two are identical, that access address is the address of the corresponding byte in the data block; if they are not, it is ignored.
Compared with the background art, the present invention has the following beneficial effects:
The present invention coalesces access requests whose addresses are separated by an identical stride. Tests on several standard benchmark programs show that it improves memory access efficiency, hides memory latency, and improves the overall performance of the GPU. The method can be combined with existing methods, so that program performance can be improved to varying degrees.
Brief description of the drawings
The accompanying drawing is the overall flow chart of the present invention.
Embodiment
The invention is further described below with reference to the drawing and embodiments.
As shown in the drawing, the present invention comprises:
1) coalescing the memory access requests issued by the stream processor;
2) the stream processor sending the coalesced access requests to the corresponding memories;
3) splitting the coalesced access requests in the memories and reading out the data;
4) assembling the read data in the memories into data blocks and returning them to the stream processor;
5) the stream processor unpacking and storing the returned data blocks.
In the above step 1), coalescing the requests issued by the stream processor specifically comprises:
1.1) placing the request addresses issued by the stream processors of the GPU's cores into an array;
1.2) sorting the access addresses in the array in ascending order; if any address occurs more than once after sorting, deleting the redundant occurrences so that it is kept only once and each address in the array is unique; then, in ascending order, fusing runs of addresses separated by an identical stride into single access requests.
The fusion process is specifically:
1.2.1) taking the first two addresses in the array and assigning their difference to a stride variable;
1.2.2) comparing the difference between the third address and the second address with the stride variable; if they are unequal, fusing the first two addresses into one access request; if they are equal, adding the third address to the same access request as the first two; then continuing to compare the following addresses in the same way, repeating these steps until all the addresses in the array have been fused.
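The greedy fusion loop of steps 1.2.1) and 1.2.2) can be sketched as follows. This is a minimal illustration, not the patented implementation itself; the function name and the (first address, number of addresses, stride) tuple representation are choices made here:

```python
def fuse(addresses):
    """Coalesce addresses into runs of equal stride, per steps 1.2.1)-1.2.2).

    Each run is represented as (first address, number of addresses, stride):
    take the difference of the first two addresses as the stride, then extend
    the run while successive differences stay equal to it.
    """
    addrs = sorted(set(addresses))  # step 1.2): sort ascending, drop duplicates
    runs = []
    i = 0
    while i < len(addrs):
        if i + 1 == len(addrs):           # a lone trailing address: run of one
            runs.append((addrs[i], 1, 0))
            break
        stride = addrs[i + 1] - addrs[i]  # step 1.2.1): difference of first two
        j = i + 1
        while j + 1 < len(addrs) and addrs[j + 1] - addrs[j] == stride:
            j += 1                        # step 1.2.2): extend while equal
        runs.append((addrs[i], j - i + 1, stride))
        i = j + 1
    return runs
```

Applied to the 16-address sequence of the embodiment below, this yields the three fused requests {128, 7, 128}, {1664, 3, 128} and {2560, 5, 768}.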
The above step 2) specifically comprises:
2.1) the stream processor determining, from the access addresses, the memory numbers to which a coalesced access request must be sent, and sending the coalesced request to the memories with those numbers. Since a GPU contains multiple memories, a coalesced access request may need to be sent to several memories, or possibly to only one.
The determination is made as follows: all access addresses of a given coalesced request are expanded into an array, and each expanded address is taken modulo the number of memories to obtain the number of a memory to which the request must be sent.
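The determination in step 2.1) can be sketched as below. Note that the worked example later in the document maps addresses to memories in a way consistent with taking the address in 128-byte units before the modulo; that 128-byte granularity is an assumption made here so the sketch reproduces the example, not something the text states explicitly:

```python
LINE_BYTES = 128  # assumed granularity, chosen to reproduce the worked example


def target_memories(first, count, stride, num_memories=6):
    """Expand a fused (first, count, stride) request and compute the memory
    number each address maps to; return the distinct memory numbers."""
    addrs = (first + k * stride for k in range(count))
    return sorted({a // LINE_BYTES % num_memories for a in addrs})
```

With 6 memories this reproduces the distribution of the embodiment: {128, 7, 128} goes to all six memories, {1664, 3, 128} to memories 1, 2 and 3, and {2560, 5, 768} to memory 2 alone.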
The above step 3) specifically comprises:
3.1) upon receiving a coalesced access request from the stream processor, the memory restoring it to the multiple un-coalesced access requests;
3.2) expanding all access addresses of the coalesced request into an array, forming multiple un-coalesced access requests, each address constituting one access request;
3.3) sending the un-coalesced access requests to the corresponding memory blocks and reading the required data. Before sending, each access address is taken modulo the number of memories to obtain the number of the memory that should serve the request; if that number equals the current memory's number, the request is sent to the corresponding memory block; if it does not, the request is ignored.
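Inside each memory, steps 3.1) through 3.3) amount to expanding the fused descriptor and filtering by memory number. A sketch follows; the 128-byte mapping granularity is an assumption, chosen so the address-to-memory mapping matches the worked example below:

```python
LINE_BYTES = 128  # assumed address-to-memory mapping granularity


def addresses_for_memory(first, count, stride, memory_no, num_memories=6):
    """Restore a fused request to its individual addresses (step 3.2) and keep
    only those whose memory number equals this memory's number (step 3.3)."""
    expanded = [first + k * stride for k in range(count)]        # step 3.2)
    return [a for a in expanded
            if a // LINE_BYTES % num_memories == memory_no]      # step 3.3)
```

For the fused request {128, 7, 128}, memory 0 keeps only address 768 and memory 1 keeps addresses 128 and 896, matching the data each memory reads out in the embodiment.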
The above step 4) specifically comprises:
4.1) the memory placing all data read from the memory blocks into a buffer;
4.2) prefixing each piece of data with the coalesced access request it corresponds to and the current memory number, yielding a data block;
4.3) sending each data block back to the stream processor that issued the corresponding request.
The above step 5) specifically comprises:
5.1) receiving the data blocks sent back from the memories, and computing the address of each byte in a data block from the coalesced access request and the memory number carried in the block: all access addresses of the coalesced request are expanded into an array; each expanded address is taken modulo the number of memories, and the result is compared with the memory number in the data block; if the two are identical, that access address is the address of the corresponding byte in the data block; if they are not, it is ignored;
5.2) finally storing the data according to the address of each byte in the data block.
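The unpacking in step 5.1) is the mirror image of the split performed inside the memory: the stream processor re-expands the fused descriptor carried in the block header and keeps the addresses whose memory number matches the block's. A sketch; the 128-byte mapping granularity is an assumption chosen to match the worked example, the dict return type and the assumption that the payload is in ascending address order are representational choices made here:

```python
LINE_BYTES = 128  # assumed address-to-memory mapping granularity


def unpack_block(memory_no, first, count, stride, payload, num_memories=6):
    """Pair each data item in a returned block with its address (step 5.1),
    ready to be stored by address (step 5.2)."""
    addrs = [first + k * stride for k in range(count)]
    matching = [a for a in addrs
                if a // LINE_BYTES % num_memories == memory_no]
    return dict(zip(matching, payload))  # payload assumed in ascending order
```

For instance, the block {1, 128, 7, 128, [D128, D896]} of the embodiment resolves to addresses 128 and 896 paired with D128 and D896.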
Because a group of addresses forming an arithmetic progression can be represented by three variables (first address, number of addresses, and stride), the present invention converts a series of access requests whose addresses form an arithmetic progression into a single access request. This effectively reduces the number of access requests and improves performance.
An embodiment of the invention is as follows:
Take as an example one stream processor issuing 16 access requests, with 6 memories in total, numbered 0 to 5.
1. The stream processor coalesces the memory accesses
1) The stream processor issues 16 access requests with the address sequence {1664, 1792, 1920, 2560, 3328, 4096, 4864, 5632, 128, 256, 384, 512, 640, 768, 896, 640};
2) The request addresses above are placed into an array;
3) The access addresses in the array are sorted in ascending order; since one address occurs twice after sorting, the redundant occurrence is deleted so that each address in the array is unique. After this operation the array is {128, 256, 384, 512, 640, 768, 896, 1664, 1792, 1920, 2560, 3328, 4096, 4864, 5632};
4) In ascending order, runs of addresses separated by an identical stride are fused into single access requests. After this operation, {128, 256, 384, 512, 640, 768, 896} is fused into {128 (first address), 7 (number of addresses), 128 (stride)}; {1664, 1792, 1920} is fused into {1664, 3, 128}; and {2560, 3328, 4096, 4864, 5632} is fused into {2560, 5, 768};
2. The stream processor sends the coalesced access requests to the corresponding memories
1) The stream processor determines from the access addresses which memory numbers each coalesced request must be sent to, and sends it to the memories with those numbers: the fused request {128, 7, 128} is sent to memories 0, 1, 2, 3, 4, and 5; {1664, 3, 128} is sent to memories 1, 2, and 3; {2560, 5, 768} is sent to memory 2;
3. The memories split the coalesced access requests and read out the data
1) Upon receiving the coalesced access requests from the stream processor, each memory restores them to the un-coalesced requests: {128, 7, 128} is restored to {128, 256, 384, 512, 640, 768, 896}; {1664, 3, 128} to {1664, 1792, 1920}; and {2560, 5, 768} to {2560, 3328, 4096, 4864, 5632}. Memory 0 reads the data [D768] for address {768}; memory 1 reads [D128, D896, D1664] for addresses {128, 896, 1664}; memory 2 reads [D256, D1792, D2560, D3328, D4096, D4864, D5632] for addresses {256, 1792, 2560, 3328, 4096, 4864, 5632}; memory 3 reads [D384, D1920] for addresses {384, 1920}; memory 4 reads [D512] for address {512}; and memory 5 reads [D640] for address {640};
4. The memories assemble the read data into data blocks and return them to the stream processor
1) Each memory places all data read from its memory blocks into a buffer;
2) Each piece of data is prefixed with its coalesced access request and the current memory number, yielding the data blocks: memory 0 produces {0 (memory number), 128, 7, 128, [D768]}; memory 1 produces {1, 128, 7, 128, [D128, D896]} and {1, 1664, 3, 128, [D1664]}; memory 2 produces {2, 128, 7, 128, [D256]}, {2, 1664, 3, 128, [D1792]}, and {2, 2560, 5, 768, [D2560, D3328, D4096, D4864, D5632]}; memory 3 produces {3, 128, 7, 128, [D384]} and {3, 1664, 3, 128, [D1920]}; memory 4 produces {4, 128, 7, 128, [D512]}; and memory 5 produces {5, 128, 7, 128, [D640]};
3) Each data block is sent back to the stream processor that issued the corresponding request.
5. The stream processor unpacks and stores the returned data blocks
1) Upon receiving the data blocks sent back from the memories, the stream processor computes the address of each byte in each block from the coalesced request and the memory number carried in the block: all access addresses of the coalesced request are expanded into an array, each expanded address is taken modulo the number of memories, and the result is compared with the memory number in the block; if the two are identical, that address is the address of the corresponding byte, otherwise it is ignored. Thus {0, 128, 7, 128, [D768]} resolves to {768 (access address), [D768]}; {1, 128, 7, 128, [D128, D896]} to {128, 896, [D128, D896]}; {1, 1664, 3, 128, [D1664]} to {1664, [D1664]}; {2, 128, 7, 128, [D256]} to {256, [D256]}; {2, 1664, 3, 128, [D1792]} to {1792, [D1792]}; {2, 2560, 5, 768, [D2560, D3328, D4096, D4864, D5632]} to {2560, 3328, 4096, 4864, 5632, [D2560, D3328, D4096, D4864, D5632]}; {3, 128, 7, 128, [D384]} to {384, [D384]}; {3, 1664, 3, 128, [D1920]} to {1920, [D1920]}; {4, 128, 7, 128, [D512]} to {512, [D512]}; and {5, 128, 7, 128, [D640]} to {640, [D640]};
2) Finally, the data is stored according to the address of each byte in each data block.
The present invention clearly reduces the number of memory accesses issued by the stream processor. With the original method the access count in this example is 15; with the present invention it is 10, improving memory access efficiency.
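The 15-versus-10 count can be reproduced end to end by combining the fusion and distribution steps in one short sketch. As before, this is an illustration only; the 128-byte address-to-memory mapping granularity is an assumption chosen so the sketch reproduces the worked example:

```python
LINE_BYTES = 128  # assumed address-to-memory mapping granularity


def fuse(addresses):
    """Coalesce deduplicated ascending addresses into (first, count, stride) runs."""
    addrs, runs, i = sorted(set(addresses)), [], 0
    while i < len(addrs):
        if i + 1 == len(addrs):
            runs.append((addrs[i], 1, 0))
            break
        stride = addrs[i + 1] - addrs[i]
        j = i + 1
        while j + 1 < len(addrs) and addrs[j + 1] - addrs[j] == stride:
            j += 1
        runs.append((addrs[i], j - i + 1, stride))
        i = j + 1
    return runs


def sends(run, num_memories=6):
    """Number of memories a fused request must be sent to."""
    first, count, stride = run
    return len({(first + k * stride) // LINE_BYTES % num_memories
                for k in range(count)})


requests = [1664, 1792, 1920, 2560, 3328, 4096, 4864, 5632,
            128, 256, 384, 512, 640, 768, 896, 640]
original_count = len(set(requests))                  # 15 un-coalesced requests
fused_count = sum(sends(r) for r in fuse(requests))  # 6 + 3 + 1 = 10 sends
```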
Running the benchmark programs from polybench (http://web.cse.ohio-state.edu/~pouchet/software/polybench/) and Rodinia (http://www.cs.virginia.edu/~skadron/wiki/rodinia/index.php/Main_Page) with the method of the invention gives the results in Table 1 below.
Table 1
Program name | Original access count | Access count with the invention | Ratio (invention/original)
particlefilter | 1987620 | 989658 | 49.79%
nw | 1073152 | 868352 | 80.92%
ATAX | 19398912 | 4194560 | 21.62%
BICG | 19398912 | 4194560 | 21.62%
lava_MD | 346760355 | 263634505 | 76.03%
k_means | 18186502 | 4539310 | 24.96%
CORR | 165340399 | 47765394 | 28.89%
GESUMMV | 14037999 | 2406447 | 17.14%
MVT | 16493875 | 2425656 | 14.71%
COVAR | 170508760 | 47833622 | 28.05%
SYR2K | 920221 | 123997 | 13.47%
SYRK | 8527271 | 1254680 | 14.71%
It can thus be seen that by coalescing access requests whose addresses are separated by an identical stride, the present invention clearly reduces the number of memory accesses issued by the stream processor, thereby improving memory access efficiency, hiding memory latency, and improving the overall performance of the GPU, which is a significant technical effect.

Claims (10)

1. A GPU memory access method, characterized by comprising:
1) coalescing the memory access requests issued by the stream processor;
2) the stream processor sending the coalesced access requests to the corresponding memories;
3) splitting the coalesced access requests in the memories and reading out the data;
4) assembling the read data in the memories into data blocks and returning them to the stream processor;
5) the stream processor unpacking and storing the returned data blocks.
2. The GPU memory access method according to claim 1, characterized in that in step 1), coalescing the requests issued by the stream processor specifically comprises:
1.1) placing the request addresses issued by the stream processors of the GPU's cores into an array;
1.2) sorting the access addresses in the array in ascending order, with each address in the array appearing only once; then, in ascending order, fusing runs of addresses separated by an identical stride into single access requests.
3. The GPU memory access method according to claim 1, characterized in that step 2) specifically comprises:
2.1) the stream processor determining, from the access addresses, the memory numbers to which a coalesced access request must be sent, and sending the coalesced request to the memories with those numbers.
4. The GPU memory access method according to claim 3, characterized in that:
in step 2.1), the determination from the access addresses is made as follows: all access addresses of a given coalesced request are expanded into an array, and each expanded address is taken modulo the number of memories to obtain the number of a memory to which the request must be sent.
5. The GPU memory access method according to claim 1, characterized in that step 3) specifically comprises:
3.1) upon receiving a coalesced access request from the stream processor, the memory restoring it to the multiple un-coalesced access requests;
3.3) sending the un-coalesced access requests to the corresponding memory blocks and reading the required data.
6. The GPU memory access method according to claim 5, characterized in that in step 3.1), the coalesced access request is restored to multiple un-coalesced requests as follows:
all access addresses of the coalesced request are expanded into an array, forming multiple un-coalesced access requests, each address constituting one access request.
7. The GPU memory access method according to claim 5, characterized in that in step 3.3), the un-coalesced access requests are sent to the corresponding memory blocks as follows:
each access address is taken modulo the number of memories to obtain the number of the memory that should serve the request; if that number equals the current memory's number, the request is sent to the corresponding memory block; if it does not, the request is ignored.
8. The GPU memory access method according to claim 1, characterized in that step 4) specifically comprises:
4.1) the memory placing all data read from the memory blocks into a buffer;
4.2) prefixing each piece of data with the coalesced access request it corresponds to and the current memory number, yielding a data block;
4.3) sending each data block back to the stream processor that issued the corresponding request.
9. The GPU memory access method according to claim 1, characterized in that step 5) specifically comprises:
5.1) receiving the data blocks sent back from the memories, and computing the address of each byte in a data block from the coalesced access request and the memory number carried in the block;
5.2) finally storing the data according to the address of each byte in the data block.
10. The GPU memory access method according to claim 9, characterized in that in step 5.1), the address of each byte in the data block is computed as follows: all access addresses of the coalesced request carried in the data block are expanded into an array; each expanded address is taken modulo the number of memories, and the result is compared with the memory number in the data block; if the two are identical, that access address is the address of the corresponding byte in the data block; if they are not, it is ignored.
CN201410419711.2A 2014-08-25 2014-08-25 GPU memory access method Active CN104199782B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410419711.2A CN104199782B (en) 2014-08-25 2014-08-25 GPU memory access method


Publications (2)

Publication Number Publication Date
CN104199782A true CN104199782A (en) 2014-12-10
CN104199782B CN104199782B (en) 2017-04-26

Family

ID=52085078

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410419711.2A Active CN104199782B (en) 2014-08-25 2014-08-25 GPU memory access method

Country Status (1)

Country Link
CN (1) CN104199782B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6359624B1 (en) * 1996-02-02 2002-03-19 Kabushiki Kaisha Toshiba Apparatus having graphic processor for high speed performance
CN101841438A (en) * 2010-04-02 2010-09-22 中国科学院计算技术研究所 Method or system for accessing and storing stream records of massive concurrent TCP streams
CN103150157A (en) * 2013-01-03 2013-06-12 中国人民解放军国防科学技术大学 Memory access bifurcation-based GPU (Graphics Processing Unit) kernel program recombination optimization method


Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10163180B2 (en) 2015-04-29 2018-12-25 Qualcomm Incorporated Adaptive memory address scanning based on surface format for graphics processing
CN107368431A (en) * 2016-05-11 2017-11-21 龙芯中科技术有限公司 Memory pool access method, cross bar switch and computer system
CN107368431B (en) * 2016-05-11 2020-03-31 龙芯中科技术有限公司 Memory access method, cross switch and computer system

Also Published As

Publication number Publication date
CN104199782B (en) 2017-04-26

Similar Documents

Publication Publication Date Title
KR101994021B1 (en) File manipulation method and apparatus
JP6768928B2 (en) Methods and devices for compressing addresses
CN106462496B (en) Bandwidth of memory compression is provided in the system based on central processing unit CPU using Memory Controller CMC is compressed
CN103379138A (en) Method and system for realizing load balance, and method and apparatus for gray scale publication
CN107729535B (en) Method for configuring bloom filter in key value database
CN105843819B (en) Data export method and device
CN104618361B (en) A kind of network flow data method for reordering
CN107273542B (en) High-concurrency data synchronization method and system
CN101944124A (en) Distributed file system management method, device and corresponding file system
CN103914483B (en) File memory method, device and file reading, device
US10771358B2 (en) Data acquisition device, data acquisition method and storage medium
US8674858B2 (en) Method for compression and real-time decompression of executable code
CN103279521A (en) Video big data distributed decoding method based on Hadoop
CN103970875A (en) Parallel repeated data deleting method
CN107423321B (en) Method and device suitable for cloud storage of large-batch small files
CN115640254A (en) In-memory database (IMDB) acceleration by near data processing
CN104199782A (en) GPU memory access method
CN106227506A (en) A kind of multi-channel parallel Compress softwares system and method in memory compression system
CN107169138B (en) Data distribution method for distributed memory database query engine
CN205899536U (en) Geographic information service system based on tile map
CN103049561A (en) Data compressing method, storage engine and storage system
CN105208096A (en) Distributed cache system and method
CN115190102B (en) Information broadcasting method, information broadcasting device, electronic unit, SOC (system on chip) and electronic equipment
US8983916B2 (en) Configurable data generator
CN104657383A (en) Repeated video detection method and system based on correlation properties

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant