CN113887720A - Up-sampling reverse blocking mapping method - Google Patents

Up-sampling reverse blocking mapping method

Info

Publication number
CN113887720A
Authority
CN
China
Prior art keywords
feature map
data
input
pixels
block
Prior art date
Legal status
Granted
Application number
CN202111148518.6A
Other languages
Chinese (zh)
Other versions
CN113887720B (en)
Inventor
施先广
胡有能
李一涛
何增
马德
岳克强
Current Assignee
Hangzhou Dianzi University
Original Assignee
Hangzhou Dianzi University
Priority date
Filing date
Publication date
Application filed by Hangzhou Dianzi University
Priority to CN202111148518.6A
Publication of CN113887720A
Application granted
Publication of CN113887720B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/082 Learning methods modifying the architecture, e.g. adding, deleting or silencing nodes or connections
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology

Abstract

The invention discloses an up-sampling reverse blocking mapping method, which comprises the following steps: S1, reading input feature map data and storing it in a shift buffer; S2, finding the pixels in an output feature map block and mapping them to the positions of the four nearest pixels in the input feature map; S3, computing the output feature map pixel values in a pipelined manner, comprising: S31, in the vertical direction, multiplying the four nearest pixels by the column-direction parameters once and adding the four resulting intermediate values pairwise to obtain two intermediate values; S32, multiplying the two intermediate values by the row-direction parameters respectively to obtain two intermediate values, and adding them once to obtain the up-sampled output feature map pixel; S4, after the data of one block has been processed, returning to S1 to process the next block; and S5, after the input feature map has been processed, processing the next feature map according to the register instruction.

Description

Up-sampling reverse blocking mapping method
Technical Field
The invention relates to the technical field of neural networks, and in particular to an up-sampling reverse block mapping method.
Background
The emergence of deep learning algorithms has enabled breakthrough progress in artificial intelligence applications. In the early days, deep learning algorithms mainly ran on servers equipped with high-performance GPUs; as deep learning applications spread, this approach, although simple and effective, was found to suffer from high power consumption, large physical size and similar problems. Neural network accelerators can effectively address these issues. Up-sampling is an important link in the implementation of a neural network accelerator and has therefore received considerable attention and development.
The main function of up-sampling is to construct new pixels from existing ones; in a neural network accelerator it is commonly used for feature map enhancement, feature map resizing and similar purposes, and it is important for correct recognition and learning. Existing up-sampling is generally a point-to-point mapping: four neighboring pixels of the input feature map are read, a new pixel is computed, then four input pixels are read again and the next pixel is computed. The drawback is that data cannot be reused, because different output pixels may map to the same four neighboring pixels of the input feature map (for example, under 2x up-sampling a whole 2 × 2 group of output pixels shares the same four input neighbors); with a point-to-point mapping, repeated reading of the same data is therefore inevitable, wasting time and resources.
Disclosure of Invention
In order to overcome the shortcomings of the prior art and achieve efficient data reuse, the invention adopts the following technical solution:
an up-sampling reverse blocking mapping method comprises the following steps:
s1, reading input feature diagram data and storing the data in a shift buffer area, and generating blocks of an output feature diagram according to four most adjacent pixel points ram [0], ram [1], ram [ W ] and ram [ W +1] of the input feature diagram;
s2, finding out pixel points in output feature map blocks, mapping the pixel points to the positions of the four most adjacent pixel points in the input feature map, wherein in each block of the output feature map, different pixel points are mapped to the points in the input feature map, and the distances between the different pixel points and the four most adjacent pixel points in the input feature map are different in the horizontal direction and the vertical direction, wherein h _ param00 and h _ param01 are parameters in the row direction and represent the distances in the horizontal direction, and v _ param00 and v _ param01 are parameters in the column direction and represent the distances in the vertical direction;
s3, calculating the distance parameter obtained by the sum of the pixel points obtained in S1 and the distance parameter obtained in S2 in a pipeline mode to obtain the pixel value of the output characteristic diagram, and the method comprises the following steps:
s31, in the vertical direction, carrying out primary multiplication on four nearest pixel points and parameters v _ param00 and v _ param01 in the column direction, and adding four obtained intermediate values to obtain two intermediate values;
s32, multiplying the two intermediate values by the parameters h _ param00 and h _ param01 in the row direction respectively to obtain two intermediate values, and adding the two intermediate values for one time to obtain the pixel points of the output characteristic diagram after up-sampling;
s4, after the data of one block is processed, returning to S1, and processing the next block;
and S5, after the feature graph is input and processed, processing the next feature graph according to the register instruction.
Through reverse block mapping and data multiplexing, the data to be processed and the calculation parameters can be obtained quickly, and the two-stage multiply-add operation is implemented as a pipeline, which increases the computation rate.
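The two-stage multiply-add of S31 and S32 is, in effect, the standard bilinear combination of the four nearest pixels. A minimal Python sketch is given below; the argument names and the exact pairing of weights to pixels (upper row weighted by v_param00, left column by h_param00) are illustrative assumptions, since the text only states the roles of the row- and column-direction parameters.

```python
def upsample_pixel(top_left, top_right, bottom_left, bottom_right,
                   v_param00, v_param01, h_param00, h_param01):
    """Two-stage multiply-add for one output pixel.

    top_left..bottom_right correspond to ram[0], ram[1], ram[W] and
    ram[W+1]; the v_param* / h_param* values are the column- and
    row-direction distance parameters described in S2.
    """
    # Stage 1 (S31): vertical direction -- one multiplication per pixel,
    # then pairwise additions give two intermediate values.
    left = top_left * v_param00 + bottom_left * v_param01
    right = top_right * v_param00 + bottom_right * v_param01

    # Stage 2 (S32): horizontal direction -- multiply by the row-direction
    # weights and add once to obtain the up-sampled output pixel.
    return left * h_param00 + right * h_param01


# Example: the output pixel midway between the four neighbors.
print(upsample_pixel(10, 20, 30, 40, 0.5, 0.5, 0.5, 0.5))  # 25.0
```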
Further, in S1, the data cache block adopts a shift-buffer scheme. For an input feature map of size W × H with W, H ≥ 2, where W is the width and H is the height of the input feature map in pixels, the number of pixels initially loaded into the cache block is W + 2. The pixels of the first block of the output feature map are generated from the four nearest pixels of the input feature map; after all pixels of that block have been generated, new data are read in from left to right and the pixels of the second block of the output feature map are generated, and so on until all blocks of the output feature map have been generated.
Further, after the rightmost block of the output feature map has been generated, two more pixels of the input feature map are read in before the next block of the output feature map is generated.
Further, for an input feature map of size W × H with W = 1 and H ≥ 2, W ≥ 2 and H = 1, or W = 1 and H = 1, the pixels that do not exist among the four nearest pixels of the input feature map are set to 0.
Further, in S2, the output feature map is mapped to the four nearest pixels of the input feature map by reverse block mapping. The mapping proceeds from left to right until the rightmost block, then continues from the leftmost block of the row below, again from left to right, and this cycle repeats until the entire output feature map has been mapped. Pixels in the same block of the output feature map map to the same four nearest pixels of the input feature map, while different blocks map to sets of four nearest pixels that are not completely identical.
Further, in S2, when the input feature map size is W × H with W = 1 and H ≥ 2, W ≥ 2 and H = 1, or W = 1 and H = 1, the pixels that do not exist among the four nearest pixels of the input feature map are set to 0.
Further, in S3, the data ram[0], ram[1], ram[W] and ram[W+1] are read from the shift buffer.
Further, in S3, when the input feature map is W × H with W ≥ 2 and H = 1, ram[0] and ram[1] are read from the shift buffer and ram[W] and ram[W+1] are assigned 0; when the input feature map is W × H with W = 1 and H ≥ 2, ram[0] and ram[W] are read from the shift buffer and ram[1] and ram[W+1] are assigned 0; when the input feature map is W × H with W = 1 and H = 1, ram[0] is read from the shift buffer and ram[1], ram[W] and ram[W+1] are assigned 0.
Because the design is very flexible, only a slight change to the initial data reading mode is needed to support up-sampling of a single pixel (1 × 1), a single column of pixels (1 × H) or a single row of pixels (W × 1).
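As a hypothetical illustration of the case analysis above, the Python sketch below selects the four neighbor values and substitutes 0 for those that do not exist. Treating the shift buffer as a flat list indexed from the current window position is a simplifying assumption, not the hardware layout.

```python
def read_neighbors(shift_buffer, W, H):
    """Select ram[0], ram[1], ram[W] and ram[W+1] from the shift buffer,
    substituting 0 for neighbors that do not exist when W == 1 and/or
    H == 1 (single column, single row, or single pixel input)."""
    ram0 = shift_buffer[0]
    ram1 = shift_buffer[1] if W >= 2 else 0                       # right neighbor
    ramW = shift_buffer[W] if H >= 2 else 0                       # lower neighbor
    ramW1 = shift_buffer[W + 1] if (W >= 2 and H >= 2) else 0     # lower-right neighbor
    return ram0, ram1, ramW, ramW1


# Single-row input (H = 1): only ram[0] and ram[1] are real pixels.
print(read_neighbors([7, 9], W=2, H=1))   # (7, 9, 0, 0)
```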
The invention has the advantages and beneficial effects that:
according to the invention, the high-efficiency multiplexing of data is realized through a block reverse mapping mode and a cache design of a shift register; the data processing adopts a pipeline mode, so that the calculation speed is increased; in the data calculation process, a floating point-to-fixed point mode is used, the optimization is customized according to the specific design precision requirement, and the cache and the power consumption consumed by multiply-add operation in the data processing process are reduced; meanwhile, the parallel mode is simple, most of the existing neural networks can be matched, and the method is independent of the specific layer number of the neural networks.
Drawings
FIG. 1 is a flow chart of the method of the present invention.
FIG. 2 is a data buffer diagram of the up-sampling reverse block mapping input feature map in the present invention (W, H ≥ 2).
FIG. 3 is a data buffer diagram of the up-sampling reverse block mapping input feature map in the present invention (W = 1, H ≥ 2 or W ≥ 2, H = 1).
FIG. 4 is a data buffer diagram of the up-sampling reverse block mapping input feature map in the present invention (W = 1, H = 1).
FIG. 5 is a block diagram of a data cache according to the present invention.
FIG. 6 is a schematic diagram of the up-sampling reverse block mapping method in the present invention (W, H ≥ 2).
FIG. 7 is a schematic diagram of the up-sampling reverse block mapping method in the present invention (W = 1, H ≥ 2 or W ≥ 2, H = 1).
FIG. 8 is a schematic diagram of the up-sampling reverse block mapping method in the present invention (W = 1, H = 1).
Fig. 9 is a schematic diagram of upsampling parameter generation in the present invention.
Fig. 10 is a schematic diagram of an upsampling data processing module in the present invention.
Detailed Description
The following detailed description of embodiments of the invention refers to the accompanying drawings. It should be understood that the detailed description and specific examples, while indicating the present invention, are given by way of illustration and explanation only, not limitation.
This design concerns a reverse block mapping and data-multiplexing up-sampling method for neural network accelerators. In the block mapping, the four nearest input-feature-map pixels to which different blocks map are not completely the same; the shared part is handled by data multiplexing, so repeated reading does not occur.
As shown in fig. 1, an upsampling inverse block mapping method includes the following steps:
the method comprises the following steps: and reading part of data of the characteristic diagram and storing the part of data in the shift buffer area according to a convention mode.
As shown in fig. 2-4, the input feature map cache block caches more than just the four nearest pixels, so that the data can be multiplexed.
As shown in fig. 5, the data cache block adopts a shift-buffer scheme. The timing of the data buffering differs slightly for input feature maps of different sizes. As shown in fig. 2, for an input feature map of width and height W × H (W, H ≥ 2), the number of data initially loaded into the cache block is W + 2; at this point the first block of the output feature map, together with the four nearest pixels of the input feature map to which it corresponds, appears for the first time, and generation of the pixels in the first block of the output feature map can begin. After all pixels of that block have been generated, only one more datum needs to be read in before generation of the pixels in the second block of the output feature map can begin. Note that after the rightmost block of the output feature map has been generated, two pixels of the input feature map must be read in to generate the block in the row below; this cycle repeats until all blocks of the output feature map have been generated. As shown in fig. 3, for an input feature map of size W × H (W = 1, H ≥ 2 or W ≥ 2, H = 1), four nearest pixels do not all actually exist; the number of data initially loaded into the cache block is 2 and the remaining two pixels are replaced with 0, after which generation of the pixels in the first block of the output feature map can begin, and after all pixels of that block have been generated, only one more datum needs to be read in before generation of the second block can begin. As shown in fig. 4, for an input feature map of size W × H (W = 1, H = 1), the output feature map has only one block; the number of data initially loaded into the cache block is 1, the remaining three pixels are replaced with 0, and generation of the output feature map can begin.
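The read schedule described above can be summarized by the following Python sketch; the helper names (initial_fill_count, pixels_to_read, blocks_per_row) are illustrative and not taken from the patent, and the second function covers only the general W, H ≥ 2 case.

```python
def initial_fill_count(W, H):
    """Pixels loaded into the shift buffer before the first output block
    can be generated (the three cases of Figs. 2-4)."""
    if W >= 2 and H >= 2:
        return W + 2
    if W == 1 and H == 1:
        return 1
    return 2   # single row (H == 1) or single column (W == 1)


def pixels_to_read(block_col, blocks_per_row):
    """New pixels to shift in before the next output block of a
    W, H >= 2 feature map: one while moving right along a row, two when
    wrapping from the rightmost block to the row below."""
    return 2 if block_col == blocks_per_row - 1 else 1


# For a 4 x 4 input, 6 pixels are buffered before the first block.
print(initial_fill_count(4, 4))   # 6
print(pixels_to_read(3, 4))       # 2 (rightmost block of the row)
```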
Step two: the output feature map is divided into blocks according to the reverse block-mapping scheme, the pixels in each block are found in turn, and the four parameters of each output feature map pixel at its corresponding position in the input feature map are calculated.
As shown in fig. 6-8, the dashed boxes mark the blocks of the output feature map and the four nearest pixels of the input feature map to which they map. As shown in fig. 6, the output feature map is mapped to the four nearest pixels of the input feature map by reverse block mapping; the mapping proceeds from left to right until the rightmost block, then continues from the leftmost block of the row below, again from left to right, looping until the entire output feature map has been mapped. Pixels in the same block of the output feature map map to the same four nearest pixels of the input feature map, while different blocks map to sets of four nearest pixels that are not completely identical. As shown in fig. 7 and 8, when the input feature map size is W × H (W = 1, H ≥ 2 or W ≥ 2, H = 1) or W × H (W = 1, H = 1), the four nearest pixels do not all actually exist in the input feature map, and the non-existent pixels are replaced with pixels of value 0.
As shown in fig. 9, ram[0], ram[1] or 0, ram[W] or 0, and ram[W+1] or 0 are the four nearest pixels in the input feature map, and Dout is a pixel of a certain block of the output feature map, mapped to its position in the input feature map. As shown in fig. 6-9, each block of the output feature map generally contains several pixels, and all pixels in the same block map to the same four nearest pixels; however, the horizontal and vertical distances between the points to which the different pixels map in the input feature map and those nearest pixels are not all the same, i.e. h_param00, h_param01, v_param00 and v_param01 differ from pixel to pixel.
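The patent does not spell out how h_param and v_param are derived from an output pixel's position. The sketch below assumes the common half-pixel-center bilinear mapping and keeps the weights in floating point (the hardware would convert them to fixed point, as noted in the summary); both the mapping formula and the names scale_h and scale_w are assumptions made for illustration.

```python
import math

def bilinear_params(out_y, out_x, scale_h, scale_w):
    """Map an output pixel back into the input feature map and derive the
    four distance weights of Fig. 9 (assumed half-pixel-center mapping)."""
    in_y = (out_y + 0.5) / scale_h - 0.5
    in_x = (out_x + 0.5) / scale_w - 0.5
    y0, x0 = math.floor(in_y), math.floor(in_x)   # position of ram[0]
    v_param01 = in_y - y0            # weight of the lower row   (ram[W], ram[W+1])
    v_param00 = 1.0 - v_param01      # weight of the upper row   (ram[0], ram[1])
    h_param01 = in_x - x0            # weight of the right column (ram[1], ram[W+1])
    h_param00 = 1.0 - h_param01      # weight of the left column  (ram[0], ram[W])
    return (y0, x0), (v_param00, v_param01, h_param00, h_param01)


# Output pixel (1, 1) of a 2x up-sampling maps a quarter pixel past ram[0].
print(bilinear_params(1, 1, 2.0, 2.0))
```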
Step three: the pixel values obtained in step one and the parameters obtained in step two are fed into the data processing module, which computes the pixel values of the output feature map in a pipelined manner;
the whole data processing module is divided into two stages of calculation, and each stage is a multiplication and addition combination. When the characteristic diagram (WXH, W, H ≧ 2) is input, data ram [0], data ram [1], data ram [ W ], and data ram [ W +1] are read out from the shift buffer; when a characteristic diagram (W multiplied by H, W is more than or equal to 2, H = 1) is input, data ram [0] and data ram [1] are read from the shift buffer area, and the data ram [ W ] and the data ram [ W +1] are assigned with 0; when a characteristic diagram (W multiplied by H, W =1, H ≧ 2) is input, data ram [0] and data ram [ W ] are read from the shift buffer, and data ram [1] and data ram [ W +1] are assigned 0; when the feature map (W × H, W =1, H = 1) is input, the data ram [0] is read out from the shift buffer, and the data ram [1], the data ram [ W ], and the data ram [ W +1] are assigned 0. Because the design is very flexible, the initial data reading mode only needs to be slightly changed, and the upsampling of a single pixel point (1 multiplied by 1), a single-column pixel point (1 multiplied by H) and a single-row pixel point (Wmultiplied by 1) can be realized;
as shown in fig. 10, in the first-stage multiply-add, four adjacent pixel point values and column-direction parameters (v _ param00 and v _ param 01) are input in the vertical direction, and four intermediate values are obtained through one multiplication; and respectively adding the four intermediate values to obtain two intermediate values, and finishing multiplication and addition operation once at the moment. The second-stage multiplication and addition is to multiply two intermediate values generated by the first-stage multiplication and addition with row direction parameters (h _ param00 and h _ param 01) respectively to obtain two intermediate values; and adding the two intermediate values again to obtain a result, namely outputting the pixel points of the feature map after up-sampling. Through reverse block mapping and data multiplexing, data to be processed and calculation parameters can be obtained quickly, two-stage multiplication and addition operation is achieved through a production line, and the calculation rate is improved.
Step four: after the data of one block has been processed, the next block is processed by repeating steps one to three;
Step five: after the input feature map has been processed, the next feature map is processed according to the register instruction.
The above embodiments are only intended to illustrate the technical solution of the present invention, not to limit it. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some or all of their technical features may be equivalently replaced, and such modifications or substitutions do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.

Claims (8)

1. An up-sampling reverse blocking mapping method is characterized by comprising the following steps:
s1, reading input feature diagram data and storing the data in a shift buffer area, and generating blocks of an output feature diagram according to four most adjacent pixel points ram [0], ram [1], ram [ W ] and ram [ W +1] of the input feature diagram;
s2, finding out pixel points in output feature map blocks, mapping the pixel points to the positions of the four most adjacent pixel points in the input feature map, wherein in each block of the output feature map, different pixel points are mapped to the points in the input feature map, and the distances between the different pixel points and the four most adjacent pixel points in the input feature map are different in the horizontal direction and the vertical direction, wherein h _ param00 and h _ param01 are parameters in the row direction and represent the distances in the horizontal direction, and v _ param00 and v _ param01 are parameters in the column direction and represent the distances in the vertical direction;
s3, calculating the distance parameter obtained by the sum of the pixel points obtained in S1 and the distance parameter obtained in S2 in a pipeline mode to obtain the pixel value of the output characteristic diagram, and the method comprises the following steps:
s31, in the vertical direction, carrying out primary multiplication on four nearest pixel points and parameters v _ param00 and v _ param01 in the column direction, and adding four obtained intermediate values to obtain two intermediate values;
s32, multiplying the two intermediate values by the parameters h _ param00 and h _ param01 in the row direction respectively to obtain two intermediate values, and adding the two intermediate values for one time to obtain the pixel points of the output characteristic diagram after up-sampling;
s4, after the data of one block is processed, returning to S1, and processing the next block;
and S5, after the feature graph is input and processed, processing the next feature graph according to the register instruction.
2. The method of claim 1, wherein in S1 the data cache block adopts a shift-buffer scheme; for an input feature map of size W × H with W, H ≥ 2, where W is the width and H is the height of the input feature map in pixels, the number of pixels initially loaded into the cache block is W + 2; the pixels of the first block of the output feature map are generated from the four nearest pixels of the input feature map; after all pixels of that block have been generated, new data are read in from left to right and the pixels of the second block of the output feature map are generated, and so on until all blocks of the output feature map have been generated.
3. The method of claim 2, wherein after the rightmost block of the output feature map has been generated, two more pixels of the input feature map are read in to generate the next block of the output feature map.
4. The method of claim 2, wherein for an input feature map of size W × H with W = 1 and H ≥ 2, W ≥ 2 and H = 1, or W = 1 and H = 1, the pixels that do not exist among the four nearest pixels of the input feature map are set to 0.
5. The method according to claim 2, wherein in S2 the output feature map is mapped to the four nearest pixels of the input feature map by reverse block mapping, the mapping proceeding from left to right until the rightmost block and then continuing from the leftmost block of the row below, this cycle repeating until the entire output feature map has been mapped; pixels in the same block of the output feature map map to the same four nearest pixels of the input feature map, and different blocks of the output feature map map to sets of four nearest pixels that are not completely identical.
6. The method as claimed in claim 4, wherein in S2, when the input feature map size is W × H with W = 1 and H ≥ 2, W ≥ 2 and H = 1, or W = 1 and H = 1, the pixels that do not exist among the four nearest pixels of the input feature map are set to 0.
7. The method of claim 5, wherein in S3 the data ram[0], ram[1], ram[W] and ram[W+1] are read from the shift buffer.
8. The method of claim 6, wherein in S3, when the input feature map is W × H with W ≥ 2 and H = 1, ram[0] and ram[1] are read from the shift buffer and ram[W] and ram[W+1] are assigned 0; when the input feature map is W × H with W = 1 and H ≥ 2, ram[0] and ram[W] are read from the shift buffer and ram[1] and ram[W+1] are assigned 0; when the input feature map is W × H with W = 1 and H = 1, ram[0] is read from the shift buffer and ram[1], ram[W] and ram[W+1] are assigned 0.
CN202111148518.6A 2021-09-29 2021-09-29 Upsampling reverse blocking mapping method Active CN113887720B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111148518.6A CN113887720B (en) 2021-09-29 2021-09-29 Upsampling reverse blocking mapping method


Publications (2)

Publication Number Publication Date
CN113887720A true CN113887720A (en) 2022-01-04
CN113887720B CN113887720B (en) 2024-04-26

Family

ID=79007775

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111148518.6A Active CN113887720B (en) 2021-09-29 2021-09-29 Upsampling reverse blocking mapping method

Country Status (1)

Country Link
CN (1) CN113887720B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101795405A (en) * 2009-11-06 2010-08-04 杭州士兰微电子股份有限公司 H.264 high-speed luminance interpolating device and method
CN108133270A (en) * 2018-01-12 2018-06-08 清华大学 Convolutional neural networks accelerating method and device
CN110363284A (en) * 2019-06-20 2019-10-22 东南大学 A kind of convolutional neural networks hardware accelerator of the novel convolution algorithm accelerating module of band


Also Published As

Publication number Publication date
CN113887720B (en) 2024-04-26

Similar Documents

Publication Publication Date Title
CN111459877B (en) Winograd YOLOv2 target detection model method based on FPGA acceleration
CN109325591B (en) Winograd convolution-oriented neural network processor
CN107340993B (en) Arithmetic device and method
CN105528191B (en) Data accumulation apparatus and method, and digital signal processing device
CN110780923B (en) Hardware accelerator applied to binary convolution neural network and data processing method thereof
CN110533022B (en) Target detection method, system, device and storage medium
CN112435282A (en) Real-time binocular stereo matching method based on self-adaptive candidate parallax prediction network
CN108647184B (en) Method for realizing dynamic bit convolution multiplication
US20230068450A1 (en) Method and apparatus for processing sparse data
CN109993293B (en) Deep learning accelerator suitable for heap hourglass network
CN113792621B (en) FPGA-based target detection accelerator design method
Li et al. Efficient depthwise separable convolution accelerator for classification and UAV object detection
CN112257844A (en) Convolutional neural network accelerator based on mixed precision configuration and implementation method thereof
CN109685208B (en) Method and device for thinning and combing acceleration of data of neural network processor
CN110019184A (en) A kind of method of the orderly integer array of compression and decompression
US20210044303A1 (en) Neural network acceleration device and method
CN112364989A (en) Fast Fourier transform-based convolutional neural network acceleration design method
CN113887720A (en) Up-sampling reverse blocking mapping method
CN110766136B (en) Compression method of sparse matrix and vector
CN116128019A (en) Parallel training method and device for transducer model
CN115375922A (en) Lightweight significance detection method based on multi-scale space attention
Li et al. A computational-efficient deformable convolution network accelerator via hardware and algorithm co-optimization
CN107247944A (en) Face datection velocity optimization method and device based on deep learning
Wu et al. A High-speed and Low-power FPGA Implementation of Spiking Convolutional Neural Network Using Logarithmic Quantization
CN113705784A (en) Neural network weight coding method based on matrix sharing and hardware system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant