CN107809631B - Wavelet-domain video quality evaluation method based on background elimination - Google Patents


Info

Publication number
CN107809631B
CN107809631B CN201710926882.8A
Authority
CN
China
Prior art keywords
video
quality
frame
video sequence
frames
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710926882.8A
Other languages
Chinese (zh)
Other versions
CN107809631A (en)
Inventor
张淑芳
黄小琴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tianjin University
Original Assignee
Tianjin University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tianjin University filed Critical Tianjin University
Priority to CN201710926882.8A priority Critical patent/CN107809631B/en
Publication of CN107809631A publication Critical patent/CN107809631A/en
Application granted granted Critical
Publication of CN107809631B publication Critical patent/CN107809631B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The invention discloses a wavelet-domain video quality evaluation method based on background elimination. Step 1: calculate the global quality of the video sequence. Step 2: calculate the local quality of the video using a background elimination method. Step 3: calculate the overall quality of the video, i.e. combine the obtained local and global qualities of the video sequence through the video quality evaluation model. The invention aims to improve the consistency between objective video quality evaluation and the subjective quality assessment of the human eye. It achieves good evaluation performance across different distortion types and different scenes, and the algorithm's low complexity enables real-time quality assessment.

Description

Wavelet domain video quality evaluation method based on background elimination
Technical Field
The invention relates to the field of video quality evaluation, in particular to a wavelet domain video quality evaluation model.
Background
Video quality evaluation algorithms measure distortion of varying degrees and have become a popular research direction in the video field in recent years. Video quality evaluation divides into subjective and objective evaluation. Subjective quality evaluation judges video quality through the visual impression of human observers and is considered the most reliable quality evaluation method; therefore, the consistency between subjective and objective results is generally used as the performance index of an objective evaluation algorithm.
The perceived quality of distorted video is a result of the joint action of global quality and local quality. The global quality is the rough impression of an observer on the video quality and is obtained by averaging the quality of all frames of a video sequence; the local quality is mainly determined by the characteristics of human visual attention, sequence quality variation and the like. The overall quality of the video is obtained by calculating the global and local quality of the video sequence respectively.
Various objective video quality evaluation methods exist internationally. The mean squared error (MSE), peak signal-to-noise ratio (PSNR), structural similarity (SSIM) and multi-scale structural similarity (MS-SSIM) are widely applied to image and video quality evaluation because their models are simple, but their consistency with human visual perception is poor. Kalpana et al. improved on this by decomposing the image into multiple channels with Gabor filters and performing motion estimation on each channel, proposing a motion-based spatio-temporal video quality evaluation algorithm (MOVIE). Phong et al. combined the image quality evaluation model MAD, spatio-temporal correlation, and a model based on the human visual system to measure the spatial and spatio-temporal distortion of a video sequence separately, proposing a video quality evaluation algorithm based on the gradient similarity of spatial and spatio-temporal slices (VIS3). Based on the observation that the human visual system relies mainly on the edge structure information in an image when understanding a video, PengYan et al. proposed a video quality evaluation method based on spatio-temporal slice gradient similarity. These methods achieve high accuracy, but their high complexity limits the practicality of the models.
Unlike image quality evaluation, video quality evaluation depends not only on spatial distortion but also on temporal variation. Temporal variation comprises motion information and temporal distortion information: when watching a video, an observer's attention is drawn to suddenly appearing objects, strongly moving objects, and temporal distortions such as ghosting, i.e. foreground information is one of the main focuses of human attention. The frame-difference method is a simple moving-object extraction technique with fast background updating and strong adaptive capability, and owing to its simplicity and speed it is widely used for extracting motion information in video quality evaluation. Loh et al. proposed an SSIM-based temporal video quality evaluation method that subtracts two preceding frames of the reference video from the current frames of the reference and distorted videos. The method is fast, but its consistency with human visual perception and its generality are poor: for a large moving object of uniform color the frame difference can leave holes inside the object, so the moving object cannot be extracted completely. Moreover, applying frame differencing directly in video quality evaluation affects model accuracy, because when multiple consecutive frames carry the same distortion information, the frame difference filters that distortion out, while the accumulation of the same distortion type has a larger impact on video quality.
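A toy numpy sketch (illustrative, not from the patent) of why plain frame differencing loses persistent distortion:

```python
import numpy as np

def frame_difference(curr, prev):
    """Classic frame-difference foreground extraction: |current - previous|."""
    return np.abs(curr.astype(float) - prev.astype(float))

# Two consecutive frames carrying the SAME additive distortion pattern.
clean = np.zeros((4, 4))
distortion = np.full((4, 4), 10.0)
f_prev = clean + distortion
f_curr = clean + distortion

diff = frame_difference(f_curr, f_prev)
# The shared distortion cancels completely, so a metric applied to the
# difference frame would miss it entirely -- the weakness noted above.
print(diff.max())   # 0.0
```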
Due to the complexity of a human visual system, the existing video quality evaluation algorithm is not well balanced in timeliness and accuracy.
Disclosure of Invention
Considering the high attention that the human visual system pays to edge structure information, moving objects and distortion information, the invention provides a wavelet-domain video quality evaluation method based on background elimination.
The invention relates to a wavelet domain video quality evaluation method based on background elimination, which comprises the following steps:
The first step: calculate the global quality. A 4-level Haar discrete wavelet transform (DWT) is applied to each frame of the reference video R_N and the distorted video D_N respectively; the decomposition coefficients are expressed as follows:
CR(λ,θ,t,i)=DWT(Rt) (1)
CD(λ,θ,t,i)=DWT(Dt) (2)
where C_R(λ,θ,t,i) and C_D(λ,θ,t,i) denote the wavelet decomposition coefficients of the t-th frame in the reference and distorted videos respectively, with t ∈ [1, N]; R_t and D_t denote the t-th frame of the reference and distorted video sequences; {λ, θ} indexes the coefficient subbands at the different scales and directions of the image, where θ = 2, 3, 4 denote the horizontal, diagonal and vertical subbands respectively and θ = 1 denotes the approximation subband; i denotes the position of a wavelet coefficient at the t-th frame, scale λ and direction θ;
The edge coefficients E_R(λ,t,i) and E_D(λ,t,i) of the reference and distorted videos at different scales are obtained from the detail subband coefficients of each frame's wavelet domain at the different scales and directions:
The similarity of the edge coefficients of the t-th frame at scale λ is then calculated with the following formula:
where T is a positive constant; ESIM(λ,t,i) denotes the local similarity at the t-th frame, scale λ and position i of the reference and distorted videos; when E_R and E_D are identical, its value is 1;
The quality ESIMD(λ,t) of a single frame of the video sequence at each scale is obtained by computing the standard deviation of the local similarity:
where N_c is the total number of coefficients in the coefficient matrix of the t-th frame at scale λ in the video.
The single-frame quality of the video sequence is:
where l denotes the number of Haar wavelet transform levels, set to 4 herein.
The global quality Q_global of the video sequence is then expressed as:
The second step: calculate the local quality of the video using a background elimination method. First, the reference and distorted videos are partitioned into mutually non-overlapping video blocks, taking 3 consecutive frames of the video sequence as one group. Second, combining the mean-background method, the mean of a reference frame group is taken as the background of that group, replacing the previous frame used as background in the frame-difference method; the middle frame of the group represents the spatial distortion position, while the other two frames have the background subtracted to yield foreground frames representing spatio-temporal characteristics, so that the middle frame and the two foreground frames jointly form a spatio-temporal video block;
The t-th frames F_t^R and F_t^D of the spatio-temporal video blocks corresponding to the reference video R_N and the distorted video D_N are computed as follows:
where B_g denotes the background image of the g-th group of video frames, g ∈ {1, 2, ..., floor(N/3)}; N denotes the total number of frames of the video sequence; m denotes the position of the group's middle frame relative to the whole sequence, m = 3g - 1; and t denotes the position of the current frame relative to the whole sequence;
The quality of each frame in the spatio-temporal video blocks is measured using the single-frame quality method of formula (8), yielding the quality Q_g of each video frame group;
The qualities of the groups of spatio-temporal video blocks of the video sequence are sorted, and the worst H% is extracted as the final local quality Q_local of the video sequence:
where H denotes the percentage of the worst-sorted portion of the frame-group quality set {Q_1, Q_2, ..., Q_g, ..., Q_floor(N/3)}, and N_H denotes the total number of elements in that subset;
The third step: calculate the overall quality of the video. According to the obtained local and global qualities of the video sequence, the video quality evaluation model BSWQ is:
the wavelet domain video quality evaluation algorithm based on background elimination provided by the invention aims to improve the consistency of objective video quality evaluation and subjective quality evaluation of human eyes; the algorithm has better video evaluation performance for different distortion types and different scenes, has lower complexity and can realize real-time quality evaluation.
Drawings
FIG. 1 is an index diagram of wavelet subband coefficients;
FIG. 2 is an overall flowchart of a wavelet domain video quality evaluation method based on background elimination according to the present invention;
Fig. 3 is a graph of the fit between the BSWQ objective score and DMOS.
Detailed Description
Embodiments of the present invention will be described in further detail below with reference to the accompanying drawings.
The specific implementation steps are as follows:
The first step: calculate the global quality. In terms of human visual characteristics, the human eye is most sensitive to the middle frequency band, i.e. the main contours of an image, so a suitable contour extraction method plays an important role in video and image quality evaluation models. The wavelet transform extracts frequency information by decomposing an image into subband images at different scales, so that the edge information of the image is expressed as wavelet coefficients at different scales. Since the Haar wavelet transform is widely used in image and video quality evaluation and compression owing to its low complexity and good performance, the invention applies a 4-level Haar discrete wavelet transform (DWT) to each frame of the reference video R_N and the distorted video D_N respectively; the decomposition coefficients are expressed as follows:
CR(λ,θ,t,i)=DWT(Rt) (1)
CD(λ,θ,t,i)=DWT(Dt) (2)
where C_R(λ,θ,t,i) and C_D(λ,θ,t,i) denote the wavelet decomposition coefficients of the t-th frame in the reference and distorted videos respectively, with t ∈ [1, N]; R_t and D_t denote the t-th frame of the reference and distorted video sequences; {λ, θ} indexes the coefficient subbands at the different scales and directions of the image, where θ = 2, 3, 4 denote the horizontal, diagonal and vertical subbands respectively and θ = 1 denotes the approximation subband. Taking a 2-level discrete wavelet decomposition as an example, the indexing is shown in fig. 1. i denotes the position of a wavelet coefficient at the t-th frame, scale λ and direction θ.
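The 4-level Haar DWT of formulas (1) and (2) is standard; a minimal numpy sketch follows (averaging/differencing normalization assumed here; library implementations such as PyWavelets use a 1/sqrt(2) convention instead, which only rescales the subbands):

```python
import numpy as np

def haar_dwt2(x):
    """One level of the 2-D Haar DWT (averaging/differencing normalization).

    Returns (approx, horiz, diag, vert) matching the subband indexing
    theta = 1, 2, 3, 4 used in the description."""
    x = x.astype(float)
    lo = (x[0::2, :] + x[1::2, :]) / 2.0      # row-pair averages
    hi = (x[0::2, :] - x[1::2, :]) / 2.0      # row-pair differences
    a = (lo[:, 0::2] + lo[:, 1::2]) / 2.0     # approximation (theta = 1)
    h = (lo[:, 0::2] - lo[:, 1::2]) / 2.0     # horizontal    (theta = 2)
    d = (hi[:, 0::2] - hi[:, 1::2]) / 2.0     # diagonal      (theta = 3)
    v = (hi[:, 0::2] + hi[:, 1::2]) / 2.0     # vertical      (theta = 4)
    return a, h, d, v

def haar_levels(frame, levels=4):
    """levels-deep decomposition: detail-subband triple per scale lambda."""
    details, a = [], frame
    for _ in range(levels):
        a, h, d, v = haar_dwt2(a)
        details.append((h, d, v))
    return a, details

approx, details = haar_levels(np.random.rand(64, 64), levels=4)
print(approx.shape)   # (4, 4) after four halvings of a 64x64 frame
```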
The edge coefficients E_R(λ,t,i) and E_D(λ,t,i) of the reference and distorted videos at different scales are obtained from the detail subband coefficients of each frame's wavelet domain at the different scales and directions:
The similarity of the edge coefficients of the t-th frame at scale λ is then calculated with the following formula:
in the formula, T is a normal number and is mainly used for ensuring the stability of ESIM, ESIM (lambda, T, i) represents the local similarity of the tth frame, lambda scale and position i in the reference and distorted video, when E isR,EDWhen the two are identical, the value is 1.
The quality ESIMD(λ,t) of a single frame of the video sequence at each scale is obtained by computing the standard deviation of the local similarity:
where N_c is the total number of coefficients in the coefficient matrix of the t-th frame at scale λ in the video.
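A sketch of the standard-deviation pooling just described (the function name is illustrative):

```python
import numpy as np

def esimd(esim_map):
    """ESIMD(lambda, t): standard deviation of the local-similarity map of
    one frame at one scale (N_c coefficients in total)."""
    return float(np.std(esim_map))   # population std, i.e. divide by N_c

# Identical reference and distorted edges give a constant similarity map,
# so the deviation -- and hence the per-scale distortion score -- is zero.
print(esimd(np.ones((8, 8))))   # 0.0
```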
The single frame quality of a video sequence is:
where l denotes the number of Haar wavelet transform levels, set to 4 herein.
The global quality Q_global of the video sequence is then expressed as:
The second step: calculate the local quality of the video. A background elimination method is adopted, with the following specific steps. First, the reference and distorted videos are partitioned into mutually non-overlapping video blocks, taking 3 consecutive frames of the video sequence as one group (since video coding standards such as H.264 and HEVC generally use two or more images as reference frames for motion estimation and motion compensation, the correlation between the current frame and its previous and next frames is strongest). Second, combining the mean-background method, the mean of a reference frame group is taken as the background of that group, replacing the previous frame used as background in the frame-difference method; this effectively extracts the foreground information of the video sequence while avoiding filtering out the distortion information of consecutive frames. As the masking effect shows, the visibility of a signal in the background differs significantly across spatial positions: distortion located on a highly complex background degrades the video quality little, so the spatial position of the distortion information has an important influence on video quality evaluation.
Based on the interaction of spatio-temporal information in video quality evaluation, the influences of background complexity, motion information and distortion information on video quality are considered jointly: the middle frame of the frame group represents the spatial distortion position, while the foreground frames obtained by subtracting the background from the other two frames represent spatio-temporal characteristics, so that the middle frame and the two foreground frames jointly form a spatio-temporal video block. The t-th frames F_t^R and F_t^D of the spatio-temporal video blocks corresponding to the reference video R_N and the distorted video D_N are computed as follows:
where B_g denotes the background image of the g-th group of video frames, g ∈ {1, 2, ..., floor(N/3)}; N denotes the total number of frames of the video sequence; m denotes the position of the group's middle frame relative to the whole sequence, m = 3g - 1; and t denotes the position of the current frame relative to the whole sequence.
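A sketch of the spatio-temporal block construction defined above. Per the text the background of the reference frame group is used for both videos; this toy version simply builds the background from whatever group it is given:

```python
import numpy as np

def spatio_temporal_block(frames, g):
    """Build the spatio-temporal video block of frame group g (1-based).

    B_g is the mean of the group's three frames (mean-background method);
    the middle frame (m = 3g - 1) is kept unchanged as the spatial-distortion
    representative, and the other two frames become background-subtracted
    foreground frames."""
    m = 3 * g - 1                          # 1-based index of the middle frame
    group = frames[3 * (g - 1): 3 * g]     # frames 3g-2, 3g-1, 3g
    bg = np.mean(group, axis=0)            # B_g
    block = []
    for offset, f in enumerate(group):
        t = 3 * (g - 1) + offset + 1       # 1-based position in the sequence
        block.append(f if t == m else f - bg)
    return block

frames = [np.full((2, 2), float(v)) for v in (1, 2, 3)]   # toy 3-frame group
blk = spatio_temporal_block(frames, g=1)
print(blk[1][0, 0], blk[0][0, 0])   # 2.0 -1.0 (middle kept; first minus B_g)
```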
The quality of each frame in the spatio-temporal video blocks is measured using the single-frame quality method of formula (8), yielding the quality Q_g of each video frame group.
The larger the value of quality, the worse the quality of the group of video frames.
The temporal perceptual quality of a video under distortion is generally determined by the poor-quality frames of the sequence. Therefore, a worst-quality pooling strategy is adopted here: the qualities of the groups of spatio-temporal video blocks of the video sequence are sorted, and the worst H% is extracted as the final local quality Q_local of the video sequence:
where H denotes the percentage of the worst-sorted portion of the frame-group quality set {Q_1, Q_2, ..., Q_g, ..., Q_floor(N/3)}, and N_H denotes the total number of elements in that subset.
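A sketch of the worst-H% pooling; the direction of sorting follows the statement above that a larger Q_g means worse quality:

```python
import numpy as np

def local_quality(group_qualities, H=15):
    """Q_local: worst-H% pooling over the frame-group qualities. Larger Q_g
    means worse quality per the text, so sort in descending order and
    average the top H% (N_H elements, at least one)."""
    q = np.sort(np.asarray(group_qualities, dtype=float))[::-1]
    n_h = max(1, int(np.ceil(len(q) * H / 100.0)))
    return float(np.mean(q[:n_h]))

qs = [0.1, 0.9, 0.2, 0.8, 0.3, 0.7, 0.4, 0.6, 0.5, 1.0]   # 10 frame groups
print(round(local_quality(qs, H=20), 2))   # 0.95: mean of the 2 worst groups
```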
The third step: calculate the overall quality of the video. According to the obtained local and global qualities of the video sequence, the video quality evaluation model BSWQ is:
The larger the prediction value of the model, the higher the video quality.
A specific example is illustrated below:
1) Select T = 1700 and H = 15;
2) Calculate the global quality of the video sequence according to formulas (1) to (9);
3) Divide the video sequence into groups of 3 frames, process them with formulas (10), (11) and (12) respectively to construct the spatio-temporal video blocks, and then compute the quality of each frame with formula (13);
4) After obtaining the quality of each frame group, calculate the local quality of the video sequence with formula (14);
5) Combine the global and local qualities of the video sequence to calculate the overall quality with formula (15).
6) Performance testing:
the proposed quality evaluation method selects LIVE video database for testing, which contains reference video of 10 different scenes and 150 distorted video sequences. Each video source comprises 4 different levels of distortion types (Wireless distortion, IP distortion, h.264 compression and MPEG-2 compression), with 3 different levels of IP distortion, and the remaining three distortion types each having 4 different levels of distortion, i.e. the reference video in each scene contains 15 distorted videos. The algorithm uses two commonly used evaluation indexes in 4 evaluation indexes provided by a Video Quality Expert Group (VQEG) as the evaluation indexes: spearman Rank Order Correlation Coefficient (SROCC), Pearson Linear Correlation Coefficient (PLCC). The larger SROCC value and PLCC value indicate that the video quality evaluation algorithm has better accuracy and consistency.
Table 1 shows the evaluation performance of the method of the present invention on different distortion type videos, and it can be seen that the background elimination algorithm has good performance on various distortion type videos and good robustness.
Table 2 shows the evaluation performance of the method of the present invention on 150 distorted videos, and it can be seen that the background elimination algorithm has good versatility.
Table 3 shows the running time of the method of the present invention on the 250-frame video pa2_25fps.
TABLE 1
TABLE 2
Evaluation index SROCC PLCC
BSWQ 0.8265 0.8437
TABLE 3
Quality evaluation method Time(s)
Proposed algorithm 28.58
Fig. 3 is a scatter plot of the predicted values of the proposed quality assessment model BSWQ against the differential mean opinion scores (DMOS) of the LIVE video library. The solid line is the nonlinear fit, obtained with a logistic function, between the objective evaluation results on the video sequences and the subjective data, covering wireless, IP, H.264 and MPEG-2 distortions. The more uniformly the scatter points distribute around the fitted curve, the stronger the correlation between the model's predictions and the subjective data.

Claims (1)

1. A wavelet domain video quality evaluation method based on background elimination is characterized by comprising the following steps:
The first step: calculate the global quality. A 4-level Haar discrete wavelet transform is applied to each frame of the reference video R_N and the distorted video D_N respectively; the decomposition coefficients are expressed as follows:
CR(λ,θ,t,i)=DWT(Rt) (1)
CD(λ,θ,t,i)=DWT(Dt) (2)
where C_R(λ,θ,t,i) and C_D(λ,θ,t,i) denote the wavelet decomposition coefficients of the t-th frame in the reference and distorted videos respectively, with t ∈ [1, N]; R_t and D_t denote the t-th frame of the reference and distorted video sequences; {λ, θ} indexes the coefficient subbands at the different scales and directions of the image, where θ = 2, 3, 4 denote the horizontal, diagonal and vertical subbands respectively and θ = 1 denotes the approximation subband; i denotes the position of a wavelet coefficient at the t-th frame, scale λ and direction θ;
The edge coefficients E_R(λ,t,i) and E_D(λ,t,i) of the reference and distorted videos at different scales are obtained from the detail subband coefficients of each frame's wavelet domain at the different scales and directions:
The similarity of the edge coefficients of the t-th frame at scale λ is then calculated with the following formula:
where T is a positive constant; ESIM(λ,t,i) denotes the local similarity at the t-th frame, scale λ and position i of the reference and distorted videos; when E_R and E_D are identical, its value is 1;
The quality ESIMD(λ,t) of a single frame of the video sequence at each scale is obtained by computing the standard deviation of the local similarity:
where N_c is the total number of coefficients in the coefficient matrix of the t-th frame at scale λ in the video;
The single-frame quality of the video sequence is:
where l denotes the number of Haar wavelet transform levels, set to 4 herein;
The global quality Q_global of the video sequence is then expressed as:
The second step: calculate the local quality of the video using a background elimination method. First, the reference and distorted videos are partitioned into mutually non-overlapping video blocks, taking 3 consecutive frames of the video sequence as one group; second, combining the mean-background method, the mean of a reference frame group is taken as the background of that group, replacing the previous frame used as background in the frame-difference method; the middle frame of the group represents the spatial distortion position, while the other two frames have the background subtracted to yield foreground frames representing spatio-temporal characteristics, so that the middle frame and the two foreground frames jointly form a spatio-temporal video block;
The t-th frames F_t^R and F_t^D of the spatio-temporal video blocks corresponding to the reference video R_N and the distorted video D_N are computed as follows:
where B_g denotes the background image of the g-th group of video frames, g ∈ {1, 2, ..., floor(N/3)}; N denotes the total number of frames of the video sequence; m denotes the position of the group's middle frame relative to the whole sequence, m = 3g - 1; and t denotes the position of the current frame relative to the whole sequence;
The quality of each frame in the spatio-temporal video blocks is measured using the single-frame quality method of formula (8), yielding the quality Q_g of each video frame group;
The qualities of the groups of spatio-temporal video blocks of the video sequence are sorted, and the worst H% is extracted as the final local quality Q_local of the video sequence:
where H denotes the percentage of the worst-sorted portion of the frame-group quality set {Q_1, Q_2, ..., Q_g, ..., Q_floor(N/3)}, and N_H denotes the total number of elements in that subset;
The third step: calculate the overall quality of the video. According to the obtained local and global qualities of the video sequence, the video quality evaluation model BSWQ is:
CN201710926882.8A 2017-10-08 2017-10-08 Wavelet-domain video quality evaluation method based on background elimination Active CN107809631B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710926882.8A CN107809631B (en) 2017-10-08 2017-10-08 Wavelet-domain video quality evaluation method based on background elimination

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710926882.8A CN107809631B (en) 2017-10-08 2017-10-08 Wavelet-domain video quality evaluation method based on background elimination

Publications (2)

Publication Number Publication Date
CN107809631A CN107809631A (en) 2018-03-16
CN107809631B (en) 2019-05-14

Family

ID=61584092

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710926882.8A Active CN107809631B (en) 2017-10-08 2017-10-08 Wavelet-domain video quality evaluation method based on background elimination

Country Status (1)

Country Link
CN (1) CN107809631B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113709453B (en) * 2021-09-13 2023-09-08 北京车和家信息技术有限公司 Video quality assessment method, device, equipment and medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104811691B (en) * 2015-04-08 2017-07-21 宁波大学 A kind of stereoscopic video quality method for objectively evaluating based on wavelet transformation
CN104918039B (en) * 2015-05-05 2017-06-13 四川九洲电器集团有限责任公司 image quality evaluating method and system

Also Published As

Publication number Publication date
CN107809631A (en) 2018-03-16


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant