CN112511719B - Method for judging screen content video motion type

Method for judging screen content video motion type

Info

Publication number
CN112511719B
CN112511719B
Authority
CN
China
Prior art keywords
video
frame
determining
video image
segment
Prior art date
Legal status
Active
Application number
CN202011243794.6A
Other languages
Chinese (zh)
Other versions
CN112511719A (en)
Inventor
杨楷芳
蒙琴琴
公衍超
马苗
施姿羽
韩宇婷
Current Assignee
Shaanxi Normal University
Original Assignee
Shaanxi Normal University
Priority date
Filing date
Publication date
Application filed by Shaanxi Normal University
Priority to CN202011243794.6A
Publication of CN112511719A
Application granted
Publication of CN112511719B
Legal status: Active
Anticipated expiration

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/14 Picture signal circuitry for video frequency region
    • H04N 5/144 Movement detection
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N 21/23 Processing of content or additional data; Elementary server operations; Server middleware
    • H04N 21/234 Processing of video elementary streams, e.g. splicing of video streams, manipulating MPEG-4 scene graphs
    • H04N 21/23418 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N 21/44 Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream, rendering scenes according to MPEG-4 scene graphs
    • H04N 21/44008 Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream

Abstract

A method for judging the motion type of a screen content video includes the steps of determining the frame difference of each video image, dividing the video, determining the standard deviation of the video segments, determining the variance of the frame differences of the video segments, and determining the motion type of each video segment. Through video content characteristic analysis, characteristic models representing different motion types are obtained; the input video is divided into video segments; a mathematical model for estimating the motion type contained in a video segment is established using the standard deviation of the video segment and the variance of its frame differences; the motion-type characteristic value of each video segment is obtained and the motion type is judged, which solves the problem of dividing the complex motion types contained in a screen content video. Simulation experiment results show that the method has the advantages of a simple calculation method, high accuracy, good use effect and wide use range, and can be used in fields such as video compression coding, video classification and video retrieval.

Description

Method for judging screen content video motion type
Technical Field
The invention belongs to the technical field of video content characteristic analysis, and particularly relates to a method for judging a motion type contained in a screen content video.
Background
Video content characteristics mainly refer to the visual features perceived by the human eye when the video is watched, including color features, texture features, motion features and the like. Video content characteristic analysis helps in designing video compression coding and video transmission schemes adapted to different video contents, and in completing tasks such as video classification and video retrieval. At present, research on the content characteristics of traditional natural videos is increasingly mature; for example, the spatial characteristics of a video are usually measured by features such as variance and gradient, while its temporal characteristics can be measured by frame differences, optical flow methods, background frame differences or motion vectors.
With the development of multimedia technology, video is no longer limited to natural video shot by traditional cameras; computer-generated screen content video has entered people's lives and is widely used in fields such as medical treatment, traffic, and education and teaching. Screen content video is generally composed of text, graphics, charts, icons and the like. A teacher can obtain teaching screen content videos by screen recording during teaching; such videos usually contain text, graphics and charts and belong to dynamic text-and-graphics screen content video. Compared with natural video shot by a camera, screen content video usually contains bright colors of few kinds, has sharp texture edges, exhibits many abrupt changes, and contains more and less regular motion, so the continuity of the video content is weak. The content characteristics of screen content video therefore differ greatly from those of natural video, and conventional methods for judging the motion types contained in natural video cannot be applied directly to screen content video. For example, in terms of temporal motion, the screen content video generated by screen-recording courseware may contain non-linear motions such as rotation and jumping, whose motion types cannot be obtained by directly applying the conventional frame difference or background frame difference. Therefore, a judgement method for the motion types in screen content video needs to be studied. Effectively estimating the motion types contained in screen content videos is of great significance for subsequent research on video coding and transmission and on video classification and retrieval, and provides strong technical support for online education. To date, no method that effectively identifies the motion types in screen content video has been found in the literature.
Disclosure of Invention
The technical problem to be solved by the invention is to provide a method for judging the motion type of a screen content video which has a simple calculation method, high accuracy, a good use effect and a wide use range.
The technical scheme adopted for solving the technical problems comprises the following steps:
(1) determining frame differences for video images
For a frame of video image, the video image is divided into video image blocks according to a block division mode with width × height of w × w, where w = 2^n, 3 ≤ n ≤ 7, and n is an integer. Starting from the 2nd frame of the video and continuing to the last frame, the frame difference F(i) of each video image in the video is determined according to equation (1):
[Equation (1) is rendered as an image in the original and is not reproduced here.]
where N_1 is the frame width, N_2 is the frame height, x(i, k, t) is the luminance value of the t-th pixel of the k-th video image block in the i-th frame of the video, int() is the lower-integer (floor) function, i, k and t are finite positive integers, and i ≥ 2.
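For illustration only, a minimal Python sketch of this step is given below. Because equation (1) is reproduced only as an image, the block-wise mean-absolute-difference form used here is an assumption, and the function name frame_difference and all other identifiers are illustrative.

```python
# Minimal sketch of step (1), assuming equation (1) is a mean absolute luminance
# difference over co-located w x w blocks of consecutive frames (an assumption;
# the exact equation is an image in the original).
import numpy as np

def frame_difference(prev_y: np.ndarray, cur_y: np.ndarray, w: int = 64) -> float:
    """Block-based frame difference F(i) between two luminance frames.

    prev_y, cur_y : 2-D arrays of shape (N2, N1) holding Y (luminance) values.
    w             : block size, w = 2**n with 3 <= n <= 7 (w = 64 for n = 6).
    """
    n2, n1 = cur_y.shape                      # N_2 = frame height, N_1 = frame width
    kh, kw = n2 // w, n1 // w                 # int(N_2/w) x int(N_1/w) whole blocks
    acc = 0.0
    for by in range(kh):                      # iterate over the k video image blocks
        for bx in range(kw):
            cur_blk = cur_y[by*w:(by+1)*w, bx*w:(bx+1)*w].astype(np.float64)
            prev_blk = prev_y[by*w:(by+1)*w, bx*w:(bx+1)*w].astype(np.float64)
            acc += np.abs(cur_blk - prev_blk).sum()   # sum over the t pixels of block k
    return acc / (kh * kw * w * w)            # normalise by the number of covered pixels
```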
(2) Partitioning video
Starting from the 2nd frame of the video and continuing to the last frame, when the frame difference F_i of the current video image and the frame difference F_{i-1} of the previous frame satisfy 0 ≤ F_{i-1} < ξ and F_i > ξ, with ξ ∈ [0, 0.1], the video is divided into a new video segment in YUV format; when the frame difference F_i of the current video image satisfies the condition F_i > ξ, ξ ∈ [0, 0.1], the video image is written into the video segment. It is then judged whether the current frame is the last frame of the video, until the whole video has been divided.
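A minimal sketch of this partitioning step follows, operating on the list of frame differences produced by the frame_difference sketch above. The text states the boundary condition as 0 ≤ F_{i-1} < ξ, F_i > ξ; since the preferred ξ is 0, a non-strict comparison with ξ is used here, and the handling of frames with F_i ≤ ξ inside an open segment is likewise an interpretation.

```python
# Minimal sketch of step (2): open a new segment when the previous frame was
# (nearly) still and the current frame is moving, and append moving frames to
# the currently open segment. The grouping of still frames is an assumption.
def split_into_segments(frame_diffs, xi: float = 0.0):
    """frame_diffs[i] is F(i) for i = 2..last frame (index 0 corresponds to frame 2).

    Returns a list of segments, each a list of frame indices (2-based, as in the text).
    """
    segments, current = [], []
    prev_f = 0.0
    for offset, f in enumerate(frame_diffs):
        i = offset + 2                         # frame numbering starts at the 2nd frame
        if 0.0 <= prev_f <= xi and f > xi:     # boundary: previous still, current moving
            if current:
                segments.append(current)
            current = [i]
        elif f > xi:                           # frame belongs to the open (moving) segment
            current.append(i)
        prev_f = f
    if current:                                # flush the last segment at the last frame
        segments.append(current)
    return segments
```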
(3) Determining standard deviation of video segments
From the first video segment to the last video segment, the standard deviation S(a) of each video image in the video segment is determined according to equation (2), and the standard deviation E_j of each video segment is determined according to equation (3):
[Equations (2) and (3) are rendered as images in the original and are not reproduced here.]
where x(a, k, t) is the luminance value of the t-th pixel of the k-th video image block in the a-th frame of the video segment, N_j is the total number of frames of the j-th video segment, and a is a finite positive integer.
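A minimal sketch of this step, assuming that equation (2) computes S(a) as the luminance standard deviation of the a-th frame of the segment and that equation (3) aggregates S(a) over the N_j frames by averaging to obtain E_j; both equations are reproduced only as images, so these forms are assumptions and all identifiers are illustrative.

```python
# Minimal sketch of step (3) under the assumptions stated above.
import numpy as np

def frame_std(y: np.ndarray) -> float:
    """S(a): standard deviation of the luminance values of one frame."""
    return float(np.std(y.astype(np.float64)))

def segment_std(segment_frames) -> float:
    """E_j: mean of S(a) over all frames of the j-th segment (assumed aggregation)."""
    s_values = [frame_std(y) for y in segment_frames]
    return float(np.mean(s_values))
```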
(4) Determining variance of video segment frame differences
From the 1st video segment to the last video segment, the frame difference M(a) of each video image in the video segment is determined according to equation (4), and the variance Q_j of the frame differences of each video segment is determined according to equation (5):
[Equations (4) and (5) are rendered as images in the original and are not reproduced here.]
where a ≥ 2.
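A minimal sketch of this step, reusing the frame_difference helper from the step (1) sketch; equations (4) and (5) are reproduced only as images, so taking M(a) as the within-segment frame difference (a ≥ 2) and Q_j as the variance of the M(a) values is an assumption.

```python
# Minimal sketch of step (4); relies on frame_difference() from the step (1) sketch.
import numpy as np

def segment_frame_diff_variance(segment_frames, w: int = 64) -> float:
    """Q_j: variance of the within-segment frame differences M(a), a = 2..N_j."""
    m_values = [frame_difference(segment_frames[a - 1], segment_frames[a], w)
                for a in range(1, len(segment_frames))]   # M(a) for a = 2..N_j
    return float(np.var(m_values)) if m_values else 0.0
```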
(5) Determining a motion type for each video segment
The threshold value Z_j of the motion type of each video segment is determined according to equation (6), and the motion type P_j contained in each video segment is determined according to equation (7):
Z_j = E_j + Q_j    (6)
[Equation (7), which maps the value of Z_j to the motion type P_j, is rendered as an image in the original and is not reproduced here.]
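A minimal sketch of this step: Z_j = E_j + Q_j follows equation (6), while the threshold ranges of equation (7) are reproduced only as an image, so the ranges in the sketch are placeholders and would have to be replaced by the ranges given in the patent.

```python
# Minimal sketch of step (5); the range table is hypothetical and only shows the
# shape of the piecewise mapping of equation (7).
def motion_type(e_j: float, q_j: float) -> str:
    """Return the motion-type label P_j of a segment from E_j and Q_j."""
    z_j = e_j + q_j                                   # equation (6)
    hypothetical_ranges = [                           # NOT the ranges of equation (7)
        (10.0, "motion type 1"),
        (50.0, "motion type 2"),
    ]
    for upper_bound, label in hypothetical_ranges:
        if z_j <= upper_bound:
            return label
    return "motion type 3"
```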
In step (1) of determining the frame difference of the video image, the video image is divided into video image blocks according to a block division mode with width × height of w × w, where w = 2^n and 3 ≤ n ≤ 7.
In step (1) of determining the frame difference of the video image, the video image is divided into video image blocks according to a block division mode with width × height of w × w, where w = 2^n and n is preferably 6.
In step (2) of dividing the video, starting from the 2nd frame of the video and continuing to the last frame, when the frame difference F_i of the current video image and the frame difference F_{i-1} of the previous frame satisfy 0 ≤ F_{i-1} < ξ and F_i > ξ, ξ ∈ [0, 0.1], the video is divided into video segments in YUV format; when the frame difference F_i of the current video image satisfies F_i > ξ, the video image is written into the video segment; the optimal ξ is 0. It is then judged whether the current frame is the last frame of the video, until the whole video has been divided.
According to the method, characteristic models representing different motion types are obtained through video content characteristic analysis, the input video is split into video segments, a mathematical model for estimating the motion type contained in a video segment is established using the standard deviation of the video segment and the variance of its frame differences, the motion-type characteristic value of each video segment is obtained, and the motion type is judged. The inventors carried out simulation experiments with the method, and the results show that the method is simple to compute and highly accurate, and can be used in fields such as video compression coding, video classification and video retrieval.
Drawings
FIG. 1 is a flowchart of example 1 of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the drawings and examples, but the present invention is not limited to the following examples.
Example 1
As shown in FIG. 1, the method for judging the motion type of a screen content video according to this embodiment comprises the following steps:
(1) determining frame differences for video images
For a frame of video image, the video image is divided into video image blocks according to a block division mode with width × height of w × w, where w = 2^n and 3 ≤ n ≤ 7; in this embodiment n takes the value 6. Starting from the 2nd frame of the video and continuing to the last frame, the frame difference F(i) of each video image in the video is determined according to equation (1):
[Equation (1) is rendered as an image in the original and is not reproduced here.]
where N_1 is the frame width, N_2 is the frame height, x(i, k, t) is the luminance value of the t-th pixel of the k-th video image block in the i-th frame of the video, int() is the lower-integer (floor) function, i, k and t are finite positive integers, and i ≥ 2.
(2) Partitioning video
Starting from the 2nd frame of the video and continuing to the last frame, when the frame difference F_i of the current video image and the frame difference F_{i-1} of the previous frame satisfy 0 ≤ F_{i-1} < ξ and F_i > ξ, ξ ∈ [0, 0.1], the video is divided into video segments in YUV format; when the frame difference F_i of the current video image satisfies the condition F_i > ξ, ξ ∈ [0, 0.1], the video image is written into the video segment. In this embodiment ξ takes the value 0. It is then judged whether the current frame is the last frame of the video, until the whole video has been divided.
In this step, the acquired screen content video is divided into video segments in YUV format using MATLAB software (commercial mathematical software produced by MathWorks, USA); Eclipse, Dev-C++ and similar software can also be used to divide the YUV-format video. This facilitates the judgement of the motion type and simplifies the operation steps.
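For completeness, a minimal sketch of reading the luminance (Y) planes needed by the steps above from a raw YUV file is given below; it assumes 8-bit planar 4:2:0 (I420) data, which the text does not specify, and uses the 1280 × 720 resolution of the experimental videos. The function name read_y_planes is illustrative.

```python
# Minimal sketch of extracting Y planes from a raw planar YUV 4:2:0 (I420) file.
import numpy as np

def read_y_planes(path: str, width: int = 1280, height: int = 720):
    """Yield the Y plane of every frame of a raw I420 file as a (height, width) array."""
    y_size = width * height
    frame_size = y_size * 3 // 2            # Y plane + quarter-size U and V planes
    with open(path, "rb") as f:
        while True:
            raw = f.read(frame_size)
            if len(raw) < frame_size:       # stop at end of file / truncated frame
                break
            y = np.frombuffer(raw, dtype=np.uint8, count=y_size)
            yield y.reshape(height, width)
```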
(3) Determining standard deviation of video segments
From the first video segment to the last video segment, the standard deviation S(a) of each video image in the video segment is determined according to equation (2), and the standard deviation E_j of each video segment is determined according to equation (3):
[Equations (2) and (3) are rendered as images in the original and are not reproduced here.]
where x(a, k, t) is the luminance value of the t-th pixel of the k-th video image block in the a-th frame of the video segment, N_j is the total number of frames of the j-th video segment, and a is a finite positive integer.
(4) Determining variance of video segment frame differences
From the 1st video segment to the last video segment, the frame difference M(a) of each video image in the video segment is determined according to equation (4), and the variance Q_j of the frame differences of each video segment is determined according to equation (5):
[Equations (4) and (5) are rendered as images in the original and are not reproduced here.]
where a ≥ 2.
(5) Determining a motion type for each video segment
The threshold value Z_j of the motion type of each video segment is determined according to equation (6), and the motion type P_j contained in each video segment is determined according to equation (7):
Z_j = E_j + Q_j    (6)
[Equation (7) is rendered as an image in the original and is not reproduced here.]
In this step, the value range of the set motion-type threshold Z_j allows the motion type of each video segment to be determined accurately. The method is simple and highly accurate, and when used in the technical fields of video compression coding, video classification and video retrieval it improves the accuracy and operation speed of video coding, video classification and video retrieval.
Example 2
The method for judging the motion type of the screen content video comprises the following steps:
(1) determining frame differences for video images
For a frame of video image, the video image is divided into video image blocks according to a block division mode with width × height of w × w, where w = 2^n and 3 ≤ n ≤ 7; in this embodiment n takes the value 3. Starting from the 2nd frame of the video and continuing to the last frame, the frame difference F(i) of each video image in the video is determined according to equation (1):
[Equation (1) is rendered as an image in the original and is not reproduced here.]
where N_1 is the frame width, N_2 is the frame height, x(i, k, t) is the luminance value of the t-th pixel of the k-th video image block in the i-th frame of the video, int() is the lower-integer (floor) function, i, k and t are finite positive integers, and i ≥ 2.
(2) Partitioning video
Starting from the 2nd frame of the video and continuing to the last frame, when the frame difference F_i of the current video image and the frame difference F_{i-1} of the previous frame satisfy 0 ≤ F_{i-1} < ξ and F_i > ξ, ξ ∈ [0, 0.1], the video is divided into video segments in YUV format; when the frame difference F_i of the current video image satisfies the condition F_i > ξ, ξ ∈ [0, 0.1], the video image is written into the video segment. In this embodiment ξ takes the value 0.05. It is then judged whether the current frame is the last frame of the video, until the whole video has been divided.
The other steps were the same as in example 1.
Example 3
The method for judging the motion type of the screen content video comprises the following steps:
(1) determining frame differences for video images
For a frame of video image, the video image is divided into video image blocks according to a block division mode with width × height of w × w, where w = 2^n and 3 ≤ n ≤ 7; in this embodiment n takes the value 7. Starting from the 2nd frame of the video and continuing to the last frame, the frame difference F(i) of each video image in the video is determined according to equation (1):
[Equation (1) is rendered as an image in the original and is not reproduced here.]
where N_1 is the frame width, N_2 is the frame height, x(i, k, t) is the luminance value of the t-th pixel of the k-th video image block in the i-th frame of the video, int() is the lower-integer (floor) function, i, k and t are finite positive integers, and i ≥ 2.
(2) Partitioning video
Starting from the 2nd frame of the video and continuing to the last frame, when the frame difference F_i of the current video image and the frame difference F_{i-1} of the previous frame satisfy 0 ≤ F_{i-1} < ξ and F_i > ξ, ξ ∈ [0, 0.1], the video is divided into video segments in YUV format; when the frame difference F_i of the current video image satisfies the condition F_i > ξ, ξ ∈ [0, 0.1], the video image is written into the video segment. In this embodiment ξ takes the value 0.1. It is then judged whether the current frame is the last frame of the video, until the whole video has been divided.
The other steps were the same as in example 1.
In order to verify the beneficial effects of the present invention, the inventors performed experiments by using the method of the present invention in example 1, and the experimental conditions were as follows:
In the experimental process, the inventors generated screen content videos of different durations by screen recording, obtaining 8 screen content videos in total: education video 1, education video 2, education video 3, education video 4, education video 5, education video 6, education video 7 and education video 8. The videos are in YUV format with a resolution of 1280 × 720. The detailed information of each video is shown in Table 1.
Table 1 details of the acquired 8 screen content videos
[Table 1 is rendered as an image in the original and is not reproduced here.]
The results of identifying the motion types contained in the 80 video segments by applying the method for judging the motion type of a screen content video of the present invention are shown in Table 2.
Table 2 results of testing different types of movements by the method of example 1
[Table 2 is rendered as an image in the original and is not reproduced here.]
As can be seen from Table 2, the number of 'show' identifications is 12, with an accuracy of 100%; the number of 'cut-out' identifications is 9, with an accuracy of 100%; the number of 'conveyor belt' identifications is 4, with an accuracy of 100%; the number of 'boosting' identifications is 10, with an accuracy of 100%; the number of 'translation' identifications is 8, with an accuracy of 100%; the number of 'switching' identifications is 6, with an accuracy of 85.71%; the number of 'honeycomb' identifications is 4, with an accuracy of 80%; the number of 'scaling' identifications is 3, with an accuracy of 75%; the number of 'fragment' identifications is 10, with an accuracy of 100%; and the number of 'vortex' identifications is 11, with an accuracy of 100%. In total, 77 video segments were successfully identified, giving an overall identification accuracy of 96.25%.
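As a check on the totals: 12 + 9 + 4 + 10 + 8 + 6 + 4 + 3 + 10 + 11 = 77 correctly identified segments, and 77 / 80 = 0.9625, which matches the stated overall accuracy of 96.25%.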
For all test videos, the accuracy of the method for judging the motion type of a screen content video of the present invention reaches 96.25%. The method can effectively identify the motion types of screen content videos, and has important application value for subsequent research on video compression coding and video communication transmission and for the rapid classification and retrieval of screen content videos by motion type.

Claims (2)

1. A method for judging the motion type of a screen content video, characterized by comprising the following steps:
(1) determining frame differences for video images
for a frame of video image, dividing the video image into video image blocks according to a block division mode with width × height of w × w, where w = 2^n, 3 ≤ n ≤ 7, and n is an integer, and determining, from the 2nd frame of the video to the last frame of the video, the frame difference F(i) of each video image in the video according to equation (1):
[Equation (1) is rendered as an image in the original and is not reproduced here.]
where N_1 is the frame width, N_2 is the frame height, x(i, k, t) is the luminance value of the t-th pixel of the k-th video image block in the i-th frame of the video, int() is the lower-integer (floor) function, i, k and t are finite positive integers, and i ≥ 2;
(2) partitioning video
from the 2nd frame of the video to the last frame of the video, when the frame difference F_i of the current video image and the frame difference F_{i-1} of the previous frame satisfy 0 ≤ F_{i-1} < ξ and F_i > ξ, ξ ∈ [0, 0.1], dividing the video into video segments in YUV format; when the frame difference F_i of the current video image satisfies the condition F_i > ξ, ξ ∈ [0, 0.1], writing the video image into the video segment, and judging whether the current frame is the last frame of the video, until the whole video has been divided;
(3) determining standard deviation of video segments
from the first video segment to the last video segment, determining the standard deviation S(a) of each video image in the video segment according to equation (2), and determining the standard deviation E_j of each video segment according to equation (3):
[Equations (2) and (3) are rendered as images in the original and are not reproduced here.]
where x(a, k, t) is the luminance value of the t-th pixel of the k-th video image block in the a-th frame of the video segment, N_j is the total number of frames of the j-th video segment, and a is a finite positive integer;
(4) determining variance of video segment frame differences
from the 1st video segment to the last video segment, determining the frame difference M(a) of each video image in the video segment according to equation (4), and determining the variance Q_j of the frame differences of each video segment according to equation (5):
[Equations (4) and (5) are rendered as images in the original and are not reproduced here.]
where a ≥ 2;
(5) determining a motion type for each video segment
determining the threshold value Z_j of the motion type of each video segment according to equation (6), and determining the motion type P_j contained in each video segment according to equation (7):
Z_j = E_j + Q_j    (6)
[Equation (7) is rendered as an image in the original and is not reproduced here.]
2. The method for judging the motion type of a screen content video according to claim 1, characterized in that: in step (1) of determining the frame difference of the video image, for one frame of video image, the video image is divided into video image blocks according to a block division mode with width × height of w × w, where w = 2^n and n = 6.
CN202011243794.6A 2020-11-10 2020-11-10 Method for judging screen content video motion type Active CN112511719B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011243794.6A CN112511719B (en) 2020-11-10 2020-11-10 Method for judging screen content video motion type

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011243794.6A CN112511719B (en) 2020-11-10 2020-11-10 Method for judging screen content video motion type

Publications (2)

Publication Number Publication Date
CN112511719A CN112511719A (en) 2021-03-16
CN112511719B (en) 2021-11-26

Family

ID=74955772

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011243794.6A Active CN112511719B (en) 2020-11-10 2020-11-10 Method for judging screen content video motion type

Country Status (1)

Country Link
CN (1) CN112511719B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106412619A (en) * 2016-09-28 2017-02-15 江苏亿通高科技股份有限公司 HSV color histogram and DCT perceptual hash based lens boundary detection method

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR100698106B1 (en) * 2000-03-07 2007-03-26 엘지전자 주식회사 A hierarchical hybrid shot change detection method for mpeg-compressed video
JP2002152690A (en) * 2000-11-15 2002-05-24 Yamaha Corp Scene change point detecting method, scene change point presenting device, scene change point detecting device, video reproducing device and video recording device
DE102007028175A1 (en) * 2007-06-20 2009-01-02 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Automated method for temporal segmentation of a video into scenes taking into account different types of transitions between image sequences
EP2408190A1 (en) * 2010-07-12 2012-01-18 Mitsubishi Electric R&D Centre Europe B.V. Detection of semantic video boundaries
CN102800095B (en) * 2012-07-17 2014-10-01 南京来坞信息科技有限公司 Lens boundary detection method
CN102833492B (en) * 2012-08-01 2016-12-21 天津大学 A kind of video scene dividing method based on color similarity
CN108010044B (en) * 2016-10-28 2021-06-15 央视国际网络无锡有限公司 Video boundary detection method
CN110210379A (en) * 2019-05-30 2019-09-06 北京工业大学 A kind of lens boundary detection method of combination critical movements feature and color characteristic
CN110263729A (en) * 2019-06-24 2019-09-20 腾讯科技(深圳)有限公司 A kind of method of shot boundary detector, model training method and relevant apparatus

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106412619A (en) * 2016-09-28 2017-02-15 江苏亿通高科技股份有限公司 HSV color histogram and DCT perceptual hash based lens boundary detection method

Also Published As

Publication number Publication date
CN112511719A (en) 2021-03-16

Similar Documents

Publication Publication Date Title
Ni et al. ESIM: Edge similarity for screen content image quality assessment
Wang et al. Utility-driven adaptive preprocessing for screen content video compression
WO2022188282A1 (en) Three-dimensional fluid reverse modeling method based on physical perception
US8582952B2 (en) Method and apparatus for identifying video transitions
Wang et al. A fast single-image dehazing method based on a physical model and gray projection
CN108134937B (en) Compressed domain significance detection method based on HEVC
Nafchi et al. CorrC2G: Color to gray conversion by correlation
US20110007968A1 (en) Image evaluation method, image evaluation system and program
CN111681177B (en) Video processing method and device, computer readable storage medium and electronic equipment
Gao et al. Detail preserved single image dehazing algorithm based on airlight refinement
CN108229346B (en) Video summarization using signed foreground extraction and fusion
CN108564057B (en) Method for establishing person similarity system based on opencv
CN111507997A (en) Image segmentation method, device, equipment and computer storage medium
Zeng et al. Visual attention guided pixel-wise just noticeable difference model
CN117237279A (en) Blind quality evaluation method and system for non-uniform distortion panoramic image
CN112511719B (en) Method for judging screen content video motion type
CN111510707B (en) Full-reference screen video quality evaluation method based on space-time Gabor feature tensor
Xu et al. Improving content visibility for high‐ambient‐illumination viewable display and energy‐saving display
CN111429375A (en) Night monitoring video quality improving method assisted by daytime image reference
Yue et al. Subjective quality assessment of animation images
Li et al. Perceptual redundancy model for compression of screen content videos
CN111539420B (en) Panoramic image saliency prediction method and system based on attention perception features
US20220051382A1 (en) Techniques for training a perceptual quality model to account for brightness and color distortions in reconstructed videos
Denes et al. Predicting visible flicker in temporally changing images
CN113837047A (en) Video quality evaluation method, system, computer equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant