CN109766838B - Gait cycle detection method based on convolutional neural network - Google Patents

Gait cycle detection method based on convolutional neural network

Info

Publication number
CN109766838B
CN109766838B · Application CN201910026947.2A
Authority
CN
China
Prior art keywords
gait
neural network
frame
convolutional neural
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201910026947.2A
Other languages
Chinese (zh)
Other versions
CN109766838A (en)
Inventor
王科俊
丁欣楠
李伊龙
周石冰
徐怡博
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Engineering University
Original Assignee
Harbin Engineering University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Engineering University filed Critical Harbin Engineering University
Priority to CN201910026947.2A
Publication of CN109766838A
Application granted
Publication of CN109766838B


Landscapes

  • Image Analysis (AREA)

Abstract

The invention provides a gait cycle detection method based on a convolutional neural network. A gait video is first preprocessed through the image preprocessing operations of video decoding, pedestrian contour extraction, and centroid normalization; a convolutional neural network is then trained to extract gait periodicity features; finally, the gait video frame sequence to be detected is fed into the network, the output waveform is filtered, and the positions of adjacent wave crests and troughs determine the gait cycle. The method is strongly robust to changes in viewing angle, clothing, and carried objects, and solves the problem that the gait cycle is difficult to detect at front and back viewing angles. It is of practical significance for improving gait recognition accuracy in complex environments, can serve as the front end of gait recognition, and is suitable for identity recognition in security surveillance, human-computer interaction, medical diagnosis, access control systems, and the like.

Description

Gait cycle detection method based on convolutional neural network
Technical Field
The invention belongs to the field of computer vision, and particularly relates to a gait cycle detection method based on a convolutional neural network.
Background
Compared with other biometric recognition modalities, gait recognition can collect data and identify a subject at a distance without the subject's awareness. Gait cycle detection is an unavoidable step in gait recognition: a gait recognition algorithm with a good recognition rate is built on completely segmented gait cycles. Moreover, because gait recognition is covert, data acquisition is highly unconstrained; the pedestrian's direction relative to the camera and the pedestrian's clothing can be arbitrary, which increases the difficulty of cycle detection.
The development of gait cycle detection technology has accompanied the development of gait recognition. Most existing methods take the pedestrian's width as the feature for gait cycle detection. Cycle detection using the width and height of the body was proposed early in the literature (Silhouette-Based Human Identification from Body Shape and Gait. IEEE International Conference on Automatic Face and Gesture Recognition, 2002: 366-372), but that method is strongly affected by the distance between the pedestrian and the camera. Building on it, the literature (Gait recognition with transfer mapping. Journal of Visual Communication and Image Representation, 2015, 33(C): 69-77) proposed gait cycle detection with a normalized single-contour width feature, but the method hardly works at front and back viewing angles. The literature (Silhouette Analysis-Based Gait Recognition for Human Identification. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2003, 25(12): 1505-1518) proposed using the aspect ratio of the gait contour for cycle detection, which avoids the influence of height normalization on pedestrian width. The literature (Human Identification Using Temporal Information Preserving Gait Template. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2012, 34(11): 2164-2176) avoids the effect of carried objects on pedestrian width by extracting the average lower-limb width of each frame to represent the frame's position within a complete gait sequence. In general, methods based on body-width features detect the gait cycle effectively at the 90° side view, but their error is large at front and back viewing angles, where they may fail entirely. The literature (The humanID gait challenge problem: data sets, performance, and analysis. IEEE Transactions on Pattern Analysis & Machine Intelligence, 2005, 27(2): 162-177) estimates the gait period from the periodic variation in the number of foreground pixels in the lower part of the silhouette, but it likewise has large errors at front and back viewing angles. The literature (Dual-ellipse fitting approach for robust gait periodicity detection. Neurocomputing, 2012, 79(3): 173-178) takes the centroid as the dividing point, splits the pedestrian contour into left and right halves along the vertical direction of the image, fits each half with an ellipse tangent to the contour, computes the eccentricity of each ellipse, and uses the sum of the two eccentricities as the periodicity feature of the frame image. It achieves higher accuracy at the 90° side, front, and back viewing angles, but has larger errors at oblique viewing angles such as 18° and 36°.
In recent years, deep learning has developed rapidly, and convolutional neural networks are widely used in computer vision as effective image feature extractors. Inspired by this, a gait cycle detection method based on a convolutional neural network is proposed: a convolutional neural network is trained to extract the periodic gait feature of each frame, the feature is used to locate the current frame's position within the gait cycle, and the gait cycle detection is thereby completed. The method is highly robust and detects accurate gait cycles across different viewing angles, clothing, and carried objects.
Disclosure of Invention
The invention aims to provide a gait cycle detection method based on a convolutional neural network that is highly robust and can detect accurate gait cycles under different viewing angles and different clothing and carried objects.
The purpose of the invention is realized as follows:
a gait cycle detection method based on a convolutional neural network comprises the following specific implementation steps:
step 1, preprocessing a gait video, including the image preprocessing operations of video decoding, pedestrian contour extraction, and centroid normalization;
step 2, training a convolutional neural network for extracting gait periodic characteristics;
and 3, sending the gait video frame sequence to be detected into a convolutional neural network, outputting a waveform, filtering the waveform, and determining the positions of adjacent wave crests and wave troughs to obtain a gait cycle.
The step 2 specifically comprises the following steps:
step 2.1, the position of each video frame within its gait cycle in the training set is quantized to a numerical value and recorded as the frame's label; the label value is calculated as follows
L_i = sin(2πn/N)
where L_i is the label value of the i-th frame in the gait video, N indicates that the gait cycle containing the i-th frame comprises N frames, and n indicates that the i-th frame is the n-th frame in that cycle;
step 2.2, sending the marked video frame into a convolutional neural network to obtain an output value;
and step 2.3, calculating the error between the output value and the label, and training through multiple iterations of error back-propagation and stochastic gradient descent until the error no longer decreases; the error is calculated as
E = (1/m) · Σ_{i=1}^{m} (L_i - L̂_i)^2
where m is the batch size of the input network, i.e., each batch contains m images, and L̂_i is the network's estimate of the corresponding video frame's label;
and 2.4, storing and copying the trained convolutional neural network.
The convolutional neural network in step 2 comprises a plurality of convolutional layers and at least one fully-connected layer connected to the last convolutional layer; the output layer following the last fully-connected layer is a single neuron.
The step 1 specifically comprises the following steps:
step 1.1, performing framing processing on a video sequence, wherein the sequence after framing is an image sequence arranged according to a time sequence;
step 1.2, carrying out gray level transformation on the image sequence containing the pedestrians and the background sequence, estimating the background of the whole sequence by adopting a median method, carrying out binarization to obtain a gait contour image,
D_k(x,y) = |I_k(x,y) - M_k(x,y)|
B_k(x,y) = { 1, if D_k(x,y) > T;  0, otherwise }
where I_k(x, y) is the gray value at pixel (x, y) of the k-th frame of the video sequence, M_k(x, y) is the background gray value there, D_k(x, y) is the background difference image, and T is the selected binarization threshold;
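For illustration, the median-background estimation and binarization of steps 1.1 and 1.2 can be sketched in Python as follows; the function name and the threshold value T = 30 are assumptions of the sketch, not values fixed by the method.

```python
# Sketch of steps 1.1-1.2: median-method background estimation and
# binarization. Assumes the video has already been decoded into a list of
# grayscale frames; the threshold T = 30 is an illustrative choice.
import numpy as np

def extract_silhouettes(frames, T=30):
    stack = np.stack(frames).astype(np.float32)   # (K, H, W) frame stack
    background = np.median(stack, axis=0)         # background M(x, y) by the median method
    diff = np.abs(stack - background)             # background difference D_k(x, y)
    return (diff > T).astype(np.uint8)            # binarized gait contours B_k(x, y)
```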
step 1.3, contour normalization: all contours are uniformly scaled to a consistent height, and the input to pedestrian contour normalization is the content of the rectangle tangent to the pedestrian contour in each video frame; for all cropped contour images in the training set, each image height is traversed and compared with the standard height; the standard height at a given viewing angle is H, the viewing angle contains K frames in total, the height of each frame in temporal order is h_k, k = 1, 2, ..., K, and the magnification of each frame is a_k = h_k/H; each frame at that viewing angle is then scaled by its corresponding magnification a_k using the bilinear interpolation algorithm,
f_a = f(x,y) + (f(x+1,y) - f(x,y)) × p
f_b = f(x,y+1) + (f(x+1,y+1) - f(x,y+1)) × p
where f(x, y) is the gray value at coordinate (x, y) before interpolation, and p and q are the interpolation weights; a second linear interpolation is then performed, and the result at (x, y) is calculated as
g(x,y) = f_a + (f_b - f_a) × q
       = (1-p)(1-q)·f(x,y) + p(1-q)·f(x+1,y) + q(1-p)·f(x,y+1) + pq·f(x+1,y+1)
where g(x, y) is the gray value at coordinate (x, y) after interpolation.
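A minimal sketch of the contour normalization in step 1.3 follows, assuming OpenCV is available: cv2.resize in bilinear mode stands in for the interpolation formulas above, the horizontal centering on a fixed-width canvas is an assumed stand-in for the centroid normalization, and the 128 × 88 output size follows the embodiment described below.

```python
# Sketch of step 1.3: crop the rectangle tangent to the contour, scale it to
# the standard height with bilinear interpolation, and center it horizontally.
import cv2
import numpy as np

def normalize_silhouette(mask, out_h=128, out_w=88):
    ys, xs = np.nonzero(mask)                                  # tangent rectangle of the contour
    crop = mask[ys.min():ys.max() + 1, xs.min():xs.max() + 1]
    new_w = max(1, int(round(crop.shape[1] * out_h / crop.shape[0])))
    resized = cv2.resize(crop, (new_w, out_h),                 # bilinear scaling to standard height
                         interpolation=cv2.INTER_LINEAR)
    canvas = np.zeros((out_h, out_w), dtype=resized.dtype)
    w = min(out_w, new_w)                                      # clip if wider than the canvas
    src0, dst0 = (new_w - w) // 2, (out_w - w) // 2
    canvas[:, dst0:dst0 + w] = resized[:, src0:src0 + w]       # horizontal centering (assumed)
    return canvas
```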
The beneficial effects of the invention are as follows: the method is strongly robust to changes in viewing angle, clothing, and carried objects, and solves the problem that the gait cycle is difficult to detect at front and back viewing angles. Since most existing gait recognition techniques are built on accurately segmented gait cycles, the method is of practical significance for improving gait recognition accuracy in complex environments; it can serve as the front end of gait recognition and is suitable for identity recognition in security surveillance, human-computer interaction, medical diagnosis, access control systems, and the like.
Drawings
FIG. 1 is a flow chart of the present invention.
Fig. 2 is an effect diagram of foreground and background separation.
Fig. 3 shows a gait cycle comprising 24 frames and the corresponding tag values.
Fig. 4 is an example of an output waveform and a filtered waveform.
Detailed Description
The invention is further described with reference to the accompanying drawings:
example 1
The method is illustrated with a large gait recognition database containing gait video sequences of 124 subjects, each with 110 video segments covering different viewing angles, clothing, and carried objects.
Step 1, preprocessing the gait video, including the image preprocessing operations of video decoding, pedestrian contour extraction, and centroid normalization. The video sequence is first framed; the framed sequence is an ordered image sequence in temporal order. Gray-level transformation is applied to the image sequence containing pedestrians and to the background sequence. For an indoor environment with essentially constant illumination, the background of the whole sequence is estimated by the median method; the background image is subtracted from the foreground image, and binarization yields the gait contour image. Let the gray value at pixel (x, y) of the k-th frame of the video sequence be I_k(x, y) and the background gray value there be M_k(x, y); then the background difference image D_k(x, y) and the binarization result B_k(x, y) are, respectively:
D_k(x,y) = |I_k(x,y) - M_k(x,y)|
B_k(x,y) = { 1, if D_k(x,y) > T;  0, otherwise }
where T is the selected binarization threshold; the process is shown in Fig. 2. Contour normalization uniformly scales all contours to a consistent height, which prevents the size of the gait contour sequence from changing with depth of field as the pedestrian's direction and distance to the camera change. For each video frame, a rectangular box tangent to the pedestrian contour on all four sides is used, and the differently sized regions enclosed by the box serve as the input to pedestrian contour normalization. For all cropped contour images in the training set, each image height is traversed and compared with the standard height. Let the standard height at a given viewing angle be H, let the viewing angle contain K frames, and let the height of each frame in temporal order be h_k, k = 1, 2, ..., K; then the magnification of each frame is
a_k = h_k/H
Each frame at that viewing angle is then scaled by its corresponding magnification a_k using the bilinear interpolation algorithm,
f_a = f(x,y) + (f(x+1,y) - f(x,y)) × p
f_b = f(x,y+1) + (f(x+1,y+1) - f(x,y+1)) × p
where f(x, y) is the gray value at coordinate (x, y) before interpolation; similarly, f(x+1, y), f(x, y+1), and f(x+1, y+1) are the gray values at those coordinates before interpolation, and p and q are the interpolation weights. Then, from f_a and f_b, a second linear interpolation gives the result at (x, y):
g(x,y) = f_a + (f_b - f_a) × q
       = (1-p)(1-q)·f(x,y) + p(1-q)·f(x+1,y) + q(1-p)·f(x,y+1) + pq·f(x+1,y+1)
where g (x, y) is the gray value at the interpolated coordinate (x, y). The normalization operation is completed, and a grayscale video frame with the size of 128 × 88 after normalization is obtained.
Step 2, training the convolutional neural network to extract the periodic gait features.
Step 2.1, quantizing each video frame according to the position of the video frame in the gait cycle, and marking the video frame as a label of the video frame;
specifically, a sinusoidal signal with the period of 1 and the amplitude of 1 is selected as a low-dimensional signal for representing the periodicity of a gait sequence. In a gait contour sequence, a video frame with two closed legs and a right foot with a tendency of stepping forward is defined as a starting position, after a period of sequence, the two closed legs are closed again, and a frame with the right foot with the tendency of stepping forward is defined as an ending position of the period. At this time, the frame image may be regarded as the end of the previous period, and may be regarded as the start of the period image. After the initial position and the final position are located, the label value of the intermediate position is calculated according to the sine function of the average value, and then the label value of each frame in the training set is:
L_i = sin(2πn/N)
where L_i is the label value of the i-th frame in the gait video, N indicates that the gait cycle containing the frame comprises N frames, and n indicates that the frame is the n-th frame in that cycle. Fig. 3 shows an example of the label values for a gait cycle comprising 24 frames.
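As a short illustration of this labeling rule, the N frames of one cycle are mapped onto a single period of a unit-amplitude sine; the helper below (a hypothetical name) reproduces the 24-frame example of Fig. 3.

```python
# Sketch of the step 2.1 labeling rule: the n-th frame of an N-frame cycle
# receives the label sin(2*pi*n/N), a sine of period 1 and amplitude 1.
import numpy as np

def cycle_labels(N):
    n = np.arange(1, N + 1)
    return np.sin(2 * np.pi * n / N)

print(np.round(cycle_labels(24), 3))  # labels for the 24-frame cycle of Fig. 3
```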
Step 2.2, sending the video frame into a convolutional neural network to obtain an output value;
and (3) feeding the marked video frames in the training set into a convolutional neural network, wherein the structure of the convolutional neural network can be as follows: inputting a gray image of 128 multiplied by 88, wherein the first 6 layers are respectively the combination of 3 convolution layers and a pooling layer; the first layer is a convolution layer with 64 convolution kernels of 5 × 5 and the step size is 1, and the second layer is a pooling layer with kernels of 3 × 3 and the step size is 2; the third layer is a convolution layer with 64 convolution kernels with the length of 1, the third layer is a pooling layer with kernels with the length of 3 multiplied by 3 and the length of 2; the fifth layer is a convolution layer with 64 convolution kernels with the length of 1, the 3 x 3, and the sixth layer is a pooling layer with kernels with the length of 3 x 3 and the length of 2; tier 7, 8 and 9 are fully connected tiers containing 1024, 256 and 1 nodes, respectively.
Step 2.3, calculating the error between the output value and the label, and training the network through multiple iterations of error back-propagation and stochastic gradient descent; the error is computed as the Mean Squared Error (MSE)
E = (1/m) · Σ_{i=1}^{m} (L_i - L̂_i)^2
where m is the batch size of the input network, i.e., each batch contains m images (video frames), and L̂_i is the network's estimate of the corresponding video frame's label, i.e., the output of the neural network. Training proceeds through multiple iterations until the error no longer decreases.
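A minimal training-loop sketch of steps 2.2-2.3 under the same PyTorch assumptions follows; the learning rate, momentum, and epoch count are illustrative, not taken from the patent.

```python
# Sketch of steps 2.2-2.3: regress the network output onto the sine labels
# with mean squared error and stochastic gradient descent.
import torch

def train(model, loader, epochs=50, lr=1e-3):
    opt = torch.optim.SGD(model.parameters(), lr=lr, momentum=0.9)
    mse = torch.nn.MSELoss()
    for _ in range(epochs):                      # iterate until the error stops decreasing
        for frames, labels in loader:            # frames: (m, 1, 128, 88); labels: (m,)
            opt.zero_grad()
            loss = mse(model(frames), labels)    # E = (1/m) * sum_i (L_i - L_hat_i)^2
            loss.backward()                      # error back-propagation
            opt.step()                           # stochastic gradient descent update
```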
Step 2.4, saving and duplicating the convolutional neural network trained in step 2.3 to obtain the convolutional neural network for gait periodicity feature extraction and sine-function regression;
Step 3: after the preprocessing of step 1, the gait video frame sequence to be detected is fed into the convolutional neural network, and a waveform is drawn with the frame index as the horizontal axis and the network output as the vertical axis. After the output waveform is filtered, the gait cycle is obtained by locating adjacent wave crests (or troughs). As shown in Fig. 4, adjacent crests and troughs are easily read off the filtered network output, which yields the gait cycle.
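As an illustrative sketch of this step, assuming SciPy is available (the median-filter length and minimum peak distance are assumptions), the gait period can be read off as the spacing of adjacent peaks of the filtered output waveform:

```python
# Sketch of step 3: filter the per-frame network outputs and take the spacing
# of adjacent peaks (equivalently, troughs) as the gait period in frames.
import numpy as np
from scipy.signal import find_peaks, medfilt

def gait_period(outputs, kernel=5, min_dist=10):
    wave = medfilt(np.asarray(outputs, dtype=float), kernel_size=kernel)
    peaks, _ = find_peaks(wave, distance=min_dist)
    if len(peaks) < 2:
        return None                        # fewer than two crests: no full cycle observed
    return float(np.mean(np.diff(peaks)))  # average frame count between adjacent crests
```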

Claims (2)

1. A gait cycle detection method based on a convolutional neural network is characterized by comprising the following specific implementation steps:
step 1, preprocessing a gait video, including the image preprocessing operations of video decoding, pedestrian contour extraction, and centroid normalization;
step 1.1, performing framing processing on a video sequence, wherein the sequence after framing is an image sequence arranged according to a time sequence;
step 1.2, carrying out gray level transformation on an image sequence containing pedestrians and a background sequence, estimating the background of the whole sequence by adopting a median method, and carrying out binarization to obtain a gait contour image;
D_k(x,y) = |I_k(x,y) - M_k(x,y)|
B_k(x,y) = { 1, if D_k(x,y) > T;  0, otherwise }
wherein I_k(x, y) is the gray value at pixel (x, y) of the k-th frame of the video sequence; M_k(x, y) is the background gray value there; D_k(x, y) is the background difference image; T is the selected binarization threshold;
step 1.3, contour normalization: all contours are uniformly scaled to a consistent height, and the input to pedestrian contour normalization is the content of the rectangle tangent to the pedestrian contour in each video frame; for all cropped contour images in the training set, each image height is traversed and compared with the standard height; the standard height at a given viewing angle is H, the viewing angle contains K frames in total, the height of each frame in temporal order is h_k, k = 1, 2, ..., K, and the magnification of each frame is a_k = h_k/H; each frame at that viewing angle is then scaled by its corresponding magnification a_k using a bilinear interpolation algorithm;
f_a = f(x,y) + (f(x+1,y) - f(x,y)) × p
f_b = f(x,y+1) + (f(x+1,y+1) - f(x,y+1)) × p
wherein f(x, y) is the gray value at coordinate (x, y) before interpolation; p and q are weights; a second linear interpolation is performed, and the interpolation result at (x, y) is calculated as:
g(x,y) = f_a + (f_b - f_a) × q = (1-p)(1-q)·f(x,y) + p(1-q)·f(x+1,y) + q(1-p)·f(x,y+1) + pq·f(x+1,y+1)
wherein g(x, y) is the gray value at coordinate (x, y) after interpolation;
step 2, training a convolutional neural network for extracting gait periodic characteristics;
step 2.1, the position of each video frame within its gait cycle in the training set is quantized to a numerical value and recorded as the frame's label; the label value is calculated as follows:
L_i = sin(2πn/N)
wherein L_i is the label value of the i-th frame in the gait video; N indicates that the gait cycle containing the i-th frame comprises N frames; n indicates that the i-th frame is the n-th frame in that cycle;
step 2.2, sending the marked video frame into a convolutional neural network to obtain an output value;
and step 2.3, calculating the error between the output value and the label, and training through multiple iterations of error back-propagation and stochastic gradient descent until the error no longer decreases; the error is calculated as
E = (1/m) · Σ_{i=1}^{m} (L_i - L̂_i)^2
wherein m is the batch size of the input network, i.e., each batch contains m images, and L̂_i is the network's estimate of the corresponding video frame's label;
step 2.4, storing and copying the trained convolutional neural network;
and 3, sending the gait video frame sequence to be detected into a convolutional neural network, outputting a waveform, filtering the waveform, and determining the positions of adjacent wave crests and wave troughs to obtain a gait cycle.
2. The gait cycle detection method based on the convolutional neural network as claimed in claim 1, characterized in that: the convolutional neural network structure in the step 2 comprises a plurality of convolutional layers and at least one fully-connected layer connected with the last convolutional layer, wherein the last connected output layer of the fully-connected layer is a single neuron.
CN201910026947.2A 2019-01-11 2019-01-11 Gait cycle detection method based on convolutional neural network Expired - Fee Related CN109766838B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910026947.2A CN109766838B (en) 2019-01-11 2019-01-11 Gait cycle detection method based on convolutional neural network

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910026947.2A CN109766838B (en) 2019-01-11 2019-01-11 Gait cycle detection method based on convolutional neural network

Publications (2)

Publication Number Publication Date
CN109766838A CN109766838A (en) 2019-05-17
CN109766838B (en) 2022-04-12

Family

ID=66453724

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910026947.2A Expired - Fee Related CN109766838B (en) 2019-01-11 2019-01-11 Gait cycle detection method based on convolutional neural network

Country Status (1)

Country Link
CN (1) CN109766838B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110598540B (en) * 2019-08-05 2021-12-03 华中科技大学 Method and system for extracting gait contour map in monitoring video
CN110765925B (en) * 2019-10-18 2023-05-09 河南大学 Method for detecting carrying object and identifying gait based on improved twin neural network
CN110796100B (en) * 2019-10-31 2022-06-07 浙江大华技术股份有限公司 Gait recognition method and device, terminal and storage device
CN112989889B (en) * 2019-12-17 2023-09-12 中南大学 Gait recognition method based on gesture guidance
US11980790B2 (en) 2020-05-06 2024-05-14 Agile Human Performance, Inc. Automated gait evaluation for retraining of running form using machine learning and digital video data
CN112329716A (en) * 2020-11-26 2021-02-05 重庆能源职业学院 Pedestrian age group identification method based on gait characteristics
CN113963437A (en) * 2021-10-15 2022-01-21 武汉众智数字技术有限公司 Gait recognition sequence acquisition method and system based on deep learning

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107122707A (en) * 2017-03-17 2017-09-01 山东大学 Video pedestrian based on macroscopic features compact representation recognition methods and system again
CN108460340A (en) * 2018-02-05 2018-08-28 北京工业大学 A kind of gait recognition method based on the dense convolutional neural networks of 3D

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10291460B2 (en) * 2012-12-05 2019-05-14 Origin Wireless, Inc. Method, apparatus, and system for wireless motion monitoring

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107122707A (en) * 2017-03-17 2017-09-01 山东大学 Video pedestrian based on macroscopic features compact representation recognition methods and system again
CN108460340A (en) * 2018-02-05 2018-08-28 北京工业大学 A kind of gait recognition method based on the dense convolutional neural networks of 3D

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Recognition of Walking Activity and Prediction of Gait Periods with a CNN and First-Order MC Strategy; Uriel Martinez-Hernandez et al.; 2018 7th IEEE International Conference on Biomedical Robotics and Biomechatronics (Biorob); 2018-10-11; pp. 897-902 *
Gait recognition method based on convolutional neural network and incomplete gait cycles; Tang Rongshan et al.; Communications Technology; 2018-12-31; Vol. 51, No. 12, pp. 2980-2985 *

Also Published As

Publication number Publication date
CN109766838A (en) 2019-05-17

Similar Documents

Publication Publication Date Title
CN109766838B (en) Gait cycle detection method based on convolutional neural network
Liu et al. Attribute-aware face aging with wavelet-based generative adversarial networks
CN110084156B (en) Gait feature extraction method and pedestrian identity recognition method based on gait features
CN108805093B (en) Escalator passenger tumbling detection method based on deep learning
CN107967695B (en) A kind of moving target detecting method based on depth light stream and morphological method
CN105447441B (en) Face authentication method and device
CN105825183B (en) Facial expression recognizing method based on partial occlusion image
CN109409190A (en) Pedestrian detection method based on histogram of gradients and Canny edge detector
CN106778474A (en) 3D human body recognition methods and equipment
Chunli et al. A behavior classification based on enhanced gait energy image
CN110458235B (en) Motion posture similarity comparison method in video
CN108830856B (en) GA automatic segmentation method based on time series SD-OCT retina image
CN104408741A (en) Video global motion estimation method with sequential consistency constraint
CN112288778B (en) Infrared small target detection method based on multi-frame regression depth network
CN105550703A (en) Image similarity calculating method suitable for human body re-recognition
CN110930378A (en) Emphysema image processing method and system based on low data demand
CN111858997B (en) Cross-domain matching-based clothing template generation method
CN110956141A (en) Human body continuous action rapid analysis method based on local recognition
CN102880870A (en) Method and system for extracting facial features
CN109711387B (en) Gait image preprocessing method based on multi-class energy maps
CN108764343B (en) Method for positioning tracking target frame in tracking algorithm
CN106778491A (en) The acquisition methods and equipment of face 3D characteristic informations
CN105118073A (en) Human body head target identification method based on Xtion camera
CN102201060A (en) Method for tracking and evaluating nonparametric outline based on shape semanteme
CN104574400A (en) Remote sensing image segmenting method based on local difference box dimension algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20220412