CN108268858B - High-robustness real-time sight line detection method - Google Patents


Info

Publication number
CN108268858B
CN108268858B
Authority
CN
China
Prior art keywords: eye, iris, sight line, obtaining, feature points
Prior art date
2018-02-06
Legal status
Expired - Fee Related
Application number
CN201810118195.8A
Other languages
Chinese (zh)
Other versions
CN108268858A (en
Inventor
韦东旭 (Wei Dongxu)
沈海斌 (Shen Haibin)
Current Assignee
Zhejiang University ZJU
Original Assignee
Zhejiang University ZJU
Priority date
Filing date
2018-02-06
Publication date
2020-10-16
Application filed by Zhejiang University ZJU
Priority to CN201810118195.8A
Publication of CN108268858A
Application granted
Publication of CN108268858B


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/40 - Extraction of image or video features
    • G06V10/46 - Descriptors for shape, contour or point-related descriptors, e.g. scale invariant feature transform [SIFT] or bags of words [BoW]; Salient regional features
    • G06V10/464 - Salient features, e.g. scale invariant feature transforms [SIFT], using a plurality of salient features, e.g. bag-of-words [BoW] representations
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/165 - Detection; Localisation; Normalisation using facial parts and geometric relationships
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G06V40/18 - Eye characteristics, e.g. of the iris
    • G06V40/193 - Preprocessing; Feature extraction

Abstract

The invention discloses a high-robustness real-time sight line detection method that tracks and dynamically detects the sight line of human eyes with a camera. First, the precise center position of the iris is detected with a double-circle active model initialized from prior knowledge of the structure of the human eye. Then a stable facial feature point detection method is used to compute an accurate reference point position, which is combined with the iris center position to obtain an "eye vector". Next, the "eye vector" and the head pose are mapped through a second-order function to the coordinates of the position at which the sight line points. Finally, the position coordinates computed from the left and right eyes are combined by weighted summation with a weight coefficient, improving detection accuracy. The invention detects the iris center and the reference point accurately with a small amount of computation, achieves real-time sight line detection, and combines low cost with high accuracy, giving it high practical value.

Description

High-robustness real-time sight line detection method
Technical Field
The invention relates to the field of pattern recognition, image processing and man-machine interaction, in particular to a high-robustness real-time sight line detection method.
Background
Sight line detection is a human-computer interaction technique that enables convenient and efficient interaction between computers and human users. Human-computer interaction has long been a focus of the computer field: good interaction technology greatly improves the user experience of electronic products and strengthens their advantages. As human-computer interaction technology develops, daily life becomes more intelligent and automated, and work more efficient and convenient. Sight line detection is already widely applied in virtual reality, augmented reality, online shopping and advertising. An accurate, real-time sight line detection technique would greatly lower the threshold of these applications and underpin a better human-computer interaction experience.
At present, sight line detection techniques fall into two main categories: sensor-based methods and computer-vision-based methods. Sensor-based methods require physical contact with the human body; for example, electrodes attached around the eye produce electrical signals from which the rotation of the eyeball is detected. Compared with sensor-based methods, computer-vision-based methods need no direct contact with the human body and are friendlier and more convenient for users.
Computer-vision-based methods generally use infrared equipment to assist image sampling, and the resulting images can be processed into high-precision detection results. However, such methods require expensive infrared equipment, and their detection quality is affected by ambient light. They also perform poorly on people wearing glasses, because reflections from the lenses interfere with detection. Besides infrared-assisted methods, some systems sample images with sophisticated imaging devices such as multi-angle cameras, high-definition cameras or depth cameras; these also require specific imaging hardware and are likewise difficult to apply widely in daily life.
As described above, for sight line detection to be applied broadly in daily life, the following conditions must be met: (1) no direct contact with the human body during detection; (2) detection with any image acquisition device, including a low-cost front-facing phone camera; (3) a sight line detection algorithm that is both high-precision and real-time.
The invention therefore proposes a regression-based, accurate, real-time sight line detection system that uses only a single low-cost camera and can be put to practical use in daily life.
Disclosure of Invention
The invention aims to overcome the defects of the prior art by providing a high-precision sight line detection method that also works on low-resolution images.
In order to achieve the purpose, the invention adopts the following technical scheme:
A high-robustness real-time sight line detection method comprises the following steps:
1) detect the 68 feature points of the face in the image with a facial feature point detection algorithm;
2) after the facial feature points are detected, obtain the center position of the iris with a double-circle active model;
3) after the facial feature points are detected, obtain the reference point position by position averaging, and combine it with the iris center position to compute the "eye vector";
4) obtain an estimate of the head pose;
5) obtain a mapping function of the "eye vector" and the head pose with a regression algorithm, and use the mapping function to obtain the sight line positions of the left and right eyes;
6) weight and sum the sight line positions of the left and right eyes to obtain the final sight line position; the composition of the six steps is sketched below.
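Read together, the six steps form a per-frame pipeline. The following is a minimal Python sketch of how they compose. All helper names (detect_landmarks, preprocess_eye_region, init_double_circle, find_iris_center, reference_point, pose_points, head_pose, predict_gaze, fuse_gaze) are illustrative names introduced for this sketch and defined in the sketches accompanying the preferred embodiments below; treating the "eye vector" as the displacement from the reference point to the iris center is an assumption consistent with step 3).

    import cv2

    def detect_gaze_frame(frame, coeffs_h, coeffs_v, w=0.5):
        # One frame of the sight line detection pipeline (steps 1-6).
        landmarks = detect_landmarks(frame)                        # step 1
        if landmarks is None:
            return None                                            # no face found
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        ref = reference_point(landmarks)                           # step 3: reference point
        pose = head_pose(pose_points(landmarks), frame.shape[:2])  # step 4: (pitch, yaw, roll)
        gaze = []
        for idx in (list(range(36, 42)), list(range(42, 48))):     # per eye (0-based indices)
            eye, origin = preprocess_eye_region(gray, landmarks, idx)
            r_in, r_out = init_double_circle(eye.shape[1])         # step 2: double-circle model
            cx, cy = find_iris_center(eye, r_in, r_out)
            iris = (origin[0] + cx, origin[1] + cy)
            eye_vec = (iris[0] - ref[0], iris[1] - ref[1])         # step 3: "eye vector"
            sample = (eye_vec[0], eye_vec[1], pose[0], pose[1], pose[2])
            gaze.append((predict_gaze(coeffs_h, sample),           # step 5: mapping function
                         predict_gaze(coeffs_v, sample)))
        return fuse_gaze(gaze[0], gaze[1], w)                      # step 6: weighted fusion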
Preferably, a preprocessing step precedes obtaining the center position of the iris with the double-circle active model, specifically:
1) extract the eye regions using the 12 feature points around the eyes among the 68 feature points;
2) erode the eye region images to filter out noise; the preprocessing is sketched below.
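A minimal OpenCV sketch of this preprocessing for one eye. The function and variable names are illustrative, and the 3x3 erosion kernel is an assumed choice, since the patent does not specify a kernel size:

    import cv2
    import numpy as np

    def preprocess_eye_region(gray, landmarks, eye_indices):
        # gray:        grayscale face image (uint8)
        # landmarks:   (68, 2) array of facial feature points
        # eye_indices: the 6 landmark indices around one eye
        #              (0-based 36-41 or 42-47; points 37-48 of Fig. 3 cover both eyes)
        pts = np.asarray(landmarks, dtype=np.int32)[eye_indices]
        x, y, w, h = cv2.boundingRect(pts)
        eye = gray[y:y + h, x:x + w]
        # Erode with a small kernel to filter noise, as in preprocessing step 2)
        kernel = np.ones((3, 3), np.uint8)
        eye = cv2.erode(eye, kernel, iterations=1)
        return eye, (x, y)  # the crop and its top-left corner in face coordinates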
Preferably, the radii of the outer and inner circles of the double-circle active model are obtained by multiplying the width of the eye region by set proportionality coefficients, so that the edge of the iris lies between the inner and outer circles: the diameter of the inner circle is 0.25 times the width of the eye region, and the diameter of the outer circle is 1.44 times the diameter of the inner circle.
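This initialization reduces to two multiplications; a sketch with an illustrative function name:

    def init_double_circle(eye_width):
        # Inner diameter = 0.25 * eye-region width; outer diameter = 1.44 * inner,
        # so the iris edge falls between the two circles.
        inner_d = 0.25 * eye_width
        outer_d = 1.44 * inner_d
        return inner_d / 2.0, outer_d / 2.0  # (inner radius, outer radius)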
Preferably, when obtaining the iris center position with the double-circle active model, the model first sweeps from left to right to roughly estimate the iris position, then moves within a range of -5 to +5 pixels up, down, left and right of that estimate, and the position with the largest difference is selected as the final iris center position.
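A sketch of this coarse-to-fine search. The scoring function is an assumption: the patent says only "the largest difference", which is read here as the mean intensity of the ring between the circles minus the mean intensity inside the inner circle (the dark iris against the brighter sclera and skin maximizes this score at the true center); all names are illustrative:

    import numpy as np

    def ring_score(gray_eye, cx, cy, r_in, r_out):
        # Assumed objective: mean intensity in the ring between the circles
        # minus mean intensity inside the inner circle.
        h, w = gray_eye.shape
        ys, xs = np.ogrid[:h, :w]
        d2 = (xs - cx) ** 2 + (ys - cy) ** 2
        inner = gray_eye[d2 <= r_in ** 2]
        ring = gray_eye[(d2 > r_in ** 2) & (d2 <= r_out ** 2)]
        if inner.size == 0 or ring.size == 0:
            return -np.inf
        return float(ring.mean()) - float(inner.mean())

    def find_iris_center(gray_eye, r_in, r_out, window=5):
        # Coarse sweep from left to right along the middle of the eye strip,
        # then a fine search within +/-5 pixels of the best coarse position.
        h, w = gray_eye.shape
        cy0 = h // 2
        xs = range(int(r_out), max(int(r_out) + 1, w - int(r_out)))
        coarse = max(xs, key=lambda cx: ring_score(gray_eye, cx, cy0, r_in, r_out))
        best, best_s = (coarse, cy0), -np.inf
        for dy in range(-window, window + 1):
            for dx in range(-window, window + 1):
                s = ring_score(gray_eye, coarse + dx, cy0 + dy, r_in, r_out)
                if s > best_s:
                    best, best_s = (coarse + dx, cy0 + dy), s
        return best  # (cx, cy) in eye-crop coordinates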
Preferably, obtaining the reference point position by position averaging specifically comprises: select the 36 feature points of the face contour among the 68 feature points, and average their horizontal and vertical coordinate values to obtain a stable reference point.
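With the landmarks in a NumPy array this averaging is a one-line mean; note that Fig. 3 numbers the points 1-36, which correspond to indices 0-35 in 0-based code:

    import numpy as np

    def reference_point(landmarks):
        # Average the first 36 of the 68 feature points (points 1-36 of Fig. 3)
        # to obtain a stable reference point (x, y).
        return np.asarray(landmarks, dtype=np.float64)[:36].mean(axis=0)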
Preferably, the estimate of the head pose is obtained as follows: select the left and right eye corner points, the left and right mouth corner points, the nose tip and the chin from the 68 feature points to obtain the positions of these 6 feature points, and obtain the head pose with the iterative algorithm provided by the OpenCV library.
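cv2.solvePnP with flags=cv2.SOLVEPNP_ITERATIVE is the natural reading of "the iterative algorithm provided by the OpenCV library"; the sketch below assumes that reading, a commonly used generic 3D face model (the coordinates are approximations, not values from the patent), and a rough pinhole intrinsic matrix:

    import cv2
    import numpy as np

    # Generic 3D positions of the six pose landmarks (nose tip, chin, left and
    # right eye outer corners, left and right mouth corners); arbitrary units.
    MODEL_3D = np.array([
        (0.0, 0.0, 0.0),           # nose tip
        (0.0, -330.0, -65.0),      # chin
        (-225.0, 170.0, -135.0),   # left eye outer corner
        (225.0, 170.0, -135.0),    # right eye outer corner
        (-150.0, -150.0, -125.0),  # left mouth corner
        (150.0, -150.0, -125.0),   # right mouth corner
    ], dtype=np.float64)

    def pose_points(landmarks):
        # 0-based indices of the six points in the 68-point layout:
        # nose tip 30, chin 8, eye outer corners 36/45, mouth corners 48/54.
        return np.asarray(landmarks, dtype=np.float64)[[30, 8, 36, 45, 48, 54]]

    def head_pose(image_points, frame_size):
        # Estimate (pitch, yaw, roll) in degrees from the six 2D landmarks.
        h, w = frame_size
        focal = w  # rough approximation: focal length ~ image width
        camera = np.array([[focal, 0, w / 2.0],
                           [0, focal, h / 2.0],
                           [0, 0, 1]], dtype=np.float64)
        dist = np.zeros((4, 1))  # assume negligible lens distortion
        ok, rvec, tvec = cv2.solvePnP(MODEL_3D, image_points, camera, dist,
                                      flags=cv2.SOLVEPNP_ITERATIVE)
        rot, _ = cv2.Rodrigues(rvec)
        angles, *_ = cv2.RQDecomp3x3(rot)  # Euler angles in degrees
        return angles  # (pitch, yaw, roll)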
Preferably, the mapping function in step 5) is expressed as an n-th order polynomial function of the form:
g_h = Σ_{k=0}^{n} a_k * P_k(e_h, e_v, h_p, h_y, h_r)
g_v = Σ_{k=0}^{n} b_k * P_k(e_h, e_v, h_p, h_y, h_r)
wherein P_k collects the k-th order terms in its arguments; g_h and g_v respectively denote the abscissa and ordinate of the sight line position; e_h and e_v respectively denote the abscissa and ordinate of the "eye vector"; h_p, h_y and h_r respectively denote the pitch angle, yaw angle and roll angle of the head pose; and a_k and b_k, the coefficients of the k-th order terms in the functional expression, are the fixed parameters of the mapping function.
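As a concrete illustration, the coefficients can be fitted by least squares on calibration samples. The sketch below assumes the full quadratic monomial basis (the abstract describes a second-order mapping; the exact basis is defined by the patent's equation images and may differ), and all names are illustrative:

    import numpy as np

    def poly_features(e_h, e_v, h_p, h_y, h_r, order=2):
        # Assumed monomial basis up to `order` in the five variables.
        v = np.array([e_h, e_v, h_p, h_y, h_r], dtype=np.float64)
        feats = [1.0] + list(v)  # constant and first-order terms
        if order >= 2:
            n = len(v)
            feats += [v[i] * v[j] for i in range(n) for j in range(i, n)]
        return np.array(feats)

    def fit_mapping(samples, targets, order=2):
        # Least-squares fit of the coefficients a_k (or b_k) on calibration data:
        # samples are (e_h, e_v, h_p, h_y, h_r) tuples, targets the g_h (or g_v) values.
        X = np.stack([poly_features(*s, order=order) for s in samples])
        coeffs, *_ = np.linalg.lstsq(X, np.asarray(targets, dtype=np.float64),
                                     rcond=None)
        return coeffs

    def predict_gaze(coeffs, sample, order=2):
        return float(poly_features(*sample, order=order) @ coeffs)

In use, fit_mapping would be run once per eye and per coordinate during calibration (g_h and g_v each get their own coefficient vector), after which predict_gaze evaluates the mapping on every frame.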
Preferably, step 6) is specifically:
the sight line coordinates obtained for the left and right eyes are combined into the final coordinates by weighted summation, in the following form:
g_fh = w * g_lh + (1 - w) * g_rh,  w ∈ [0,1]
g_fv = w * g_lv + (1 - w) * g_rv,  w ∈ [0,1]
wherein w is a weight coefficient; g_lh and g_rh are respectively the abscissas of the sight lines of the left and right eyes; g_lv and g_rv are respectively the ordinates of the sight lines of the left and right eyes; and g_fh and g_fv are respectively the abscissa and ordinate of the final sight line position.
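This fusion is a two-line function; a sketch (the default w = 0.5, weighting both eyes equally, is an assumption, since the patent leaves the weight coefficient open):

    def fuse_gaze(left, right, w=0.5):
        # left, right: (g_h, g_v) gaze estimates of the two eyes; w in [0, 1].
        assert 0.0 <= w <= 1.0
        g_fh = w * left[0] + (1.0 - w) * right[0]
        g_fv = w * left[1] + (1.0 - w) * right[1]
        return g_fh, g_fv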
The high-robustness real-time sight line detection method has the following advantages:
1. The method has a low computational cost: in practical use it reaches a detection speed of about 50 frames per second on a CPU with a 3 GHz main frequency and needs no special equipment, so its hardware requirements are low and it runs in real time.
2. The method is highly robust: in practical use it detects the sight line position accurately even in low-resolution images, is little affected by illumination, and generalizes across different scenes.
3. When detecting the iris center with the double-circle active model, the model is initialized according to the actual size ratio of the human eye to the iris, which greatly reduces the model's movement range and the number of iterative calculations.
4. Iris center detection with the double-circle active model proceeds from rough detection to precise positioning: after the iris center position is roughly detected, the final position of the model is computed precisely within a range of -5 to +5 pixels around the rough position, which greatly improves the accuracy of iris center detection and thus the overall sight line detection.
5. The final sight line position is computed by mapping the combination of the "eye vector", obtained from the iris center and the reference point position, and the head pose, obtained with the iterative algorithm, which greatly improves the accuracy and robustness of the method.
6. The detection results of the left and right eyes are fused by weighted summation, which further improves the accuracy and robustness of the method.
Drawings
Fig. 1 is the overall flow chart of the sight line detection algorithm;
Fig. 2 is a schematic diagram of the positions of the 68 facial feature points;
Fig. 3 is a schematic diagram of the labels of the 68 facial feature points.
Detailed Description
The invention is further explained below with reference to the technical solutions and the accompanying drawings.
As shown in Fig. 1, the high-robustness real-time sight line detection method of the invention comprises the following steps:
1) detect the 68 feature points of the face in the image with a facial feature point detection algorithm, as shown in Fig. 2 (one common implementation is sketched below);
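The patent does not name a specific landmark detector; dlib's standard 68-point shape predictor is one widely used choice that matches the numbering of Figs. 2 and 3, and is assumed in this sketch:

    import cv2
    import dlib
    import numpy as np

    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    def detect_landmarks(frame):
        # Return a (68, 2) array of facial feature points, or None if no face.
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        faces = detector(gray, 0)
        if not faces:
            return None
        shape = predictor(gray, faces[0])
        return np.array([(p.x, p.y) for p in shape.parts()])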
2) after the facial feature points are detected, obtain the center position of the iris with a double-circle active model;
3) after the facial feature points are detected, obtain the reference point position by position averaging, and combine it with the iris center position to compute the "eye vector";
4) obtain an estimate of the head pose;
5) obtain a mapping function of the "eye vector" and the head pose with a regression algorithm, and use the mapping function to obtain the sight line positions of the left and right eyes; the mapping function in step 5) is expressed as an n-th order polynomial function of the form:
g_h = Σ_{k=0}^{n} a_k * P_k(e_h, e_v, h_p, h_y, h_r)
g_v = Σ_{k=0}^{n} b_k * P_k(e_h, e_v, h_p, h_y, h_r)
wherein P_k collects the k-th order terms in its arguments; g_h and g_v respectively denote the abscissa and ordinate of the sight line position; e_h and e_v respectively denote the abscissa and ordinate of the "eye vector"; h_p, h_y and h_r respectively denote the pitch angle, yaw angle and roll angle of the head pose; and a_k and b_k, the coefficients of the k-th order terms in the functional expression, are the fixed parameters of the mapping function.
6) weight and sum the sight line positions of the left and right eyes to obtain the final sight line position, specifically:
the sight line coordinates obtained for the left and right eyes are combined into the final coordinates by weighted summation, in the following form:
g_fh = w * g_lh + (1 - w) * g_rh,  w ∈ [0,1]
g_fv = w * g_lv + (1 - w) * g_rv,  w ∈ [0,1]
wherein w is a weight coefficient; g_lh and g_rh are respectively the abscissas of the sight lines of the left and right eyes; g_lv and g_rv are respectively the ordinates of the sight lines of the left and right eyes; and g_fh and g_fv are respectively the abscissa and ordinate of the final sight line position.
In this embodiment, a preprocessing step precedes obtaining the center position of the iris with the double-circle active model, specifically:
1) extract the eye regions using the 12 feature points around the eyes (points No. 37-48 in Fig. 3) among the 68 feature points;
2) erode the eye region images to filter out noise.
In a preferred embodiment of the invention, the radii of the outer and inner circles of the double-circle active model are obtained by multiplying the width of the eye region by set proportionality coefficients, so that the edge of the iris lies between the inner and outer circles: the diameter of the inner circle is 0.25 times the width of the eye region, and the diameter of the outer circle is 1.44 times the diameter of the inner circle.
In a preferred embodiment of the invention, when obtaining the iris center position with the double-circle active model, the model first sweeps from left to right to roughly estimate the iris position, then moves within a range of -5 to +5 pixels up, down, left and right of that estimate, and the position with the largest difference is selected as the final iris center position.
In a preferred embodiment of the invention, obtaining the reference point position by position averaging comprises: select the 36 feature points of the face contour (points 1-36 in Fig. 3) among the 68 feature points, and average their horizontal and vertical coordinate values to obtain a stable reference point.
In a preferred embodiment of the invention, the estimate of the head pose is obtained as follows: select the left and right eye corner points, the left and right mouth corner points, the nose tip and the chin from the 68 feature points to obtain the positions of these 6 feature points, and obtain the head pose with the iterative algorithm provided by the OpenCV library.

Claims (7)

1. A high-robustness real-time sight line detection method, characterized by comprising the following steps:
1) detect the 68 feature points of the face in the image with a facial feature point detection algorithm;
2) after the facial feature points are detected, obtain the center position of the iris with a double-circle active model;
3) after the facial feature points are detected, obtain the reference point position by position averaging, and combine it with the iris center position to compute the "eye vector";
4) obtain an estimate of the head pose;
5) obtain a mapping function of the "eye vector" and the head pose with a regression algorithm, and use the mapping function to obtain the sight line positions of the left and right eyes;
the mapping function in step 5) is expressed as an n-th order polynomial function of the form:
g_h = Σ_{k=0}^{n} a_k * P_k(e_h, e_v, h_p, h_y, h_r)
g_v = Σ_{k=0}^{n} b_k * P_k(e_h, e_v, h_p, h_y, h_r)
wherein P_k collects the k-th order terms in its arguments; g_h and g_v respectively denote the abscissa and ordinate of the sight line position; e_h and e_v respectively denote the abscissa and ordinate of the "eye vector"; h_p, h_y and h_r respectively denote the pitch angle, yaw angle and roll angle of the head pose; and a_k and b_k, the coefficients of the k-th order terms in the functional expression, are the fixed parameters of the mapping function;
6) weight and sum the sight line positions of the left and right eyes to obtain the final sight line position.
2. The high-robustness real-time sight line detection method according to claim 1, characterized in that a preprocessing step precedes obtaining the center position of the iris with the double-circle active model, specifically:
1) extract the eye regions using the 12 feature points around the eyes among the 68 feature points;
2) erode the eye region images to filter out noise.
3. The high-robustness real-time sight line detection method according to claim 1, characterized in that the radii of the outer and inner circles of the double-circle active model are obtained by multiplying the width of the eye region by set proportionality coefficients, so that the edge of the iris lies between the inner and outer circles: the diameter of the inner circle is 0.25 times the width of the eye region, and the diameter of the outer circle is 1.44 times the diameter of the inner circle.
4. The high-robustness real-time sight line detection method according to claim 1, characterized in that when obtaining the iris center position with the double-circle active model, the model first sweeps from left to right to roughly estimate the iris position, then moves within a range of -5 to +5 pixels up, down, left and right of that estimate, and the position with the largest difference is selected as the final iris center position.
5. The high-robustness real-time sight line detection method according to claim 1, characterized in that obtaining the reference point position by position averaging is specifically: select the 36 feature points of the face contour among the 68 feature points, and average their horizontal and vertical coordinate values to obtain a stable reference point.
6. The high-robustness real-time sight line detection method according to claim 1, characterized in that the estimate of the head pose is obtained as follows: select the left and right eye corner points, the left and right mouth corner points, the nose tip and the chin from the 68 feature points to obtain the positions of these 6 feature points, and obtain the head pose with the iterative algorithm provided by the OpenCV library.
7. The high-robustness real-time sight line detection method according to claim 1, characterized in that step 6) is specifically:
the sight line coordinates obtained for the left and right eyes are combined into the final coordinates by weighted summation, in the following form:
g_fh = w * g_lh + (1 - w) * g_rh,  w ∈ [0,1]
g_fv = w * g_lv + (1 - w) * g_rv,  w ∈ [0,1]
wherein w is a weight coefficient; g_lh and g_rh are respectively the abscissas of the sight lines of the left and right eyes; g_lv and g_rv are respectively the ordinates of the sight lines of the left and right eyes; and g_fh and g_fv are respectively the abscissa and ordinate of the final sight line position.
CN201810118195.8A 2018-02-06 2018-02-06 High-robustness real-time sight line detection method Expired - Fee Related CN108268858B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810118195.8A CN108268858B (en) 2018-02-06 2018-02-06 High-robustness real-time sight line detection method


Publications (2)

Publication Number Publication Date
CN108268858A CN108268858A (en) 2018-07-10
CN108268858B (en) 2020-10-16

Family

ID=62773617

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810118195.8A Expired - Fee Related CN108268858B (en) 2018-02-06 2018-02-06 High-robustness real-time sight line detection method

Country Status (1)

Country Link
CN (1) CN108268858B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110046546B (en) * 2019-03-05 2021-06-15 成都旷视金智科技有限公司 Adaptive sight tracking method, device and system and storage medium
CN110275608B (en) * 2019-05-07 2020-08-04 清华大学 Human eye sight tracking method
CN110321820B (en) * 2019-06-24 2022-03-04 东南大学 Sight line drop point detection method based on non-contact equipment
CN110909611B (en) * 2019-10-29 2021-03-05 深圳云天励飞技术有限公司 Method and device for detecting attention area, readable storage medium and terminal equipment


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102749991A (en) * 2012-04-12 2012-10-24 广东百泰科技有限公司 Non-contact free space eye-gaze tracking method suitable for man-machine interaction
CN102930278A (en) * 2012-10-16 2013-02-13 天津大学 Human eye sight estimation method and device
CN105303170A (en) * 2015-10-16 2016-02-03 浙江工业大学 Human eye feature based sight line estimation method
WO2018000020A1 (en) * 2016-06-29 2018-01-04 Seeing Machines Limited Systems and methods for performing eye gaze tracking

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Fast Algorithm for Iris Localization Using";Mahboubeh Shamsi.et al;<2009 International Conference of Soft Computing and Pattern Recognition>;20091231;期刊第3节 *
"基于标记点检测的视线跟踪注视点估计";龚秀锋等;《计算机工程》;20110331;第37卷(第6期);全文 *

Also Published As

Publication number Publication date
CN108268858A (en) 2018-07-10

Similar Documents

Publication Publication Date Title
CN108268858B (en) High-robustness real-time sight line detection method
CN109271914B (en) Method, device, storage medium and terminal equipment for detecting sight line drop point
CN105389539B (en) A kind of three-dimension gesture Attitude estimation method and system based on depth data
US9411417B2 (en) Eye gaze tracking system and method
CN110570455B (en) Whole body three-dimensional posture tracking method for room VR
Jain et al. Real-time upper-body human pose estimation using a depth camera
Wu et al. Wide-range, person-and illumination-insensitive head orientation estimation
WO2023071884A1 (en) Gaze detection method, control method for electronic device, and related devices
CN104809424B (en) Method for realizing sight tracking based on iris characteristics
CN111639571B (en) Video action recognition method based on contour convolution neural network
CN110895683B (en) Kinect-based single-viewpoint gesture and posture recognition method
Al-Rahayfeh et al. Enhanced frame rate for real-time eye tracking using circular hough transform
Ko et al. A robust gaze detection method by compensating for facial movements based on corneal specularities
CN107886057B (en) Robot hand waving detection method and system and robot
Li et al. Visual interpretation of natural pointing gestures in 3D space for human-robot interaction
Vasisht et al. Human computer interaction based eye controlled mouse
TWI768852B (en) Device for detecting human body direction and method for detecting human body direction
CN112183200B (en) Eye movement tracking method and system based on video image
CN107274477B (en) Background modeling method based on three-dimensional space surface layer
Lu et al. An eye gaze tracking method of virtual reality headset using a single camera and multi-light source
CN110598647B (en) Head posture recognition method based on image recognition
CN110599407B (en) Human body noise reduction method and system based on multiple TOF cameras in downward inclination angle direction
Zhang et al. Hand tracking algorithm based on superpixels feature
CN115951783A (en) Computer man-machine interaction method based on gesture recognition
Domhof et al. Multimodal joint visual attention model for natural human-robot interaction in domestic environments

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
Granted publication date: 20201016
Termination date: 20210206