CN110909571A - High-precision face recognition space positioning method

High-precision face recognition space positioning method

Info

Publication number
CN110909571A
Authority
CN
China
Prior art keywords
face
person
camera
color picture
contour
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201811081739.4A
Other languages
Chinese (zh)
Other versions
CN110909571B (en)
Inventor
孙聪
陈志文
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Capital Association Hong Kang Polytron Technologies Inc
Original Assignee
Wuhan Capital Association Hong Kang Polytron Technologies Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Capital Association Hong Kang Polytron Technologies Inc filed Critical Wuhan Capital Association Hong Kang Polytron Technologies Inc
Priority to CN201811081739.4A priority Critical patent/CN110909571B/en
Publication of CN110909571A publication Critical patent/CN110909571A/en
Application granted granted Critical
Publication of CN110909571B publication Critical patent/CN110909571B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168: Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a high-precision face recognition space positioning method comprising the following steps. First, a depth camera (or camera group) is placed directly in front of the person's face, and a spatial coordinate system is established with the depth camera as its origin. The depth camera is then used to obtain the spatial coordinates, in the camera coordinate system, of the point on the face closest to the camera. Next, the depth camera captures a color picture of the face; a contour tracking algorithm performs contour analysis on the picture to obtain the contour lines of the face, and a color picture with clear facial features is obtained by adjusting the pixel brightness on the two sides of the contour lines. Finally, using the clear color picture, the angle of the face relative to the camera is calculated according to a facial feature recognition algorithm and the characteristics of the human face, and this angle is combined with the point coordinates to give complete 3D spatial coordinates.

Description

High-precision face recognition space positioning method
Technical Field
The invention belongs to the technical field of space positioning, and particularly relates to a high-precision face recognition space positioning method.
Background
How can the position of a person's head in space be obtained? It can be done with a depth camera available on the current market together with an existing face recognition algorithm. A depth camera typically includes one infrared camera (or a group of them) and one ordinary optical camera. The infrared camera produces a point cloud image (i.e., a 3D image) of the object under infrared light, whose accuracy is usually poor; the ordinary optical camera captures a planar color image of the object with usually high accuracy, but can only provide a planar image.
With such a depth camera, the prior art generally realizes head positioning through the following steps: first, a depth camera is arranged in front of the person's head; then the camera's infrared image is used to find the point on the head closest to the camera (the nose tip) and to obtain that point's spatial coordinates (x, y, z) in the camera coordinate system; next, the camera's color image, combined with a face recognition algorithm (many are available), is used to find feature points such as the nose tip, nose root, eye corners and mouth corners, and the angle (rx, ry, rz) of the face relative to the camera is calculated from the bilateral symmetry of the human face; finally, the point coordinates and angles are combined into complete 3D spatial coordinates (x, y, z, rx, ry, rz).
However, this approach has limitations. The ordinary optical camera is demanding on, and strongly affected by, ambient light: dim light causes underexposure and blurs the facial features, while strong light casts local shadows on the face. Either way, the accuracy of facial feature point recognition suffers, and with it the accuracy of the spatial coordinates. The infrared camera has the advantage of being immune to ambient light interference, but its resolution is low, so facial features are hard to extract directly from the infrared point cloud image. The existing method therefore positions accurately only when the brightness and angle of the ambient light source are suitable; under other conditions its accuracy drops sharply.
Disclosure of Invention
The invention aims to solve the above problems in the prior art by providing a high-precision face recognition space positioning method.
In order to achieve the purpose, the invention adopts the technical scheme that:
A high-precision face recognition space positioning method comprises the following steps:
S1, placing a depth camera (or camera group) directly in front of the face of the person, and establishing a spatial coordinate system with the depth camera as the origin;
S2, acquiring, with the depth camera, the spatial coordinates (x, y, z), in the camera coordinate system, of the point on the face of the person closest to the depth camera;
S3, acquiring a color picture of the face with the depth camera, performing contour analysis on the color picture with a contour tracking algorithm to obtain the contour lines of the face, and obtaining a color picture with clear facial features by adjusting the pixel brightness on the two sides of the contour lines;
S4, using the clear color picture obtained in S3, calculating the angle (rx, ry, rz) of the face of the person relative to the camera according to a facial feature recognition algorithm and the characteristics of the human face;
S5, combining the point coordinates obtained in S2 and the angles obtained in S4 into complete 3D spatial coordinates (x, y, z, rx, ry, rz).
The depth camera comprises one infrared camera (or a group of them) and one ordinary optical camera.
Specifically, in step S2, an infrared point cloud image of the face of the person is obtained with the infrared camera of the depth camera, the point on the face closest to the camera, i.e. the nose tip, is found, and its spatial coordinates (x, y, z) in the camera coordinate system are obtained.
Specifically, step S3 comprises the following steps:
S31, acquiring a color picture of the face of the person with the ordinary optical camera of the depth camera, and analyzing the color picture with a contour tracking algorithm to obtain the contour lines of the face;
S32, mapping the contour lines from S31 onto the infrared point cloud image of the face;
S33, on the infrared point cloud image, analyzing the coordinate values of the point clouds on the two sides of each contour line along the line; if the heights (depth values) of the point clouds on the two sides show a large jump, the contour is real and valid; if there is no large jump, the contour is a false contour formed by a shadow cast by overly strong light;
S34, on the color picture, adjusting the pixel brightness on the two sides of each false contour to be close, i.e. raising the dark area toward the bright area, so as to eliminate the shadows caused by overly strong light;
S35, traversing the infrared point cloud image to find the areas with large height jumps;
and S36, mapping the areas with large height jumps found in S35 onto the color picture, brightening the pixels of areas whose height jumps upward and dimming the pixels of areas whose height jumps downward, so as to eliminate the influence of overly dim light.
Specifically, in step S4, using the clear color picture obtained in step S3, a facial feature recognition algorithm is used to find the feature points of the person in the picture, including the nose tip, nose root, eye corners, mouth corners and eyebrows, and the angle (rx, ry, rz) of the face of the person relative to the camera is calculated according to the bilateral symmetry of the human face.
Specifically, the method of acquiring the facial feature points of the person from the color picture in step S3 comprises the following steps:
S51, face collection: collecting the face image from the color picture;
S52, face detection: detecting the position of the face and detecting the key points of the face;
and S53, facial feature point acquisition: mapping the face image vector to a facial feature vector through a transformation matrix, and selecting the facial feature points.
Further, the facial feature points include 68 feature points on the face.
Compared with the prior art, the invention has the following beneficial effects. The method performs contour analysis, using a contour tracking algorithm, on the color picture acquired by the depth camera to obtain the contour lines of the face, and then obtains a color picture with clear facial features by adjusting the pixel brightness on the two sides of the contour lines. A facial feature recognition algorithm then finds the facial feature points of the person in the picture, the angle of the face relative to the camera is calculated according to the bilateral symmetry of the human face, and finally the accurate position of the face is obtained by combining that angle with the positions of the face and the camera. This overcomes the problem that an ordinary optical camera is strongly affected by ambient light.
Drawings
FIG. 1 is an overall flow chart of a high-precision face recognition spatial localization method of the present invention;
FIG. 2 is a flow chart of a method for eliminating the influence of ambient light according to the present invention;
FIG. 3 is a schematic diagram of the spatial coordinate system used to obtain the angle of the face of the person relative to the camera in the present invention.
Detailed Description
The technical solutions of the present invention will be described clearly and completely with reference to the accompanying drawings, and it is obvious that the described embodiments are only some embodiments of the present invention, not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in FIG. 1, this embodiment provides a high-precision face recognition space positioning method, which specifically comprises the following steps:
S1, placing a depth camera (or camera group) directly in front of the face of the person, and establishing a spatial coordinate system with the depth camera as the origin;
S2, acquiring, with the depth camera, the spatial coordinates (x, y, z), in the camera coordinate system, of the point on the face of the person closest to the depth camera;
S3, acquiring a color picture of the face with the depth camera, performing contour analysis on the color picture with a contour tracking algorithm to obtain the contour lines of the face, and obtaining a color picture with clear facial features by adjusting the pixel brightness on the two sides of the contour lines;
S4, using the clear color picture obtained in S3, calculating the angle (rx, ry, rz) of the face of the person relative to the camera according to a facial feature recognition algorithm and the characteristics of the human face;
S5, combining the point coordinates obtained in S2 and the angles obtained in S4 into complete 3D spatial coordinates (x, y, z, rx, ry, rz).
The depth camera comprises one infrared camera (or a group of them) and one ordinary optical camera; for example, a KS352-type binocular camera can be used.
Specifically, in step S2, an infrared point cloud image of the face of the person is obtained with the infrared camera of the depth camera, the point on the face closest to the camera, i.e. the nose tip, is found, and its spatial coordinates (x, y, z) in the camera coordinate system are obtained;
further, the (x, y) coordinates of the nose tip are obtained as follows: first, the 68 feature points of the face are drawn and numbered on a demo image with OpenCV; then the following call is used (a fuller runnable sketch is given below):
cv::circle(temp, cv::Point(shapes[0].part(i).x(), shapes[0].part(i).y()), 3, cv::Scalar(0, 0, 255), -1);
where part(i) denotes the i-th feature point, and x() and y() are the accessors for its two-dimensional pixel coordinates;
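the fragment above omits its setup, so the following is a minimal, self-contained sketch of how such a demo could look with Dlib and OpenCV; the file names and the use of landmark index 30 as the nose tip (the convention of the 68-point layout used by Dlib's shape_predictor_68_face_landmarks.dat) are assumptions for illustration, not taken from the patent:
#include <dlib/image_processing/frontal_face_detector.h>
#include <dlib/image_processing.h>
#include <dlib/opencv.h>
#include <opencv2/opencv.hpp>
#include <string>
int main()
{
    cv::Mat temp = cv::imread("face.png");                     // frame from the optical camera (assumed file name)
    dlib::cv_image<dlib::bgr_pixel> img(temp);                 // wrap the cv::Mat without copying
    dlib::frontal_face_detector detector = dlib::get_frontal_face_detector();
    dlib::shape_predictor sp;
    dlib::deserialize("shape_predictor_68_face_landmarks.dat") >> sp;
    std::vector<dlib::rectangle> faces = detector(img);
    if (faces.empty()) return 1;
    dlib::full_object_detection shapes0 = sp(img, faces[0]);   // 68 landmarks of the first face
    for (unsigned long i = 0; i < shapes0.num_parts(); ++i) {  // draw and number every landmark
        cv::Point p(shapes0.part(i).x(), shapes0.part(i).y());
        cv::circle(temp, p, 3, cv::Scalar(0, 0, 255), -1);
        cv::putText(temp, std::to_string(i), p, cv::FONT_HERSHEY_PLAIN, 0.8, cv::Scalar(255, 0, 0));
    }
    // In the assumed 68-point layout, landmark 30 is the nose tip; these
    // are the (x, y) pixel coordinates used in step S2.
    cv::Point noseTip(shapes0.part(30).x(), shapes0.part(30).y());
    cv::imwrite("landmarks.png", temp);
    return 0;
}
Building this requires linking against both Dlib and OpenCV.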
furthermore, the z coordinate of the nose tip, i.e. its distance from the camera, can be obtained by the binocular-matching triangulation principle: the difference between the abscissas at which a target point is imaged in the left and right images (the disparity d) is inversely proportional to the distance of the point from the imaging plane, so the Z coordinate is obtained as Z = f * T / d, where f is the focal length and T the baseline between the two cameras. Binocular matching applies the triangulation principle purely through image processing, finding matching points by locating the same feature points in the two images. Obtaining the spatial coordinates of a face in the camera coordinate system is mature prior art and is not detailed here.
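As a sketch of that relation (the symbol names and example values below are assumptions; in practice f and T come from stereo calibration):
// Depth from disparity under the pinhole stereo model: Z = f * T / d.
// f: focal length in pixels, T: baseline in metres, d: disparity in pixels.
float depthFromDisparity(float fPixels, float baselineMetres, float disparityPixels)
{
    if (disparityPixels <= 0.0f) return -1.0f;   // no valid match found
    return fPixels * baselineMetres / disparityPixels;
}
// Example: f = 700 px, T = 0.06 m, d = 35 px  =>  Z = 700 * 0.06 / 35 = 1.2 m.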
Preferably, as shown in FIG. 2, this embodiment provides a method for obtaining a clear color picture by eliminating the influence of ambient light; that is, step S3 comprises the following steps (a code sketch follows the list):
s31, acquiring a color picture of the human face by using a common optical camera in the depth camera, and analyzing the color picture by adopting a contour tracing algorithm to obtain a contour line of the human face;
s32, the contour line in the S31 is mapped to the infrared point cloud image of the human face;
s33, analyzing coordinate values of point clouds on two sides of the contour line along the contour line on the infrared point cloud image; if the height of the point cloud coordinate values at the two sides has larger jump, the contour is real and effective; if no larger jump exists, the contour is a false contour formed by shadow caused by over-strong light;
s34, on the color picture, adjusting the brightness of the pixel at the two sides of the corresponding position of the false contour to be close, namely adjusting the dark area to the bright area to eliminate the shadow caused by over-strong light;
s35, traversing and finding out an area with large height jump on the infrared point cloud image;
and S36, mapping the area with larger height jump in S35 to a color picture, brightening the pixel brightness of the area with the height jump upwards, and dimming the pixel brightness of the area with the height jump downwards so as to eliminate the influence caused by over-dark light.
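A minimal sketch of steps S33 and S34 follows (S35 and S36 would be analogous traversals of the depth map). It assumes a CV_32F depth map already registered to the color picture; the 3-pixel sampling offset, the 3 cm jump threshold and the patch-based lightening are illustrative assumptions rather than values from the patent:
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <cmath>
#include <vector>
// S33: sample the depth map a few pixels to each side of every contour
// point; a large jump marks a real edge, a smooth crossing a shadow edge.
std::vector<bool> classifyContour(const cv::Mat& depth,                  // CV_32F, metres
                                  const std::vector<cv::Point>& contour,
                                  float jumpThreshold = 0.03f)           // assumed 3 cm
{
    std::vector<bool> isReal(contour.size(), false);
    cv::Rect bounds(0, 0, depth.cols, depth.rows);
    for (size_t i = 0; i < contour.size(); ++i) {
        const cv::Point& a = contour[i];
        const cv::Point& b = contour[(i + 1) % contour.size()];
        float len = std::max(1.0f, std::hypot(float(b.x - a.x), float(b.y - a.y)));
        cv::Point2f n(-(b.y - a.y) / len, (b.x - a.x) / len);            // unit normal to the contour
        cv::Point p1(cvRound(a.x + 3 * n.x), cvRound(a.y + 3 * n.y));    // one side
        cv::Point p2(cvRound(a.x - 3 * n.x), cvRound(a.y - 3 * n.y));    // other side
        if (!bounds.contains(p1) || !bounds.contains(p2)) continue;
        isReal[i] = std::fabs(depth.at<float>(p1) - depth.at<float>(p2)) > jumpThreshold;
    }
    return isReal;
}
// S34: around false-contour points, lift the local lightness so the darker
// side approaches the brighter side and the shadow edge disappears.
void suppressShadowEdges(cv::Mat& bgr, const std::vector<cv::Point>& contour,
                         const std::vector<bool>& isReal, int radius = 4)
{
    cv::Mat lab;
    cv::cvtColor(bgr, lab, cv::COLOR_BGR2Lab);                           // channel 0 is lightness
    for (size_t i = 0; i < contour.size(); ++i) {
        if (isReal[i]) continue;
        cv::Rect roi(contour[i].x - radius, contour[i].y - radius,
                     2 * radius + 1, 2 * radius + 1);
        roi &= cv::Rect(0, 0, lab.cols, lab.rows);
        if (roi.area() == 0) continue;
        cv::Mat patch = lab(roi);                                        // view into the Lab image
        std::vector<cv::Mat> ch;
        cv::split(patch, ch);
        double minL, maxL;
        cv::minMaxLoc(ch[0], &minL, &maxL);
        ch[0] += (maxL - minL) / 2.0;                                    // saturating add on CV_8U
        cv::merge(ch, patch);                                            // writes back through the view
    }
    cv::cvtColor(lab, bgr, cv::COLOR_Lab2BGR);
}
Working in the Lab color space keeps the hue of the face unchanged while only the lightness channel is adjusted.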
Specifically, in step S4, using the clear color picture obtained in step S3, a facial feature recognition algorithm is used to find the feature points of the person in the picture, including the nose tip, nose root, eye corners, mouth corners, eyebrows, etc., and the angle (rx, ry, rz) of the face of the person relative to the camera is calculated according to the bilateral symmetry of the human face;
further, the angle of the face relative to the camera is obtained as follows. A head model of the person is built from the facial feature points in the picture (nose tip, nose root, eye corners, mouth corners, eyebrows, etc.), and the picture is mapped onto the head model so that each facial feature point corresponds to a feature point on the model. As shown in FIG. 3, a plane is constructed on the head model through three points: the person's left earlobe (point A), right earlobe (point B) and nose tip (point C). The midpoint O of A and B is taken as the origin of a face-based spatial coordinate system, with the direction perpendicular to the plane as the Z axis, the direction of the line AB as the X axis, and the direction of the line CO as the Y axis (the X axis being perpendicular to the Y axis). The included angles α, β and γ between the line from point O to the camera and the X, Y and Z axes can then be calculated; these are precisely the angles (rx, ry, rz) of the face relative to the camera.
Specifically, the method of acquiring the facial feature points of the person from the color picture in step S3 comprises the following steps:
S51, face collection: collecting the face image from the color picture;
S52, face detection: detecting the position of the face and detecting the key points of the face;
and S53, facial feature point acquisition: mapping the face image vector to a facial feature vector through a transformation matrix, and selecting the facial feature points.
Further, in step S52, the face detection may be implemented with the Dlib model used in OpenFace, and the data may be processed with the OpenCV library.
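The "transformation matrix" of step S53 is not specified further; one plausible concrete reading is a linear projection such as PCA. The sketch below makes that assumption explicit, with an assumed 64 x 64 face crop and a cv::PCA trained offline on flattened face crops (DATA_AS_ROW); none of these specifics come from the patent:
#include <opencv2/opencv.hpp>
// S51-S53 as one function: crop the detected face from the color picture,
// normalize it, and map the flattened image vector to a compact feature
// vector through the pre-trained projection matrix.
cv::Mat faceFeatureVector(const cv::Mat& colorPicture,
                          const cv::Rect& faceBox,       // from a detector, e.g. Dlib (S52)
                          const cv::PCA& pca)            // trained offline on 64*64 face rows
{
    cv::Mat face, gray;
    cv::resize(colorPicture(faceBox), face, cv::Size(64, 64));   // assumed crop size
    cv::cvtColor(face, gray, cv::COLOR_BGR2GRAY);
    gray.convertTo(gray, CV_32F, 1.0 / 255.0);
    return pca.project(gray.reshape(1, 1));              // 1 x k facial feature vector
}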
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (6)

1. A high-precision face recognition space positioning method, characterized by comprising the following steps:
S1, placing a depth camera (or camera group) directly in front of the face of the person, and establishing a spatial coordinate system with the depth camera as the origin;
S2, acquiring, with the depth camera, the spatial coordinates (x, y, z), in the camera coordinate system, of the point on the face of the person closest to the depth camera;
S3, acquiring a color picture of the face with the depth camera, performing contour analysis on the color picture with a contour tracking algorithm to obtain the contour lines of the face, and obtaining a color picture with clear facial features by adjusting the pixel brightness on the two sides of the contour lines;
S4, using the clear color picture obtained in S3, calculating the angle (rx, ry, rz) of the face of the person relative to the camera according to a facial feature recognition algorithm and the characteristics of the human face;
S5, combining the point coordinates obtained in S2 and the angles obtained in S4 into complete 3D spatial coordinates (x, y, z, rx, ry, rz).
2. The high-precision face recognition space positioning method according to claim 1, wherein, in step S2, an infrared point cloud image of the face of the person is obtained with the infrared camera of the depth camera, the point on the face closest to the camera, i.e. the nose tip, is found, and the spatial coordinates (x, y, z) of the face of the person in the camera coordinate system are obtained.
3. The high-precision face recognition space positioning method according to claim 1, wherein step S3 comprises the following steps:
S31, acquiring a color picture of the face of the person with the ordinary optical camera of the depth camera, and analyzing the color picture with a contour tracking algorithm to obtain the contour lines of the face;
S32, mapping the contour lines from S31 onto the infrared point cloud image of the face;
S33, on the infrared point cloud image, analyzing the coordinate values of the point clouds on the two sides of each contour line along the line; if the heights (depth values) of the point clouds on the two sides show a large jump, the contour is real and valid; if there is no large jump, the contour is a false contour formed by a shadow cast by overly strong light;
S34, on the color picture, adjusting the pixel brightness on the two sides of each false contour to be close, i.e. raising the dark area toward the bright area, so as to eliminate the shadows caused by overly strong light;
S35, traversing the infrared point cloud image to find the areas with large height jumps;
and S36, mapping the areas with large height jumps found in S35 onto the color picture, brightening the pixels of areas whose height jumps upward and dimming the pixels of areas whose height jumps downward, so as to eliminate the influence of overly dim light.
4. The high-precision face recognition space positioning method according to claim 1, wherein, in step S4, using the clear color picture obtained in step S3, a facial feature recognition algorithm is used to find the feature points of the person in the picture, including the nose tip, nose root, eye corners, mouth corners and eyebrows, and the angle (rx, ry, rz) of the face of the person relative to the camera is calculated according to the bilateral symmetry of the human face.
5. The high-precision face recognition space positioning method according to claim 4, wherein the method of obtaining the facial feature points of the person from the color picture in step S3 comprises the following steps:
S51, face collection: collecting the face image from the color picture;
S52, face detection: detecting the position of the face and detecting the key points of the face;
and S53, facial feature point acquisition: mapping the face image vector to a facial feature vector through a transformation matrix, and selecting the facial feature points.
6. The high-precision face recognition space positioning method according to claim 5, wherein the facial feature points comprise 68 feature points on the human face.
CN201811081739.4A 2018-09-17 2018-09-17 High-precision face recognition space positioning method Active CN110909571B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811081739.4A CN110909571B (en) 2018-09-17 2018-09-17 High-precision face recognition space positioning method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201811081739.4A CN110909571B (en) 2018-09-17 2018-09-17 High-precision face recognition space positioning method

Publications (2)

Publication Number Publication Date
CN110909571A true CN110909571A (en) 2020-03-24
CN110909571B CN110909571B (en) 2022-05-03

Family

ID=69813687

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811081739.4A Active CN110909571B (en) 2018-09-17 2018-09-17 High-precision face recognition space positioning method

Country Status (1)

Country Link
CN (1) CN110909571B (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102968636A (en) * 2012-12-11 2013-03-13 山东神思电子技术股份有限公司 Human face contour extracting method
CN104394336A (en) * 2014-12-01 2015-03-04 北京思比科微电子技术股份有限公司 Method and system of sharpening image contour based on CMOS image sensor
CN105760809A (en) * 2014-12-19 2016-07-13 联想(北京)有限公司 Method and apparatus for head pose estimation
CN108446595A (en) * 2018-02-12 2018-08-24 深圳超多维科技有限公司 A kind of space-location method, device, system and storage medium
CN108470373A (en) * 2018-02-14 2018-08-31 天目爱视(北京)科技有限公司 It is a kind of based on infrared 3D 4 D datas acquisition method and device

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Wang Kan (王侃): "Measurement of Human Head Pose Parameters" (人体头部姿态参数测量), China Master's Theses Full-text Database, Information Science and Technology series *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111582118A (en) * 2020-04-29 2020-08-25 福州瑞芯微电子股份有限公司 Face recognition method and device
CN112580578A (en) * 2020-12-28 2021-03-30 珠海亿智电子科技有限公司 Binocular living camera face ranging method and system
CN112949467A (en) * 2021-02-26 2021-06-11 北京百度网讯科技有限公司 Face detection method and device, electronic equipment and storage medium
CN112949467B (en) * 2021-02-26 2024-03-08 北京百度网讯科技有限公司 Face detection method, device, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN110909571B (en) 2022-05-03

Similar Documents

Publication Publication Date Title
US11145038B2 (en) Image processing method and device for adjusting saturation based on depth of field information
CN105550670B (en) A kind of target object dynamically track and measurement and positioning method
US11030455B2 (en) Pose recognition method, device and system for an object of interest to human eyes
CN105894499B (en) A kind of space object three-dimensional information rapid detection method based on binocular vision
CN103868460B (en) Binocular stereo vision method for automatic measurement based on parallax optimized algorithm
CN106168853B (en) A kind of free space wear-type gaze tracking system
CN109285145B (en) Multi-standing tree height measuring method based on smart phone
US20180101227A1 (en) Headset removal in virtual, augmented, and mixed reality using an eye gaze database
WO2017211066A1 (en) Iris and pupil-based gaze estimation method for head-mounted device
CN108731587A (en) A kind of the unmanned plane dynamic target tracking and localization method of view-based access control model
CN106570904B (en) A kind of multiple target relative pose recognition methods based on Xtion camera
CN101996407B (en) Colour calibration method for multiple cameras
CN109035309A (en) Pose method for registering between binocular camera and laser radar based on stereoscopic vision
CN110909571B (en) High-precision face recognition space positioning method
CN108694741A (en) A kind of three-dimensional rebuilding method and device
CN109035307B (en) Set area target tracking method and system based on natural light binocular vision
CN112085802A (en) Method for acquiring three-dimensional finger vein image based on binocular camera
Jeges et al. Measuring human height using calibrated cameras
CN110909617B (en) Living body face detection method and device based on binocular vision
CN111582036A (en) Cross-view-angle person identification method based on shape and posture under wearable device
CN116597488A (en) Face recognition method based on Kinect database
CN113723432B (en) Intelligent identification and positioning tracking method and system based on deep learning
CN113052898B (en) Point cloud and strong-reflection target real-time positioning method based on active binocular camera
CN112767442B (en) Pedestrian three-dimensional detection tracking method and system based on top view angle
Ye et al. Research on flame location and distance measurement method based on binocular stereo vision

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant