CN111428577B - Face living body judgment method based on deep learning and video amplification technology - Google Patents

Face living body judgment method based on deep learning and video amplification technology

Info

Publication number
CN111428577B
Authority
CN
China
Prior art keywords
data
face
video
heart rate
adopting
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010138178.8A
Other languages
Chinese (zh)
Other versions
CN111428577A (en)
Inventor
张静
郭权浩
刘娟秀
刘霖
杜晓辉
倪光明
刘永
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China
Priority to CN202010138178.8A
Publication of CN111428577A
Application granted
Publication of CN111428577B
Status: Active


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G06V40/166 Detection; Localisation; Normalisation using acquisition arrangements
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/40 Spoof detection, e.g. liveness detection
    • G06V40/45 Detection of the body part being alive
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/15 Biometric patterns based on physiological signals, e.g. heartbeat, blood flow

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a face liveness determination method based on deep learning and video magnification technology, involving image processing, neural networks, software development, and data set construction and acquisition. The attack faces used cover electronic media such as tablets, mobile phones and computers, as well as non-electronic media such as ordinary photos and posters, together with three-dimensional models; the data further cover different ambient light intensities and different distances, so the data set generalizes well. The method processes the data with the Eulerian video magnification algorithm and a convolutional neural network, and provides an interface for real-time recognition of input data, displaying in real time whether a captured face is a real face or an attack face.

Description

Face living body judgment method based on deep learning and video amplification technology
Technical Field
The invention relates to intelligent detection technology, spans optics, mechanics and computer software, and is designed in particular for face liveness detection in biometric recognition.
Background
Liveness detection has developed rapidly since Apple first demonstrated its Face ID face-unlocking technology at 1 a.m. on 13 September 2017 (Beijing time); its main function is to distinguish real faces from fake ones. Because face unlocking is contactless and convenient, it has been rolled out in one field after another. In October 2019, however, the media reported a serious bug in the smart parcel lockers of a domestic express company: the lockers had just launched a face-unlock pickup feature, yet investigators found the machines could be fooled into releasing parcels with a printed photo. As more and more electronic devices ship with face unlocking, and even mainstream payment platforms offer pay-by-face, the security of face recognition has become especially important.
There are currently many liveness detection schemes at home and abroad, including action-matching liveness detection, offline 3D structured-light liveness detection, and offline near-infrared liveness detection. Each has its advantages and disadvantages; action-matching detection, for instance, is non-silent: it requires the user to perform specified actions and is therefore relatively cumbersome.
Disclosure of Invention
To overcome these problems, the invention provides a liveness determination method that captures face videos and images, divides them into real-face and fake-face classes, and learns the differences between the two classes through video magnification technology and a deep-learning model. The main objective is to unlock security equipment that requires liveness-verified face recognition.
The technical solution of the invention is a face liveness determination method based on deep learning and video magnification technology, comprising the following steps:
Step 1: obtain training samples; use high-resolution and low-resolution cameras to capture picture data and video data of a large number of live faces at different distances, light intensities, angles and occlusion conditions as positive sample data;
Step 2: play the data obtained in step 1 on a display and print it as photos and posters, then shoot these reproductions with the same high-resolution and low-resolution cameras to obtain picture data and video data as negative sample data;
Step 3: preprocess the acquired positive and negative sample data;
Step 3-1: separate the positive and negative samples into video data and image data, check whether every frame of the video data and every image contains face data, and keep only the video and image data that do;
Step 3-2: make a backup of the video data in the positive samples and process it as the preprocessing data set for the video magnification technique;
Step 3-3: apply a background mask to the non-face background area of each video frame in the positive samples, eliminating the background and leaving only the picture containing face information;
Step 3-4: apply video stabilization and video alignment to the data processed in step 3-1 to achieve face alignment and jitter elimination; alignment keeps the face in roughly the same position and size in every frame, while stabilization removes jitter caused by the environment and other human factors;
Step 4: calculate the heart-rate weights;
Step 4-1: use facial landmarks to calibrate the face key points in the data obtained in step 3;
Step 4-2: using the key points together with the OpenCV polylines and fillPoly functions, lock a forehead rectangle, where facial blood flow is plentiful and varies regularly, and two triangular facial areas: an isosceles triangle whose apex is the root of the nose bridge and whose base is the line connecting the two mouth corners, and an inverted triangle whose three vertices are the two outer eye corners and the bottom of the lower lip;
Step 4-3: apply video magnification to the data acquired in step 3 and measure the heart rate in each of the three areas from step 4-2, recording the values x1, x2 and x3 as reference initial heart rates; measure the subject's true heart rate y with standard medical equipment as calibration;
Step 4-4: repeat step 4-3 many times to obtain batches of test heart rates and standard heart rates, and fit the curve y = a1·x1 + a2·x2 + a3·x3 to obtain the weights a1, a2 and a3 of the three regional heart rates (an illustrative fitting sketch follows step 9);
Step 5: take another part of the positive and negative sample data as the preprocessing data set for deep learning; save every frame of its video data as image data, batch-process the images with a trained face-detection model and the OpenCV haarcascade file to obtain a face box for each image, and finally produce a TFRecord data file in Pascal VOC format;
Step 6: train an ssd_mobilenet-based neural network model with the file obtained in step 5 until training succeeds;
Step 7: in actual detection with the video magnification technique, acquire video data of the target under test and calculate the heart rates of its three areas by applying, in order, the methods of steps 3-3, 3-4, 4-1, 4-2 and 4-3;
Step 8: calculate the real heart rate with the weights obtained in step 4 and compare it with a preset heart-rate threshold; if the requirement is not met, the face is judged to be an attack face; otherwise go to step 9;
Step 9: use the ssd_mobilenet-based neural network model trained in step 6 to identify the face and judge whether it is a live face.
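To make the fitting of step 4-4 and the threshold test of step 8 concrete, here is a minimal sketch assuming the three regional heart rates have been collected into arrays x1, x2, x3 alongside medically measured references y. The function names, the sample values and the 40-180 bpm plausibility band are illustrative assumptions, not values from the patent.

```python
import numpy as np

def fit_region_weights(x1, x2, x3, y):
    """Least-squares fit of y ~ a1*x1 + a2*x2 + a3*x3 over a batch of
    measurements (step 4-4). x1..x3 are heart rates estimated from the
    three facial regions; y holds the medical reference heart rates."""
    X = np.column_stack([x1, x2, x3])          # (n_samples, 3) design matrix
    a, *_ = np.linalg.lstsq(X, y, rcond=None)  # weights a1, a2, a3
    return a

def heart_rate_is_plausible(region_rates, weights, lo=40.0, hi=180.0):
    """Step 8: combine the three regional estimates with the fitted weights
    and test the result against an assumed plausibility band."""
    hr = float(np.dot(weights, region_rates))
    return lo <= hr <= hi, hr

# Calibration batch: regional estimates x1..x3 and medical references y.
x1 = np.array([72.0, 80.0, 65.0, 90.0])
x2 = np.array([70.0, 78.0, 67.0, 88.0])
x3 = np.array([75.0, 82.0, 64.0, 93.0])
y = np.array([71.0, 79.0, 66.0, 90.0])
a = fit_region_weights(x1, x2, x3, y)

# A printed photo yields no pulse signal, so the combined rate falls
# outside the band and the face is flagged as an attack.
ok, hr = heart_rate_is_plausible(np.array([10.0, 12.0, 9.0]), a)
print(a, round(hr, 1), "live-plausible" if ok else "attack suspected")
```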
The method uses video data to determine the heart-rate characteristic of a face, then combines that characteristic with a neural network to decide whether the face is real or an attack. The process requires no interaction from the person under test, identifies attack faces with high precision, and is difficult to deceive with existing attack methods. The Eulerian magnification step at its core is sketched below.
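The video magnification used throughout is Eulerian video magnification (Wu et al., MIT, 2012), which amplifies tiny periodic color changes such as the blood-flow signal in a fixed frequency band. The following is a minimal color-magnification sketch under assumptions (band limits, amplification factor, pyramid depth), not the patent's exact implementation.

```python
import cv2
import numpy as np

def evm_color_magnify(frames, fps, low=0.8, high=3.0, alpha=50.0):
    """Minimal Eulerian color magnification: spatially low-pass each frame,
    temporally band-pass every pixel's time series in the heart-rate band
    (0.8-3.0 Hz, about 48-180 bpm), amplify, and add the result back."""
    # Spatial low-pass: a small Gaussian-pyramid level of every frame.
    small = np.array([cv2.pyrDown(cv2.pyrDown(f.astype(np.float32)))
                      for f in frames])
    # Temporal ideal band-pass along the time axis via FFT.
    freqs = np.fft.rfftfreq(len(frames), d=1.0 / fps)
    spectrum = np.fft.rfft(small, axis=0)
    spectrum[(freqs < low) | (freqs > high)] = 0.0
    bandpassed = np.fft.irfft(spectrum, n=len(frames), axis=0)
    # Amplify the band-passed signal and add it back at full resolution.
    out = []
    for f, b in zip(frames, bandpassed):
        boost = cv2.resize((alpha * b).astype(np.float32),
                           (f.shape[1], f.shape[0]))
        out.append(np.clip(f.astype(np.float32) + boost, 0, 255).astype(np.uint8))
    return out
```

The heart rate of each locked region can then be read off as the dominant FFT peak of the mean green-channel signal inside that region of the magnified frames.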
Drawings
FIG. 1 is a flow chart of the video magnification detection process of the present invention.
FIG. 2 is a flow chart of the convolutional neural network of the present invention.
FIG. 3 is a flow chart of data set generation in the present invention.
FIG. 4 is the overall work flow of the present invention.
Detailed Description
To make the objects, technical solutions and advantages of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the embodiments described here serve only to explain the invention, not to limit it.
Referring to FIGS. 1 to 4, the invention is a face liveness detection system based on deep learning and video magnification technology. It comprises a data acquisition camera, electronic and non-electronic devices carrying attack faces, a computer, an Android mobile phone, and an Android human-computer interaction UI. The UI processes face image data captured by the phone camera and, by analyzing the image data, judges whether the currently collected face data is real or fake.
The data acquisition module comprises an acquisition object 10, a camera slide 20, a high-resolution camera 30, a low-resolution camera 40, and an acquisition-object position slide 50. The acquisition object 10 can be a person or a face-attack object: an electronic device carrying an attack face (tablet, computer, mobile phone) or a non-electronic carrier (poster, photo). The acquisition object 10 slides along the position slide 50 to adjust its distance from the acquisition cameras 30 and 40, and the cameras slide along the camera slide 20 to select which camera is used.
A face liveness detection method based on deep learning and video magnification technology comprises the following steps:
Step 1-1: adjust the high-resolution camera 30 of FIG. 1 to capture facial images of the different faces presented by the acquisition object 10. During acquisition, move the acquisition object 10 slowly so that it keeps sliding along the slide 50 and images are captured at different distances; the subject should keep adjusting the angle between the face and the camera, and different facial occlusions such as hats, sunglasses and masks are allowed. The collected data should cover different distances, angles, faces, light intensities and facial occlusions. After acquisition with the high-resolution camera 30 is complete, repeat the same procedure with the low-resolution camera 40.
Step 1-2: load the collected data onto electronic devices so that they act as attack faces, and capture them with the high-resolution camera 30 and the low-resolution camera 40; during this acquisition only the light intensity needs to be varied, and the slide 50 is moved to the optimal distance for each attack face carried on the device. Collect the attack-face data and finally take all of the collected data as the data to be processed; a minimal capture sketch follows.
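A minimal capture loop for steps 1-1 and 1-2 might look as follows; the directory layout, frame count and real/attack label scheme are assumptions for illustration.

```python
import os
import cv2

def capture_samples(camera_index, label, out_dir, n_frames=300):
    """Grab frames from one of the acquisition cameras and store them
    under a real/attack label (steps 1-1 and 1-2)."""
    os.makedirs(os.path.join(out_dir, label), exist_ok=True)
    cap = cv2.VideoCapture(camera_index)
    for i in range(n_frames):
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, label, f"{label}_{i:05d}.png"), frame)
    cap.release()

capture_samples(0, "real", "dataset")    # live faces -> positive samples
capture_samples(0, "attack", "dataset")  # screens/prints -> negative samples
```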
Step 2-1: preprocessing the acquired positive and negative sample data before formal data processing;
step 2-2: firstly, classifying data of positive and negative samples into video data and image data respectively, checking whether each frame of the video data contains face data or not, whether each image contains face data or not, and reserving data meeting requirements;
step 2-3: carrying out primary backup on a video format data set in the positive sample data, and processing the video format data set to be used as a preprocessing data set of a video amplification technology;
step 2-4: firstly, removing the back of a non-human face in a part of video data set picture, and only leaving a picture containing face information;
step 2-5: processing the data processed in the step 2-4 to realize the face alignment and the algorithm shake elimination
Step 2-6: through the steps, the data meeting the requirement of realizing the video amplification technology is obtained, and the face key point calibration is realized on the face information in the data;
step 2-7: and a forehead rectangular area with sufficient facial blood flow and obvious change rule and triangular areas of two faces are locked by using key points;
step 2-8: performing video amplification processing on the acquired data, respectively testing heart rates of the three areas obtained in the steps 2-7, recording numerical values x1, x2 and x3 as initial heart rates serving as references, and measuring the real heart rate y of the original data object by using standard medical equipment as calibration;
step 2-9: performing the step 4-3 for multiple times, acquiring batch test heart rates and standard heart rates, and calculating the heart rate weight of each area;
step 2-10: when the video amplification technology is used for actual detection, video data of a target to be detected are obtained, and then the video data are processed by adopting the methods of the step 2-4 and the step 2-5;
step 2-11: acquiring the weight values in the steps 2-9, calculating the heart rate of the video-amplified face data by using the weight values, comparing the heart rate value with a human body heart rate threshold value, and determining whether the tested face is a living body;
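Steps 2-6 and 2-7 (and, identically, steps 4-1 and 4-2 of the summary) can be sketched with dlib's 68-point landmark model and the OpenCV drawing functions named above. The landmark indices used here (27 for the nose-bridge root, 48 and 54 for the mouth corners, 36 and 45 for the outer eye corners, 57 for the bottom of the lower lip) follow the common 68-point convention and, like the forehead extension factor and the model path, are assumptions; the patent specifies only the three regions.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
# 68-point model from the dlib model zoo; the path is an assumption.
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

def lock_rois(gray):
    """Build a mask for the three measurement regions of step 2-7:
    a forehead rectangle plus two triangles. `gray` is a single-channel
    image; the sketch assumes one face is present."""
    face = detector(gray)[0]
    pts = np.array([[p.x, p.y] for p in predictor(gray, face).parts()],
                   dtype=np.int32)
    mask = np.zeros_like(gray)
    # Forehead rectangle: bounding box of the brows (17-26), extended upward.
    bx, by, bw, bh = cv2.boundingRect(pts[17:27])
    cv2.rectangle(mask, (bx, max(0, by - 2 * bh)), (bx + bw, by), 255, -1)
    # Isosceles triangle: nose-bridge root apex, mouth-corner base.
    cv2.fillPoly(mask, [np.array([pts[27], pts[48], pts[54]])], 255)
    # Inverted triangle: outer eye corners and the bottom of the lower lip.
    cv2.fillPoly(mask, [np.array([pts[36], pts[45], pts[57]])], 255)
    return mask  # apply with cv2.bitwise_and(frame, frame, mask=mask)
```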
Step 3-1: non-static attack faces are handled with deep learning; batch-process the collected data with a trained face-detection model and the haarcascade files provided by OpenCV to obtain the face boxes, produce a TFRecord data file in Pascal VOC format, and finally divide the data into three parts: a training set, a validation set and a test set.
Step 3-2: train the input data with an ssd_mobilenet-based neural network model until the loss falls below 1, with the detection threshold set to 0.6; stop training once the evaluated model reaches 97% test accuracy, and save the model. A face-box extraction sketch follows.
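Step 3-1's batch face-box extraction can be sketched with OpenCV's bundled haarcascade; the trained face-detection model mentioned in the text could be dropped into the same place (for instance through cv2.dnn). The directory layout and file pattern are assumptions; the resulting boxes would then be written out as Pascal VOC annotations and packed into a TFRecord file.

```python
import glob
import os
import cv2

# Haar cascade shipped with OpenCV for frontal-face detection.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_face_boxes(image_dir):
    """Detect one face box per image (step 3-1); returns path -> (x, y, w, h)."""
    boxes = {}
    for path in glob.glob(os.path.join(image_dir, "*.png")):
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        if len(faces) > 0:
            boxes[path] = tuple(faces[0])
    return boxes
```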
Step 4: convert the model obtained in step 3-2 into an Android-compatible model with TensorFlow's toco tool, deploy it, and write a human-computer interaction interface to display the recognition result in real time; a conversion sketch follows.
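The toco tool named in step 4 is TensorFlow's legacy converter; in current releases the same conversion is exposed as tf.lite.TFLiteConverter, which this sketch uses instead. Both paths are assumptions.

```python
import tensorflow as tf

# Convert the trained ssd_mobilenet SavedModel into a TensorFlow Lite model
# that the Android human-computer interaction UI can load.
converter = tf.lite.TFLiteConverter.from_saved_model("ssd_mobilenet_saved_model")
tflite_model = converter.convert()
with open("face_liveness.tflite", "wb") as f:
    f.write(tflite_model)
```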

Claims (1)

1. A face liveness determination method based on deep learning and video magnification technology, comprising the following steps:
Step 1: obtain training samples; use high-resolution and low-resolution cameras to capture picture data and video data of a large number of live faces at different distances, light intensities, angles and occlusion conditions as positive sample data;
Step 2: play the data obtained in step 1 on a display and print it as photos and posters, then shoot these reproductions with the same high-resolution and low-resolution cameras to obtain picture data and video data as negative sample data;
Step 3: preprocess the acquired positive and negative sample data;
Step 3-1: separate the positive and negative samples into video data and image data, check whether every frame of the video data and every image contains face data, and keep only the video and image data that do;
Step 3-2: make a backup of the video data in the positive samples and process it as the preprocessing data set for the video magnification technique;
Step 3-3: apply a background mask to the non-face background area of each video frame in the positive samples, eliminating the background and leaving only the picture containing face information;
Step 3-4: apply video stabilization and video alignment to the data processed in step 3-1 to achieve face alignment and jitter elimination; alignment keeps the face aligned and of the same size in every frame, while stabilization removes jitter caused by the environment and other human factors;
Step 4: calculate the heart-rate weights;
Step 4-1: use facial landmarks to calibrate the face key points in the data obtained in step 3;
Step 4-2: using the key points together with the OpenCV polylines and fillPoly functions, lock a forehead rectangle, where facial blood flow is plentiful and varies regularly, and two triangular facial areas: an isosceles triangle whose apex is the root of the nose bridge and whose base is the line connecting the two mouth corners, and an inverted triangle whose three vertices are the two outer eye corners and the bottom of the lower lip;
Step 4-3: apply video magnification to the data acquired in step 3 and measure the heart rate in each of the three areas from step 4-2, recording the values x1, x2 and x3 as reference initial heart rates; measure the subject's true heart rate y with standard medical equipment as calibration;
Step 4-4: repeat step 4-3 many times to obtain batches of test heart rates and standard heart rates, and fit the curve y = a1·x1 + a2·x2 + a3·x3 to obtain the weights a1, a2 and a3 of the three heart rates;
Step 5: take a part of the positive and negative sample data as the preprocessing data set for deep learning; save every frame of its video data as image data, batch-process the images with a trained face-detection model and the OpenCV haarcascade file to obtain a face box for each image, and finally produce a TFRecord data file in Pascal VOC format;
Step 6: train an ssd_mobilenet-based neural network model with the file obtained in step 5 until training succeeds;
Step 7: in actual detection with the video magnification technique, acquire video data of the target under test and calculate the heart rates of its three areas by applying, in order, the methods of steps 3-3, 3-4, 4-1, 4-2 and 4-3;
Step 8: calculate the real heart rate with the weights obtained in step 4 and compare it with a preset heart-rate threshold; if the requirement is not met, the face is judged to be an attack face; otherwise go to step 9;
Step 9: use the ssd_mobilenet-based neural network model trained in step 6 to identify the face and judge whether it is a live face.
CN202010138178.8A 2020-03-03 2020-03-03 Face living body judgment method based on deep learning and video amplification technology Active CN111428577B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010138178.8A CN111428577B (en) 2020-03-03 2020-03-03 Face living body judgment method based on deep learning and video amplification technology

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010138178.8A CN111428577B (en) 2020-03-03 2020-03-03 Face living body judgment method based on deep learning and video amplification technology

Publications (2)

Publication Number Publication Date
CN111428577A 2020-07-17
CN111428577B 2022-05-03

Family

ID=71546204

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010138178.8A Active CN111428577B (en) 2020-03-03 2020-03-03 Face living body judgment method based on deep learning and video amplification technology

Country Status (1)

Country Link
CN (1) CN111428577B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112929622B * 2021-02-05 2022-04-12 Zhejiang University Euler video color amplification method based on deep learning


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
MY182294A (en) * 2015-06-16 2021-01-18 Eyeverify Inc Systems and methods for spoof detection and liveness analysis
CN109937002B * 2016-11-14 2021-10-22 Nuralogix Corporation System and method for camera-based heart rate tracking

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9607138B1 (en) * 2013-12-18 2017-03-28 Amazon Technologies, Inc. User authentication and verification through video analysis
CN107066942A (en) * 2017-03-03 2017-08-18 上海斐讯数据通信技术有限公司 A kind of living body faces recognition methods and system
CN108549884A (en) * 2018-06-15 2018-09-18 天地融科技股份有限公司 A kind of biopsy method and device
CN110163126A (en) * 2019-05-06 2019-08-23 北京华捷艾米科技有限公司 A kind of biopsy method based on face, device and equipment
CN110348385A (en) * 2019-07-12 2019-10-18 苏州小阳软件科技有限公司 Living body faces recognition methods and device
CN110738155A (en) * 2019-10-08 2020-01-31 杭州市第一人民医院 Face recognition method and device, computer equipment and storage medium

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
"Visual heart rate estimation with convolutional neural network";petlik R等;《Proceedings of the british machine vision conference》;20180930;第3-6页 *
"人脸识别活体检测综述";杨巨成等;《天津科技大学学报》;20200228;第35卷(第1期);第1-9页 *
"基于人脸视频的非接触式心测量算法的研究与实现";杨雯;《中国优秀硕士学位论文全文数据库基础科学辑》;20190815(第2019-8期);第A006-234页 *
"基于心率信息人脸识别过程中活体检测";杨敏 等;《信息通信》;20171215(第12期);第83-84页 *
"应用卷积神经网络的人脸活体检测算法研究";龙敏 等;《计算机科学与探索》;20180424;第12卷(第4期);第1658-1670页 *

Also Published As

Publication number Publication date
CN111428577A (en) 2020-07-17

Similar Documents

Publication Publication Date Title
CN105631439B (en) Face image processing process and device
CN103164692B (en) A kind of special vehicle instrument automatic identification system based on computer vision and method
CN106469302A (en) A kind of face skin quality detection method based on artificial neural network
CN107103298A (en) Chin-up number system and method for counting based on image procossing
CN107895160A (en) Human face detection and tracing device and method
CN107194361A (en) Two-dimentional pose detection method and device
CN111507592B (en) Evaluation method for active modification behaviors of prisoners
WO2021068781A1 (en) Fatigue state identification method, apparatus and device
CN107316029A (en) A kind of live body verification method and equipment
CN110826610A (en) Method and system for intelligently detecting whether dressed clothes of personnel are standard
CN113139962B (en) System and method for scoliosis probability assessment
CN108960047A (en) Face De-weight method in video monitoring based on the secondary tree of depth
CN112907810A (en) Face recognition temperature measurement campus access control system based on embedded GPU
CN108197564A (en) A kind of assessment system and method for drawing clock experiment
CN108090922A (en) Intelligent Target pursuit path recording method
US20230237694A1 (en) Method and system for detecting children's sitting posture based on face recognition of children
CN113688817A (en) Instrument identification method and system for automatic inspection
CN111428577B (en) Face living body judgment method based on deep learning and video amplification technology
CN111539911A (en) Mouth breathing face recognition method, device and storage medium
CN111275754B (en) Face acne mark proportion calculation method based on deep learning
CN106096527A (en) A kind of recognition methods of real-time high-precision online bank note face amount
CN112907571A (en) Target judgment method based on multispectral image fusion recognition
CN112183287A (en) People counting method of mobile robot under complex background
CN115937971B (en) Method and device for identifying hand-lifting voting
CN110321781A (en) A kind of signal processing method and device for heed contacted measure

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant