CN107633205A - lip motion analysis method, device and storage medium - Google Patents

Lip motion analysis method, device and storage medium

Publication number: CN107633205A (application CN201710708364.9A)
Authority: CN (China)
Prior art keywords: lip, region, real-time face image
Legal status: Granted
Application number: CN201710708364.9A
Other languages: Chinese (zh)
Other versions: CN107633205B (en)
Inventors: 陈林 (Chen Lin), 张国辉 (Zhang Guohui)
Current Assignee (also Original Assignee): Ping An Technology Shenzhen Co Ltd
Application filed by Ping An Technology Shenzhen Co Ltd
Priority to CN201710708364.9A (patent CN107633205B)
Priority to PCT/CN2017/108749 (publication WO2019033570A1)
Publication of CN107633205A
Application granted
Publication of CN107633205B
Legal status: Active

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00: Pattern recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a lip motion analysis method, device and storage medium. The method includes: acquiring a real-time image captured by a camera device and extracting a real-time face image from it; feeding the real-time face image into a pre-trained lip mean model to identify t lip feature points representing the lip position in the real-time face image; determining a lip region from the t lip feature points, feeding the lip region into a pre-trained lip classification model, and judging whether the lip region is a human lip region; and, if so, calculating the motion direction and motion distance of the lips in the real-time face image from the x and y coordinates of the t lip feature points. The present invention calculates the motion information of the lips in the real-time face image from the coordinates of the lip feature points, realizing both analysis of the lip region and real-time capture of lip motion.

Description

Lip motion analysis method, device and storage medium
Technical field
The present invention relates to the field of computer vision processing, and in particular to a lip motion analysis method, device and computer-readable storage medium.
Background technology
Lip motion capture is a biometric technology that recognizes a user's lip motion from the facial feature information of a person. At present, lip motion capture is applied very widely and plays an important role in fields such as access control, attendance and identity verification, bringing convenience to people's lives. To capture lip motion, common products adopt deep learning: a classification model of lip features is trained by deep learning, and the features of the lips are then judged with the classification model.
However, when lip features are trained with the deep learning method, the number of recognizable lip features depends entirely on the variety of the lip samples. For example, to judge opening and closing of the mouth, at least a large number of open-mouth and closed-mouth samples must be collected; if pouting is also to be judged, a large number of pouting samples must be collected and the model retrained. This is not only time-consuming, but real-time capture cannot be achieved either. In addition, judging lip features with a classification model of lip features cannot determine whether the identified lip region is a human lip region.
Summary of the invention
The present invention provides a lip motion analysis method, device and computer-readable storage medium, whose main purpose is to calculate the motion information of the lips in a real-time face image from the coordinates of lip feature points, realizing both analysis of the lip region and real-time capture of lip motion.
To achieve the above object, the present invention provides an electronic device comprising a memory, a processor and a camera device, the memory containing a lip motion analysis program which, when executed by the processor, implements the following steps:
Real-time face image acquisition step: acquire a real-time image captured by the camera device and extract a real-time face image from it using a face recognition algorithm;
Feature point recognition step: feed the real-time face image into a pre-trained lip mean model and use the model to identify t lip feature points representing the lip position in the real-time face image;
Lip region identification step: determine a lip region from the t lip feature points, feed the lip region into a pre-trained lip classification model, and judge whether the lip region is a human lip region; and
Lip motion judgment step: if the lip region is a human lip region, calculate the motion direction and motion distance of the lips in the real-time face image from the x and y coordinates of the t lip feature points.
Optionally, when executed by the processor, the lip motion analysis program further implements the following step:
Prompt step: when the lip classification model judges that the lip region is not a human lip region, prompt that no human lip region was detected in the current real-time image and that lip motion cannot be judged, and return to the real-time face image acquisition step.
Optionally, the training of the lip mean model includes:
establishing a first sample library of n face images and marking t feature points on the lip portion of each face image in the first sample library, the t feature points being evenly distributed over the upper and lower lips and the left and right lip corners; and
training a facial feature recognition model with the face images marked with lip feature points to obtain the lip mean model for human faces.
Optionally, the training of the lip classification model includes:
collecting m lip positive sample images and k lip negative sample images to form a second sample library;
extracting the local features of each lip positive sample image and each lip negative sample image; and
training a support vector machine classifier with the lip positive sample images, the lip negative sample images and their local features to obtain the lip classification model for human faces.
Optionally, the lip motion judgment step includes:
calculating the distance between the inner-center feature point of the upper lip and the inner-center feature point of the lower lip in the real-time face image to judge the degree to which the lips are open;
connecting the left outer lip-corner feature point to the nearest feature points on the outer contours of the upper and lower lips to form vectors $\vec{a}$ and $\vec{b}$, and calculating the angle between $\vec{a}$ and $\vec{b}$ to obtain the degree of curl at the left lip corner; and
connecting the right outer lip-corner feature point to the nearest feature points on the outer contours of the upper and lower lips to form vectors $\vec{c}$ and $\vec{d}$, and calculating the angle between $\vec{c}$ and $\vec{d}$ to obtain the degree of curl at the right lip corner.
In addition, to achieve the above object, the present invention also provides a lip motion analysis method, the method including:
Real-time face image acquisition step: acquire a real-time image captured by the camera device and extract a real-time face image from it using a face recognition algorithm;
Feature point recognition step: feed the real-time face image into a pre-trained lip mean model and use the model to identify t lip feature points representing the lip position in the real-time face image;
Lip region identification step: determine a lip region from the t lip feature points, feed the lip region into a pre-trained lip classification model, and judge whether the lip region is a human lip region; and
Lip motion judgment step: if the lip region is a human lip region, calculate the motion direction and motion distance of the lips in the real-time face image from the x and y coordinates of the t lip feature points.
Optionally, the method further includes:
Prompt step: when the lip classification model judges that the lip region is not a human lip region, prompt that no human lip region was detected in the current real-time image and that lip motion cannot be judged, and return to the real-time face image acquisition step.
Optionally, the training of the lip mean model includes:
establishing a first sample library of n face images and marking t feature points on the lip portion of each face image in the first sample library, the t feature points being evenly distributed over the upper and lower lips and the left and right lip corners; and
training a facial feature recognition model with the face images marked with lip feature points to obtain the lip mean model for human faces.
Optionally, the training of the lip classification model includes:
collecting m lip positive sample images and k lip negative sample images to form a second sample library;
extracting the local features of each lip positive sample image and each lip negative sample image; and
training a support vector machine classifier with the lip positive sample images, the lip negative sample images and their local features to obtain the lip classification model for human faces.
Optionally, the lip motion judgment step includes:
calculating the distance between the inner-center feature point of the upper lip and the inner-center feature point of the lower lip in the real-time face image to judge the degree to which the lips are open;
connecting the left outer lip-corner feature point to the nearest feature points on the outer contours of the upper and lower lips to form vectors $\vec{a}$ and $\vec{b}$, and calculating the angle between $\vec{a}$ and $\vec{b}$ to obtain the degree of curl at the left lip corner; and
connecting the right outer lip-corner feature point to the nearest feature points on the outer contours of the upper and lower lips to form vectors $\vec{c}$ and $\vec{d}$, and calculating the angle between $\vec{c}$ and $\vec{d}$ to obtain the degree of curl at the right lip corner.
In addition, to achieve the above object, the present invention also provides a computer-readable storage medium containing a lip motion analysis program which, when executed by a processor, implements any of the steps of the lip motion analysis method described above.
With the lip motion analysis method, device and computer-readable storage medium proposed by the present invention, lip feature points are identified in the real-time face image and whether the region they form is a human lip region is judged; if it is, the motion information of the lips is calculated from the coordinates of the lip feature points. Samples of the various lip actions need not be collected for deep learning, and analysis of the lip region and real-time capture of lip motion are achieved.
Brief description of the drawings
Fig. 1 is a schematic diagram of a preferred embodiment of the electronic device of the present invention;
Fig. 2 is a functional block diagram of the lip motion analysis program in Fig. 1;
Fig. 3 is a flowchart of a preferred embodiment of the lip motion analysis method of the present invention;
Fig. 4 is a detailed flowchart of step S40 of the lip motion analysis method of the present invention.
The realization, functional characteristics and advantages of the object of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Embodiment
It should be understood that the specific embodiments described here are merely illustrative of the present invention and are not intended to limit it.
The present invention provides an electronic device 1. Fig. 1 is a schematic diagram of a preferred embodiment of the electronic device 1 of the present invention.
In this embodiment, the electronic device 1 may be a server, a smartphone, a tablet computer, a portable computer, a desktop computer or another terminal device with a computing function.
The electronic device 1 includes a processor 12, a memory 11, a camera device 13, a network interface 14 and a communication bus 15. The camera device 13 is installed at a particular place, such as an office or a monitored area, captures real-time images of targets entering that place, and transmits the captured real-time images to the processor 12 over a network. The network interface 14 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface). The communication bus 15 realizes connection and communication between these components.
The memory 11 includes at least one type of readable storage medium, which may be a non-volatile storage medium such as a flash memory, a hard disk, a multimedia card or a card-type memory. In some embodiments, the readable storage medium may be an internal storage unit of the electronic device 1, such as its hard disk. In other embodiments, the readable storage medium may be an external memory of the electronic device 1, such as a plug-in hard disk, a smart media card (SMC), a secure digital (SD) card or a flash card equipped on the electronic device 1.
In this embodiment, the readable storage medium of the memory 11 is generally used to store the lip motion analysis program 10 installed on the electronic device 1, the face image sample library, the human lip sample library, and the constructed and trained lip mean model and lip classification model. The memory 11 may also be used to temporarily store data that has been or will be output.
In some embodiments, the processor 12 may be a central processing unit (CPU), a microprocessor or another data processing chip, used to run the program code stored in the memory 11 or to process data, for example to execute the lip motion analysis program 10.
Fig. 1 shows only the electronic device 1 with components 11-15 and the lip motion analysis program 10, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead.
Optionally, the electronic device 1 may further include a user interface, which may include an input unit such as a keyboard, a speech input device with a speech recognition function such as a microphone, a speech output device such as a loudspeaker or an earphone, and optionally also a standard wired interface or wireless interface.
Optionally, the electronic device 1 may further include a display, which may also be appropriately called a display screen or display unit. In some embodiments it may be an LED display, a liquid crystal display, a touch liquid crystal display, an OLED (Organic Light-Emitting Diode) touch display or the like. The display is used to show the information processed in the electronic device 1 and to show a visual user interface.
Optionally, the electronic device 1 further includes a touch sensor. The region that the touch sensor provides for the user's touch operation is called the touch area. The touch sensor here may be a resistive touch sensor, a capacitive touch sensor or the like, and includes not only contact touch sensors but also proximity touch sensors. The touch sensor may be a single sensor or multiple sensors arranged, for example, in an array.
The area of the display of the electronic device 1 may be the same as or different from the area of the touch sensor. Optionally, the display and the touch sensor are stacked to form a touch display screen, with which the device detects touch operations triggered by the user.
Optionally, the electronic device 1 may further include an RF (radio frequency) circuit, sensors, an audio circuit and the like, which are not described here.
In the device embodiment shown in Fig. 1, the memory 11, as a kind of computer storage medium, may include an operating system and the lip motion analysis program 10; when executing the lip motion analysis program 10 stored in the memory 11, the processor 12 implements the following steps:
The real-time image captured by the camera device 13 is acquired, and the processor 12 extracts a real-time face image from the real-time image using a face recognition algorithm, calls the lip mean model and the lip classification model from the memory 11, feeds the real-time face image into the lip mean model to identify the lip feature points in it, feeds the lip region determined by the lip feature points into the lip classification model, and judges whether the lip region is a human lip region; if so, the motion information of the lips in the real-time face image is calculated from the coordinates of the lip feature points; otherwise the flow returns to the real-time face image acquisition step.
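As an illustrative, non-limiting sketch of this control flow, assuming OpenCV for frame capture: the helper names (detect_face, locate_lip_points, crop_lip_region, is_human_lip, lip_motion) are placeholders for the components described below, not names taken from the embodiment.

```python
import cv2

def analyze_stream(camera_index=0):
    """Sketch of the capture-and-analyze loop of the device embodiment."""
    cap = cv2.VideoCapture(camera_index)
    while True:
        ok, frame = cap.read()             # real-time image from the camera device
        if not ok:
            break
        face = detect_face(frame)          # real-time face image acquisition step
        if face is None:
            continue
        points = locate_lip_points(face)   # feature point recognition (lip mean model)
        region = crop_lip_region(face, points)
        if not is_human_lip(region):       # lip region identification (classification model)
            print("No human lip region detected; lip motion cannot be judged.")
            continue
        print(lip_motion(points))          # lip motion judgment step
    cap.release()
```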
In other embodiments, the lip motion analysis program 10 may also be divided into one or more modules stored in the memory 11 and executed by the processor 12 to carry out the present invention. A module in the present invention refers to a series of computer program instruction segments capable of completing a specific function.
Fig. 2 is a functional block diagram of the lip motion analysis program 10 in Fig. 1.
The lip motion analysis program 10 may be divided into an acquisition module 110, an identification module 120, a judgment module 130, a calculation module 140 and a prompt module 150.
The acquisition module 110 acquires the real-time image captured by the camera device 13 and extracts a real-time face image from it using a face recognition algorithm. When the camera device 13 captures a real-time image, it sends the image to the processor 12; after the processor 12 receives the real-time image, the acquisition module 110 extracts the real-time face image with the face recognition algorithm.
Specifically, the face recognition algorithm for extracting the real-time face image from the real-time image may be a method based on geometric features, a local feature analysis method, an eigenface method, a method based on elastic models, a neural network method, and so on.
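The embodiment does not tie the face extraction step to a concrete implementation; as a minimal sketch, one classical detector, OpenCV's bundled Haar cascade, could serve here. The cascade file name is OpenCV's, and the function name detect_face is illustrative.

```python
import cv2

# OpenCV's bundled frontal-face Haar cascade.
face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def detect_face(frame):
    """Return the largest detected face region of `frame`, or None."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda r: r[2] * r[3])  # keep the largest box
    return frame[y:y + h, x:x + w]
```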
The identification module 120 feeds the real-time face image into the pre-trained lip mean model and uses the lip mean model to identify the t lip feature points representing the lip position in the real-time face image.
Assume the lip mean model has 20 lip feature points that are evenly distributed. After calling the trained lip mean model from the memory 11, the identification module 120 aligns the real-time face image with the lip mean model and then uses a feature extraction algorithm to search the real-time face image for 20 lip feature points matching the 20 lip feature points of the lip mean model. The lip mean model of the face is constructed and trained in advance; its specific embodiment is explained in the lip motion analysis method below.
Assume the 20 lip feature points identified from the real-time face image by the identification module 120 are still denoted P1-P20, with coordinates (x1, y1), (x2, y2), (x3, y3), ..., (x20, y20).
As shown in Fig. 2, the upper and lower lips each have 8 feature points (denoted P1-P8 and P9-P16 respectively), and the left and right lip corners each have 2 feature points (denoted P17-P18 and P19-P20 respectively). Of the 8 feature points of the upper lip, 5 lie on the outer contour of the upper lip (P1-P5) and 3 on its inner contour (P6-P8, with P7 being the inner-center feature point of the upper lip); of the 8 feature points of the lower lip, 5 lie on the outer contour of the lower lip (P9-P13) and 3 on its inner contour (P14-P16, with P15 being the inner-center feature point of the lower lip). Of the 2 feature points at each lip corner, 1 lies on the outer lip contour (P18 and P20, hereinafter called the outer lip-corner feature points) and 1 on the inner lip contour (P17 and P19, hereinafter called the inner lip-corner feature points).
In this embodiment, the feature extraction algorithm is the SIFT (scale-invariant feature transform) algorithm. The SIFT algorithm extracts the local feature of each lip feature point of the lip mean model of the face, selects one lip feature point as a reference feature point, and searches the real-time face image for a feature point whose local feature is the same as or similar to that of the reference feature point (for example, the difference between the local features of the two feature points is within a preset range), continuing in this way until all lip feature points are found in the real-time face image. In other embodiments, the feature extraction algorithm may also be the SURF (Speeded Up Robust Features) algorithm, the LBP (Local Binary Patterns) algorithm, the HOG (Histogram of Oriented Gradients) algorithm, and so on.
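A minimal sketch of this matching idea with OpenCV's SIFT implementation: descriptors are computed at given point locations, and a model point is matched to the candidate whose descriptor differs least, subject to a preset threshold. The patch size and max_distance values are illustrative assumptions, not values from the embodiment.

```python
import cv2
import numpy as np

sift = cv2.SIFT_create()

def lip_descriptors(gray, points, patch_size=16.0):
    """Compute one SIFT descriptor at each given (x, y) lip feature point."""
    keypoints = [cv2.KeyPoint(float(x), float(y), patch_size) for x, y in points]
    _, descriptors = sift.compute(gray, keypoints)
    return descriptors  # one 128-dimensional vector per feature point

def match_point(ref_desc, candidate_descs, max_distance=250.0):
    """Index of the candidate closest to the reference descriptor, or None
    if the difference is not within the preset range."""
    dists = np.linalg.norm(candidate_descs - ref_desc, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] <= max_distance else None
```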
The judgment module 130 determines a lip region from the t lip feature points, feeds the lip region into the pre-trained lip classification model, and judges whether the lip region is a human lip region. After the identification module 120 recognizes 20 lip feature points from the real-time face image, a lip region can be determined from the 20 lip feature points; the determined lip region is then fed into the trained lip classification model, and whether the determined lip region is a human lip region is judged from the model's output. The lip classification model is constructed and trained in advance; its specific embodiment is explained in the lip motion analysis method below.
The calculation module 140, if the lip region is a human lip region, calculates the motion direction and motion distance of the lips in the real-time face image from the x and y coordinates of the t lip feature points.
Specifically, the calculation module 140 is used to:
calculate the distance between the inner-center feature point of the upper lip and the inner-center feature point of the lower lip in the real-time face image to judge the degree to which the lips are open;
connect the left outer lip-corner feature point to the nearest feature points on the outer contours of the upper and lower lips to form vectors $\vec{a}$ and $\vec{b}$, and calculate the angle between $\vec{a}$ and $\vec{b}$ to obtain the degree of curl at the left lip corner; and
connect the right outer lip-corner feature point to the nearest feature points on the outer contours of the upper and lower lips to form vectors $\vec{c}$ and $\vec{d}$, and calculate the angle between $\vec{c}$ and $\vec{d}$ to obtain the degree of curl at the right lip corner.
In the real-time face image, the coordinate of the inner-center feature point P7 of the upper lip is (x7, y7), the coordinate of the inner-center feature point P15 of the lower lip is (x15, y15), and the judgment module 130 has judged that the lip region is a human lip region. The distance between the two points is then:

$$d = \sqrt{(x_7 - x_{15})^2 + (y_7 - y_{15})^2}$$

If d = 0, the points P7 and P15 coincide, which means the lips are closed; if d > 0, the degree to which the lips are open is judged from the size of d: the larger d is, the more open the lips are.
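The distance formula translates directly into code; a minimal sketch in Python with NumPy (the function name is illustrative):

```python
import numpy as np

def mouth_openness(p7, p15):
    """Distance between the inner-center points of the upper and lower lip.
    d == 0 means the lips are closed; a larger d means a wider opening."""
    dx, dy = np.asarray(p7, float) - np.asarray(p15, float)
    return float(np.hypot(dx, dy))
```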
The coordinate of the left outer lip-corner feature point P18 is (x18, y18), and the coordinates of the feature points P1 and P9 nearest to P18 on the outer contours of the upper and lower lips are (x1, y1) and (x9, y9) respectively. Connecting P18 with P1 and P9 forms the vectors $\vec{a} = (x_1 - x_{18},\, y_1 - y_{18})$ and $\vec{b} = (x_9 - x_{18},\, y_9 - y_{18})$, and the angle α between $\vec{a}$ and $\vec{b}$ is calculated as follows:

$$\alpha = \arccos\frac{\vec{a}\cdot\vec{b}}{|\vec{a}|\,|\vec{b}|}$$

where α is the angle between $\vec{a}$ and $\vec{b}$. By calculating the size of this angle, the degree of curl at the left lip corner can be judged: the smaller the angle, the greater the curl at the left lip corner.
Similarly, the coordinate of the right outer lip-corner feature point P20 is (x20, y20), and the coordinates of the feature points P5 and P13 nearest to P20 on the outer contours of the upper and lower lips are (x5, y5) and (x13, y13) respectively. Connecting P20 with P5 and P13 forms the vectors $\vec{c} = (x_5 - x_{20},\, y_5 - y_{20})$ and $\vec{d} = (x_{13} - x_{20},\, y_{13} - y_{20})$, and the angle β between them is calculated as follows:

$$\beta = \arccos\frac{\vec{c}\cdot\vec{d}}{|\vec{c}|\,|\vec{d}|}$$

where β is the angle between $\vec{c}$ and $\vec{d}$. By calculating the size of this angle, the degree of curl at the right lip corner can be judged: the smaller the angle, the greater the curl at the right lip corner.
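Both corner angles are instances of the same arccos formula; a minimal sketch (the clip guards against floating-point values just outside [-1, 1]):

```python
import numpy as np

def corner_angle(corner, upper, lower):
    """Angle (radians) at an outer lip corner between the vectors pointing to
    the nearest outer-contour points of the upper and lower lip; a smaller
    angle indicates a stronger curl at that corner."""
    a = np.asarray(upper, float) - np.asarray(corner, float)
    b = np.asarray(lower, float) - np.asarray(corner, float)
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))
    return float(np.arccos(np.clip(cos, -1.0, 1.0)))

# alpha at the left corner P18 with neighbours P1 and P9;
# beta at the right corner P20 with neighbours P5 and P13:
# alpha = corner_angle(P[18], P[1], P[9])
# beta  = corner_angle(P[20], P[5], P[13])
```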
The prompt module 150, when the lip classification model judges that the lip region is not a human lip region, prompts that no human lip region was detected in the current real-time image and that lip motion cannot be judged; the flow returns to the real-time image capture step to capture the next real-time image. If, after the judgment module 130 feeds the lip region determined by the 20 lip feature points into the lip classification model, the model output indicates that the lip region is not a human lip region, the prompt module 150 prompts that no human lip region was recognized and the next lip motion judgment step cannot be carried out; meanwhile, the real-time image captured by the camera device is reacquired and the subsequent steps are carried out.
With the electronic device 1 proposed in this embodiment, a real-time face image is extracted from the real-time image, the lip feature points in the real-time face image are identified with the lip mean model, the lip region determined by the lip feature points is analyzed with the lip classification model, and, if the lip region is a human lip region, the motion information of the lips in the real-time face image is calculated from the coordinates of the lip feature points, realizing both analysis of the lip region and real-time capture of lip motion.
In addition, the present invention also provides a lip motion analysis method. Fig. 3 is a flowchart of a preferred embodiment of the lip motion analysis method of the present invention. The method may be performed by a device, and the device may be realized by software and/or hardware.
In this embodiment, the lip motion analysis method includes steps S10-S50.
Step S10: acquire the real-time image captured by the camera device and extract a real-time face image from it using a face recognition algorithm. When the camera device captures a real-time image, it sends the image to the processor; after the processor receives the real-time image, it extracts the real-time face image with the face recognition algorithm.
Specifically, the face recognition algorithm for extracting the real-time face image from the real-time image may be a method based on geometric features, a local feature analysis method, an eigenface method, a method based on elastic models, a neural network method, and so on.
Step S20: feed the real-time face image into the pre-trained lip mean model and use the lip mean model to identify the t lip feature points representing the lip position in the real-time face image.
A first sample library of n face images is established, and t feature points are manually marked on the lip portion of each face image in the first sample library; the t feature points are evenly distributed over the upper and lower lips and the left and right lip corners.
A facial feature recognition model is trained with the face images marked with lip feature points to obtain the lip mean model for human faces. The facial feature recognition model is the Ensemble of Regression Trees (ERT) algorithm, which is formulated as follows:

$$\hat{S}^{(t+1)} = \hat{S}^{(t)} + \tau_t\bigl(I, \hat{S}^{(t)}\bigr)$$

where t is the cascade index and τt(·,·) is the regressor of the current stage. Each regressor is composed of many regression trees, and the purpose of training is to obtain these regression trees.
Here Ŝ(t) is the shape estimate of the current model. Each regressor τt(·,·) predicts an increment from the input image I and the current estimate Ŝ(t), and this increment is added to the current shape estimate to improve the current model; each stage's regressor makes its prediction from the feature points. The training data set is (I1, S1), ..., (In, Sn), where I is an input sample image and S is the shape feature vector formed by the feature points in the sample image.
During model training, the number of face images in the sample library is n. Assume t = 20, i.e. each sample image has 20 feature points. A subset of the feature points of all sample images is taken (for example, 15 feature points taken at random from the 20 feature points of each sample image) to train the first regression tree; the residual between the prediction of the first tree and the true values of that subset (the weighted average of the 15 feature points taken from each sample image) is used to train the second tree, and so on, until the prediction of the Nth tree is close to the true values of the subset. All the regression trees of the ERT algorithm are thus obtained, the lip mean model of the face is obtained from these regression trees, and the model file and the sample library are saved to the memory. Because 20 lip feature points were marked on the sample images used to train the model, the trained lip mean model can be used to identify 20 lip feature points from a face image.
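One publicly available implementation of ERT is dlib's, which follows the Kazemi et al. paper listed in the non-patent citations below. A minimal, non-limiting training sketch under that assumption; the file names and option values are illustrative, and the XML file would list the n face images of the first sample library with their t marked lip feature points.

```python
import dlib

# Options for the cascade of regression trees.
options = dlib.shape_predictor_training_options()
options.cascade_depth = 10              # number of cascaded regressors tau_t
options.num_trees_per_cascade_level = 500
options.tree_depth = 4
options.nu = 0.1                        # shrinkage (learning rate)

# "lips.xml": images plus manually marked lip feature points.
dlib.train_shape_predictor("lips.xml", "lip_mean_model.dat", options)

# The trained model then predicts the lip feature points of a face image.
predictor = dlib.shape_predictor("lip_mean_model.dat")
```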
After the trained lip mean model is called from the memory, the real-time face image is aligned with the lip mean model, and a feature extraction algorithm is then used to search the real-time face image for 20 lip feature points matching the 20 lip feature points of the lip mean model. Assume the 20 lip feature points identified from the real-time face image are still denoted P1-P20, with coordinates (x1, y1), (x2, y2), (x3, y3), ..., (x20, y20).
As shown in Fig. 2, the upper and lower lips each have 8 feature points (denoted P1-P8 and P9-P16 respectively), and the left and right lip corners each have 2 feature points (denoted P17-P18 and P19-P20 respectively). Of the 8 feature points of the upper lip, 5 lie on the outer contour of the upper lip (P1-P5) and 3 on its inner contour (P6-P8, with P7 being the inner-center feature point of the upper lip); of the 8 feature points of the lower lip, 5 lie on the outer contour of the lower lip (P9-P13) and 3 on its inner contour (P14-P16, with P15 being the inner-center feature point of the lower lip). Of the 2 feature points at each lip corner, 1 lies on the outer lip contour (P18 and P20, hereinafter called the outer lip-corner feature points) and 1 on the inner lip contour (P17 and P19, hereinafter called the inner lip-corner feature points).
Specifically, the feature extraction algorithm may be the SIFT algorithm, the SURF algorithm, the LBP algorithm, the HOG algorithm, and so on.
Step S30: determine a lip region from the t lip feature points, feed the lip region into the pre-trained lip classification model, and judge whether the lip region is a human lip region.
Collect m lip positive sample images and k lip negative sample images to form a second sample library. A lip positive sample image is an image containing human lips; lip portions may be cropped from the face image sample library to serve as lip positive sample images. A lip negative sample image is an image in which the human lip region is incomplete or in which the lips are not human lips (for example, animal lips). The lip positive sample images and negative sample images together form the second sample library.
Extract the local features of each lip positive sample image and each lip negative sample image: a feature extraction algorithm is used to extract the Histogram of Oriented Gradients (HOG) features of the lip sample images. Since color information is of little use in lip sample images, each image is generally converted to grayscale and the whole image is normalized. The gradients of the image along the abscissa and ordinate are computed, and from them the gradient direction value of each pixel position, so as to capture contours, silhouettes and some texture information while further weakening the influence of illumination. The whole image is then divided into individual cells, and a gradient orientation histogram is built for each cell to count and quantize the local image gradient information, yielding a feature description vector of the local image region. The cells are then combined into large blocks; because local changes in illumination and in foreground-background contrast make the range of gradient intensities very large, the gradient intensity within each block is normalized, further compressing illumination, shadows and edges. Finally, the HOG descriptors of all blocks are combined into the final HOG feature description vector.
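A minimal sketch of this extraction using scikit-image's HOG implementation; the window size and the cell/block parameters are common defaults assumed for illustration, not values from the embodiment.

```python
import cv2
from skimage.feature import hog

def lip_hog(image, size=(64, 32)):
    """HOG feature vector of a lip sample image: grayscale conversion,
    normalization to a fixed size, per-cell gradient orientation
    histograms, and per-block normalization."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    gray = cv2.resize(gray, size)
    return hog(gray,
               orientations=9,
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2),
               block_norm="L2-Hys")
```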
A support vector machine (SVM) classifier is trained with the lip positive sample images, the lip negative sample images and the extracted HOG features, obtaining the lip classification model for human faces.
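A minimal sketch of this training step with scikit-learn's linear SVM, reusing the lip_hog helper above; positive_images and negative_images stand for the m positive and k negative sample images of the second sample library and are illustrative names.

```python
import numpy as np
from sklearn.svm import LinearSVC

# HOG vectors of the positive and negative lip sample images.
X = np.vstack([lip_hog(img) for img in positive_images + negative_images])
y = np.array([1] * len(positive_images) + [0] * len(negative_images))

lip_classifier = LinearSVC(C=1.0).fit(X, y)

def is_human_lip(region):
    """Judge whether a candidate lip region is a human lip region."""
    return bool(lip_classifier.predict([lip_hog(region)])[0] == 1)
```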
After 20 lip feature points are recognized from the real-time face image, a lip region can be determined from the 20 lip feature points; the determined lip region is then fed into the trained lip classification model, and whether the determined lip region is a human lip region is judged from the model's output.
Step S40: if the lip region is a human lip region, calculate the motion direction and motion distance of the lips in the real-time face image from the x and y coordinates of the t lip feature points.
Fig. 4 is a detailed flowchart of step S40 of the lip motion analysis method of the present invention. Specifically, step S40 includes:
Step S41: calculate the distance between the inner-center feature point of the upper lip and the inner-center feature point of the lower lip in the real-time face image to judge the degree to which the lips are open;
Step S42: connect the left outer lip-corner feature point to the nearest feature points on the outer contours of the upper and lower lips to form vectors $\vec{a}$ and $\vec{b}$, and calculate the angle between $\vec{a}$ and $\vec{b}$ to obtain the degree of curl at the left lip corner; and
Step S43: connect the right outer lip-corner feature point to the nearest feature points on the outer contours of the upper and lower lips to form vectors $\vec{c}$ and $\vec{d}$, and calculate the angle between $\vec{c}$ and $\vec{d}$ to obtain the degree of curl at the right lip corner.
In the real-time face image, the coordinate of the inner-center feature point P7 of the upper lip is (x7, y7), the coordinate of the inner-center feature point P15 of the lower lip is (x15, y15), and the lip region is a human lip region. The distance between the two points is then:

$$d = \sqrt{(x_7 - x_{15})^2 + (y_7 - y_{15})^2}$$

If d = 0, the points P7 and P15 coincide, which means the lips are closed; if d > 0, the degree to which the lips are open is judged from the size of d: the larger d is, the more open the lips are.
The coordinate of the left outer lip-corner feature point P18 is (x18, y18), and the coordinates of the feature points P1 and P9 nearest to P18 on the outer contours of the upper and lower lips are (x1, y1) and (x9, y9) respectively. Connecting P18 with P1 and P9 forms the vectors $\vec{a} = (x_1 - x_{18},\, y_1 - y_{18})$ and $\vec{b} = (x_9 - x_{18},\, y_9 - y_{18})$, and the angle α between $\vec{a}$ and $\vec{b}$ is calculated as follows:

$$\alpha = \arccos\frac{\vec{a}\cdot\vec{b}}{|\vec{a}|\,|\vec{b}|}$$

where α is the angle between $\vec{a}$ and $\vec{b}$. By calculating the size of this angle, the degree of curl at the left lip corner can be judged: the smaller the angle, the greater the curl at the left lip corner.
Similarly, the coordinate of the right outer lip-corner feature point P20 is (x20, y20), and the coordinates of the feature points P5 and P13 nearest to P20 on the outer contours of the upper and lower lips are (x5, y5) and (x13, y13) respectively. Connecting P20 with P5 and P13 forms the vectors $\vec{c} = (x_5 - x_{20},\, y_5 - y_{20})$ and $\vec{d} = (x_{13} - x_{20},\, y_{13} - y_{20})$, and the angle β between them is calculated as follows:

$$\beta = \arccos\frac{\vec{c}\cdot\vec{d}}{|\vec{c}|\,|\vec{d}|}$$

where β is the angle between $\vec{c}$ and $\vec{d}$. By calculating the size of this angle, the degree of curl at the right lip corner can be judged: the smaller the angle, the greater the curl at the right lip corner.
Step S50: when the lip classification model judges that the lip region is not a human lip region, prompt that no human lip region was detected in the current real-time image and that lip motion cannot be judged; the flow returns to the real-time image capture step to capture the next real-time image. After the lip region determined by the 20 lip feature points is fed into the lip classification model, if the model output indicates that the lip region is not a human lip region, it is prompted that no human lip region was recognized and the next lip motion judgment step cannot be carried out; meanwhile, the real-time image captured by the camera device is reacquired and the subsequent steps are carried out.
With the lip motion analysis method proposed in this embodiment, the lip feature points in the real-time face image are identified with the lip mean model, the lip region determined by the lip feature points is analyzed with the lip classification model, and, if the lip region is a human lip region, the motion information of the lips in the real-time face image is calculated from the coordinates of the lip feature points, realizing both analysis of the lip region and real-time capture of lip motion.
In addition, an embodiment of the present invention also proposes a computer-readable storage medium containing a lip motion analysis program which, when executed by a processor, realizes the following operations:
Model construction step: construct and train a facial feature recognition model to obtain the lip mean model for faces, and train an SVM with lip sample images to obtain the lip classification model;
Real-time face image acquisition step: acquire a real-time image captured by the camera device and extract a real-time face image from it using a face recognition algorithm;
Feature point recognition step: feed the real-time face image into a pre-trained lip mean model and use the model to identify t lip feature points representing the lip position in the real-time face image;
Lip region identification step: determine a lip region from the t lip feature points, feed the lip region into a pre-trained lip classification model, and judge whether the lip region is a human lip region; and
Lip motion judgment step: if the lip region is a human lip region, calculate the motion direction and motion distance of the lips in the real-time face image from the x and y coordinates of the t lip feature points.
Optionally, when executed by a processor, the lip motion analysis program also realizes the following operation:
Prompt step: when the lip classification model judges that the lip region is not a human lip region, prompt that no human lip region was detected in the current real-time image and that lip motion cannot be judged, and return to the real-time face image acquisition step.
Optionally, the lip motion judgment step includes:
calculating the distance between the inner-center feature point of the upper lip and the inner-center feature point of the lower lip in the real-time face image to judge the degree to which the lips are open;
connecting the left outer lip-corner feature point to the nearest feature points on the outer contours of the upper and lower lips to form vectors $\vec{a}$ and $\vec{b}$, and calculating the angle between $\vec{a}$ and $\vec{b}$ to obtain the degree of curl at the left lip corner; and
connecting the right outer lip-corner feature point to the nearest feature points on the outer contours of the upper and lower lips to form vectors $\vec{c}$ and $\vec{d}$, and calculating the angle between $\vec{c}$ and $\vec{d}$ to obtain the degree of curl at the right lip corner.
The specific embodiments of the computer-readable storage medium of the present invention are substantially the same as those of the lip motion analysis method described above and are not repeated here.
It should be noted that, as used here, the terms "comprise", "include" and any other variants are intended to cover non-exclusive inclusion, so that a process, device, article or method including a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, device, article or method. Without further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, device, article or method that includes it.
The numbering of the above embodiments of the present invention is for description only and does not represent the merits of the embodiments. From the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be realized by software plus a necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on such understanding, the part of the technical solution of the present invention that in essence contributes to the prior art can be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk or optical disk) as described above, including several instructions for causing a terminal device (which may be a mobile phone, computer, server, network device or the like) to perform the methods described in the embodiments of the present invention.
The above are only preferred embodiments of the present invention and do not limit its patent scope; any equivalent structure or equivalent flow transformation made using the contents of the description and drawings of the present invention, or any direct or indirect use in other related technical fields, is likewise included within the patent protection scope of the present invention.

Claims (10)

1. An electronic device, characterized in that the device comprises a memory, a processor and a camera device, the memory containing a lip motion analysis program which, when executed by the processor, implements the following steps:
Real-time face image acquisition step: acquire a real-time image captured by the camera device and extract a real-time face image from it using a face recognition algorithm;
Feature point recognition step: feed the real-time face image into a pre-trained lip mean model and use the lip mean model to identify t lip feature points representing the lip position in the real-time face image;
Lip region identification step: determine a lip region from the t lip feature points, feed the lip region into a pre-trained lip classification model, and judge whether the lip region is a human lip region; and
Lip motion judgment step: if the lip region is a human lip region, calculate the motion direction and motion distance of the lips in the real-time face image from the x and y coordinates of the t lip feature points.
2. The electronic device according to claim 1, characterized in that, when executed by the processor, the lip motion analysis program further implements the following step:
Prompt step: when the lip classification model judges that the lip region is not a human lip region, prompt that no human lip region was detected in the current real-time image and that lip motion cannot be judged, and return to the real-time face image acquisition step.
3. The electronic device according to claim 1 or 2, characterized in that the training of the lip classification model includes:
collecting m lip positive sample images and k lip negative sample images to form a second sample library;
extracting the local features of each lip positive sample image and each lip negative sample image; and
training a support vector machine classifier with the lip positive sample images, the lip negative sample images and their local features to obtain the lip classification model for human faces.
4. The electronic device according to claim 1, characterized in that the lip motion judgment step includes:
calculating the distance between the inner-center feature point of the upper lip and the inner-center feature point of the lower lip in the real-time face image to judge the degree to which the lips are open;
connecting the left outer lip-corner feature point to the nearest feature points on the outer contours of the upper and lower lips to form vectors $\vec{a}$ and $\vec{b}$, and calculating the angle between $\vec{a}$ and $\vec{b}$ to obtain the degree of curl at the left lip corner; and
connecting the right outer lip-corner feature point to the nearest feature points on the outer contours of the upper and lower lips to form vectors $\vec{c}$ and $\vec{d}$, and calculating the angle between $\vec{c}$ and $\vec{d}$ to obtain the degree of curl at the right lip corner.
5. A lip motion analysis method, characterized in that the method includes:
Real-time face image acquisition step: acquire a real-time image captured by the camera device and extract a real-time face image from it using a face recognition algorithm;
Feature point recognition step: feed the real-time face image into a pre-trained lip mean model and use the lip mean model to identify t lip feature points representing the lip position in the real-time face image;
Lip region identification step: determine a lip region from the t lip feature points, feed the lip region into a pre-trained lip classification model, and judge whether the lip region is a human lip region; and
Lip motion judgment step: if the lip region is a human lip region, calculate the motion direction and motion distance of the lips in the real-time face image from the x and y coordinates of the t lip feature points.
6. The lip motion analysis method according to claim 5, characterized in that the method further includes:
Prompt step: when the lip classification model judges that the lip region is not a human lip region, prompt that no human lip region was detected in the current real-time image and that lip motion cannot be judged, and return to the real-time face image acquisition step.
7. The lip motion analysis method according to claim 5, characterized in that the training of the lip mean model includes:
establishing a first sample library of n face images and marking t feature points on the lip portion of each face image in the first sample library, the t feature points being evenly distributed over the upper and lower lips and the left and right lip corners; and
training a facial feature recognition model with the face images marked with lip feature points to obtain the lip mean model for human faces.
8. The lip motion analysis method according to claim 5 or 6, characterized in that the training of the lip classification model includes:
collecting m lip positive sample images and k lip negative sample images to form a second sample library;
extracting the local features of each lip positive sample image and each lip negative sample image; and
training a support vector machine classifier with the lip positive sample images, the lip negative sample images and their local features to obtain the lip classification model for human faces.
9. The lip motion analysis method according to claim 5, characterized in that the lip motion judgment step includes:
calculating the distance between the inner-center feature point of the upper lip and the inner-center feature point of the lower lip in the real-time face image to judge the degree to which the lips are open;
connecting the left outer lip-corner feature point to the nearest feature points on the outer contours of the upper and lower lips to form vectors $\vec{a}$ and $\vec{b}$, and calculating the angle between $\vec{a}$ and $\vec{b}$ to obtain the degree of curl at the left lip corner; and
connecting the right outer lip-corner feature point to the nearest feature points on the outer contours of the upper and lower lips to form vectors $\vec{c}$ and $\vec{d}$, and calculating the angle between $\vec{c}$ and $\vec{d}$ to obtain the degree of curl at the right lip corner.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium contains a lip motion analysis program which, when executed by a processor, realizes the steps of the lip motion analysis method according to any one of claims 5 to 9.
CN201710708364.9A 2017-08-17 2017-08-17 lip motion analysis method, device and storage medium Active CN107633205B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201710708364.9A CN107633205B (en) 2017-08-17 2017-08-17 lip motion analysis method, device and storage medium
PCT/CN2017/108749 WO2019033570A1 (en) 2017-08-17 2017-10-31 Lip movement analysis method, apparatus and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710708364.9A CN107633205B (en) 2017-08-17 2017-08-17 lip motion analysis method, device and storage medium

Publications (2)

Publication Number Publication Date
CN107633205A true CN107633205A (en) 2018-01-26
CN107633205B CN107633205B (en) 2019-01-18

Family

ID=61099627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710708364.9A Active CN107633205B (en) 2017-08-17 2017-08-17 lip motion analysis method, device and storage medium

Country Status (2)

Country Link
CN (1) CN107633205B (en)
WO (1) WO2019033570A1 (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108710836A (en) * 2018-05-04 2018-10-26 南京邮电大学 Lip detection and reading method based on cascade feature extraction
CN108874145A (en) * 2018-07-04 2018-11-23 深圳美图创新科技有限公司 Image processing method, computing device and storage medium
CN110223322A (en) * 2019-05-31 2019-09-10 腾讯科技(深圳)有限公司 Image-recognizing method, device, computer equipment and storage medium
WO2019223102A1 (en) * 2018-05-22 2019-11-28 平安科技(深圳)有限公司 Method and apparatus for checking validity of identity, terminal device and medium
CN111241922A (en) * 2019-12-28 2020-06-05 深圳市优必选科技股份有限公司 Robot, control method thereof and computer-readable storage medium
CN111259875A (en) * 2020-05-06 2020-06-09 中国人民解放军国防科技大学 Lip reading method based on self-adaptive semantic space-time diagram convolutional network
CN116405635A (en) * 2023-06-02 2023-07-07 山东正中信息技术股份有限公司 Multi-mode conference recording method and system based on edge calculation

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110738126A (en) * 2019-09-19 2020-01-31 平安科技(深圳)有限公司 Lip shearing method, device and equipment based on coordinate transformation and storage medium
WO2021224669A1 (en) * 2020-05-05 2021-11-11 Ravindra Kumar Tarigoppula System and method for controlling viewing of multimedia based on behavioural aspects of a user
CN113095146A (en) * 2021-03-16 2021-07-09 深圳市雄帝科技股份有限公司 Mouth state classification method, device, equipment and medium based on deep learning

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101702199A (en) * 2009-11-13 2010-05-05 深圳华为通信技术有限公司 Smiling face detection method and device and mobile terminal
CN104951730A (en) * 2014-03-26 2015-09-30 联想(北京)有限公司 Lip movement detection method, lip movement detection device and electronic equipment
CN105975935A (en) * 2016-05-04 2016-09-28 腾讯科技(深圳)有限公司 Face image processing method and apparatus
CN106529379A (en) * 2015-09-15 2017-03-22 阿里巴巴集团控股有限公司 Method and device for recognizing living body
CN106997451A (en) * 2016-01-26 2017-08-01 北方工业大学 Lip contour positioning method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2007094906A (en) * 2005-09-29 2007-04-12 Toshiba Corp Characteristic point detection device and method
CN104616438B (en) * 2015-03-02 2016-09-07 重庆市科学技术研究院 Yawning motion detection method for fatigue driving detection
CN105139503A (en) * 2015-10-12 2015-12-09 北京航空航天大学 Lip moving mouth shape recognition access control system and recognition method
CN106250815B (en) * 2016-07-05 2019-09-20 上海引波信息技术有限公司 Rapid expression recognition method based on mouth features
CN106485214A (en) * 2016-09-28 2017-03-08 天津工业大学 Eye and mouth state recognition method based on convolutional neural networks

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101702199A (en) * 2009-11-13 2010-05-05 深圳华为通信技术有限公司 Smiling face detection method and device and mobile terminal
CN104951730A (en) * 2014-03-26 2015-09-30 联想(北京)有限公司 Lip movement detection method, lip movement detection device and electronic equipment
CN106529379A (en) * 2015-09-15 2017-03-22 阿里巴巴集团控股有限公司 Method and device for recognizing living body
CN106997451A (en) * 2016-01-26 2017-08-01 北方工业大学 Lip contour positioning method
CN105975935A (en) * 2016-05-04 2016-09-28 腾讯科技(深圳)有限公司 Face image processing method and apparatus

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
VAHID KAZEMI et al.: "One Millisecond Face Alignment with an Ensemble of Regression Trees", 2014 IEEE Conference on Computer Vision and Pattern Recognition *
杨恒翔 (Yang Hengxiang): "Image-based lip feature extraction and mouth shape classification" (基于图像的嘴唇特征提取及口型分类研究), China Master's Theses Full-text Database *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108710836A (en) * 2018-05-04 2018-10-26 南京邮电大学 Lip detection and reading method based on cascade feature extraction
CN108710836B (en) * 2018-05-04 2020-10-09 南京邮电大学 Lip detection and reading method based on cascade feature extraction
WO2019223102A1 (en) * 2018-05-22 2019-11-28 平安科技(深圳)有限公司 Method and apparatus for checking validity of identity, terminal device and medium
CN108874145A (en) * 2018-07-04 2018-11-23 深圳美图创新科技有限公司 Image processing method, computing device and storage medium
CN108874145B (en) * 2018-07-04 2022-03-18 深圳美图创新科技有限公司 Image processing method, computing device and storage medium
CN110223322A (en) * 2019-05-31 2019-09-10 腾讯科技(深圳)有限公司 Image-recognizing method, device, computer equipment and storage medium
CN110223322B (en) * 2019-05-31 2021-12-14 腾讯科技(深圳)有限公司 Image recognition method and device, computer equipment and storage medium
CN111241922A (en) * 2019-12-28 2020-06-05 深圳市优必选科技股份有限公司 Robot, control method thereof and computer-readable storage medium
CN111241922B (en) * 2019-12-28 2024-04-26 深圳市优必选科技股份有限公司 Robot, control method thereof and computer readable storage medium
CN111259875A (en) * 2020-05-06 2020-06-09 中国人民解放军国防科技大学 Lip reading method based on self-adaptive semantic space-time diagram convolutional network
CN111259875B (en) * 2020-05-06 2020-07-31 中国人民解放军国防科技大学 Lip reading method based on self-adaptive semantic space-time diagram convolutional network
CN116405635A (en) * 2023-06-02 2023-07-07 山东正中信息技术股份有限公司 Multi-mode conference recording method and system based on edge calculation

Also Published As

Publication number Publication date
CN107633205B (en) 2019-01-18
WO2019033570A1 (en) 2019-02-21

Similar Documents

Publication Publication Date Title
CN107633204B (en) Face occlusion detection method, apparatus and storage medium
CN107679448B Eyeball motion analysis method, device and storage medium
CN107633205B (en) lip motion analysis method, device and storage medium
CN107808143B (en) Dynamic gesture recognition method based on computer vision
US10445562B2 (en) AU feature recognition method and device, and storage medium
CN110232311B (en) Method and device for segmenting hand image and computer equipment
CN107679449B Lip motion capture method, device and storage medium
CN107808120B (en) Glasses localization method, device and storage medium
CN107679447A Facial feature point detection method, device and storage medium
US8792722B2 (en) Hand gesture detection
US8750573B2 (en) Hand gesture detection
WO2019033573A1 (en) Facial emotion identification method, apparatus and storage medium
CN112052186B (en) Target detection method, device, equipment and storage medium
US9613296B1 (en) Selecting a set of exemplar images for use in an automated image object recognition system
CN107633206B (en) Eyeball motion capture method, device and storage medium
CN109829448A (en) Face identification method, device and storage medium
CN109271930B (en) Micro-expression recognition method, device and storage medium
US8902161B2 (en) Device and method for detecting finger position
CN113449704B (en) Face recognition model training method and device, electronic equipment and storage medium
WO2016099729A1 (en) Technologies for robust two dimensional gesture recognition
CN112001932A (en) Face recognition method and device, computer equipment and storage medium
CN111382791B (en) Deep learning task processing method, image recognition task processing method and device
Lahiani et al. Hand pose estimation system based on Viola-Jones algorithm for android devices
CN110175500B (en) Finger vein comparison method, device, computer equipment and storage medium
CN115223239A (en) Gesture recognition method and system, computer equipment and readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 1246925

Country of ref document: HK

GR01 Patent grant
GR01 Patent grant