CN107007257A - Automatic grading method and apparatus for the degree of facial unnaturalness - Google Patents

Automatic grading method and apparatus for the degree of facial unnaturalness Download PDF

Info

Publication number
CN107007257A
CN107007257A (application CN201710161341.0A)
Authority
CN
China
Prior art keywords
face
facial
unnatural
data
human face
Prior art date
Legal status
Granted
Application number
CN201710161341.0A
Other languages
Chinese (zh)
Other versions
CN107007257B (en)
Inventor
周永进
张树
徐井旭
向江怀
杨晓娟
石文秀
Current Assignee
Shenzhen University
Original Assignee
Shenzhen University
Priority date
Filing date
Publication date
Application filed by Shenzhen University filed Critical Shenzhen University
Priority to CN201710161341.0A priority Critical patent/CN107007257B/en
Publication of CN107007257A publication Critical patent/CN107007257A/en
Application granted granted Critical
Publication of CN107007257B publication Critical patent/CN107007257B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/0059 Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
    • A61B5/0077 Devices for viewing the surface of the body, e.g. camera, magnifying lens
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B5/72 Signal processing specially adapted for physiological signals or for diagnostic purposes
    • A61B5/7235 Details of waveform analysis
    • A61B5/7264 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
    • A61B5/7267 Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification

Abstract

The present invention provides an automatic grading method and apparatus for the degree of facial unnaturalness, comprising, arranged in sequence: a data acquisition module, a data preprocessing module, a classifier training module and a facial-unnaturalness automatic grading module. The data acquisition module comprises a facial static data acquisition unit and a facial dynamic data acquisition unit; the data preprocessing module comprises a facial static data preprocessing unit and a facial dynamic data preprocessing unit. The beneficial effects of the invention are: the facial condition of a subject is monitored and assessed in real time, so that the degree of facial unnaturalness can be evaluated objectively in real time and the unnatural facial regions can be localized. Based on this objectively quantified index, a subject can supply medical appraisal evidence in post-operative cosmetic and plastic surgery disputes, obtain a convenient and objective means of facial self-assessment in daily life, or provide a non-physiological (expression-parameter) basis for lie-detection examinations.

Description

Automatic grading method and apparatus for the degree of facial unnaturalness
Technical field
The invention belongs to the fields of plastic surgery, image processing technology, rehabilitation appliances and psychology, and in particular relates to a real-time evaluation method and apparatus for assessing the degree of unnaturalness of a human face.
Background technology
As living standards rise, people demand ever more of their appearance. More and more people whose appearance has been impaired by traffic accidents, trauma, tumors or similar causes, or who simply wish to improve their looks, choose plastic surgery to correct such defects. The design and methods of plastic surgery are influenced by subjective factors such as personal habits and experience; the outcome is unpredictable and subject to considerable randomness and blindness, and medical disputes with patients occur frequently. In addition, abnormal control of the facial muscles can affect how natural the face appears and thus its attractiveness.
At present, the naturalness of a face is assessed purely by visual observation, which is highly subjective, and facial unnaturalness can only be recognized by the human eye once it has become pronounced to a certain degree; there is no method or apparatus for accurate, quantitative, automatic grading of the degree of facial unnaturalness. In the present apparatus, static and dynamic data of the patient's post-operative face are collected and automatically graded by a previously trained classifier. This better reflects the subject's facial unnaturalness and evaluates it quantitatively and objectively; it can, for example, supply supporting data with stronger legal force when a subject seeks redress for a failed cosmetic procedure, and gives subjects a simple and reliable way to assess their own facial unnaturalness in daily life.
A lie detector can be used to assist criminal investigation by probing the psychological state of a suspect under questioning, so as to judge whether the suspect is involved in a case. "Lie detection" does not measure the lie itself but the changes in physiological parameters caused by psychological stimuli, such as pulse, respiration and galvanic skin response ("skin conductance"). Of these, skin conductance is the most sensitive and is the main basis for lie detection; many cities across the country have already introduced lie detectors into public security and judicial work. Whether a lie detector works properly, however, depends closely on the external environment, the individual state of the person tested, the skill of the examiner and the design of the test questions; some tests yield nothing because these conditions are not met. In the present apparatus, grading the degree of facial unnaturalness makes it possible to judge whether a subject may be lying in a way that saves time and effort and is not constrained by such conditions.
Summary of the invention
The present invention aims to provide an automatic grading apparatus for the degree of facial unnaturalness that gives subjects a portable, easy-to-operate means of facial self-examination and makes it possible to evaluate the degree of facial unnaturalness automatically and accurately and to localize the unnatural facial regions.
The present invention provides a real-time evaluation apparatus for the degree of facial unnaturalness, comprising, arranged in sequence: a data acquisition module, a data preprocessing module, a classifier training module and a facial-unnaturalness automatic grading module. The data acquisition module comprises a facial static data acquisition unit and a facial dynamic data acquisition unit; the data preprocessing module comprises a facial static data preprocessing unit and a facial dynamic data preprocessing unit.
Correspondingly, the present invention also provides a method of using the real-time evaluation apparatus for the degree of facial unnaturalness, comprising the following steps:
Step A: acquire data from the subject's face;
Step B: preprocess the acquired data;
Step C: train the facial-unnaturalness grading classifier by machine learning;
Step D: grade the subject's degree of facial unnaturalness with the trained classifier.
The advantage of the above technical scheme is a method and apparatus that can acquire, monitor and evaluate the degree of facial unnaturalness automatically and in real time, reflecting it objectively. It can accurately assess the subject's facial unnaturalness and localize the unnatural regions, thereby offering a post-operative evaluation measure for plastic and cosmetic surgery, providing medical appraisal evidence in disputes over failed procedures, giving subjects a reliable means of facial self-assessment in daily life, or supplying an effective basis for lie-detection examinations.
Preferably, in step B, the acquired image data are enhanced and denoised for subsequent use.
Preferably, step C comprises the following steps:
Step C1: train on a large number of face photographs used as samples;
Step C2: extract the static and dynamic features of facial unnaturalness and from them find the strong features corresponding to facial unnaturalness;
Based on the physiology of facial expression, Ekman and colleagues defined corresponding quantification rules, i.e. which muscles produce each expression, how each muscle acts to produce a specific expression, and how the muscles cooperate to produce it. These quantification rules are used as static features. The static features may include size, color, contour, shape and the like.
For example, static features may be extracted from the spatial-domain model of the face in each video, such as the size of the left and right eyes, their symmetry, and the spacing between facial features; dynamic features are extracted from the way the face changes between frames and may include motion speed and direction. Speed can be obtained by motion estimation algorithms such as optical flow or block matching.
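As an illustration of such static geometric features, the following minimal sketch computes eye size, eye symmetry and feature spacing from a detected landmark array; the 68-point (dlib-style) indexing and the helper name `static_features` are assumptions made for the example, not part of the patent.

```python
import numpy as np

def static_features(pts):
    """Toy static descriptors from a 68-point landmark array (dlib-style indexing
    assumed for illustration): eye sizes, their symmetry, and eye-to-mouth spacing."""
    pts = np.asarray(pts, dtype=float)          # shape (68, 2)
    left_eye, right_eye, mouth = pts[36:42], pts[42:48], pts[48:68]

    def bbox_area(region):                      # bounding-box area as a crude "size"
        return np.ptp(region[:, 0]) * np.ptp(region[:, 1])

    size_l, size_r = bbox_area(left_eye), bbox_area(right_eye)
    eye_dist = np.linalg.norm(left_eye.mean(0) - right_eye.mean(0))
    eye_mouth_dist = np.linalg.norm(
        (left_eye.mean(0) + right_eye.mean(0)) / 2 - mouth.mean(0))
    # ratios are scale-invariant, so faces of different sizes compare fairly
    return {
        "eye_size_ratio": size_l / max(size_r, 1e-6),
        "eye_asymmetry": abs(size_l - size_r) / max(size_l, size_r, 1e-6),
        "eye_mouth_over_eye_dist": eye_mouth_dist / max(eye_dist, 1e-6),
    }
```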
Optical flow is the two-dimensional instantaneous velocity with which pixels on the surface of a moving object are observed to move. Available computation methods include the gray-level differential method, region matching, energy-based methods and phase-based methods. Taking the phase-based method as an example:
Each frame of the sequence is fed to a bank of Gabor filters for band-pass pre-filtering. The output response of a Gabor filter is $R(X,t) = \rho(X,t)\,e^{j\phi(X,t)}$, where $X = (x_1, x_2)$ is the position of a pixel in the image plane and $\phi(X,t)$ is the output phase. A point $X$ on an iso-phase contour must satisfy $\phi(X,t) = c$, with $c$ a constant. Differentiating both sides with respect to $t$ gives
$$\nabla\phi(X,t)\cdot V + \frac{\partial \phi(X,t)}{\partial t} = 0,$$
where $V = \mathrm{d}X/\mathrm{d}t$ is the velocity of the pixel and $\nabla\phi = (\phi_{x_1}, \phi_{x_2})$ is the phase gradient. Projecting onto the normalized gradient direction $n = \nabla\phi/\lVert\nabla\phi\rVert$ and combining the equations above, the flow component along that direction is
$$V_n = -\,\frac{\partial \phi/\partial t}{\lVert\nabla\phi\rVert}\, n.$$
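For the motion-estimation step, the following is a minimal sketch using OpenCV's dense Farneback optical flow, one of the standard optical-flow methods the text allows alongside the phase-based approach; the frame paths and the summary statistics are placeholders chosen for illustration.

```python
import cv2
import numpy as np

# Two consecutive grayscale frames from a face video (placeholder paths).
prev = cv2.imread("frame_t0.png", cv2.IMREAD_GRAYSCALE)
curr = cv2.imread("frame_t1.png", cv2.IMREAD_GRAYSCALE)

# Dense per-pixel displacement field (dx, dy) via Farneback's algorithm.
# Positional parameters: pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags.
flow = cv2.calcOpticalFlowFarneback(prev, curr, None, 0.5, 3, 15, 3, 5, 1.2, 0)

speed, direction = cv2.cartToPolar(flow[..., 0], flow[..., 1])  # magnitude, angle (radians)

# Crude summaries of the kind the text uses as dynamic features:
mean_speed = float(speed.mean())                    # facial-muscle contraction speed proxy
dir_hist, _ = np.histogram(direction, bins=8, range=(0, 2 * np.pi), weights=speed)
dir_diversity = int((dir_hist > 0.1 * dir_hist.max()).sum())  # motion-direction diversity proxy
```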
Alternatively, a deep network can be used to learn and extract high-level features automatically;
or the features obtained from prior knowledge and those learned by deep learning are combined as the training input, with the labels as supervision, and the features and labels are processed by a convolutional neural network structure to generate the trained classifier.
The labels may be set as, for example, natural, somewhat unnatural, quite unnatural and very unnatural.
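A minimal sketch of the kind of convolutional classifier described here, assuming 128x128 grayscale face crops and the four grades above as class labels; the architecture, sizes and training loop are illustrative, not the patent's actual network.

```python
import torch
import torch.nn as nn

LABELS = ["natural", "somewhat unnatural", "quite unnatural", "very unnatural"]

class NaturalnessCNN(nn.Module):
    """Small CNN mapping 128x128 grayscale face crops to 4 naturalness grades."""
    def __init__(self, n_classes: int = len(LABELS)):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # -> 64x64
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 32x32
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # -> 16x16
        )
        self.classifier = nn.Sequential(
            nn.Flatten(), nn.Linear(64 * 16 * 16, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.classifier(self.features(x))

# One supervised training step on a dummy batch (real data: face crops + expert labels).
model = NaturalnessCNN()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

images = torch.randn(8, 1, 128, 128)           # batch of preprocessed face crops
targets = torch.randint(0, len(LABELS), (8,))  # expert-assigned grade indices
loss = loss_fn(model(images), targets)
optimizer.zero_grad(); loss.backward(); optimizer.step()
```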
A further advantage of this scheme is that, taking deep learning as an example, a large number of face photographs can be used as sample input to a convolutional neural network, which is trained to learn strong, data-driven features. Alternatively, based on prior knowledge, the principal feature points of the face can be extracted with the active shape model algorithm (Active Shape Model, ASM): eye corners, eye centers, eyebrows, nose, cheekbones, mouth corners, chin contour and so on. The face is then divided into sub-regions; the extracted feature points are used to locate the facial organs and regional muscles; a sampling window of suitable pixel size is chosen according to the size of each organ and the area of the muscles in each region, and a sampling block is extracted for every sub-region. A facial-region membership vector is computed and each sub-region is compared with the average face to obtain its difference. With several experts supervising the overall rating of facial unnaturalness during training, the features of facial unnaturalness are extracted automatically from the large data set, yielding a well-trained classifier, i.e. a classifier-model neural network.
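The landmark-and-subregion comparison could look roughly like the following sketch, which uses dlib's 68-point landmark model as a stand-in for the ASM step; the patch size, the file paths and the precomputed `average_face_patches.npy` reference are assumptions made for the example.

```python
import cv2
import dlib
import numpy as np

detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")  # model file assumed present

def landmarks(gray):
    """68 facial landmarks (eye corners, brows, nose, mouth corners, chin, ...) of the first face."""
    faces = detector(gray, 1)
    if not faces:
        return None
    shape = predictor(gray, faces[0])
    return np.array([[p.x, p.y] for p in shape.parts()], dtype=float)

def region_patches(gray, pts, half=16):
    """Fixed-size patches sampled around each landmark, one per facial sub-region."""
    patches = []
    for x, y in pts.astype(int):
        patch = gray[max(y - half, 0):y + half, max(x - half, 0):x + half]
        patches.append(cv2.resize(patch, (2 * half, 2 * half)).astype(float))
    return np.stack(patches)

# Deviation of a test face's patches from an "average face" built the same way over a
# reference set (here just a hypothetical precomputed array of the same shape).
gray = cv2.imread("subject.png", cv2.IMREAD_GRAYSCALE)
pts = landmarks(gray)
assert pts is not None, "no face detected"
test_patches = region_patches(gray, pts)
average_patches = np.load("average_face_patches.npy")
region_deviation = np.abs(test_patches - average_patches).mean(axis=(1, 2))  # one score per sub-region
```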
Preferably, step D may comprise the following steps:
Step D1: for newly arrived face data to be processed, determine the feature set using the prior-knowledge feature extraction method or the strong-feature indices obtained by deep learning.
Step D2: feed the feature set to the classifier and output the degree of facial unnaturalness.
The advantage of this scheme is that the quantitative indices of facial unnaturalness are extracted automatically: machine learning is applied to the preprocessed static and dynamic facial data to extract the characteristic parameters that characterize the degree of facial unnaturalness, while at the same time localizing the unnatural facial regions.
Preferably, the specific indices in step D1 include: facial muscle contraction speed, diversity of facial muscle movement directions, linkage of facial expressions, local left-right asymmetry of the face, and local abnormal twitching. Each index is given a weight by comprehensive analysis, and the deviation from the average face of the machine-learning model, together with a transfer function, is computed to obtain the quantitative evaluation result.
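A minimal numerical sketch of the weighted combination just described; the index values and weights are placeholders, not figures from the patent.

```python
import numpy as np

# Per-index measurements for one subject and for the machine-learned "average face" model.
indices = ["contraction_speed", "direction_diversity", "expression_linkage",
           "left_right_asymmetry", "local_twitching"]
subject = np.array([0.42, 0.30, 0.55, 0.18, 0.05])   # illustrative measurements
average = np.array([0.40, 0.35, 0.60, 0.05, 0.01])   # illustrative average-face values
weights = np.array([0.15, 0.15, 0.20, 0.30, 0.20])   # illustrative weights, summing to 1

deviation = np.abs(subject - average)                 # per-index deviation from the average face
unnaturalness = float(weights @ deviation)            # weighted overall unnaturalness score
worst_index = indices[int(np.argmax(weights * deviation))]  # crude localization of the problem
```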
Preferably, the grading scale includes: extracting facial expression parameters from the preprocessed static image data;
A further advantage of this scheme is that the time course of facial muscle contraction can be extracted from the preprocessed dynamic image data. A sculpted face is graded 0 and the expressive face of a comedian is graded 100; assigning a weight to each extracted index of facial unnaturalness yields the graded naturalness score of the person's facial expression.
Preferably, extracting the facial expression parameters includes at least one of: facial muscle contraction speed, which can be obtained by motion estimation between consecutive frames; diversity of facial muscle movement directions; linkage of facial expressions; left-right asymmetry of the face; and the time course of facial muscle contraction extracted from the preprocessed dynamic image data.
A further advantage of this scheme is that extracting the facial expression parameters includes: facial muscle contraction speed, which can be computed by motion estimation between consecutive frames, e.g. by block matching or optical flow; diversity of facial muscle movement directions, where the detected feature-point coordinate vectors are converted into descriptions of the corresponding mimetic-muscle movements and used as input to the measurement system, which is classified after classifier training to obtain the measurement result; facial expression linkage, where expression analysis combined with AU coding uses joint Haar features on the Haar feature basis to capture the coordinated changes of facial parts; left-right asymmetry of the face, where geometric and gray-level preprocessing are used to build normalized face data and the similarity of the left and right halves is compared; and the time course of facial muscle contraction, which can be extracted from the preprocessed dynamic image data. A sculpted face is graded 0 and the expressive face of a comedian is graded 100; assigning a weight to each extracted index of facial unnaturalness yields the graded naturalness score of the person's facial expression.
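For one of these parameters, left-right asymmetry, a minimal sketch follows: the normalized face crop is mirrored and the two versions compared, here with SSIM from scikit-image as an illustrative similarity measure; the image path is a placeholder.

```python
import cv2
from skimage.metrics import structural_similarity as ssim

def lr_asymmetry(face_gray):
    """Left-right asymmetry of a normalized grayscale face crop in [0, 1];
    0 means perfectly symmetric under this (illustrative) measure."""
    face = cv2.equalizeHist(face_gray)   # crude photometric normalization
    mirrored = cv2.flip(face, 1)         # horizontal mirror
    return 1.0 - ssim(face, mirrored)    # SSIM is 1.0 for a perfectly symmetric face

face = cv2.imread("normalized_face.png", cv2.IMREAD_GRAYSCALE)
print("asymmetry:", lr_asymmetry(face))
```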
The beneficial effects of the invention are: the facial condition is monitored and assessed in real time, so the degree of facial unnaturalness is evaluated in real time and objectively; compared with existing subjective judgment this saves time and effort and is free from human bias, and the unnatural facial regions can be localized. Based on the objectively quantified index, a subject can supply medical appraisal evidence in post-operative cosmetic and plastic surgery disputes, obtain a convenient and objective means of facial self-assessment in daily life, or provide a non-physiological (expression-parameter) basis for lie-detection examinations.
Brief description of the drawings
Fig. 1: facial feature point detection results.
Fig. 2: block diagram of the automatic grading method and apparatus for the degree of facial unnaturalness.
Fig. 3: static data acquisition and processing module.
Fig. 4: dynamic data acquisition and processing module.
Fig. 5: flow chart of facial unnaturalness estimation.
Detailed description of the embodiments
Preferred embodiments of the present invention are described in further detail below with reference to the accompanying drawings:
The present invention is an integrated apparatus that evaluates the degree of facial unnaturalness in real time by acquiring static and dynamic data of the facial condition. It provides subjects with a personalized evaluation scheme of facial unnaturalness, supplies medical appraisal evidence for post-operative cosmetic and plastic surgery disputes, offers a reliable means of facial self-assessment in daily life, and can provide a non-physiological (expression-parameter) basis for lie-detection examinations.
The present invention is described in further detail below, taking as an embodiment the automatic grading of the post-operative degree of facial unnaturalness after cosmetic facial surgery. The "degree of facial unnaturalness" in this embodiment refers to the degree of abnormality reflected in facial muscle contraction activity, such as facial asymmetry or facial stiffness. This embodiment uses machine learning to classify the data: experts first analyze and score the sample data, and the samples and labels are then fed as input to train the classifier. Specifically, the data may be supplied to models of several classifiers, or to a deep-learning model trained on several training or test data sets. In this example, a confidence level can be generated from how well the data match the classifier and associated with the classification of the data. In this embodiment, the data set is not limited to the image/video information to be classified; it may also include additional data that aid accurate classification but are difficult for humans to mine. In this example, the database can be updated continuously.
The block diagram of this integrated apparatus is shown in Fig. 2, and the details are as follows:
Step 1: data acquisition module
In this embodiment, the module is a camera-based data acquisition device containing facial static data acquisition and facial dynamic data acquisition, as shown in Fig. 3. For dynamic data acquisition: 1) the subject performs specified actions as instructed, e.g. crying, laughing, anger, and the recording is cut into 10-second video segments according to the action sequence, each segment serving as one sample; 2) the process of the subject switching expressions is recorded, e.g. from happiness to sadness or from anger to happiness, and cut into 20-second video segments, each of which can serve as one sample.
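A minimal sketch of cutting a recorded session into fixed-length sample clips as described above; the file names, codec and the 10-second clip length are placeholders (20 seconds would be used for the expression-switch recordings).

```python
import cv2

def split_into_clips(video_path, clip_seconds=10, out_prefix="sample"):
    """Cut a recorded face video into consecutive fixed-length clips, one file per sample."""
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    frames_per_clip = int(round(fps * clip_seconds))
    fourcc = cv2.VideoWriter_fourcc(*"mp4v")

    clip_idx, frame_idx, writer = 0, 0, None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if frame_idx % frames_per_clip == 0:      # start a new sample clip
            if writer is not None:
                writer.release()
            writer = cv2.VideoWriter(f"{out_prefix}_{clip_idx:03d}.mp4", fourcc, fps, (w, h))
            clip_idx += 1
        writer.write(frame)
        frame_idx += 1
    if writer is not None:
        writer.release()
    cap.release()

split_into_clips("session_recording.mp4", clip_seconds=10)
```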
Step 2: data preprocessing module
In this embodiment, this module mainly preprocesses the static and dynamic data collected synchronously but independently by the module above, to facilitate subsequent processing. As shown in Fig. 4, it consists of two sub-modules: facial static data preprocessing and facial dynamic data preprocessing. The acquired image data are enhanced, denoised and so on for subsequent use.
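A minimal sketch of such enhancement and denoising with standard OpenCV operations (non-local-means denoising and histogram equalization of the luminance channel); the particular operations, parameters and file path are illustrative choices, not the patent's prescribed pipeline.

```python
import cv2

def preprocess(frame_bgr):
    """Denoise a color frame and equalize its luminance to reduce lighting differences."""
    denoised = cv2.fastNlMeansDenoisingColored(frame_bgr, None, 10, 10, 7, 21)
    ycrcb = cv2.cvtColor(denoised, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])   # contrast enhancement on luminance only
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)

frame = cv2.imread("raw_frame.png")
clean = preprocess(frame)
```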
Step 3: classifier training
In this embodiment, this module mainly trains on a large number of face photographs used as samples, combined with the labels assigned by professionals. It can extract the static features (within individual frames) and dynamic features (across frames) of facial unnaturalness and from them find the strong features corresponding to facial unnaturalness; it can also use a deep network to learn and extract high-level features automatically, or combine the features obtained from prior knowledge with those learned by deep learning as the training input, thereby obtaining a well-trained classifier. The output of this process is the set of all data representing the system's classification.
Step 4: automatic grading of facial unnaturalness
In this embodiment, this module mainly compares the difference in action between the preprocessed data and the average face to judge the unnaturalness produced by local facial muscle contraction movements. The trained classifier extracts quantitative indices from the preprocessed data; specific indices include facial muscle contraction speed, diversity of facial muscle movement directions, linkage of facial expressions, local left-right asymmetry of the face, and local abnormal twitching. Each index is given a weight by comprehensive analysis, and the deviation from the average face of the machine-learning model, together with a transfer function, is computed; the result can serve as the quantitative evaluation of the degree of facial unnaturalness.
Grading scale: facial expression parameters are extracted from the preprocessed static and dynamic image data (e.g. facial muscle contraction speed, which can be computed by motion estimation between consecutive frames such as block matching or optical flow; diversity of facial muscle movement directions, where the detected feature-point coordinate vectors are converted into descriptions of the corresponding mimetic-muscle movements and used as input to the measurement system, classified after classifier training to obtain the measurement result; facial expression linkage, where expression analysis combined with AU coding uses joint Haar features on the Haar feature basis to capture the coordinated changes of facial parts; left-right asymmetry of the face, where geometric and gray-level preprocessing are used to build normalized face data and the similarity of the left and right halves is compared); the time course of facial muscle contraction and the like can also be extracted from the preprocessed dynamic image data.
A sculpted face is graded 0 and the expressive face of a comedian is graded 100. House-Brackmann scoring can be used as the basis of the grading levels; within a level the score is obtained by weighting the unnaturalness of each facial region, and the weight coefficients can be determined in the training network after normalizing the feature weights, with high weights for the facial positions, textures and actions that correspond to strong features. Alternatively, the different features can be linearly regressed against the resulting classification and the expert scores and ranked by R-squared to adjust the weights; the feature with the highest R-squared, for example the mouth, receives the highest weight. This weighting scheme is adjusted dynamically as the data volume grows.
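One way the R-squared-based weighting could be realized is sketched below: each feature is regressed separately against the expert scores, the R-squared values are ranked and normalized into weights; the data arrays here are random placeholders, not measurements from the patent.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# X: per-subject feature measurements (rows) for the candidate indices (columns);
# y: expert-assigned naturalness scores on the 0-100 scale. Placeholder random data.
rng = np.random.default_rng(0)
X = rng.random((50, 5))
y = rng.random(50) * 100

r2 = np.array([
    LinearRegression().fit(X[:, [j]], y).score(X[:, [j]], y)   # R^2 of feature j alone
    for j in range(X.shape[1])
])
weights = np.clip(r2, 0, None)
weights = weights / weights.sum()   # normalized weights; the highest-R^2 feature weighs most
print(dict(enumerate(np.round(weights, 3))))
```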
The flow of the specific implementation of this embodiment is shown in Fig. 5; the detailed steps are as follows:
Step 1: the subject whose degree of facial unnaturalness is to be assessed sits quietly in front of the facial data acquisition instrument (e.g. a camera).
Step 2: once all preparations are complete, data acquisition starts.
Step 3: the acquired data are preprocessed, including normalization of the data and removal of the influence of background illumination differences.
Step 4: machine learning is applied; taking deep learning as an example, a large number of face photographs can be used as sample input to a convolutional neural network for training. Facial feature points are extracted first; for example, the principal feature points of the face can be extracted with the active shape model algorithm (Active Shape Model, ASM): eye corners, eye centers, eyebrows, nose, cheekbones, mouth corners, chin contour and so on. The face is then divided into sub-regions; the extracted feature points are used to locate the facial organs and regional muscles; a sampling window of suitable pixel size is chosen according to the size of each organ and the area of the muscles in each region, and a sampling block is extracted for every sub-region. A facial-region membership vector is computed and every sub-region is compared with the average face to obtain its difference. With several experts performing fully supervised training on the overall rating of facial unnaturalness, the features of facial unnaturalness are extracted automatically from the large data set, yielding a well-trained classifier, i.e. a classifier-model neural network.
Step 5: the subject's degree of facial unnaturalness is graded with the trained classifier.
Step 6: optionally print the grading result.
Points of protection
1. In the present invention, processing and analysis of facial data by machine learning to achieve quantitative evaluation of the degree of facial unnaturalness, together with its accurate localization, falls within the scope of protection of this patent.
2. Based on the embodiments of the present invention, all other embodiments obtained without creative effort by persons skilled in the field to which this patent relates fall within the scope of protection of this patent.
The above is a further detailed description of the present invention in connection with specific preferred embodiments, and it cannot be held that the specific implementation of the present invention is limited to these descriptions. For ordinary technical personnel in the technical field of the present invention, several simple deductions or substitutions may be made without departing from the concept of the present invention, and all of these should be regarded as falling within the scope of protection of the present invention.

Claims (9)

1. A real-time evaluation apparatus for the degree of facial unnaturalness, characterized by comprising, arranged in sequence: a data acquisition module, a data preprocessing module, a classifier training module and a facial-unnaturalness automatic grading module; the data acquisition module comprises a facial static data acquisition unit and a facial dynamic data acquisition unit; the data preprocessing module comprises a facial static data preprocessing unit and a facial dynamic data preprocessing unit.
2. A method of using the real-time evaluation apparatus for the degree of facial unnaturalness according to claim 1, characterized by comprising the following steps:
Step A: acquire data from the subject's face;
Step B: preprocess the acquired data;
Step C: train the facial-unnaturalness grading classifier by machine learning;
Step D: grade the subject's degree of facial unnaturalness with the trained classifier.
3. The method of using the real-time evaluation apparatus for the degree of facial unnaturalness according to claim 1, characterized in that in step B the acquired image data are enhanced and denoised for subsequent use.
4. The method of using the real-time evaluation apparatus for the degree of facial unnaturalness according to claim 1, characterized in that step C comprises the following steps:
Step C1: train on a large number of face photographs used as samples;
Step C2: extract the static and dynamic features of facial unnaturalness and from them find the strong features corresponding to facial unnaturalness;
or use a deep network to learn and extract high-level features automatically;
or combine the features obtained from prior knowledge and those learned by deep learning as the training input, thereby obtaining a well-trained classifier.
5. The method of using the real-time evaluation apparatus for the degree of facial unnaturalness according to claim 1, characterized in that step D comprises the following steps:
Step D1: the trained classifier extracts quantitative indices from the preprocessed data, gives each index a weight by comprehensive analysis, and computes the deviation from the average face of the machine-learning model and a transfer function;
Step D2: the result serves as the quantitative evaluation of the degree of facial unnaturalness.
6. The method of using the real-time evaluation apparatus for the degree of facial unnaturalness according to claim 1, characterized in that the specific indices in step D1 include: facial muscle contraction speed, diversity of facial muscle movement directions, linkage of facial expressions, local left-right asymmetry of the face, and local abnormal twitching; each index is given a weight by comprehensive analysis, and the deviation from the average face of the machine-learning model, together with a transfer function, is computed to obtain the quantitative evaluation result.
7. The method of using the real-time evaluation apparatus for the degree of facial unnaturalness according to claim 1, characterized by the grading scale: facial expression parameters are extracted from the preprocessed static image data; the time course of facial muscle contraction and the like can be extracted from the preprocessed dynamic image data; a sculpted face is graded 0 and the expressive face of a comedian is graded 100; assigning a weight to each extracted index of facial unnaturalness yields the graded naturalness score of the person's facial expression.
8. The method of using the real-time evaluation apparatus for the degree of facial unnaturalness according to claim 1, characterized in that extracting the facial expression parameters includes at least one of: facial muscle contraction speed, which can be obtained by motion estimation between consecutive frames; diversity of facial muscle movement directions; linkage of facial expressions; left-right asymmetry of the face; and the time course of facial muscle contraction extracted from the preprocessed dynamic image data.
9. The method of using the real-time evaluation apparatus for the degree of facial unnaturalness according to claim 1, characterized by automatically extracting the quantitative indices of facial unnaturalness: machine learning is applied to the preprocessed static and dynamic facial data to extract the characteristic parameters that characterize the degree of facial unnaturalness, while the unnatural facial regions are localized.
CN201710161341.0A 2017-03-17 2017-03-17 Automatic grading method and apparatus for the degree of facial unnaturalness Active CN107007257B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710161341.0A CN107007257B (en) 2017-03-17 2017-03-17 Automatic grading method and apparatus for the degree of facial unnaturalness

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710161341.0A CN107007257B (en) 2017-03-17 2017-03-17 Automatic grading method and apparatus for the degree of facial unnaturalness

Publications (2)

Publication Number Publication Date
CN107007257A true CN107007257A (en) 2017-08-04
CN107007257B CN107007257B (en) 2018-06-01

Family

ID=59439576

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710161341.0A Active CN107007257B (en) 2017-03-17 2017-03-17 Automatic grading method and apparatus for the degree of facial unnaturalness

Country Status (1)

Country Link
CN (1) CN107007257B (en)

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107633207A (en) * 2017-08-17 2018-01-26 平安科技(深圳)有限公司 AU characteristic recognition methods, device and storage medium
CN107704919A (en) * 2017-09-30 2018-02-16 广东欧珀移动通信有限公司 Control method, device and the storage medium and mobile terminal of mobile terminal
CN107704834A (en) * 2017-10-13 2018-02-16 上海壹账通金融科技有限公司 Micro-expression face examination assisting method, device and storage medium
CN108416331A (en) * 2018-03-30 2018-08-17 百度在线网络技术(北京)有限公司 Method, apparatus, storage medium and the terminal device that face symmetrically identifies
CN108446593A (en) * 2018-02-08 2018-08-24 北京捷通华声科技股份有限公司 A kind of prosopospasm detection method and device
WO2019085331A1 (en) * 2017-11-02 2019-05-09 平安科技(深圳)有限公司 Fraud possibility analysis method, device, and storage medium
CN110084259A (en) * 2019-01-10 2019-08-02 谢飞 Facial paralysis grading comprehensive evaluation system combining facial texture and optical flow features
CN110516626A (en) * 2019-08-29 2019-11-29 上海交通大学 A kind of Facial symmetry appraisal procedure based on face recognition technology
CN110889332A (en) * 2019-10-30 2020-03-17 中国科学院自动化研究所南京人工智能芯片创新研究院 Lie detection method based on micro expression in interview
CN111062936A (en) * 2019-12-27 2020-04-24 中国科学院上海生命科学研究院 Quantitative index evaluation method for facial deformation diagnosis and treatment effect
CN111914871A (en) * 2019-05-09 2020-11-10 李至伟 Artificial intelligence auxiliary evaluation method and system applied to beauty treatment and electronic device
CN111986801A (en) * 2020-07-14 2020-11-24 珠海中科先进技术研究院有限公司 Rehabilitation evaluation method, device and medium based on deep learning

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040143170A1 (en) * 2002-12-20 2004-07-22 Durousseau Donald R. Intelligent deception verification system
WO2005043453A1 (en) * 2003-10-23 2005-05-12 Northrop Grumman Corporation Robust and low cost optical system for sensing stress, emotion and deception in human subjects
US20130139255A1 (en) * 2011-11-30 2013-05-30 Elwha LLC, a limited liability corporation of the State of Delaware Detection of deceptive indicia masking in a communications interaction
CN104008391A (en) * 2014-04-30 2014-08-27 首都医科大学 Face micro-expression capturing and recognizing method based on nonlinear dimension reduction
CN104679967A (en) * 2013-11-27 2015-06-03 广州华久信息科技有限公司 Method for judging reliability of psychological test
CN105160318A (en) * 2015-08-31 2015-12-16 北京旷视科技有限公司 Facial expression based lie detection method and system
CN105205479A (en) * 2015-10-28 2015-12-30 小米科技有限责任公司 Human face value evaluation method, device and terminal device
CN106295568A (en) * 2016-08-11 2017-01-04 上海电力学院 The mankind's naturalness emotion identification method combined based on expression and behavior bimodal

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040143170A1 (en) * 2002-12-20 2004-07-22 Durousseau Donald R. Intelligent deception verification system
WO2005043453A1 (en) * 2003-10-23 2005-05-12 Northrop Grumman Corporation Robust and low cost optical system for sensing stress, emotion and deception in human subjects
US20130139255A1 (en) * 2011-11-30 2013-05-30 Elwha LLC, a limited liability corporation of the State of Delaware Detection of deceptive indicia masking in a communications interaction
CN104679967A (en) * 2013-11-27 2015-06-03 广州华久信息科技有限公司 Method for judging reliability of psychological test
CN104008391A (en) * 2014-04-30 2014-08-27 首都医科大学 Face micro-expression capturing and recognizing method based on nonlinear dimension reduction
CN105160318A (en) * 2015-08-31 2015-12-16 北京旷视科技有限公司 Facial expression based lie detection method and system
CN105205479A (en) * 2015-10-28 2015-12-30 小米科技有限责任公司 Human face value evaluation method, device and terminal device
CN106295568A (en) * 2016-08-11 2017-01-04 上海电力学院 The mankind's naturalness emotion identification method combined based on expression and behavior bimodal

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107633207A (en) * 2017-08-17 2018-01-26 平安科技(深圳)有限公司 AU characteristic recognition methods, device and storage medium
CN107704919B (en) * 2017-09-30 2021-12-07 Oppo广东移动通信有限公司 Control method and device of mobile terminal, storage medium and mobile terminal
CN107704919A (en) * 2017-09-30 2018-02-16 广东欧珀移动通信有限公司 Control method, device and the storage medium and mobile terminal of mobile terminal
CN107704834A (en) * 2017-10-13 2018-02-16 上海壹账通金融科技有限公司 Micro-expression face examination assisting method, device and storage medium
CN107704834B (en) * 2017-10-13 2021-03-30 深圳壹账通智能科技有限公司 Micro-surface examination assisting method, device and storage medium
WO2019085331A1 (en) * 2017-11-02 2019-05-09 平安科技(深圳)有限公司 Fraud possibility analysis method, device, and storage medium
CN108446593A (en) * 2018-02-08 2018-08-24 北京捷通华声科技股份有限公司 A kind of prosopospasm detection method and device
CN108416331A (en) * 2018-03-30 2018-08-17 百度在线网络技术(北京)有限公司 Method, apparatus, storage medium and the terminal device that face symmetrically identifies
CN108416331B (en) * 2018-03-30 2019-08-09 百度在线网络技术(北京)有限公司 Method, apparatus, storage medium and the terminal device that face symmetrically identifies
CN110084259A (en) * 2019-01-10 2019-08-02 谢飞 Facial paralysis grading comprehensive evaluation system combining facial texture and optical flow features
CN110084259B (en) * 2019-01-10 2022-09-20 谢飞 Facial paralysis grading comprehensive evaluation system combining facial texture and optical flow characteristics
TWI756681B (en) * 2019-05-09 2022-03-01 李至偉 Artificial intelligence assisted evaluation method applied to aesthetic medicine and system using the same
CN111914871A (en) * 2019-05-09 2020-11-10 李至伟 Artificial intelligence auxiliary evaluation method and system applied to beauty treatment and electronic device
CN110516626A (en) * 2019-08-29 2019-11-29 上海交通大学 A kind of Facial symmetry appraisal procedure based on face recognition technology
CN110889332A (en) * 2019-10-30 2020-03-17 中国科学院自动化研究所南京人工智能芯片创新研究院 Lie detection method based on micro expression in interview
CN111062936A (en) * 2019-12-27 2020-04-24 中国科学院上海生命科学研究院 Quantitative index evaluation method for facial deformation diagnosis and treatment effect
CN111062936B (en) * 2019-12-27 2023-11-03 中国科学院上海营养与健康研究所 Quantitative index evaluation method for facial deformation diagnosis and treatment effect
CN111986801A (en) * 2020-07-14 2020-11-24 珠海中科先进技术研究院有限公司 Rehabilitation evaluation method, device and medium based on deep learning

Also Published As

Publication number Publication date
CN107007257B (en) 2018-06-01

Similar Documents

Publication Publication Date Title
CN107007257B (en) Automatic grading method and apparatus for the degree of facial unnaturalness
CN110507335B (en) Multi-mode information based criminal psychological health state assessment method and system
Zhang et al. Automatic cataract detection and grading using deep convolutional neural network
CN106682616A (en) Newborn-painful-expression recognition method based on dual-channel-characteristic deep learning
CN105022929B (en) A kind of cognition accuracy analysis method of personal traits value test
CN110197729A (en) Tranquillization state fMRI data classification method and device based on deep learning
CN110119672A (en) A kind of embedded fatigue state detection system and method
CN110428908B (en) Eyelid motion function evaluation system based on artificial intelligence
CN112472048B (en) Method for realizing neural network for identifying pulse condition of cardiovascular disease patient
CN109447962A (en) A kind of eye fundus image hard exudate lesion detection method based on convolutional neural networks
CN109805944B (en) Children's empathy ability analysis system
CN109431523A (en) Autism primary screening apparatus based on a non-social auditory stimulation behavioral paradigm
CN106128032A (en) A kind of fatigue state monitoring and method for early warning and system thereof
CN110309813A (en) A kind of model training method, detection method, device, mobile end equipment and the server of the human eye state detection based on deep learning
CN106980815A (en) Objective facial paralysis evaluation method based on supervised House-Brackmann grade scores
CN106667506A (en) Method and device for detecting lies on basis of electrodermal response and pupil change
CN111466878A (en) Real-time monitoring method and device for pain symptoms of bedridden patients based on expression recognition
CN111403026A (en) Facial paralysis grade assessment method
CN109344763A (en) A kind of strabismus detection method based on convolutional neural networks
CN110148108A (en) Herpes zoster neuralgia curative effect prediction method and system based on functional MRI
Zeng et al. Automated detection of diabetic retinopathy using a binocular siamese-like convolutional network
Zhang et al. Real-time activity and fall risk detection for aging population using deep learning
CN107967941A (en) A kind of unmanned plane health monitoring method and system based on intelligent vision reconstruct
CN110427987A (en) A kind of the plantar pressure characteristic recognition method and system of arthritic
CN114565957A (en) Consciousness assessment method and system based on micro expression recognition

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant