CN107007257B - Method and apparatus for automatic grading of the degree of facial unnaturalness - Google Patents
Method and apparatus for automatic grading of the degree of facial unnaturalness
- Publication number: CN107007257B (application CN201710161341.0A)
- Authority
- CN
- China
- Prior art keywords
- face
- data
- unnatural
- degree
- human face
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/0059—Measuring for diagnostic purposes; Identification of persons using light, e.g. diagnosis by transillumination, diascopy, fluorescence
- A61B5/0077—Devices for viewing the surface of the body, e.g. camera, magnifying lens
-
- A—HUMAN NECESSITIES
- A61—MEDICAL OR VETERINARY SCIENCE; HYGIENE
- A61B—DIAGNOSIS; SURGERY; IDENTIFICATION
- A61B5/00—Measuring for diagnostic purposes; Identification of persons
- A61B5/72—Signal processing specially adapted for physiological signals or for diagnostic purposes
- A61B5/7235—Details of waveform analysis
- A61B5/7264—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems
- A61B5/7267—Classification of physiological signals or data, e.g. using neural networks, statistical classifiers, expert systems or fuzzy systems involving training the classification device
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Abstract
The present invention provides a method and apparatus for automatic grading of the degree of facial unnaturalness, comprising, arranged in sequence: a data acquisition module, a data preprocessing module, a classifier-training module, and a facial-unnaturalness automatic grading module. The data acquisition module comprises a facial static-data acquisition unit and a facial dynamic-data acquisition unit; the data preprocessing module comprises a facial static-data preprocessing unit and a facial dynamic-data preprocessing unit. The beneficial effects of the invention are: the facial condition of a subject is monitored and assessed in real time, the degree of facial unnaturalness is evaluated objectively in real time, and the unnatural facial regions are localized. Based on objectively quantified indices, the subject can be provided with forensic evidence in disputes over cosmetic or plastic surgery, with a convenient and objective means of facial self-assessment in daily life, or with non-physiological (expression-based) evidence for lie-detection examinations.
Description
Technical field
The invention belongs to the fields of plastic surgery, image processing, rehabilitation appliances, and psychology, and specifically relates to a real-time method and apparatus for assessing the degree of facial unnaturalness.
Background technology
As quality of life improves, people's expectations for their appearance rise as well. Whether because of disfigurement caused by traffic accidents, trauma, or tumors, or simply for cosmetic reasons, more and more people choose plastic surgery to correct perceived defects. Surgical designs and methods are influenced by subjective factors such as the surgeon's personal habits and experience; outcomes are hard to predict, carry considerable randomness and blindness, and medical disputes remain frequent. In addition, abnormal control of the facial muscles can increase facial unnaturalness and detract from appearance.
At present, facial naturalness is assessed purely by visual observation, which is highly subjective, and unnaturalness can only be recognized by the human eye once it has accumulated to a certain degree; no method or apparatus exists for accurate, quantitative, automatic grading of facial unnaturalness. The present apparatus collects static and dynamic data of the patient's postoperative face and grades it automatically with a previously trained classifier. This reflects the unnatural condition of the subject's face more objectively and quantifies it: for example, it can supply supporting data, with legal weight, to a subject pursuing a claim over a failed cosmetic procedure, and it also offers a simple and reliable means for subjects to assess their own facial unnaturalness in daily life.
" a lie detector " (Lie Detector) can be used for assisting to investigate in crime survey, to understand the suspicion inquired
The psychologic status of people, so as to judge whether it is related to punishment case." detecting a lie " is not to survey " lie " in itself, but thought-read reason institute is stimulated
The variation of caused physiological parameter, such as pulse, breathing and dermatopolyneuritis (referred to as " skin electricity ").Wherein, skin electricity is most sensitive, is to detect a lie
Main basis, at present the whole nation have many cities a lie detector is introduced into public security, judicial circuit.But can a lie detector play
Normal effect, the external environment, testee's individual state, the design of the level and problem of testing teacher with test are all close
It is related.Some tests are final gainless as condition is inadequate.In apparatus of the present invention, by the unnatural degree of face
Rating evaluation, it is more time saving and energy saving to judge subject with the presence or absence of lying suspicion, and restrained from condition.
Summary of the invention
The present invention aims to provide an apparatus for automatic grading of the degree of facial unnaturalness that gives the subject a portable, easy-to-use means of facial self-examination, making automatic and accurate assessment of facial unnaturalness, and localization of the unnatural facial regions, possible.
The present invention provides a real-time apparatus for assessing the degree of facial unnaturalness, comprising, arranged in sequence: a data acquisition module, a data preprocessing module, a classifier-training module, and a facial-unnaturalness automatic grading module. The data acquisition module comprises a facial static-data acquisition unit and a facial dynamic-data acquisition unit; the data preprocessing module comprises a facial static-data preprocessing unit and a facial dynamic-data preprocessing unit.
Correspondingly, the present invention also provides a method of using the real-time apparatus for assessing the degree of facial unnaturalness, comprising the following steps:
Step A: acquire data from the subject's face;
Step B: preprocess the acquired data;
Step C: train the facial-unnaturalness assessment classifier by machine learning;
Step D: grade the degree of unnaturalness of the subject's face with the trained classifier.
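The four steps above form a simple processing chain. The function below is an illustrative outline only; the callables passed in are hypothetical placeholders, not components specified by the patent:

```python
def grade_face(video, acquire, preprocess, extract_features, classify):
    """Chain steps A-D: acquisition -> preprocessing -> feature
    extraction -> grading by a trained classifier."""
    frames = acquire(video)                    # Step A: data acquisition
    clean = [preprocess(f) for f in frames]    # Step B: preprocessing
    features = extract_features(clean)         # input to the trained classifier
    return classify(features)                  # Step D: grading
```

Any concrete acquisition device, preprocessing routine, feature extractor, and classifier can be slotted into this skeleton without changing the overall flow.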
The advantage of the above technical scheme is a method and apparatus that automatically acquires, monitors, and assesses the degree of facial unnaturalness in real time and reflects it objectively. It can accurately assess the unnatural condition of the subject's face and localize the unnatural regions, thereby providing the subject with a postoperative evaluation of plastic or cosmetic surgery, supplying forensic evidence in possible disputes over failed procedures, offering a reliable means of facial self-assessment in daily life, or providing effective evidence for lie-detection examinations.
Preferably, in step B, the acquired image data are enhanced and denoised for later use.
Preferably, step C comprises the following steps:
Step C1: train on a large number of facial photographs used as samples;
Step C2: extract the static and dynamic features of facial unnaturalness, and from them identify the strong features corresponding to the degree of facial unnaturalness.
Based on the physiological structure of facial expression, Ekman et al. defined corresponding quantization rules: which muscles generate each expression, how each muscle acts to produce a specific expression, and how the muscles cooperate to produce it. These quantization rules serve as static features. Static features can include size, color, contour, and shape.
For example, static features can be extracted from the spatial-domain model of the face in each video frame, such as the sizes and symmetry of the left and right eyes or the spacing of the facial features; dynamic features are extracted from how the face changes between frames and can include the speed and direction of motion. Speed can be obtained by motion-estimation algorithms such as optical flow or block matching.
Optical flow is the two-dimensional instantaneous velocity field of pixel motion observed on the surface of a moving object; available computational methods include the gray-level differential method, region matching, energy-based methods, and phase-based methods. Taking the phase-based method as an example:
Each frame of the sequence is fed into a bank of Gabor filters for band-pass prefiltering. The output response of a Gabor filter is R(X, t) = ρ(X, t) e^{jφ(X, t)}, where X = (x₁, x₂) is the position of a pixel in the image plane and φ(X, t) is the output phase. A point X on an equal-phase contour satisfies φ(X, t) = c, with c constant. Differentiating both sides with respect to t gives
φ_X · V + φ_t = 0,
where V is the velocity of the pixel and φ_X = (φ_x, φ_y) is the phase gradient. The velocity along the normalized phase-gradient direction is V_n = α n, where n = φ_X / |φ_X| is the normalized direction. Solving the two relations together gives α = −φ_t / |φ_X|.
Alternatively, a deep network can learn to extract high-level features automatically; or features derived from prior knowledge and features learned by deep learning can be combined as the training input, with the labels as supervision, and processed by a convolutional neural network structure to produce a trained classifier.
The labels can be set, for example, to: natural, somewhat unnatural, very unnatural, extremely unnatural.
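For illustration, here is a minimal stand-in for a grader over those four labels — a nearest-centroid classifier on feature vectors rather than the patent's convolutional network. The class name, label strings, and feature layout are the editor's assumptions:

```python
import numpy as np

LABELS = ["natural", "somewhat unnatural", "very unnatural", "extremely unnatural"]

class NearestCentroidGrader:
    """Assign each feature vector the label index of the closest class centroid."""

    def fit(self, X, y):
        X, y = np.asarray(X, float), np.asarray(y)
        # One centroid per label, averaged over that label's training samples.
        self.centroids_ = np.array(
            [X[y == k].mean(axis=0) for k in range(len(LABELS))])
        return self

    def predict(self, X):
        d = np.linalg.norm(
            np.asarray(X, float)[:, None, :] - self.centroids_[None], axis=2)
        return d.argmin(axis=1)
```

A real system would replace this with the trained network described in the text, but the fit/predict interface is the same.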
A further advantage of the above scheme, taking deep learning as an example, is that a large number of facial photographs can be fed as samples into a convolutional neural network, which is trained to learn strong data-driven features. Alternatively, based on prior knowledge, the Active Shape Model algorithm (ASM) can extract the main facial feature points: eye corners, eye centers, eyebrows, nose, cheekbones, mouth corners, chin contour, and so on. The face is then divided into subregions; the extracted feature points determine the positions of the facial organs and regional muscles; sample windows of suitable pixel size are chosen according to the area of each organ and of the facial muscles in each region, and sampling blocks are extracted for every subregion. A facial-region membership vector is computed by comparing each subregion against the average face and taking the difference. Several professionals assist in supervising the training with overall ratings of facial unnaturalness; the features of facial unnaturalness are extracted automatically from big data, yielding a well-trained classifier: the classifier-model neural network.
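The subregion-versus-average-face comparison above works on landmark geometry. A closely related quantity, left-right asymmetry from mirrored landmark pairs, can be sketched as follows; the pairing of landmarks and the midline coordinate are assumed inputs, not specified by the patent:

```python
import numpy as np

def lr_asymmetry(landmark_pairs, midline_x):
    """Mean residual distance after reflecting each left-side landmark
    across the vertical facial midline onto its right-side counterpart;
    0 means a perfectly symmetric landmark configuration."""
    errs = []
    for (xl, yl), (xr, yr) in landmark_pairs:
        mx = 2.0 * midline_x - xl  # reflect the left point across the midline
        errs.append(np.hypot(mx - xr, yl - yr))
    return float(np.mean(errs))
```

Applied to ASM landmarks such as the two eye corners or the two mouth corners, the returned value grows with postoperative asymmetry and can feed directly into the index set of step D1.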
Preferably, step D may comprise the following steps:
Step D1: for newly arrived facial data to be processed, determine the feature set using the prior-knowledge feature-extraction method or the strong-feature indices obtained by deep learning.
Step D2: feed the feature set into the classifier and output the degree of facial unnaturalness.
The advantage of this scheme is that the quantization indices of facial unnaturalness are extracted automatically: machine learning is applied to the preprocessed static and dynamic facial data to extract the characteristic parameters of facial unnaturalness automatically, while the unnatural facial regions are localized.
Preferably, the specific indices in step D1 include: facial muscle contraction speed, diversity of facial muscle movement directions, linkage of facial expressions, local left-right asymmetry of the face, and local abnormal twitching. Each index is weighted in a comprehensive analysis, and the deviation and transfer function relative to the machine-learning model's average face are computed, yielding a quantitative assessment result.
Preferably, the rating scale includes: extracting facial expression parameters from the preprocessed static image data.
A further advantage is that the time-varying contraction of the facial muscles can be extracted from the preprocessed dynamic image data. A facial sculpture is graded 0 and a comedian with rich facial expression is graded 100; each extracted index of facial unnaturalness is assigned a weight, producing a graded naturalness score for the subject's facial expression.
Preferably, the extracted facial expression parameters include at least one of: facial muscle contraction speed, obtainable by motion estimation between successive frames; diversity of facial muscle movement directions; linkage of facial expressions; left-right asymmetry of the face; and the time-varying contraction of the facial muscles extracted from the preprocessed dynamic image data.
A further advantage is that the extracted facial expression parameters include: facial muscle contraction speed, computed by motion estimation between successive frames, e.g. block matching or optical flow; diversity of facial muscle movement directions, where the detected feature-point coordinate vectors are converted into descriptions of the corresponding mimetic-muscle movements, used as the input of the measurement system, classified after classifier training, and the measurement result computed from the classification; linkage of facial expressions, where expression analysis based on AU coding uses joint Haar features, on a Haar-feature basis, to capture locally linked facial variation; left-right asymmetry of the face, where normalized facial data are built by geometric and gray-level preprocessing and the similarity of the left and right halves is compared; and the time-varying contraction of the facial muscles extracted from the preprocessed dynamic image data. A facial sculpture is graded 0 and a comedian with rich facial expression is graded 100; each extracted index of facial unnaturalness is assigned a weight, producing a graded naturalness score for the subject's facial expression.
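The 0-to-100 weighting described above reduces to a normalized weighted sum; the sketch below assumes per-index expressiveness values already scaled to [0, 1], a convention the patent does not spell out (it leaves the weighting to the trained network):

```python
import numpy as np

def naturalness_grade(indices, weights):
    """Weighted naturalness grade on the sculpture(0)-to-comedian(100) scale.
    indices: per-criterion expressiveness in [0, 1] (0 = sculpture-still,
    1 = richly expressive reference). weights are normalized to sum to 1."""
    w = np.asarray(weights, float)
    x = np.asarray(indices, float)
    return float(100.0 * np.dot(w / w.sum(), x))
```

A sculpture-like input (all zeros) scores 0 and a fully expressive input (all ones) scores 100, matching the anchors given in the text.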
The beneficial effects of the invention are: the facial condition of a subject is monitored and assessed in real time, and the degree of facial unnaturalness is evaluated objectively; compared with existing subjective judgment, this saves time and labor and is free from human bias, and the unnatural facial regions can be localized. Based on objectively quantified indices, the subject can be provided with forensic evidence in disputes over cosmetic or plastic surgery, with a convenient and objective means of facial self-assessment in daily life, or with non-physiological (expression-based) evidence for lie-detection examinations.
Description of the drawings
Fig. 1: facial feature point detection results.
Fig. 2: block diagram of the method and apparatus for automatic grading of facial unnaturalness.
Fig. 3: static-data acquisition and processing module.
Fig. 4: dynamic-data acquisition and processing module.
Fig. 5: flow chart of facial unnaturalness estimation.
Specific embodiment
The preferred embodiments of the present invention are described in further detail below with reference to the drawings:
The present invention is an integrated apparatus that assesses the degree of facial unnaturalness in real time by acquiring static and dynamic facial data, providing a personalized evaluation scheme of facial unnaturalness for the subject: forensic evidence in disputes over cosmetic or plastic surgery, a reliable means of facial self-assessment in daily life, or non-physiological (expression-based) evidence for lie-detection examinations.
The present invention is described in further detail using the automatic grading of postoperative facial unnaturalness after cosmetic facial surgery as an embodiment. In this embodiment, "degree of facial unnaturalness" refers to the degree of abnormality reflected in facial muscle contraction activity, such as facial asymmetry and stiffness. The embodiment uses machine learning to classify the data: experts first analyze and score the sample data, and the classifier is then trained with the samples and labels as input. Specifically, the data can be supplied to a model using multiple classifiers, or to a deep-learning model trained on multiple training or test data sets. In an instance, the degree of match between the data and the classifier can generate a confidence value associated with the classification. In the embodiment, the acquired data are not limited to the image/video information to be classified and can also include additional data, not easily mined by humans, that aid accurate classification. In an instance, the database can be updated continuously.
The structural diagram of the integrated apparatus of the present invention is shown in Fig. 2; the details are as follows:
Step 1: data acquisition module
In this embodiment, the module is a camera-based data acquisition device comprising facial static-data acquisition and facial dynamic-data acquisition, as shown in Fig. 3. For dynamic data acquisition: 1) the subject performs prescribed actions on request, e.g. crying, laughing, anger, and the recording is cut into 10-second video segments according to the action sequence, each segment serving as one sample; 2) the process of the subject switching expressions is recorded, e.g. from happiness to sadness or from anger to happiness, and cut into 20-second video segments, each serving as one sample.
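Cutting the recording into fixed-length samples, as described, reduces to frame-index arithmetic. A sketch with frame rate and clip length as parameters; the patent does not say how a trailing partial clip is handled, so this version simply drops it:

```python
def cut_clips(n_frames, fps, clip_seconds):
    """Return (start, end) frame ranges of consecutive clips of
    clip_seconds each; any trailing partial clip is discarded."""
    clip_len = int(fps * clip_seconds)
    return [(s, s + clip_len)
            for s in range(0, n_frames - clip_len + 1, clip_len)]
```

At 30 fps, a 95-second recording of prescribed actions yields nine complete 10-second samples.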
Step 2: data preprocessing module
In this embodiment, this module preprocesses the static and dynamic data collected synchronously and independently by the above module, to facilitate subsequent processing. As shown in Fig. 4, the module consists of two submodules: facial static-data preprocessing and facial dynamic-data preprocessing. The acquired image data are enhanced, denoised, and otherwise processed for later use.
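A minimal NumPy-only sketch of the enhancement and denoising step — contrast stretching followed by a 3x3 mean filter. The patent does not fix the particular operators, so these are illustrative choices:

```python
import numpy as np

def preprocess(img):
    """Contrast-stretch a grayscale image to [0, 1], then denoise it
    with a 3x3 mean filter (edge pixels replicated at the border)."""
    img = img.astype(float)
    span = img.max() - img.min()
    img = (img - img.min()) / span if span > 0 else np.zeros_like(img)
    padded = np.pad(img, 1, mode="edge")
    out = np.zeros_like(img)
    h, w = img.shape
    # Sum the nine shifted copies of the padded image, then average.
    for dy in (0, 1, 2):
        for dx in (0, 1, 2):
            out += padded[dy:dy + h, dx:dx + w]
    return out / 9.0
```

In practice a production system would likely use a library routine (e.g. Gaussian or median filtering), but the effect on the data — normalized intensity and suppressed pixel noise — is the same.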
Step 3: training the classifier
In this embodiment, this module mainly uses a large number of facial photographs as samples, combined with calibration results from professionals, for training. The static features (within individual frames) and dynamic features (between frames) of facial unnaturalness can be extracted to find the strong features corresponding to the degree of facial unnaturalness; a deep network can also learn to extract high-level features automatically; or features from prior knowledge can be combined with features learned by deep learning as the training input, yielding a well-trained classifier whose output is the set of all data representing the system's classification.
Step 4: automatic grading of facial unnaturalness
In this embodiment, this module mainly compares the action differences between the preprocessed data and the average face to judge the unnaturalness brought about by partial facial muscle contraction. The trained classifier extracts quantization indices from the preprocessed data, specifically: facial muscle contraction speed, diversity of facial muscle movement directions, linkage of facial expressions, local left-right asymmetry of the face, and local abnormal twitching. Each index is weighted in a comprehensive analysis, and the deviation and transfer function relative to the machine-learning model's average face are computed; the result serves as the quantitative assessment of facial unnaturalness.
Rating scale: facial expression parameters are extracted from the preprocessed static and dynamic image data (e.g. facial muscle contraction speed, by motion estimation between successive frames such as block matching or optical flow; diversity of facial muscle movement directions, where the detected feature-point coordinate vectors are converted into descriptions of the corresponding mimetic-muscle movements, used as measurement-system input, classified after classifier training, and the measurement result computed from the classification; linkage of facial expressions, where AU-based expression analysis uses joint Haar features to capture locally linked facial variation; left-right asymmetry of the face, where normalized facial data are built by geometric and gray-level preprocessing and the similarity of the left and right halves is compared); the time-varying contraction of the facial muscles can be extracted from the preprocessed dynamic image data.
A facial sculpture is graded 0 and a comedian with rich facial expression is graded 100. The House-Brackmann score can serve as the grading basis, with the grade obtained by weighting the unnaturalness of each facial region; the weighting coefficients can be determined after normalizing the feature weights in the trained network, with high weights for the facial positions, textures, and actions corresponding to strong features. Alternatively, classification results obtained with different feature choices can be linearly regressed against the expert ratings and ranked by R-squared to adjust the weights, the feature with the highest R-squared (e.g. the mouth) receiving the highest weight. This weighting scheme is adjusted dynamically as the data volume grows.
The flow of the specific implementation of this embodiment in the present invention is shown in Fig. 5; the detailed steps are as follows:
Step 1: the subject undergoing facial unnaturalness assessment sits still in front of the facial data acquisition instrument (e.g. a camera).
Step 2: once all preparations are complete, data acquisition begins.
Step 3: the acquired data are preprocessed, including normalization and removal of background illumination differences.
Step 4: machine learning is applied; taking deep learning as an example, a large number of facial photographs can be fed as samples into a convolutional neural network for training. Facial feature points are extracted first, e.g. with the Active Shape Model algorithm (ASM), covering the main facial feature points: eye corners, eye centers, eyebrows, nose, cheekbones, mouth corners, chin contour, and so on. The face is then divided into subregions; the extracted feature points determine the positions of the facial organs and regional muscles; sample windows of suitable pixel size are chosen according to the area of each organ and of the facial muscles in each region, and sampling blocks are extracted for every subregion. A facial-region membership vector is computed by comparing each subregion against the average face and taking the difference. Several professionals perform fully supervised training with overall ratings of facial unnaturalness; the features of facial unnaturalness are extracted automatically from big data, yielding a well-trained classifier: the classifier-model neural network.
Step 5: the degree of unnaturalness of the subject's face is graded with the trained classifier.
Step 6: the grading result can be printed on request.
Points of protection
1. In the present invention, machine learning is used to process and analyze facial data, achieving quantitative assessment of the degree of facial unnaturalness with accurate localization; this falls within the protection scope of this patent.
2. Based on the embodiments in the present invention, all other embodiments obtained by persons skilled in the relevant field without creative labor fall within the protection scope of this patent.
The above is a further detailed description of the present invention with reference to specific preferred embodiments, and the specific implementation of the present invention cannot be considered limited to these descriptions. For persons of ordinary skill in the art to which the present invention belongs, several simple deductions or substitutions may be made without departing from the concept of the invention, and all should be regarded as falling within the protection scope of the present invention.
Claims (3)
1. A method of using a real-time apparatus for assessing the degree of facial unnaturalness, characterized in that the real-time apparatus comprises, arranged in sequence: a data acquisition module, a data preprocessing module, a classifier-training module, and a facial-unnaturalness automatic grading module; the data acquisition module comprises a facial static-data acquisition unit and a facial dynamic-data acquisition unit; the data preprocessing module consists of a facial static-data preprocessing unit and a facial dynamic-data preprocessing unit; and the method comprises the following steps:
Step A: acquire data from the subject's face;
Step B: preprocess the acquired data;
Step C: train the facial-unnaturalness assessment classifier by machine learning;
Step D: grade the degree of unnaturalness of the subject's face with the trained classifier, automatically extracting the quantization indices of facial unnaturalness while localizing the unnatural facial regions;
wherein step D comprises the following steps:
Step D1: the trained classifier extracts quantization indices from the preprocessed data; each index is weighted in a comprehensive analysis, and the deviation and transfer function relative to the machine-learning model's average face are computed;
Step D2: the result serves as the quantitative assessment of the degree of facial unnaturalness;
and the indices in step D1 are: facial muscle contraction speed, diversity of facial muscle movement directions, linkage of facial expressions, local left-right asymmetry of the face, and local abnormal twitching; the rating scale is: a facial sculpture is graded 0, a comedian with rich facial expression is graded 100, each extracted index of facial unnaturalness is assigned a weight, and a graded naturalness score is produced for the subject's facial expression.
2. The method according to claim 1, characterized in that in step B the acquired image data are enhanced and denoised for later use.
3. The method of using a real-time apparatus for assessing the degree of facial unnaturalness according to claim 1, characterized in that step C comprises the following steps:
Step C1: train on a large number of facial photographs used as samples;
Step C2: extract the static and dynamic features of facial unnaturalness, and from them identify the strong features corresponding to the degree of facial unnaturalness.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710161341.0A CN107007257B (en) | 2017-03-17 | 2017-03-17 | The automatic measure grading method and apparatus of the unnatural degree of face |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107007257A CN107007257A (en) | 2017-08-04 |
CN107007257B true CN107007257B (en) | 2018-06-01 |
Family
ID=59439576
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710161341.0A Active CN107007257B (en) | 2017-03-17 | 2017-03-17 | The automatic measure grading method and apparatus of the unnatural degree of face |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107007257B (en) |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107633207B (en) * | 2017-08-17 | 2018-10-12 | 平安科技(深圳)有限公司 | AU feature recognition method, device and storage medium |
CN107704919B (en) * | 2017-09-30 | 2021-12-07 | Oppo广东移动通信有限公司 | Control method and device of mobile terminal, storage medium and mobile terminal |
CN107704834B (en) * | 2017-10-13 | 2021-03-30 | 深圳壹账通智能科技有限公司 | Micro-expression interview assisting method, device and storage medium |
CN108038413A (en) * | 2017-11-02 | 2018-05-15 | 平安科技(深圳)有限公司 | Fraud probability analysis method, apparatus and storage medium |
CN108446593B (en) * | 2018-02-08 | 2021-03-09 | 北京捷通华声科技股份有限公司 | Facial spasm detection method and device |
CN108416331B (en) * | 2018-03-30 | 2019-08-09 | 百度在线网络技术(北京)有限公司 | Face symmetry recognition method, apparatus, storage medium and terminal device |
CN110084259B (en) * | 2019-01-10 | 2022-09-20 | 谢飞 | Facial paralysis grading comprehensive evaluation system combining facial texture and optical flow characteristics |
TWI756681B (en) * | 2019-05-09 | 2022-03-01 | 李至偉 | Artificial intelligence assisted evaluation method applied to aesthetic medicine and system using the same |
CN110516626A (en) * | 2019-08-29 | 2019-11-29 | 上海交通大学 | Facial symmetry assessment method based on face recognition technology |
CN110889332A (en) * | 2019-10-30 | 2020-03-17 | 中国科学院自动化研究所南京人工智能芯片创新研究院 | Lie detection method based on micro-expressions in interviews |
CN111062936B (en) * | 2019-12-27 | 2023-11-03 | 中国科学院上海营养与健康研究所 | Quantitative index evaluation method for facial deformation diagnosis and treatment effect |
CN111986801A (en) * | 2020-07-14 | 2020-11-24 | 珠海中科先进技术研究院有限公司 | Rehabilitation evaluation method, device and medium based on deep learning |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2005043453A1 (en) * | 2003-10-23 | 2005-05-12 | Northrop Grumman Corporation | Robust and low cost optical system for sensing stress, emotion and deception in human subjects |
CN104008391A (en) * | 2014-04-30 | 2014-08-27 | 首都医科大学 | Face micro-expression capturing and recognizing method based on nonlinear dimension reduction |
CN104679967A (en) * | 2013-11-27 | 2015-06-03 | 广州华久信息科技有限公司 | Method for judging reliability of psychological test |
CN105160318A (en) * | 2015-08-31 | 2015-12-16 | 北京旷视科技有限公司 | Facial expression based lie detection method and system |
CN105205479A (en) * | 2015-10-28 | 2015-12-30 | 小米科技有限责任公司 | Human face value evaluation method, device and terminal device |
CN106295568A (en) * | 2016-08-11 | 2017-01-04 | 上海电力学院 | Human naturalness emotion recognition method based on bimodal combination of expression and behavior |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040143170A1 (en) * | 2002-12-20 | 2004-07-22 | Durousseau Donald R. | Intelligent deception verification system |
US9026678B2 (en) * | 2011-11-30 | 2015-05-05 | Elwha Llc | Detection of deceptive indicia masking in a communications interaction |
2017-03-17: application CN201710161341.0A filed in China; granted as patent CN107007257B (status: Active)
Also Published As
Publication number | Publication date |
---|---|
CN107007257A (en) | 2017-08-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107007257B (en) | The automatic measure grading method and apparatus of the unnatural degree of face | |
CN110507335B (en) | Multimodal-information-based criminal psychological health state assessment method and system | |
CN105022929B (en) | Cognitive accuracy analysis method for personality trait value tests | |
CN106682616A (en) | Newborn pain expression recognition method based on dual-channel feature deep learning | |
CN112472048B (en) | Method for realizing a neural network that identifies the pulse condition of cardiovascular disease patients | |
CN105559802A (en) | Depression diagnosis system and method based on fusion of attention and emotion information | |
CN110428908B (en) | Eyelid motion function evaluation system based on artificial intelligence | |
CN109447962A (en) | Fundus-image hard exudate lesion detection method based on convolutional neural networks | |
CN109805944B (en) | Children's empathy ability analysis system | |
CN110047591B (en) | Method for evaluating a doctor's posture during a surgical operation | |
CN108256453A (en) | Method for extracting two-dimensional CNN features from one-dimensional ECG signals | |
CN110309813A (en) | Model training method, detection method, device, mobile terminal and server for deep-learning-based human eye state detection | |
CN107844780A (en) | Human health characteristic big data intelligent computing method and device fusing ZED vision | |
CN111466878A (en) | Real-time monitoring method and device for pain symptoms of bedridden patients based on expression recognition | |
CN107967941A (en) | Unmanned aerial vehicle health monitoring method and system based on intelligent visual reconstruction | |
CN109978873A (en) | Intelligent physical examination system and method based on traditional Chinese medicine image big data | |
CN109344763A (en) | Strabismus detection method based on convolutional neural networks | |
CN111403026A (en) | Facial paralysis grade assessment method | |
CN111461218A (en) | Sample data labeling system for diabetic fundus images | |
CN110025312A (en) | Herpes zoster neuralgia curative effect prediction method and system based on structural magnetic resonance | |
Zhang et al. | Real-time activity and fall risk detection for aging population using deep learning | |
CN116343302A (en) | Micro-expression classification and recognition system based on machine vision | |
CN110427987A (en) | Plantar pressure feature recognition method and system for arthritis patients | |
CN114240934B (en) | Image data analysis method and system based on acromegaly | |
CN115154828A (en) | Brain function remodeling method, system and equipment based on brain-computer interface technology | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||