CN106250819A - Method for detecting facial symmetry and abnormality based on real-time facial monitoring - Google Patents
Method for detecting facial symmetry and abnormality based on real-time facial monitoring
- Publication number
- CN106250819A CN106250819A CN201610574228.0A CN201610574228A CN106250819A CN 106250819 A CN106250819 A CN 106250819A CN 201610574228 A CN201610574228 A CN 201610574228A CN 106250819 A CN106250819 A CN 106250819A
- Authority
- CN
- China
- Prior art keywords
- face
- depth
- real-time
- user
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Geometry (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a method for detecting facial symmetry and abnormality based on real-time facial monitoring, comprising a training-stage step: building a deep convolutional neural network model, collecting image data of the main facial regions of the user, and determining an optimal classification strategy from the image data supplied by the user; and a test-stage step: monitoring the user's facial activity in real time, collecting image data of the main facial regions, processing the images with the deep convolutional neural network model, and extracting deep convolutional features; solving a binary classification problem on those features, recognizing the state of each main facial region, and then performing real-time symmetry detection and abnormality detection on that region; recording the user's eye-state time series and adjusting the sampling frequency of the monitoring camera in real time. Under different illumination conditions and for different users, the invention detects facial symmetry and abnormality accurately in real time, with high stability and generality.
Description
Technical field
The present invention relates to the technical field of image recognition, and in particular to a method for detecting facial symmetry and abnormality based on real-time facial monitoring.
Background art
In recent years, face and facial-expression recognition have attracted the attention of more and more researchers. Face detection refers to the process of finding all face information in a video sequence or image, determining the size, position, trajectory and pose of each face, and further extracting features such as the eyes and lips. Face detection and recognition is a classic problem in the field of pattern recognition. It draws on knowledge and techniques from image processing, pattern recognition and physiology, and its research originates in the work of the 19th-century scholar Galton. With the development of computer science, face detection has increasingly become a research focus since the 1990s. In recent years, face detection and recognition has become one of the most promising means of identity authentication, and its practical value in e-commerce, medicine and tracking grows more prominent by the day.
Facial asymmetry has a relatively high incidence in the normal population, roughly 21%~85%. Mild asymmetry may go unnoticed, and can even occur in otherwise attractive faces or normal craniofacial complexes. Facial asymmetry has many causes, generally divided into three categories: congenital disease, developmental problems, and acquired injury or disease. Among the acquired causes, facial asymmetry may result from sudden growths, facial muscle atrophy, stiffness of the condyle or temporomandibular joint, facial trauma, or facial tumors. In addition, craniofacial asymmetry may also signal stroke or facial paralysis. Stroke patients in the early stage often present facial asymmetry, facial numbness, dysarthria, inability to smile, and similar symptoms; at that point the patient should seek medical help as soon as possible, to avoid permanent, irreparable damage to the body. A face-detection algorithm targeted at craniofacial asymmetry is therefore of great significance for the early detection of certain diseases.
At present there are many face-detection algorithms, generally divided into the following four classes:
1) Knowledge-based algorithms, which encode knowledge of typical faces as prior rules. In general, the priors capture the mutual relations between facial features; this class is mainly applied to face localization.
2) Template-matching algorithms, which first store several standard templates describing part or all of a face, then detect by computing the correlation between the input image and the stored templates.
3) Feature-based algorithms, mainly used for face localization. The goal is to find structural features that remain invariant under changes of illumination, pose and viewpoint, and then use those features to locate the face.
4) Appearance-based algorithms. Unlike template matching, here the templates are learned from a set of training images that should cover representative variations in facial appearance; this class is mainly used for face detection.
To the best of our current knowledge, academia has not yet produced a face-detection algorithm designed specifically for craniofacial asymmetry.
A wearable smart device is a portable electronic device worn directly on the body, or integrated into the user's clothing or accessories. With the unveiling of Google Glass in 2012, that year was dubbed "year one of wearable smart devices". As a new focal point of the smart-terminal industry, wearable devices have been widely accepted by the market. Companies of all kinds are rushing into wearable-device development, striving for a seat in the new round of technological revolution. Product forms of wearable smart devices include the Watch class supported on the wrist (watches, wristbands and the like), the Shoes class supported on the feet (shoes, socks and, in the future, other products worn on the legs), the Glasses class supported on the head and face (glasses, helmets, headbands, etc.), as well as smart clothing, schoolbags, crutches, accessories and other non-mainstream product forms.
As people pay ever more attention to their own health, the market for wearable portable medical equipment will keep expanding, and face-detection algorithms based on wearable devices will naturally receive great attention. These devices will become the vanguard of future medical systems.
Given that scientifically sound and effective algorithms for facial symmetry and abnormality detection are currently scarce, the present invention uses computer-vision algorithms based on deep convolutional neural networks to greatly improve the accuracy, stability and generality of facial symmetry and abnormality detection. Moreover, the algorithm presented here can be ported efficiently to the various wearable devices on the market, and therefore has broad application prospects.
Summary of the invention
In view of the defects of the prior art, the object of the present invention is to provide a method for detecting facial symmetry and abnormality based on real-time facial monitoring.
The method for detecting facial symmetry and abnormality based on real-time facial monitoring provided by the present invention comprises the following steps:
Training-stage step: build a deep convolutional neural network model, optimize the weight parameters of the network layers in the model according to the collected image data of the user's main facial regions, and determine an optimal classification strategy from the image data supplied by the user;
Test-stage step: monitor the user's facial activity in real time, collect image data of the main facial regions, process the images with the deep convolutional neural network model, and extract deep convolutional features; solve a binary classification problem on the deep convolutional features, recognize the state of each main facial region, then perform real-time symmetry detection and abnormality detection on that region; record the user's eye-state time series and adjust the sampling frequency of the monitoring camera in real time.
Preferably, the training-stage step comprises:
Step A: collect image data of the main facial regions of different users under different illumination conditions; the main regions include the eyes, the nasolabial folds and the mouth;
Step B: build the deep convolutional neural network model, train on the collected image data, and adjust and optimize the weight parameters of each network layer;
Step C: generate the corresponding probability vectors from the acquired image and illumination data, and determine the optimal classification strategy from the image data supplied by the user.
Preferably, the test-stage step comprises:
Step 1: monitor the user's facial activity in real time through a wearable front-facing monitoring camera, and transmit the image data of the user's main facial regions to a server; the main regions include the eyes, the nasolabial folds and the mouth;
Step 2: build the convolutional neural network model and extract the deep convolutional features of the image data of the user's main facial regions;
Step 3: from the extracted deep convolutional features, formulate and solve a binary classification problem, recognize the state of each main facial region, and obtain state-recognition results;
Step 4: smooth the state-recognition results and discard outlier data points, then perform real-time symmetry detection and abnormality detection on each main facial region, obtaining the detection results;
Step 5: output the detection results, record the user's eye-state time series, and adjust the sampling frequency of the wearable front-facing monitoring camera in real time according to that series.
Preferably, step 2 comprises: using deep learning and deep convolutional neural network methods, build the convolutional neural network model on the server, feed the real-time monitoring image data of the wearable front-facing camera into the deep convolutional network model, and extract the deep convolutional features of the eye, nasolabial-fold and mouth images.
Preferably, step 3 comprises: from the extracted deep convolutional features of the user's main facial regions, formulate and solve a binary classification problem that decides whether the region under test is asymmetric; recognize the binary symmetric/asymmetric states of the eyes, nasolabial folds and mouth; and form the state time series.
Preferably, step 4 comprises: from the state time series, or the instantaneous states, of the eye, nasolabial-fold and mouth images, smooth the data and discard outliers, measure the similarity between the left and right halves of the face to judge facial symmetry, and from the state time series and the symmetry results detect abnormal asymmetry in the user's face.
Preferably, step 5 comprises: from the eye-state time series, compute the user's blink frequency within a set unit of time, and dynamically adjust the image-sampling frequency of the front-facing monitoring camera so that the sampling frequency stays consistent with the blink frequency.
Preferably, the deep convolutional neural network model can adapt in real time to the image data of different users and different illumination conditions delivered by the front-facing camera, and includes a daytime mode and a night mode.
Compared with the prior art, the present invention has the following beneficial effects:
1. Using deep convolutional neural networks, the present invention achieves efficient and accurate real-time detection of facial symmetry and abnormality under different illumination conditions and for different users, with high stability and generality.
2. The present invention adjusts the camera sampling frequency in real time according to the eye-state time series, saving computation and energy cost, and can be migrated to all kinds of wearable devices simply and efficiently.
Brief description of the drawings
Other features, objects and advantages of the present invention will become more apparent by reading the detailed description of non-limiting embodiments with reference to the following drawings:
Fig. 1 is the flow chart of the method for detecting facial symmetry and abnormality based on real-time facial monitoring provided by the present invention;
Fig. 2 shows the basic structure of the deep convolutional neural network model of the present invention.
Detailed description of the invention
The present invention is described in detail below in conjunction with specific embodiments. The following embodiments will help those skilled in the art to further understand the present invention, but do not limit it in any form. It should be pointed out that, for those of ordinary skill in the art, several changes and improvements can also be made without departing from the inventive concept; these all fall within the protection scope of the present invention.
The method for detecting facial symmetry and abnormality based on real-time facial monitoring provided by the present invention includes a training stage and a test stage, wherein
the training stage comprises:
Step A: the server collects image data of the main facial regions of different users under different illumination conditions;
Step B: the server builds the deep convolutional neural network model, trains on the collected image data, and adjusts and optimizes the weight parameters of each network layer;
Step C: the server generates the corresponding probability vectors from the acquired image and illumination data, and determines the optimal classification strategy from the image data supplied by the user;
the test stage comprises:
Step 1: the wearable front-facing monitoring camera monitors the user's facial activity in real time and transmits the facial image data to the server;
Step 2: using deep learning and deep convolutional neural network methods, the server builds the convolutional neural network model, processes the eye, nasolabial-fold and mouth images, and extracts the deep convolutional features;
Step 3: from the extracted deep convolutional features, the server formulates and solves a binary classification problem and recognizes the state of each of the three facial regions;
Step 4: the server smooths the state-recognition results, discards outlier data points, and performs real-time symmetry detection and abnormality detection on each of the three facial regions;
Step 5: the server returns the required result data to the user; while detecting facial symmetry and abnormality, the server records the user's eye-state time series and adjusts the sampling frequency of the monitoring camera in real time according to that series.
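The steps above can be sketched as a minimal processing loop. The stub frames, the `extract_features` pass-through and the constant `classify` function are purely illustrative stand-ins for the camera feed and the trained deep network; only the control flow mirrors the method described.

```python
def run_test_phase(frames, extract_features, classify):
    """Sketch of steps 1-5: per-frame region patches are turned into deep
    features, classified into states, and the eye states collected as the
    time series used later for sampling-frequency adjustment."""
    history = []
    for frame in frames:
        states = {region: classify(extract_features(patch))
                  for region, patch in frame.items()}
        history.append(states)
    eye_series = [h["eye"] for h in history]
    return eye_series

# Hypothetical three-frame feed containing only an eye patch per frame.
frames = [{"eye": "e1"}, {"eye": "e2"}, {"eye": "e3"}]
series = run_test_phase(frames, extract_features=lambda p: p,
                        classify=lambda f: "open")
print(series)  # ['open', 'open', 'open']
```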
The method of the present invention is described concretely through an embodiment.
Embodiment 1:
In an ordinary indoor environment with illumination of roughly 10lx~100lx, a 300,000-pixel miniature USB camera module is mounted by the wearable device in front of the user's face as the front-facing monitoring camera. The camera works at 640*480 pixels with a maximum frame rate of 30fps, carries an infrared lamp module, and can switch modes according to a control signal. The camera is connected to the server through a USB interface. The server is built on a NanoPi2 platform whose core is an ARM9 chip with a CPU frequency of 1.4GHz and 1GB of memory; it runs a Linux system, booting from a 32GB TF card on which the system is installed. The NanoPi2 platform serves as a portable miniature server and can communicate externally through USB, a wireless network card, or Bluetooth.
The concrete steps of this embodiment include a training stage and a test stage. The training stage proceeds as follows:
1) The server collects image data of the main facial regions of different users under different illumination conditions;
The front-facing monitoring camera collects facial image data of several users under different illumination conditions for training. According to the scene, facial image data are collected under six illumination conditions in total, with illumination intensities of: below 10lx, 10lx, 20lx, 50lx, 100lx, and above 1500lx, corresponding respectively to six common scenes: night, a shaded indoor spot, indoors on a cloudy day, indoors under normal lighting, indoors on a sunny day, and outdoors. Image data under the below-10lx and 10lx conditions are collected with the infrared camera; the remaining cases use the ordinary daytime camera. Each group of facial image data is divided into six parts according to eye open/closed state, presence of the nasolabial folds, and mouth open/closed state. In total, 26,000 groups of daytime facial images and 7,750 groups of night facial images were collected.
2) The server builds the deep convolutional neural network model and trains it;
The server trains two deep convolutional neural network models, one on the daytime data and one on the night data.
The basic structure of the deep convolutional neural network used is shown in Fig. 2: three convolutional layers, three pooling layers, and one fully connected layer. The input image is a 40*40 eye, nasolabial-fold or mouth image. Each convolutional layer performs a linear convolution on the feature image input from the previous layer and applies a specific nonlinear operator to extract convolutional features, forming feature maps that serve as the input of the next layer. The feature-extraction process can be expressed as:
F_k = σ(W_k × x_ij + b_k)
where F_k is the output feature map of layer k, σ is the specific nonlinear operator, W_k is the weight matrix of convolutional layer k, x_ij is the value at row i, column j of the input feature map of layer k−1, and b_k is the bias of layer k.
A pooling layer after each convolutional layer reduces the dimensionality of the output feature maps, lowering computational complexity. At the same time, a ReLU activation function between each pooling layer and its convolutional layer ensures the sparsity of the feature maps:
g(x)_ij = max(0, x_ij)
where x_ij is the value at row i, column j of the input feature map, and g(x)_ij is the value at row i, column j of the feature map after the activation function.
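As an illustration, a single convolution, ReLU and pooling pass over a 40*40 patch can be sketched with NumPy. The 5×5 kernel, the single input channel and the random weights are assumptions made for brevity; the exact layer shapes of Fig. 2 are not specified in the text.

```python
import numpy as np

def conv2d_valid(x, w, b):
    # Naive valid convolution: x is (H, W), w is (kH, kW), b is a scalar bias.
    kh, kw = w.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * w) + b
    return out

def relu(x):
    # g(x)_ij = max(0, x_ij), applied element-wise
    return np.maximum(0.0, x)

def max_pool2(x):
    # 2x2 max pooling for dimensionality reduction (H and W assumed even)
    H, W = x.shape
    return x.reshape(H // 2, 2, W // 2, 2).max(axis=(1, 3))

rng = np.random.default_rng(0)
patch = rng.standard_normal((40, 40))       # stand-in for a 40*40 eye patch
kernel = rng.standard_normal((5, 5)) * 0.1  # illustrative 5x5 kernel
fmap = max_pool2(relu(conv2d_valid(patch, kernel, 0.0)))
print(fmap.shape)  # (18, 18): valid conv gives 36x36, pooled down to 18x18
```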
During training of the deep convolutional neural network, the parameters of each layer are updated by stochastic gradient descent. First the loss function J(θ) is computed for the training set from each iteration's results, θ being the parameters to be updated. The loss function J(θ) is computed as:
J(θ) = −(1/m) Σ_{i=1..m} [ y^(i) log h_θ(x^(i)) + (1 − y^(i)) log(1 − h_θ(x^(i))) ]
where m is the number of records in the training set, y is the class label, and h_θ(x) is the function to be fitted, computed as:
h_θ(x) = Σ_j θ_j x_j
where x is the input data and j indexes the parameters. From the loss on each sample, the partial derivative gives the gradient with respect to θ, and θ is updated:
θ'_j = θ_j − α ∂J(θ)/∂θ_j
where θ'_j is the updated parameter and α the learning rate. By iterating this computation as above, the deep convolutional neural network can be trained.
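The stochastic gradient descent update above can be sketched for a single logistic unit, for which the per-sample gradient of the cross-entropy loss reduces to (h_θ(x) − y)·x. The toy data, learning rate and epoch count are illustrative assumptions, not the patent's training set.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def sgd_train(X, y, lr=0.5, epochs=200, seed=0):
    # Logistic model h_theta(x) = sigmoid(theta . x), cross-entropy loss J(theta).
    rng = np.random.default_rng(seed)
    theta = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):        # stochastic sample order
            h = sigmoid(X[i] @ theta)
            theta -= lr * (h - y[i]) * X[i]      # theta_j' = theta_j - lr * dJ/dtheta_j
    return theta

# Toy separable data: bias feature plus one input; label is 1 when x_2 > 0.
X = np.array([[1., -2.], [1., -1.], [1., 1.], [1., 2.]])
y = np.array([0., 0., 1., 1.])
theta = sgd_train(X, y)
preds = (sigmoid(X @ theta) > 0.5).astype(float)
print(preds)  # [0. 0. 1. 1.]
```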
3) The server computes the probability vector for the input data and trains the optimal classification strategy;
From the deep convolutional feature maps extracted by the network, a Soft-max layer is added after the fully connected layer of the deep convolutional neural network to formulate and solve the binary classification problem. Since the nonlinear operator used in the convolutional layers of the present invention is the Sigmoid function, the input function h_θ(x) is updated to:
h_θ(x) = 1 / (1 + e^(−θᵀx))
With the new h_θ(x), and T denoting the number of classes, the model parameters θ are trained to minimize the loss function:
J(θ) = −(1/m) Σ_{i=1..m} Σ_{j=1..T} 1{y^(i) = j} log p(y^(i) = j | x^(i); θ)
For a given input x, the probability with which the Soft-max layer assigns x to class j is computed as:
p(y = j | x; θ) = exp(θ_jᵀx) / Σ_{l=1..T} exp(θ_lᵀx)
From this probability function, the output of the Soft-max layer is the probability vector of the input image data; the class label of the input image is obtained from the largest entry of the probability vector, solving the binary classification problem. At the same time, the probability vector is used to further update the parameters of the deep convolutional neural network, optimizing the network and yielding the optimal classification strategy.
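The Soft-max probability vector and argmax class label can be sketched as follows. The two weight rows and the 3-dimensional feature vector are made-up values standing in for the two class weights ("symmetric" vs "asymmetric") and a deep convolutional feature.

```python
import numpy as np

def softmax_probs(x, thetas):
    # p(y = j | x) = exp(theta_j . x) / sum_l exp(theta_l . x)
    scores = thetas @ x
    scores = scores - scores.max()   # shift for numerical stability
    e = np.exp(scores)
    return e / e.sum()

# Illustrative weights for two classes over a 3-dim deep feature vector.
thetas = np.array([[1.0, 0.0, -0.5],
                   [-1.0, 0.5, 0.5]])
feature = np.array([0.2, 1.0, 0.3])
p = softmax_probs(feature, thetas)
label = int(np.argmax(p))            # class label = index of largest probability
print(label)  # 1
```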
When the user starts symmetry and abnormality detection, the system enters the test stage, whose steps are as follows:
Step 1: the monitoring camera monitors the user's facial activity in real time and uploads the image data to the server;
While monitoring facial activity, the camera also monitors the ambient illumination. When the illumination falls below 15lx, the camera switches on its infrared mode and acquires infrared image data; when the illumination exceeds 15lx, the daytime camera captures the real-time image data.
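The 15lx mode switch described above amounts to a simple threshold test; a minimal sketch (function name and return strings are illustrative):

```python
def select_capture_mode(illuminance_lx: float, threshold_lx: float = 15.0) -> str:
    # Below the threshold the camera switches on its infrared module;
    # otherwise the ordinary daytime sensor captures the frames.
    return "infrared" if illuminance_lx < threshold_lx else "daytime"

print(select_capture_mode(8.0))    # infrared
print(select_capture_mode(120.0))  # daytime
```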
Step 2: the server extracts the image features of the three facial regions with the deep convolutional neural network;
After the monitoring image data are received, they pass through a model selector: daytime images are processed by the daytime deep convolutional neural network, and night infrared images by the night deep convolutional neural network.
Step 3: the server formulates and solves the binary classification problem on the deep convolutional features and recognizes the state of each region;
Step 4: the server detects facial symmetry and abnormality from the binary classification results and generates the eye-state time series;
From the results of feeding the left and right eye, nasolabial-fold and mouth images into the deep convolutional neural network, a sliding window is used to judge each state. The sliding-window size is empirically set to 3. Taking the eyes as an example, the states of the last 3 frames of each eye are judged and the state is decided by majority vote. Concretely, if the last three frame results for the right eye are "open, closed, open", the right eye's state at that moment is "open"; the left eye's state is judged the same way. If the states of the two eyes at the same moment differ, an abnormality is flagged. Facial symmetry and abnormality are detected from the states of the eyes, nasolabial folds and mouth. Meanwhile, the server records the states of the two eyes to form the eye-state time series.
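The sliding-window vote and left/right comparison described above can be sketched directly; the state strings are illustrative, and the window size of 3 follows the empirical choice in the text.

```python
from collections import Counter

def vote_state(frames, window=3):
    # Majority vote over the most recent `window` per-frame states.
    recent = frames[-window:]
    return Counter(recent).most_common(1)[0][0]

def sides_symmetric(left_frames, right_frames, window=3):
    # A region pair is flagged asymmetric when the voted states differ.
    return vote_state(left_frames, window) == vote_state(right_frames, window)

right_eye = ["open", "closed", "open"]   # voted state: "open"
left_eye = ["open", "open", "open"]      # voted state: "open"
print(sides_symmetric(left_eye, right_eye))  # True
```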
Step 5: the server returns the detection results to the user while adjusting the image sampling frequency in real time according to the eye-state time series;
The detection results for each facial region are returned to the wearable device at the user's end for further processing. Meanwhile, the server adjusts the camera's image sampling frequency in real time according to the stored eye-state time series. Concretely, given the camera's maximum frame rate of 30fps, and given that people blink roughly every 2 to 8 seconds with the eyes closed for roughly 0.2 to 0.4 seconds per blink, upper and lower bounds are set for the image sampling frequency: the upper bound is 30Hz and the lower bound 20Hz. The sampling interval τ_s is defined as:
τ_s = B_L + (B_U − B_L) r
where B_L is the minimum sampling interval of 0.033 seconds, B_U is the maximum sampling interval of 0.05 seconds, and r is a special parameter in the range 0 to 1. In the eye-state time series, consecutive open states or consecutive closed states are regarded as a single state point, and the blink interval τ_b is defined as the time between the two open states nearest before and after a given closed state. The average blink interval over the 2 × M sampling intervals before a given moment is computed, M being a positive integer. In particular, the average blink interval over the earlier M sampling intervals is denoted τ_pre and that over the later M sampling intervals is denoted τ_post. A threshold τ_T is set, τ_pre and τ_post are compared, and three frequency events are defined by the comparison:
1) Advance event: if τ_pre − τ_post > τ_T, the blink frequency has sped up by more than the threshold;
2) Retreat event: if τ_post − τ_pre > τ_T, the blink frequency has slowed by more than the threshold;
3) Stable event: if |τ_post − τ_pre| ≤ τ_T, the blink frequency is stable.
According to these three events, the special parameter r in the sampling interval τ_s is updated to change the sampling frequency. On an advance event, r is updated as:
r' = α_D r
where r is the value from the previous frame and α_D is a constant in the range 0 to 1. On a retreat event, r is updated as:
r' = r + α_I (1 − r)
where α_I is a constant in the range 0 to 1. Under this update scheme, when r is small it increases quickly and decreases slowly; when r is large it increases slowly and decreases quickly, so that the sampling frequency can be adjusted reasonably in real time. On a stable event, r does not change. The server returns the updated sampling frequency to the monitoring camera, where it takes effect at the next frame.
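A minimal sketch of this adaptive sampling scheme, assuming the update shrinks r multiplicatively on an advance event and grows it toward 1 on a retreat event, consistent with the increase/decrease behaviour described above. The constants α_D = α_I = 0.5 are illustrative.

```python
def sampling_interval(r, b_l=0.033, b_u=0.05):
    # tau_s = B_L + (B_U - B_L) * r, with r in [0, 1]
    return b_l + (b_u - b_l) * r

def update_r(r, event, alpha_d=0.5, alpha_i=0.5):
    # advance -> blink faster: shrink r (shorter interval, faster sampling);
    # retreat -> blink slower: grow r toward 1 (longer interval);
    # stable  -> leave r unchanged.
    if event == "advance":
        return alpha_d * r
    if event == "retreat":
        return r + alpha_i * (1.0 - r)
    return r

r = 0.5
r = update_r(r, "retreat")             # 0.75: blinking slowed, sample less often
print(round(sampling_interval(r), 5))  # 0.04575 seconds between frames
```

Note that the multiplicative decrease is fast when r is large and slow when r is small, while the increase toward 1 behaves the other way around, matching the behaviour stated in the text.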
Specific embodiments of the present invention have been described above. It is to be understood that the invention is not limited to the particular embodiments above; those skilled in the art can make various changes or modifications within the scope of the claims without affecting the substance of the present invention. Where there is no conflict, the features in the embodiments of this application may be combined with one another arbitrarily.
Claims (8)
1. A method for detecting facial symmetry and abnormality based on real-time facial monitoring, characterized by comprising the following steps:
a training-stage step: building a deep convolutional neural network model, optimizing the weight parameters of the network layers in the model according to the collected image data of the user's main facial regions, and determining an optimal classification strategy from the image data supplied by the user;
a test-stage step: monitoring the user's facial activity in real time, collecting image data of the main facial regions, processing the images with the deep convolutional neural network model, and extracting deep convolutional features; solving a binary classification problem on the deep convolutional features, recognizing the state of each main facial region, and then performing real-time symmetry detection and abnormality detection on that region; recording the user's eye-state time series and adjusting the sampling frequency of the monitoring camera in real time.
2. The method for detecting facial symmetry and abnormality based on real-time facial monitoring according to claim 1, characterized in that the training-stage step comprises:
Step A: collecting image data of the main facial regions of different users under different illumination conditions, the main regions including the eyes, the nasolabial folds and the mouth;
Step B: building the deep convolutional neural network model, training on the collected image data, and adjusting and optimizing the weight parameters of each network layer;
Step C: generating the corresponding probability vectors from the acquired image and illumination data, and determining the optimal classification strategy from the image data supplied by the user.
3. The method for detecting facial symmetry and abnormality based on real-time facial monitoring according to claim 1, characterized in that the test-stage step comprises:
Step 1: monitoring the user's facial activity in real time through a wearable front-facing monitoring camera, and transmitting the image data of the user's main facial regions to a server, the main regions including the eyes, the nasolabial folds and the mouth;
Step 2: building the convolutional neural network model and extracting the deep convolutional features of the image data of the user's main facial regions;
Step 3: from the extracted deep convolutional features, formulating and solving a binary classification problem, recognizing the state of each main facial region, and obtaining state-recognition results;
Step 4: smoothing the state-recognition results and discarding outlier data points, then performing real-time symmetry detection and abnormality detection on each main facial region, obtaining the detection results;
Step 5: outputting the detection results, recording the user's eye-state time series, and adjusting the sampling frequency of the wearable front-facing monitoring camera in real time according to that series.
4. The method for real-time facial monitoring and detection of facial symmetry and abnormality according to claim 3, characterized in that Step 2 comprises: building a convolutional neural network model in the server using deep learning and a deep convolutional neural network, feeding the real-time monitoring image data from the wearable front-facing camera into the deep convolutional neural network model, and extracting the deep convolution features of the eye, nasolabial-fold and mouth images.
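The feature extraction of Step 2 can be sketched with a toy single-layer convolution in NumPy. The kernels, single layer and global max-pooling here are illustrative assumptions; the patent's deep network would have many trained layers.

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D correlation of a grayscale image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def conv_features(img, kernels):
    """One convolution layer: convolve, ReLU, then global max-pool each map."""
    feats = []
    for k in kernels:
        fmap = np.maximum(conv2d(img, k), 0.0)  # ReLU non-linearity
        feats.append(fmap.max())                # global max pooling
    return np.array(feats)

# Toy 'eye region' with a vertical edge; one vertical- and one horizontal-edge kernel.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
kernels = [np.array([[-1.0, 1.0]]), np.array([[-1.0], [1.0]])]
f = conv_features(img, kernels)
```

The resulting feature vector responds to the vertical edge but not to any horizontal structure, which is the kind of localized response a trained convolutional layer provides at scale.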
5. The method for real-time facial monitoring and detection of facial symmetry and abnormality according to claim 3, characterized in that Step 3 comprises: according to the extracted deep convolution features of the user's main facial regions, formulating and solving a two-class classification problem that determines, from the deep convolution features, whether asymmetry occurs at the region under detection; identifying the symmetric/asymmetric two-class state of the eyes, nasolabial folds and mouth; and forming a state time series.
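The two-class decision of Step 3 can be sketched as a logistic classifier over the deep-feature vector, run per frame to form the state time series. The weights, bias and the single "left/right difference" feature are hypothetical placeholders; the patent leaves the classifier's exact form to the trained network.

```python
import math

def predict_asymmetry(features, weights, bias):
    """Two-class decision on a feature vector: sigmoid(w.x + b) >= 0.5 -> asymmetric (1)."""
    z = sum(w * x for w, x in zip(weights, features)) + bias
    p = 1.0 / (1.0 + math.exp(-z))
    return (1 if p >= 0.5 else 0), p

def state_time_series(feature_frames, weights, bias):
    """Classify successive frames to form the per-region state time series."""
    return [predict_asymmetry(f, weights, bias)[0] for f in feature_frames]

# Hypothetical weights: the single feature is a left/right difference score.
weights, bias = [4.0], -2.0
series = state_time_series([[0.1], [0.9], [0.2]], weights, bias)
```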
6. The method for real-time facial monitoring and detection of facial symmetry and abnormality according to claim 3, characterized in that Step 4 comprises: according to the state time series, or the instantaneous states, of the eye, nasolabial-fold and mouth images, smoothing the data and removing abnormal data points; measuring the degree of similarity between the left and right halves of the user's face to judge facial symmetry; and detecting, from the state time series and the symmetry result, whether an asymmetric abnormality occurs on the user's face.
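Two ingredients of Step 4 can be sketched under stated assumptions: majority-vote smoothing of the binary state series (so an isolated flip is discarded as an outlier), and a left/right similarity score computed by mirroring one half of the face. The window size and the mean-absolute-difference similarity metric are illustrative choices, not taken from the patent.

```python
import numpy as np

def smooth_states(states, window=3):
    """Majority-vote smoothing of a binary state series; isolated flips are outliers."""
    out = []
    for i in range(len(states)):
        lo, hi = max(0, i - window // 2), min(len(states), i + window // 2 + 1)
        out.append(1 if sum(states[lo:hi]) * 2 > (hi - lo) else 0)
    return out

def lr_similarity(face):
    """Similarity of the left half to the mirrored right half (1.0 = perfectly symmetric)."""
    h, w = face.shape
    left = face[:, : w // 2]
    right = np.fliplr(face[:, w - w // 2 :])
    return 1.0 - np.mean(np.abs(left - right))

states = smooth_states([0, 0, 1, 0, 0])  # the lone 1 is smoothed away
score = lr_similarity(np.ones((4, 6)))   # a uniform face is perfectly symmetric
```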
7. The method for real-time facial monitoring and detection of facial symmetry and abnormality according to claim 3, characterized in that Step 5 comprises: calculating the user's blink frequency within a set unit of time from the eye-state time series, and dynamically adjusting the image-data sampling frequency of the front-facing monitoring camera so that the sampling frequency stays consistent with the blink frequency.
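Step 5's blink-driven adjustment can be sketched as follows, under illustrative assumptions: the eye-state series is per-frame (1 = open, 0 = closed), a blink is counted on each open-to-closed transition, and `min_hz`/`max_hz` are assumed hardware limits not specified in the patent.

```python
def blink_frequency(eye_states, fps):
    """Blinks per second from a per-frame eye-state series (1 = open, 0 = closed)."""
    blinks = sum(1 for a, b in zip(eye_states, eye_states[1:]) if a == 1 and b == 0)
    duration = len(eye_states) / fps
    return blinks / duration

def adjust_sampling(eye_states, fps, min_hz=1.0, max_hz=30.0):
    """Keep the camera's sampling frequency consistent with the observed blink rate,
    clamped to an assumed supported range."""
    return min(max(blink_frequency(eye_states, fps), min_hz), max_hz)

# 10 frames over 2 s (5 fps) containing two blinks -> 1 blink per second.
rate = adjust_sampling([1, 1, 0, 1, 1, 0, 1, 1, 1, 1], fps=5)
```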
8. The method for real-time facial monitoring and detection of facial symmetry and abnormality according to claim 1, characterized in that the deep convolutional neural network model can adapt in real time to image data of different users and different illumination conditions received from the front-facing camera, and comprises a daytime mode and a night mode.
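The day/night mode switch of claim 8 can be sketched as a mean-brightness test on the incoming frame. The 0-255 grayscale threshold of 60 is an assumed value; the patent does not disclose how the mode decision is made.

```python
import numpy as np

def select_mode(frame, threshold=60):
    """Pick daytime or night mode from mean frame brightness (assumed 0-255 grayscale)."""
    return "day" if frame.mean() >= threshold else "night"

bright = np.full((4, 4), 200, dtype=np.uint8)  # well-lit frame
dark = np.full((4, 4), 10, dtype=np.uint8)     # low-light frame
modes = (select_mode(bright), select_mode(dark))
```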
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610574228.0A CN106250819A (en) | 2016-07-20 | 2016-07-20 | Based on face's real-time monitor and detection facial symmetry and abnormal method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610574228.0A CN106250819A (en) | 2016-07-20 | 2016-07-20 | Based on face's real-time monitor and detection facial symmetry and abnormal method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106250819A true CN106250819A (en) | 2016-12-21 |
Family
ID=57613408
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610574228.0A Pending CN106250819A (en) | 2016-07-20 | 2016-07-20 | Based on face's real-time monitor and detection facial symmetry and abnormal method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106250819A (en) |
2016-07-20: CN application CN201610574228.0A filed, published as CN106250819A (status: active, Pending)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101982810A (en) * | 2010-09-10 | 2011-03-02 | 中国矿业大学 | Rotary machine multi-point wireless stress acquisition method and device thereof |
CN103227833A (en) * | 2013-04-28 | 2013-07-31 | 北京农业信息技术研究中心 | Soil humidity sensor network system and information acquisition method thereof |
CN104346621A (en) * | 2013-07-30 | 2015-02-11 | 展讯通信(天津)有限公司 | Method and device for creating eye template as well as method and device for detecting eye state |
CN203494059U (en) * | 2013-09-23 | 2014-03-26 | 陕西秦明医学仪器股份有限公司 | Sensor collecting and processing system of implantable heart pacemaker |
CN104463172A (en) * | 2014-12-09 | 2015-03-25 | 中国科学院重庆绿色智能技术研究院 | Face feature extraction method based on face feature point shape drive depth model |
CN205120130U (en) * | 2015-11-14 | 2016-03-30 | 刘佳绪 | FFT calculates equipment of step number and human motion consumption calorie based on improve |
Non-Patent Citations (3)
Title |
---|
KIM: "A Smartphone-Based Automatic Diagnosis System for Facial Nerve Palsy", SENSORS * |
李爱国: "Principles, Algorithms and Applications of Data Mining", 31 January 2012, University of Electronic Science and Technology of China Press * |
郭克友: "Applications of Machine Vision Technology in Driver Assistance Safety", 31 May 2012, Beijing Jiaotong University Press * |
Cited By (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107392047A (en) * | 2017-07-25 | 2017-11-24 | 湖南云迪生物识别科技有限公司 | The acquisition methods and device of health data |
CN108154169A (en) * | 2017-12-11 | 2018-06-12 | 北京小米移动软件有限公司 | Image processing method and device |
CN108256633A (en) * | 2018-02-06 | 2018-07-06 | 苏州体素信息科技有限公司 | A kind of method of test depth Stability of Neural Networks |
CN108416331A (en) * | 2018-03-30 | 2018-08-17 | 百度在线网络技术(北京)有限公司 | Method, apparatus, storage medium and the terminal device that face symmetrically identifies |
CN110226194A (en) * | 2018-05-31 | 2019-09-10 | 京东方科技集团股份有限公司 | Display panel, display equipment, display base plate, manufacture display panel and the method for showing equipment |
CN109543526A (en) * | 2018-10-19 | 2019-03-29 | 谢飞 | True and false facial paralysis identifying system based on depth difference opposite sex feature |
CN109685101A (en) * | 2018-11-13 | 2019-04-26 | 西安电子科技大学 | A kind of adaptive acquisition method of multidimensional data and system |
CN109685101B (en) * | 2018-11-13 | 2021-09-28 | 西安电子科技大学 | Multi-dimensional data self-adaptive acquisition method and system |
TWI689285B (en) * | 2018-11-15 | 2020-04-01 | 國立雲林科技大學 | Facial symmetry detection method and system thereof |
US10846518B2 (en) | 2018-11-28 | 2020-11-24 | National Yunlin University Of Science And Technology | Facial stroking detection method and system thereof |
CN110502102B (en) * | 2019-05-29 | 2020-05-12 | 中国人民解放军军事科学院军事医学研究院 | Virtual reality interaction method based on fatigue monitoring and early warning |
CN110502102A (en) * | 2019-05-29 | 2019-11-26 | 中国人民解放军军事科学院军事医学研究院 | Virtual reality exchange method based on fatigue monitoring early warning |
CN111222558A (en) * | 2019-12-31 | 2020-06-02 | 河南裕展精密科技有限公司 | Image processing method and storage medium |
CN111222558B (en) * | 2019-12-31 | 2024-01-30 | 富联裕展科技(河南)有限公司 | Image processing method and storage medium |
CN111460991A (en) * | 2020-03-31 | 2020-07-28 | 科大讯飞股份有限公司 | Anomaly detection method, related device and readable storage medium |
CN113876296A (en) * | 2020-07-02 | 2022-01-04 | 中国医学科学院北京协和医院 | Quick self-service detecting system for stroke |
CN113876296B (en) * | 2020-07-02 | 2024-05-28 | 中国医学科学院北京协和医院 | Quick self-service detecting system of apoplexy |
CN112037907A (en) * | 2020-07-28 | 2020-12-04 | 上海恩睦信息科技有限公司 | System for prompting stroke risk based on facial features |
CN115622730A (en) * | 2022-08-25 | 2023-01-17 | 支付宝(杭州)信息技术有限公司 | Training method of face attack detection model, face attack detection method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106250819A (en) | Based on face's real-time monitor and detection facial symmetry and abnormal method | |
CN106951867B (en) | Face identification method, device, system and equipment based on convolutional neural networks | |
CN105354548B (en) | A kind of monitor video pedestrian recognition methods again based on ImageNet retrievals | |
CN108319953A (en) | Occlusion detection method and device, electronic equipment and the storage medium of target object | |
CN103268495B (en) | Human body behavior modeling recognition methods based on priori knowledge cluster in computer system | |
CN109271888A (en) | Personal identification method, device, electronic equipment based on gait | |
CN109614882A (en) | A kind of act of violence detection system and method based on human body attitude estimation | |
CN107886503A (en) | A kind of alimentary canal anatomical position recognition methods and device | |
CN109635727A (en) | A kind of facial expression recognizing method and device | |
CN110175595A (en) | Human body attribute recognition approach, identification model training method and device | |
CN109410168A (en) | For determining the modeling method of the convolutional neural networks model of the classification of the subgraph block in image | |
CN108765394A (en) | Target identification method based on quality evaluation | |
CN106295558A (en) | A kind of pig Behavior rhythm analyzes method | |
CN104143079A (en) | Method and system for face attribute recognition | |
CN104091176A (en) | Technology for applying figure and head portrait comparison to videos | |
CN109934062A (en) | Training method, face identification method, device and the equipment of eyeglasses removal model | |
CN109886154A (en) | Most pedestrian's appearance attribute recognition methods according to collection joint training based on Inception V3 | |
CN109299658A (en) | Face area detecting method, face image rendering method, device and storage medium | |
CN107844780A (en) | A kind of the human health characteristic big data wisdom computational methods and device of fusion ZED visions | |
CN110063736B (en) | Eye movement parameter monitoring fatigue detection and wake-up promotion system based on MOD-Net network | |
CN108765014A (en) | A kind of intelligent advertisement put-on method based on access control system | |
CN109447175A (en) | In conjunction with the pedestrian of deep learning and metric learning recognition methods again | |
CN109376621A (en) | A kind of sample data generation method, device and robot | |
CN109063643A (en) | A kind of facial expression pain degree recognition methods under the hidden conditional for facial information part | |
CN112907554A (en) | Automatic air conditioner adjusting method and device based on thermal infrared image recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20161221 |