CN109819100A - Mobile phone control method, device, computer installation and computer readable storage medium - Google Patents
- Publication number: CN109819100A (application CN201811527675.6A)
- Authority
- CN
- China
- Prior art keywords
- identified
- feature vector
- micro
- expression
- default
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
The present invention provides a mobile phone control method and device, a computer apparatus, and a computer-readable storage medium. The mobile phone control method includes: obtaining a first micro-expression image to be identified; extracting the feature vector to be identified corresponding to the first micro-expression image to be identified; comparing the feature vector to be identified with a group of preset feature vectors, and calculating the likelihood probability value between the feature vector to be identified and each preset feature vector; judging whether any of the multiple likelihood probability values is greater than a preset probability threshold; if a first likelihood probability value among the multiple likelihood probability values is greater than the preset probability threshold, classifying the first micro-expression image to be identified into the first expression category corresponding to the first likelihood probability value; and opening the application program associated with the first expression category. The mobile phone control method provided by the invention controls applications on a mobile phone through micro-expressions, optimizing the operating experience of the mobile computing device without manual operation, which is convenient for the user.
Description
Technical field
The present invention relates to the field of computer technology, and more particularly to a mobile phone control method and device, a computer apparatus, and a computer-readable storage medium.
Background art
A micro-expression is a very brief, involuntary facial expression leaked when a person attempts to suppress or hide a real emotion. It cannot be consciously controlled, is an effective clue for lie detection, and is mainly applied in fields such as security, justice, and clinical practice. With the development of communication technology, the use of computers and mobile phones has become more and more widespread. At present, computers and mobile phones are operated through button presses or touch operations. However, both button operation and touch operation must be completed by hand. This manual mode of operation is therefore too limited, inconvenient for the user, and narrow in its range of application.
Summary of the invention
In view of the above, it is necessary to propose a mobile phone control method and device, a computer apparatus, and a storage medium that require no manual operation and are convenient for the user.
The present invention provides a mobile phone control method, the method comprising:
Obtaining a first micro-expression image to be identified;
Extracting the feature vector to be identified corresponding to the first micro-expression image to be identified;
Comparing the feature vector to be identified with a group of preset feature vectors, and calculating the likelihood probability between the feature vector to be identified and each preset feature vector, to obtain multiple likelihood probability values;
Judging whether any of the multiple likelihood probability values is greater than a preset probability threshold;
If a first likelihood probability value among the multiple likelihood probability values is greater than the preset probability threshold, classifying the first micro-expression image to be identified into the first expression category corresponding to the first likelihood probability value; and
Opening the application program associated with the first expression category.
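By way of illustration only (not part of the claimed subject matter), the steps above can be taken together as an end-to-end sketch. Every name below, and the injection of the extractor and distance function as callables, is an assumption chosen for illustration:

```python
import math

def control_phone(image, presets, bias, threshold, apps, extract, distance):
    """Extract the feature vector of the micro-expression image, score it
    against every preset feature vector with the sigmoid likelihood
    p = (1 + exp(D - b))^-1, and return the application associated with
    the first category whose likelihood exceeds the threshold."""
    y = extract(image)
    for j, x in enumerate(presets):
        p = 1.0 / (1.0 + math.exp(distance(y, x) - bias))
        if p > threshold:
            return apps.get(j)   # unique identifier of the associated app
    return None                  # no category matched: open nothing
```

With a squared-Euclidean distance stand-in and one preset vector per category, an image close to preset j returns the application registered for category j.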
In one of the embodiments, the feature vector to be identified includes a shape feature vector and/or a texture feature vector.
In one of the embodiments, the step of comparing the feature vector to be identified with the group of preset feature vectors and calculating the likelihood probability between the feature vector to be identified and each preset feature vector comprises:
Obtaining the distance value between the feature vector to be identified and each preset feature vector; and
Calculating, from the distance value, the likelihood probability value between the feature vector to be identified and each preset feature vector, the likelihood probability value being calculated as p = {1 + exp[D − b]}^(−1), where D is the distance value and b is a preset bias.
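As a minimal sketch (function and parameter names are assumptions, not from the patent), the likelihood formula p = {1 + exp[D − b]}^(−1) can be written as:

```python
import math

def likelihood(distance, bias):
    """Likelihood probability p = (1 + exp(D - b))^-1, where D is the
    distance value between the feature vector to be identified and a
    preset feature vector, and b is the preset bias."""
    return 1.0 / (1.0 + math.exp(distance - bias))

# A smaller distance yields a larger likelihood, as the method requires.
p_close = likelihood(0.5, 1.0)   # D < b  ->  p > 0.5
p_far = likelihood(3.0, 1.0)     # D > b  ->  p < 0.5
```

Note that p decreases monotonically in D, so thresholding p is equivalent to thresholding the distance; the sigmoid merely squashes the value into (0, 1).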
In one of the embodiments, the distance value is calculated by the following formula:
D = {d(y, x1), ..., d(y, xj)} = ((y − x1)^T × M × (y − x1), ..., (y − xj)^T × M × (y − xj));
where y is the feature vector to be identified of the first micro-expression image to be identified;
xj is the j-th preset feature vector;
M is a preset target metric matrix;
j is an integer greater than or equal to 1;
d(y, xj) is the distance value between the feature vector to be identified and the j-th preset feature vector;
(y − xj) is the difference between the feature vector to be identified and the j-th preset feature vector; and
(y − xj)^T is the transpose of the difference between the feature vector to be identified and the j-th preset feature vector.
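Under the stated definitions, the generalized Mahalanobis distance to each preset vector could be computed as in this sketch (function and variable names are assumptions):

```python
def generalized_mahalanobis(y, presets, M):
    """Return (d(y, x1), ..., d(y, xj)) with d(y, x) = (y - x)^T M (y - x),
    where M is the preset target metric matrix (a list of rows) and each
    element of `presets` is one preset feature vector x_j."""
    def one(x):
        diff = [yi - xi for yi, xi in zip(y, x)]
        # (y - x)^T M (y - x) expanded as a double sum over components
        return sum(diff[i] * M[i][k] * diff[k]
                   for i in range(len(diff)) for k in range(len(diff)))
    return [one(x) for x in presets]

# With M = identity the distance reduces to the squared Euclidean distance.
identity = [[1.0, 0.0], [0.0, 1.0]]
d = generalized_mahalanobis([0.0, 0.0], [[1.0, 0.0], [0.0, 2.0]], identity)
```

Choosing M as a learned metric (rather than the identity) is what makes the distance "generalized" and lets the method weight discriminative feature dimensions more heavily.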
In one of the embodiments, the step of opening the application program associated with the first expression category comprises:
Associating the first expression category with a preset application program; and
When the obtained first expression category contains the unique identifier of the preset application program, opening the application program associated with the first expression category.
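A minimal sketch of the association step; the category names and application identifiers below are hypothetical, chosen only for illustration:

```python
# Hypothetical association table: each expression category carries the
# unique identifier of its preset application program.
EXPRESSION_TO_APP = {
    "happy": "com.example.camera",
    "surprised": "com.example.player",
}

def app_for_category(category):
    """Return the unique identifier of the application associated with the
    obtained expression category, or None when no association exists."""
    return EXPRESSION_TO_APP.get(category)
```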
In one of the embodiments, after the step of opening the application program associated with the first expression category, the method further comprises:
Obtaining a second expression category corresponding to a second micro-expression image to be identified;
Associating the second expression category with a preset operation instruction of the application program; and
When the second expression category is obtained, executing the preset operation instruction of the application program.
In one of the embodiments, the preset operation instruction is next page, previous page, play, pause, fast forward, left mouse click, or right mouse click.
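The second-stage mapping from expression categories to preset operation instructions might be sketched as below; all category names are assumptions for illustration:

```python
# Hypothetical mapping from a second expression category to one of the
# preset operation instructions listed above.
CATEGORY_TO_OPERATION = {
    "frown": "next page",
    "raise_eyebrows": "previous page",
    "smile": "play",
    "blink": "pause",
}

def operation_for_category(category):
    """Return the preset operation instruction associated with the obtained
    second expression category; unrecognized categories do nothing."""
    return CATEGORY_TO_OPERATION.get(category, "no-op")
```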
A mobile phone control device, the device comprising:
An obtaining module, for obtaining a first micro-expression image to be identified;
An extraction module, for extracting the feature vector to be identified corresponding to the first micro-expression image to be identified;
A computing module, for comparing the feature vector to be identified with a group of preset feature vectors and calculating the likelihood probability between the feature vector to be identified and each preset feature vector, to obtain multiple likelihood probability values;
A judgment module, for judging whether any of the multiple likelihood probability values is greater than a preset probability threshold;
A division module, for classifying the first micro-expression image to be identified into the first expression category corresponding to the first likelihood probability value; and
An opening module, for opening the application program associated with the first expression category when the first expression category to which the first micro-expression image to be identified belongs is obtained.
A computer apparatus, the computer apparatus comprising a processor and a memory, wherein the processor implements the mobile phone control method when executing the computer program stored in the memory.
A computer-readable storage medium, on which a computer program is stored, wherein the mobile phone control method is implemented when the computer program is executed by a processor.
In conclusion mobile phone control method of the present invention is by obtaining first micro- facial expression image to be identified;Extract institute
State the corresponding feature vector to be identified of the first micro- facial expression image to be identified;By the feature vector to be identified and one group of default feature
Vector is compared, and the feature vector to be identified is compared with one group of default feature vector, and calculates described wait know
The likelihood probability of other feature vector and each default feature vector, to obtain multiple likelihood probability values;Judge multiple described
With the presence or absence of the likelihood probability value greater than predetermined probabilities threshold value in likelihood probability value;If first in multiple likelihood probability values
Likelihood probability value is greater than the predetermined probabilities threshold value, then it is similar described first micro- facial expression image to be identified to be divided into described first
In the corresponding first expression classification of probability threshold value;And works as and get the first expression belonging to described first micro- facial expression image to be identified
When classification, then the application program with the first expression category associations is opened.To reach without manual operation, convenient for user
The effect of operation.
Brief description of the drawings
In order to explain the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below are only embodiments of the present invention; for those of ordinary skill in the art, other drawings can be obtained from the provided drawings without creative effort.
Fig. 1 is a flow chart of the mobile phone control method provided by Embodiment one of the present invention.
Fig. 2 is a functional block diagram of a preferred embodiment of the mobile phone control device provided by Embodiment two of the present invention.
Fig. 3 is a schematic diagram of the computer apparatus provided by Embodiment three of the present invention.
The present invention is further explained in the following detailed description with reference to the above drawings.
Specific embodiment
To make the objects, features, and advantages of the present invention easier to understand, the present invention is described in detail below with reference to the accompanying drawings and specific embodiments. It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments can be combined with each other.
In the following description, numerous specific details are set forth to facilitate a full understanding of the present invention. The described embodiments are only some, not all, of the embodiments of the invention. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the scope of protection of the present invention.
Unless otherwise defined, all technical and scientific terms used herein have the same meaning as commonly understood by those skilled in the technical field of the present invention. The terms used in the specification are intended only to describe specific embodiments and are not intended to limit the present invention.
The terms "first", "second", and "third" in the specification, claims, and the above drawings are used to distinguish different objects, not to describe a particular order. In addition, the term "comprising" and any variations thereof are intended to cover non-exclusive inclusion. For example, a process, method, system, product, or device that comprises a series of steps or units is not limited to the listed steps or units, but optionally also includes steps or units that are not listed, or optionally also includes other steps or units inherent to the process, method, product, or device.
Preferably, the mobile phone control method of the invention is applied in one or more computer apparatuses. A computer apparatus is a device capable of automatically performing numerical calculation and/or information processing according to preset or stored instructions; its hardware includes, but is not limited to, a microprocessor, an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA), a Digital Signal Processor (DSP), an embedded device, and the like.
The computer apparatus can be a computing device such as a desktop PC, a laptop, a tablet computer, or a server. The computer apparatus can interact with the user through a keyboard, a mouse, a remote controller, a touch pad, a voice-operated device, or the like.
Embodiment one:
Fig. 1 is a flow chart of the steps of a preferred embodiment of the mobile phone control method of the present invention. According to different requirements, the order of the steps in the figure can change, and certain steps can be omitted.
As shown in Fig. 1, the mobile phone control method specifically includes the following steps.
Step S1: obtain a first micro-expression image to be identified.
In the present embodiment, the micro-expression image to be identified is a micro-expression with a distinct meaning that the user shows in front of the camera.
For example, the facial action when happy: the corners of the mouth tilt up, wrinkles are lifted on the cheeks, the eyelids contract, and "crow's feet" form at the tails of the eyes. Facial characteristics when sad: narrowed eyes, tightened eyebrows, corners of the mouth pulled down, and the chin lifted or tightened. Facial characteristics when afraid: the mouth and eyes open, the eyebrows raise, and the nostrils flare. Facial characteristics when angry: the eyebrows droop, the forehead is knitted, and the eyelids and lips tense. Facial characteristics when disgusted: a sneering nose, a lifted upper lip, drooping eyebrows, and narrowed eyes. Facial characteristics when surprised: a sagging jaw, relaxed lips and mouth, widened eyes, and slightly lifted eyelids and eyebrows. Facial characteristics when contemptuous: one side of the corner of the mouth lifts, in a sneering or proud smile.
Step S2: extract the feature vector to be identified corresponding to the first micro-expression image to be identified.
The present embodiment does not limit the extraction method for the micro-expression image to be identified; methods such as those based on the differential energy image (DEI) and the centralized Gabor binary pattern (CGBP) can all be used. The feature vector to be identified includes a shape feature vector and/or a texture feature vector. The representation form of the feature vector to be identified is also not limited; it can be presented as a matrix, a row vector, a column vector, or the like.
In one embodiment, the feature vector to be identified is extracted from the micro-expression image to be identified based on the type of the preset feature vectors in the preset micro-expression library. For example, when the preset feature vectors are shape feature vectors, the shape feature vector in the micro-expression image to be identified is extracted; when the preset feature vectors are texture feature vectors, the texture feature vector in the micro-expression image to be identified is extracted; and when the preset feature vectors are both shape and texture feature vectors, both the shape feature vector and the texture feature vector in the micro-expression image to be identified are extracted.
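The type-driven extraction could be organized as below. The stub extractors are placeholders (a real system would use a DEI- or CGBP-based method as the text suggests), and all names are assumptions:

```python
def extract_features(image, preset_type):
    """Extract the feature vector(s) whose type matches the preset
    feature vectors in the preset micro-expression library."""
    def shape_features(img):
        # Stub standing in for a real shape descriptor.
        return [float(len(img))]

    def texture_features(img):
        # Stub standing in for a real texture descriptor (e.g. CGBP-based).
        return [float(sum(img))]

    if preset_type == "shape":
        return shape_features(image)
    if preset_type == "texture":
        return texture_features(image)
    # Both types preset: extract and concatenate shape and texture features.
    return shape_features(image) + texture_features(image)
```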
Step S3: compare the feature vector to be identified with a group of preset feature vectors, and calculate the likelihood probability between the feature vector to be identified and each preset feature vector, to obtain multiple likelihood probability values.
In one embodiment, the step of determining, from the feature vector to be identified and the preset feature vector of each preset expression in the preset micro-expression library, the likelihood probability that the micro-expression image to be identified belongs to the same expression category as each preset expression includes: obtaining the distance value between the feature vector to be identified and the preset feature vector of each preset expression; and determining, from the distance value, the likelihood probability that the micro-expression image to be identified belongs to the same expression category as the corresponding preset expression. Here, the distance value is a generalized Mahalanobis distance.
In one embodiment, the likelihood probability between the feature vector to be identified and the preset feature vector of a preset expression is determined by the following formulas:
p = {1 + exp[D − b]}^(−1), where D is the distance value and b is a preset bias; and
D = {d(y, x1), ..., d(y, xj)} = ((y − x1)^T × M × (y − x1), ..., (y − xj)^T × M × (y − xj));
where y is the feature vector to be identified of the first micro-expression image to be identified; xj is the j-th preset feature vector; M is a preset target metric matrix; j is an integer greater than or equal to 1; d(y, xj) is the distance value between the feature vector to be identified and the j-th preset feature vector; (y − xj) is the difference between the feature vector to be identified and the j-th preset feature vector; and (y − xj)^T is the transpose of that difference. The present embodiment determines the likelihood function through the generalized Mahalanobis distance between the micro-expression image to be identified and the preset expression, thereby effectively improving the accuracy of the likelihood determination.
The present embodiment determines the expression category of the micro-expression image to be identified through the likelihood probability, which can effectively improve the accuracy of expression category identification for the micro-expression image to be identified.
Step S4: judge whether any of the multiple likelihood probability values is greater than a preset probability threshold.
In one embodiment, after the feature vector to be identified is extracted from the micro-expression image to be identified, the likelihood probability that the micro-expression image to be identified belongs to the same expression category as each preset expression is determined. When the likelihood probability that the micro-expression image to be identified belongs to the same expression category as a first preset expression in the preset micro-expression library is greater than the preset probability threshold, the expression category of the micro-expression image to be identified is determined to be the expression category of the first preset expression.
In the related art, when identifying the expression category of a micro-expression, the distance value between the micro-expression image to be identified and a preset expression in the preset micro-expression library is determined. When the distance value between the micro-expression image to be identified and a first preset expression in the library is less than some preset value, the expression category of the micro-expression image to be identified is determined to be the expression category of the first preset expression; and when that distance value is greater than the preset value, the expression category of the micro-expression image to be identified is determined not to be the expression category of the first preset expression.
In this distance-based method of determining the expression category, the differences in shape and texture features between different micro-expressions are very small and their feature spaces overlap considerably, so the above method cannot effectively identify the micro-expression image to be identified. To increase the discrimination between micro-expressions, the present embodiment uses an objective function based on likelihood probability. When identifying a micro-expression to be identified, the likelihood probability that it belongs to the same expression category as each preset expression in the preset micro-expression library is predicted: if the micro-expression to be identified inherently belongs to the same expression category as a certain preset expression in the library, the likelihood probability value is greater than the preset probability threshold; if it inherently does not belong to the same expression category, the likelihood probability value is less than the preset probability threshold.
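The threshold test of this step can be sketched as follows (names are assumptions):

```python
def first_category_over_threshold(likelihoods, threshold):
    """Return the index of the first preset expression whose likelihood
    probability value exceeds the preset probability threshold, or None
    when no likelihood is large enough (the image then matches no
    category in the preset micro-expression library)."""
    for j, p in enumerate(likelihoods):
        if p > threshold:
            return j
    return None
```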
Step S5: classify the first micro-expression image to be identified into the first expression category corresponding to the first likelihood probability value.
In one embodiment, the likelihood probability that the micro-expression image to be identified belongs to the same expression category as each preset expression is determined from the feature vector to be identified and the preset feature vector of each preset expression in the preset micro-expression library. When the likelihood probability that the micro-expression image to be identified belongs to the same expression category as a first preset expression in the library is greater than the preset probability threshold, the expression category of the micro-expression image to be identified is determined to be the first expression category. Determining the expression category of the micro-expression image to be identified through the likelihood probability can effectively improve the accuracy of the expression category identification.
Step S6: when the first expression category to which the first micro-expression image to be identified belongs is obtained, open the application program associated with the first expression category.
Here, the first expression category and the application program correspond one to one; the first expression category contains the unique identifier of the application program, and the expression category is stored in the database of the computer apparatus.
In one embodiment, there are at least two expression categories: one is associated with a control instruction that starts executing a preset instruction, and another is associated with a control instruction that stops executing the preset instruction, so as to open and close the preset instruction.
In one embodiment, a first preset operation instruction is identified according to the first expression category, and the first preset operation instruction is invoked. For instance, when the obtained facial expression of the user is a frowning state, the application program receives the first preset operation instruction and executes the corresponding operation. The preset operation instructions of the application program are common instructions in the application program, such as next page, previous page, play, pause, fast forward, left mouse button, right mouse button, and the like.
In one embodiment, a second preset operation instruction controls the application system to execute a photographing operation: when the obtained facial expression is a smiling state, a photographing instruction is sent and the photographing action is executed.
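The smile-to-photograph behaviour described above might be dispatched as in this sketch (all names are assumptions):

```python
def on_second_expression(app_state, category):
    """While the application is open, a smiling state triggers the
    photographing operation; other categories are ignored here."""
    if app_state == "camera_open" and category == "smile":
        return "take photo"
    return None
```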
In another embodiment, the step of comparing the feature vector to be identified with the preset feature vectors and calculating the likelihood probability value between the feature vector to be identified and each preset feature vector comprises: obtaining the distance value between the feature vector to be identified and each preset feature vector; and calculating the likelihood probability value between the feature vector to be identified and the preset feature vector as p = {1 + exp[D − b]}^(−1), where D is the distance value and b is a preset bias.
In another embodiment, the distance value is D = {d(y, x1), ..., d(y, xj)} = ((y − x1)^T × M × (y − x1), ..., (y − xj)^T × M × (y − xj)); where y is the feature vector to be identified of the first micro-expression image to be identified; xj is the j-th preset feature vector; M is a preset target metric matrix; j is an integer greater than or equal to 1; d(y, xj) is the distance value between the feature vector to be identified and the j-th preset feature vector; (y − xj) is the difference between the feature vector to be identified and the j-th preset feature vector; and (y − xj)^T is the transpose of that difference.
In another embodiment, the step of opening the application program associated with the first expression category when the first expression category to which the first micro-expression image to be identified belongs is obtained includes: associating the first expression category with a preset application program; and, when the obtained first expression category contains the unique identifier of the preset application program, opening the application program associated with the first expression category.
In another embodiment, after the step of opening the application program associated with the first expression category, the method further includes: obtaining a second expression category corresponding to a second micro-expression image to be identified; associating the second expression category with a preset operation instruction of the application program; and, when the second expression category is obtained, executing the preset operation instruction of the application program. The preset operation instruction is next page, previous page, play, pause, fast forward, left mouse click, or right mouse click.
In conclusion mobile phone control method of the present invention is by obtaining first micro- facial expression image to be identified;Extract institute
State the corresponding feature vector to be identified of the first micro- facial expression image to be identified;By the feature vector to be identified and one group of default feature
Vector is compared, and the feature vector to be identified is compared with one group of default feature vector, and calculates described wait know
The likelihood probability of other feature vector and each default feature vector, to obtain multiple likelihood probability values;Judge multiple described
With the presence or absence of the likelihood probability value greater than predetermined probabilities threshold value in likelihood probability value;If first in multiple likelihood probability values
Likelihood probability value is greater than the predetermined probabilities threshold value, then it is similar described first micro- facial expression image to be identified to be divided into described first
In the corresponding first expression classification of probability threshold value;And works as and get the first expression belonging to described first micro- facial expression image to be identified
When classification, then the application program with the first expression category associations is opened.To reach without manual operation, convenient for user
The effect of operation.
Embodiment two:
Fig. 2 is a functional block diagram of a preferred embodiment of the mobile phone control device of the present invention.
As shown in Fig. 2, the mobile phone control device 20 may include an obtaining module 201, an extraction module 202, a computing module 203, a judgment module 204, a division module 205, and an opening module 206.
The obtaining module 201 is for obtaining a first micro-expression image to be identified.
In the present embodiment, the micro-expression image to be identified is a micro-expression with a distinct meaning that the user shows in front of the camera.
For example, the facial action when happy: the corners of the mouth tilt up, wrinkles are lifted on the cheeks, the eyelids contract, and "crow's feet" form at the tails of the eyes. Facial characteristics when sad: narrowed eyes, tightened eyebrows, corners of the mouth pulled down, and the chin lifted or tightened. Facial characteristics when afraid: the mouth and eyes open, the eyebrows raise, and the nostrils flare. Facial characteristics when angry: the eyebrows droop, the forehead is knitted, and the eyelids and lips tense. Facial characteristics when disgusted: a sneering nose, a lifted upper lip, drooping eyebrows, and narrowed eyes. Facial characteristics when surprised: a sagging jaw, relaxed lips and mouth, widened eyes, and slightly lifted eyelids and eyebrows. Facial characteristics when contemptuous: one side of the corner of the mouth lifts, in a sneering or proud smile.
The extraction module 202 is for extracting the feature vector to be identified corresponding to the first micro-expression image to be identified.
The present embodiment does not limit the extraction method for the micro-expression image to be identified; methods such as those based on the differential energy image (DEI) and the centralized Gabor binary pattern (CGBP) can all be used. The feature vector to be identified includes a shape feature vector and/or a texture feature vector, and its representation form is not limited; it can be presented as a matrix, a row vector, a column vector, or the like.
In one embodiment, the extraction module 202 extracts the to-be-identified feature vector corresponding to the to-be-identified micro-expression image based on the type of the preset feature vectors in the preset micro-expression library. For example, when the preset feature vectors are shape feature vectors, the shape feature vector of the to-be-identified micro-expression image is extracted; when the preset feature vectors are texture feature vectors, the texture feature vector of the to-be-identified micro-expression image is extracted; and when the preset feature vectors are shape feature vectors and texture feature vectors, both the shape feature vector and the texture feature vector of the to-be-identified micro-expression image are extracted.
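As a minimal, hypothetical sketch (not the patent's implementation), the selection logic above can be expressed as follows; `extract_shape` and `extract_texture` are placeholder extractors standing in for concrete methods such as DEI- or CGBP-based feature extraction:

```python
# Placeholder extractors: stand-ins for real shape/texture feature methods.
def extract_shape(image):
    # e.g. landmark- or contour-based shape features (placeholder: row sums)
    return [float(sum(row)) for row in image]

def extract_texture(image):
    # e.g. CGBP-style texture features (placeholder: row maxima)
    return [float(max(row)) for row in image]

def extract_features(image, library_type):
    """Extract the feature vector matching the preset library's vector type."""
    if library_type == "shape":
        return extract_shape(image)
    if library_type == "texture":
        return extract_texture(image)
    if library_type == "shape+texture":
        return extract_shape(image) + extract_texture(image)
    raise ValueError("unknown preset feature-vector type")
```

A combined library thus simply concatenates the two vectors, which matches the "shape and texture" case described above.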
The computing module 203 is configured to compare the to-be-identified feature vector with a group of preset feature vectors and to calculate the likelihood probability of the to-be-identified feature vector with respect to each preset feature vector, thereby obtaining multiple likelihood probability values.
In one embodiment, the step in which the computing module 203 determines, according to the to-be-identified feature vector and the preset feature vector of each preset expression in the preset micro-expression library, the likelihood probability that the to-be-identified micro-expression image and each preset expression belong to the same expression category includes: the computing module 203 obtains the distance value between the to-be-identified feature vector and the preset feature vector of each preset expression; and determines, according to the distance value, the likelihood probability that the to-be-identified micro-expression image and the preset expression corresponding to that distance value belong to the same expression category. Here, the distance value is a generalized Mahalanobis distance.
In one embodiment, the computing module 203 determines the distance between the to-be-identified feature vector and the preset feature vector of each preset expression, and the corresponding likelihood probability, by the following formulas:
p = {1 + exp[D - b]}^(-1), where D is the distance value and b is a preset bias amount; and
D = {d(y, x1), ……, d(y, xj)} = {(y - x1)^T × M × (y - x1), ……, (y - xj)^T × M × (y - xj)};
where y is the to-be-identified feature vector of the first to-be-identified micro-expression image; xj is the j-th preset feature vector; M is a preset target metric matrix; j is an integer greater than or equal to 1; d(y, xj) is the distance value between the to-be-identified feature vector and the j-th preset feature vector; (y - xj) is the difference between the to-be-identified feature vector and the j-th preset feature vector; and (y - xj)^T is the transpose of that difference. In this embodiment, the similarity function is determined from the generalized Mahalanobis distance between the to-be-identified micro-expression image and each preset expression, which effectively improves the accuracy of the similarity determination.
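Under the definitions above, the distance and likelihood computation can be sketched in plain Python (a non-authoritative illustration; the vectors and the metric matrix M are assumed to be given as lists):

```python
import math

def mahalanobis_sq(y, x, M):
    """Generalized Mahalanobis distance d(y, x) = (y - x)^T * M * (y - x)."""
    diff = [yi - xi for yi, xi in zip(y, x)]
    # M applied to the difference vector
    M_diff = [sum(M[i][k] * diff[k] for k in range(len(diff)))
              for i in range(len(diff))]
    return sum(d * md for d, md in zip(diff, M_diff))

def likelihood(y, x, M, b):
    """Likelihood probability p = {1 + exp[D - b]}^(-1) for one preset vector."""
    D = mahalanobis_sq(y, x, M)
    return 1.0 / (1.0 + math.exp(D - b))
```

With the identity metric and y equal to the preset vector, D is 0 and p reduces to 1/(1 + e^(-b)); larger distances yield smaller likelihood probabilities, which is the behavior the similarity function relies on.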
This embodiment determines the expression category of the to-be-identified micro-expression image by likelihood probability, which effectively improves the accuracy of identifying the expression category of the to-be-identified micro-expression image.
The judgment module 204 is configured to judge whether any of the multiple likelihood probability values is greater than a predetermined probability threshold.
In one embodiment, after the to-be-identified feature vector of the above to-be-identified micro-expression image has been extracted, the judgment module 204 determines the likelihood probability that the to-be-identified micro-expression image and each preset expression belong to the same expression category. When the likelihood probability that the to-be-identified micro-expression image and a first preset expression in the preset micro-expression library belong to the same expression category is greater than the predetermined probability threshold, the judgment module 204 determines that the expression category of the to-be-identified micro-expression image is the expression category of the first preset expression.
In expression-category identification of a micro-expression, the distance value between the to-be-identified micro-expression image and each preset expression in the preset micro-expression library is determined. When the distance value between the to-be-identified micro-expression image and a first preset expression in the preset micro-expression library is less than a preset value, the expression category of the to-be-identified micro-expression image is determined to be the expression category of the first preset expression; when that distance value is greater than the preset value, the expression category of the to-be-identified micro-expression image is determined not to be the expression category of the first preset expression.
In order to increase the discriminability between micro-expressions, this embodiment uses an objective function based on likelihood probability. When identifying a to-be-identified micro-expression, the likelihood probability that it belongs to the same expression category as a preset expression in the preset micro-expression library can be predicted: if the to-be-identified micro-expression and a certain preset expression genuinely belong to the same expression category, the likelihood probability value is greater than the predetermined probability threshold; if they do not belong to the same expression category, the likelihood probability value is less than the predetermined probability threshold.
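The judgment and division steps described above amount to a simple thresholding rule; this is a hedged sketch (the likelihood probability values are assumed already computed, and when several values exceed the threshold this sketch takes the largest, a detail the text does not specify):

```python
def classify(likelihoods, threshold):
    """Return the index of the preset expression category whose likelihood
    probability exceeds the threshold, or None if no value exceeds it."""
    best = max(range(len(likelihoods)), key=lambda j: likelihoods[j])
    return best if likelihoods[best] > threshold else None
```

Returning None models the case where the to-be-identified micro-expression matches no preset expression, so no application is opened.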
The division module 205 is configured to divide the first to-be-identified micro-expression image into the first expression category corresponding to the first likelihood probability value.
In one embodiment, the division module 205 determines, according to the to-be-identified feature vector and the preset feature vector of each preset expression in the preset micro-expression library, the likelihood probability that the to-be-identified micro-expression image and each preset expression belong to the same expression category. When the likelihood probability that the to-be-identified micro-expression image and the first preset expression in the preset micro-expression library belong to the same expression category is greater than the predetermined probability threshold, the expression category of the to-be-identified micro-expression image is determined to be the first expression category. Determining the expression category by likelihood probability in this way effectively improves the accuracy of identifying the expression category of the to-be-identified micro-expression image.
The opening module 206 is configured to open the application program associated with the first expression category when the first expression category to which the first to-be-identified micro-expression image belongs has been obtained.
The first expression category and the application program are in one-to-one correspondence; the first expression category contains the unique identifier of the application program, and the expression category is stored in the database of the computer device.
In one embodiment, there are at least two expression categories: one is associated with a control instruction that starts execution of a preset instruction, and the other is associated with a control instruction that stops execution of the preset instruction, so as to achieve the purpose of opening and closing the preset instruction.
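The paired start/stop association above can be sketched as a small state machine; the category names passed in are illustrative assumptions, not names from the patent:

```python
class PresetInstructionSwitch:
    """Binds one expression category to starting a preset instruction and
    another category to stopping it, as described above."""
    def __init__(self, start_category, stop_category):
        self.start_category = start_category
        self.stop_category = stop_category
        self.running = False

    def handle(self, category):
        # Start or stop the preset instruction depending on the category seen.
        if category == self.start_category:
            self.running = True
        elif category == self.stop_category:
            self.running = False
        return self.running
```

Categories bound to neither instruction leave the state unchanged, so unrelated expressions do not toggle the preset instruction.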
In one embodiment, the opening module 206 identifies a first predetermined operation instruction according to the first expression category and calls the first predetermined operation instruction. For example, when the obtained facial expression of the user is a frown, the application program receives the first predetermined operation instruction and executes the operation corresponding to it. The predetermined operation instructions of the application program are common instructions within the application program, such as next page, previous page, play, pause, fast-forward, left mouse button, and right mouse button.
In one embodiment, a second predetermined operation instruction controls the application to perform a photographing operation. When the opening module 206 detects that the user's facial expression is a smile, a photographing instruction is sent and the photographing action is executed.
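Taken together, the examples above (a frown triggering the first predetermined operation instruction, a smile triggering photographing) suggest a simple dispatch table; the names here are assumed for illustration and are not the patent's code:

```python
# Illustrative association between expression categories and application actions.
ACTIONS = {
    "frown": "first_predetermined_operation",
    "smile": "take_photo",
}

def dispatch(expression_category):
    """Return the action bound to the category, or None if it is unbound."""
    return ACTIONS.get(expression_category)
```

Because the mapping is a plain dictionary, adding a new expression-to-instruction association is a one-line change.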
In another embodiment, the step of comparing the to-be-identified feature vector with the preset feature vectors and calculating the likelihood probability value of the to-be-identified feature vector with respect to each preset feature vector includes: obtaining the distance value between the to-be-identified feature vector and each preset feature vector; and calculating the likelihood probability value of the to-be-identified feature vector with respect to each preset feature vector as p = {1 + exp[D - b]}^(-1), where D is the distance value and b is a preset bias amount.
In another embodiment, the distance value is D = {d(y, x1), ……, d(y, xj)} = {(y - x1)^T × M × (y - x1), ……, (y - xj)^T × M × (y - xj)};
where y is the to-be-identified feature vector of the first to-be-identified micro-expression image;
xj is the j-th preset feature vector;
M is a preset target metric matrix;
j is an integer greater than or equal to 1;
d(y, xj) is the distance value between the to-be-identified feature vector and the j-th preset feature vector;
(y - xj) is the difference between the to-be-identified feature vector and the j-th preset feature vector; and
(y - xj)^T is the transpose of the difference between the to-be-identified feature vector and the j-th preset feature vector.
In another embodiment, the step of opening the application program associated with the first expression category when the first expression category to which the first to-be-identified micro-expression image belongs has been obtained includes: associating the first expression category with a preset application program; obtaining the unique identifier of the preset application program contained in the first expression category; and then opening the application program associated with the first expression category.
In another embodiment, after the step of opening the application program associated with the first expression category when the first expression category to which the first to-be-identified micro-expression image belongs has been obtained, the method further includes: obtaining a second expression category corresponding to a second to-be-identified micro-expression image; associating the second expression category with a predetermined operation instruction of the application program; and, when the second expression category is obtained, executing the predetermined operation instruction of the application program. The predetermined operation instruction is next page, previous page, play, pause, fast-forward, clicking the left mouse button, or clicking the right mouse button.
In summary, the mobile phone control method of the present invention obtains a first to-be-identified micro-expression image; extracts the to-be-identified feature vector corresponding to the first to-be-identified micro-expression image; compares the to-be-identified feature vector with a group of preset feature vectors and calculates the likelihood probability of the to-be-identified feature vector with respect to each preset feature vector, thereby obtaining multiple likelihood probability values; judges whether any of the multiple likelihood probability values is greater than a predetermined probability threshold; if a first likelihood probability value among the multiple likelihood probability values is greater than the predetermined probability threshold, divides the first to-be-identified micro-expression image into the first expression category corresponding to the first likelihood probability value; and, when the first expression category to which the first to-be-identified micro-expression image belongs is obtained, opens the application program associated with the first expression category. The effect achieved is that no manual operation is required, which is convenient for the user.
Embodiment three:
Fig. 3 is a schematic diagram of a preferred embodiment of the computer device of the present invention.
The computer device 30 includes a memory 31, a processor 32, and a computer program 33, such as a mobile phone control program, that is stored in the memory 31 and can run on the processor 32. When executing the computer program 33, the processor 32 implements the steps of the above mobile phone control method embodiment, such as steps S1 to S5 shown in Fig. 1. Alternatively, when executing the computer program 33, the processor 32 implements the functions of the modules in the above cell phone control device embodiment, such as modules 201 to 206 in Fig. 2.
Illustratively, the computer program 33 may be divided into one or more modules/units, which are stored in the memory 31 and executed by the processor 32 to carry out the present invention. The one or more modules/units may be a series of computer program instruction segments capable of completing specific functions, and the instruction segments describe the execution process of the computer program 33 in the computer device 30. For example, the computer program 33 may be divided into the acquisition module 201, extraction module 202, computing module 203, judgment module 204, division module 205, and opening module 206 in Fig. 2; for the specific functions of each module, refer to Embodiment two.
The computer device 30 may be a computing device such as a desktop computer, a notebook, a palmtop computer, or a cloud server. Those skilled in the art will understand that the schematic diagram is only an example of the computer device 30 and does not limit it; the computer device 30 may include more or fewer components than shown, combine certain components, or include different components. For example, the computer device 30 may further include input/output devices, network access devices, a bus, and so on.
The processor 32 may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor 32 may be any conventional processor. The processor 32 is the control center of the computer device 30 and connects the various parts of the entire computer device 30 using various interfaces and lines.
The memory 31 may be used to store the computer program 33 and/or modules/units. The processor 32 implements the various functions of the computer device 30 by running or executing the computer program and/or modules/units stored in the memory 31 and by calling the data stored in the memory 31. The memory 31 may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required by at least one function, and the like, and the data storage area may store data created according to the use of the computer device 30, and the like. In addition, the memory 31 may include high-speed random-access memory, and may also include non-volatile memory, such as a hard disk, memory, a plug-in hard disk, a smart media card (Smart Media Card, SMC), a secure digital (Secure Digital, SD) card, a flash card (Flash Card), at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage component.
If the integrated modules/units of the computer device 30 are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, all or part of the processes in the above embodiment methods of the present invention may also be completed by instructing relevant hardware through a computer program. The computer program may be stored in a computer-readable storage medium, and when executed by a processor, the computer program can implement the steps of each of the above method embodiments. The computer program includes computer program code, which may be in source code form, object code form, an executable file, certain intermediate forms, and so on. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disc, a computer memory, a read-only memory (ROM, Read-Only Memory), a random-access memory (RAM, Random Access Memory), an electrical carrier signal, a telecommunication signal, a software distribution medium, and so on. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased according to the requirements of legislation and patent practice in a jurisdiction; for example, in certain jurisdictions, according to legislation and patent practice, the computer-readable medium does not include electrical carrier signals and telecommunication signals.
In the several embodiments provided by the present invention, it should be understood that the disclosed computer device and method may be implemented in other ways. For example, the computer device embodiments described above are merely illustrative; the division into units is only a division by logical function, and there may be other division manners in actual implementation.
In addition, the functional units in the embodiments of the present invention may be integrated in the same processing unit, each unit may exist alone physically, or two or more units may be integrated in the same unit. The above integrated units may be implemented in the form of hardware, or in the form of hardware plus software functional modules.
It is obvious to those skilled in the art that the invention is not limited to the details of the above exemplary embodiments, and that the present invention can be realized in other specific forms without departing from the spirit or essential attributes of the invention. Therefore, from whatever point of view, the embodiments are to be considered illustrative and not restrictive, and the scope of the present invention is defined by the appended claims rather than by the above description; it is therefore intended that all changes falling within the meaning and scope of equivalents of the claims be included in the present invention. Any reference signs in the claims should not be construed as limiting the claims involved. Furthermore, it is clear that the word "comprising" does not exclude other units or steps, and the singular does not exclude the plural. Multiple units or computer devices stated in a computer device claim may also be implemented by the same unit or computer device through software or hardware. Words such as "first" and "second" are used to indicate names and do not indicate any particular order.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solution of the present invention and not to limit it. Although the present invention has been described in detail with reference to preferred embodiments, those of ordinary skill in the art should understand that modifications or equivalent replacements may be made to the technical solution of the present invention without departing from the spirit and scope of the technical solution of the present invention.
Claims (10)
1. A mobile phone control method, characterized in that the method comprises:
obtaining a first to-be-identified micro-expression image;
extracting the to-be-identified feature vector corresponding to the first to-be-identified micro-expression image;
comparing the to-be-identified feature vector with a group of preset feature vectors, and calculating the likelihood probability of the to-be-identified feature vector with respect to each preset feature vector, to obtain multiple likelihood probability values;
judging whether any of the multiple likelihood probability values is greater than a predetermined probability threshold;
if a first likelihood probability value among the multiple likelihood probability values is greater than the predetermined probability threshold, dividing the first to-be-identified micro-expression image into a first expression category corresponding to the first likelihood probability value; and
opening the application program associated with the first expression category.
2. The mobile phone control method according to claim 1, characterized in that the to-be-identified feature vector comprises: a shape feature vector and/or a texture feature vector.
3. The mobile phone control method according to claim 1, characterized in that the step of comparing the to-be-identified feature vector with a group of preset feature vectors and calculating the likelihood probability of the to-be-identified feature vector with respect to each preset feature vector comprises:
obtaining the distance value between the to-be-identified feature vector and each preset feature vector; and
calculating, according to the distance value, the likelihood probability value of the to-be-identified feature vector with respect to each preset feature vector, the calculation formula of the likelihood probability value being: p = {1 + exp[D - b]}^(-1), where D is the distance value and b is a preset bias amount.
4. The mobile phone control method according to claim 3, characterized in that the distance value is calculated by the following formula: D = {d(y, x1), ……, d(y, xj)} = {(y - x1)^T × M × (y - x1), ……, (y - xj)^T × M × (y - xj)};
where y is the to-be-identified feature vector of the first to-be-identified micro-expression image;
xj is the j-th preset feature vector;
M is a preset target metric matrix;
j is an integer greater than or equal to 1;
d(y, xj) is the distance value between the to-be-identified feature vector and the j-th preset feature vector;
(y - xj) is the difference between the to-be-identified feature vector and the j-th preset feature vector; and
(y - xj)^T is the transpose of the difference between the to-be-identified feature vector and the j-th preset feature vector.
5. The mobile phone control method according to claim 1, characterized in that the step of opening the application program associated with the first expression category comprises:
associating the first expression category with a preset application program; and
when the unique identifier of the preset application program contained in the first expression category is obtained, opening the application program associated with the first expression category.
6. The mobile phone control method according to claim 1, characterized in that after the step of opening the application program associated with the first expression category, the method further comprises:
obtaining a second expression category corresponding to a second to-be-identified micro-expression image;
associating the second expression category with a predetermined operation instruction of the application program; and
when the second expression category is obtained, executing the predetermined operation instruction of the application program.
7. The mobile phone control method according to claim 6, characterized in that the predetermined operation instruction is next page, previous page, play, pause, fast-forward, clicking the left mouse button, or clicking the right mouse button.
8. A cell phone control device, characterized in that the device comprises:
an acquisition module, configured to obtain a first to-be-identified micro-expression image;
an extraction module, configured to extract the to-be-identified feature vector corresponding to the first to-be-identified micro-expression image;
a computing module, configured to compare the to-be-identified feature vector with a group of preset feature vectors and to calculate the likelihood probability of the to-be-identified feature vector with respect to each preset feature vector, to obtain multiple likelihood probability values;
a judgment module, configured to judge whether any of the multiple likelihood probability values is greater than a predetermined probability threshold;
a division module, configured to divide the first to-be-identified micro-expression image into a first expression category corresponding to the first likelihood probability value; and
an opening module, configured to open the application program associated with the first expression category when the first expression category to which the first to-be-identified micro-expression image belongs has been obtained.
9. A computer device, characterized in that the computer device comprises a processor and a memory, and the processor is configured to implement the mobile phone control method according to any one of claims 1 to 7 when executing the computer program stored in the memory.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the mobile phone control method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811527675.6A CN109819100A (en) | 2018-12-13 | 2018-12-13 | Mobile phone control method, device, computer installation and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109819100A true CN109819100A (en) | 2019-05-28 |
Family
ID=66602937
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811527675.6A Pending CN109819100A (en) | 2018-12-13 | 2018-12-13 | Mobile phone control method, device, computer installation and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109819100A (en) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110222287A (en) * | 2019-05-31 | 2019-09-10 | 北京字节跳动网络技术有限公司 | Content displaying method, device, electronic equipment and computer readable storage medium |
CN110363079A (en) * | 2019-06-05 | 2019-10-22 | 平安科技(深圳)有限公司 | Expression exchange method, device, computer installation and computer readable storage medium |
CN110377201A (en) * | 2019-06-05 | 2019-10-25 | 平安科技(深圳)有限公司 | Terminal equipment control method, device, computer installation and readable storage medium storing program for executing |
CN111367580A (en) * | 2020-02-28 | 2020-07-03 | Oppo(重庆)智能科技有限公司 | Application starting method and device and computer readable storage medium |
CN111507149A (en) * | 2020-01-03 | 2020-08-07 | 京东方科技集团股份有限公司 | Interaction method, device and equipment based on expression recognition |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103699547A (en) * | 2012-09-28 | 2014-04-02 | 北京三星通信技术研究有限公司 | Application program recommendation method and terminal |
CN107179831A (en) * | 2017-06-30 | 2017-09-19 | 广东欧珀移动通信有限公司 | Start method, device, storage medium and the terminal of application |
CN107330313A (en) * | 2017-06-30 | 2017-11-07 | 努比亚技术有限公司 | Application control method, mobile terminal and readable storage medium storing program for executing |
CN107832691A (en) * | 2017-10-30 | 2018-03-23 | 北京小米移动软件有限公司 | Micro- expression recognition method and device |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110222287A (en) * | 2019-05-31 | 2019-09-10 | 北京字节跳动网络技术有限公司 | Content displaying method, device, electronic equipment and computer readable storage medium |
CN110363079A (en) * | 2019-06-05 | 2019-10-22 | 平安科技(深圳)有限公司 | Expression interaction method, device, computer device and computer readable storage medium |
CN110377201A (en) * | 2019-06-05 | 2019-10-25 | 平安科技(深圳)有限公司 | Terminal device control method, device, computer device and readable storage medium |
WO2020244074A1 (en) * | 2019-06-05 | 2020-12-10 | 平安科技(深圳)有限公司 | Expression interaction method and apparatus, computer device, and readable storage medium |
WO2020244160A1 (en) * | 2019-06-05 | 2020-12-10 | 平安科技(深圳)有限公司 | Terminal device control method and apparatus, computer device, and readable storage medium |
CN111507149A (en) * | 2020-01-03 | 2020-08-07 | 京东方科技集团股份有限公司 | Interaction method, device and equipment based on expression recognition |
CN111507149B (en) * | 2020-01-03 | 2023-10-27 | 京东方艺云(杭州)科技有限公司 | Interaction method, device and equipment based on expression recognition |
CN111367580A (en) * | 2020-02-28 | 2020-07-03 | Oppo(重庆)智能科技有限公司 | Application starting method and device and computer readable storage medium |
CN111367580B (en) * | 2020-02-28 | 2024-02-13 | Oppo(重庆)智能科技有限公司 | Application starting method and device and computer readable storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109819100A (en) | Mobile phone control method, device, computer installation and computer readable storage medium | |
CN107894833B (en) | Multi-modal interaction processing method and system based on virtual human | |
US10198845B1 (en) | Methods and systems for animating facial expressions | |
US20210258506A1 (en) | Image Processing Method and Apparatus | |
WO2020244074A1 (en) | Expression interaction method and apparatus, computer device, and readable storage medium | |
CN107797663A (en) | Multi-modal interaction processing method and system based on virtual human | |
CN111583154B (en) | Image processing method, skin beautifying model training method and related device | |
CN109063560A (en) | Image processing method, device, computer readable storage medium and terminal | |
CN106897727A (en) | User gender identification method and device | |
CN106295591A (en) | Gender identification method and device based on facial image | |
WO2020258981A1 (en) | Identity information processing method and device based on fundus image | |
CN110163111A (en) | Method, apparatus of calling out the numbers, electronic equipment and storage medium based on recognition of face | |
CN109741338A (en) | Face segmentation method, device and equipment | |
EP3648008A1 (en) | Face recognition method and apparatus, storage medium, and electronic device | |
CN108921856A (en) | Image cropping method, apparatus, electronic equipment and computer readable storage medium | |
CN109389076B (en) | Image segmentation method and device | |
CN108009470A (en) | Image extraction method and apparatus | |
CN105302311B (en) | Terminal coefficient control method, device and terminal based on fingerprint recognition | |
CN110378203A (en) | Image processing method, device, terminal and storage medium | |
CN110749055A (en) | Method, device and system for controlling air conditioner | |
WO2020244160A1 (en) | Terminal device control method and apparatus, computer device, and readable storage medium | |
CN107450717A (en) | Information processing method and wearable device | |
WO2022048352A1 (en) | Unlocking method and apparatus based on facial expression, and computer device and storage medium | |
CN110598719A (en) | Method for automatically generating face image according to visual attribute description | |
CN106991676A (en) | Superpixel fusion method based on local correlation | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | |
Application publication date: 2019-05-28 |