CN108804893A - Control method, apparatus and server based on face recognition - Google Patents
Control method, apparatus and server based on face recognition
- Publication number
- CN108804893A (publication); application CN201810291578.5A
- Authority
- CN
- China
- Prior art keywords
- expression
- default
- dynamic image
- user
- frame picture
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/32—User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
- G06V40/175—Static expression
Abstract
The present invention proposes a control method, apparatus and server based on face recognition. The control method includes: acquiring a dynamic image of the current user's face through a current device; judging, by a face recognition algorithm, whether the dynamic image contains an expression combination associated with a preset user, wherein the expression combination includes at least one preset expression of the preset user; and, when it is determined that the dynamic image contains the expression combination, controlling the current device accordingly. In the embodiments of the present invention, face recognition is performed on a dynamic image to judge whether the image contains the expression combination of the preset user, and the identity of the user is thereby verified, so that face recognition can be carried out more accurately and security is improved.
Description
Technical field
The present invention relates to the field of the Internet, and in particular to a control method, apparatus and server based on face recognition.
Background technology
With the continuous popularization of intelligent terminals, people have gradually adapted to carrying out all kinds of activities on them, such as online shopping and gaming.
Nowadays people are used to storing all kinds of information on intelligent terminals. To keep these terminals secure, users can set up multiple layers of passwords, ranging from the simplest numeric password, through complex passwords that combine digits and letters, to specific image passwords, face recognition passwords and the like.
Existing face recognition generally works as follows: a photo of the owner of the intelligent terminal is taken in advance, possibly including a specific combination of expressions and gestures. When face recognition is performed, the face of the user currently operating the terminal is located and photographed, the resulting photo is compared with the pre-captured photo, and it is then determined whether the current user is the owner.
In a specific embodiment, the prior art extracts features from the captured photo and compares them with the corresponding features extracted from the pre-stored photo, and then performs operations such as screen unlocking. However, in the prior art, whether the user operating the terminal is the owner can only be judged from a single static picture, which is inaccurate and not sufficiently secure.
Invention content
Embodiments of the present invention provide a control method, apparatus and server based on face recognition, so as to solve at least one or more technical problems in the prior art, or at least to provide a beneficial alternative.
In a first aspect, an embodiment of the present invention provides a control method based on face recognition, including:
acquiring a dynamic image of the current user's face through an image acquisition device;
judging, by a face recognition algorithm, whether the dynamic image contains an expression combination associated with a preset user, wherein the expression combination includes multiple preset expressions of the preset user; and
when it is determined that the dynamic image contains the expression combination, controlling the current device accordingly.
With reference to the first aspect, in a first implementation of the first aspect, before the judging, by the face recognition algorithm, whether the dynamic image contains the expression combination associated with the preset user, the method further includes:
judging whether the duration of the dynamic image is less than a first threshold;
and the judging, by the face recognition algorithm, whether the dynamic image contains the expression combination associated with the preset user includes:
when it is determined that the duration is less than the first threshold, judging by the face recognition algorithm whether the dynamic image contains the expression combination associated with the preset user.
With reference to the first aspect, in a second implementation of the first aspect, the judging, by the face recognition algorithm, whether the dynamic image contains the expression combination associated with the preset user includes:
extracting every frame picture of the dynamic image;
for each frame picture, extracting preset features of the face in the picture, wherein the preset features are obtained by training on multiple face pictures of the preset user, and the multiple face pictures include multiple pictures of each preset expression in the multiple preset expressions; and
judging, according to the extracted preset features, whether the dynamic image contains the expression combination associated with the preset user.
With reference to the second implementation of the first aspect, the judging, according to the extracted preset features, whether the dynamic image contains the expression combination associated with the preset user includes:
judging, according to the extracted preset features, whether the current user is the preset user; and
when it is determined that the current user is the preset user, judging, according to the extracted preset features, whether the dynamic image contains the expression combination associated with the preset user.
With reference to the first scheme of the second implementation of the first aspect, the judging, according to the extracted preset features, whether the current user is the preset user includes:
comparing the preset features extracted for each frame picture with the preset features extracted from a preset facial picture of the preset user, so as to obtain the similarity between each frame picture and the preset facial picture;
comparing the similarity with a second threshold, and counting the number of pictures whose similarity is greater than the second threshold; and
when the counted number of pictures is greater than a third threshold, determining that the current user is the preset user.
With reference to the second scheme of the second implementation of the first aspect, the judging, according to the extracted preset features, whether the dynamic image contains the expression combination associated with the user includes:
sequentially inputting, in the chronological order of the frame pictures, the preset features extracted for each frame picture into a preset expression classifier, so as to classify each frame picture, wherein the preset expression classifier is obtained by training on multiple pictures of each preset expression in the multiple preset expressions;
judging whether the classification results of the frame pictures of the dynamic image correspond to the multiple preset expressions; and
when it is determined that the classification results correspond to the multiple preset expressions, determining that the dynamic image contains the expression combination.
With reference to the third scheme of the second implementation of the first aspect, the judging, according to the extracted preset features, whether the dynamic image contains the expression combination associated with the user includes:
sequentially inputting, in the chronological order of the frame pictures, the preset features extracted for each frame picture into a preset expression classifier, so as to classify each frame picture, wherein the preset expression classifier is obtained by training on multiple pictures of each preset expression in the multiple preset expressions;
judging whether the classification results of the frame pictures of the dynamic image correspond to the multiple preset expressions;
when it is determined that the classification results correspond to the multiple preset expressions, judging, according to the classification results, whether the time interval between every two preset expressions in the multiple preset expressions is less than a fourth threshold; and
when it is determined that the time interval is less than the fourth threshold, determining that the dynamic image contains the expression combination.
In a second aspect, an embodiment of the present invention provides a control apparatus based on face recognition, including:
an acquisition module, configured to acquire a dynamic image of the current user's face through an image acquisition device;
an identification module, configured to judge, by a face recognition algorithm, whether the dynamic image contains an expression combination associated with a preset user, wherein the expression combination includes multiple preset expressions of the preset user; and
a control module, configured to, when the identification module determines that the dynamic image contains the expression combination, control the current device accordingly.
In a third aspect, an embodiment of the present invention provides a server, including:
one or more processors;
a storage device, configured to store one or more programs; and
a communication interface, configured to enable the processors and the storage device to communicate with external devices;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the method of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium for storing computer software instructions used by the above control apparatus based on face recognition, including a program for executing the control method based on face recognition of the first aspect.
One of the above technical solutions has the following advantage or beneficial effect: in the embodiments of the present invention, face recognition is performed on a dynamic image to judge whether the image contains an expression combination associated with a preset user, and the identity of the user is thereby verified, so that face recognition can be carried out more accurately and security is improved.
The above summary is provided for the purpose of description only and is not intended to be limiting in any way. In addition to the illustrative aspects, embodiments and features described above, further aspects, embodiments and features of the present invention will be readily apparent from the accompanying drawings and the following detailed description.
Description of the drawings
In the accompanying drawings, unless otherwise specified, the same reference numerals denote the same or similar components or elements throughout the multiple figures. These drawings are not necessarily drawn to scale. It should be understood that these drawings depict only some embodiments disclosed according to the present invention and should not be regarded as limiting the scope of the present invention.
Fig. 1 is a flowchart of a control method based on face recognition according to an embodiment of the present invention;
Fig. 2 is a flowchart of a face recognition method according to another embodiment of the present invention;
Fig. 3 is a flowchart of an expression combination judgment method according to another embodiment of the present invention;
Fig. 4 is a flowchart of a user judgment method according to another embodiment of the present invention;
Fig. 5 is a flowchart of an expression judgment method according to another embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a control apparatus based on face recognition according to another embodiment of the present invention;
Fig. 7 is a schematic structural diagram of an identification module according to another embodiment of the present invention;
Fig. 8 is a schematic structural diagram of a judgment submodule according to another embodiment of the present invention;
Fig. 9 is a schematic structural diagram of a user judgment unit according to another embodiment of the present invention;
Fig. 10 is a schematic structural diagram of an expression judgment unit according to another embodiment of the present invention;
Fig. 11 is a schematic structural diagram of a server according to another embodiment of the present invention.
Specific implementation mode
Hereinafter, only certain exemplary embodiments are briefly described. As those skilled in the art will recognize, the described embodiments may be modified in various different ways without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature rather than restrictive.
Fig. 1 shows a flowchart of a control method 100 based on face recognition according to an embodiment of the present invention. In the embodiments of the present invention, the method 100 can be applied to electronic devices with a camera function, such as smartphones and tablet computers; here a smartphone is taken as an example to describe the method 100 in detail. As shown in Fig. 1, the control method 100 may include:
S110: acquiring a dynamic image of the current user's face through an image acquisition device.
In the embodiments of the present invention, the image acquisition device can be any device capable of capturing images. For example, the dynamic image of the current user's face can be captured with the camera of a smartphone, preferably within a predetermined time window, such as three or five seconds, so as to reduce the amount of computation, speed up the calculation and enhance the user experience.
In addition, the step of acquiring the dynamic image can be performed when the current user picks up the phone and lights up the screen, or when the current user opens certain applications. In particular, it can be performed when the current user opens an application with a payment function. Alternatively, it can be performed at fixed time intervals. The user of the smartphone can configure this according to the actual situation.
S120: judging, by a face recognition algorithm, whether the dynamic image contains an expression combination associated with a preset user.
The expression combination in the embodiments of the present invention may include multiple preset expressions of the preset user and can be set in advance, for example blinking three times, opening the mouth twice, and so on; here blinking three times is taken as the expression combination to further describe the method 100. Furthermore, it should be understood that the preset user can be the owner of the smartphone, or a person the owner has authorized to use it. For example, besides the owner, the preset user can also be a family member of the owner; there may therefore be multiple preset users. The information about the preset user used throughout the present invention, especially facial pictures, has been stored in advance, preferably as a large number of pre-stored pictures, so as to improve recognition accuracy.
In particular, if the current user makes the expressions of the preset expression combination over a long period of time, for example blinks three times within two minutes or even longer, it indicates that the user may not intend to control the phone through facial expressions, and the behavior is merely unintentional. To prevent this situation, the duration of the dynamic image can first be judged. Therefore, in a preferred embodiment of the present invention, before S120, the method can further include:
judging whether the duration of the dynamic image is less than a first threshold;
and in this case S120 can be:
S120': when it is determined that the duration is less than the first threshold, judging by the face recognition algorithm whether the dynamic image contains the expression combination associated with the preset user.
Here the first threshold can be set to a short time span, such as two or five seconds. This speeds up the control response time of the phone and enhances the user experience.
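As an illustrative sketch only (not the patent's actual implementation), the duration pre-check can be expressed as a simple gate that skips expression recognition for overly long clips; `FIRST_THRESHOLD_S`, `check_expression_combination` and the `recognize` callback are hypothetical names introduced here:

```python
# Hypothetical sketch of the duration pre-check (first threshold) before S120'.
FIRST_THRESHOLD_S = 2.0  # a short span, e.g. two seconds, as suggested above

def check_expression_combination(clip_duration_s, recognize):
    """Run the expression-combination recognizer only for short clips.

    A long clip (e.g. blinking three times spread over two minutes)
    is treated as unintentional and rejected without recognition.
    """
    if clip_duration_s >= FIRST_THRESHOLD_S:
        return False  # likely unintentional; do not control the device
    return recognize()  # the face-recognition-based judgment of S120

# Usage: a 1.5 s clip is analysed, a 120 s clip is rejected outright.
print(check_expression_combination(1.5, lambda: True))    # True
print(check_expression_combination(120.0, lambda: True))  # False
```

Rejecting long clips before running recognition also matches the stated goal of reducing computation, since the recognizer is never invoked for clips that would fail anyway.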
Any prior-art method can be used to judge whether the dynamic image contains the expression combination; the embodiments of the present invention use a face recognition algorithm for the judgment, and in a preferred embodiment of the present invention, as shown in Fig. 2, S120 can include:
S121: extracting every frame picture of the dynamic image.
As understood by those skilled in the art, a dynamic image is made up of frames. In order to judge whether the dynamic image contains the preset expression combination, in the embodiments of the present invention the picture of every frame is extracted and judged, which allows more accurate recognition.
S122: for each frame picture, extracting preset features of the face in the picture.
Here the preset features can be obtained by training on multiple face pictures of the preset user, and these face pictures should include multiple pictures of each preset expression in the multiple preset expressions. It can be understood that the preset features obtained in this way reflect both the facial characteristics of the preset user and the characteristics of each preset expression, making the judgment result more accurate.
In the embodiments of the present invention, a face recognition algorithm is used to identify the identity of the current user. Existing face recognition techniques can work by extracting facial visual features, pixel statistical features, facial image algebraic features and the like. In the present invention, deep learning techniques or neural networks are used, and the preset features are obtained by training on multiple face pictures of the preset user.
The above deep learning techniques and neural networks can be used for big-data analysis, and their architectures are relatively complex; in the embodiments of the present invention, a deep convolutional neural network is preferably used to train on the multiple pictures of the preset user to obtain the preset features.
S123: judging, according to the extracted preset features, whether the dynamic image contains the expression combination associated with the preset user.
This judgment can be performed with a variety of prior-art techniques. Preferably, as shown in Fig. 3, S123 may include:
S1231: judging, according to the extracted preset features, whether the current user is the preset user.
Any prior art can be used to execute this step. In a preferred embodiment of the present invention, as shown in Fig. 4, it can be carried out through the following steps:
S12311: comparing the preset features extracted for each frame picture with the preset features extracted from the preset facial picture of the preset user, so as to obtain the similarity between each frame picture and the preset facial picture.
As mentioned above, the preset facial picture here can be a face picture of the preset user already stored in the smartphone, preferably a clear and recent one. In a preferred embodiment of the present invention, the preset facial picture and the previously mentioned face pictures of the preset user can be updated at predetermined time intervals.
S12312: comparing the similarity with a second threshold, and counting the number of pictures whose similarity is greater than the second threshold.
The similarity mentioned here is the similarity between the face of the current user and the face of the preset user. Using multiple similarities, one per frame picture, to verify the identity of the current user improves the accuracy of face recognition. The second threshold can be set as required, for example to 80%-90%.
S12313: when the counted number of pictures is greater than a third threshold, determining that the current user is the preset user.
The third threshold is related to the number of pictures; since the quality of dynamic images shot by different cameras differs, the total number of pictures will also differ, so no specific numerical limit is placed on the third threshold here.
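Steps S12311-S12313 can be sketched as a per-frame vote. This is a minimal illustration, not the patent's implementation: features are represented as plain vectors, cosine similarity stands in for whatever metric a real system would use, and all names and threshold values are assumptions:

```python
import math

SECOND_THRESHOLD = 0.85  # per-frame similarity threshold (e.g. 80%-90%)
THIRD_THRESHOLD = 3      # minimum count of sufficiently similar frames

def cosine_similarity(a, b):
    """Stand-in similarity between two feature vectors (S12311)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def is_preset_user(frame_features, preset_features):
    """Vote over frames (S12312-S12313): accept the user when more than
    THIRD_THRESHOLD frames exceed the per-frame similarity threshold."""
    hits = sum(
        1 for f in frame_features
        if cosine_similarity(f, preset_features) > SECOND_THRESHOLD
    )
    return hits > THIRD_THRESHOLD

# Usage with toy 3-dimensional "features": five near-identical frames pass,
# five dissimilar frames fail.
preset = [1.0, 0.0, 1.0]
print(is_preset_user([[1.0, 0.01 * i, 1.0] for i in range(5)], preset))  # True
print(is_preset_user([[0.0, 1.0, 0.0]] * 5, preset))                     # False
```

Voting over many frames rather than trusting one is what makes the dynamic-image approach more robust than the single-photo comparison criticized in the background section.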
If the judgment shows that the current user is not the preset user, the device currently in use can exit the current program, lock the screen or shut down; otherwise, the method proceeds to step S1232.
S1232: when it is determined that the current user is the preset user, judging, according to the extracted preset features, whether the dynamic image contains the expression combination associated with the preset user.
Through S1231, the identity of the current user has been verified. In order to control the device the user is using, it is further necessary to judge whether the current user has made the preset expression combination. Preferably, as shown in Fig. 5, S1232 may include:
S12321: sequentially inputting, in the chronological order of the frame pictures, the preset features extracted for each frame picture into a preset expression classifier, so as to classify each frame picture.
Classifiers are commonly used in data mining; their input is typically a feature vector, and their output is usually also a numerical value, with each value denoting a different class. In the embodiments of the present invention, the classifier is preferably a neural network, in particular a deep convolutional neural network, obtained by training on multiple pictures of each preset expression in the multiple preset expressions. It can be understood that, if the dynamic image contains the expression combination, the output of the preset expression classifier will include the numerical representation of each preset expression, in chronological order.
Here, the training samples of the preset expression classifier can be multiple pictures of each preset expression. For example, in the above example where the preset expression combination is "blinking three times", multiple pictures of the expression "eyes open" can be stored in advance as the training set of the classifier.
In the example of "blinking three times", the preset expressions in the preset expression combination can be opening the eyes three times and closing the eyes three times. If the preset expression combination is opening the mouth twice, the preset expression can be the expression corresponding to the mouth opening to a specific degree.
S12322: judging whether the classification results of the frame pictures of the dynamic image correspond to the multiple preset expressions.
For a classifier, the classification results are numerical representations. Since the input is in chronological order, the output is also a sequence of numerical representations in chronological order. Judging whether the classification results correspond to the multiple preset expressions means judging whether these numerical representations include the values corresponding to the multiple preset expressions.
S12323: when it is determined that the classification results correspond to the multiple preset expressions, determining that the dynamic image contains the expression combination.
Considering the actual situation, if the time interval between two preset expressions is too long, it may also indicate that the user does not intend to control the device through expressions. In this case, S1232 can also include:
S12321': sequentially inputting, in the chronological order of the frame pictures, the preset features extracted for each frame picture into the preset expression classifier, so as to classify each frame picture;
S12322': judging whether the classification results of the frame pictures of the dynamic image correspond to the multiple preset expressions;
S12321' and S12322' are identical to S12321 and S12322 respectively, and are not repeated here.
S12323': when it is determined that the classification results correspond to the multiple preset expressions, judging, according to the classification results, whether the time interval between every two preset expressions in the multiple preset expressions is less than a fourth threshold.
In order to improve the response speed, and considering the size of the first threshold, the fourth threshold can be set to a small value, for example one second or even shorter.
S12324: when it is determined that the time interval is less than the fourth threshold, determining that the dynamic image contains the expression combination.
It can be understood that, if the judgment shows that the dynamic image does not contain the expression combination associated with the preset user, no operation needs to be performed on the device currently in use; otherwise, step S130 is executed.
S130: when it is determined that the dynamic image contains the expression combination, controlling the current device accordingly.
In the embodiments of the present invention, controlling the device currently in use can mean unlocking the screen, turning off the screen, or even controlling an application, for example pausing playback. Therefore, when implemented, the method 100 can be embedded in the operating system of a smart device, embedded in certain applications, or run as a standalone program; no specific limitation is placed here.
In the control method based on face recognition provided by the embodiments of the present invention, face recognition is performed on a dynamic image of the user's face to identify the user's identity and actions, and the device is then controlled accordingly. This can be called a "password expression" and, compared with existing face recognition based on a single picture, enhances security.
In addition, the "password expression" can also be used for alarms. Nowadays cameras are installed on many streets; when facing danger, a citizen can make a certain fixed expression toward a camera to assist in the detection of a case.
Fig. 6 shows a schematic structural diagram of a control apparatus 200 based on face recognition according to another embodiment of the present invention. As shown in Fig. 6, the control apparatus 200 includes:
an acquisition module 210, configured to acquire a dynamic image of the current user's face through an image acquisition device;
an identification module 220, configured to judge, by a face recognition algorithm, whether the dynamic image contains an expression combination associated with a preset user, wherein the expression combination includes multiple preset expressions of the preset user; and
a control module 230, configured to, when the identification module 220 determines that the dynamic image contains the expression combination, control the current device accordingly.
In particular, the control apparatus 200 can further include:
a time judgment module, configured to judge whether the duration of the dynamic image is less than a first threshold;
and the identification module is further configured to: when the time judgment module determines that the duration is less than the first threshold, judge by the face recognition algorithm whether the dynamic image contains the expression combination associated with the preset user.
Preferably, as shown in Fig. 7, the identification module 220 may include:
a picture extraction submodule 221, configured to extract every frame picture of the dynamic image;
a feature extraction submodule 222, configured to, for each frame picture, extract preset features of the face in the picture, wherein the preset features are obtained by training on multiple face pictures of the preset user, and the multiple face pictures include multiple pictures of each preset expression in the multiple preset expressions; and
a judgment submodule 223, configured to judge, according to the extracted preset features, whether the dynamic image contains the expression combination associated with the preset user.
In a preferred embodiment of the present invention, as shown in Fig. 8, the judgment submodule 223 may include:
a user judgment unit 2231, configured to judge, according to the extracted preset features, whether the current user is the preset user; and
an expression judgment unit 2232, configured to, when the user judgment unit determines that the current user is the preset user, judge, according to the extracted preset features, whether the dynamic image contains the expression combination associated with the preset user.
In another preferred embodiment of the present invention, as shown in Fig. 9, the user judgment unit 2231 may include:
a comparison subunit 22311, configured to compare the preset features extracted for each frame picture with the preset features extracted from the preset facial picture of the preset user, so as to obtain the similarity between each frame picture and the preset facial picture;
a counting subunit 22312, configured to compare the similarity with a second threshold and count the number of pictures whose similarity is greater than the second threshold; and
a user determination subunit 22313, configured to, when the number of pictures counted by the counting subunit 22312 is greater than a third threshold, determine that the current user is the preset user.
In another preferred embodiment of the present invention, as shown in Fig. 10, the expression judgment unit 2232 may include:
a classification subunit 22321, configured to sequentially input, in the chronological order of the frame pictures, the preset features extracted for each frame picture into a preset expression classifier, so as to classify each frame picture, wherein the preset expression classifier is obtained by training on multiple pictures of each preset expression in the multiple preset expressions;
an expression judgment subunit 22322, configured to judge whether the classification results of the frame pictures of the dynamic image correspond to the multiple preset expressions; and
an expression determination subunit 22323, configured to, when the expression judgment subunit 22322 determines that the classification results of the frame pictures of the dynamic image correspond to the multiple preset expressions, determine that the dynamic image contains the expression combination.
In yet another preferred embodiment of the present invention, the expression judging unit 2232 includes:
a classification subunit 22321', configured to input the preset features extracted for every frame picture into a preset expression classifier in sequence, according to the time order of the frame pictures, to classify every frame picture; wherein the preset expression classifier is obtained by training on a plurality of pictures of each preset expression in the plurality of preset expressions;
an expression judgment subunit 22322', configured to judge whether the classification results of the frame pictures of the dynamic image correspond to the plurality of preset expressions;
a time judgment subunit 22323', configured to, when the expression judgment subunit 22322' determines that the classification results correspond to the plurality of preset expressions, judge, according to the classification results, whether the time interval between each two preset expressions in the plurality of preset expressions is less than a fourth threshold; and
an expression determination subunit 22324', configured to, when the time judgment subunit 22323' determines that the time interval is less than the fourth threshold, determine that the dynamic image contains the expression combination.
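The primed variant adds a timing constraint: the preset expressions must not only all appear, but appear close together, which makes replaying a slowly stitched sequence harder. A minimal sketch of the interval check, assuming each detected expression carries the timestamp of the frame where it first appears (this timestamp bookkeeping, and reading "each two" as each two consecutive expressions, are assumptions for illustration):

```python
def expressions_within_interval(expression_times, fourth_threshold):
    """Time-interval check: given the timestamps (in seconds) at which the
    preset expressions were detected, in time order, verify that the gap
    between each two consecutive detections is less than the fourth
    threshold."""
    return all(t_next - t_prev < fourth_threshold
               for t_prev, t_next in zip(expression_times, expression_times[1:]))
```

With a fourth threshold of, say, 1 second, the user must perform the whole combination briskly; any pause longer than the threshold rejects the attempt.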
Figure 11 shows a structural schematic diagram of a server 300 according to another embodiment of the present invention. As shown in Figure 11, the server includes:
one or more processors 310;
a storage device 320, configured to store one or more programs; and
a communication interface 330, configured to enable the processor 310 and the storage device 320 to communicate with external devices;
wherein, when the one or more programs are executed by the one or more processors 310, the one or more processors 310 implement any of the aforementioned control methods based on face recognition.
According to another embodiment of the present invention, a computer-readable storage medium is provided, which stores a computer program; when the program is executed by a processor, any of the aforementioned control methods based on face recognition is implemented.
In the description of this specification, reference to the terms "one embodiment", "some embodiments", "an example", "a specific example", or "some examples" means that a specific feature, structure, material, or characteristic described in conjunction with that embodiment or example is included in at least one embodiment or example of the present invention. Moreover, the specific features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. In addition, provided they do not conflict with each other, those skilled in the art may combine the features of the different embodiments or examples described in this specification.
In addition, the terms "first" and "second" are used for descriptive purposes only and are not to be understood as indicating or implying relative importance, or as implicitly indicating the number of the technical features referred to. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In the description of the present invention, "plurality" means two or more, unless otherwise clearly and specifically limited.
Any process or method description in a flowchart, or otherwise described herein, may be understood as representing a module, segment, or portion of code comprising one or more executable instructions for implementing specific logical functions or steps of the process. The scope of the preferred embodiments of the present invention also includes other implementations in which functions may be executed out of the order shown or discussed, including in a substantially simultaneous manner or in the reverse order, depending on the functions involved, as should be understood by those skilled in the art to which the embodiments of the present invention belong.
The logic and/or steps represented in a flowchart, or otherwise described herein, may, for example, be considered an ordered list of executable instructions for implementing logical functions, and may be embodied in any computer-readable medium for use by, or in connection with, an instruction execution system, apparatus, or device (such as a computer-based system, a system including a processor, or another system that can fetch and execute instructions from an instruction execution system, apparatus, or device). For the purposes of this specification, a "computer-readable medium" may be any apparatus that can contain, store, communicate, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device. More specific examples (a non-exhaustive list) of computer-readable media include: an electrical connection with one or more wires (an electronic device), a portable computer diskette (a magnetic device), a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a fiber-optic device, and a portable compact disc read-only memory (CD-ROM). The computer-readable medium may even be paper or another suitable medium on which the program is printed, since the program can be obtained electronically, for example by optically scanning the paper or other medium and then editing, interpreting, or otherwise processing it in a suitable manner if necessary, and then stored in a computer memory.
It should be understood that each part of the present invention may be implemented in hardware, software, firmware, or a combination thereof. In the above embodiments, multiple steps or methods may be implemented in software or firmware stored in memory and executed by a suitable instruction execution system. For example, if implemented in hardware, as in another embodiment, they may be implemented using any of the following technologies known in the art, or a combination thereof: discrete logic circuits with logic gate circuits for implementing logic functions on data signals, application-specific integrated circuits with suitable combinational logic gate circuits, programmable gate arrays (PGA), field-programmable gate arrays (FPGA), and the like.
Those of ordinary skill in the art will appreciate that all or part of the steps carried out by the methods of the above embodiments may be completed by instructing the relevant hardware through a program; the program may be stored in a computer-readable storage medium, and when executed, the program performs one of the steps of the method embodiments or a combination thereof.
In addition, the functional units in each embodiment of the present invention may be integrated in one processing module, or each unit may physically exist alone, or two or more units may be integrated in one module. The above integrated module may be implemented in the form of hardware or in the form of a software function module. If the integrated module is implemented in the form of a software function module and is sold or used as an independent product, it may also be stored in a computer-readable storage medium. The storage medium may be a read-only memory, a magnetic disk, an optical disc, or the like.
The above description is merely specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can readily conceive of various changes or replacements within the technical scope disclosed by the present invention, and these should be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (16)
1. A control method based on face recognition, characterized in that it comprises:
acquiring a dynamic image of a current user's face through an image acquisition device;
judging, through a face recognition algorithm, whether the dynamic image contains an expression combination of a preset user; wherein the expression combination comprises a plurality of preset expressions of the preset user; and
when it is determined that the dynamic image contains the expression combination, controlling the current device accordingly.
2. The control method according to claim 1, characterized in that, before the judging through a face recognition algorithm whether the dynamic image contains the expression combination of the preset user, the method further comprises:
judging whether the duration of the dynamic image is less than a first threshold;
and the judging through a face recognition algorithm whether the dynamic image contains the expression combination of the preset user comprises:
when it is determined that the duration is less than the first threshold, judging, through the face recognition algorithm, whether the dynamic image contains the expression combination of the preset user.
3. The control method according to claim 1, characterized in that the judging through a face recognition algorithm whether the dynamic image contains the expression combination of the preset user comprises:
extracting every frame picture of the dynamic image;
for every frame picture, extracting preset features of the face in the picture; wherein the preset features are obtained by training on a plurality of face pictures of the preset user, and the plurality of face pictures comprise a plurality of pictures of each preset expression in the plurality of preset expressions; and
judging, according to the extracted preset features, whether the dynamic image contains the expression combination of the preset user.
4. The control method according to claim 3, characterized in that the judging, according to the extracted preset features, whether the dynamic image contains the expression combination of the preset user comprises:
judging, according to the extracted preset features, whether the current user is the preset user; and
when it is determined that the current user is the preset user, judging, according to the extracted preset features, whether the dynamic image contains the expression combination of the preset user.
5. The control method according to claim 4, characterized in that the judging, according to the extracted preset features, whether the current user is the preset user comprises:
comparing the preset features extracted for every frame picture with the preset features extracted from a preset face picture of the preset user, to obtain a similarity between every frame picture and the preset face picture;
comparing the similarity with a second threshold, and counting the number of pictures whose similarity exceeds the second threshold; and
when the counted number of pictures exceeds a third threshold, determining that the current user is the preset user.
6. The control method according to claim 4, characterized in that the judging, according to the extracted preset features, whether the dynamic image contains the expression combination of the preset user comprises:
inputting the preset features extracted for every frame picture into a preset expression classifier in sequence, according to the time order of the frame pictures, to classify every frame picture; wherein the preset expression classifier is obtained by training on a plurality of pictures of each preset expression in the plurality of preset expressions;
judging whether the classification results of the frame pictures of the dynamic image correspond to the plurality of preset expressions; and
when it is determined that the classification results correspond to the plurality of preset expressions, determining that the dynamic image contains the expression combination.
7. The control method according to claim 4, characterized in that the judging, according to the extracted preset features, whether the dynamic image contains the expression combination of the preset user comprises:
inputting the preset features extracted for every frame picture into a preset expression classifier in sequence, according to the time order of the frame pictures, to classify every frame picture; wherein the preset expression classifier is obtained by training on a plurality of pictures of each preset expression in the plurality of preset expressions;
judging whether the classification results of the frame pictures of the dynamic image correspond to the plurality of preset expressions;
when it is determined that the classification results correspond to the plurality of preset expressions, judging, according to the classification results, whether the time interval between each two preset expressions in the plurality of preset expressions is less than a fourth threshold; and
when it is determined that the time interval is less than the fourth threshold, determining that the dynamic image contains the expression combination.
8. A control device based on face recognition, characterized in that it comprises:
an acquisition module, configured to acquire a dynamic image of a current user's face through an image acquisition device;
a recognition module, configured to judge, through a face recognition algorithm, whether the dynamic image contains an expression combination of a preset user; wherein the expression combination comprises a plurality of preset expressions of the preset user; and
a control module, configured to, when the recognition module determines that the dynamic image contains the expression combination, control the current device accordingly.
9. The control device according to claim 8, characterized in that it further comprises:
a time judgment module, configured to judge whether the duration of the dynamic image is less than a first threshold;
and the recognition module is further configured to: when the time judgment module determines that the duration is less than the first threshold, judge, through the face recognition algorithm, whether the dynamic image contains the expression combination of the preset user.
10. The control device according to claim 8, characterized in that the recognition module comprises:
a picture extraction submodule, configured to extract every frame picture of the dynamic image;
a feature extraction submodule, configured to extract, for every frame picture, preset features of the face in the picture; wherein the preset features are obtained by training on a plurality of face pictures of the preset user, and the plurality of face pictures comprise a plurality of pictures of each preset expression in the plurality of preset expressions; and
a judgment submodule, configured to judge, according to the extracted preset features, whether the dynamic image contains the expression combination of the preset user.
11. The control device according to claim 10, characterized in that the judgment submodule comprises:
a user judgment unit, configured to judge, according to the extracted preset features, whether the current user is the preset user; and
an expression judgment unit, configured to, when the user judgment unit determines that the current user is the preset user, judge, according to the extracted preset features, whether the dynamic image contains the expression combination of the preset user.
12. The control device according to claim 11, characterized in that the user judgment unit comprises:
a comparison subunit, configured to compare the preset features extracted for every frame picture with the preset features extracted from a preset face picture of the preset user, to obtain a similarity between every frame picture and the preset face picture;
a counting subunit, configured to compare the similarity with a second threshold, and count the number of pictures whose similarity exceeds the second threshold; and
a user judgment subunit, configured to, when the number of pictures counted by the counting subunit exceeds a third threshold, determine that the current user is the preset user.
13. The control device according to claim 11, characterized in that the expression judgment unit comprises:
a classification subunit, configured to input the preset features extracted for every frame picture into a preset expression classifier in sequence, according to the time order of the frame pictures, to classify every frame picture; wherein the preset expression classifier is obtained by training on a plurality of pictures of each preset expression in the plurality of preset expressions;
an expression judgment subunit, configured to judge whether the classification results of the frame pictures of the dynamic image correspond to the plurality of preset expressions; and
an expression determination subunit, configured to, when the expression judgment subunit determines that the classification results of the frame pictures of the dynamic image correspond to the plurality of preset expressions, determine that the dynamic image contains the expression combination.
14. The control device according to claim 11, characterized in that the expression judgment unit comprises:
a classification subunit, configured to input the preset features extracted for every frame picture into a preset expression classifier in sequence, according to the time order of the frame pictures, to classify every frame picture; wherein the preset expression classifier is obtained by training on a plurality of pictures of each preset expression in the plurality of preset expressions;
an expression judgment subunit, configured to judge whether the classification results of the frame pictures of the dynamic image correspond to the plurality of preset expressions;
a time judgment subunit, configured to, when the expression judgment subunit determines that the classification results correspond to the plurality of preset expressions, judge, according to the classification results, whether the time interval between each two preset expressions in the plurality of preset expressions is less than a fourth threshold; and
an expression determination subunit, configured to, when the time judgment subunit determines that the time interval is less than the fourth threshold, determine that the dynamic image contains the expression combination.
15. A server, characterized in that the server comprises:
one or more processors;
a storage device, configured to store one or more programs; and
a communication interface, configured to enable the processors and the storage device to communicate with external devices;
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the control method according to any one of claims 1-7.
16. A computer-readable storage medium storing a computer program, characterized in that, when the program is executed by a processor, the control method according to any one of claims 1-7 is implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810291578.5A CN108804893A (en) | 2018-03-30 | 2018-03-30 | A kind of control method, device and server based on recognition of face |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108804893A true CN108804893A (en) | 2018-11-13 |
Family
ID=64095502
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810291578.5A Pending CN108804893A (en) | 2018-03-30 | 2018-03-30 | A kind of control method, device and server based on recognition of face |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108804893A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110290267A (en) * | 2019-06-25 | 2019-09-27 | 广东以诺通讯有限公司 | A kind of mobile phone control method and system based on human face expression |
WO2020140686A1 (en) * | 2019-01-03 | 2020-07-09 | 阿里巴巴集团控股有限公司 | Method, device and apparatus for waking up intelligent apparatus based on face detection |
CN112089595A (en) * | 2020-05-22 | 2020-12-18 | 未来穿戴技术有限公司 | Login method of neck massager, neck massager and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103824059A (en) * | 2014-02-28 | 2014-05-28 | 东南大学 | Facial expression recognition method based on video image sequence |
CN104850234A (en) * | 2015-05-28 | 2015-08-19 | 成都通甲优博科技有限责任公司 | Unmanned plane control method and unmanned plane control system based on facial expression recognition |
CN107346387A (en) * | 2017-06-23 | 2017-11-14 | 深圳传音通讯有限公司 | Unlocking method and device |
CN107526994A (en) * | 2016-06-21 | 2017-12-29 | 中兴通讯股份有限公司 | A kind of information processing method, device and mobile terminal |
CN107679493A (en) * | 2017-09-30 | 2018-02-09 | 百度在线网络技术(北京)有限公司 | Face identification method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10832069B2 (en) | Living body detection method, electronic device and computer readable medium | |
Faundez-Zanuy | Data fusion in biometrics | |
US6661907B2 (en) | Face detection in digital images | |
US10275672B2 (en) | Method and apparatus for authenticating liveness face, and computer program product thereof | |
US20200005061A1 (en) | Living body detection method and system, computer-readable storage medium | |
US20200380279A1 (en) | Method and apparatus for liveness detection, electronic device, and storage medium | |
EP2336949B1 (en) | Apparatus and method for registering plurality of facial images for face recognition | |
CN110069970A (en) | Activity test method and equipment | |
CN101689303A (en) | Facial expression recognition apparatus and method, and image capturing apparatus | |
CN108647625A (en) | A kind of expression recognition method and device | |
CN111240482B (en) | Special effect display method and device | |
CN108804893A (en) | A kind of control method, device and server based on recognition of face | |
CN106682473A (en) | Method and device for identifying identity information of users | |
DE112019000040T5 (en) | DETECTING DETERMINATION MEASURES | |
Hebbale et al. | Real time COVID-19 facemask detection using deep learning | |
CN106778627A (en) | Detect method, device and the mobile terminal of face face value | |
CN114424258A (en) | Attribute identification method and device, storage medium and electronic equipment | |
Putro et al. | Adult image classifiers based on face detection using Viola-Jones method | |
CN110363111A (en) | Human face in-vivo detection method, device and storage medium based on lens distortions principle | |
CN110633677A (en) | Face recognition method and device | |
US11620728B2 (en) | Information processing device, information processing system, information processing method, and program | |
Dhruva et al. | Novel algorithm for image processing based hand gesture recognition and its application in security | |
CN108710820A (en) | Infantile state recognition methods, device and server based on recognition of face | |
CN108024148A (en) | The multimedia file recognition methods of Behavior-based control feature, processing method and processing device | |
Gaikwad et al. | Face recognition using golden ratio for door access control system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20181113 |