CN105825112A - Mobile terminal unlocking method and device - Google Patents


Info

Publication number
CN105825112A
CN105825112A (application CN201610159066.4A)
Authority
CN
China
Prior art keywords
expression
action
face
characteristic point
coordinate value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201610159066.4A
Other languages
Chinese (zh)
Inventor
陈杰
Current Assignee
Beijing Qihoo Technology Co Ltd
Qizhi Software Beijing Co Ltd
Original Assignee
Beijing Qihoo Technology Co Ltd
Qizhi Software Beijing Co Ltd
Priority date
Application filed by Beijing Qihoo Technology Co Ltd and Qizhi Software Beijing Co Ltd
Priority to CN201610159066.4A
Publication of CN105825112A
Legal status: Pending

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F21/00Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
    • G06F21/30Authentication, i.e. establishing the identity or authorisation of security principals
    • G06F21/31User authentication
    • G06F21/32User authentication using biometric data, e.g. fingerprints, iris scans or voiceprints

Abstract

The invention provides a mobile terminal unlocking method and device. The method comprises the following steps: when an unlocking request is received, invoking an image collector of the mobile terminal; using the image collector to collect multiple expression images of a human face within its visual range; extracting, from the collected expression images, the expression images that contain a continuous expression action; parsing the continuous expression action contained in the extracted expression images; matching the parsed continuous expression action against a preset continuous facial expression action used to judge whether to unlock; and determining whether to respond to the unlocking request according to the matching result. In the unlocking mode provided by the invention, unlocking is performed by a sequence of continuous facial expression actions, which makes it a dynamic unlocking mode. Compared with the static unlocking modes of the prior art, it can improve unlocking security.

Description

Mobile terminal unlocking method and device
Technical field
The present invention relates to the field of information security, and in particular to a mobile terminal unlocking method and device.
Background art
With the rapid development of science and technology, various mobile devices (for example, smartphones and notebook computers) have emerged, and their ever more powerful functions bring great convenience to our study and work. The more functions a mobile device provides, the more information it holds that needs protection, especially content related to privacy or money; such content deserves particular security measures so that users can use their mobile devices safely and with confidence.
In practice, a user can protect his or her information by locking the screen of the mobile terminal, and various unlocking modes can be set, such as sliding, entering a numeric password, or entering a fingerprint. Although these modes do unlock the device, each has problems of its own. Slide-to-unlock is quick and convenient, but its security is low: anyone can unlock the screen simply by sliding. Entering a numeric password is a traditional mode that requires the user to memorize tedious digits; the password is easily forgotten, which hinders unlocking. Fingerprint unlocking is a comparatively novel mode that performs the unlock through static fingerprint recognition, but when the user is asleep the fingerprint can easily be exploited by others, so its security and reliability are also limited. It is therefore necessary to provide an unlocking mode that is safe, reliable, and easy to remember.
Summary of the invention
In view of the above problems, the present invention is proposed to provide a mobile terminal unlocking method and device that overcome, or at least partly solve, the above problems.
According to one aspect of the present invention, there is provided a mobile terminal unlocking method, comprising:
when an unlocking request is received, invoking the image collector of the mobile terminal, and using the image collector to collect multiple expression images of a human face within its visual range;
extracting, from the collected expression images of the human face within the visual range, the expression images that contain a continuous expression action;
parsing, from the extracted expression images containing the continuous expression action, the continuous expression action contained therein;
matching the parsed continuous expression action against a preset continuous facial expression action used to judge whether to unlock; and
determining whether to respond to the unlocking request according to the matching result.
Optionally, parsing the continuous expression action contained in the extracted expression images comprises:
locating the human face in the expression images containing the continuous expression action; and
selecting at least one feature point according to facial features, and parsing the continuous expression action according to the change of position of the at least one feature point across the expression images.
Optionally, selecting the at least one feature point and parsing the continuous expression action according to its change of position comprises:
establishing a three-dimensional coordinate system for the human face in the expression images containing the continuous expression action;
selecting at least one feature point according to facial features, and converting the at least one feature point into three-dimensional coordinate values in the coordinate system; and
parsing the continuous expression action in the expression images according to the change of the three-dimensional coordinate values of the at least one feature point in the coordinate system.
Optionally, parsing the continuous expression action according to the change of the three-dimensional coordinate values comprises:
monitoring the three-dimensional coordinate values of the feature point in the coordinate system in real time, and extracting the current three-dimensional coordinate values of the feature point when a change in them is detected.
Optionally, selecting at least one feature point according to facial features comprises: selecting at least one feature point according to the features of each facial organ.
Optionally, the facial expression action used to judge whether to unlock is preset by the following steps:
obtaining expression images containing a continuous expression action;
locating the human face in the expression images, and establishing a three-dimensional coordinate system for the located face;
selecting at least one feature point according to facial features in the coordinate system, and converting the at least one feature point into three-dimensional coordinate values; and
monitoring the three-dimensional coordinate values of the feature point in real time; when a change is detected, extracting the current three-dimensional coordinate values of the feature point and saving them in the chronological order in which the changes occur.
Optionally, matching the parsed continuous expression action against the preset continuous facial expression action used to judge whether to unlock comprises:
when a change in the three-dimensional coordinate values of the feature point is detected, matching the current coordinate values of the feature point, in sequence, against the preset coordinate values.
Optionally, determining whether to respond to the unlocking request according to the matching result comprises:
responding to the unlocking request if the extracted current coordinate values of the feature point successively match the preset coordinate values.
Optionally, the method further comprises: setting a time range against which the capture time of the multiple expression images of the human face within the visual range is compared.
Optionally, after the time range is set, the method further comprises:
if the capture time of the collected expression images does not exceed the set time range, continuing to extract the expression images containing the continuous expression action;
if the capture time of the collected expression images exceeds the set time range, prompting the user to re-enter the unlock expression.
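The claimed steps can be pictured as a short pipeline. The sketch below is a minimal illustration, with symbolic action labels standing in for real image analysis; the function names and data layout are assumptions of this sketch, not part of the claims.

```python
# Illustrative sketch of the claimed unlocking flow (all names are assumptions).
def parse_actions(frames):
    """Collapse per-frame action labels into the distinct continuous actions."""
    actions = []
    for label in frames:
        if not actions or actions[-1] != label:
            actions.append(label)
    return actions

def respond_to_unlock(frames, preset_actions):
    """Match the parsed continuous expression actions against the preset ones."""
    return parse_actions(frames) == list(preset_actions)

captured = ["raise_left_brow", "raise_left_brow", "smile", "smile", "wink_right"]
preset = ["raise_left_brow", "smile", "wink_right"]
print(respond_to_unlock(captured, preset))  # True
```

Note that the matching here is on the action sequence, not on identity: any face producing the preset sequence would unlock, which mirrors the "flexible unlocking" effect described below.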
According to another aspect of the present invention, there is also provided a mobile terminal unlocking device, comprising:
an acquisition module, adapted to invoke the image collector of the mobile terminal when an unlocking request is received, and to use the image collector to collect multiple expression images of a human face within its visual range;
an extraction module, adapted to extract, from the collected expression images of the human face within the visual range, the expression images that contain a continuous expression action;
a parsing module, adapted to parse the continuous expression action contained in the extracted expression images;
a matching module, adapted to match the parsed continuous expression action against a preset continuous facial expression action used to judge whether to unlock; and
a response module, adapted to determine whether to respond to the unlocking request according to the matching result.
Optionally, the parsing module is further adapted to:
locate the human face in the expression images containing the continuous expression action; and
select at least one feature point according to facial features and parse the continuous expression action according to the change of position of the at least one feature point across the expression images.
Optionally, the parsing module is further adapted to:
establish a three-dimensional coordinate system for the human face in the expression images containing the continuous expression action;
select at least one feature point according to facial features and convert it into three-dimensional coordinate values in the coordinate system; and
parse the continuous expression action according to the change of those three-dimensional coordinate values in the coordinate system.
Optionally, the parsing module is further adapted to:
monitor the three-dimensional coordinate values of the feature point in real time, and extract the current coordinate values of the feature point when a change in them is detected.
Optionally, the parsing module is further adapted to select at least one feature point according to the features of each facial organ.
Optionally, the device further comprises a preset module, adapted to preset the facial expression action used to judge whether to unlock by the following steps:
obtaining expression images containing a continuous expression action;
locating the human face in the expression images, and establishing a three-dimensional coordinate system for the located face;
selecting at least one feature point according to facial features in the coordinate system, and converting the at least one feature point into three-dimensional coordinate values; and
monitoring the three-dimensional coordinate values of the feature point in real time; when a change is detected, extracting the current coordinate values of the feature point and saving them in the chronological order in which the changes occur.
Optionally, the matching module is further adapted to:
match the current coordinate values of the feature point, in sequence, against the preset coordinate values whenever a change in the coordinate values is detected.
Optionally, the response module is further adapted to:
respond to the unlocking request if the extracted current coordinate values of the feature point successively match the preset coordinate values.
Optionally, the device further comprises:
a time setting module, adapted to set a time range against which the capture time of the collected expression images of the human face within the visual range is compared.
Optionally, the device further comprises:
a comparison module, adapted to continue extracting the expression images containing the continuous expression action if the capture time of the collected expression images does not exceed the preset time range, and to prompt the user to re-enter the unlock expression if it exceeds the preset time range.
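The module structure above can be sketched as a hedged wiring diagram in code; the class and method names below are assumptions for illustration only, and the parsing step is injected as a plain function rather than implemented.

```python
# A sketch of how the claimed modules might be wired together; all names
# are illustrative assumptions, not taken from the patent.
class UnlockDevice:
    def __init__(self, preset_actions, parse):
        self.preset_actions = list(preset_actions)  # output of the preset module
        self.parse = parse                          # parsing module (injected)

    def acquire(self, camera_frames):
        """Acquisition module: collect expression images from the image collector."""
        return list(camera_frames)

    def extract(self, frames):
        """Extraction module: keep only frames that carry an expression action."""
        return [f for f in frames if f is not None]

    def match(self, actions):
        """Matching module: compare parsed actions with the preset sequence."""
        return actions == self.preset_actions

    def respond(self, camera_frames):
        """Response module: decide whether to answer the unlocking request."""
        frames = self.extract(self.acquire(camera_frames))
        return self.match(self.parse(frames))

device = UnlockDevice(["smile", "wink"], parse=lambda frames: frames)
print(device.respond(["smile", None, "wink"]))  # True
```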
In the embodiments of the present invention, the image collector of the mobile terminal collects multiple expression images of the human face within its visual range, and the expression images containing a continuous expression action are extracted from the collected images. The continuous expression action contained in the extracted images is then parsed and matched against the preset expression action: if the match succeeds, the mobile terminal performs the unlock operation; if it fails, the mobile terminal does not. The unlocking mode of the embodiments thus performs unlocking with a sequence of continuous facial expression actions and is a dynamic unlocking mode which, compared with the static modes of the prior art, can improve unlocking security. Moreover, unlocking succeeds whenever the continuous expression action matches the preset one used to judge whether to unlock, without requiring the same face, which achieves the purpose of flexible unlocking.
Furthermore, the user of the mobile terminal can autonomously set the continuous expression action used to judge whether to unlock (for example, setting the continuous unlock expression as: raise the left eyebrow, smile, wink the right eye), which makes unlocking more entertaining than the static modes. In addition, since different users set different unlock expression actions, each user can set the time range against which the capture time of the collected expression images is compared according to his or her own unlock expression action, thereby improving the unlocking speed of the mobile terminal and saving the time spent on unlocking.
The above is only an overview of the technical solution of the present invention. In order that the technical means of the invention may be understood more clearly and practiced according to the contents of the description, and in order that the above and other objects, features, and advantages of the invention may become more apparent, specific embodiments of the invention are set forth below.
From the following detailed description of specific embodiments of the invention, taken in conjunction with the accompanying drawings, the above and other objects, advantages, and features of the present invention will become more apparent to those skilled in the art.
Accompanying drawing explanation
Various other advantages and benefits will become clear to those of ordinary skill in the art from reading the following detailed description of the preferred embodiments. The drawings serve only to illustrate the preferred embodiments and are not to be considered limiting of the invention. Throughout the drawings, identical parts are denoted by identical reference numerals. In the drawings:
Fig. 1 is a schematic flowchart of a mobile terminal unlocking method according to an embodiment of the invention;
Fig. 2 is a schematic flowchart of a mobile terminal unlocking method according to another embodiment of the invention;
Fig. 3 is a schematic structural diagram of a mobile terminal unlocking device according to an embodiment of the invention; and
Fig. 4 is a schematic structural diagram of a mobile terminal unlocking device according to another embodiment of the invention.
Detailed description of the invention
Exemplary embodiments of the disclosure are described more fully below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be embodied in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the disclosure will be understood thoroughly and its scope fully conveyed to those skilled in the art.
To solve the above technical problems, an embodiment of the present invention provides a mobile terminal unlocking method; the mobile terminal may be a terminal device such as a smartphone, a tablet computer, or a smartwatch. Fig. 1 is a schematic flowchart of a mobile terminal unlocking method according to an embodiment of the invention. Referring to Fig. 1, the method may include at least steps S102 to S110.
Step S102: when an unlocking request is received, invoke the image collector of the mobile terminal and use it to collect multiple expression images of the human face within its visual range.
Step S104: from the collected expression images of the human face within the visual range, extract the expression images containing a continuous expression action.
Step S106: parse the continuous expression action contained in the extracted expression images.
Step S108: match the parsed continuous expression action against the preset continuous facial expression action used to judge whether to unlock.
Step S110: determine whether to respond to the unlocking request according to the matching result.
In the embodiments of the present invention, the image collector of the mobile terminal collects multiple expression images of the human face within its visual range, and the expression images containing a continuous expression action are extracted from the collected images. The continuous expression action contained in the extracted images is then parsed and matched against the preset expression action: if the match succeeds, the mobile terminal performs the unlock operation; if it fails, the mobile terminal does not. The unlocking mode of the embodiments thus performs unlocking with a sequence of continuous facial expression actions and is a dynamic unlocking mode which, compared with the static modes of the prior art, can improve unlocking security. Moreover, unlocking succeeds whenever the continuous expression action matches the preset one used to judge whether to unlock, without requiring the same face, which achieves the purpose of flexible unlocking.
In step S102 above, when the unlocking request is received, the image collector of the mobile terminal is invoked; at the same time, a password input box may pop up on the display screen of the mobile terminal, and the facial expression images collected by the image collector can be displayed in this box in real time. Because the collected facial expression images can be seen directly on the screen, it can be ensured that the image collector captures a complete facial image, which helps the mobile terminal perform the unlock operation smoothly.
In an embodiment of the present invention, after the expression images containing a continuous expression action are extracted from the collected expression images of the human face within the visual range, the human face may first be located in those images. Then at least one feature point is selected according to facial features, and the continuous expression action is parsed according to the change of position of the at least one feature point across the images. Locating the face in the expression images may use face detection techniques of the prior art to pick the face out of the images.
Following the above embodiment, a three-dimensional coordinate system can be established for the located face. The origin of this coordinate system is not limited; any position of the face may be chosen as the origin. At least one feature point is selected according to facial features and, given the established coordinate system, each feature point is represented as three-dimensional coordinate values. Since the origin can be chosen arbitrarily, the coordinate values of a feature point may be positive or negative. The continuous expression action is then parsed from the changes in the feature points' coordinate values: the feature points are monitored in the coordinate system in real time, and if the coordinate values of a feature point change, the facial region at that feature point has moved, so the expression action can be captured by extracting the current coordinate values. For example, if a feature point has coordinate values (0, 1, 1) and they become (1, 2, 2), the coordinate values (1, 2, 2) of that feature point are extracted.
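The monitoring described above can be sketched as a simple change detector over successive coordinate samples, following the (0, 1, 1) → (1, 2, 2) example; the list-of-tuples data layout is an assumption of this sketch.

```python
# Sketch of feature-point monitoring in an arbitrary-origin 3D coordinate
# system; a change in a point's coordinate values means the facial region
# at that point has moved, so the new values are extracted.
def extract_changes(samples):
    """Return each coordinate value that differs from the previous sample."""
    changes = []
    prev = samples[0]
    for curr in samples[1:]:
        if curr != prev:
            changes.append(curr)
        prev = curr
    return changes

samples = [(0, 1, 1), (0, 1, 1), (1, 2, 2)]
print(extract_changes(samples))  # [(1, 2, 2)]
```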
In an alternative embodiment of the present invention, the at least one feature point may be selected per facial organ, for example according to the eyebrows, eyes, nose, and mouth. Moreover, the number of feature points for each organ may be chosen according to the number of its possible actions or the amplitude of its movement. For example, the eye performs fewer actions, so fewer feature points may be chosen around the eye, while the mouth performs more actions, so more feature points may be chosen around the mouth.
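A per-organ allocation like the one described could be expressed as a small configuration table; the specific counts below are assumptions of this sketch, not values taken from the patent.

```python
# Hypothetical feature-point budget per facial organ, scaled by how many
# distinct actions each organ performs (the counts are assumptions).
FEATURE_POINT_BUDGET = {
    "eyebrow": 2,
    "eye": 2,    # fewer points: eye actions are limited
    "nose": 1,
    "mouth": 6,  # more points: the mouth has the widest range of motion
}

def allocate_points(budget):
    """Expand the budget into named feature-point identifiers."""
    return [f"{organ}_{i}" for organ, n in budget.items() for i in range(n)]

print(len(allocate_points(FEATURE_POINT_BUDGET)))  # 11
```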
For the matching in step S108 above between the parsed continuous expression action and the preset continuous facial expression action used to judge whether to unlock, an embodiment of the present invention provides an optional scheme for presetting the facial expression action. In this scheme, the expression images containing a continuous expression action are first obtained. Next, the human face is located in those images and a three-dimensional coordinate system is established for the located face. Then at least one feature point is selected according to facial features in the coordinate system and converted into three-dimensional coordinate values. Finally, the coordinate values of the feature point are monitored in the coordinate system in real time; when a change is detected, the current coordinate values of the feature point are extracted and saved in the chronological order in which the changes occur.
The change of a feature point's coordinate values in the coordinate system can be monitored in real time in the above embodiments; when a change is detected, the current coordinate values of the feature point are matched, in sequence, against the preset coordinate values. If, after matching, the current coordinate values of the feature point are consistent with the preset ones, the unlocking request is responded to.
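The sequential match described above can be sketched as follows. The tolerance parameter is an assumption added for realism, since captured coordinates rarely repeat exactly; the patent itself only speaks of the values being consistent.

```python
# Sketch of the in-order match between observed coordinate changes and the
# preset (saved) ones; `tol` is an assumption of this sketch.
def match_coordinate_sequence(observed, preset, tol=0.0):
    """Match observed coordinate changes against the preset ones, in order."""
    if len(observed) != len(preset):
        return False
    for obs, pre in zip(observed, preset):
        if any(abs(o - p) > tol for o, p in zip(obs, pre)):
            return False
    return True

observed = [(1, 2, 2), (0, 3, 1)]
preset = [(1, 2, 2), (0, 3, 1)]
print(match_coordinate_sequence(observed, preset))  # True
```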
The unlocking method of the present invention can be applied in multiple scenarios, for example: scenarios related to time, such as those based on a clock or an alarm clock; scenarios related to the date, such as those based on a calendar; scenarios related to calculation, such as those based on a calculator; and scenarios related to games, such as those based on the Snake game.
A calendar-based application scenario of the unlock operation of the present invention is now introduced, taking the application "Alipay" as an example.
After the user installs the "Alipay" application on the mobile terminal, when the mobile terminal detects the user starting "Alipay" for the first time, it first judges whether the application has been configured with an unlocking mode associated with a predetermined application scenario. If the mobile terminal determines that no such unlocking mode has been configured, it shows the unlocking-mode settings interface on its display screen; there the user can configure the unlock application scenario, for example by tapping "calendar unlock" in the settings interface and selecting the related auxiliary functions to be shown on the terminal. Suppose, for example, that the user sets the unlock interface of "Alipay" to be the application interface of a calendar. After the user completes the relevant settings, the application is put into the locked state, and the unlock interface of "Alipay" shows the calendar interface. At the same time, the "Alipay" icon on the terminal's desktop is changed from the official icon provided by the installation package to a calendar-related icon. Assume that the predetermined unlock operation matched with the calendar scenario is a verification operation in which the user selects the date October 10, 2010 in the calendar interface. When the mobile terminal receives an operation in which the user selects October 10, 2015, it judges that the input unlock operation does not match the verification operation, and "Alipay" keeps showing the calendar interface on the display screen; the user can still perform date-related operations through the calendar's auxiliary functions, such as checking dates or adding entries. When the mobile terminal receives an operation in which the user selects October 10, 2010, it judges that the input unlock operation matches the verification operation; a password input box then pops up on the display screen, and the image collector of the mobile terminal is invoked. If the continuous expression action entered by the user through the image collector is consistent with the preset continuous facial expression action used to judge whether to unlock, the application "Alipay", which has been in the locked state, is unlocked. At the same time, the official "Alipay" interface is shown on the display screen, and the "Alipay" icon is changed back to the official display icon.
Applying multiple locks to an application in this way, so that the user must perform multiple unlocking operations as described above during the unlocking process, further improves the security of applications on the mobile terminal and effectively protects personal privacy and property.
An embodiment of the present invention further provides another unlocking method for a mobile terminal. Fig. 2 is a schematic flowchart of an unlocking method for a mobile terminal according to another embodiment of the present invention. Referring to Fig. 2, the method may include at least step S202 to step S216.
Step S202, setting a time range to be compared with the time taken to collect the multiple expression pictures of the face within the visual range.
In this step, the time range may be set according to the actual situation; for example, the time range may be set to 20 seconds or 30 seconds.
Step S204, when an unlocking request is received, invoking the image acquisition device of the mobile terminal, and using the image acquisition device to collect multiple expression pictures of a face within the visual range.
Step S206, judging whether the time taken to collect the multiple expression pictures of the face within the visual range exceeds the set time range; if it does not exceed the set time range, proceeding to step S208; if it exceeds the set time range, proceeding to step S210.
Step S208, extracting, from the collected multiple expression pictures of the face within the visual range, the expression pictures that comprise a continuous expression action.
Step S210, prompting the user to re-enter the unlock expression.
In this step, a line of caption text may be displayed on the screen of the mobile terminal to prompt the user to re-enter the unlock expression. For example, the caption may read: "Timed out, please re-enter the unlock expression".
Step S212, parsing out, according to the extracted expression pictures that comprise a continuous expression action, the continuous expression action comprised therein.
Step S214, matching the parsed continuous expression action against the preset continuous facial expression action used for judging whether to unlock.
Take as an example the case where the preset continuous facial expression action used for judging whether to unlock is the sequence: raise the left eyebrow, smile, wink the right eye; and the parsed continuous facial expression action is the sequence: raise the left eyebrow, stick out the tongue, wink the right eye. With reference to the above embodiments of the present invention, the process of matching the parsed continuous expression action against the preset continuous expression action is as follows. First, the three-dimensional coordinate values of the feature points that changed at the left eyebrow are matched, in the order in which the changes occurred, against the preset three-dimensional coordinate values in their stored order; this match is consistent. Then, the three-dimensional coordinate values of the feature points that changed at the mouth are matched, in the order in which the changes occurred, against the preset three-dimensional coordinate values in their stored order. For example, for the smile, the three-dimensional coordinate values of the changed feature points are (1,1,2), (2,2,2), (2,3,3), whereas for the preset expression action of sticking out the tongue, the three-dimensional coordinate values of the changed feature points at the mouth are (1,1,0), (3,1,0), (4,2,2). The matching finds that the parsed coordinate value (1,1,2) is inconsistent with the preset stored coordinate value (1,1,0), so there is no need to continue matching the coordinate values of the feature points that changed afterwards.
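The region-by-region comparison with early termination described above can be sketched as follows. This is an illustrative sketch only: the mouth coordinate values are taken from the example above, while the eyebrow values, region names, and function names are invented for illustration and are not part of the embodiment.

```python
# Illustrative sketch of the sequential coordinate matching described above.
# Each expression action is represented, per facial region, as the ordered
# list of 3D coordinate values through which its changed feature points passed.

def match_region(parsed, preset):
    """Match one region's coordinate sequence against the preset sequence.

    The parsed values are compared in the order the changes occurred against
    the preset values in their stored order; the first inconsistent value
    fails the match, and later values need not be compared.
    """
    if len(parsed) != len(preset):
        return False
    for got, expected in zip(parsed, preset):
        if got != expected:
            return False  # early termination at the first mismatch
    return True

def match_expression(parsed_regions, preset_regions):
    """Match region by region (e.g. left eyebrow first, then mouth)."""
    for region, preset_seq in preset_regions.items():
        if not match_region(parsed_regions.get(region, []), preset_seq):
            return False
    return True

# Values from the example: the eyebrow matches, the mouth does not.
preset = {
    "left_eyebrow": [(0, 1, 1), (0, 2, 1)],       # raise left eyebrow (invented)
    "mouth": [(1, 1, 0), (3, 1, 0), (4, 2, 2)],   # stick out tongue (from text)
}
parsed = {
    "left_eyebrow": [(0, 1, 1), (0, 2, 1)],       # raise left eyebrow (invented)
    "mouth": [(1, 1, 2), (2, 2, 2), (2, 3, 3)],   # smile (from text)
}
print(match_expression(parsed, preset))  # False: (1,1,2) != (1,1,0)
```

As in the embodiment, the mismatch at the first mouth coordinate value (1,1,2) versus (1,1,0) ends the comparison; the remaining mouth values are never examined.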
Step S216, determining whether to respond to the unlocking request according to the matching result.
In the above steps S202 to S210, suppose the set time range is 20 seconds. When the mobile terminal receives an unlocking request, that moment is taken as the start time. Within the 20 seconds after the mobile terminal receives the unlocking request, if the image acquisition device collects multiple expression pictures of a face, the method proceeds to extract, from the multiple expression pictures, the expression pictures that comprise a continuous expression action; if the image acquisition device does not collect multiple expression pictures of a face, the user is prompted to re-enter the unlock expression. Whether the image acquisition device has collected multiple expression pictures of a face can be judged using face detection technology, by detecting whether a complete face is present in the pictures collected by the image acquisition device.
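One way to implement the 20-second collection window described above is sketched below. The camera and face detection are abstracted behind hypothetical callables (`camera`, `detect_face`), since the embodiment only requires that some face detection technology be used; the function name and the `min_pictures` parameter are assumptions for illustration.

```python
import time

def collect_expression_pictures(camera, detect_face, time_range=20.0, min_pictures=2):
    """Collect face pictures until enough complete faces are seen or the set
    time range elapses, starting from the moment the unlock request arrives.

    `camera` is any callable returning the current frame; `detect_face`
    returns True when a complete face is present in the frame.
    Returns the collected pictures, or None to signal a re-entry prompt.
    """
    start = time.monotonic()          # the unlock request marks the start time
    pictures = []
    while time.monotonic() - start < time_range:
        frame = camera()
        if detect_face(frame):
            pictures.append(frame)
        if len(pictures) >= min_pictures:
            return pictures           # proceed to extraction (step S208)
    return None                       # timed out: prompt re-entry (step S210)

# A toy run: the "camera" yields a new frame on every call, each containing a face.
frames = iter(range(100))
result = collect_expression_pictures(
    camera=lambda: next(frames),
    detect_face=lambda f: True,
    time_range=20.0,
    min_pictures=3,
)
print(result)  # [0, 1, 2]
```

Using a monotonic clock rather than wall-clock time keeps the window correct even if the system clock is adjusted during collection.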
Based on the above embodiment, an alternative embodiment is further provided. In the above steps S202 to S210, suppose the set time range is 30 seconds, and within this 30-second range an interval of 2 seconds is used as a timing node for judging whether the user needs to be prompted to re-enter the unlock expression. When the mobile terminal receives an unlocking request, that moment is taken as the start time. If the image acquisition device collects facial expression pictures during the first 10 seconds but does not collect a complete face after the 10th second, then at the 12th second the mobile terminal prompts the user to re-enter the unlock expression; at this point the image acquisition device re-collects facial expression pictures, and the previously collected facial expression pictures are discarded. If the image acquisition device then collects multiple expression pictures of a face within the remaining 14 seconds, the method can proceed to extract, from the multiple expression pictures, the expression pictures that comprise a continuous expression action.
Embodiments of the present invention may be applied to unlocking a locked application. When the application is not in use, it is in the locked state; if the user wants to use the locked application and sends an unlocking request, the mobile terminal can respond to the unlocking request for the application through the above steps S202 to S216. In a specific application scenario, when the continuous expression action parsed by the mobile terminal fails to match the preset continuous facial expression action used for judging whether to unlock, the mobile terminal, while not responding to the unlocking request, starts a multimedia player to play multimedia information corresponding to the application; or, when the continuous expression action parsed by the mobile terminal successfully matches the preset continuous facial expression action used for judging whether to unlock, the mobile terminal, while responding to the unlocking request, starts a multimedia player to play multimedia information corresponding to the application. The multimedia information may include picture information, audio information, or video information. The multimedia information may be multimedia information pre-stored in the mobile terminal, or multimedia information obtained by the mobile terminal in real time over a network connection, such as a television program played in real time on a predetermined television channel, or a broadcast program played in real time at a predetermined broadcast frequency.
In embodiments of the present invention, the multimedia information corresponding to the application may be set in, but is not limited to, either of the following ways:
Setting different multimedia information based on different time periods. For example, the multimedia information played during the period 8:00 to 14:00 of each day may be set to the real-time live broadcast of the China Central Television news channel, and the multimedia information played during the rest of each day may be set to the real-time live broadcast of the China Central Television sports channel.
Setting different multimedia information based on different geographical locations. For example, if the geographical location of the user is Beijing, the multimedia information to be played may be set to the real-time live broadcast of the Beijing-area news channel; if the geographical location of the user is Tianjin, the multimedia information to be played may be set to the real-time live broadcast of the Tianjin-area news channel.
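The two configuration ways above can be sketched as simple lookups, one keyed by time period and one by geographical location. The program names mirror the examples in the text, but the table layout, function names, and the fallback choice are assumptions for illustration.

```python
from datetime import time

# Illustrative tables mirroring the examples in the text.
TIME_TABLE = [
    (time(8, 0), time(14, 0), "CCTV news channel live broadcast"),
]
TIME_DEFAULT = "CCTV sports channel live broadcast"  # rest of the day

LOCATION_TABLE = {
    "Beijing": "Beijing-area news channel live broadcast",
    "Tianjin": "Tianjin-area news channel live broadcast",
}

def multimedia_by_time(now):
    """Pick multimedia information according to the current time of day."""
    for start, end, program in TIME_TABLE:
        if start <= now < end:
            return program
    return TIME_DEFAULT

def multimedia_by_location(location):
    """Pick multimedia information according to the user's location."""
    return LOCATION_TABLE.get(location, TIME_DEFAULT)

print(multimedia_by_time(time(9, 30)))     # CCTV news channel live broadcast
print(multimedia_by_location("Tianjin"))   # Tianjin-area news channel live broadcast
```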
In one embodiment of the present invention, the user makes the following settings on the mobile terminal. During the period 7:00 to 11:00 of each day, if during unlocking the continuous expression action parsed by the mobile terminal successfully matches the preset continuous facial expression action used for judging whether to unlock, the multimedia information played is video information related to the weather forecast. During the period 11:00 to 18:00 of each day, if during unlocking the continuous expression action parsed by the mobile terminal successfully matches the preset continuous facial expression action used for judging whether to unlock, the multimedia information played is picture information related to sports. During the period from 18:00 of each day to 6:00 of the next day, if during unlocking the continuous expression action parsed by the mobile terminal successfully matches the preset continuous facial expression action used for judging whether to unlock, the multimedia information played is video information related to real-time news.
In another embodiment of the present invention, with reference to the above embodiment, when the mobile terminal receives the user's unlocking request for an application: if the current time is 8:00, the mobile terminal is currently connected to the network, and the continuous expression action parsed by the mobile terminal successfully matches the preset continuous facial expression action used for judging whether to unlock, then the multimedia playback device of the mobile terminal is started to play weather-forecast video information from a specified internet page address; if the current time is 15:00, the mobile terminal is not connected to the network, and the continuous expression action parsed by the mobile terminal successfully matches the preset continuous facial expression action used for judging whether to unlock, then the multimedia playback device of the mobile terminal is started to play sports-related picture information from a specified storage location of the mobile terminal.
In yet another embodiment of the present invention, in the case where the mobile terminal has positioning services enabled and is connected to the network, the user may set the current geographical location. When the current geographical location is set to Beijing, the mobile terminal receives the user's unlocking request for an application, and the continuous expression action parsed by the mobile terminal fails to match the preset continuous facial expression action used for judging whether to unlock, the multimedia player of the mobile terminal is started and the multimedia information played is the real-time live broadcast of the Beijing-area news channel. When the current geographical location is Tianjin, the mobile terminal receives the user's unlocking request for an application, and the parsed continuous expression action fails to match the preset continuous facial expression action used for judging whether to unlock, the multimedia player of the mobile terminal is started and the multimedia information played is the real-time live broadcast of the Tianjin-area news channel. Alternatively, when the mobile terminal receives the user's unlocking request for an application and the parsed continuous expression action fails to match the preset continuous facial expression action used for judging whether to unlock, the mobile terminal obtains through its positioning service that the user's current geographical location is Beijing, and the multimedia playback device of the mobile terminal is started to play the program currently being broadcast live by the Beijing news channel at a specified internet page address.
In embodiments of the present invention, different users set different unlock expression actions. Therefore, a user of the mobile terminal can set, according to the unlock expression action he or she has set, the size of the time range to be compared with the time taken to collect the multiple expression pictures of the face within the visual range, thereby improving the unlocking speed of the mobile terminal and saving the user time spent on unlocking.
Based on the same inventive concept, an embodiment of the present invention further provides an unlocking apparatus for a mobile terminal. Fig. 3 is a schematic structural diagram of an unlocking apparatus for a mobile terminal according to an embodiment of the present invention. Referring to Fig. 3, the unlocking apparatus 300 of the mobile terminal may include at least: a collection module 310, an extraction module 320, a parsing module 330, a matching module 340, and a response module 350.
The connections between the components of the unlocking apparatus 300 of the mobile terminal according to the embodiment of the present invention, and the functions of each component, are now introduced:
The collection module 310 is adapted to, when an unlocking request is received, invoke the image acquisition device of the mobile terminal, and use the image acquisition device to collect multiple expression pictures of a face within the visual range;
The extraction module 320, coupled to the collection module 310, is adapted to extract, from the collected multiple expression pictures of the face within the visual range, the expression pictures that comprise a continuous expression action;
The parsing module 330, coupled to the extraction module 320, is adapted to parse out, according to the extracted expression pictures that comprise a continuous expression action, the continuous expression action comprised therein;
The matching module 340, coupled to the parsing module 330, is adapted to match the parsed continuous expression action against the preset continuous facial expression action used for judging whether to unlock;
The response module 350, coupled to the matching module 340, is adapted to determine whether to respond to the unlocking request according to the matching result.
In one embodiment of the present invention, the parsing module 330 is further adapted to locate the face in the expression pictures that comprise a continuous expression action, choose at least one feature point according to facial features, and parse out the continuous expression action comprised therein according to the change of position of the at least one feature point in the expression pictures that comprise a continuous expression action.
In one embodiment of the present invention, the parsing module 330 is further adapted to set up a three-dimensional coordinate system for the face in the expression pictures that comprise a continuous expression action, choose at least one feature point according to facial features, convert the at least one feature point into three-dimensional coordinate values according to the three-dimensional coordinate system, and parse out the continuous expression action in the expression pictures that comprise a continuous expression action according to the change of the three-dimensional coordinate values of the at least one feature point in the three-dimensional coordinate system.
In one embodiment of the present invention, the parsing module 330 is further adapted to monitor in real time the change of the three-dimensional coordinate values of the feature points in the three-dimensional coordinate system, and, when a change in the three-dimensional coordinate value of a feature point is detected, extract the current three-dimensional coordinate value of that feature point.
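The change monitoring performed by the parsing module can be sketched as follows: a coordinate value is extracted only when it differs from the previously observed value for that feature point, in the order the changes occur. The feature-point names and coordinate values below are invented for illustration, and the first observation of each point is recorded as its baseline.

```python
def extract_changes(frames):
    """Given per-frame {feature_point: (x, y, z)} snapshots, return, per
    feature point, the ordered list of coordinate values it changed to.

    A value is extracted only when it differs from the previously observed
    value for that feature point; the first observation sets the baseline.
    """
    last = {}
    changes = {}
    for snapshot in frames:
        for point, coord in snapshot.items():
            if last.get(point) != coord:
                changes.setdefault(point, []).append(coord)
                last[point] = coord
    return changes

frames = [
    {"mouth_corner": (1, 1, 0), "left_brow": (0, 5, 1)},
    {"mouth_corner": (1, 1, 0), "left_brow": (0, 6, 1)},  # brow rises
    {"mouth_corner": (3, 1, 0), "left_brow": (0, 6, 1)},  # mouth moves
]
print(extract_changes(frames))
# {'mouth_corner': [(1, 1, 0), (3, 1, 0)], 'left_brow': [(0, 5, 1), (0, 6, 1)]}
```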
In one embodiment of the present invention, the parsing module 330 is further adapted to choose the at least one feature point according to the features of each facial organ of the face.
In one embodiment of the present invention, the unlocking apparatus 300 of the mobile terminal further includes a preset module (not shown in the figures), adapted to preset the facial expression action used for judging whether to unlock through the following steps:
First, expression pictures that comprise a continuous expression action are obtained. Second, the face is located in the expression pictures that comprise a continuous expression action, and a three-dimensional coordinate system is set up for the located face. Then, in the three-dimensional coordinate system, at least one feature point is chosen according to facial features, and the at least one feature point is converted into three-dimensional coordinate values according to the three-dimensional coordinate system. Finally, the change of the three-dimensional coordinate values of the feature points in the three-dimensional coordinate system is monitored in real time; when a change in the three-dimensional coordinate value of a feature point is detected, the current three-dimensional coordinate value of that feature point is extracted and saved according to the order in which the changes of the three-dimensional coordinate values occurred.
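The preset module's save step, in which changed coordinate values are stored in the order the changes occur, can be sketched as follows. The feature-point names and coordinate values are invented for illustration; the baseline frame itself produces no stored values.

```python
def enroll_preset(frames):
    """Sketch of the preset module's save step: watch feature-point
    coordinates across frames and store each changed value, preserving the
    order in which the changes occurred. The stored sequence is what later
    matching is compared against.
    """
    last = {}
    stored = []  # saved in the order the changes occur
    for snapshot in frames:
        for point, coord in snapshot.items():
            if point in last and last[point] != coord:
                stored.append((point, coord))
            last[point] = coord
    return stored

frames = [
    {"left_brow": (0, 5, 1), "mouth": (1, 1, 0)},   # baseline, nothing stored
    {"left_brow": (0, 6, 1), "mouth": (1, 1, 0)},   # eyebrow raised
    {"left_brow": (0, 6, 1), "mouth": (3, 1, 0)},   # tongue out begins
]
print(enroll_preset(frames))
# [('left_brow', (0, 6, 1)), ('mouth', (3, 1, 0))]
```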
In one embodiment of the present invention, the matching module 340 is further adapted to, when a change in the three-dimensional coordinate value of a feature point is detected, match the current three-dimensional coordinate value of the feature point against the preset three-dimensional coordinate values one by one.
In one embodiment of the present invention, the response module 350 is further adapted to respond to the unlocking request if the extracted current three-dimensional coordinate values of the feature points are consistent, one by one, with the preset three-dimensional coordinate values of the feature points.
An embodiment of the present invention further provides another unlocking apparatus for a mobile terminal. Fig. 4 is a schematic structural diagram of an unlocking apparatus for a mobile terminal according to another embodiment of the present invention. Referring to Fig. 4, the unlocking apparatus 300 of the mobile terminal may further include: a time setting module 360 and a comparison module 370.
The time setting module 360, coupled to the collection module 310, is adapted to set the time range to be compared with the time taken to collect the multiple expression pictures of the face within the visual range.
The comparison module 370, coupled to the collection module 310 and the extraction module 320 respectively, is adapted to: if the time taken to collect the multiple expression pictures of the face within the visual range does not exceed the preset time range, continue to extract the expression pictures that comprise a continuous expression action; if the time taken to collect the multiple expression pictures of the face within the visual range exceeds the preset time range, prompt the user to re-enter the unlock expression.
According to any one of the above preferred embodiments or a combination of multiple preferred embodiments, embodiments of the present invention can achieve the following beneficial effects:
In embodiments of the present invention, the image acquisition device of the mobile terminal is used to collect multiple expression pictures of a face within the visual range, and the expression pictures that comprise a continuous expression action are extracted from the collected multiple expression pictures of the face. Then, according to the extracted expression pictures that comprise a continuous expression action, the continuous expression action comprised therein is parsed out. The continuous expression action is then matched against the preset expression action: if the match is consistent, the mobile terminal performs the unlocking operation; if the match is inconsistent, the mobile terminal does not perform the unlocking operation. It can thus be seen that the unlocking manner of the embodiments of the present invention uses multiple continuous facial expression actions to perform unlocking. It is a dynamic unlocking manner and, compared with static unlocking manners in the prior art, can improve the security of unlocking. Furthermore, the embodiments of the present invention achieve unlocking as long as the match with the preset continuous facial expression action used for judging whether to unlock is consistent, without requiring the same face, thereby achieving the purpose of flexible unlocking.
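The overall flow summarized above (collect, extract, parse, match, respond) can be sketched end to end. Every stage below is a stub standing in for the corresponding module of apparatus 300, so this illustrates control flow only; all names and stand-in values are assumptions.

```python
def unlock_pipeline(collect, extract, parse, preset_action, respond):
    """Control-flow sketch of the dynamic unlocking manner; each argument
    is a stand-in for the corresponding module of apparatus 300."""
    pictures = collect()                       # collection module 310
    continuous = extract(pictures)             # extraction module 320
    parsed_action = parse(continuous)          # parsing module 330
    matched = parsed_action == preset_action   # matching module 340
    return respond(matched)                    # response module 350

result = unlock_pipeline(
    collect=lambda: ["pic1", "pic2", "pic3"],
    extract=lambda pics: pics[1:],             # keep pictures showing motion
    parse=lambda pics: ("raise_left_brow", "smile", "wink_right_eye"),
    preset_action=("raise_left_brow", "smile", "wink_right_eye"),
    respond=lambda ok: "unlocked" if ok else "stay locked",
)
print(result)  # unlocked
```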
Furthermore, the user of the mobile terminal can autonomously set the continuous expression action used for judging whether to unlock (for example, setting the continuous unlock expression as: raise the left eyebrow, smile, wink the right eye), which, compared with static unlocking manners, adds interest to the unlocking process. In addition, since different users set different unlock expression actions, the user of the mobile terminal can set, according to his or her own unlock expression action, the time range to be compared with the time taken to collect the multiple expression pictures of the face within the visual range, thereby improving the unlocking speed of the mobile terminal and saving the user time spent on unlocking.
In the description provided herein, numerous specific details are set forth. It is understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known devices, structures and techniques have not been shown in detail in order not to obscure an understanding of this description.
Similarly, it should be appreciated that in the above description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof for the purpose of streamlining the disclosure and aiding in the understanding of one or more of the various inventive aspects. However, the disclosed method should not be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.
Those skilled in the art will appreciate that the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from the embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and furthermore may be divided into a plurality of sub-modules or sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by alternative features serving the same, equivalent or similar purpose.
Furthermore, those skilled in the art will appreciate that, although some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
The various component embodiments of the invention may be implemented in hardware, or in software modules running on one or more processors, or in a combination thereof. Those skilled in the art will appreciate that a microprocessor or a digital signal processor (DSP) may be used in practice to implement some or all of the functions of some or all of the components of the unlocking apparatus of a mobile terminal according to embodiments of the invention. The invention may also be implemented as a device or apparatus program (e.g., a computer program and a computer program product) for performing a part or all of the method described herein. Such a program implementing the invention may be stored on a computer-readable medium, or may be in the form of one or more signals. Such a signal may be downloaded from an internet website, or provided on a carrier signal, or provided in any other form.
It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design alternative embodiments without departing from the scope of the appended claims. In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" does not exclude the presence of elements or steps not listed in a claim. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The invention can be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In a unit claim enumerating several devices, several of these devices can be embodied by one and the same item of hardware. The use of the words first, second and third, etc., does not indicate any ordering. These words may be interpreted as names.
At this point, those skilled in the art will recognize that, although multiple exemplary embodiments of the invention have been illustrated and described in detail herein, many other variations or modifications consistent with the principles of the invention can still be directly determined or derived from the disclosure of the present invention without departing from the spirit and scope of the invention. Therefore, the scope of the invention should be understood and deemed to cover all such other variations or modifications.
An embodiment of the present invention further provides A1, an unlocking method for a mobile terminal, including:
when an unlocking request is received, invoking an image acquisition device of the mobile terminal, and using said image acquisition device to collect multiple expression pictures of a face within a visual range;
extracting, from the collected multiple expression pictures of the face within said visual range, expression pictures that comprise a continuous expression action;
parsing out, according to the extracted expression pictures that comprise a continuous expression action, the continuous expression action comprised therein;
matching the parsed said continuous expression action against a preset continuous facial expression action used for judging whether to unlock;
determining whether to respond to said unlocking request according to a matching result.
A2. The method according to A1, wherein said parsing out, according to the extracted expression pictures that comprise a continuous expression action, the continuous expression action comprised therein includes:
locating the face in said expression pictures that comprise a continuous expression action;
choosing at least one feature point according to facial features, and parsing out the continuous expression action comprised therein according to the change of position of said at least one feature point in said expression pictures that comprise a continuous expression action.
A3. The method according to A2, wherein said choosing at least one feature point according to facial features, and parsing out the continuous expression action comprised therein according to the change of position of said at least one feature point in said expression pictures that comprise a continuous expression action, includes:
setting up a three-dimensional coordinate system for the face in said expression pictures that comprise a continuous expression action;
choosing at least one feature point according to facial features, and converting said at least one feature point into three-dimensional coordinate values according to said three-dimensional coordinate system;
parsing out the continuous expression action in said expression pictures that comprise a continuous expression action according to the change of the three-dimensional coordinate values of said at least one feature point in said three-dimensional coordinate system.
A4. The method according to A3, wherein said parsing out the continuous expression action in said expression pictures that comprise a continuous expression action according to the change of the three-dimensional coordinate values of said at least one feature point in said three-dimensional coordinate system includes:
monitoring in real time the change of the three-dimensional coordinate values of the feature points in said three-dimensional coordinate system, and, when a change in the three-dimensional coordinate value of said feature point is detected, extracting the current three-dimensional coordinate value of that feature point.
A5. The method according to any one of A2-A4, wherein said choosing at least one feature point according to facial features includes: choosing at least one feature point according to the features of each facial organ of the face.
A6. The method according to any one of A1-A5, wherein the facial expression action used for judging whether to unlock is preset through the following steps:
obtaining expression pictures that comprise a continuous expression action;
locating the face in said expression pictures that comprise a continuous expression action, and setting up a three-dimensional coordinate system for the located face;
in said three-dimensional coordinate system, choosing at least one feature point according to facial features, and converting said at least one feature point into three-dimensional coordinate values according to said three-dimensional coordinate system;
monitoring in real time the change of the three-dimensional coordinate values of said feature point in said three-dimensional coordinate system, and, when a change in the three-dimensional coordinate value of said feature point is detected, extracting the current three-dimensional coordinate value of that feature point, and saving it according to the order in which the changes of the three-dimensional coordinate values of said feature point occurred.
A7. The method according to A6, wherein said matching the parsed said continuous expression action against the preset continuous facial expression action used for judging whether to unlock includes:
when a change in the three-dimensional coordinate value of a feature point is detected, matching the current three-dimensional coordinate value of the feature point against the preset three-dimensional coordinate values one by one.
A8. The method according to A7, wherein said determining whether to respond to said unlocking request according to a matching result includes:
if the extracted current three-dimensional coordinate values of said feature point are consistent, one by one, with the preset three-dimensional coordinate values of the feature point, responding to said unlocking request.
A9, according to the method according to any one of A1-8A, wherein, described method also includes: the time range that compares of times of multiple expression pictures of the face in the described visual range setting for and gathering.
A10. The method according to A9, wherein after setting the time range for comparison against the capture time of the multiple expression pictures of the face within the visual range, the method further comprises:
If the capture time of the multiple expression pictures of the face within the visual range is within the set time range, continuing to extract the expression pictures containing the continuous expression action;
If the capture time of the multiple expression pictures of the face within the visual range exceeds the set time range, prompting the user to re-enter the unlocking expression.
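The time-window check in A9/A10 amounts to comparing the elapsed capture time against a configured limit. A minimal sketch, assuming timestamps in seconds and an illustrative 5-second window (the patent does not fix a value):

```python
def check_capture_window(timestamps, max_window=5.0):
    """Per A9/A10: keep extracting while the capture times of the
    expression pictures stay within the set time range, and ask the
    user to re-enter the unlocking expression once it is exceeded.
    The 5-second window is an illustrative assumption."""
    elapsed = timestamps[-1] - timestamps[0]
    return "continue" if elapsed <= max_window else "re-enter"
```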
B11. An unlocking device for a mobile terminal, comprising:
An acquisition module, adapted to invoke the image collector of the mobile terminal upon receiving an unlocking request, and to capture, with the image collector, multiple expression pictures of a face within its visual range;
An extraction module, adapted to extract, from the captured multiple expression pictures of the face within the visual range, the expression pictures containing a continuous expression action;
A parsing module, adapted to parse, from the extracted expression pictures containing the continuous expression action, the continuous expression action contained therein;
A matching module, adapted to match the parsed continuous expression action against a preset continuous facial expression action used for deciding whether to unlock;
A response module, adapted to determine whether to respond to the unlocking request according to the matching result.
B12. The device according to B11, wherein the parsing module is further adapted to:
Locate the face in the expression pictures containing the continuous expression action;
Select at least one feature point according to facial features, and parse the continuous expression action contained therein according to the change of position of the at least one feature point across the expression pictures containing the continuous expression action.
B13. The device according to B12, wherein the parsing module is further adapted to:
Set up a three-dimensional coordinate system for the face in the expression pictures containing the continuous expression action;
Select at least one feature point according to facial features, and convert the at least one feature point into three-dimensional coordinate values according to the three-dimensional coordinate system;
Parse the continuous expression action in the expression pictures containing the continuous expression action according to the change of the three-dimensional coordinate values of the at least one feature point in the three-dimensional coordinate system.
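The conversion of feature points into values in a face-local three-dimensional coordinate system (B13) can be sketched as below. Anchoring the origin at a fixed facial landmark and normalising by a facial distance are illustrative choices; the patent only requires that some 3D coordinate system be set up for the face:

```python
def to_face_coordinates(points, origin, scale):
    """Convert detected feature points into values in a face-local 3D
    coordinate system, as in B13. `origin` (e.g. the nose tip) and
    `scale` (e.g. the interocular distance) are illustrative choices."""
    ox, oy, oz = origin
    return {
        name: ((x - ox) / scale, (y - oy) / scale, (z - oz) / scale)
        for name, (x, y, z) in points.items()
    }
```

Normalising this way makes the stored coordinate values comparable across captures taken at different distances from the camera.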
B14. The device according to B13, wherein the parsing module is further adapted to:
Monitor in real time the change of the three-dimensional coordinate values of the feature point in the three-dimensional coordinate system, and, when a change in the three-dimensional coordinate values of the feature point is detected, extract the current three-dimensional coordinate values of the feature point.
B15. The device according to any one of B12-B14, wherein the parsing module is further adapted to: select at least one feature point according to the features of each organ of the face.
B16. The device according to any one of B11-B15, wherein the device further comprises: a preset module, adapted to preset, through the following steps, the facial expression action used for deciding whether to unlock:
Obtaining expression pictures containing a continuous expression action;
Locating the face in the expression pictures containing the continuous expression action, and setting up a three-dimensional coordinate system for the located face;
In the three-dimensional coordinate system, selecting at least one feature point according to facial features, and converting the at least one feature point into three-dimensional coordinate values according to the three-dimensional coordinate system;
Monitoring in real time the change of the three-dimensional coordinate values of the feature point in the three-dimensional coordinate system; when a change in the three-dimensional coordinate values of the feature point is detected, extracting the current three-dimensional coordinate values of the feature point, and saving them in the order in which the changes occur.
B17. The device according to B16, wherein the matching module is further adapted to:
When a change in the three-dimensional coordinate values of a feature point is detected, match the current three-dimensional coordinate values of the feature point against the preset three-dimensional coordinate values in sequence.
B18. The device according to B17, wherein the response module is further adapted to:
If the extracted current three-dimensional coordinate values of the feature point successively match the preset three-dimensional coordinate values of the feature point, respond to the unlocking request.
B19. The device according to any one of B11-B18, wherein the device further comprises:
A time setting module, adapted to set a time range for comparison against the capture time of the multiple expression pictures of the face within the visual range.
B20. The device according to B19, wherein the device further comprises:
A comparison module, adapted to continue extracting the expression pictures containing the continuous expression action if the capture time of the multiple expression pictures of the face within the visual range is within the preset time range, and to prompt the user to re-enter the unlocking expression if the capture time exceeds the preset time range.
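The module structure of B11-B20 can be sketched as a small pipeline class. This is a control-flow illustration only: the camera, extraction, and parsing internals are stubbed out as callables, and all names are assumptions for the sketch.

```python
class ExpressionUnlocker:
    """Minimal sketch of the B11 module structure. Only the control
    flow between the modules follows the text; the internals of each
    module are supplied by the caller."""

    def __init__(self, capture, extract, parse, preset_action):
        self.capture = capture        # acquisition module: () -> frames
        self.extract = extract        # extraction module: frames -> expression frames
        self.parse = parse            # parsing module: frames -> action sequence
        self.preset = preset_action   # preset continuous expression action

    def handle_unlock_request(self):
        frames = self.capture()
        action = self.parse(self.extract(frames))
        return action == self.preset  # response module: unlock decision
```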

Claims (10)

1. An unlocking method for a mobile terminal, comprising:
Upon receiving an unlocking request, invoking the image collector of the mobile terminal, and capturing, with the image collector, multiple expression pictures of a face within its visual range;
Extracting, from the captured multiple expression pictures of the face within the visual range, the expression pictures containing a continuous expression action;
Parsing, from the extracted expression pictures containing the continuous expression action, the continuous expression action contained therein;
Matching the parsed continuous expression action against a preset continuous facial expression action used for deciding whether to unlock;
Determining whether to respond to the unlocking request according to the matching result.
2. The method according to claim 1, wherein said parsing, from the extracted expression pictures containing the continuous expression action, the continuous expression action contained therein comprises:
Locating the face in the expression pictures containing the continuous expression action;
Selecting at least one feature point according to facial features, and parsing the continuous expression action contained therein according to the change of position of the at least one feature point across the expression pictures containing the continuous expression action.
3. The method according to claim 2, wherein said selecting at least one feature point according to facial features, and parsing the continuous expression action contained therein according to the change of position of the at least one feature point across the expression pictures containing the continuous expression action, comprises:
Setting up a three-dimensional coordinate system for the face in the expression pictures containing the continuous expression action;
Selecting at least one feature point according to facial features, and converting the at least one feature point into three-dimensional coordinate values according to the three-dimensional coordinate system;
Parsing the continuous expression action in the expression pictures containing the continuous expression action according to the change of the three-dimensional coordinate values of the at least one feature point in the three-dimensional coordinate system.
4. The method according to claim 3, wherein said parsing the continuous expression action in the expression pictures containing the continuous expression action according to the change of the three-dimensional coordinate values of the at least one feature point in the three-dimensional coordinate system comprises:
Monitoring in real time the change of the three-dimensional coordinate values of the feature point in the three-dimensional coordinate system, and, when a change in the three-dimensional coordinate values of the feature point is detected, extracting the current three-dimensional coordinate values of the feature point.
5. The method according to any one of claims 2-4, wherein said selecting at least one feature point according to facial features comprises: selecting at least one feature point according to the features of each organ of the face.
6. The method according to any one of claims 1-5, wherein the facial expression action used for deciding whether to unlock is preset through the following steps:
Obtaining expression pictures containing a continuous expression action;
Locating the face in the expression pictures containing the continuous expression action, and setting up a three-dimensional coordinate system for the located face;
In the three-dimensional coordinate system, selecting at least one feature point according to facial features, and converting the at least one feature point into three-dimensional coordinate values according to the three-dimensional coordinate system;
Monitoring in real time the change of the three-dimensional coordinate values of the feature point in the three-dimensional coordinate system; when a change in the three-dimensional coordinate values of the feature point is detected, extracting the current three-dimensional coordinate values of the feature point, and saving them in the order in which the changes occur.
7. The method according to claim 6, wherein said matching the parsed continuous expression action against the preset continuous facial expression action used for deciding whether to unlock comprises:
When a change in the three-dimensional coordinate values of a feature point is detected, matching the current three-dimensional coordinate values of the feature point against the preset three-dimensional coordinate values in sequence.
8. The method according to claim 7, wherein said determining whether to respond to the unlocking request according to the matching result comprises:
If the extracted current three-dimensional coordinate values of the feature point successively match the preset three-dimensional coordinate values of the feature point, responding to the unlocking request.
9. The method according to any one of claims 1-8, wherein the method further comprises: setting a time range for comparison against the capture time of the multiple expression pictures of the face within the visual range.
10. An unlocking device for a mobile terminal, comprising:
An acquisition module, adapted to invoke the image collector of the mobile terminal upon receiving an unlocking request, and to capture, with the image collector, multiple expression pictures of a face within its visual range;
An extraction module, adapted to extract, from the captured multiple expression pictures of the face within the visual range, the expression pictures containing a continuous expression action;
A parsing module, adapted to parse, from the extracted expression pictures containing the continuous expression action, the continuous expression action contained therein;
A matching module, adapted to match the parsed continuous expression action against a preset continuous facial expression action used for deciding whether to unlock;
A response module, adapted to determine whether to respond to the unlocking request according to the matching result.
CN201610159066.4A 2016-03-18 2016-03-18 Mobile terminal unlocking method and device Pending CN105825112A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610159066.4A CN105825112A (en) 2016-03-18 2016-03-18 Mobile terminal unlocking method and device

Publications (1)

Publication Number Publication Date
CN105825112A true CN105825112A (en) 2016-08-03

Family

ID=56524768

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610159066.4A Pending CN105825112A (en) 2016-03-18 2016-03-18 Mobile terminal unlocking method and device

Country Status (1)

Country Link
CN (1) CN105825112A (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103853485A (en) * 2012-12-07 2014-06-11 腾讯科技(深圳)有限公司 Touch screen unlocking method and terminal
CN103902029A (en) * 2012-12-26 2014-07-02 腾讯数码(天津)有限公司 Mobile terminal and unlocking method thereof
CN104169933A (en) * 2011-12-29 2014-11-26 英特尔公司 Method, apparatus, and computer-readable recording medium for authenticating a user
CN104461305A (en) * 2012-08-10 2015-03-25 北京奇虎科技有限公司 Scene unlocking method of terminal device and terminal device
CN104749777A (en) * 2013-12-27 2015-07-01 中芯国际集成电路制造(上海)有限公司 Interaction method for wearable smart devices

Cited By (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106599652A (en) * 2016-11-14 2017-04-26 深圳市金立通信设备有限公司 Screen unlocking method and terminal
CN107392112A (en) * 2017-06-28 2017-11-24 中山职业技术学院 A kind of facial expression recognizing method and its intelligent lock system of application
CN107179831B (en) * 2017-06-30 2019-05-03 Oppo广东移动通信有限公司 Start method, apparatus, storage medium and the terminal of application
CN107179831A (en) * 2017-06-30 2017-09-19 广东欧珀移动通信有限公司 Start method, device, storage medium and the terminal of application
CN107424266A (en) * 2017-07-25 2017-12-01 上海青橙实业有限公司 The method and apparatus of recognition of face unblock
CN109426714B (en) * 2017-08-30 2022-04-19 创新先进技术有限公司 Method and device for detecting person changing and method and device for verifying user identity
CN109426714A (en) * 2017-08-30 2019-03-05 阿里巴巴集团控股有限公司 Substitution detection method and device, user ID authentication method and device
CN107613124A (en) * 2017-09-20 2018-01-19 深圳传音通讯有限公司 Unlocking method, smart machine and the storage medium of smart machine
CN107742072A (en) * 2017-09-20 2018-02-27 维沃移动通信有限公司 Face identification method and mobile terminal
CN107742072B (en) * 2017-09-20 2021-06-25 维沃移动通信有限公司 Face recognition method and mobile terminal
US10922533B2 (en) 2017-10-23 2021-02-16 Beijing Kuangshi Technology Co., Ltd. Method for face-to-unlock, authentication device, and non-volatile storage medium
CN108875335B (en) * 2017-10-23 2020-10-09 北京旷视科技有限公司 Method for unlocking human face and inputting expression and expression action, authentication equipment and nonvolatile storage medium
CN108875335A (en) * 2017-10-23 2018-11-23 北京旷视科技有限公司 The method and authenticating device and non-volatile memory medium of face unlock and typing expression and facial expressions and acts
CN108090339A (en) * 2017-12-28 2018-05-29 上海闻泰电子科技有限公司 Tripper, method and electronic equipment based on recognition of face
CN109325330A (en) * 2018-08-01 2019-02-12 平安科技(深圳)有限公司 Micro- expression lock generates and unlocking method, device, terminal device and storage medium
CN113536262A (en) * 2020-09-03 2021-10-22 腾讯科技(深圳)有限公司 Unlocking method and device based on facial expression, computer equipment and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20160803)