CN104636734A - Terminal face recognition method and device - Google Patents

Terminal face recognition method and device

Info

Publication number
CN104636734A
CN104636734A (application CN201510091896.3A)
Authority
CN
China
Prior art keywords
human face
face expression
expression feature
facial image
terminal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510091896.3A
Other languages
Chinese (zh)
Inventor
蓝情艺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen ZTE Mobile Telecom Co Ltd
Original Assignee
Shenzhen ZTE Mobile Telecom Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen ZTE Mobile Telecom Co Ltd filed Critical Shenzhen ZTE Mobile Telecom Co Ltd
Priority to CN201510091896.3A priority Critical patent/CN104636734A/en
Publication of CN104636734A publication Critical patent/CN104636734A/en
Pending legal-status Critical Current

Landscapes

  • Telephone Function (AREA)

Abstract

The invention discloses a terminal face recognition method comprising the following steps: when a terminal enters face recognition mode, the user is prompted to record the corresponding face images in sequence; the face images are collected in sequence, and facial expression features are extracted from the collected images; the extracted facial expression features are then matched against pre-stored facial expression features; when the match succeeds, face recognition is judged to have passed. The invention also discloses a terminal face recognition device. The method and device increase the complexity and accuracy of the face recognition process and thereby improve the security of face recognition.

Description

Terminal face recognition method and device
Technical field
The present invention relates to the field of mobile communications, and in particular to a terminal face recognition method and device.
Background technology
Face recognition technology has developed rapidly in recent years and is widely applied, for example in face-recognition access control, attendance systems, anti-theft doors, smartphone unlocking, and access control in banks and prisons. Face recognition technology analyzes an input face image or video stream based on human facial features: it first judges whether a face is present; if so, it further determines the position and size of each face and the positions of the major facial organs, and based on this information extracts the identity features contained in each face and compares them with known face features, thereby identifying each face.
However, when applications such as unlocking or opening an access gate are completed by face recognition, if what the camera collects is not a real face but a photo or video of the user, recognition can still succeed and the unlock or gate-open operation completes, which is a safety hazard. The existing face recognition process is simple and offers poor security.
The foregoing is provided only to aid understanding of the technical solution of the present invention and does not constitute an admission that it is prior art.
Summary of the invention
The main purpose of the present invention is to provide a terminal face recognition method and device that solve the problem that the existing face recognition process is simple and offers poor security.
To achieve the above object, the present invention provides a terminal face recognition method comprising the steps of:
when the terminal enters face recognition mode, prompting the user to record the corresponding face images in sequence, collecting the face images in sequence, and extracting facial expression features from the collected face images;
after the facial expression features are extracted, matching the extracted facial expression features against pre-stored facial expression features;
when the extracted facial expression features successfully match the pre-stored facial expression features, judging that face recognition has passed.
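The three steps above can be sketched in code. The following is a minimal, hypothetical illustration only: the patent does not specify a feature representation or matching rule, so this sketch assumes expression features are numeric vectors and matches them by cosine similarity against a threshold. All names (`recognize`, `features_match`, the threshold value) are the author's assumptions, not the patent's.

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def features_match(extracted, stored, threshold=0.9):
    # Assumed matching rule: similarity above a fixed threshold.
    return cosine_similarity(extracted, stored) >= threshold

def recognize(extracted_sequence, stored_sequence, threshold=0.9):
    """Recognition passes only if every expression in the prompted
    order matches its pre-stored counterpart."""
    if len(extracted_sequence) != len(stored_sequence):
        return False
    return all(features_match(e, s, threshold)
               for e, s in zip(extracted_sequence, stored_sequence))
```

Requiring the whole sequence to match is what distinguishes this scheme from single-image recognition: a static photo can at best reproduce one expression, not the prompted sequence.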
Preferably, before the step of prompting the user to record the corresponding face images in sequence when the terminal enters face recognition mode, collecting the face images in sequence, and extracting facial expression features from the collected face images, the method further comprises:
setting the number of facial expressions to be collected and the collection order.
Preferably, the step of matching the extracted facial expression features against the pre-stored facial expression features after extraction comprises:
after the facial expression features have been collected in the set collection order, matching the extracted facial expression features one by one against the corresponding pre-stored facial expression features.
Preferably, the step of matching the extracted facial expression features against the pre-stored facial expression features after extraction comprises:
after the first facial expression feature is extracted, matching it against the corresponding pre-stored facial expression feature;
when the first extracted facial expression feature matches the corresponding pre-stored feature, extracting the next facial expression feature and matching it against its corresponding pre-stored feature, until all extracted facial expression features have been matched; or, when any extracted facial expression feature fails to match its corresponding pre-stored feature, prompting that face recognition has failed.
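The match-as-you-go variant described above can be sketched as follows. This is an illustrative sketch under assumed interfaces (`extract_next` and `matcher` are hypothetical callables, not APIs from the patent): each expression is matched immediately after it is extracted, and recognition fails fast on the first mismatch instead of capturing the whole sequence first.

```python
def sequential_match(extract_next, stored_sequence, matcher):
    """extract_next(i) returns the i-th extracted feature;
    matcher(extracted, stored) -> bool.
    Returns (passed, number_of_expressions_processed)."""
    for i, stored in enumerate(stored_sequence):
        extracted = extract_next(i)
        if not matcher(extracted, stored):
            # Here the terminal would prompt "face recognition failed".
            return (False, i)
    return (True, len(stored_sequence))
```

The early exit means a failed attempt never captures the remaining expressions, which shortens the interaction on failure.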
Preferably, the step of collecting the face images in sequence and extracting facial expression features from the collected face images comprises:
judging whether the preset face image has been collected within a preset time;
when the preset face image is collected within the preset time, extracting facial expression features from the collected face image;
when the preset face image is not collected within the preset time, collecting the next preset face image.
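The per-expression timeout described above can be sketched as a capture loop. This is a simplified sketch under stated assumptions: `capture_frame` and `is_expected_expression` are hypothetical placeholders for the terminal's camera read and expression check, and the injectable `clock` exists only to make the loop testable.

```python
import time

def capture_expression(capture_frame, is_expected_expression, timeout=5.0,
                       clock=time.monotonic):
    """Poll the camera until the prompted expression is seen or the
    preset time elapses."""
    deadline = clock() + timeout
    while clock() < deadline:
        frame = capture_frame()
        if frame is not None and is_expected_expression(frame):
            return frame            # extract features from this image
    return None                     # timed out: skip to next preset image
```

Using a monotonic clock (rather than wall-clock time) keeps the deadline correct even if the system clock is adjusted mid-capture.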
In addition, to achieve the above object, the present invention also provides a terminal face recognition device, the device comprising:
a reminding module, configured to prompt the user to record the corresponding face images in sequence when the terminal enters face recognition mode;
an extraction module, configured to collect the face images in sequence and to extract facial expression features from the collected face images;
a matching module, configured to match the extracted facial expression features against the pre-stored facial expression features after extraction;
a judging module, configured to judge that face recognition has passed when the extracted facial expression features successfully match the pre-stored facial expression features.
Preferably, the terminal face recognition device further comprises a setting module, configured to set the number of facial expressions to be collected and the collection order.
Preferably, the matching module is further configured to match the extracted facial expression features one by one against the corresponding pre-stored facial expression features after they have been collected in the set collection order.
Preferably, the matching module is further configured to match the first extracted facial expression feature against the corresponding pre-stored facial expression feature after it is extracted;
and further configured, when the first extracted facial expression feature matches the corresponding pre-stored feature, to extract the next facial expression feature and match it against its corresponding pre-stored feature, until all extracted facial expression features have been matched; or, when any extracted facial expression feature fails to match its corresponding pre-stored feature, to prompt that face recognition has failed.
Preferably, the extraction module further comprises a judging unit, an extraction unit, and a collecting unit:
the judging unit is configured to judge whether the preset face image has been collected within a preset time;
the extraction unit is configured to extract facial expression features from the collected face image when the preset face image is collected within the preset time;
the collecting unit is configured to collect the next preset face image when the preset face image is not collected within the preset time.
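The module structure of the device claims above can be mirrored as a minimal object sketch. The class and method names below are illustrative assumptions by the editor, not from the patent; the extractor and matcher are injected placeholders standing in for real image processing.

```python
class RemindingModule:
    def prompt(self, expression_name):
        # Prompt the user to record the next expression.
        return f"Please perform: {expression_name}"

class ExtractionModule:
    def __init__(self, extractor):
        self.extractor = extractor
    def extract(self, image):
        return self.extractor(image)

class MatchingModule:
    def __init__(self, matcher):
        self.matcher = matcher
    def match(self, extracted, stored):
        return self.matcher(extracted, stored)

class JudgeModule:
    def judge(self, results):
        # Recognition passes only if every expression matched.
        return all(results)

class FaceRecognitionDevice:
    def __init__(self, extractor, matcher):
        self.reminding = RemindingModule()
        self.extraction = ExtractionModule(extractor)
        self.matching = MatchingModule(matcher)
        self.judge = JudgeModule()

    def run(self, images, stored_features):
        results = [self.matching.match(self.extraction.extract(img), stored)
                   for img, stored in zip(images, stored_features)]
        return self.judge.judge(results)
```

Keeping the four responsibilities in separate objects follows the claim structure: the preferred embodiments (setting module, per-step matching, timeout units) would slot in as additional collaborators without changing the others.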
The present invention matches the extracted facial expression features against the pre-stored facial expression features and judges that face recognition has passed when the match succeeds. By performing recognition on multiple facial expression features, the security of the face recognition method is greatly improved.
Brief description of the drawings
Fig. 1 is a schematic flowchart of a preferred embodiment of the terminal face recognition method of the present invention;
Fig. 2 is a functional block diagram of a preferred embodiment of the terminal face recognition system of the present invention;
Fig. 3 is a schematic flowchart of a first embodiment of the terminal face recognition method of the present invention;
Fig. 4 is a detailed flowchart of an embodiment of step S10 in Fig. 3;
Fig. 5 is a schematic flowchart of a second embodiment of the terminal face recognition method of the present invention;
Fig. 6 is a schematic flowchart of a third embodiment of the terminal face recognition method of the present invention;
Fig. 7 is a functional block diagram of a first embodiment of the terminal face recognition device of the present invention;
Fig. 8 is a detailed functional block diagram of an embodiment of the extraction module in Fig. 7;
Fig. 9 is a functional block diagram of a second embodiment of the terminal face recognition device of the present invention.
The realization of the object of the present invention, its functional characteristics, and its advantages will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are intended only to explain the present invention and are not intended to limit it.
A mobile terminal implementing each embodiment of the present invention will now be described with reference to the accompanying drawings. In the following description, suffixes such as "module", "part", or "unit" used to denote elements are used only to aid the description of the present invention and have no specific meaning by themselves; accordingly, "module" and "part" may be used interchangeably.
Mobile terminals may be implemented in a variety of forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smartphones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, the terminal is assumed to be a mobile terminal; however, those skilled in the art will appreciate that, apart from elements used specifically for mobile purposes, the structure according to the embodiments of the present invention can also be applied to fixed-type terminals.
Fig. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing each embodiment of the present invention.
The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like. Fig. 1 shows a mobile terminal with various components, but it should be understood that not all of the illustrated components are required; more or fewer components may alternatively be implemented. The elements of the mobile terminal are described in detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless Internet module 113, a short-range communication module 114, and a location information module 115.
The broadcast receiving module 111 receives broadcast signals and/or broadcast-related information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and sends broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and sends them to the terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like, and may further include a broadcast signal combined with a TV or radio broadcast signal. Broadcast-related information may also be provided via a mobile communication network, in which case it can be received by the mobile communication module 112. The broadcast signal may exist in various forms; for example, it may exist in the form of an electronic program guide (EPG) of digital multimedia broadcasting (DMB) or an electronic service guide (ESG) of digital video broadcasting-handheld (DVB-H). The broadcast receiving module 111 can receive broadcasts using various types of broadcast systems; in particular, it can receive digital broadcasts using digital broadcast systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the data broadcast system of media forward link only (MediaFLO), and integrated services digital broadcasting-terrestrial (ISDB-T). The broadcast receiving module 111 may be constructed to be suitable for various broadcast systems providing broadcast signals as well as the above-mentioned digital broadcast systems. Broadcast signals and/or broadcast-related information received via the broadcast receiving module 111 may be stored in the memory 160 (or another type of storage medium).
The mobile communication module 112 sends radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data sent and/or received according to text and/or multimedia messages.
The wireless Internet module 113 supports wireless Internet access for the mobile terminal and can be internally or externally coupled to the terminal. The wireless Internet access technologies involved may include WLAN (wireless LAN, Wi-Fi), WiBro (wireless broadband), WiMAX (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access), and the like.
The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technology include Bluetooth™, radio-frequency identification (RFID), the Infrared Data Association (IrDA), ultra-wideband (UWB), ZigBee™, and so on.
The location information module 115 is a module for checking or obtaining the location information of the mobile terminal. A typical example of the location information module is GPS (global positioning system). According to current technology, the GPS module 115 calculates distance information from three or more satellites together with accurate time information and applies triangulation to the calculated information, thereby accurately calculating three-dimensional current location information in terms of longitude, latitude, and altitude. Currently, the method of calculating position and time information uses three satellites and corrects the error of the calculated position and time information using a further satellite. In addition, the GPS module 115 can calculate speed information by continuously calculating the current location in real time.
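The idea of computing a position from satellite range measurements can be illustrated with a toy two-dimensional trilateration: given three known anchor points and the distances to each, subtracting one circle equation from the other two yields a linear system for the position. This is a simplified sketch for illustration only; real GPS works in three dimensions and also solves for the receiver clock bias.

```python
def trilaterate(anchors, distances):
    """2-D position from three anchors (x, y) and three ranges."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = distances
    # Subtract circle 1 from circles 2 and 3 -> two linear equations
    # of the form a*x + b*y = c.
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    a2, b2 = 2 * (x3 - x1), 2 * (y3 - y1)
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    # Solve the 2x2 system by Cramer's rule.
    det = a1 * b2 - a2 * b1
    x = (c1 * b2 - c2 * b1) / det
    y = (a1 * c2 - a2 * c1) / det
    return x, y
```

With noisy real-world ranges, more than the minimum number of anchors would be used and the overdetermined system solved by least squares, which is also how a fourth satellite lets GPS correct the errors mentioned above.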
The A/V input unit 120 is used to receive audio or video signals and may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capture device in a video capture mode or an image capture mode. The processed image frames may be displayed on a display unit 151, stored in the memory 160 (or another storage medium), or sent via the wireless communication unit 110; two or more cameras 121 may be provided depending on the structure of the mobile terminal. The microphone 122 can receive sound (audio data) in operating modes such as a telephone call mode, a recording mode, or a voice recognition mode, and can process such sound into audio data. In the telephone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the mobile communication module 112. The microphone 122 can implement various types of noise elimination (or suppression) algorithms to eliminate (or suppress) noise or interference produced in the course of receiving and sending audio signals.
The user input unit 130 can generate key input data according to commands input by the user to control various operations of the mobile terminal. The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, and the like caused by being touched), a jog wheel, a jog switch, and so on. In particular, when the touch pad is superimposed on the display unit 151 as a layer, a touch screen can be formed.
The sensing unit 140 detects the current state of the mobile terminal 100 (e.g., the open or closed state of the mobile terminal 100), the position of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), the orientation of the mobile terminal 100, the acceleration or deceleration movement and direction of the mobile terminal 100, and so on, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 can sense whether the slide-type phone is open or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 supplies power and whether the interface unit 170 is coupled with an external device. The sensing unit 140 may include a proximity sensor 141, which will be described below in connection with the touch screen.
The interface unit 170 serves as an interface through which at least one external device can be connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The identification module may store various information for authenticating a user of the mobile terminal 100 and may include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), and the like. In addition, the device having the identification module (hereinafter referred to as the "identification device") may take the form of a smart card; therefore, the identification device can be connected to the mobile terminal 100 via a port or other connection means. The interface unit 170 can be used to receive input (e.g., data information, power, etc.) from an external device and transfer the received input to one or more elements within the mobile terminal 100, or can be used to transfer data between the mobile terminal and an external device.
In addition, when the mobile terminal 100 is connected to an external cradle, the interface unit 170 can serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transferred to the mobile terminal. Various command signals or power input from the cradle can serve as signals for identifying whether the mobile terminal is correctly mounted on the cradle. The output unit 150 is constructed to provide output signals (e.g., audio signals, video signals, alarm signals, vibration signals, etc.) in a visual, audible, and/or tactile manner, and may include the display unit 151, an audio output module 152, an alarm unit 153, and so on.
The display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in the telephone call mode, the display unit 151 can display a user interface (UI) or graphical user interface (GUI) related to a call or other communication (e.g., text messaging, multimedia file downloading, etc.). When the mobile terminal 100 is in a video call mode or an image capture mode, the display unit 151 can display captured and/or received images, a UI or GUI showing video or images and related functions, and so on.
Meanwhile, when the display unit 151 and the touch pad are superimposed on each other as layers to form a touch screen, the display unit 151 can serve as both an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin-film transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be constructed to be transparent to allow the user to view from the outside; these may be called transparent displays, a typical example being a TOLED (transparent organic light-emitting diode) display. Depending on the particular desired embodiment, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown). The touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
The audio output module 152 can, when the mobile terminal is in modes such as a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, or a broadcast receiving mode, convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. Moreover, the audio output module 152 can provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output module 152 may include a speaker, a buzzer, and so on.
The alarm unit 153 can provide output to notify of the occurrence of an event of the mobile terminal 100. Typical events may include call reception, message reception, key signal input, touch input, and so on. In addition to audio or video output, the alarm unit 153 can provide output in different ways to notify of the occurrence of an event. For example, the alarm unit 153 can provide output in the form of vibration: when a call, a message, or some other incoming communication is received, the alarm unit 153 can provide a tactile output (i.e., vibration) to notify the user. By providing such tactile output, the user can recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 can also provide output notifying of the occurrence of an event via the display unit 151 or the audio output module 152.
The memory 160 can store software programs for processing and control operations performed by the controller 180, or temporarily store data that has been output or will be output (e.g., a phone book, messages, still images, video, etc.). Moreover, the memory 160 can store data about the various patterns of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, a magnetic disk, an optical disc, and so on. Moreover, the mobile terminal 100 may cooperate with a network storage device that performs the storage function of the memory 160 over a network connection.
The controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communication, video calls, and so on. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data; the multimedia module 181 may be constructed within the controller 180 or may be constructed separately from the controller 180. The controller 180 can perform pattern recognition processing to recognize handwriting input or drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external power or internal power and, under the control of the controller 180, provides the appropriate power required to operate each element and component.
The various embodiments described herein may be implemented using a computer-readable medium such as computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein may be implemented using at least one of application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180. For a software implementation, embodiments such as processes or functions may be implemented with separate software modules that allow at least one function or operation to be performed. Software code can be implemented by a software application (or program) written in any suitable programming language, and the software code can be stored in the memory 160 and executed by the controller 180.
Thus far, the mobile terminal has been described in terms of its functions. In the following, for brevity, a slide-type mobile terminal will be described as an example among the various types of mobile terminals such as folding, bar, swing, and slide types. Accordingly, the present invention can be applied to any type of mobile terminal and is not limited to slide-type mobile terminals.
The mobile terminal 100 as shown in Fig. 1 may be constructed to operate with wired and wireless communication systems that send data via frames or packets, as well as satellite-based communication systems.
A communication system in which a mobile terminal according to the present invention can operate will now be described with reference to Fig. 2.
Such communication systems can use different air interfaces and/or physical layers. For example, air interfaces used by communication systems include frequency-division multiple access (FDMA), time-division multiple access (TDMA), code-division multiple access (CDMA), universal mobile telecommunications system (UMTS) (in particular, long-term evolution (LTE)), global system for mobile communications (GSM), and so on. As a non-limiting example, the following description relates to a CDMA communication system, but such teachings apply equally to other types of systems.
Referring to Fig. 2, the CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, base station controllers (BSC) 275, and a mobile switching center (MSC) 280. The MSC 280 is constructed to form an interface with the public switched telephone network (PSTN) 290, and is also constructed to form an interface with the BSCs 275, which can be coupled to the base stations 270 via backhaul links. The backhaul links can be constructed according to any of several known interfaces, including, for example, E1/T1, ATM, IP, PPP, frame relay, HDSL, ADSL, or xDSL. It will be appreciated that the system shown in Fig. 2 may include a plurality of BSCs 275.
Each BS 270 can serve one or more sectors (or regions), each covered by an omnidirectional antenna or an antenna pointing in a specific direction radially away from the BS 270. Alternatively, each sector can be covered by two or more antennas for diversity reception. Each BS 270 can be constructed to support multiple frequency assignments, each having a specific spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency assignment may be called a CDMA channel. A BS 270 may also be called a base transceiver subsystem (BTS) or another equivalent term. In such cases, the term "base station" can be used to broadly refer to a single BSC 275 and at least one BS 270. A base station may also be called a "cell site"; alternatively, each sector of a particular BS 270 may be called a cell site.
As shown in Fig. 2, a broadcast transmitter (BT) 295 sends broadcast signals to the mobile terminals 100 operating within the system. The broadcast receiving module 111 as shown in Fig. 1 is provided at the mobile terminal 100 to receive the broadcast signals sent by the BT 295. Fig. 2 also shows several global positioning system (GPS) satellites 300; the satellites 300 help locate at least one of the plurality of mobile terminals 100.
Several satellites 300 are depicted in Fig. 2, but it will be understood that useful positioning information can be obtained with any number of satellites. The GPS module 115 as shown in Fig. 1 is typically constructed to cooperate with the satellites 300 to obtain the desired positioning information. In place of or in addition to GPS tracking technology, other technologies that can track the position of the mobile terminal can be used. In addition, at least one GPS satellite 300 can selectively or additionally handle satellite DMB transmission.
In a typical operation of the wireless communication system, the BS 270 receives reverse-link signals from various mobile terminals 100. The mobile terminals 100 typically participate in calls, messaging, and other types of communication. Each reverse-link signal received by a particular BS 270 is processed by that BS 270, and the resulting data is forwarded to the associated BSC 275. The BSC provides call resource allocation and mobility management functions, including coordination of soft handoff procedures between BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for forming an interface with the PSTN 290. Similarly, the PSTN 290 forms an interface with the MSC 280, the MSC forms an interface with the BSCs 275, and the BSCs 275 correspondingly control the BSs 270 to send forward-link signals to the mobile terminals 100.
Based on the above mobile terminal hardware configuration and communication system, various embodiments of the terminal face recognition method of the present invention are proposed.
Referring to FIG. 3, FIG. 3 is a schematic flowchart of the first embodiment of the terminal face recognition method of the present invention.
In one embodiment, the terminal face recognition method comprises:
Step S10: when the terminal enters the face recognition mode, prompt the user to sequentially record the corresponding facial images, sequentially collect the facial images, and extract facial expression features from the collected facial images.
Before the terminal enters the face recognition mode, the terminal acquires facial images through an image capture tool such as a camera and extracts facial expression features from those images. The facial expression features include, for example: smiling, blinking, crying, laughing heartily, surprise, excitement, and sadness. Because of the many factors involved — the cause of an expression, the degree to which it is performed, a person's ability to control it, and individual tendencies — changes in expression are subtle and complex, and summarizing expression features is correspondingly complicated. According to the most basic classification, the principal characteristics of the six basic expressions can be summarized as follows: 1. Surprise: the eyebrows are raised and arched; the skin under the eyebrows is stretched; wrinkles may run across the forehead; the eyes are wide open, the upper eyelids raised and the lower eyelids dropped; the whites of the eyes may show above the pupils, and possibly below; the jaw drops and the mouth opens so that the lips and teeth part, but the mouth remains relaxed and unstretched. 2. Fear: the eyebrows are raised and drawn together; the forehead wrinkles concentrate in the middle rather than crossing the whole forehead; the upper eyelids are raised and the lower eyelids are tense and pulled up; the mouth is open, with the lips slightly tense and drawn back, or stretched and drawn back at the same time. 3. Sadness: the inner corners of the eyebrows wrinkle and are raised together, pulling up the skin beneath them; the upper eyelids at the inner eye corners are raised; the corners of the mouth are pulled down and may tremble. 4. Anger: the eyebrows wrinkle together and are lowered, with vertical creases appearing between them; the lower eyelids are tense and may or may not be raised; the upper eyelids are tense and may be pressed down by the lowered eyebrows; the eyes glare and may bulge; the lips take one of two positions — pressed together with the corners straight, or opened downward as if about to shout; the nostrils may flare. 5. Happiness: there may be wrinkles below the lower eyelids, which may be raised but not tense; crow's feet spread outward from the outer corners of the eyes; the corners of the mouth are drawn back and raised; the mouth may be open and the teeth may show; a wrinkle runs from the nose to beyond the corners of the mouth; the cheeks are raised. 6. Disgust: the eyebrows are lowered, pressing down the upper eyelids; a band appears below the lower eyelids, pushed up by the cheeks but not tense; the upper lip is raised; the lower lip closes against the upper lip and pushes it up; the corners of the mouth are pulled down and the lips protrude slightly; the nose wrinkles and the cheeks are raised.
Preferably, in order to better obtain facial expression features from the facial images, the facial images are preprocessed: the size and gray level of the images are normalized, the head pose is corrected, and the images are segmented. This improves image quality, removes noise, and unifies the gray values and sizes of the facial images, laying a solid foundation for subsequent feature extraction and classification. Feature extraction then converts the pixel matrix of the facial image into a higher-level representation — shape, motion, color, texture, spatial structure, and so on — and reduces the dimensionality of the large volume of facial image data while preserving stability and discriminability as far as possible. The main feature extraction methods include geometric features, statistical features, frequency-domain features, and motion features. Face detection and localization techniques can be used to ensure that what is collected contains a facial image and to discard regions irrelevant to the face, avoiding confusion between the face and irrelevant regions and improving the precision, speed, and success rate of recognition.
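As a minimal illustration of the preprocessing described above, the sketch below normalizes a grayscale face image to a fixed size and stretches its gray values to the full 0–255 range. It is pure Python for clarity; the function names and the nearest-neighbour/min-max choices are illustrative assumptions, not the patent's specified algorithms, and a real terminal would use an optimized image library.

```python
def resize_nearest(img, out_h, out_w):
    """Resize a 2-D list of gray values with nearest-neighbour sampling."""
    in_h, in_w = len(img), len(img[0])
    return [[img[r * in_h // out_h][c * in_w // out_w]
             for c in range(out_w)]
            for r in range(out_h)]

def normalize_gray(img):
    """Linearly stretch gray values to span 0..255 (min-max normalization)."""
    flat = [v for row in img for v in row]
    lo, hi = min(flat), max(flat)
    if hi == lo:                      # flat image: nothing to stretch
        return [[0 for _ in row] for row in img]
    return [[(v - lo) * 255 // (hi - lo) for v in row] for row in img]

def preprocess(img, size=(4, 4)):
    """Size normalization followed by gray-level normalization."""
    return normalize_gray(resize_nearest(img, *size))

face = [[10, 20], [30, 40]]           # toy 2x2 "face image"
out = preprocess(face, size=(4, 4))   # 4x4 image with gray values 0..255
```

After this step, every collected image has the same dimensions and gray range, which is the "solid foundation" the paragraph above refers to for the later feature extraction and matching.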
Facial expression features are extracted from the obtained facial images. Extraction specifically includes removing bad points (abrupt changes) from the facial image data, extracting primitive features, reducing feature dimensionality, and removing interference, so as to obtain useful facial expression feature data and improve the efficiency and accuracy of subsequent detection and judgment. The facial expression features are pre-stored in an image feature library as the pre-stored facial expression features.
When the terminal enters the face recognition mode, it displays a pre-stored facial expression image on its screen and prompts the user to record the corresponding facial images in sequence through an image capture tool such as a camera. The terminal then collects the recorded facial images in sequence and extracts facial expression features from them. For example, if the terminal screen shows a smiling expression, the user makes a smiling expression in front of the camera; the terminal collects the smiling facial image and extracts the smile's expression features from it. If the screen shows a blinking expression, the user blinks in front of the camera; the terminal collects the blinking facial image and extracts the blink's expression features from it.
Specifically, referring to FIG. 4, in one embodiment, the process of sequentially collecting facial images and extracting facial expression features from them may include:
Step S11: judge whether a preset facial image is collected within a preset time;
Step S12: when a preset facial image is collected within the preset time, extract facial expression features from the collected facial image;
Step S13: when no preset facial image is collected within the preset time, collect the next preset facial image.
The terminal judges whether a preset facial image is collected within the preset time. If so, it prompts that the collection succeeded and extracts facial expression features from the collected image; if not, it prompts that the collection failed and moves on to collect the next preset facial image. A preset facial image is a facial image recorded in advance, such as a given user's smiling face or crying face. The preset time can be set freely as required — for example 0.5 s, 1 s, or 2 s; 1 s is preferred. That is, after the terminal enters the face recognition mode, it judges whether the preset smiling facial image is collected within 1 s. If it is, the terminal prompts that the smiling image was collected successfully and extracts the smile's expression features from it. If the smiling image is not collected within 1 s, the terminal prompts that collection failed, and its screen shows the next preset facial image — crying, for example — whereupon the user makes a crying expression and the terminal captures the crying facial image through the camera.
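Steps S11–S13 can be sketched as a per-expression timeout loop. Everything here is an illustrative assumption — `capture` is a stand-in for the terminal's camera-plus-detection pipeline, and the patent only fixes the preferred 1 s window, not this control flow.

```python
import time

PRESET_TIME = 1.0  # seconds; the embodiment prefers 1 s

def collect_expressions(prompts, capture, timeout=PRESET_TIME):
    """Return {expression: features} for every prompted expression
    captured within its time window; skip expressions that time out."""
    collected = {}
    for expression in prompts:            # e.g. ["smile", "cry", "blink"]
        deadline = time.monotonic() + timeout
        while time.monotonic() < deadline:
            features = capture(expression)     # None until a face is seen
            if features is not None:
                collected[expression] = features   # S12: collection succeeded
                break
        # S13: on timeout, fall through to the next preset facial image
    return collected

# Toy capture: succeeds immediately for "smile", never for "cry",
# so "cry" exercises the timeout branch.
fake_capture = lambda e: ("features-" + e) if e == "smile" else None
got = collect_expressions(["smile", "cry"], fake_capture, timeout=0.05)
```

The loop deliberately keys the deadline off `time.monotonic()` rather than wall-clock time, so a system clock adjustment cannot extend or truncate the collection window.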
Step S20: after the facial expression features are extracted, match the extracted facial expression features against the pre-stored facial expression features.
After the terminal extracts the facial expression features, it matches them against the pre-stored facial expression features in the image feature library. When the similarity between an extracted feature and a pre-stored feature reaches a preset ratio, the match is judged successful. The preset ratio can be set freely as required — for example 70%, 80%, or 90%; 80% is preferred. For instance, when the similarity between the extracted smile features and the smile features pre-stored in the image feature library reaches 80%, the match succeeds.
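A minimal sketch of this threshold test follows. The patent does not specify a similarity metric, so cosine similarity between feature vectors is used purely as an illustrative assumption; only the "match when similarity ≥ preset ratio" rule comes from the text above.

```python
import math

PRESET_RATIO = 0.80   # preferred value from the embodiment

def similarity(a, b):
    """Cosine similarity between two feature vectors (0.0 if degenerate)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def matches(extracted, prestored, ratio=PRESET_RATIO):
    """S20 decision: match succeeds when similarity reaches the preset ratio."""
    return similarity(extracted, prestored) >= ratio

live = [0.9, 0.1, 0.4]    # hypothetical extracted smile features
stored = [1.0, 0.0, 0.5]  # hypothetical pre-stored smile features
```

With these toy vectors, `matches(live, stored)` succeeds (similarity well above 0.80), while orthogonal vectors fail, mirroring the 80%-similarity example in the paragraph above.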
Step S30: when the extracted facial expression features successfully match the pre-stored facial expression features, judge that face recognition passes.
When the facial expression features extracted by the terminal successfully match the facial expression features pre-stored in the terminal, the face recognition operation is judged to have passed. For example, when the extracted smiling and crying expression features successfully match the smiling and crying features pre-stored in the image feature library, face recognition passes. When multiple facial expression features are extracted, the face recognition operation passes only if every extracted feature matches its corresponding pre-stored feature; if any one match fails, the operation does not pass.
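The all-or-nothing rule above reduces to a single conjunction, sketched below. The plain equality check is a stand-in for whatever similarity test the terminal actually applies; the names are hypothetical.

```python
def recognition_passes(extracted, prestored):
    """S30 decision: every extracted expression feature must match its
    pre-stored counterpart; one mismatch rejects the whole attempt."""
    return bool(extracted) and all(
        expr in prestored and feat == prestored[expr]
        for expr, feat in extracted.items()
    )

library = {"smile": "f1", "cry": "f2"}           # pre-stored feature library
ok = recognition_passes({"smile": "f1", "cry": "f2"}, library)
bad = recognition_passes({"smile": "f1", "cry": "WRONG"}, library)
```

Requiring `bool(extracted)` guards the edge case where nothing was collected at all — an empty attempt should not pass recognition.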
In this embodiment, the extracted facial expression features are matched against the pre-stored facial expression features, and face recognition is judged to pass when the match succeeds. This effectively avoids the problems of existing face recognition processes being too simple and insufficiently secure.
Referring to FIG. 5, FIG. 5 is a schematic flowchart of the second embodiment of the terminal face recognition method of the present invention. Based on the first embodiment of the method, before step S10 the method further includes:
Step S40: set the number of facial expressions to be collected and their acquisition order.
The number of facial expressions the terminal must collect for face recognition, and the acquisition order of each expression, are configured. For example, the number may be set to 3 (it could also be 4 or another value), with the three expressions being smiling, crying, and blinking; the acquisition order of the three is then set — for instance, collect the smiling expression first, then the crying expression, and finally the blinking expression.
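Step S40 amounts to a small configuration object, sketched below under stated assumptions — the class and field names are invented for illustration, and the ordered list encodes both the count and the acquisition order from the example above.

```python
from dataclasses import dataclass, field

@dataclass
class RecognitionConfig:
    """S40: which expressions to collect, in which order."""
    order: list = field(default_factory=lambda: ["smile", "cry", "blink"])

    @property
    def count(self):
        """Number of expressions required for recognition."""
        return len(self.order)

cfg = RecognitionConfig()   # defaults mirror the 3-expression example
```

Keeping the order as a single list avoids the count and the sequence ever disagreeing: changing `order` to four expressions automatically makes `count` 4.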
Further, the process of matching the extracted facial expression features against the pre-stored facial expression features after extraction may include:
Step S21: after the facial expression features are collected according to the acquisition order, match the extracted facial expression features one by one against the corresponding pre-stored facial expression features.
After the facial expression features are collected according to the acquisition order, the extracted features are matched one by one against the corresponding pre-stored features in the image feature library. For example, the smiling, crying, and blinking expression features are collected in the order smile, cry, blink, and the three extracted features are then matched one by one against the corresponding features in the image feature library.
In this embodiment, after the facial expression features are collected according to the acquisition order, the extracted features are matched one by one against the corresponding pre-stored features. Performing face recognition with multiple facial expression features increases the complexity and accuracy of the recognition process and greatly improves the security of the method.
Referring to FIG. 6, FIG. 6 is a schematic flowchart of the third embodiment of the terminal face recognition method of the present invention. Based on the first embodiment of the method, the process of matching the extracted facial expression features against the pre-stored facial expression features after extraction may include:
Step S22: after the first facial expression feature is extracted, match it against the corresponding pre-stored facial expression feature;
Step S23: when the extracted first facial expression feature matches the corresponding pre-stored feature, extract the next facial expression feature and match it against its corresponding pre-stored feature, and so on until all extracted features have been matched; or, when any extracted facial expression feature fails to match its corresponding pre-stored feature, prompt that face recognition has failed.
After the terminal extracts the first facial expression feature, it matches that feature against the corresponding pre-stored feature in the image feature library. When the first feature matches, the terminal extracts the next facial expression feature and matches it against its corresponding pre-stored feature, continuing until all extracted features have been matched. If any extracted feature fails to match the corresponding pre-stored feature in the library, the terminal prompts that face recognition has failed. For example, if the first extracted feature is a smile feature, it is matched against the pre-stored smile features; when that match succeeds, the next feature — the crying feature, say — is extracted and matched against the pre-stored crying features, and so on until all extracted features have been matched. If the extracted smile or crying features fail to match the corresponding pre-stored smile or crying features, the terminal prompts that face recognition has failed.
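Unlike the second embodiment, steps S22–S23 interleave extraction and matching and abort on the first mismatch. A hedged sketch: `extract` and `match` are stand-ins for the terminal's camera pipeline and similarity test, and all names are illustrative.

```python
def sequential_recognition(order, extract, match):
    """S22-S23: extract and match one expression at a time, in order.
    Returns True only if every expression matches; fails fast otherwise."""
    for expression in order:
        feature = extract(expression)
        if not match(expression, feature):
            print("face recognition failed at:", expression)
            return False                  # S23: first mismatch aborts
    return True                           # all features matched in order

# Toy pipeline: the library knows "smile" and "cry" but not "blink".
library = {"smile": "f-smile", "cry": "f-cry"}
extract = lambda e: "f-" + e
match = lambda e, f: library.get(e) == f

ok = sequential_recognition(["smile", "cry"], extract, match)
fail = sequential_recognition(["smile", "blink"], extract, match)
```

The fail-fast design is what distinguishes this embodiment: a rejected user is never asked to perform the remaining expressions, which shortens a failed attempt without weakening the all-must-match rule.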
In this embodiment, matching the first extracted facial expression feature against the corresponding pre-stored feature in the image feature library immediately after it is extracted improves the accuracy of the face recognition method while ensuring the security of terminal operation.
Applications of the terminal face recognition method of the first and second embodiments include human-computer interaction, security, robot manufacturing, medical treatment, communications, and the automotive field. The executing entity may be an electronic device such as a mobile phone, tablet, notebook computer, or security door.
Correspondingly, the present invention further provides a terminal face recognition device.
Referring to FIG. 7, FIG. 7 is a schematic functional block diagram of the first embodiment of the terminal face recognition device of the present invention.
In one embodiment, the terminal face recognition device comprises: a prompting module 10, an extraction module 20, a matching module 30, and a judging module 40.
The prompting module 10 is configured to prompt the user to sequentially record the corresponding facial images when the terminal enters the face recognition mode.
The extraction module 20 is configured to sequentially collect facial images and extract facial expression features from the collected facial images.
Before the terminal enters the face recognition mode, the terminal acquires facial images through an image capture tool such as a camera and extracts facial expression features from those images. The facial expression features include, for example: smiling, blinking, crying, laughing heartily, surprise, excitement, and sadness. Because of the many factors involved — the cause of an expression, the degree to which it is performed, a person's ability to control it, and individual tendencies — changes in expression are subtle and complex, and summarizing expression features is correspondingly complicated. According to the most basic classification, the principal characteristics of the six basic expressions can be summarized as follows: 1. Surprise: the eyebrows are raised and arched; the skin under the eyebrows is stretched; wrinkles may run across the forehead; the eyes are wide open, the upper eyelids raised and the lower eyelids dropped; the whites of the eyes may show above the pupils, and possibly below; the jaw drops and the mouth opens so that the lips and teeth part, but the mouth remains relaxed and unstretched. 2. Fear: the eyebrows are raised and drawn together; the forehead wrinkles concentrate in the middle rather than crossing the whole forehead; the upper eyelids are raised and the lower eyelids are tense and pulled up; the mouth is open, with the lips slightly tense and drawn back, or stretched and drawn back at the same time. 3. Sadness: the inner corners of the eyebrows wrinkle and are raised together, pulling up the skin beneath them; the upper eyelids at the inner eye corners are raised; the corners of the mouth are pulled down and may tremble. 4. Anger: the eyebrows wrinkle together and are lowered, with vertical creases appearing between them; the lower eyelids are tense and may or may not be raised; the upper eyelids are tense and may be pressed down by the lowered eyebrows; the eyes glare and may bulge; the lips take one of two positions — pressed together with the corners straight, or opened downward as if about to shout; the nostrils may flare. 5. Happiness: there may be wrinkles below the lower eyelids, which may be raised but not tense; crow's feet spread outward from the outer corners of the eyes; the corners of the mouth are drawn back and raised; the mouth may be open and the teeth may show; a wrinkle runs from the nose to beyond the corners of the mouth; the cheeks are raised. 6. Disgust: the eyebrows are lowered, pressing down the upper eyelids; a band appears below the lower eyelids, pushed up by the cheeks but not tense; the upper lip is raised; the lower lip closes against the upper lip and pushes it up; the corners of the mouth are pulled down and the lips protrude slightly; the nose wrinkles and the cheeks are raised.
Preferably, in order to better obtain facial expression features from the facial images, the facial images are preprocessed: the size and gray level of the images are normalized, the head pose is corrected, and the images are segmented. This improves image quality, removes noise, and unifies the gray values and sizes of the facial images, laying a solid foundation for subsequent feature extraction and classification. Feature extraction then converts the pixel matrix of the facial image into a higher-level representation — shape, motion, color, texture, spatial structure, and so on — and reduces the dimensionality of the large volume of facial image data while preserving stability and discriminability as far as possible. The main feature extraction methods include geometric features, statistical features, frequency-domain features, and motion features. Face detection and localization techniques can be used to ensure that what is collected contains a facial image and to discard regions irrelevant to the face, avoiding confusion between the face and irrelevant regions and improving the precision, speed, and success rate of recognition.
Facial expression features are extracted from the obtained facial images. Extraction specifically includes removing bad points (abrupt changes) from the facial image data, extracting primitive features, reducing feature dimensionality, and removing interference, so as to obtain useful facial expression feature data and improve the efficiency and accuracy of subsequent detection and judgment. The facial expression features are pre-stored in an image feature library as the pre-stored facial expression features.
When the terminal enters the face recognition mode, it displays a pre-stored facial expression image on its screen and prompts the user to record the corresponding facial images in sequence through an image capture tool such as a camera. The terminal then collects the recorded facial images in sequence and extracts facial expression features from them. For example, if the terminal screen shows a smiling expression, the user makes a smiling expression in front of the camera; the terminal collects the smiling facial image and extracts the smile's expression features from it. If the screen shows a blinking expression, the user blinks in front of the camera; the terminal collects the blinking facial image and extracts the blink's expression features from it.
Specifically, referring to FIG. 8, FIG. 8 is a detailed functional block diagram of one embodiment of the extraction module in FIG. 7. The extraction module 20 further comprises a judging unit 21, an extraction unit 22, and a collecting unit 23.
The judging unit 21 is configured to judge whether a preset facial image is collected within the preset time.
The extraction unit 22 is configured to extract facial expression features from the collected facial image when a preset facial image is collected within the preset time.
The collecting unit 23 is configured to collect the next preset facial image when no preset facial image is collected within the preset time.
The terminal judges whether a preset facial image is collected within the preset time. If so, it prompts that the collection succeeded and extracts facial expression features from the collected image; if not, it prompts that the collection failed and moves on to collect the next preset facial image. The preset time can be set freely as required — for example 0.5 s, 1 s, or 2 s; 1 s is preferred. That is, after the terminal enters the face recognition mode, it judges whether the preset smiling facial image is collected within 1 s. If it is, the terminal prompts that the smiling image was collected successfully and extracts the smile's expression features from it. If the smiling image is not collected within 1 s, the terminal prompts that collection failed, and its screen shows the next preset facial image — crying, for example — whereupon the user makes a crying expression and the terminal captures the crying facial image through the camera.
The matching module 30 is configured to match the extracted facial expression features against the pre-stored facial expression features after extraction.
After the terminal extracts the facial expression features, it matches them against the pre-stored facial expression features in the image feature library. When the similarity between an extracted feature and a pre-stored feature reaches a preset ratio, the match is judged successful. The preset ratio can be set freely as required — for example 70%, 80%, or 90%; 80% is preferred. For instance, when the similarity between the extracted smile features and the smile features pre-stored in the image feature library reaches 80%, the match succeeds.
Further, the matching module 30 is configured to judge that face recognition passes when the extracted facial expression features successfully match the pre-stored facial expression features.
When the facial expression features extracted by the terminal successfully match the facial expression features pre-stored in the terminal, the face recognition operation is judged to have passed. For example, when the extracted smiling and crying expression features successfully match the smiling and crying features pre-stored in the image feature library, face recognition passes. When multiple facial expression features are extracted, the face recognition operation passes only if every extracted feature matches its corresponding pre-stored feature; if any one match fails, the operation does not pass.
In this embodiment, the extracted facial expression features are matched against the pre-stored facial expression features, and face recognition is judged to pass when the match succeeds. This effectively avoids the problems of existing face recognition processes being too simple and insufficiently secure.
Specifically, referring to FIG. 9, FIG. 9 is a schematic functional block diagram of the second embodiment of the terminal face recognition device of the present invention. The terminal face recognition device further comprises a setting module 50.
The setting module 50 is configured to set the number of facial expressions to be collected and their acquisition order.
The number of facial expressions the terminal must collect for face recognition, and the acquisition order of each expression, are configured. For example, the number may be set to 3 (it could also be 4 or another value), with the three expressions being smiling, crying, and blinking; the acquisition order of the three is then set — for instance, collect the smiling expression first, then the crying expression, and finally the blinking expression.
Further, the matching module 30 is also configured to match the extracted facial expression features one by one against the corresponding pre-stored facial expression features after they are collected according to the acquisition order.
After the facial expression features are collected according to the acquisition order, the extracted features are matched one by one against the corresponding pre-stored features in the image feature library. For example, the smiling, crying, and blinking expression features are collected in the order smile, cry, blink, and the three extracted features are then matched one by one against the corresponding features in the image feature library.
The matching module 30 is also configured to match the extracted first facial expression feature against the corresponding pre-stored facial expression feature after it is extracted;
and, when the extracted first facial expression feature matches the corresponding pre-stored feature, to extract the next facial expression feature and match it against its corresponding pre-stored feature, and so on until all extracted features have been matched; or, when any extracted facial expression feature fails to match its corresponding pre-stored feature, to prompt that face recognition has failed.
After the terminal extracts the first facial expression feature, it matches that feature against the corresponding pre-stored feature in the image feature library. When the first feature matches, the terminal extracts the next facial expression feature and matches it against its corresponding pre-stored feature, continuing until all extracted features have been matched. If any extracted feature fails to match the corresponding pre-stored feature in the library, the terminal prompts that face recognition has failed. For example, if the first extracted feature is a smile feature, it is matched against the pre-stored smile features; when that match succeeds, the next feature — the crying feature, say — is extracted and matched against the pre-stored crying features, and so on until all extracted features have been matched. If the extracted smile or crying features fail to match the corresponding pre-stored smile or crying features, the terminal prompts that face recognition has failed.
In the present embodiment, by extracted human face expression feature is mated with the human face expression feature that prestores, carrying out recognition of face by obtaining multiple human face expression feature, adding complexity and the accuracy of face recognition process, substantially increasing the security of face identification method.
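The match-as-you-go loop of this embodiment can be sketched as follows. This is an illustrative sketch only: `match_features` is a hypothetical stand-in for the terminal's actual feature-comparison routine, and expression features are simplified to plain value sequences.

```python
# Illustrative sketch of the sequential expression-matching loop described
# above. `match_features` is a hypothetical stand-in for the terminal's
# real similarity test; features are simplified to plain value sequences.

def match_features(extracted, prestored, threshold=0.8):
    # Toy similarity test: fraction of positions whose values agree.
    if not prestored:
        return False
    hits = sum(1 for a, b in zip(extracted, prestored) if a == b)
    return hits / len(prestored) >= threshold

def recognize(extracted_seq, prestored_seq):
    # Match the extracted expression features one by one, in order.
    if len(extracted_seq) != len(prestored_seq):
        return False  # wrong number of expressions recorded
    for extracted, prestored in zip(extracted_seq, prestored_seq):
        if not match_features(extracted, prestored):
            return False  # first mismatch: prompt face recognition failure
    return True  # all features matched: face recognition passes
```

Failing on the first mismatch mirrors the embodiment's behaviour of prompting failure as soon as any expression in the sequence does not match, without examining the remaining expressions.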
The above are only preferred embodiments of the present invention and are not intended to limit the scope of the claims. Any equivalent structural or process transformation made using the contents of the description and drawings of the present invention, whether applied directly or indirectly in other related technical fields, likewise falls within the scope of patent protection of the present invention.

Claims (10)

1. A terminal face recognition method, characterized in that it comprises the steps of:
when the terminal enters a face recognition mode, prompting for the corresponding facial images to be recorded in sequence, collecting the facial images in sequence, and extracting facial expression features from the collected facial images;
after the facial expression features are extracted, matching the extracted facial expression features against pre-stored facial expression features;
when the extracted facial expression features successfully match the pre-stored facial expression features, determining that face recognition has passed.
2. The terminal face recognition method according to claim 1, characterized in that, before the step of, when the terminal enters the face recognition mode, prompting for the corresponding facial images to be recorded in sequence, collecting the facial images in sequence, and extracting facial expression features from the collected facial images, the method further comprises:
setting the number and the collection order of the facial expressions to be collected.
3. The terminal face recognition method according to claim 2, characterized in that the step of, after the facial expression features are extracted, matching the extracted facial expression features against the pre-stored facial expression features comprises:
after the facial expression features are collected in the collection order, matching the extracted facial expression features one by one against the corresponding pre-stored facial expression features.
4. The terminal face recognition method according to claim 2, characterized in that the step of, after the facial expression features are extracted, matching the extracted facial expression features against the pre-stored facial expression features comprises:
after the first facial expression feature is extracted, matching the extracted first facial expression feature against the corresponding pre-stored facial expression feature;
when the extracted first facial expression feature matches the corresponding pre-stored facial expression feature, extracting the next facial expression feature and matching it against the corresponding pre-stored facial expression feature, until all extracted facial expression features have been matched; or, when any extracted facial expression feature fails to match the corresponding pre-stored facial expression feature, prompting that face recognition has failed.
5. The terminal face recognition method according to any one of claims 1 to 4, characterized in that the step of collecting the facial images in sequence and extracting facial expression features from the collected facial images comprises:
determining whether a preset facial image is collected within a preset time;
when the preset facial image is collected within the preset time, extracting facial expression features from the collected facial image;
when the preset facial image is not collected within the preset time, collecting the next preset facial image.
6. A terminal face recognition device, characterized in that the terminal face recognition device comprises:
a prompting module, configured to prompt for the corresponding facial images to be recorded in sequence when the terminal enters a face recognition mode;
an extraction module, configured to collect the facial images in sequence and extract facial expression features from the collected facial images;
a matching module, configured to match the extracted facial expression features against pre-stored facial expression features after the facial expression features are extracted;
a determination module, configured to determine that face recognition has passed when the extracted facial expression features successfully match the pre-stored facial expression features.
7. The terminal face recognition device according to claim 6, characterized in that the terminal face recognition device further comprises a setting module, configured to set the number and the collection order of the facial expressions to be collected.
8. The terminal face recognition device according to claim 7, characterized in that the matching module is further configured to, after the facial expression features are collected in the collection order, match the extracted facial expression features one by one against the corresponding pre-stored facial expression features.
9. The terminal face recognition device according to claim 7, characterized in that the matching module is further configured to, after the first facial expression feature is extracted, match the extracted first facial expression feature against the corresponding pre-stored facial expression feature;
and further configured to, when the extracted first facial expression feature matches the corresponding pre-stored facial expression feature, extract the next facial expression feature and match it against the corresponding pre-stored facial expression feature, until all extracted facial expression features have been matched; or, when any extracted facial expression feature fails to match the corresponding pre-stored facial expression feature, to prompt that face recognition has failed.
10. The terminal face recognition device according to any one of claims 6 to 9, characterized in that the extraction module further comprises a determination unit, an extraction unit and a collection unit, wherein:
the determination unit is configured to determine whether a preset facial image is collected within a preset time;
the extraction unit is configured to extract facial expression features from the collected facial image when the preset facial image is collected within the preset time;
the collection unit is configured to collect the next preset facial image when the preset facial image is not collected within the preset time.
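The acquisition flow of claims 5 and 10 — attempt to collect each preset facial image within a preset time and move on to the next preset image on timeout — can be roughly illustrated as follows. The `capture` callback is a hypothetical stand-in for the terminal's camera interface and is not part of the claimed subject matter.

```python
import time

def acquire_images(prompts, capture, preset_time=5.0):
    # For each preset expression prompt, try to capture a facial image
    # within `preset_time` seconds; on timeout, record None and move on
    # to the next preset image, mirroring claim 5's timeout behaviour.
    images = []
    for prompt in prompts:
        deadline = time.monotonic() + preset_time
        image = None
        while time.monotonic() < deadline:
            image = capture(prompt)
            if image is not None:
                break
        images.append(image)  # None marks a capture that timed out
    return images
```

In this sketch a timed-out capture is recorded as `None` rather than aborting the whole sequence, so the caller can decide afterwards whether recognition should fail or the missing expression should be re-prompted.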
CN201510091896.3A 2015-02-28 2015-02-28 Terminal face recognition method and device Pending CN104636734A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510091896.3A CN104636734A (en) 2015-02-28 2015-02-28 Terminal face recognition method and device

Publications (1)

Publication Number Publication Date
CN104636734A true CN104636734A (en) 2015-05-20

Family

ID=53215464

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510091896.3A Pending CN104636734A (en) 2015-02-28 2015-02-28 Terminal face recognition method and device

Country Status (1)

Country Link
CN (1) CN104636734A (en)

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4998202B2 (en) * 2007-10-23 2012-08-15 日本電気株式会社 Mobile communication terminal
CN102509053A (en) * 2011-11-23 2012-06-20 唐辉 Authentication and authorization method, processor, equipment and mobile terminal
CN102946481A (en) * 2012-11-13 2013-02-27 广东欧珀移动通信有限公司 Method and system for unlocking human face expression
CN103259796A (en) * 2013-05-15 2013-08-21 金硕澳门离岸商业服务有限公司 Authentication system and method
CN104298910A (en) * 2013-07-19 2015-01-21 广达电脑股份有限公司 Portable electronic device and interactive face login method

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104859587A (en) * 2015-05-22 2015-08-26 陈元喜 Automobile antitheft display with starting verification function
CN104966327A (en) * 2015-06-15 2015-10-07 北京智联新科信息技术有限公司 System and method for monitoring health and registering attendance on basis of internet of things
CN105234940A (en) * 2015-10-23 2016-01-13 上海思依暄机器人科技有限公司 Robot and control method thereof
CN105373784A (en) * 2015-11-30 2016-03-02 北京光年无限科技有限公司 Intelligent robot data processing method, intelligent robot data processing device and intelligent robot system
CN106210526A (en) * 2016-07-29 2016-12-07 维沃移动通信有限公司 A kind of image pickup method and mobile terminal
US10949573B2 (en) 2017-09-08 2021-03-16 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Unlocking control methods and related products
CN107622232A (en) * 2017-09-08 2018-01-23 广东欧珀移动通信有限公司 Solve lock control method and Related product
CN107622232B (en) * 2017-09-08 2020-01-14 Oppo广东移动通信有限公司 Unlocking control method and related product
CN107742072A (en) * 2017-09-20 2018-02-27 维沃移动通信有限公司 Face identification method and mobile terminal
CN107742072B (en) * 2017-09-20 2021-06-25 维沃移动通信有限公司 Face recognition method and mobile terminal
CN108197450B (en) * 2017-12-28 2021-08-27 Oppo广东移动通信有限公司 Face recognition method, face recognition device, storage medium and electronic equipment
CN108197450A (en) * 2017-12-28 2018-06-22 广东欧珀移动通信有限公司 Face identification method, face identification device, storage medium and electronic equipment
CN109165543A (en) * 2018-06-30 2019-01-08 恒宝股份有限公司 Equipment method for unlocking and device based on face action
WO2020024388A1 (en) * 2018-08-01 2020-02-06 平安科技(深圳)有限公司 Microexpression lock generation and unlock method, apparatus, terminal device, and storage medium
CN109285008B (en) * 2018-09-02 2020-12-29 珠海横琴现联盛科技发展有限公司 Face recognition payment information anti-counterfeiting method combining spatial information
CN109285008A (en) * 2018-09-02 2019-01-29 珠海横琴现联盛科技发展有限公司 The recognition of face payment information method for anti-counterfeit of combining space information
CN109670393A (en) * 2018-09-26 2019-04-23 平安科技(深圳)有限公司 Human face data acquisition method, unit and computer readable storage medium
CN109670393B (en) * 2018-09-26 2023-12-19 平安科技(深圳)有限公司 Face data acquisition method, equipment, device and computer readable storage medium
CN109886697A (en) * 2018-12-26 2019-06-14 广州市巽腾信息科技有限公司 Method, apparatus and electronic equipment are determined based on the other operation of expression group
CN109886697B (en) * 2018-12-26 2023-09-08 巽腾(广东)科技有限公司 Operation determination method and device based on expression group and electronic equipment
CN111783677A (en) * 2020-07-03 2020-10-16 北京字节跳动网络技术有限公司 Face recognition method, face recognition device, server and computer readable medium
CN111783677B (en) * 2020-07-03 2023-12-01 北京字节跳动网络技术有限公司 Face recognition method, device, server and computer readable medium
CN115249393A (en) * 2022-05-09 2022-10-28 深圳市麦驰物联股份有限公司 Identity authentication access control system and method

Similar Documents

Publication Publication Date Title
CN104636734A (en) Terminal face recognition method and device
US10936709B2 (en) Electronic device and method for controlling the same
CN106875191A (en) One kind scanning payment processing method, device and terminal
CN104992097A (en) Method and apparatus for quickly starting application program
CN104902212A (en) Video communication method and apparatus
CN104917881A (en) Multi-mode mobile terminal and implementation method thereof
CN105306815A (en) Shooting mode switching device, method and mobile terminal
CN105162976A (en) Mobile terminal and anti-theft processing method therefor
CN105100482A (en) Mobile terminal and system for realizing sign language identification, and conversation realization method of the mobile terminal
CN106657650A (en) System expression recommendation method and device, and terminal
CN107623778A (en) Incoming call sound method and mobile terminal
CN105791548A (en) Voice information broadcast device and method
CN106569709A (en) Device and method for controlling mobile terminal
CN105577532A (en) Application message processing method and device based on keywords, and mobile terminal
CN106708321A (en) Touch screen touch method and device and terminal
CN105212896A (en) Health analysis method and terminal
CN106803058A (en) A kind of terminal and fingerprint identification method
CN106448665A (en) Voice processing device and method
CN105278860A (en) Mobile terminal image uploading device and method
CN106791187A (en) A kind of mobile terminal and NFC method
CN107105095A (en) A kind of sound processing method and mobile terminal
CN106778728A (en) A kind of mobile scanning terminal method, device and mobile terminal
CN104715262A (en) Method, device and mobile terminal for realizing smart label function by taking photos
CN107423600A (en) Mobile terminal and interface of mobile terminal locking means
CN106453843A (en) Method and terminal for preventing interference of RF signal to screen displaying

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Floor 6, Block A, Han's Innovation Building, No. 9018 North Central Avenue, Hi-tech Park, Nanshan District, Shenzhen, Guangdong 518057

Applicant after: Nubian Technologies Ltd.

Address before: Floor 6, Block A, Han's Innovation Building, No. 9018 North Central Avenue, Hi-tech Park, Nanshan District, Shenzhen, Guangdong 518057

Applicant before: Shenzhen ZTE Mobile Tech Co., Ltd.

RJ01 Rejection of invention patent application after publication

Application publication date: 20150520