CN110472459A - Method and apparatus for extracting feature points - Google Patents


Info

Publication number
CN110472459A
CN110472459A (application number CN201810451370.5A)
Authority
CN
China
Prior art keywords
feature point
point set
face
user
characteristic
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201810451370.5A
Other languages
Chinese (zh)
Other versions
CN110472459B (en)
Inventor
丁欣
李国良
郜文美
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Huawei Technologies Co Ltd
Original Assignee
Huawei Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Huawei Technologies Co Ltd filed Critical Huawei Technologies Co Ltd
Priority to CN201810451370.5A priority Critical patent/CN110472459B/en
Publication of CN110472459A publication Critical patent/CN110472459A/en
Application granted granted Critical
Publication of CN110472459B publication Critical patent/CN110472459B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 - Classification, e.g. identification

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Collating Specific Patterns (AREA)
  • Image Analysis (AREA)

Abstract

A method and apparatus for extracting feature points. This application provides a method, an apparatus and a mobile terminal for user identification, including: determining a first decision model according to a first feature point set corresponding to the face of a first user, where the first feature point set includes some of the feature points in a second feature point set, and the second feature point set is the set of all extractable feature points of the face of the first user; and determining, according to the first positions of the feature points of the first feature point set on the face of the first user, a third feature point set corresponding to a recognition object, where the second positions of the feature points of the third feature point set on the face of the recognition object correspond to the first positions. The method can obtain results faster while guaranteeing detection accuracy.

Description

Method and apparatus for extracting feature points
Technical field
This application relates to the terminal field, and more particularly, to a method, an apparatus and a mobile terminal for extracting feature points in the terminal field.
Background technique
Facial feature point technology detects the positions and contour information of the facial organs based on the features of a human face, and extracts the features that the face contains. Specifically, it is a technology for obtaining the positions of important feature points such as the eyes, the nose, the corners of the mouth, the eyebrows, and the contour points of each facial component. The technology is very widely applied and can be used for operations such as face recognition, face swapping, expression synthesis, and beautification. In face recognition, the detected feature point information is compared with existing facial feature point information to judge whether the faces correspond, thereby identifying identity. Face swapping models the face of one person and transforms it into the face of another person. Expression synthesis digitally transforms the facial feature points of a person under different expressions. Beautification partitions the face according to the facial feature points and then performs operations such as skin smoothing. In addition, current skin health monitoring projects also need to divide the facial region using a facial feature point scheme in order to perform subsequent skin detection.
Facial feature point technology not only has certain requirements on detection accuracy but, for different application scenarios, also has requirements on detection speed. In general, the more feature points a facial feature point detection scheme contains, the finer the facial contour model that can be established and the more accurate the result, but the more complex the algorithm and the slower the recognition. Conversely, the fewer the feature points, the simpler the algorithm and the faster the recognition, but the lower the recognition rate. The number of feature points has a great influence on the final detection performance; especially in scenarios with demanding real-time requirements, the contradiction between the accuracy and the speed of facial feature point detection is even more prominent.
Summary of the invention
This application provides a method, an apparatus and a mobile terminal for extracting feature points, which can obtain results faster while guaranteeing detection accuracy.
In a first aspect, a method for extracting feature points is provided, including: determining a first decision model according to a first feature point set corresponding to the face of a first user, where the first feature point set includes some of the feature points in a second feature point set, and the second feature point set is the set of all extractable feature points of the face of the first user; and determining, according to the first positions of the feature points of the first feature point set on the face of the first user, a third feature point set corresponding to a recognition object, where the third feature point set includes some of the feature points in a fourth feature point set, the fourth feature point set is the set of all extractable feature points of the face of the recognition object, and the second positions of the feature points of the third feature point set on the face of the recognition object correspond to the first positions.
Through the above technical solution, compared with the traditional "average face" universal model, the feature points of this application are oriented to an individual user and are more targeted, which can effectively avoid the negative factors that affect model training. Compared with existing fast feature point extraction schemes, this application can, through the comparison between the low-match model and the standard template, obtain results faster for different application scenarios while guaranteeing detection accuracy. In addition, in certain detection scenarios where abnormal conditions prevent some target regions from being reasonably detected, results can still be obtained by restoring the standard model from the feature points, so that the region division of the user's face remains reasonable and effective and results that degrade the user experience do not occur.
With reference to the first aspect, in certain implementations of the first aspect, the method further includes: determining, according to the matching degree between the third feature point set and the first decision model, whether the recognition object is the first user.
With reference to the first aspect and the above implementations, in certain implementations of the first aspect, the method further includes: determining a second decision model according to the second feature point set corresponding to the face of the first user; and
when the matching degree between the third feature point set and the first decision model is higher than a preset first threshold, determining, according to the feature points in the second feature point set other than the first feature point set, a fifth feature point set corresponding to the recognition object;
determining, according to the matching degree between the fifth feature point set and the second decision model, whether the recognition object is the first user.
Specifically, when the first user initializes the mobile phone, the first user is prompted to take photos under given conditions, for example with good lighting and at a moderate distance, to generate a user-specific facial feature point model, in a way similar to the matching-model acquisition for face unlock or fingerprint unlock. This model is based on the feature point model with the highest precision; the algorithm is time-consuming, but it can obtain as much feature point information as possible. This model is called the "standard model".
Optionally, one possible way of generating the "standard model" is: aligning the feature points of multiple pictures of the user and filtering out the influences of position offset, scale and rotation, to obtain the "standard model" of the user. It should be understood that obtaining the user model is essentially the same as obtaining the model of a general algorithm, in that both obtain an average model from samples; the difference is that the samples of the scheme provided in the embodiments of this application use only the data of the mobile phone's user (that is, the first user). The trained model can be understood as an "overfitted" version of the common model, applicable only to the detection of that user (the first user).
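The averaging just described can be sketched as follows. This is a minimal illustration under a simplifying assumption: the normalization below removes only translation and scale, not rotation, whereas the text also mentions filtering out rotation.

```python
import numpy as np

def build_standard_model(samples):
    """Average several landmark sets from one user into a 'standard model'.

    Each sample is an (N, 2) array of landmark coordinates. Every sample is
    normalized (centered and unit-scaled) before averaging, so position
    offset and scale differences between photos are filtered out.
    """
    normed = []
    for s in samples:
        s = np.asarray(s, dtype=float)
        s = s - s.mean(axis=0)        # remove position offset
        s = s / np.linalg.norm(s)     # remove scale differences
        normed.append(s)
    return np.mean(normed, axis=0)    # per-point average over the samples
```

Because every sample is reduced to the same normalized frame, two photos of the same face taken at different distances contribute the same shape to the average.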
After " master pattern " (i.e. the second decision model) that generates the first user, according to different scenes to face area feature Point precision and the requirement for detecting real-time, certain characteristic points are chosen from above-mentioned " standard form ", character pair is respectively trained " low to match model " (i.e. the first decision model) of point.Face characteristic point set as shown in Figure 4 shows 68 faces in Fig. 4 Portion's characteristic point, then include 68 all characteristic points set be exactly it is above-mentioned defined in " second feature point set ", include Set arbitrarily less than 68 characteristic points is exactly " fisrt feature point set " defined above.Wherein, fisrt feature point set has A variety of possibilities, for example including 30 characteristic points, 46 characteristic points etc..
The selection criterion for the feature points in the low-match model is determined by the real-time detection requirement and the feasibility of restoration: the average detection time of the "low-match model" is calculated, and if the number of points is below a certain quantity, the average detection time can satisfy the real-time requirement while the standard template can still be restored from the detected points.
With reference to the first aspect and the above implementations, in certain possible implementations, when the matching degree between the third feature point set and the first decision model is higher than the preset first threshold, the method further includes: determining the third feature point set of the recognition object according to the first decision model; determining the fifth feature point set of the recognition object according to the third positions of the feature points of the third feature point set on the face of the recognition object; and restoring the fourth feature point set of the recognition object according to the third feature point set and the fifth feature point set.
With reference to the first aspect and the above implementations, in certain possible implementations, the method further includes: determining the third feature point set of the recognition object according to the first decision model; dividing the facial region of the recognition object according to the feature points included in the third feature point set, and determining a first area in the facial region, where the first area is the region of the face of the recognition object on which image processing is to be performed; and performing an image processing operation on the first area.
With reference to the first aspect and the above implementations, in certain possible implementations, before the first decision model is determined according to the first feature point set corresponding to the face of the first user, the method further includes: determining the first feature point set corresponding to the face of the first user.
With reference to the first aspect and the above implementations, in certain possible implementations, the determining the first feature point set corresponding to the face of the first user includes:
determining the first feature point set according to the average time of extracting the feature points in the first feature point set corresponding to the face of the first user.
For example, several feature points are selected from the 68-point standard model shown in Fig. 4, for example models of 30 feature points and of 46 feature points are obtained. The detection time for extracting 30 feature points and the detection time for extracting 46 feature points are recorded; then, according to the current detection scenario, it is checked whether the detection time meets the real-time requirement. If the detection time of 46 feature points is greater than the detection time required by the current scenario, fewer than 46 feature points are chosen, and the number of feature points that meets the current real-time requirement is found by successive detection.
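The successive timing check described above can be sketched as a small search over candidate point counts. This is an illustrative sketch, not the patent's procedure verbatim; `detect_fn` is a hypothetical stand-in for one detection pass with an n-point model.

```python
import time

def pick_point_count(detect_fn, candidate_counts, budget_s, trials=5):
    """Pick the largest candidate point count whose average detection
    time fits within the real-time budget (in seconds).

    detect_fn(n) is assumed to run one detection pass with an n-point model.
    """
    best = None
    for n in sorted(candidate_counts):
        start = time.perf_counter()
        for _ in range(trials):
            detect_fn(n)
        avg = (time.perf_counter() - start) / trials
        if avg <= budget_s:
            best = n  # counts are ascending, so keep the largest that fits
    return best
```

For instance, with candidates of 30 and 46 points and a budget that only the 30-point model can meet, the search settles on 30, matching the fallback described in the text.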
It is " low to match mould using selected characteristic point (for example, 30 characteristic points) training after the quantity for having determined characteristic point Type ", and record the constraint relationship of " low with model " with " master pattern ".When subsequent progress rapidly extracting, first " low to match model " Point corresponding with " standard form " is aligned, then according to equally spaced positional relationship and the constraint relationship of record, reduction Other feature point in " standard form ".
Alternatively, the first feature point set is determined according to a second area of the face of the first user, where the second area is the region of the face of the recognition object on which image processing is to be performed.
Optionally, the terminal device determines the third feature point set of the recognition object according to the first decision model; it then divides the facial region of the recognition object according to the feature points included in the third feature point set and determines the first area in the facial region, where the first area is the region of the face of the recognition object on which image processing is to be performed; and it performs an image processing operation on the first area.
Alternatively, some of the feature points in the second feature point set corresponding to the face of the first user are extracted at intervals, and the extracted feature points are determined as the first feature point set.
For example, several feature points are selected from the 68-point standard template at equal intervals, for example one point out of every two, such as feature point 1, feature point 3, feature point 5 and so on, obtaining a 34-point overall skeleton model. The "low-match model" is trained with the selected feature points, and the constraint relationship between the "low-match model" and the "standard model" is recorded. In subsequent fast extraction, the points of the "low-match model" are first aligned with the corresponding points of the "standard template", and then the other feature points of the "standard template" are restored according to the equally spaced positional relationship and the recorded constraint relationship.
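The equal-interval selection (one point out of every two) amounts to a simple stride over the 68-point array, as sketched below:

```python
def subsample_landmarks(points, step=2):
    """Take every step-th landmark: points 1, 3, 5, ... in the text's
    1-based numbering, i.e. indices 0, 2, 4, ... here."""
    return points[::step]
```

Applied to the 68-point standard template with the default step of 2, this yields exactly the 34-point skeleton model described above.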
Or extracted in the characteristic point in the corresponding second feature point set of face of first user including this first The face contour feature point is determined as the fisrt feature point set by the face contour feature point of the face of user.
For example, certain stages of an application only need part of the facial information, such as the eyes, the nose and the mouth; in that case a 41-point facial contour model composed of only those feature points may be selected. The "low-match model" is trained with the selected feature points, and the constraint relationship between the "low-match model" and the "standard model" is recorded. If the outer contour information of the face is needed later, a complete detection is not required again: the points of the "low-match model" are first aligned with the corresponding points of the "standard model", and then the other feature points of the "standard model" are restored according to the positional relationship of the facial contour and the recorded constraint relationship. The process is shown in Fig. 5.
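Selecting a component subset can be sketched by grouping the 68 indices by facial part. The index ranges below follow the common iBUG-300W numbering, which is an assumption; the patent's Fig. 4 layout is not specified here. Under that assumption, eyes + nose + mouth happens to give exactly 41 points.

```python
# Hypothetical grouping of a 68-point layout by facial component
# (iBUG-300W-style index ranges; the patent's own numbering may differ).
LANDMARK_GROUPS = {
    "jaw": list(range(0, 17)),
    "eyebrows": list(range(17, 27)),
    "nose": list(range(27, 36)),
    "eyes": list(range(36, 48)),
    "mouth": list(range(48, 68)),
}

def select_contour_subset(points, groups=("eyes", "nose", "mouth")):
    """Keep only the landmarks belonging to the named facial components."""
    idx = sorted(i for g in groups for i in LANDMARK_GROUPS[g])
    return [points[i] for i in idx]
```

The same function can produce any other stage-specific subset, e.g. jaw-only for an outer-contour stage, without retraining the standard model.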
With reference to the first aspect and the above implementations, in certain possible implementations, the feature points in the first feature point set corresponding to the face of the first user include feature information under different deflection angles.
Since facial poses vary considerably in practical applications, pitch deflection, horizontal deflection and the like all make feature point detection more difficult; for example, certain feature points may be hidden, and the geometric relationships between feature points change. Using only a single standard template therefore has certain limitations. Optionally, the feature points of the face of the first user may be corrected from multiple angles, that is, the feature points in the first feature point set corresponding to the facial feature points of the first user include feature information under different deflection angles.
Considering that a user may have different usage habits and several usual postures, the user's facial poses can be divided into several classes, and a corresponding standard model is trained for each usual posture. A model-detection similarity is defined; during detection, a similarity is computed between the detected feature points and each model, and the model with the highest similarity is selected as the detection result.
In a second aspect, an apparatus for extracting feature points is provided, including:
a determination unit, configured to determine a first decision model according to a first feature point set corresponding to the face of a first user, where the first feature point set includes some of the feature points in a second feature point set, and the second feature point set is the set of all extractable feature points of the face of the first user;
where the determination unit is further configured to determine, according to the first positions of the feature points of the first feature point set on the face of the first user, a third feature point set corresponding to a recognition object, the third feature point set includes some of the feature points in a fourth feature point set, the fourth feature point set is the set of all extractable feature points of the face of the recognition object, and the second positions of the feature points of the third feature point set on the face of the recognition object correspond to the first positions.
With reference to the second aspect, in certain possible implementations, the apparatus further includes a judging unit, configured to determine, according to the matching degree between the third feature point set and the first decision model, whether the recognition object is the first user.
With reference to the second aspect and the above implementations, in certain possible implementations, the determination unit is further configured to determine a second decision model according to the second feature point set corresponding to the face of the first user; and
when the judging unit determines that the matching degree between the third feature point set and the first decision model is higher than a preset first threshold, the determination unit determines, according to the feature points in the second feature point set other than the first feature point set, a fifth feature point set corresponding to the recognition object;
the judging unit determines, according to the matching degree between the fifth feature point set and the second decision model, whether the recognition object is the first user.
With reference to the second aspect and the above implementations, in certain possible implementations, when the judging unit determines that the matching degree between the third feature point set and the first decision model is higher than the preset first threshold, the determination unit is further configured to:
determine the third feature point set of the recognition object according to the first decision model;
determine the fifth feature point set of the recognition object according to the third positions of the feature points of the third feature point set on the face of the recognition object; and
restore the fourth feature point set of the recognition object according to the third feature point set and the fifth feature point set.
With reference to the second aspect and the above implementations, in certain possible implementations, the determination unit is further configured to:
determine the third feature point set of the recognition object according to the first decision model;
divide the facial region of the recognition object according to the feature points included in the third feature point set, and determine a first area in the facial region, where the first area is the region of the face of the recognition object on which image processing is to be performed; and
perform an image processing operation on the first area.
With reference to the second aspect and the above implementations, in certain possible implementations, before determining the first decision model according to the first feature point set corresponding to the face of the first user, the determination unit is further configured to:
determine the first feature point set corresponding to the face of the first user.
With reference to the second aspect and the above implementations, in certain possible implementations, the determination unit determining the first feature point set corresponding to the face of the first user specifically includes:
the determination unit determines the first feature point set according to the average time of extracting the feature points in the first feature point set corresponding to the face of the first user; or
the determination unit determines the first feature point set according to a second area of the face of the first user, where the second area is the region of the face of the recognition object on which image processing is to be performed; or
some of the feature points included in the second feature point set corresponding to the face of the first user are extracted at intervals, and the determination unit determines the extracted feature points as the first feature point set; or
the facial contour feature points of the face of the first user are extracted from the feature points included in the second feature point set corresponding to the face of the first user, and the determination unit determines the facial contour feature points as the first feature point set.
With reference to the second aspect and the above implementations, in certain possible implementations, the feature points in the first feature point set corresponding to the face of the first user include feature information under different deflection angles.
In a third aspect, an apparatus is provided, including a processor coupled with a memory and configured to execute the instructions in the memory, so as to implement the method in any one of the possible implementations of the first aspect. Optionally, the apparatus further includes the memory, configured to store program instructions and data.
In a fourth aspect, a computer program product is provided. The computer program product includes computer program code; when the computer program code runs on a computer, the computer is caused to execute the method in any one of the possible implementations of the first aspect.
In a fifth aspect, a computer-readable medium is provided. The computer-readable medium stores program code; when the program code runs on a computer, the computer is caused to execute the method in any one of the possible implementations of the first aspect.
In a sixth aspect, a chip system is provided. The chip system includes a processor, configured to support a terminal device in implementing the method in any one of the possible implementations of the first aspect, for example obtaining or processing the data and/or information involved in the above method. In a possible design, the chip system further includes a memory, configured to store the program instructions and data necessary for the terminal device. The chip system may be composed of a chip, or may include a chip and other discrete devices.
Detailed description of the invention
Fig. 1 is a schematic diagram of an example of a terminal device to which the method for extracting feature points of this application is applicable.
Fig. 2 is a schematic flowchart of an example of the method for extracting feature points provided in the embodiments of this application.
Fig. 3 is a schematic diagram of an example of establishing a decision model provided in the embodiments of this application.
Fig. 4 is a schematic diagram of the facial feature points in the user identification method of this application.
Fig. 5 is a schematic diagram of another example of establishing a decision model provided in the embodiments of this application.
Fig. 6 is a schematic diagram of an example of a feature point extraction process provided in the embodiments of this application.
Fig. 7 is a schematic diagram of another example of a feature point extraction process provided in the embodiments of this application.
Fig. 8 is a schematic block diagram of an example of the apparatus for extracting feature points of this application.
Specific embodiment
Below in conjunction with attached drawing, the technical solution in the application is described.
The user identification method of this application can be applied to the identification of the user of a terminal device. A terminal device may also be called user equipment (User Equipment, UE), an access terminal, a subscriber unit, a subscriber station, a mobile station, a mobile, a remote station, a remote terminal, a mobile device, a user terminal, a terminal, a wireless communication device, a user agent or a user apparatus. The terminal device may be a station (STATION, ST) in a WLAN, a cellular phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a Wireless Local Loop (WLL) station, a Personal Digital Assistant (PDA) device, a handheld device with a wireless communication function, a computing device or another processing device connected to a wireless modem, a vehicle-mounted device, an Internet-of-Vehicles terminal, a computer, a laptop computer, a handheld communication device, a handheld computing device, a satellite radio device, a wireless modem card, a TV set-top box (set top box, STB), customer premise equipment (CPE) and/or another device for communicating on a wireless system, as well as a terminal device in a next-generation communication system, for example a terminal device in a 5G network or a terminal device in a future evolved Public Land Mobile Network (PLMN), and so on.
By way of non-limiting example, in the embodiments of this application, the terminal device may also be a wearable device. A wearable device, also called a wearable smart device, is the general term for wearable devices developed by applying wearable technology to the intelligent design of everyday wear, such as glasses, gloves, watches, clothing and shoes. A wearable device is worn directly on the body, or is a portable device integrated into the user's clothes or accessories. A wearable device is not only a hardware device; it also realizes powerful functions through software support, data interaction and cloud interaction. In a broad sense, wearable smart devices include devices that are full-featured and large-sized and can realize complete or partial functions without depending on a smartphone, such as smart watches or smart glasses, as well as devices that focus on only a certain class of application function and need to be used together with other devices such as a smartphone, for example various smart bracelets and smart jewelry for sign monitoring.
In addition, in the embodiments of this application, the terminal device may also be a terminal device in an Internet of Things (IoT) system. IoT is an important component of the future development of information technology; its main technical characteristic is connecting things to the network through communication technology, thereby realizing an intelligent network of human-machine interconnection and thing-thing interconnection.
Fig. 1 shows a schematic diagram of an example of the terminal device. As shown in Fig. 1, the terminal device 100 may include the following components.
A. RF circuit 110
The RF circuit 110 can be used for receiving and sending signals in the course of receiving and sending messages or during a call; in particular, after receiving downlink information from a base station, it delivers the information to the processor 180 for processing, and it sends uplink data to the base station. In general, an RF circuit includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer and the like. In addition, the RF circuit 110 can also communicate with the network and other devices through wireless communication. The wireless communication can use any communication standard or protocol, including but not limited to Wireless Local Area Network (WLAN), the Global System for Mobile communication (GSM), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), General Packet Radio Service (GPRS), Long Term Evolution (LTE), LTE Frequency Division Duplex (FDD), LTE Time Division Duplex (TDD), the Universal Mobile Telecommunication System (UMTS), Worldwide Interoperability for Microwave Access (WiMAX), the future 5th Generation (5G) system, New Radio (NR) and so on.
B. Memory 120
The memory 120 may be used to store software programs and modules. By running the software programs and modules stored in the memory 120, the processor 180 executes the various functional applications and data processing of the terminal device 100. The memory 120 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the terminal device 100 (such as audio data or a phone book) and the like. In addition, the memory 120 may include a high-speed random access memory, and may also include a nonvolatile memory, for example, at least one magnetic disk storage device, a flash memory device, or another volatile solid-state memory.
C. Other input devices 130
The other input devices 130 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal device 100. Specifically, the other input devices 130 may include, but are not limited to, one or more of a physical keyboard, function keys (such as a volume control key or a power key), a trackball, a mouse, a joystick, a light mouse (a touch-sensitive surface that does not display visual output, or an extension of a touch-sensitive surface formed by a touch screen), and the like. The other input devices 130 are connected to the other-input-device controller 171 of the I/O subsystem 170 and exchange signals with the processor 180 under the control of the other-input-device controller 171.
D. Display screen 140
The display screen 140 may be used to display information entered by the user or information provided to the user, as well as the various menus of the terminal device 100, and may also accept user input. Specifically, the display screen 140 may include a display panel 141 and a touch panel 142. The display panel 141 may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED), or the like. The touch panel 142, also referred to as a touch screen or touch-sensitive screen, may collect contact or contactless operations by the user on or near it (for example, an operation performed by the user on or near the touch panel 142 with a finger, a stylus, or any other suitable object or accessory, which may also include motion-sensing operations; the operations include operation types such as single-point control operations and multi-point control operations), and drive a corresponding connected apparatus according to a preset program. Optionally, the touch panel 142 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch position and gesture of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into information that the processor can process, and then sends it to the processor 180, and can receive and execute commands sent by the processor 180. In addition, the touch panel 142 may be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave, or implemented with any technology developed in the future. Further, the touch panel 142 may cover the display panel 141. According to the content displayed on the display panel 141 (the displayed content includes, but is not limited to, a soft keyboard, a virtual mouse, virtual keys, icons, and the like), the user may perform operations on or near the touch panel 142 covering the display panel 141. After detecting an operation on or near it, the touch panel 142 transmits the operation to the processor 180 through the I/O subsystem 170 to determine the user input, and the processor 180 then provides a corresponding visual output on the display panel 141 through the I/O subsystem 170 according to the user input. Although in Fig. 1 the touch panel 142 and the display panel 141 are two independent components that implement the input and output functions of the terminal device 100, in some embodiments the touch panel 142 and the display panel 141 may be integrated to implement the input and output functions of the terminal device 100.
E. Sensors 150
There may be one or more sensors 150, which may include, for example, an optical sensor, a motion sensor, and other sensors.
Specifically, the optical sensor may include an ambient light sensor and a proximity sensor. The ambient light sensor may adjust the brightness of the display panel 141 according to the brightness of the ambient light, and the proximity sensor may turn off the display panel 141 and/or the backlight when the terminal device 100 is moved to the ear.
As a kind of motion sensor, an acceleration sensor can detect the magnitude of acceleration in all directions (generally on three axes), and can detect the magnitude and direction of gravity when stationary. It can be used in applications that identify the posture of the terminal device (such as landscape/portrait switching, related games, and magnetometer pose calibration) and in vibration-recognition-related functions (such as a pedometer or tapping).
In addition, the terminal device 100 may also be configured with other sensors such as a gravity sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, and details are not described herein.
F. Audio circuit 160, loudspeaker 161, microphone 162
These components may provide an audio interface between the user and the terminal device 100. The audio circuit 160 may transmit the signal converted from the received audio data to the loudspeaker 161, which converts it into a sound signal for output; on the other hand, the microphone 162 converts a collected sound signal into an electrical signal, which is received by the audio circuit 160 and converted into audio data. The audio data is then output to the RF circuit 110 to be sent to, for example, another terminal device, or output to the memory 120 for further processing.
G. I/O subsystem 170
The I/O subsystem 170 is used to control external input and output devices, and may include the other-input-device controller 171, the sensor controller 172, and the display controller 173. Optionally, one or more other-input-device controllers 171 receive signals from and/or send signals to the other input devices 130. The other input devices 130 may include physical buttons (push buttons, rocker buttons, and the like), a dial, a slide switch, a joystick, a click wheel, and a light mouse (a touch-sensitive surface that does not display visual output, or an extension of a touch-sensitive surface formed by a touch screen). It is worth noting that the other-input-device controller 171 may be connected to any one or more of the above devices. The display controller 173 in the I/O subsystem 170 receives signals from and/or sends signals to the display screen 140. After the display screen 140 detects a user input, the display controller 173 converts the detected user input into an interaction with a user interface object displayed on the display screen 140, that is, realizes human-computer interaction. The sensor controller 172 may receive signals from and/or send signals to one or more sensors 150.
H. Processor 180
The processor 180 is the control center of the terminal device 100. It connects the various parts of the entire terminal device through various interfaces and lines, and performs the various functions of the terminal device 100 and processes data by running or executing the software programs and/or modules stored in the memory 120 and invoking the data stored in the memory 120, thereby monitoring the terminal device as a whole. Optionally, the processor 180 may include one or more processing units; preferably, the processor 180 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 180.
The terminal device 100 further includes a power supply 190 (such as a battery) that supplies power to the various components. Preferably, the power supply may be logically connected to the processor 180 through a power management system, so that functions such as managing charging, discharging, and power consumption are implemented through the power management system.
In addition, although not shown, the terminal device 100 may also include a camera, a Bluetooth module, and the like, and details are not described herein.
Fig. 2 shows a schematic illustration of an example of the method 200 for extracting feature points of the present application. For example, the method 200 may be applied to the above terminal device 100.
As shown in Fig. 2, in S210, the terminal device determines a first decision model according to a first feature point set corresponding to the face of a first user. The first feature point set includes some of the feature points in a second feature point set, and the second feature point set is the set of all extractable feature points of the face of the first user.
Here, the first user may be the private user of the terminal device. With rapid economic development, all kinds of terminal devices have covered most of the population and become indispensable products in people's lives; smartphones in particular have long since replaced feature phones as the mainstream of the market. Based on the private nature of a personal terminal device, generating a user-specific feature point "standard model" can effectively improve extraction speed while guaranteeing the detection accuracy of the feature point algorithm. This application will be described in detail with a smartphone as the user equipment.
First, the application scenarios of facial feature point technology involved in this application are explained. The main scenarios enumerated in this application are as follows.
A. Face recognition (Face Recognition). Specifically, the detected feature point information is compared with existing facial feature point information to judge whether it corresponds to the same face, thereby identifying identity. Face recognition implements detection, analysis, and comparison of faces in images or videos, including independent service modules such as face detection and localization, face attribute recognition, and face comparison, and can provide developers and enterprises with high-performance online application program interface (Application Program Interface, API) services, applied in various scenarios such as face augmented reality (Augmented Reality, AR), face recognition and authentication, large-scale face retrieval, and photo management.
B. Face alignment. Specifically, this refers to modeling the facial variation of one person and transforming it into the face of another person. Face alignment means finding the face location and then locating the positions of the facial feature points, such as the left side of the nose, the underside of the nostrils, the position of the pupils, and the underside of the upper lip. Face alignment can be understood as facial key point localization or facial feature localization. It is currently mainly applied in face-swap special effects, gender recognition, age recognition, intelligent stickers, and the like.
C. Expression synthesis. Specifically, digital conversion is performed according to the feature point configurations of a face under different expressions. Expression synthesis technology can distinguish specific emotional states from still images or dynamic video sequences, thereby determining the mental emotion of the identified object. For example, large public facial expression recognition databases currently contain samples of seven different classes: anger, disgust, fear, happiness, neutral, sadness, and surprise.
D. Beautification. Specifically, the face is partitioned into regions according to some of the facial feature points, and after partitioning, corresponding beautification operations are performed on the different regions. In addition, skin health monitoring projects currently also need to use facial feature point schemes to divide the face into regions for subsequent operations such as skin detection.
In addition to the above basic face recognition scenarios, there are currently a large number of other application scenarios based on face recognition.
E. Screen unlock operation. Specifically, in order to prevent misoperation and improve the safety of the terminal device, the user may lock the screen, or the terminal device may lock the screen by itself when no user operation on the terminal device is detected within a specified time. Thus, when the user needs to unlock the screen, a correct unlock operation must be performed. Examples include various unlock operations such as slide unlock, password unlock, pattern unlock, and image recognition.
The user identification involved in this application mainly identifies the object to be detected through facial feature point technology. It mainly detects the organ positions of the face based on the facial features of a person and extracts the features contained in the face from the contour information, so as to judge, by comparing feature point information, whether the object identified by the terminal device is the user stored by the terminal device.
F. Application unlock operation. Specifically, in order to improve the safety of the terminal device, when the user needs to open a certain application (for example, a chat application or a payment application), the terminal device or the application may pop up an unlock interface, and the application can be started normally only after the terminal device performs correct identification. Alternatively, when the user needs to use a certain function of an application (for example, a transfer function or a query function), the terminal device or the application may likewise pop up an unlock interface, and the function can be started normally only after the terminal device performs correct identification.
G. Operations directed at a specified application program. For example, as a non-limiting example, the specified application program may be an application program set by the user, such as a chat application or a payment application; alternatively, the specified application program may be an application program set by the manufacturer, the operator, or the like.
It should be understood that the specific content included in the application scenarios of facial feature point technology listed above is merely an illustrative description, and this application is not limited thereto.
As introduced in the background, feature point detection faces a contradiction between precision and speed. In the feature point detection process, the more feature points are included, the more detailed the skeleton model that can be established and the more accurate the result, but the more complex the algorithm and the slower the recognition. Conversely, the fewer the feature points, the simpler the algorithm and the faster the recognition, but the lower the recognition rate. The number of feature points has a great influence on the final performance, and especially in scenarios with high real-time requirements, the contradiction between precision and speed becomes more prominent.
In the embodiments of the present application, based on the particularity of a private mobile phone, the initialization process may request the user's cooperation to collect as much facial information as possible and generate a user-specific feature point "standard model", while selecting some of the feature points in the "standard model" to train a "low-match model" for use in subsequent processes.
Fig. 3 is a schematic diagram of establishing a template according to an example provided by the embodiments of this application. Specifically, as shown in Fig. 3, the first user starts the initialization process and establishes the model according to the execution diagram shown in Fig. 3. The first user captures specific pictures under specific conditions so as to obtain as much facial information as possible. Optionally, the "specific pictures" here may be pictures, including as many feature points as possible, shot when the first user's face is not deflected or occluded and external conditions do not interfere; the "specific conditions" may be cases where the natural light is uniform and there are no shadows or other occlusions. In short, when establishing the decision model, the feature information of as many facial feature points of the first user as possible should be collected. It should be understood that starting the initialization process here may occur in various possible scenarios, including but not limited to when the first user uses the mobile phone for the first time or when the decision model of the mobile phone is updated.
Specifically, when the first user initializes the mobile phone, the first user is prompted to cooperate in taking pictures under specific conditions, such as good brightness and moderate distance, to generate a user-specific facial feature point model, in a way similar to the matching-model acquisition for face unlock or fingerprint unlock. This model is based on the feature point model with the highest precision; the algorithm is more time-consuming, but it can obtain as much feature point information as possible. This model is referred to as the "standard model".
Optionally, one possible way of generating the "standard model" is: aligning the feature points of multiple pictures of the user and filtering out the influences brought by positional offset, scale, and rotation, to obtain the user's "standard model". It should be understood that the acquisition of the user model is essentially consistent with that of a general algorithm's model — both obtain an average model from samples — except that the samples selected by the scheme provided by the embodiments of this application use only the data of the mobile phone's user (i.e., the first user). The resulting trained model can be understood as an "overfitted" version of the common model, applicable only to the detection of that user (i.e., the first user).
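The align-and-average step described above can be sketched roughly as follows. This is an illustrative outline, not the patent's implementation: it removes only translation and scale (a full Procrustes alignment would also remove rotation), and all function names and the toy data are invented for the example.

```python
import numpy as np

def normalize_shape(points):
    """Remove translation and scale from an (N, 2) landmark array."""
    pts = np.asarray(points, dtype=float)
    pts = pts - pts.mean(axis=0)           # center at the origin
    norm = np.linalg.norm(pts)
    return pts / norm if norm > 0 else pts

def build_standard_model(samples):
    """Average the aligned landmark sets of one user's samples.

    `samples` is a list of (N, 2) arrays from the same user; the
    result plays the role of the user-specific "standard model".
    """
    aligned = [normalize_shape(s) for s in samples]
    return np.mean(aligned, axis=0)

# Two synthetic 3-point "faces": the same shape, shifted and scaled
model = build_standard_model([
    [[0, 0], [2, 0], [1, 2]],
    [[10, 10], [14, 10], [12, 14]],
])
```

Because the two toy samples differ only in translation and scale, the averaged model coincides with either normalized sample, which is the "overfitting to one user" effect the text describes.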
This application collectively refers to the set of all extractable feature points of the first user's face as the "second feature point set", and a partial feature point set within the second feature point set as a "first feature point set". The standard model generated according to the second feature point set is called the "second decision model", and a decision model generated according to a first feature point set is called a "first decision model". The second decision model thus includes the most facial feature points of the first user; meanwhile, there are many possible first feature point sets, and each possible first feature point set can correspond to one first decision model. That is, the second decision model is the unique, exclusive "standard model" of the first user, and a first decision model is a "low-match model" of the first user.
After the "standard model" of the first user (i.e., the second decision model) has been generated, certain feature points are chosen from the "standard model" according to the requirements of different scenarios for facial feature point precision and detection real-time performance, and a "low-match model" (i.e., a first decision model) of the corresponding points is trained for each. The facial feature point set shown in Fig. 4 contains 68 facial feature points; the set including all 68 feature points is the "second feature point set" defined above, and any set including fewer than 68 feature points is a "first feature point set" defined above. There are many possible first feature point sets, for example ones including 30 feature points, 46 feature points, and so on.
The criterion for selecting the feature points in the low-match model is determined by the detection real-time performance and the feasibility of restoration: the average time of detecting the "low-match model" is calculated, and if the number of points is less than a certain amount, the average detection time can satisfy the real-time requirement, while the standard model can also be restored from the detected points.
Specifically, various possible cases of selecting feature points are, for example, as follows.
(1) The first feature point set is determined according to the average time of extracting the feature points in the first feature point set corresponding to the face of the first user.
For example, several feature points are selected from the 68-point standard model shown in Fig. 4, for example obtaining models of 30 feature points and 46 feature points. The detection time of extracting the 30 feature points and the detection time of extracting the 46 feature points are recorded, and then, according to the current detection scenario, whether the detection time meets the real-time requirement is checked. If the detection time of the 46 feature points is greater than the detection time required by the current scenario, fewer than 46 feature points are chosen, and the number of feature points that meets the current real-time requirement is found by successive detection.
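The timing-driven choice of point count could be sketched like this. `detect` stands in for any landmark detector, and the timing figures, names, and the 30 fps budget are placeholders for illustration rather than values from the patent.

```python
import time

def average_detect_time(detect, frames, n_runs=5):
    """Mean wall-clock time of `detect` over `frames`, in seconds."""
    start = time.perf_counter()
    for _ in range(n_runs):
        for frame in frames:
            detect(frame)
    return (time.perf_counter() - start) / (n_runs * len(frames))

def choose_point_count(timings, budget):
    """Pick the largest landmark count whose mean detection time fits
    the real-time budget (seconds per frame).

    `timings` maps point count -> measured mean time, e.g. the
    46-point and 30-point sub-models mentioned in the text.
    """
    feasible = [n for n, t in timings.items() if t <= budget]
    return max(feasible) if feasible else min(timings)

# If the 46-point model is too slow for a 30 fps budget, fall back:
count = choose_point_count({46: 0.040, 30: 0.020}, budget=1 / 30)
```

With the placeholder timings above, the 46-point model exceeds the per-frame budget, so the 30-point sub-model is selected.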
After the number of feature points has been determined, the selected feature points (for example, 30 feature points) are used to train the "low-match model", and the constraint relationship between the "low-match model" and the "standard model" is recorded. In subsequent rapid extraction, the "low-match model" is first aligned with the corresponding points of the "standard model", and then the other feature points in the "standard model" are restored according to the equally spaced positional relationships and the recorded constraint relationship.
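One minimal way to record and replay a constraint relationship of this kind is to store each standard-model point's offset from the subset's centroid. This handles only translation (the alignment described in the text would also account for scale and rotation), and all names and the toy points are illustrative assumptions.

```python
import numpy as np

def record_constraints(standard, subset_idx):
    """Offset of every standard-model point from the subset centroid."""
    std = np.asarray(standard, dtype=float)
    anchor = std[subset_idx].mean(axis=0)
    return std - anchor

def restore_full_model(detected_subset, constraints):
    """Rebuild the full point set from a sparse detection by replaying
    the recorded offsets around the detected subset's centroid
    (translation only; scale and rotation handling omitted)."""
    anchor = np.asarray(detected_subset, dtype=float).mean(axis=0)
    return anchor + constraints

# Toy 4-point "standard model" with a 2-point "low-match" subset
standard = [[0, 0], [4, 0], [0, 4], [4, 4]]
subset_idx = [0, 1]
constraints = record_constraints(standard, subset_idx)
# The subset is later detected shifted by (10, 10); restore the rest
restored = restore_full_model([[10, 10], [14, 10]], constraints)
```

Here the restored four points are exactly the standard model translated to the detected position, which mirrors the "align, then restore the remaining points" flow described above.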
(2) The first feature point set is determined according to a second area of the face of the first user, where the second area is a region of the face of the identification object in which image processing is to be performed.
Optionally, the terminal device determines a third feature point set of the identification object according to the first decision model; then divides the face area of the identification object according to the feature points included in the third feature point set, and determines a first area in the face area, where the first area is a region of the face of the identification object in which image processing is to be performed; and performs the image processing operation on the first area.
For example, feature points 0-5, feature points 12-17, feature points 6-10, and feature points 18-22 are selected from the 68-point standard model in Fig. 4, and the region where these feature points are located is divided as the eye region. In image processing such as beautification, when executing functions such as dark-circle removal or eye brightening, the corresponding processing is mainly performed on this eye region.
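Deriving the eye region from those indices might look like the following sketch. The index ranges follow the text's Fig. 4 numbering (which differs from other common 68-point numbering schemes), and the landmark data, margin, and function name are synthetic assumptions.

```python
def region_bbox(landmarks, indices, margin=0):
    """Axis-aligned bounding box over the selected landmarks."""
    xs = [landmarks[i][0] for i in indices]
    ys = [landmarks[i][1] for i in indices]
    return (min(xs) - margin, min(ys) - margin,
            max(xs) + margin, max(ys) + margin)

# Index groups taken from the text's Fig. 4 numbering for the eye region
EYE_IDX = (list(range(0, 6)) + list(range(6, 11))
           + list(range(12, 18)) + list(range(18, 23)))

# Toy landmark list: point i sits at (i, 2 * i)
landmarks = [(i, 2 * i) for i in range(68)]
eye_box = region_bbox(landmarks, EYE_IDX, margin=2)
```

An operation such as dark-circle removal would then be applied only to pixels inside `eye_box` rather than to the whole face.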
(3) Some feature points are extracted at intervals from the feature points included in the second feature point set corresponding to the face of the first user, and the extracted feature points are determined as the first feature point set.
For example, several feature points are selected from the 68-point standard model at equal intervals, for example taking one point from every two points, such as feature point 1, feature point 3, feature point 5, and so on, to obtain a 34-point whole-skeleton model. The selected feature points are used to train the "low-match model", and the constraint relationship between the "low-match model" and the "standard model" is recorded. In subsequent rapid extraction, the "low-match model" is first aligned with the corresponding points of the "standard model", and then the other feature points in the "standard model" are restored according to the equally spaced positional relationships and the recorded constraint relationship.
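The every-other-point sampling reduces to a one-line index selection; the function name is invented for illustration.

```python
def interval_subset(n_points, step=2, start=1):
    """Indices kept when sampling every `step`-th landmark.

    With 68 points, step=2, and start=1 this keeps points
    1, 3, 5, ... 67 — the 34-point whole-skeleton model
    described in the text.
    """
    return list(range(start, n_points, step))

subset = interval_subset(68)
```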
(4) The facial contour feature points of the face of the first user are extracted from the feature points included in the second feature point set corresponding to the face of the first user, and the facial contour feature points are determined as the first feature point set.
For example, certain applications only need partial facial information, such as the eyes, nose, and mouth; in that case, only the 41-point facial contour model composed of the above feature points may be selected. The selected feature points are used to train the "low-match model", and the constraint relationship between the "low-match model" and the "standard model" is recorded. If the outer contour information of the face is subsequently needed, full detection is not required again: the "low-match model" only needs to be aligned with the corresponding points of the "standard model", and then the other feature points in the "standard model" are restored according to the positional relationships of the facial contour and the recorded constraint relationship. This process is shown in Fig. 5.
The above lists four possible cases of selecting feature points in the process of establishing the low-match model. It should be understood that these cases may be used alone or in combination; this application includes but is not limited to them.
The above describes in detail the process of establishing the "standard model" and low-match model of the first user. However, in practical applications, facial posture changes a lot, and pitch deflection, horizontal deflection, and the like all bring difficulty to feature point detection; for example, certain feature points may be hidden, and the geometric relationships between feature points change. Using only a single standard template therefore has certain limitations. Optionally, multi-angle correction may be performed on the feature points of the face of the first user, so that the feature points in the first feature point set corresponding to the face of the first user include feature information under different deflection angles.
Considering that the user may have different usage habits and several usual postures, the user's facial postures can be divided into several classes, and corresponding standard models can be trained for the different usual postures. A model-detection similarity is defined; at detection time, each model is used to perform a similarity calculation with the detected feature points, and the model with the highest similarity is selected to give the detection result.
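Selecting the best pose model by similarity, as described above, could be sketched as follows. The similarity measure (negative mean landmark distance), the pose labels, and the toy point sets are all assumptions made for illustration.

```python
import numpy as np

def similarity(model, detected):
    """Simple similarity: negative mean landmark distance."""
    m = np.asarray(model, dtype=float)
    d = np.asarray(detected, dtype=float)
    return -float(np.linalg.norm(m - d, axis=1).mean())

def best_pose_model(pose_models, detected):
    """Name of the pose model most similar to the detected points."""
    return max(pose_models, key=lambda k: similarity(pose_models[k], detected))

# Toy per-pose standard models and a detection slightly deflected
pose_models = {
    "0deg": [[0, 0], [4, 0]],
    "5deg": [[1, 0], [5, 0]],
}
best = best_pose_model(pose_models, [[0.9, 0], [4.9, 0]])
```

The detection above lies closest to the 5° template, so that pose model would supply the decision result.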
During the user's use, a sample can be retained in every detection; each sample can be compared in deflection angle with the single-posture standard model to train models of different postures. For example, a -5° to 5° template can be established from the 0° template, a 5° to 10° template can in turn be obtained by slightly deflecting the 0° to 5° posture template, and so on, yielding models of the user's different postures. The specific process is shown schematically in Fig. 5. The specific steps are as follows.
(1) First, the user-specific 0°, non-offset standard model is obtained using the average face model. For this step, reference may be made to the detailed process above of establishing the standard model of the first user (i.e., the second decision model); for brevity, details are not described herein again.
(2) The feature point configurations at different deflection angles are projected using the above standard model — for example, cases where certain feature points overlap or are missing — to form multi-angle reference models.
(3) The above multi-angle reference models are corrected using deflection-angle face data provided with the user's cooperation, or offset face data from the user's habitual postures.
(4) The user-specific multi-posture standard models are generated.
Through the above process, the exclusive multi-posture standard models of the first user are established. In practical applications, for different change situations and the user's different usual postures, generating multi-posture models on the basis of the single-posture model (i.e., the second decision model) enhances robustness.
In S220, the terminal device determines, according to first positions of the feature points in the first feature point set on the face of the first user, a third feature point set corresponding to an identification object. The third feature point set includes some of the feature points in a fourth feature point set, the fourth feature point set is the set of all extractable feature points of the face of the identification object, and second positions of the feature points in the third feature point set on the face of the identification object correspond to the first positions.
In S210, the standard model (i.e., the second decision model) and the low-match model (i.e., the first decision model) of the first user have been established, and user identification is performed according to the decision models, that is, identifying whether the current user is the first user. Taking the 68-point standard model and the 34-point whole-skeleton model as an example, suppose the current identification object is #A. According to the positions of the 34 feature points in the 34-point whole-skeleton model on the face of the original first user, 34 feature points are extracted at the same positions of the face of the current identification object #A to obtain their feature information. The set of 34 feature points extracted at the same positions of the face of the current identification object #A is collectively referred to as the "third feature point set".
In the same way, for the current identification object #A, the set of 68 feature points corresponding to the 68-point standard model is collectively referred to as the "fourth feature point set", and the set of feature points among the 68 feature points other than the feature points included in the third feature point set is collectively referred to as the "fifth feature point set".
Optionally, the terminal device determines, according to the matching degree between the third feature point set and the first decision model, whether the identification object is the first user.
Specifically, for the current identification object #A, after the third feature point set of identification object #A is obtained as above, the matching degree between the feature information of the feature points in the third feature point set and the feature information of the feature points in the first decision model is compared. Thresholds may be set according to different application scenarios; for application scenarios involving property safety such as payment, a higher threshold may be set. For example, when the terminal determines that the feature information of at least 32 of the 34 feature points matches the first decision model, the current identification object #A is judged to be the first user, and operations such as payment can be performed. For scenarios with low precision requirements such as phone unlocking, a lower threshold may be set; for example, when the terminal determines that the feature information of 25 or more of the 34 feature points matches the first decision model, the current identification user #A is judged to be the first user, and the relevant operation can be performed.
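The scene-dependent thresholding could be sketched as follows. The per-point matching test (a simple distance tolerance) is a stand-in for whatever feature comparison the detector actually uses; apart from the 32-of-34 and 25-of-34 thresholds from the text, all names, tolerances, and toy data are invented.

```python
def matched_point_count(detected, model, tol=1.0):
    """Count landmarks whose detected position lies within `tol`
    of the model position (a stand-in for real feature matching)."""
    count = 0
    for (dx, dy), (mx, my) in zip(detected, model):
        if ((dx - mx) ** 2 + (dy - my) ** 2) ** 0.5 <= tol:
            count += 1
    return count

def is_first_user(detected, model, threshold):
    """Scene-dependent decision, e.g. threshold=32 of 34 points for a
    payment scenario versus threshold=25 of 34 for screen unlock."""
    return matched_point_count(detected, model) >= threshold

# Toy 34-point model; 29 detections are close, 5 are far off
model = [(i, i) for i in range(34)]
detected = ([(i + 0.5, i) for i in range(29)]
            + [(i + 9.0, i) for i in range(29, 34)])
unlock_ok = is_first_user(detected, model, threshold=25)    # unlock scene
payment_ok = is_first_user(detected, model, threshold=32)   # payment scene
```

With 29 of 34 points matching, the toy detection passes the unlock threshold but fails the stricter payment threshold, which is exactly the asymmetry the text motivates.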
Optionally, when the degree of matching between the third feature point set and the first decision model is higher than a preset first threshold, the fifth feature point set corresponding to the identification object is determined according to the feature points in the second feature point set other than those in the first feature point set; whether the identification object is the first user is then determined according to the degree of matching between the fifth feature point set and the second decision model.
For application scenarios with high precision requirements, in order to better guarantee the security of the operation, the reduced model may first be used to determine whether the feature information of the feature points of the current identification object matches. When a match is determined according to the reduced model, whether the current identification user #A is the first user is further determined according to the standard model of the first user. For example, the degree of matching between the feature information of the feature points in the fifth feature point set of the current identification user #A and the feature information of the corresponding feature points in the original standard model may be judged. When this degree of matching is higher than a second threshold, the current identification user #A is judged to be the first user.
Optionally, in another possible application scenario, abnormal conditions may prevent certain target regions of the user's face from being reasonably detected, which degrades the user experience. In this application, when the degree of matching between the third feature point set and the first decision model is higher than the preset first threshold, the terminal device determines the third feature point set of the identification object according to the first decision model; then, according to the third positions, on the face of the identification object, of the feature points in the third feature point set, it determines the fifth feature point set of the identification object; and it further restores the fourth feature point set of the identification object according to the third feature point set and the fifth feature point set.
The specific processing procedure is shown in Fig. 6. The terminal device may first determine, according to the reduced model, whether the feature information of the feature points of the current identification object matches. When the feature information matches according to the reduced model, the currently identified user is judged to be the first user; the standard model is then restored according to the standard-model restoration algorithm of the first user. To restore it, the feature points of the current identification user do not need to be extracted again; instead, the feature points other than those included in the reduced model, i.e., the fifth feature point set, are taken directly from the original standard model. The standard model is then obtained by combining the third feature point set and the fifth feature point set. This restoration process is equivalent to obtaining the feature information of all feature points of the current identification user while extracting the feature information of only a smaller number of feature points, which simplifies computation and allows high-precision feature point information to be restored quickly.
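A sketch of this restoration step follows; the index layout is an assumption made for illustration, with `low_indices` marking which of the 68 standard-model slots the 34-point reduced model occupies:

```python
def restore_full_set(third_set, standard_model, low_indices):
    # Start from the stored 68-point standard model, then overwrite the
    # slots covered by the reduced model with the freshly detected points,
    # so only 34 points ever need to be extracted from the image.
    full = list(standard_model)
    for idx, point in zip(low_indices, third_set):
        full[idx] = point
    return full
```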
In addition, a user may have different usage habits and therefore several habitual postures. The user's face postures can be divided into several classes, and a corresponding standard template can be trained for each habitual posture, i.e., the user-specific multi-pose standard models described above. The multi-pose feature point detection and restoration process is shown in Fig. 7, with the following specific steps:
(1) detect the current identification user #A using the existing multi-pose standard templates, trying the user's habitual postures first;
(2) select the detection result of the template with the highest similarity; this result contains the deviation angle and the partial feature point relationship;
(3) restore the 0° standard template according to the correspondence between the multi-pose template at the deviation angle obtained in the above detection and the feature points of the 0° standard template.
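Steps (1) and (2) amount to scoring each pose template and keeping the best one. A minimal sketch, in which the similarity measure and the template structure are illustrative assumptions:

```python
def similarity(detected, template_points):
    # Negative mean absolute coordinate error: higher means more similar.
    err = sum(abs(cx - mx) + abs(cy - my)
              for (cx, cy), (mx, my) in zip(detected, template_points))
    return -err / len(detected)

def best_pose(detected, templates, habitual_first=()):
    # Try the user's habitual poses first, then the rest, and return the
    # (name, score) pair of the highest-similarity template.
    order = list(habitual_first) + [n for n in templates if n not in habitual_first]
    scored = [(name, similarity(detected, templates[name])) for name in order]
    return max(scored, key=lambda item: item[1])
```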
The above multi-pose feature point detection takes into account the variation of face posture in practical applications and the habitual postures of different users, and generates multi-pose templates on the basis of the single-pose template, so that existing information is used more effectively and robustness is enhanced.
Through the above technical solution, compared with the traditional "average face" universal model, the feature points in this application are oriented to an individual user, making the approach more targeted and effectively avoiding the negative factors that affect model training. Compared with existing schemes for rapidly extracting feature points, this application can, through comparison between the reduced model and the standard template, obtain results faster for different application scenarios while still guaranteeing detection precision. In addition, in detection scenarios where abnormal conditions prevent certain target regions from being reasonably detected, results can still be obtained by restoring the standard model from the feature points, so that the division of the regions of the user's face remains reasonable and effective and results that degrade the user experience do not occur.
It should be understood that the method provided by the embodiments of the present application can be applied not only to single-user scenarios but also to multi-user scenarios, i.e., standard templates for multiple people can be established and matched in the same way. Compared with the single-user scenario, the only difference lies in the selection criterion used when restoring the standard template from the reduced model.
Fig. 8 is a schematic block diagram of a device 800 for extracting feature points provided by an embodiment of the present application.
As shown in Fig. 8, the device 800 includes:
a determination unit 810, configured to determine a first decision model according to a first feature point set corresponding to the face of a first user, where the first feature point set includes a part of the feature points in a second feature point set, and the second feature point set is the set of all extractible feature points of the face of the first user.
The determination unit 810 is further configured to determine, according to the first positions of the feature points in the first feature point set on the face of the first user, a third feature point set corresponding to an identification object, where the third feature point set includes a part of the feature points in a fourth feature point set, the fourth feature point set is the set of all extractible feature points of the face of the identification object, and the second positions of the feature points in the third feature point set on the face of the identification object correspond to the first positions.
Optionally, the device further includes a judging unit 820, configured to determine whether the identification object is the first user according to the degree of matching between the third feature point set and the first decision model.
Optionally, the determination unit 810 is further configured to determine a second decision model according to the second feature point set corresponding to the face of the first user. When the judging unit determines that the degree of matching between the third feature point set and the first decision model is higher than a preset first threshold, the determination unit 810 determines a fifth feature point set corresponding to the identification object according to the feature points in the second feature point set other than those in the first feature point set; the judging unit then determines whether the identification object is the first user according to the degree of matching between the fifth feature point set and the second decision model.
Optionally, when the judging unit 820 determines that the degree of matching between the third feature point set and the first decision model is higher than the preset first threshold, the determination unit 810 is further configured to determine the third feature point set of the identification object according to the first decision model; determine the fifth feature point set of the identification object according to the third positions of the feature points in the third feature point set on the face of the identification object; and restore the fourth feature point set of the identification object according to the third feature point set and the fifth feature point set.
Optionally, the determination unit 810 is further configured to determine the third feature point set of the identification object according to the first decision model; divide the face area of the identification object according to the feature points included in the third feature point set and determine a first area in the face area, where the first area is the area of the face of the identification object on which image processing is to be performed; and perform image processing operations on the first area.
Optionally, before determining the first decision model according to the first feature point set corresponding to the face of the first user, the determination unit 810 is further configured to determine the first feature point set corresponding to the face of the first user.
Optionally, the determination unit 810 may determine the first feature point set corresponding to the face of the first user in any of the following ways:
(1) the determination unit 810 determines the first feature point set according to the average number of times that the feature points in the first feature point set corresponding to the face of the first user are extracted; or
(2) determination unit determines the fisrt feature point set according to the second area of the first user face, this second Region is the region of the pending image procossing of face of the identification object;Or
(3) a part of the feature points included in the second feature point set corresponding to the face of the first user is extracted at intervals, and the determination unit determines the extracted part of the feature points as the first feature point set; or
(4) face contour feature points of the face of the first user are extracted from the feature points included in the second feature point set corresponding to the face of the first user, and the determination unit determines the face contour feature points as the first feature point set.
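Option (3), sampling at intervals, can be illustrated in a single line; the stride of 2 is an assumption chosen to match the 68-point to 34-point example used earlier:

```python
def sample_at_intervals(second_feature_point_set, stride=2):
    # Keep every stride-th point of the full set as the reduced first
    # feature point set (e.g. 68 points -> 34 points with stride 2).
    return second_feature_point_set[::stride]
```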
In addition, considering that a user may have different usage habits and several habitual postures, the user's face postures can be divided into several classes, and a corresponding standard model can be trained for each habitual posture. A similarity for model detection is defined: during detection, the similarity between the detected feature points and each model is calculated, and the model with the highest similarity is selected as the detection result. During use, a sample is retained for each detection; each sample can be compared with the single-posture standard model in terms of deviation angle, so that models for the different postures are trained. The feature points in the first feature point set corresponding to the face of the first user include feature information at different deviation angles.
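The per-posture sample bucketing described above might look like the following sketch; the angle bins and the way the deviation angle is obtained are assumptions made for illustration:

```python
def bucket_by_deviation(samples, bin_width=15.0):
    # Group detection samples by their deviation angle relative to the
    # single-posture (0 degree) standard model; each bucket can later be
    # used to train the standard model for that posture class.
    buckets = {}
    for angle, points in samples:
        key = round(angle / bin_width) * bin_width
        buckets.setdefault(key, []).append(points)
    return buckets
```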
Fig. 8 shows a schematic block diagram of the user identification device 800 provided by an embodiment of the present application. The user identification device 800 can execute the method 200 described above, and the modules or units in the user identification device are respectively used to execute the actions and processing procedures in the method 200; to avoid repetition, a detailed description is omitted here.
Those of ordinary skill in the art may be aware that the units and algorithm steps described in connection with the embodiments disclosed herein can be implemented by electronic hardware or by a combination of computer software and electronic hardware. Whether these functions are performed by hardware or software depends on the particular application and the design constraints of the technical solution. A person skilled in the art may use different methods to implement the described functions for each particular application, but such implementation should not be considered as going beyond the scope of this application.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, for the specific working processes of the systems, devices, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described here again.
In the several embodiments provided in this application, it should be understood that the disclosed systems, devices, and methods may be implemented in other ways. For example, the device embodiments described above are merely exemplary: the division of the units is merely a logical function division, and there may be other division manners in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be implemented through some interfaces, and the indirect couplings or communication connections between devices or units may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of this application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of a software functional unit and sold or used as an independent product, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of this application essentially, or the part contributing to the prior art, or a part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of this application. The foregoing storage medium includes any medium that can store program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The foregoing describes merely specific embodiments of this application, but the protection scope of this application is not limited thereto. Any change or replacement that a person familiar with the technical field can readily conceive of within the technical scope disclosed in this application shall be covered by the protection scope of this application. Therefore, the protection scope of this application shall be subject to the protection scope of the claims.

Claims (18)

1. A method for extracting feature points, comprising:
determining a first decision model according to a first feature point set corresponding to a face of a first user, wherein the first feature point set comprises a part of the feature points in a second feature point set, and the second feature point set is a set of all extractible feature points of the face of the first user;
determining, according to first positions of the feature points in the first feature point set on the face of the first user, a third feature point set corresponding to an identification object, wherein the third feature point set comprises a part of the feature points in a fourth feature point set, the fourth feature point set is a set of all extractible feature points of a face of the identification object, and second positions of the feature points in the third feature point set on the face of the identification object correspond to the first positions.
2. The method according to claim 1, wherein the method further comprises:
determining whether the identification object is the first user according to a degree of matching between the third feature point set and the first decision model.
3. The method according to claim 1 or 2, wherein the method further comprises:
determining a second decision model according to the second feature point set corresponding to the face of the first user; and
when the degree of matching between the third feature point set and the first decision model is higher than a preset first threshold, determining a fifth feature point set corresponding to the identification object according to the feature points in the second feature point set other than those in the first feature point set;
determining whether the identification object is the first user according to a degree of matching between the fifth feature point set and the second decision model.
4. The method according to any one of claims 1 to 3, wherein, when the degree of matching between the third feature point set and the first decision model is higher than a preset first threshold, the method further comprises:
determining the third feature point set of the identification object according to the first decision model;
determining a fifth feature point set of the identification object according to third positions of the feature points in the third feature point set on the face of the identification object;
restoring the fourth feature point set of the identification object according to the third feature point set and the fifth feature point set.
5. The method according to any one of claims 1 to 3, wherein the method further comprises:
determining the third feature point set of the identification object according to the first decision model;
dividing the face area of the identification object according to the feature points included in the third feature point set, and determining a first area in the face area, wherein the first area is an area of the face of the identification object on which image processing is to be performed;
performing an image processing operation on the first area.
6. The method according to any one of claims 1 to 5, wherein, before the determining a first decision model according to the first feature point set corresponding to the face of the first user, the method further comprises:
determining the first feature point set corresponding to the face of the first user.
7. The method according to claim 6, wherein the determining the first feature point set corresponding to the face of the first user comprises:
determining the first feature point set according to an average number of times that the feature points in the first feature point set corresponding to the face of the first user are extracted; or
determining the first feature point set according to a second area of the face of the first user, wherein the second area is an area of the face of the identification object on which image processing is to be performed; or
extracting, at intervals, a part of the feature points included in the second feature point set corresponding to the face of the first user, and determining the extracted part of the feature points as the first feature point set; or
extracting, from the feature points included in the second feature point set corresponding to the face of the first user, face contour feature points of the face of the first user, and determining the face contour feature points as the first feature point set.
8. The method according to any one of claims 1 to 7, wherein the feature points in the first feature point set corresponding to the face of the first user comprise feature information at different deviation angles.
9. A device for extracting feature points, comprising:
a determination unit, configured to determine a first decision model according to a first feature point set corresponding to a face of a first user, wherein the first feature point set comprises a part of the feature points in a second feature point set, and the second feature point set is a set of all extractible feature points of the face of the first user;
wherein the determination unit is further configured to determine, according to first positions of the feature points in the first feature point set on the face of the first user, a third feature point set corresponding to an identification object, wherein the third feature point set comprises a part of the feature points in a fourth feature point set, the fourth feature point set is a set of all extractible feature points of a face of the identification object, and second positions of the feature points in the third feature point set on the face of the identification object correspond to the first positions.
10. The device according to claim 9, wherein the device further comprises:
a judging unit, configured to determine whether the identification object is the first user according to a degree of matching between the third feature point set and the first decision model.
11. The device according to claim 9 or 10, wherein the determination unit is further configured to determine a second decision model according to the second feature point set corresponding to the face of the first user; and
when the judging unit determines that the degree of matching between the third feature point set and the first decision model is higher than a preset first threshold, the determination unit determines a fifth feature point set corresponding to the identification object according to the feature points in the second feature point set other than those in the first feature point set;
the judging unit determines whether the identification object is the first user according to a degree of matching between the fifth feature point set and the second decision model.
12. The device according to any one of claims 9 to 11, wherein, when the judging unit determines that the degree of matching between the third feature point set and the first decision model is higher than a preset first threshold, the determination unit is further configured to:
determine the third feature point set of the identification object according to the first decision model;
determine a fifth feature point set of the identification object according to third positions of the feature points in the third feature point set on the face of the identification object;
restore the fourth feature point set of the identification object according to the third feature point set and the fifth feature point set.
13. The device according to any one of claims 9 to 11, wherein the determination unit is further configured to:
determine the third feature point set of the identification object according to the first decision model;
divide the face area of the identification object according to the feature points included in the third feature point set, and determine a first area in the face area, wherein the first area is an area of the face of the identification object on which image processing is to be performed;
perform an image processing operation on the first area.
14. The device according to any one of claims 9 to 13, wherein, before determining the first decision model according to the first feature point set corresponding to the face of the first user, the determination unit is further configured to:
determine the first feature point set corresponding to the face of the first user.
15. The device according to claim 14, wherein the determining, by the determination unit, of the first feature point set corresponding to the face of the first user specifically comprises:
determining the first feature point set according to an average number of times that the feature points in the first feature point set corresponding to the face of the first user are extracted; or
determining the first feature point set according to a second area of the face of the first user, wherein the second area is an area of the face of the identification object on which image processing is to be performed; or
extracting, at intervals, a part of the feature points included in the second feature point set corresponding to the face of the first user, and determining the extracted part of the feature points as the first feature point set; or
extracting, from the feature points included in the second feature point set corresponding to the face of the first user, face contour feature points of the face of the first user, and determining the face contour feature points as the first feature point set.
16. The device according to any one of claims 9 to 15, wherein the feature points in the first feature point set corresponding to the face of the first user comprise feature information at different deviation angles.
17. A device, comprising:
a processor, configured to couple with a memory and execute instructions in the memory, so as to implement the method according to any one of claims 1 to 8.
18. The device according to claim 17, further comprising:
the memory, configured to store program instructions and data.
CN201810451370.5A 2018-05-11 2018-05-11 Method and device for extracting feature points Active CN110472459B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810451370.5A CN110472459B (en) 2018-05-11 2018-05-11 Method and device for extracting feature points


Publications (2)

Publication Number Publication Date
CN110472459A true CN110472459A (en) 2019-11-19
CN110472459B CN110472459B (en) 2022-12-27

Family

ID=68504753

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810451370.5A Active CN110472459B (en) 2018-05-11 2018-05-11 Method and device for extracting feature points

Country Status (1)

Country Link
CN (1) CN110472459B (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111242020A (en) * 2020-01-10 2020-06-05 广州康行信息技术有限公司 Face recognition method and device

Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070009180A1 (en) * 2005-07-11 2007-01-11 Ying Huang Real-time face synthesis systems
CN1940961A (en) * 2005-09-29 2007-04-04 株式会社东芝 Feature point detection apparatus and method
US20080304699A1 (en) * 2006-12-08 2008-12-11 Kabushiki Kaisha Toshiba Face feature point detection apparatus and method of the same
US20110148865A1 (en) * 2009-12-18 2011-06-23 Electronics And Telecommunications Research Institute Method for automatic rigging and shape surface transfer of 3d standard mesh model based on muscle and nurbs by using parametric control
CN102654903A (en) * 2011-03-04 2012-09-05 井维兰 Face comparison method
CN104050642A (en) * 2014-06-18 2014-09-17 上海理工大学 Color image restoration method
CN104751112A (en) * 2013-12-31 2015-07-01 石丰 Fingerprint template based on fuzzy feature point information and fingerprint identification method
CN105205779A (en) * 2015-09-15 2015-12-30 厦门美图之家科技有限公司 Eye image processing method and system based on image morphing and shooting terminal
CN106326867A (en) * 2016-08-26 2017-01-11 维沃移动通信有限公司 Face recognition method and mobile terminal
CN106875329A (en) * 2016-12-20 2017-06-20 北京光年无限科技有限公司 A kind of face replacement method and device
CN107451453A (en) * 2017-07-28 2017-12-08 广东欧珀移动通信有限公司 Solve lock control method and Related product
CN107563304A (en) * 2017-08-09 2018-01-09 广东欧珀移动通信有限公司 Unlocking terminal equipment method and device, terminal device
CN107609514A (en) * 2017-09-12 2018-01-19 广东欧珀移动通信有限公司 Face identification method and Related product
CN107633204A (en) * 2017-08-17 2018-01-26 平安科技(深圳)有限公司 Face occlusion detection method, apparatus and storage medium



Also Published As

Publication number Publication date
CN110472459B (en) 2022-12-27


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant