CN105469056A - Face image processing method and device - Google Patents

Face image processing method and device

Info

Publication number
CN105469056A
CN105469056A
Authority
CN
China
Prior art keywords
key point
face key
frame image
current frame
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201510846644.7A
Other languages
Chinese (zh)
Inventor
杨松
张涛
王百超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Xiaomi Technology Co Ltd
Xiaomi Inc
Original Assignee
Xiaomi Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiaomi Inc
Priority to CN201510846644.7A
Publication of CN105469056A
Legal status: Pending


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G06V40/166 - Detection; Localisation; Normalisation using acquisition arrangements
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 - Feature extraction; Face representation
    • G06V40/171 - Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships

Abstract

The invention relates to a face image processing method and device. The method includes: obtaining the face key point positions of a preset frame image, where the preset frame image precedes the current frame image and is separated from it by a preset number of frames; tracking the face key point positions of the preset frame image in the current frame image; when the tracking succeeds, obtaining the tracked positions of the face key points in the current frame image and using the tracked positions as the initial positions of the face key points of the current frame image; and performing an iterative solution on the initial positions of the face key points of the current frame image to obtain the face key point positions of the current frame image. With this face image processing method and device, fewer iterations are needed and the iterative solution converges faster, so the efficiency of face key point localization is improved.

Description

Face image processing method and device
Technical field
The present application relates to the field of image processing technology, and in particular to a face image processing method and device.
Background
With the development of image processing technology, detecting key points in face images not only plays an important role in face recognition, but also provides a basis for expression recognition, face recognition, identity authentication and the like; it is one of the key problems in face recognition. In the related art, face key point localization algorithms usually need 4-6 rounds of an iterative solution after obtaining the initial positions of the face key points before the final positions are obtained. The number of iterations is large, the amount of computation is large, and the efficiency of the algorithm is poor.
Summary of the invention
To overcome the problems in the related art, the present disclosure provides a face image processing method and device.
According to a first aspect of the embodiments of the present disclosure, a face image processing method is provided, the method comprising:
obtaining the face key point positions of a preset frame image, wherein the preset frame image precedes the current frame image and is separated from the current frame image by a preset number of frames;
tracking the face key point positions of the preset frame image in the current frame image;
when the face key point positions of the preset frame image are tracked successfully, obtaining the tracked positions of the face key points in the current frame image and using the tracked positions as the initial positions of the face key points of the current frame image; and
performing an iterative solution on the initial positions of the face key points of the current frame image to obtain the face key point positions of the current frame image.
Optionally, obtaining the face key point positions of the preset frame image comprises:
when the preset frame image is an initial face frame image, solving the face key point positions of the preset frame image using a mean position solving method;
wherein the mean position solving method comprises:
performing face detection on an image to obtain the face region in the image;
obtaining the mean positions of face key points in a preset training set; and
performing an iterative solution with the mean positions of the face key points as the initial positions of the face key points of the face region, to obtain the face key point positions of the image.
Optionally, the method further comprises:
when the tracking of the face key point positions of the preset frame image fails, solving the initial positions of the face key points of the current frame image using the mean position solving method.
Optionally, tracking the face key point positions of the preset frame image in the current frame image comprises:
tracking the face key point positions of the preset frame image in the current frame image using a preset feature point tracking algorithm, the feature point tracking algorithm comprising an optical-flow-based feature point tracking algorithm.
Optionally, performing an iterative solution on the initial positions of the face key points of the current frame image comprises:
performing the iterative solution on the initial positions using a preset supervised gradient descent algorithm.
According to a second aspect of the embodiments of the present disclosure, a face image processing device is provided, the device comprising:
a first position acquisition module configured to obtain the face key point positions of a preset frame image, wherein the preset frame image precedes the current frame image and is separated from the current frame image by a preset number of frames;
a tracking module configured to track the face key point positions of the preset frame image in the current frame image;
a first initial position determination module configured to, when the face key point positions of the preset frame image are tracked successfully, obtain the tracked positions of the face key points in the current frame image and use the tracked positions as the initial positions of the face key points of the current frame image; and
an iterative solution module configured to perform an iterative solution on the initial positions of the face key points of the current frame image to obtain the face key point positions of the current frame image.
Optionally, the device further comprises:
a mean position solving module configured to perform face detection on an image to obtain the face region in the image, obtain the mean positions of face key points in a preset training set, and perform an iterative solution with the mean positions of the face key points as the initial positions of the face key points of the face region, to obtain the face key point positions of the image; and
a second position acquisition module configured to, when the preset frame image is an initial face frame image, solve the face key point positions of the preset frame image using the mean position solving module.
Optionally, the device further comprises:
a second initial position determination module configured to, when the tracking of the face key point positions of the preset frame image fails, solve the initial positions of the face key points of the current frame image using the mean position solving module.
Optionally, the tracking module comprises:
a tracking submodule configured to track the face key point positions of the preset frame image using a preset feature point tracking algorithm, the feature point tracking algorithm comprising an optical-flow-based feature point tracking algorithm.
Optionally, the iterative solution module comprises:
an iterative solution submodule configured to perform the iterative solution on the initial positions using a preset supervised gradient descent algorithm.
According to a third aspect of the embodiments of the present disclosure, a face image processing device is provided, comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
obtain the face key point positions of a preset frame image, wherein the preset frame image precedes the current frame image and is separated from the current frame image by a preset number of frames;
track the face key point positions of the preset frame image in the current frame image;
when the face key point positions of the preset frame image are tracked successfully, obtain the tracked positions of the face key points in the current frame image and use the tracked positions as the initial positions of the face key points of the current frame image; and
perform an iterative solution on the initial positions of the face key points of the current frame image to obtain the face key point positions of the current frame image.
The technical solutions provided by the embodiments of the present disclosure may have the following beneficial effects:
In the present disclosure, the face key point positions of a preset frame image preceding the current frame image serve as a basis; these positions are tracked in the current frame image to obtain tracked positions. Since the face key points of the preset frame image have already been located accurately, the tracked positions in the current frame image are very close to the final positions. When the iterative solution is performed with the tracked positions as the initial positions, the number of iterations can be significantly reduced, the iterative solution converges faster, and the efficiency of face key point localization is improved.
In the present disclosure, if the preset frame image is an initial face frame image, no earlier image is available as a position reference; in this case the mean position solving method is used, the face key point positions of the preset frame image can still be obtained, and a reference is provided for the face key point localization of subsequent images.
In the present disclosure, considering that tracking of the face key point positions of the preset frame image may fail when the difference between the preset frame image and the current frame image is large, the mean position solving method can be used to solve the initial positions of the face key points of the current frame image, which improves the applicability of the present disclosure.
In the present disclosure, the optical-flow-based feature point tracking algorithm is used to quickly track the face key point positions of the preset frame image; this approach is easy to implement and has high accuracy.
In the present disclosure, the supervised gradient descent algorithm iterates quickly and can solve for accurate face key point positions.
It should be understood that the above general description and the following detailed description are merely exemplary and explanatory, and do not limit the present disclosure.
Brief description of the drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the specification, serve to explain the principles of the present disclosure.
Fig. 1 is a flowchart of a face image processing method according to an exemplary embodiment of the present disclosure.
Fig. 2 is a flowchart of another face image processing method according to an exemplary embodiment of the present disclosure.
Fig. 3 is a block diagram of a face image processing device according to an exemplary embodiment of the present disclosure.
Fig. 4 is a block diagram of another face image processing device according to an exemplary embodiment of the present disclosure.
Fig. 5 is a block diagram of another face image processing device according to an exemplary embodiment of the present disclosure.
Fig. 6 is a block diagram of another face image processing device according to an exemplary embodiment of the present disclosure.
Fig. 7 is a block diagram of another face image processing device according to an exemplary embodiment of the present disclosure.
Fig. 8 is a block diagram of a device for face image processing according to an exemplary embodiment of the present disclosure.
Embodiment
Here will be described exemplary embodiment in detail, its sample table shows in the accompanying drawings.When description below relates to accompanying drawing, unless otherwise indicated, the same numbers in different accompanying drawing represents same or analogous key element.Embodiment described in following exemplary embodiment does not represent all embodiments consistent with the disclosure.On the contrary, they only with as in appended claims describe in detail, the example of apparatus and method that aspects more of the present disclosure are consistent.
The term used in the disclosure is only for the object describing specific embodiment, and the not intended to be limiting disclosure." one ", " described " and " being somebody's turn to do " of the singulative used in disclosure and the accompanying claims book is also intended to comprise most form, unless context clearly represents other implications.It is also understood that term "and/or" used herein refer to and comprise one or more project of listing be associated any or all may combine.
Term first, second, third, etc. may be adopted although should be appreciated that to describe various information in the disclosure, these information should not be limited to these terms.These terms are only used for the information of same type to be distinguished from each other out.Such as, when not departing from disclosure scope, the first information also can be called as the second information, and similarly, the second information also can be called as the first information.Depend on linguistic context, word as used in this " if " can be construed as into " ... time " or " when ... time " or " in response to determining ".
As shown in Fig. 1, Fig. 1 is a flowchart of a face image processing method according to an exemplary embodiment of the present disclosure. The method can be applied to a terminal and comprises the following steps 101-104.
In step 101, the face key point positions of a preset frame image are obtained. The preset frame image precedes the current frame image and is separated from the current frame image by a preset number of frames.
The terminal involved in the embodiments of the present disclosure can be a smart terminal, such as a computer, a smart TV, a smartphone, a tablet computer, a PDA (Personal Digital Assistant), and the like.
The method provided by the embodiments of the present disclosure can be applied to detecting face key point positions in video images. A video image consists of consecutive frames, and when the face key point positions of the current frame image are to be located, the processing method provided by the embodiments of the present disclosure can be used.
Face key points are key facial feature points, and can include the eyebrows, eyes, nose, mouth, facial contour, and so on. In practical applications, the face key points to be located can be preset as required, for example the nose or the mouth. The position of each face key point in the image indicates which pixels in the image correspond to the above key facial feature points.
The preset frame image precedes the current frame image and is separated from the current frame image by a preset number of frames. The current frame image is the image whose face key points currently need to be located, while the preset frame image can be an image for which face key point localization has already been completed, so its localization result, namely the face key point positions of the preset frame image, can be obtained. In the embodiments provided by the present disclosure, the interval between the preset frame image and the current frame image can be chosen small; the smaller the interval, the smaller the difference between the two frame images. In practical applications, the interval can be one frame, two frames, and so on; to reduce the amount of computation and obtain a better localization result, the preset frame image can be the frame immediately preceding the current frame image.
In step 102, the face key point positions of the preset frame image are tracked in the current frame image.
Since the face key point positions of the preset frame image have already been obtained, and the difference between the preset frame image and the current frame image is small, the face key point positions of the preset frame image can be tracked.
In an optional implementation, the face key point positions of the preset frame image can be tracked in the current frame image using a preset feature point tracking algorithm, the feature point tracking algorithm comprising an optical-flow-based feature point tracking algorithm.
When an object moves, the brightness pattern of its corresponding points on the image also moves. This apparent motion of the image brightness pattern is the optical flow. Optical flow expresses the change of the image; since it contains information about the target's motion, it can be used to determine how the target moves. Using the temporal change and correlation of the gray-scale image data in a motion image sequence, the motion of image pixels can be determined. When tracking the face key point positions of the preset frame image, the face in consecutive frame images can be regarded as a moving target, so the optical flow formulation in the related art can be used to track the face key points in the current frame image based on the face key point positions in the preset frame image, and compute the positions of the face key points in the current frame.
As can be seen from the above embodiment, the optical-flow-based feature point tracking algorithm can quickly track the face key point positions of the preset frame image; this approach is easy to implement and has high accuracy.
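The patent itself contains no code; the following is a minimal sketch of how such an optical-flow tracking step could look using OpenCV's pyramidal Lucas-Kanade tracker (cv2.calcOpticalFlowPyrLK). The function name track_keypoints_optical_flow, the window and pyramid parameters, and the survival-ratio success test are illustrative assumptions, not details given by the patent.

```python
import cv2
import numpy as np

def track_keypoints_optical_flow(prev_gray, curr_gray, prev_points, min_ratio=0.8):
    """Track face key points from the preset (previous) frame into the current frame.

    prev_points: (N, 2) array of key point positions in the preset frame.
    Returns (tracked_points, success); success is a heuristic based on how many
    points survived tracking (an illustrative criterion, not specified by the patent).
    """
    pts = prev_points.reshape(-1, 1, 2).astype(np.float32)
    # Pyramidal Lucas-Kanade sparse optical flow.
    next_pts, status, _err = cv2.calcOpticalFlowPyrLK(
        prev_gray, curr_gray, pts, None,
        winSize=(21, 21), maxLevel=3,
        criteria=(cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 30, 0.01))
    tracked = next_pts.reshape(-1, 2)
    ok = status.reshape(-1) == 1
    success = bool(ok.mean() >= min_ratio)  # treat tracking as failed if too many points are lost
    return tracked, success
```

In practice the preset frame image would typically be the immediately preceding frame, as the description suggests, so the displacement the tracker has to recover between frames stays small.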
In step 103, when the face key point positions of the preset frame image are tracked successfully, the tracked positions of the face key points in the current frame image are obtained and used as the initial positions of the face key points of the current frame image.
If the difference between the preset frame image and the current frame image is small, the face key point positions of the preset frame image can usually be tracked successfully, so the tracked positions of the face key points in the current frame image are obtained. Since the face key points of the preset frame image have been located accurately, the tracked positions are very close to the final positions, and can therefore be used as the initial positions of the face key points of the current frame image.
In step 104, an iterative solution is performed on the initial positions of the face key points of the current frame image to obtain the face key point positions of the current frame image.
When the iterative solution is performed with the tracked positions from step 103 as the initial positions, the number of iterations can be significantly reduced.
In an optional implementation, a preset supervised gradient descent (SDM, Supervised Descent Method) algorithm can be used to perform the iterative solution on the initial positions.
The SDM algorithm can achieve fast face alignment; it is a machine-learning-based method for solving a complex least squares problem. Its idea is to learn descent directions from training data and build corresponding regression models, and then use the learned models to estimate the descent direction. In the embodiments provided by the present disclosure, the tracked positions are used as the initial positions of the face key points of the current frame image, and the SDM algorithm is used to iteratively solve for the face feature points starting from the initial positions, finally obtaining the optimal face key point positions.
As can be seen from the above embodiment, the supervised gradient descent algorithm iterates quickly and can solve for accurate face key point positions.
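For illustration only, an SDM-style cascade can be sketched as follows; the update x_{k+1} = x_k + R_k * phi(I, x_k) + b_k is the standard supervised descent form, while the raw-patch feature extractor, the function names, and the regressor format are placeholders for whatever descriptors and offline-trained model an actual implementation would use.

```python
import numpy as np

def extract_local_features(gray, points, patch_size=16):
    """Placeholder local descriptor: normalized pixel patches around each key point.
    A real SDM implementation would typically use SIFT or HOG descriptors here."""
    h, w = gray.shape
    half = patch_size // 2
    feats = []
    for x, y in points.astype(int):
        x0, x1 = np.clip([x - half, x + half], 0, w)
        y0, y1 = np.clip([y - half, y + half], 0, h)
        patch = np.zeros((patch_size, patch_size), dtype=np.float32)
        patch[: y1 - y0, : x1 - x0] = gray[y0:y1, x0:x1]
        feats.append(patch.ravel() / 255.0)
    return np.concatenate(feats)

def sdm_refine(gray, init_points, regressors):
    """Apply the learned cascade: x_{k+1} = x_k + R_k * phi(I, x_k) + b_k.
    regressors: list of (R_k, b_k) pairs learned offline from a training set."""
    x = init_points.reshape(-1).astype(np.float32)
    for R_k, b_k in regressors:
        phi = extract_local_features(gray, x.reshape(-1, 2))
        x = x + R_k @ phi + b_k  # one supervised descent step
    return x.reshape(-1, 2)
```

Because the tracked positions already lie close to the true key point positions, a cascade of this kind would need fewer stages (or would converge within fewer of its stages) than when started from a generic mean shape, which is the efficiency argument the patent makes.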
Thus, in the face image processing method provided by the embodiments of the present disclosure, the face key point positions of the preset frame image preceding the current frame image serve as a basis; these positions are tracked in the current frame image to obtain the tracked positions. Since the face key points of the preset frame image have been located accurately, the tracked positions in the current frame image are very close to the final positions. When the iterative solution is performed with the tracked positions as the initial positions, the number of iterations can be significantly reduced, the iterative solution converges faster, and the efficiency of face key point localization is improved.
As shown in Fig. 2, which is a flowchart of another face image processing method according to an exemplary embodiment, the method comprises the following steps:
In step 201, when the preset frame image is an initial face frame image, the face key point positions of the preset frame image are solved using a mean position solving method.
In step 202, the face key point positions of the preset frame image are tracked in the current frame image.
In step 203, when the face key point positions of the preset frame image are tracked successfully, the tracked positions of the face key points in the current frame image are obtained and used as the initial positions of the face key points of the current frame image.
In step 204, when the tracking of the face key point positions of the preset frame image fails, the mean position solving method is used to solve the initial positions of the face key points of the current frame image.
In step 205, an iterative solution is performed on the initial positions of the face key points of the current frame image to obtain the face key point positions of the current frame image.
The mean position solving method comprises the following steps:
performing face detection on an image to obtain the face region in the image;
obtaining the mean positions of face key points in a preset training set; and
performing an iterative solution with the mean positions of the face key points as the initial positions of the face key points of the face region, to obtain the face key point positions of the image.
In the embodiments provided by the present disclosure, the initial face frame image refers to the first frame image in a video in which a face to be recognized appears. A video image consists of a continuous sequence of video frames; when face recognition is performed on the video and a face image to be recognized appears in the frame sequence for the first time, there is no previous frame image that can be used as a reference, and the mean position solving method can then be used to solve the face key point positions of the initial face frame image.
In the mean position solving method, face detection is first performed on the image to obtain the face region, so that subsequent face key point localization is computed within the detected face region, which reduces the search range of the iterative solution. The initial positions of the face key points are then set to the mean face key point positions annotated in advance in the training set, and the iterative solution is performed to finally obtain the face key point positions of the image.
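As a hedged illustration of this mean-position initialization, the sketch below uses an OpenCV Haar cascade as the face detector and assumes the training-set mean shape is stored normalized to the unit square of the face box; the detector choice, the function name mean_position_initialize, and the normalization convention are assumptions, not specified by the patent.

```python
import cv2
import numpy as np

def mean_position_initialize(gray, mean_shape,
                             cascade_name="haarcascade_frontalface_default.xml"):
    """Initialize face key points from a detected face box and a training-set mean shape.

    mean_shape: (N, 2) mean key point positions, normalized to [0, 1] within a face box.
    Returns (init_points, face_box), or (None, None) if no face is found.
    """
    detector = cv2.CascadeClassifier(cv2.data.haarcascades + cascade_name)
    faces = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None, None
    # Use the largest detected face region.
    x, y, w, h = max(faces, key=lambda box: box[2] * box[3])
    # Place the mean shape into the detected face box as the initial positions.
    init_points = mean_shape * np.array([w, h]) + np.array([x, y])
    return init_points.astype(np.float32), (x, y, w, h)
```

The initial positions produced this way would then be refined by the same iterative solution (for example, the SDM cascade sketched above) as in the tracked case; only the starting point differs.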
Once the face key points of the initial frame image have been located, the result can serve as a reference for subsequent frame images and be used for tracking in those frames.
If the difference between the current frame image and the preset frame image is large, tracking may fail. When the tracking of the face key point positions of the preset frame image fails, the above mean position solving method can be used to solve the initial positions of the face key points of the current frame image.
As can be seen from the above embodiment, if the preset frame image is the first frame image in the video, no previous frame image can be used as a reference; the mean position solving method can then be used to obtain the face key point positions of the preset frame image, providing a reference for the face key point localization of subsequent images. Considering that tracking of the face key point positions of the preset frame image may fail when the difference between the preset frame image and the current frame image is large, the mean position solving method can also be used to solve the initial positions of the face key points of the current frame image, which improves the applicability of the present disclosure.
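Putting the pieces together, the per-frame flow of Fig. 2 might look like the glue code below. It reuses the hypothetical helpers sketched earlier (mean_position_initialize, track_keypoints_optical_flow, sdm_refine) and is only one illustrative reading of the patent, not an implementation it provides.

```python
def process_frame(prev_gray, prev_points, curr_gray, mean_shape, regressors):
    """Per-frame pipeline combining the steps of Fig. 2 (illustrative glue code only).

    prev_points is None for the initial face frame image, or when the previous
    frame had no located face. All helper names are hypothetical, defined in the
    earlier sketches.
    """
    if prev_points is None:
        # Step 201: first frame of this face, fall back to the mean position solving method.
        init_points, _box = mean_position_initialize(curr_gray, mean_shape)
        if init_points is None:
            return None  # no face detected in this frame
    else:
        # Steps 202-203: track the previous frame's key points into the current frame.
        tracked, success = track_keypoints_optical_flow(prev_gray, curr_gray, prev_points)
        if success:
            init_points = tracked
        else:
            # Step 204: tracking failed, fall back to the mean position solving method.
            init_points, _box = mean_position_initialize(curr_gray, mean_shape)
            if init_points is None:
                return None
    # Step 205: iterative solution (e.g., SDM) starting from the initial positions.
    return sdm_refine(curr_gray, init_points, regressors)
```

A caller would keep the returned key points and the current gray frame as prev_points and prev_gray for the next frame, so that each frame after the first is normally initialized by tracking rather than by the mean shape.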
Corresponding to the foregoing method embodiments, the present disclosure also provides embodiments of a face image processing device and of a terminal to which it is applied.
As shown in Fig. 3, Fig. 3 is a block diagram of a face image processing device according to an exemplary embodiment of the present disclosure. The device comprises: a first position acquisition module 31, a tracking module 32, a first initial position determination module 33 and an iterative solution module 34.
The first position acquisition module 31 is configured to obtain the face key point positions of a preset frame image, wherein the preset frame image precedes the current frame image and is separated from the current frame image by a preset number of frames.
The tracking module 32 is configured to track the face key point positions of the preset frame image in the current frame image.
The first initial position determination module 33 is configured to, when the face key point positions of the preset frame image are tracked successfully, obtain the tracked positions of the face key points in the current frame image and use the tracked positions as the initial positions of the face key points of the current frame image.
The iterative solution module 34 is configured to perform an iterative solution on the initial positions of the face key points of the current frame image to obtain the face key point positions of the current frame image.
As can be seen from the above embodiment, the face key point positions of the preset frame image preceding the current frame image serve as a basis; these positions are tracked in the current frame image to obtain the tracked positions. Since the face key points of the preset frame image have been located accurately, the tracked positions in the current frame image are very close to the final positions; when the iterative solution is performed with the tracked positions as the initial positions, the number of iterations can be significantly reduced, the iterative solution converges faster, and the efficiency of face key point localization is improved.
As shown in Fig. 4, Fig. 4 is a block diagram of another face image processing device according to an exemplary embodiment of the present disclosure. Based on the foregoing embodiment shown in Fig. 3, the device further comprises: a mean position solving module 35 and a second position acquisition module 36.
The mean position solving module 35 is configured to perform face detection on an image to obtain the face region in the image, obtain the mean positions of face key points in a preset training set, and perform an iterative solution with the mean positions of the face key points as the initial positions of the face key points of the face region, to obtain the face key point positions of the image.
The second position acquisition module 36 is configured to, when the preset frame image is an initial face frame image, solve the face key point positions of the preset frame image using the mean position solving module.
As can be seen from the above embodiment, if the preset frame image is an initial face frame image, no image can be used as a position reference; by solving with the mean position solving method, the face key point positions of the preset frame image can be obtained, providing a reference for the face key point localization of subsequent images.
As shown in Fig. 5, Fig. 5 is a block diagram of another face image processing device according to an exemplary embodiment of the present disclosure. Based on the foregoing embodiment shown in Fig. 3, the device further comprises: a second initial position determination module 37.
The second initial position determination module 37 is configured to, when the tracking of the face key point positions of the preset frame image fails, solve the initial positions of the face key points of the current frame image using the mean position solving module.
As can be seen from the above embodiment, considering that tracking of the face key point positions of the preset frame image may fail when the difference between the preset frame image and the current frame image is large, the mean position solving method can be used to solve the initial positions of the face key points of the current frame image, which improves the applicability of the present disclosure.
As shown in Fig. 6, Fig. 6 is a block diagram of another face image processing device according to an exemplary embodiment of the present disclosure. Based on the foregoing embodiment shown in Fig. 3, the tracking module 32 comprises: a tracking submodule 321.
The tracking submodule 321 is configured to track the face key point positions of the preset frame image using a preset feature point tracking algorithm, the feature point tracking algorithm comprising an optical-flow-based feature point tracking algorithm.
As can be seen from the above embodiment, the optical-flow-based feature point tracking algorithm can quickly track the face key point positions of the preset frame image; this approach is easy to implement and has high accuracy.
As shown in Fig. 7, Fig. 7 is a block diagram of another face image processing device according to an exemplary embodiment of the present disclosure. Based on the foregoing embodiment shown in Fig. 3, the iterative solution module 34 comprises: an iterative solution submodule 341.
The iterative solution submodule 341 is configured to perform the iterative solution on the initial positions using a preset supervised gradient descent algorithm.
As can be seen from the above embodiment, the supervised gradient descent algorithm iterates quickly and can solve for accurate face key point positions.
Accordingly, the present disclosure also provides a face image processing device, the device comprising a processor and a memory for storing instructions executable by the processor, wherein the processor is configured to:
obtain the face key point positions of a preset frame image, wherein the preset frame image precedes the current frame image and is separated from the current frame image by a preset number of frames;
track the face key point positions of the preset frame image in the current frame image;
when the face key point positions of the preset frame image are tracked successfully, obtain the tracked positions of the face key points in the current frame image and use the tracked positions as the initial positions of the face key points of the current frame image; and
perform an iterative solution on the initial positions of the face key points of the current frame image to obtain the face key point positions of the current frame image.
For the implementation of the functions and roles of each module in the above device, reference may be made to the implementation of the corresponding steps in the above method, which will not be repeated here.
Since the device embodiments substantially correspond to the method embodiments, reference may be made to the description of the method embodiments for relevant parts. The device embodiments described above are merely illustrative: the units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed over multiple network elements. Some or all of the modules can be selected according to actual needs to achieve the objectives of the solutions of the present disclosure, and can be understood and implemented by those of ordinary skill in the art without creative effort.
As shown in Fig. 8, Fig. 8 is a block diagram of a device 800 for face image processing according to an exemplary embodiment of the present disclosure. For example, the device 800 can be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to Fig. 8, the device 800 can include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
The processing component 802 typically controls the overall operation of the device 800, such as operations associated with display, telephone calls, data communication, camera operation, and recording operations. The processing component 802 can include one or more processors 820 to execute instructions so as to complete all or part of the steps of the above method. In addition, the processing component 802 can include one or more modules to facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support the operation of the device 800. Examples of such data include instructions for any application or method operating on the device 800, contact data, phonebook data, messages, pictures, videos, and so on. The memory 804 can be implemented by any type of volatile or non-volatile storage device, or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, a magnetic disk, or an optical disc.
The power component 806 provides power to the various components of the device 800. The power component 806 can include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen providing an output interface between the device 800 and the user. In some embodiments, the screen can include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen can be implemented as a touch screen to receive input signals from the user. The touch panel includes one or more touch sensors to sense touches, swipes, and gestures on the touch panel. The touch sensors can not only sense the boundary of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe. In some embodiments, the multimedia component 808 includes a front camera and/or a rear camera. When the device 800 is in an operating mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a microphone (MIC), which is configured to receive external audio signals when the device 800 is in an operating mode, such as a call mode, a recording mode or a voice recognition mode. The received audio signals can be further stored in the memory 804 or sent via the communication component 816. In some embodiments, the audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which can be a keyboard, a click wheel, buttons, and so on. These buttons can include, but are not limited to, a home button, a volume button, a start button, and a lock button.
The sensor component 814 includes one or more sensors for providing status assessments of various aspects of the device 800. For example, the sensor component 814 can detect the on/off state of the device 800 and the relative positioning of components, such as the display and the keypad of the device 800; the sensor component 814 can also detect a change in position of the device 800 or of a component of the device 800, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in temperature of the device 800. The sensor component 814 can include a proximity sensor configured to detect the presence of nearby objects without any physical contact. The sensor component 814 can also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor component 814 can also include an accelerometer, a gyroscope sensor, a magnetic sensor, a pressure sensor, a microwave sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the device 800 and other devices. The device 800 can access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast-related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication. For example, the NFC module can be implemented based on radio frequency identification (RFID) technology, Infrared Data Association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
In an exemplary embodiment, the device 800 can be implemented by one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, for performing the above method.
In an exemplary embodiment, a non-transitory computer-readable storage medium including instructions is also provided, such as the memory 804 including instructions, and the above instructions can be executed by the processor 820 of the device 800 to perform the above method. For example, the non-transitory computer-readable storage medium can be a ROM, a random access memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
A non-transitory computer-readable storage medium is also provided; when the instructions in the storage medium are executed by the processor of a terminal, the terminal is enabled to perform a face image processing method, the method comprising:
obtaining the face key point positions of a preset frame image, wherein the preset frame image precedes the current frame image and is separated from the current frame image by a preset number of frames;
tracking the face key point positions of the preset frame image in the current frame image;
when the face key point positions of the preset frame image are tracked successfully, obtaining the tracked positions of the face key points in the current frame image and using the tracked positions as the initial positions of the face key points of the current frame image; and
performing an iterative solution on the initial positions of the face key points of the current frame image to obtain the face key point positions of the current frame image.
Other embodiments of the present disclosure will readily occur to those skilled in the art after considering the specification and practicing the invention disclosed herein. The present disclosure is intended to cover any variations, uses, or adaptations of the present disclosure that follow its general principles and include common knowledge or conventional technical means in the art not disclosed herein. The specification and embodiments are to be regarded as exemplary only, with the true scope and spirit of the present disclosure being indicated by the following claims.
It should be understood that the present disclosure is not limited to the precise structures described above and illustrated in the accompanying drawings, and that various modifications and changes can be made without departing from its scope. The scope of the present disclosure is limited only by the appended claims.
The above are merely preferred embodiments of the present disclosure and are not intended to limit the present disclosure. Any modifications, equivalent replacements, improvements and the like made within the spirit and principles of the present disclosure shall fall within the scope of protection of the present disclosure.

Claims (11)

1. A face image processing method, characterized in that the method comprises:
obtaining the face key point positions of a preset frame image, wherein the preset frame image precedes the current frame image and is separated from the current frame image by a preset number of frames;
tracking the face key point positions of the preset frame image in the current frame image;
when the face key point positions of the preset frame image are tracked successfully, obtaining the tracked positions of the face key points in the current frame image and using the tracked positions as the initial positions of the face key points of the current frame image; and
performing an iterative solution on the initial positions of the face key points of the current frame image to obtain the face key point positions of the current frame image.
2. The method according to claim 1, characterized in that obtaining the face key point positions of the preset frame image comprises:
when the preset frame image is an initial face frame image, solving the face key point positions of the preset frame image using a mean position solving method;
wherein the mean position solving method comprises:
performing face detection on an image to obtain the face region in the image;
obtaining the mean positions of face key points in a preset training set; and
performing an iterative solution with the mean positions of the face key points as the initial positions of the face key points of the face region, to obtain the face key point positions of the image.
3. The method according to claim 2, characterized in that the method further comprises:
when the tracking of the face key point positions of the preset frame image fails, solving the initial positions of the face key points of the current frame image using the mean position solving method.
4. The method according to claim 1, characterized in that tracking the face key point positions of the preset frame image in the current frame image comprises:
tracking the face key point positions of the preset frame image in the current frame image using a preset feature point tracking algorithm, the feature point tracking algorithm comprising an optical-flow-based feature point tracking algorithm.
5. The method according to claim 1, characterized in that performing an iterative solution on the initial positions of the face key points of the current frame image comprises:
performing the iterative solution on the initial positions of the face key points of the current frame image using a preset supervised gradient descent algorithm.
6. A face image processing device, characterized in that the device comprises:
a first position acquisition module configured to obtain the face key point positions of a preset frame image, wherein the preset frame image precedes the current frame image and is separated from the current frame image by a preset number of frames;
a tracking module configured to track the face key point positions of the preset frame image in the current frame image;
a first initial position determination module configured to, when the face key point positions of the preset frame image are tracked successfully, obtain the tracked positions of the face key points in the current frame image and use the tracked positions as the initial positions of the face key points of the current frame image; and
an iterative solution module configured to perform an iterative solution on the initial positions of the face key points of the current frame image to obtain the face key point positions of the current frame image.
7. The device according to claim 6, characterized in that the device further comprises:
a mean position solving module configured to perform face detection on an image to obtain the face region in the image, obtain the mean positions of face key points in a preset training set, and perform an iterative solution with the mean positions of the face key points as the initial positions of the face key points of the face region, to obtain the face key point positions of the image; and
a second position acquisition module configured to, when the preset frame image is an initial face frame image, solve the face key point positions of the preset frame image using the mean position solving module.
8. The device according to claim 7, characterized in that the device further comprises:
a second initial position determination module configured to, when the tracking of the face key point positions of the preset frame image fails, solve the initial positions of the face key points of the current frame image using the mean position solving module.
9. The device according to claim 6, characterized in that the tracking module comprises:
a tracking submodule configured to track the face key point positions of the preset frame image using a preset feature point tracking algorithm, the feature point tracking algorithm comprising an optical-flow-based feature point tracking algorithm.
10. The device according to claim 6, characterized in that the iterative solution module comprises:
an iterative solution submodule configured to perform the iterative solution on the initial positions of the face key points of the current frame image using a preset supervised gradient descent algorithm.
11. A face image processing device, characterized by comprising:
a processor; and
a memory for storing instructions executable by the processor;
wherein the processor is configured to:
obtain the face key point positions of a preset frame image, wherein the preset frame image precedes the current frame image and is separated from the current frame image by a preset number of frames;
track the face key point positions of the preset frame image in the current frame image;
when the face key point positions of the preset frame image are tracked successfully, obtain the tracked positions of the face key points in the current frame image and use the tracked positions as the initial positions of the face key points of the current frame image; and
perform an iterative solution on the initial positions of the face key points of the current frame image to obtain the face key point positions of the current frame image.
CN201510846644.7A 2015-11-26 2015-11-26 Face image processing method and device Pending CN105469056A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510846644.7A CN105469056A (en) 2015-11-26 2015-11-26 Face image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510846644.7A CN105469056A (en) 2015-11-26 2015-11-26 Face image processing method and device

Publications (1)

Publication Number Publication Date
CN105469056A true CN105469056A (en) 2016-04-06

Family

ID=55606727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510846644.7A Pending CN105469056A (en) 2015-11-26 2015-11-26 Face image processing method and device

Country Status (1)

Country Link
CN (1) CN105469056A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140015930A1 (en) * 2012-06-20 2014-01-16 Kuntal Sengupta Active presence detection with depth sensing
CN104573614A (en) * 2013-10-22 2015-04-29 北京三星通信技术研究有限公司 Equipment and method for tracking face
CN103824050A (en) * 2014-02-17 2014-05-28 北京旷视科技有限公司 Cascade regression-based face key point positioning method
CN104036240A (en) * 2014-05-29 2014-09-10 小米科技有限责任公司 Face feature point positioning method and device

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107506682A (en) * 2016-06-14 2017-12-22 掌赢信息科技(上海)有限公司 A kind of man face characteristic point positioning method and electronic equipment
CN106295511B (en) * 2016-07-26 2019-05-21 北京小米移动软件有限公司 Face tracking method and device
CN106295511A (en) * 2016-07-26 2017-01-04 北京小米移动软件有限公司 Face tracking method and device
CN106503682A (en) * 2016-10-31 2017-03-15 北京小米移动软件有限公司 Crucial independent positioning method and device in video data
CN106503682B (en) * 2016-10-31 2020-02-04 北京小米移动软件有限公司 Method and device for positioning key points in video data
WO2018103525A1 (en) * 2016-12-08 2018-06-14 腾讯科技(深圳)有限公司 Method and device for tracking facial key point, and storage medium
CN106778585B (en) * 2016-12-08 2019-04-16 腾讯科技(上海)有限公司 A kind of face key point-tracking method and device
US10817708B2 (en) 2016-12-08 2020-10-27 Tencent Technology (Shenzhen) Company Limited Facial tracking method and apparatus, and storage medium
CN106778585A (en) * 2016-12-08 2017-05-31 腾讯科技(上海)有限公司 A kind of face key point-tracking method and device
CN106845377A (en) * 2017-01-10 2017-06-13 北京小米移动软件有限公司 Face key independent positioning method and device
CN106875421A (en) * 2017-01-19 2017-06-20 博康智能信息技术有限公司北京海淀分公司 A kind of multi-object tracking method and device
CN106845398A (en) * 2017-01-19 2017-06-13 北京小米移动软件有限公司 Face key independent positioning method and device
CN106845398B (en) * 2017-01-19 2020-03-03 北京小米移动软件有限公司 Face key point positioning method and device
CN106875422B (en) * 2017-02-06 2022-02-25 腾讯科技(上海)有限公司 Face tracking method and device
CN106875422A (en) * 2017-02-06 2017-06-20 腾讯科技(上海)有限公司 Face tracking method and device
WO2018170864A1 (en) * 2017-03-20 2018-09-27 成都通甲优博科技有限责任公司 Face recognition and tracking method
WO2018202089A1 (en) * 2017-05-05 2018-11-08 商汤集团有限公司 Key point detection method and device, storage medium and electronic device
CN113691806A (en) * 2017-05-26 2021-11-23 Line 株式会社 Image compression method, image restoration method, and computer-readable recording medium
CN109215054A (en) * 2017-06-29 2019-01-15 沈阳新松机器人自动化股份有限公司 Face tracking method and system
CN107403145A (en) * 2017-07-14 2017-11-28 北京小米移动软件有限公司 Image characteristic points positioning method and device
CN107403145B (en) * 2017-07-14 2021-03-09 北京小米移动软件有限公司 Image feature point positioning method and device
CN107563323A (en) * 2017-08-30 2018-01-09 华中科技大学 A kind of video human face characteristic point positioning method
CN110136229A (en) * 2019-05-27 2019-08-16 广州亮风台信息科技有限公司 A kind of method and apparatus changed face for real-time virtual

Similar Documents

Publication Publication Date Title
CN105469056A (en) Face image processing method and device
CN105528606A (en) Region identification method and device
CN105549732A (en) Method and device for controlling virtual reality device and virtual reality device
CN104243819A (en) Photo acquiring method and device
CN104156915A (en) Skin color adjusting method and device
CN105430262A (en) Photographing control method and photographing control device
CN105260732A (en) Image processing method and device
CN107463903B (en) Face key point positioning method and device
JP2017534933A (en) Instruction generation method and apparatus
CN105828201A (en) Video processing method and device
CN104899610A (en) Picture classification method and device
CN104238912A (en) Application control method and application control device
CN106225764A (en) Based on the distance-finding method of binocular camera in terminal and terminal
CN104918107A (en) Video file identification processing method and device
CN104461014A (en) Screen unlocking method and device
CN105354560A (en) Fingerprint identification method and device
CN104077585A (en) Image correction method and device and terminal
CN105279499A (en) Age recognition method and device
CN105260360A (en) Named entity identification method and device
CN104156993A (en) Method and device for switching face image in picture
CN104537132A (en) Motion data recording method and device
CN104077563A (en) Human face recognition method and device
CN104484858A (en) Figure image processing method and device
CN105100634A (en) Image photographing method and image photographing device
CN105975961A (en) Human face recognition method, device and terminal

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20160406