CN106295533B - Optimization method and device for selfie images, and camera terminal - Google Patents
Optimization method and device for selfie images, and camera terminal
- Publication number: CN106295533B
- Application number: CN201610622070.XA
- Authority
- CN
- China
- Prior art keywords
- point
- face
- self
- image
- characteristic point
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/178—Human faces, e.g. facial parts, sketches or expressions estimating age from face image; using age information for improving recognition
Abstract
The invention discloses a selfie-image optimization method suitable for execution on a camera terminal. The method comprises: collecting a plurality of face images and annotating the facial feature points therein to form a training image set; inputting the annotated training image set into a convolutional neural network and training it on facial feature points to obtain a facial-feature-point convolutional neural network model; inputting the selfie image to be processed into that model for prediction to obtain the facial feature points of the selfie image; obtaining the corresponding distance parameters of the left and right face from those feature points; judging whether the left-face distance parameter is greater than the right-face distance parameter; and if so, saving the selfie image as-is, otherwise saving a mirrored copy. The invention also discloses a selfie-image optimization device and a camera terminal.
Description
Technical field
The present invention relates to the field of image processing, and more particularly to an optimization method and device for selfie images, and a camera terminal.
Background technique
With the rapid development of mobile communications and microelectronics, the camera resolution of various camera terminals (such as cameras, camcorders, and mobile phones and tablet computers with photo functions) has reached hundreds of thousands or even millions of pixels, leading more and more people to record moments of their lives by taking selfies with these terminals.
When taking a selfie, many users adjust the angle to the current lighting and shoot either the left or the right side of the face. Research has shown that photos of the left side of the face score higher in pleasantness than those of the right side, and viewers' pupils dilate more, so left-face photos are more likely to win the viewer's approval. A user dissatisfied with a right-face selfie can configure the photo storage mode through the camera terminal's interface, for example selecting mirrored storage. But such manual configuration is not intelligent enough: it cannot automatically recognize which side of the user's face is shown and adjust accordingly.
In face recognition, facial feature point localization is critical: a set of pre-designed feature points, such as the eye corners, brow tips, nose tip, and mouth corners, is predicted. In a selfie, however, the user's face usually has some rotation angle, which undoubtedly increases the difficulty of localization.
Summary of the invention
In view of the above problems, the present invention is proposed to provide a selfie-image optimization method, device, and camera terminal that overcome, or at least partially solve, those problems.
According to one aspect of the present invention, a selfie-image optimization method suitable for execution on a camera terminal is provided. The method comprises: collecting a plurality of face images and annotating the facial feature points therein to form a training image set; inputting the annotated training image set into a convolutional neural network and training it on facial feature points to obtain a facial-feature-point convolutional neural network model; inputting the selfie image to be processed into that model for prediction to obtain the facial feature points of the selfie image; obtaining the corresponding distance parameters of the left and right face from those feature points; judging whether the left-face distance parameter is greater than the right-face distance parameter; and if so, saving the selfie image as-is, otherwise saving a mirrored copy.
Optionally, in the method according to the invention, the facial feature points include the nose apex C, the left and right vertices of the lips E and F, and any one of the following groups of eye feature points: the centers of the left and right eyes A1 and B1, or the left vertex of the left eye A2 and the right vertex of the right eye B2. Perpendiculars dropped from C meet the line A1B1 at point D and the line EF at point G.
Optionally, in the method according to the invention, the distance parameters of the left and right face include at least one of the following five groups: i. the distance A1D between points A1 and D, and the distance B1D between points B1 and D; ii. the distance A2D between points A2 and D, and the distance B2D between points B2 and D; iii. the distance EG between points E and G, and the distance FG between points F and G; iv. the sum A1C+CE of the distances from C to A1 and from C to E, and the sum B1C+CF of the distances from C to B1 and from C to F; v. the sum A2C+CE of the distances from C to A2 and from C to E, and the sum B2C+CF of the distances from C to B2 and from C to F.
Optionally, the method further comprises: performing face detection on the selfie image to be processed to obtain the face region, and cropping and scaling that region.
Optionally, the method further comprises: computing, from the facial feature point parameters of the selfie image, a transformation matrix for rotating the image in its plane, and rotating the selfie image into a level frontal image according to that matrix.
Optionally, the method further comprises: annotating ethnicity, age, and face rotation angle in the collected face images to form a training image set; and inputting the training image set annotated with face rotation angles into a convolutional neural network for training, outputting the face pose type corresponding to each preset interval of rotation angle, to obtain a face-rotation-angle convolutional neural network model.
Optionally, in the method according to the invention, the probability that the output falls in a preset face-rotation-angle interval is computed as:

σi(Z) = e^(Zi) / Σ(j=1..m) e^(Zj)

where m is the number of angle intervals, i denotes the i-th interval, σi(Z) is the probability that the output falls in the i-th interval, and Z is the output of the neural network.
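The interval-probability computation described here can be sketched as a standard softmax over the network's per-interval outputs; the four-interval logits below are purely illustrative and not from the patent:

```python
import math

def softmax(z):
    """Probability that the face rotation angle falls in each of the m intervals.

    z: raw outputs (logits) of the network's angle branch, one per interval.
    Subtracting the maximum is for numerical stability only; it does not
    change the result.
    """
    zmax = max(z)
    exps = [math.exp(v - zmax) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

# Example: m = 4 hypothetical angle intervals, e.g. [-90,-45), [-45,0), [0,45), [45,90]
probs = softmax([0.5, 2.0, 1.0, -1.0])
predicted_interval = probs.index(max(probs))  # the interval with the highest probability
```

The probabilities always sum to 1, so the branch can be trained with a cross-entropy loss over the annotated angle intervals.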
Optionally, the method further comprises: performing regression training on the facial feature point coordinates of the training image set, according to the face-rotation-angle convolutional neural network model and the annotated feature points, with the regression error computed as:

D = (1/N) Σ(i=1..N) (x1i − x2i)²

where N is the number of facial feature points to be output, x1i is the coordinate of a facial feature point output by the convolutional neural network, x2i is the coordinate of the corresponding manually annotated feature point, and D is the error between the two.
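A minimal sketch of this regression error, read as a mean squared error over the N predicted and annotated coordinates (the patent names the quantities but not the exact normalization, which is an assumption here):

```python
def feature_point_error(pred, gt):
    """Mean squared Euclidean error D between predicted and annotated
    facial feature point coordinates.

    pred: list of (x, y) points output by the network.
    gt:   list of (x, y) manually annotated points, same length N.
    """
    n = len(pred)
    assert n == len(gt) and n > 0
    return sum((px - gx) ** 2 + (py - gy) ** 2
               for (px, py), (gx, gy) in zip(pred, gt)) / n

# One of two points is off by 1 pixel vertically; averaged over N = 2 points
d = feature_point_error([(0.0, 0.0), (1.0, 1.0)], [(0.0, 1.0), (1.0, 1.0)])
# → 0.5
```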
Optionally, in the method according to the invention, the convolutional neural network comprises repeatedly stacked convolutional, ReLU, and down-sampling layers, with fully connected layers stacked at the end to produce multiple output branches; each output branch corresponds to one facial attribute, and during training the error of the corresponding attribute is regressed.
Optionally, the method further comprises: if multiple faces are detected in the selfie image to be processed, handling the image as follows: obtaining, for each face, the abscissas xleft and xright of its leftmost and rightmost feature points and the ordinates ytop and ybottom of its topmost and bottommost feature points; computing each face's region area from these coordinates as size = |(xright − xleft) × (ybottom − ytop)|; determining the face with the largest region area in the selfie image and computing the distance parameters of its left and right face; and, according to the computed distance parameters, saving the selfie image either mirrored or as-is.
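The largest-face selection described in this paragraph can be sketched as follows; the dictionary keys are illustrative names for the four extreme-feature-point coordinates:

```python
def largest_face(faces):
    """Return (index, area) of the face with the largest bounding-box area.

    faces: list of dicts with keys x_left, x_right, y_top, y_bottom, taken
    from each face's leftmost/rightmost/topmost/bottommost feature points.
    Area follows the patent's formula: |(x_right - x_left) * (y_bottom - y_top)|.
    """
    best_i, best_area = -1, -1.0
    for i, f in enumerate(faces):
        area = abs((f["x_right"] - f["x_left"]) * (f["y_bottom"] - f["y_top"]))
        if area > best_area:
            best_i, best_area = i, area
    return best_i, best_area

faces = [
    {"x_left": 10, "x_right": 60, "y_top": 20, "y_bottom": 90},    # 50 * 70 = 3500
    {"x_left": 100, "x_right": 140, "y_top": 30, "y_bottom": 70},  # 40 * 40 = 1600
]
idx, area = largest_face(faces)  # → (0, 3500)
```

The mirror-or-keep decision is then made from the left/right distance parameters of that one face only.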
Optionally, the method further comprises: if multiple faces are detected in the selfie image to be processed, handling the image as follows: computing the left-face and right-face distance parameters of each face from its feature points; summing the left-face distance parameters of all faces, and likewise the right-face distance parameters; judging whether the left-face sum is greater than the right-face sum; and if so, saving the selfie image as-is, otherwise saving a mirrored copy.
According to another aspect of the present invention, a selfie-image optimization device suitable for residing in a camera terminal is provided. The device comprises: an image training module adapted to collect a plurality of face images and annotate their facial feature points, forming a training image set; a model training module adapted to input the annotated training image set into a convolutional neural network and train it on facial feature points, obtaining a facial-feature-point convolutional neural network model; a feature point computation module adapted to input the selfie image to be processed into that model for prediction, obtaining the facial feature points of the selfie image; a distance computation module adapted to obtain the distance parameters of the left and right face from those feature points; and an image saving module adapted to judge whether the left-face distance parameter is greater than the right-face distance parameter, and if so to save the selfie image as-is, otherwise to save a mirrored copy.
Optionally, in the device according to the invention, the facial feature points include the nose apex C, the left and right vertices of the lips E and F, and any one of the following groups of eye feature points: the centers of the left and right eyes A1 and B1, or the left vertex of the left eye A2 and the right vertex of the right eye B2; perpendiculars dropped from C meet the line A1B1 at point D and the line EF at point G.

Optionally, in the device according to the invention, the distance parameters of the left and right face include any one of the following five groups: i. the distance A1D between points A1 and D, and the distance B1D between points B1 and D; ii. the distance A2D between points A2 and D, and the distance B2D between points B2 and D; iii. the distance EG between points E and G, and the distance FG between points F and G; iv. the sum A1C+CE of the distances from C to A1 and from C to E, and the sum B1C+CF of the distances from C to B1 and from C to F; v. the sum A2C+CE of the distances from C to A2 and from C to E, and the sum B2C+CF of the distances from C to B2 and from C to F.
Optionally, the device further comprises a face detection module adapted to perform face detection on the selfie image to be processed, obtain the face region, and crop and scale that region.

Optionally, the device further comprises an image rotation module adapted to compute, from the facial feature point parameters of the selfie image, a transformation matrix for rotating the image in its plane, and to rotate the selfie image into a level frontal image according to that matrix.

Optionally, in the device according to the invention, the image training module is further adapted to annotate ethnicity, age, and face rotation angle in the collected face images, forming a training image set; and the model training module is further adapted to input the training image set annotated with face rotation angles into a convolutional neural network for training, output the face pose type corresponding to each preset interval of rotation angle, and obtain a face-rotation-angle convolutional neural network model.
Optionally, in the device according to the invention, the probability that the output falls in a preset face-rotation-angle interval is computed as:

σi(Z) = e^(Zi) / Σ(j=1..m) e^(Zj)

where m is the number of angle intervals, i denotes the i-th interval, σi(Z) is the probability that the output falls in the i-th interval, and Z is the output of the neural network.
Optionally, in the device according to the invention, the model training module is further adapted to perform regression training on the facial feature point coordinates of the training image set, according to the face-rotation-angle convolutional neural network model and the annotated facial key points, with the regression error computed as:

D = (1/N) Σ(i=1..N) (x1i − x2i)²

where N is the number of facial feature points to be output, x1i is the coordinate of a facial feature point output by the convolutional neural network, x2i is the coordinate of the corresponding manually annotated feature point, and D is the error between the two.

Optionally, in the device according to the invention, the convolutional neural network comprises repeatedly stacked convolutional, ReLU, and down-sampling layers, with fully connected layers stacked at the end to produce multiple output branches; each output branch corresponds to one facial attribute, and during training the error of the corresponding attribute is regressed.
Optionally, in the device according to the invention, the face detection module is further adapted to detect whether the selfie image to be processed contains multiple faces; the feature point computation module is further adapted, when multiple faces are detected, to obtain for each face the abscissas xleft and xright of its leftmost and rightmost feature points and the ordinates ytop and ybottom of its topmost and bottommost feature points; the distance computation module is further adapted to compute each face's region area as size = |(xright − xleft) × (ybottom − ytop)|, determine the face with the largest region area in the selfie image, and compute the distance parameters of its left and right face; and the image saving module is further adapted to save the selfie image either mirrored or as-is according to the computed distance parameters.

Optionally, in the device according to the invention, the distance computation module is further adapted, when the face detection module detects multiple faces, to compute the left-face and right-face distance parameters of each face from its feature points and to sum them across all faces; and the image saving module is further adapted to judge whether the left-face sum is greater than the right-face sum, and if so to save the selfie image as-is, otherwise to save a mirrored copy.
According to a further aspect of the invention, a camera terminal is provided that includes the selfie-image optimization device described above.
According to the technical scheme of the present invention, a facial-feature-point convolutional neural network model is constructed, and the left-face and right-face distance parameters of the selfie image to be processed are computed from the feature points located by the model, so as to judge whether the user photographed the left or the right side of the face: a left-face photo is saved as-is, while a right-face photo is saved mirrored. The selfie is thus optimized intelligently, improving the image the user presents. In addition, a face-rotation-angle convolutional neural network model is also constructed, and the facial feature points are regressed based on the face's rotation angle, ensuring accurate feature point localization and effectively eliminating the effect of a tilted face on localization.
Description of the drawings
To the accomplishment of the foregoing and related ends, certain illustrative aspects are described herein in conjunction with the following description and drawings. These aspects indicate various ways in which the principles disclosed herein can be practiced, and all aspects and their equivalents are intended to fall within the scope of the claimed subject matter. The above and other objects, features, and advantages of the disclosure will become more apparent from the following detailed description read in conjunction with the accompanying drawings. Throughout the disclosure, identical reference numerals generally refer to identical components or elements.
Fig. 1 shows a structural block diagram of a mobile terminal 100 according to an embodiment of the invention;
Fig. 2 shows a flowchart of a selfie-image optimization method 200 according to an embodiment of the invention;
Fig. 3 shows a structural block diagram of a selfie-image optimization device 300 according to an embodiment of the invention.
Detailed description of the embodiments
Exemplary embodiments of the disclosure are described in more detail below with reference to the accompanying drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be realized in various forms and should not be limited by the embodiments set forth here. On the contrary, these embodiments are provided so that the disclosure will be understood more thoroughly and its scope conveyed fully to those skilled in the art.
The present invention provides a selfie-image optimization device that may reside in a camera terminal, such as a camera, a camcorder, or a mobile terminal with a camera function. Fig. 1 is a structural block diagram of a mobile terminal 100 arranged to realize an example of the selfie-image optimization device according to the present invention.
As shown in Fig. 1, the mobile terminal may include a memory interface 102; one or more data processors, image processors, and/or central processing units 104; and a peripheral interface 106. The memory interface 102, the one or more processors 104, and/or the peripheral interface 106 may be discrete components or may be integrated in one or more integrated circuits. In the mobile terminal 100, the various elements may be coupled by one or more communication buses or signal lines. Sensors, devices, and subsystems may be coupled to the peripheral interface 106 to help realize multiple functions.
For example, a motion sensor 110, a light sensor 112, and a range sensor 114 may be coupled to the peripheral interface 106 to facilitate functions such as orientation, illumination, and ranging. Other sensors 116, such as a positioning system (e.g. a GPS receiver), a temperature sensor, a biometric sensor, or other sensing devices, may likewise be connected to the peripheral interface 106 to help implement related functions.
A camera subsystem 120 and an optical sensor 122, which may be for example a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) optical sensor, can be used to facilitate camera functions such as recording photos and video clips. Communication functions may be realized through one or more wireless communication subsystems 124, which may include radio-frequency receivers and transmitters and/or optical (e.g. infrared) receivers and transmitters. The particular design and implementation of the wireless communication subsystem 124 can depend on the one or more communication networks supported by the mobile terminal 100. For example, the mobile terminal 100 may include a communication subsystem 124 designed to support LTE, 3G, or GSM networks, GPRS networks, EDGE networks, Wi-Fi or WiMax networks, and Bluetooth™ networks. An audio subsystem 126 may be coupled with a loudspeaker 128 and a microphone 130 to help implement voice-enabled functions, such as speech recognition, voice replication, digital recording, and telephony.
An I/O subsystem 140 may include a touch screen controller 142 and/or one or more other input controllers 144. The touch screen controller 142 may be coupled to a touch screen 146. For example, the touch screen 146 and the touch screen controller 142 may detect contact and movement or pauses using any of a variety of touch-sensing technologies, including but not limited to capacitive, resistive, infrared, and surface acoustic wave technologies. The one or more other input controllers 144 may be coupled to other input/control devices 148, for example one or more buttons, rocker switches, thumb wheels, infrared ports, USB ports, and/or pointer devices such as a stylus. The one or more buttons (not shown) may include up/down buttons for controlling the volume of the loudspeaker 128 and/or the microphone 130.
The memory interface 102 may be coupled with a memory 150. The memory 150 may include high-speed random access memory and/or nonvolatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g. NAND, NOR). The memory 150 may store an operating system 152, for example Android, iOS, or Windows Phone. The operating system 152 may include instructions for handling basic system services and for performing hardware-dependent tasks. The memory 150 may also store applications 154.
When the mobile device runs, the operating system 152 is loaded from the memory 150 and executed by the processor 104. The applications 154, when run, are likewise loaded from the memory 150 and executed by the processor 104. The applications run on top of the operating system and use the interfaces provided by the operating system and the underlying hardware to realize various functions desired by the user, such as instant messaging, web browsing, and picture management. An application 154 may be provided independently of the operating system or bundled with it. In addition, when an application 154 is installed in the mobile terminal 100, a driver module may also be added to the operating system. Among the various applications 154, one is the selfie-image optimization device 300 related to the present invention. In some embodiments, the mobile terminal 100 is configured to execute the selfie-image optimization method 200 according to the present invention.
Fig. 2 shows a selfie-image optimization method 200 according to an embodiment of the invention, suitable for execution on a camera terminal. The method starts at step S210.
In step S210, a plurality of face images is collected and the facial feature points therein are annotated, forming a training image set. Specifically, the facial feature points include at least the nose apex C, the left and right vertices of the lips E and F, and any one of the following groups of eye feature points: the centers of the left and right eyes A1 and B1, or the left vertex of the left eye A2 and the right vertex of the right eye B2; perpendiculars dropped from C meet the line A1B1 at point D and the line EF at point G. In addition, feature points at other facial locations are also typically included, such as the forehead region, the chin region, the left and right cheeks, the mouth, the hairline region, and the face contour.
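The auxiliary points D and G are both feet of perpendiculars from the nose apex C, so one helper covers both constructions. A minimal sketch with made-up coordinates (not from the patent):

```python
def foot_of_perpendicular(p, a, b):
    """Foot of the perpendicular dropped from point p onto the line through a and b.

    Used to construct D (p = C dropped onto line A1B1) and G (p = C dropped
    onto line EF). Points are (x, y) tuples.
    """
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    # Projection parameter of p onto the direction vector (dx, dy)
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    return (ax + t * dx, ay + t * dy)

# Illustrative coordinates: eye centers on a horizontal line, nose apex below
A1, B1 = (30.0, 40.0), (70.0, 40.0)   # left/right eye centers
C = (52.0, 60.0)                       # nose apex
D = foot_of_perpendicular(C, A1, B1)   # → (52.0, 40.0)
```

The same call with the lip vertices E and F yields G.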
In addition, when annotating the facial feature points, ethnicity, age, and face rotation angle may also be annotated, so as to obtain as many facial attribute features as possible. Of course, the faces in these images may also be annotated with scores produced by scoring software.
Then, in step S220, the training image set annotated with facial feature points is input into a convolutional neural network to train on the facial feature points, obtaining a facial-feature-point convolutional neural network model. The convolutional neural network comprises repeatedly stacked convolutional, ReLU, and down-sampling layers, with fully connected layers stacked at the end to produce multiple output branches; each output branch corresponds to one facial attribute, and during training the error of the corresponding attribute is regressed. Specifically, the network may be: input → convolutional layer C1 → down-sampling layer P1 → convolutional layer C2 → down-sampling layer P2 → fully connected layer F1 → fully connected layer F2 → output, where the input at training time is an image of the training set, and the output is the initial result for that image's facial feature points and attribute features such as age and ethnicity.
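The layer sequence above fixes only the order of layers, not kernel sizes or channel counts. Under illustrative assumptions (5x5 and 3x3 convolutions, 2x2 pooling with stride 2, a 40x40 cropped face, 32 channels after C2, all hypothetical), the spatial sizes flow as sketched below:

```python
def conv_out(size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution or pooling layer."""
    return (size + 2 * pad - kernel) // stride + 1

s = 40                 # input: 40x40 cropped, scaled face (assumed size)
s = conv_out(s, 5)     # C1: 5x5 conv            -> 36
s = conv_out(s, 2, 2)  # P1: 2x2 pool, stride 2  -> 18
s = conv_out(s, 3)     # C2: 3x3 conv            -> 16
s = conv_out(s, 2, 2)  # P2: 2x2 pool, stride 2  -> 8
flattened = s * s * 32  # F1 input, with 32 hypothetical C2 channels -> 2048
```

F2 then splits into the per-attribute output branches (feature point coordinates, age, ethnicity, and so on).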
Then, in step S230, the selfie image to be processed is input into the facial-feature-point convolutional neural network model for prediction, obtaining the facial feature points of the selfie image. Before this step, face detection may first be performed on the selfie image to obtain the face region, which is then suitably cropped and scaled.
Then, in step S240, the left-face and right-face distance parameters are obtained from the facial feature points of the selfie image. The parameters comprise at least any one of the following five groups:
i. the distance A1D between point A1 and point D, and the distance B1D between point B1 and point D;
ii. the distance A2D between point A2 and point D, and the distance B2D between point B2 and point D;
iii. the distance EG between point E and point G, and the distance FG between point F and point G;
iv. the sum of distances A1C + CE from point C to point A1 and to point E, and the sum B1C + CF from point C to point B1 and to point F;
v. the sum of distances A2C + CE from point C to point A2 and to point E, and the sum B2C + CF from point C to point B2 and to point F.
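Group (i) above can be computed directly from the landmark coordinates. The point values below are hypothetical; the helper names are illustrative, not from the patent:

```python
import math

# Hypothetical landmark coordinates (x, y); names follow the text: A1/B1 eye
# centers, C nose tip, E/F lip corners. D and G are the feet of the
# perpendiculars from C onto lines A1B1 and EF.
A1, B1 = (30.0, 40.0), (70.0, 42.0)
C = (48.0, 60.0)
E, F = (35.0, 80.0), (66.0, 81.0)

def foot_of_perpendicular(p, a, b):
    """Foot of the perpendicular from point p onto the line through a and b."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    t = ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)
    return (ax + t * dx, ay + t * dy)

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

D = foot_of_perpendicular(C, A1, B1)   # point D on line A1B1
G = foot_of_perpendicular(C, E, F)     # point G on line EF

left_param = dist(A1, D)    # group (i): A1D
right_param = dist(B1, D)   #            B1D
print(left_param > right_param)  # False for these sample points -> mirror-save
```

The other four groups are computed the same way from the corresponding points.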
Then, in step S250, it is judged whether the left-face distance parameter is greater than the right-face distance parameter. For example, it is judged whether the distance A1D between the left-eye center A1 and point D is greater than the distance B1D between the right-eye center B1 and point D; or whether the distance A2D between the left vertex A2 of the left eye and point D is greater than the distance B2D between the right vertex B2 of the right eye and point D; the other groups of distance parameters are handled in the same way.
If the left-face distance parameter is greater than the right-face distance parameter, the selfie image is saved as-is in step S260; otherwise it is mirror-saved in step S270. Specifically, the take-picture callback usually returns the selfie image in the form of a byte array after receiving the sampled capture data; when mirror-saving, a matrix transformation that flips the image about the Y axis is applied, and the flipped selfie image is stored.
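The mirror-save step reduces to reversing each pixel row of the frame returned by the callback. A tiny sketch with a fabricated 2x3 grayscale frame (the byte values are illustrative only):

```python
import numpy as np

# A camera take-picture callback typically hands back the frame as a byte
# array; here we fake a tiny 2x3 single-channel frame for illustration.
w, h = 3, 2
frame_bytes = bytes([10, 20, 30,
                     40, 50, 60])
img = np.frombuffer(frame_bytes, dtype=np.uint8).reshape(h, w)

# Mirror save = flip about the vertical (Y) axis, i.e. reverse each row.
# Equivalently, apply x' = (w - 1) - x to every column index.
mirrored = img[:, ::-1]

print(mirrored.tolist())   # [[30, 20, 10], [60, 50, 40]]
```

For multi-channel data the same flip is applied per row while keeping the channel order of each pixel intact.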
In addition, in some cases the user wishes to set a selfie image that has a certain tilt angle to a horizontal frontal orientation. The invention can then compute, from the facial feature point parameters of the selfie image, a transformation matrix for a plane rotation of the selfie image, and rotate the selfie image into a horizontal frontal picture according to that matrix. For example, the angle between line CD and the vertical line through point D may be computed, and the selfie image rotated by that angle into a horizontal frontal picture, although the method is not limited thereto.
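A sketch of the example just given: measure the tilt of line DC against the vertical, then build the plane-rotation matrix that undoes it. The coordinates are hypothetical:

```python
import math

# Hypothetical feature points: nose tip C, and D, the foot of the
# perpendicular from C onto the eye line A1B1 (y grows downward, as in images).
C = (52.0, 61.0)
D = (50.0, 41.0)

# Angle between line DC and the vertical through D; rotating the image by
# the opposite angle makes the face upright (a horizontal frontal picture).
angle = math.degrees(math.atan2(C[0] - D[0], C[1] - D[1]))
print(round(angle, 2))   # 5.71 -> rotate the selfie by -5.71 degrees

# 2x2 plane-rotation matrix for the corrective rotation
a = math.radians(-angle)
R = [[math.cos(a), -math.sin(a)],
     [math.sin(a),  math.cos(a)]]
```

In practice an image library would extend R with a translation so the rotation pivots about the image center, but the correction angle itself comes from the two landmarks alone.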
According to one embodiment, a training image set annotated with face rotation angles may also be input into a convolutional neural network for training, outputting the face pose type corresponding to each preset interval of face rotation angle, thereby obtaining a convolutional neural network model of face rotation angle. Here, the intervals of face rotation angle divide the frontal and profile views of a face evenly into two or more ranges according to the rotation angle, each interval corresponding to one face pose type. For example, the face rotation angle may be divided into the following five intervals: [-180°, -120°], [-120°, -60°], [-60°, +60°], [+60°, +120°], [+120°, +180°]. The probability of each preset rotation-angle interval is computed as:
σ_i(Z) = exp(Z_i) / Σ_{j=1}^{m} exp(Z_j)
where m denotes the number of angle intervals, i denotes the i-th interval, σ_i(Z) denotes the probability that the output falls in the i-th interval, and Z denotes the output of the neural network.
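This interval probability is a softmax over the m branch outputs. A minimal sketch, with hypothetical network outputs Z for the five example intervals above:

```python
import math

def interval_probs(Z):
    """Softmax over the m angle-interval outputs Z of the network:
    sigma_i(Z) = exp(Z_i) / sum_j exp(Z_j)."""
    shift = max(Z)                          # shift for numerical stability
    e = [math.exp(z - shift) for z in Z]
    s = sum(e)
    return [v / s for v in e]

# Five intervals as in the text:
# [-180,-120], [-120,-60], [-60,+60], [+60,+120], [+120,+180]
Z = [0.1, 0.3, 2.5, 0.2, 0.0]               # hypothetical network outputs
probs = interval_probs(Z)
print(probs.index(max(probs)))               # 2 -> the near-frontal interval [-60, +60]
```

The predicted pose type is simply the interval with the highest probability.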
In addition, the influence of the face rotation angle may make the annotation of facial feature points less accurate. In that case, regression training may be performed on the facial feature point coordinates of the training image set according to the above convolutional neural network model of face rotation angle and the annotated facial feature points, with the regression error computed as:
D = Σ_{i=1}^{N} (x_{1i} − x_{2i})²
where N denotes the number of facial feature points to be output, x_{1i} denotes the coordinates of the facial feature points output by the convolutional neural network, x_{2i} denotes the coordinates of the manually annotated facial feature points, and D denotes the error between the coordinates output by the network and the manually annotated coordinates. By reducing the error between the two sets of coordinates as far as possible through regression training, accurate localization of the facial feature points is effectively ensured.
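A sketch of this squared-error objective between predicted and labeled landmarks; the coordinate values are hypothetical, and the absence of a normalisation constant is an assumption, since the text only states that D is the error between the two coordinate sets:

```python
def landmark_error(pred, label):
    """Sum of squared coordinate differences between the network's predicted
    feature points and the manually annotated ones."""
    assert len(pred) == len(label)
    return sum((p[0] - l[0]) ** 2 + (p[1] - l[1]) ** 2
               for p, l in zip(pred, label))

# Hypothetical predicted vs. annotated (x, y) coordinates for N = 3 points
pred  = [(30.0, 40.0), (70.0, 42.0), (48.0, 60.0)]
label = [(31.0, 40.0), (69.0, 43.0), (48.0, 61.0)]
print(landmark_error(pred, label))   # 4.0
```

During regression training this value is what gets minimized, driving the predicted coordinates toward the annotations.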
According to one embodiment, if multiple faces are detected in the selfie image to be processed, the face with the largest region area may serve as the basis for judging how to save the image. Specifically, the coordinate values of the leftmost, rightmost, topmost and bottommost feature points of each face are obtained first, where the leftmost and rightmost values are taken as the abscissas x_left and x_right, and the topmost and bottommost values as the ordinates y_top and y_bottom. The face region area is then computed by the following formula:
size = |(x_right − x_left) * (y_bottom − y_top)|
That is, for each face, the minimum horizontal coordinate among all its feature points is subtracted from the maximum, the minimum vertical coordinate from the maximum, and the two differences are multiplied to obtain the region area occupied by that face.
Afterwards, the face with the largest region area in the selfie image is determined from the computed area values, and its left-face and right-face distance parameters are calculated. Finally, the selfie image is mirror-saved or saved as-is according to the computed distance parameters: if the left-face parameter is greater than the right-face parameter the image is saved as-is, otherwise it is mirror-saved.
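The area formula and the largest-face selection can be sketched as follows; the feature-point coordinates for the two faces are hypothetical:

```python
# Bounding-box area per face from its extreme feature-point coordinates,
# as in size = |(x_right - x_left) * (y_bottom - y_top)|.
def face_area(points):
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return abs((max(xs) - min(xs)) * (max(ys) - min(ys)))

faces = [                       # hypothetical feature points for two faces
    [(10, 10), (40, 12), (25, 35)],
    [(60, 20), (90, 22), (75, 70)],
]
areas = [face_area(f) for f in faces]
largest = areas.index(max(areas))
print(areas, largest)   # [750, 1500] 1 -> judge save mode on the second face
```

Only the selected face's left/right distance parameters then decide between saving as-is and mirror-saving.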
Alternatively, if multiple faces are detected in the selfie image to be processed, the summed distance parameters of all faces may serve as the basis for judgment. Specifically, the left-face and right-face distance parameters of each face are computed from its feature points; the left-face and right-face distance parameters of all faces are summed respectively; it is judged whether the summed left-face distance parameters exceed the summed right-face distance parameters; if so the selfie image is saved as-is, otherwise it is mirror-saved.
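The summed-parameter decision rule reduces to a few lines; the per-face parameter values are hypothetical:

```python
# Decision rule for multiple faces: sum the left-face and right-face distance
# parameters over all detected faces, then compare the totals.
def keep_as_is(face_params):
    """face_params: list of (left_param, right_param) tuples, one per face.
    Returns True to save as-is, False to mirror-save."""
    left_sum = sum(l for l, _ in face_params)
    right_sum = sum(r for _, r in face_params)
    return left_sum > right_sum

params = [(18.9, 21.1), (25.0, 19.5)]   # hypothetical values for two faces
print(keep_as_is(params))   # True (43.9 > 40.6) -> save as-is
```

Unlike the largest-face rule, this variant lets every detected face contribute to the decision in proportion to its parameter magnitudes.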
According to one embodiment, a convolutional neural network model for face scoring may also be trained from local features such as the facial features, skin and picture quality. In this way, the saving mode can be adjusted more intelligently according to the scores of the left and right halves of the face in the selfie: the image is saved as-is if the left half scores higher, and mirror-saved if the right half scores higher.
Fig. 3 shows an optimization device 300 for selfie images according to an embodiment of the invention, suitable for residing in a camera terminal. The device comprises an image training module 310, a model training module 320, a feature point computing module 330, a distance computing module 340 and an image saving module 350.
The image training module 310 collects multiple face images and annotates their facial feature points to form a training image set. The facial feature points include the nose tip C, the left and right corners E and F of the lips, and any one of the following groups of eye feature points: the left-eye and right-eye centers A1 and B1, or the left vertex A2 of the left eye and the right vertex B2 of the right eye; the perpendiculars from point C to line A1B1 and to line EF meet them at points D and G respectively. In addition, the image training module 310 may also annotate ethnicity, age and face rotation angle in the collected face images, forming a richer training image set.
The model training module 320 inputs the training image set annotated with facial feature points into a convolutional neural network for facial feature point training, obtaining the convolutional neural network model of facial feature points. The convolutional neural network comprises repeatedly stacked convolutional layers, ReLU layers and down-sampling layers, with fully connected layers stacked at the end to obtain multiple output branches; each output branch corresponds to one attribute of the face, and the error value of that attribute is back-propagated during model training. Specifically, the network may comprise: input → convolutional layer C1 → down-sampling layer P1 → convolutional layer C2 → down-sampling layer P2 → fully connected layer F1 → fully connected layer F2 → output.
According to one embodiment, the model training module 320 may also input a training image set annotated with face rotation angles into a convolutional neural network for training, outputting the face pose type corresponding to each preset interval of face rotation angle and obtaining the convolutional neural network model of face rotation angle. The probability of each preset rotation-angle interval is computed as:
σ_i(Z) = exp(Z_i) / Σ_{j=1}^{m} exp(Z_j)
where m denotes the number of angle intervals, i denotes the i-th interval, σ_i(Z) denotes the probability that the output falls in the i-th interval, and Z denotes the output of the neural network.
According to one embodiment, the model training module 320 may also perform regression training on the facial feature point coordinates of the training image set according to the convolutional neural network model of face rotation angle and the annotated facial feature points, with the regression error computed as:
D = Σ_{i=1}^{N} (x_{1i} − x_{2i})²
where N denotes the number of facial feature points to be output, x_{1i} denotes the coordinates of the facial feature points output by the convolutional neural network, x_{2i} denotes the coordinates of the manually annotated facial feature points, and D denotes the error between the two.
The feature point computing module 330 inputs the selfie image to be processed into the convolutional neural network model of facial feature points for prediction, obtaining the facial feature points of the selfie image.
The distance computing module 340 obtains the left-face and right-face distance parameters from the facial feature points of the selfie image, the parameters comprising at least any one of the following five groups: i. the distance A1D between point A1 and point D, and the distance B1D between point B1 and point D; ii. the distance A2D between point A2 and point D, and the distance B2D between point B2 and point D; iii. the distance EG between point E and point G, and the distance FG between point F and point G; iv. the sum of distances A1C + CE from point C to point A1 and to point E, and the sum B1C + CF from point C to point B1 and to point F; v. the sum of distances A2C + CE from point C to point A2 and to point E, and the sum B2C + CF from point C to point B2 and to point F.
The image saving module 350 judges whether the left-face distance parameter is greater than the right-face distance parameter; if so the selfie image is saved as-is, otherwise it is mirror-saved.
According to one embodiment, the optimization device 300 for selfie images may further include a face detection module that performs face detection on the selfie image to be processed, obtains the face region, and suitably crops and scales the face region.
According to another embodiment, it may further include an image rotation module that computes, from the facial feature point parameters of the selfie image, a transformation matrix for a plane rotation of the selfie image and rotates the selfie image into a horizontal frontal picture according to that matrix.
In addition, the face detection module in the device 300 is further adapted to detect whether multiple faces are present in the selfie image to be processed. The feature point computing module is further adapted, when multiple faces are detected, to obtain for each face the abscissas x_left and x_right of the leftmost and rightmost feature points and the ordinates y_top and y_bottom of the topmost and bottommost feature points. The distance computing module is further adapted to compute each face's region area size = |(x_right − x_left) * (y_bottom − y_top)| from those coordinates, determine the face with the largest region area in the selfie image, and compute its left-face and right-face distance parameters. The image saving module is further adapted to mirror-save the selfie image or save it as-is according to the computed distance parameters.
According to another embodiment of the optimization device 300 for selfie images, the distance computing module is further adapted, when the face detection module detects multiple faces, to compute the left-face and right-face distance parameters of each face from its feature points and to sum the parameters of all faces respectively; the image saving module is further adapted to judge whether the summed left-face distance parameters exceed the summed right-face distance parameters, saving the selfie image as-is if so and mirror-saving it otherwise.
The details of the optimization device 300 for selfie images according to the invention are disclosed in the description based on Fig. 1 and Fig. 2 and are not repeated here.
According to the technical scheme of the invention, face detection and facial feature localization are used to judge whether the current selfie shows the left side or the right side of the face: if the left side, the original orientation is kept and the image is saved; if the right side, the current image is mirrored left-to-right to obtain a left-side image before saving, so that all saved selfies show the left-side angle, improving the user's photographs. The facial feature localization builds a convolutional neural network model of facial feature points from the training image set and adds a correction factor for the face rotation angle, so that facial feature points can be located accurately. The method has high precision and strong robustness, and the trained model occupies little space, so the optimization of selfie images is realized without affecting the performance of the camera terminal.
A9. The method of A1, wherein the convolutional neural network comprises repeatedly stacked convolutional layers, ReLU layers and down-sampling layers, with fully connected layers stacked at the end to obtain multiple output branches; each output branch corresponds to one attribute of the face, and the error value of that attribute is back-propagated during model training.
A10. The method of A4, further comprising: if multiple faces are detected in the selfie image to be processed, processing the image as follows: obtaining for each face the abscissas x_left and x_right of the leftmost and rightmost feature points and the ordinates y_top and y_bottom of the topmost and bottommost feature points; computing each face's region area size = |(x_right − x_left) * (y_bottom − y_top)| from those coordinates; determining the face with the largest region area in the selfie image and computing its left-face and right-face distance parameters; and mirror-saving the selfie image or saving it as-is according to the computed distance parameters.
A11. The method of A4, further comprising: if multiple faces are detected in the selfie image to be processed, processing the image as follows: computing the left-face and right-face distance parameters of each face from its feature points; summing the left-face and right-face distance parameters of all faces respectively; judging whether the summed left-face distance parameters exceed the summed right-face distance parameters; and if so saving the selfie image as-is, otherwise mirror-saving it.
B13. The device of B12, wherein the facial feature points include the nose tip C, the left and right corners E and F of the lips, and any one of the following groups of eye feature points: the left-eye and right-eye centers A1 and B1, or the left vertex A2 of the left eye and the right vertex B2 of the right eye; the perpendiculars from point C to line A1B1 and to line EF meet them at points D and G respectively.
B14. The device of B13, wherein the left-face and right-face distance parameters comprise any one of the following five groups:
i. the distance A1D between point A1 and point D, and the distance B1D between point B1 and point D;
ii. the distance A2D between point A2 and point D, and the distance B2D between point B2 and point D;
iii. the distance EG between point E and point G, and the distance FG between point F and point G;
iv. the sum of distances A1C + CE from point C to point A1 and to point E, and the sum B1C + CF from point C to point B1 and to point F;
v. the sum of distances A2C + CE from point C to point A2 and to point E, and the sum B2C + CF from point C to point B2 and to point F.
B15. The device of B12, further comprising a face detection module adapted to perform face detection on the selfie image to be processed, obtain the face region, and crop and scale the face region.
B16. The device of B12, further comprising an image rotation module that computes, from the facial feature point parameters of the selfie image, a transformation matrix for a plane rotation of the selfie image and rotates the selfie image into a horizontal frontal picture according to that matrix.
B17. The device of B12, wherein the image training module is further adapted to annotate ethnicity, age and face rotation angle in the collected face images to form the training image set; and the model training module is further adapted to input the training image set annotated with face rotation angles into a convolutional neural network for training, output the face pose type corresponding to each preset interval of face rotation angle, and obtain the convolutional neural network model of face rotation angle.
B18. The device of B17, wherein the probability of each preset rotation-angle interval is computed as:
σ_i(Z) = exp(Z_i) / Σ_{j=1}^{m} exp(Z_j)
where m denotes the number of angle intervals, i denotes the i-th interval, σ_i(Z) denotes the probability that the output falls in the i-th interval, and Z denotes the output of the neural network.
B19. The device of B17, wherein the model training module is further adapted to perform regression training on the facial feature point coordinates of the training image set according to the convolutional neural network model of face rotation angle and the annotated facial feature points, the regression error computed as:
D = Σ_{i=1}^{N} (x_{1i} − x_{2i})²
where N denotes the number of facial feature points to be output, x_{1i} denotes the coordinates of the facial feature points output by the convolutional neural network, x_{2i} denotes the coordinates of the manually annotated facial feature points, and D denotes the error between the two.
B20. The device of B12, wherein the convolutional neural network comprises repeatedly stacked convolutional layers, ReLU layers and down-sampling layers, with fully connected layers stacked at the end to obtain multiple output branches; each output branch corresponds to one attribute of the face, and the error value of that attribute is back-propagated during model training.
B21. The device of B15, wherein the face detection module is further adapted to detect whether multiple faces are present in the selfie image to be processed; the feature point computing module is further adapted, when the face detection module detects multiple faces, to obtain for each face the abscissas x_left and x_right of the leftmost and rightmost feature points and the ordinates y_top and y_bottom of the topmost and bottommost feature points; the distance computing module is further adapted to compute each face's region area size = |(x_right − x_left) * (y_bottom − y_top)| from those coordinates, determine the face with the largest region area in the selfie image, and compute its left-face and right-face distance parameters; and the image saving module is further adapted to mirror-save the selfie image or save it as-is according to the computed distance parameters.
B22. The device of B21, wherein the distance computing module is further adapted, when the face detection module detects multiple faces, to compute the left-face and right-face distance parameters of each face from its feature points and to sum the parameters of all faces respectively; and the image saving module is further adapted to judge whether the summed left-face distance parameters exceed the summed right-face distance parameters, saving the selfie image as-is if so and mirror-saving it otherwise.
Numerous specific details are set forth in the description provided here. It will be appreciated, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures and techniques have not been shown in detail so as not to obscure the understanding of this description.
Similarly, it should be understood that, in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects, various features of the invention are sometimes grouped together in a single embodiment, figure or description thereof in the above description of exemplary embodiments of the invention. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into that detailed description, with each claim standing on its own as a separate embodiment of the invention.
Those skilled in the art will appreciate that the modules, units or components of the devices in the examples disclosed herein may be arranged in a device as described in the embodiment, or alternatively may be located in one or more devices different from the devices in the examples. The modules in the foregoing examples may be combined into one module or further divided into multiple submodules.
Those skilled in the art will also appreciate that the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules, units or components of an embodiment may be combined into one module, unit or component, and furthermore may be divided into multiple submodules, subunits or subcomponents. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract and drawings) may be replaced by an alternative feature serving the same, equivalent or similar purpose.
Furthermore, those skilled in the art will appreciate that although some embodiments described herein include certain features included in other embodiments but not others, combinations of features of different embodiments are meant to be within the scope of the invention and form different embodiments. For example, in the following claims, any of the claimed embodiments may be used in any combination.
Furthermore, some of the embodiments are described herein as methods, or combinations of method elements, that can be implemented by a processor of a computer system or by other means of carrying out the function. A processor having the necessary instructions for implementing such a method or method element thus forms a means for implementing the method or method element. Furthermore, an element of a device embodiment described herein is an example of a means for carrying out the function performed by the element for the purpose of carrying out the invention.
As used herein, unless otherwise specified, the use of the ordinals "first", "second", "third", etc. to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, whether temporally, spatially, in ranking or in any other manner.
Although the invention has been described in terms of a limited number of embodiments, those skilled in the art, having the benefit of the above description, will appreciate that other embodiments can be envisaged within the scope of the invention thus described. Additionally, it should be noted that the language used in this specification has been principally selected for readability and instructional purposes, rather than to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. As for the scope of the invention, the disclosure made herein is illustrative and not restrictive, the scope of the invention being defined by the appended claims.
Claims (19)
1. An optimization method for selfie images, suitable for execution in a camera terminal, the method comprising:
collecting multiple face images and annotating the facial feature points therein, forming a training image set;
inputting the annotated training image set into a convolutional neural network for facial feature point training, obtaining a convolutional neural network model of facial feature points;
inputting a selfie image to be processed into the convolutional neural network model of facial feature points for prediction, obtaining the facial feature points of the selfie image;
obtaining left-face and right-face distance parameters from the facial feature points of the selfie image;
judging whether the left-face distance parameter is greater than the right-face distance parameter;
if so, saving the selfie image as-is; otherwise, mirror-saving it;
the method further comprising: annotating ethnicity, age and face rotation angle in the collected face images, forming the training image set; inputting the training image set annotated with face rotation angles into a convolutional neural network for training, outputting the face pose type corresponding to each preset interval of face rotation angle, and obtaining a convolutional neural network model of face rotation angle, wherein the probability of each preset rotation-angle interval is computed as:
σ_i(Z) = exp(Z_i) / Σ_{j=1}^{m} exp(Z_j)
where m denotes the number of angle intervals, i denotes the i-th interval, σ_i(Z) denotes the probability that the output falls in the i-th interval, and Z denotes the output of the neural network.
2. The method of claim 1, wherein the facial feature points include the nose tip C, the left and right corners E and F of the lips, and any one of the following groups of eye feature points:
the left-eye and right-eye centers A1 and B1, or the left vertex A2 of the left eye and the right vertex B2 of the right eye;
wherein the perpendiculars from point C to line A1B1 and to line EF meet them at points D and G respectively.
3. The method of claim 2, wherein the left-face and right-face distance parameters comprise at least any one of the following five groups:
i. the distance A1D between point A1 and point D, and the distance B1D between point B1 and point D;
ii. the distance A2D between point A2 and point D, and the distance B2D between point B2 and point D;
iii. the distance EG between point E and point G, and the distance FG between point F and point G;
iv. the sum of distances A1C + CE from point C to point A1 and to point E, and the sum B1C + CF from point C to point B1 and to point F;
v. the sum of distances A2C + CE from point C to point A2 and to point E, and the sum B2C + CF from point C to point B2 and to point F.
4. The method of claim 1, further comprising:
performing face detection on the selfie image to be processed, obtaining the face region, and cropping and scaling the face region.
5. The method of claim 1, further comprising:
computing, from the facial feature point parameters of the selfie image, a transformation matrix for a plane rotation of the selfie image, and rotating the selfie image into a horizontal frontal picture according to the transformation matrix.
6. The method of claim 1, further comprising:
performing regression training on the facial feature point coordinates of the training image set according to the convolutional neural network model of face rotation angle and the annotated facial feature points, the regression error computed as:
D = Σ_{i=1}^{N} (x_{1i} − x_{2i})²
where N denotes the number of facial feature points to be output, x_{1i} denotes the coordinates of the facial feature points output by the convolutional neural network, x_{2i} denotes the coordinates of the manually annotated facial feature points, and D denotes the error between the two.
7. The method of claim 1, wherein the convolutional neural network comprises repeatedly stacked convolutional layers, ReLU layers, and down-sampling layers, with fully connected layers stacked at the end to produce multiple output branches; each output branch corresponds to one attribute feature of the face, and the error value of the corresponding attribute is regressed during model training.
8. The method of claim 4, further comprising:
if multiple faces are detected in the self-timer image to be processed, processing the image as follows:
obtaining, for each face, the abscissas xleft and xright of its leftmost and rightmost characteristic points and the ordinates ytop and ybottom of its topmost and bottommost characteristic points;
calculating the region area of each face from these coordinate values as size = |(xright - xleft) * (ybottom - ytop)|;
determining the face with the largest region area in the self-timer image and calculating the similar distance parameters of its left and right face halves; and
saving the self-timer image as a mirror image, or as-is, according to the calculated similar distance parameters.
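The multi-face steps of claim 8 reduce to a bounding-box area per face followed by an argmax. A minimal sketch, with invented landmark coordinates:

```python
def face_area(points):
    """Bounding-box area |(xright - xleft) * (ybottom - ytop)| of one
    face's characteristic points, as defined in the claim."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return abs((max(xs) - min(xs)) * (max(ys) - min(ys)))

# Two hypothetical faces; the mirror decision is then made from the
# similar distance parameters of the largest one only.
faces = [
    [(10, 10), (30, 10), (20, 40)],  # 20 x 30 box
    [(50, 10), (90, 10), (70, 60)],  # 40 x 50 box
]
largest = max(faces, key=face_area)
```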
9. The method of claim 4, further comprising:
if multiple faces are detected in the self-timer image to be processed, processing the image as follows:
calculating, from the characteristic points of each face, the similar distance parameters of its left and right face halves;
summing the similar distance parameters of the left and right face halves over all faces;
judging whether the summed similar distance parameter of the left face halves is greater than that of the right face halves; and
if so, saving the self-timer image as-is, otherwise saving it as a mirror image.
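Claim 9's alternative multi-face rule sums the left- and right-half parameters over all faces and mirrors only when the right total is at least the left total. A sketch of that decision, with hypothetical per-face parameter pairs:

```python
def should_mirror(per_face_params):
    """per_face_params: list of (left_half_param, right_half_param), one
    pair per detected face. Per the claim, the image is kept as-is when
    the summed left-half parameter exceeds the summed right-half
    parameter, and mirrored otherwise."""
    left_total = sum(left for left, _ in per_face_params)
    right_total = sum(right for _, right in per_face_params)
    return left_total <= right_total

keep_params = [(12.0, 10.0), (8.0, 7.0)]  # left total 20 > right total 17
flip_params = [(9.0, 10.0), (8.0, 9.0)]   # left total 17 < right total 19
```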
10. An optimization device for a self-timer image, adapted to reside in a camera terminal, the device comprising:
an image training module, adapted to collect a plurality of facial images and label their human face characteristic points to form a training image set;
a model training module, adapted to input the labeled training image set into a convolutional neural network for human face characteristic point training, obtaining a convolutional neural network model of human face characteristic points;
a characteristic point computing module, adapted to input a self-timer image to be processed into the convolutional neural network model of human face characteristic points for prediction, obtaining the human face characteristic points of the self-timer image;
a distance calculation module, adapted to obtain the similar distance parameters of the left and right face halves according to the human face characteristic points of the self-timer image; and
an image preserving module, adapted to judge whether the distance parameter of the left face half is greater than that of the right face half, and if so to save the self-timer image as-is, otherwise to save it as a mirror image;
wherein the image training module is further adapted to label ethnicity, age, and face rotation angle in the plurality of collected facial images to form the training image set; and the model training module is further adapted to input the training image set labeled with face rotation angles into the convolutional neural network for training, outputting the face pose type corresponding to a preset interval range of face rotation angles and obtaining a convolutional neural network model of the face rotation angle, the formula for the output over the preset face rotation angle intervals being:
wherein m denotes the number of angular intervals in the segmentation, i denotes the i-th interval, σi(Z) denotes the probability that the output falls in the i-th interval, and Z denotes the output of the neural network.
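The interval formula is not reproduced in this extraction; given that σi(Z) is the probability of the i-th of m intervals computed from the network output Z, the assumed form sketched below is a softmax over m interval scores, with the pose type taken as the most probable interval. The example scores and interval ranges are invented.

```python
import math

def softmax(z):
    """Softmax over the m angular-interval scores Z -- the assumed form of
    sigma_i(Z), the probability that the face rotation falls in interval i."""
    mx = max(z)  # subtract the max for numerical stability
    exps = [math.exp(v - mx) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical scores for m = 3 angle intervals, e.g. [-90,-30), [-30,30), [30,90].
scores = [0.5, 2.0, 0.1]
probs = softmax(scores)
pose_interval = probs.index(max(probs))  # pose type = most probable interval
```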
11. The device of claim 10, wherein the human face characteristic points comprise the nose tip point C, the left and right lip-corner points E and F, and any one of the following groups of eye characteristic points:
the centers A1 and B1 of the left and right eyes, or the leftmost point A2 of the left eye and the rightmost point B2 of the right eye;
wherein the perpendiculars from point C meet line A1B1 at point D and line EF at point G.
12. The device of claim 11, wherein the similar distance parameters of the left and right face halves comprise any one of the following five groups of distance parameters:
i. the distance A1D between point A1 and point D, and the distance B1D between point B1 and point D;
ii. the distance A2D between point A2 and point D, and the distance B2D between point B2 and point D;
iii. the distance EG between point E and point G, and the distance FG between point F and point G;
iv. the sum A1C+CE of the distances from point C to point A1 and from point C to point E, and the sum B1C+CF of the distances from point C to point B1 and from point C to point F;
v. the sum A2C+CE of the distances from point C to point A2 and from point C to point E, and the sum B2C+CF of the distances from point C to point B2 and from point C to point F.
13. The device of claim 10, further comprising:
a face detection module, adapted to perform face detection on the self-timer image to be processed to obtain a face region, and to crop and scale the face region.
14. The device of claim 10, further comprising:
an image rotation module, adapted to calculate, according to the human face characteristic point parameters of the self-timer image, a transformation matrix for in-plane rotation of the self-timer image, and to rotate the self-timer image into a horizontal frontal image according to the transformation matrix.
15. The device of claim 10, wherein the model training module is further adapted to perform regression training on the human face characteristic point coordinates of the training image set according to the convolutional neural network model of the face rotation angle and the labeled human face characteristic points, the regression calculation formula being:
wherein N denotes the number of human face characteristic points to be output, x1i denotes the coordinates of a human face characteristic point output by the convolutional neural network, x2i denotes the coordinates of the corresponding manually labeled human face characteristic point, and D denotes the error value between the coordinates output by the convolutional neural network and the manually labeled coordinates.
16. The device of claim 10, wherein the convolutional neural network comprises repeatedly stacked convolutional layers, ReLU layers, and down-sampling layers, with fully connected layers stacked at the end to produce multiple output branches; each output branch corresponds to one attribute feature of the face, and the error value of the corresponding attribute is regressed during model training.
17. The device of claim 13, wherein:
the face detection module is further adapted to detect whether multiple faces exist in the self-timer image to be processed;
the characteristic point computing module is further adapted, when the face detection module detects multiple faces, to obtain for each face the abscissas xleft and xright of its leftmost and rightmost characteristic points and the ordinates ytop and ybottom of its topmost and bottommost characteristic points;
the distance calculation module is further adapted to calculate the region area of each face from these coordinate values as size = |(xright - xleft) * (ybottom - ytop)|, to determine the face with the largest region area in the self-timer image, and to calculate the similar distance parameters of its left and right face halves; and
the image preserving module is further adapted to save the self-timer image as a mirror image, or as-is, according to the calculated similar distance parameters.
18. The device of claim 17, wherein:
the distance calculation module is further adapted, when the face detection module detects multiple faces, to calculate from the characteristic points of each face the similar distance parameters of its left and right face halves, and to sum the similar distance parameters of the left and right face halves over all faces; and
the image preserving module is further adapted to judge whether the summed similar distance parameter of the left face halves is greater than that of the right face halves, and if so to save the self-timer image as-is, otherwise to save it as a mirror image.
19. A camera terminal, comprising the optimization device for a self-timer image according to any one of claims 10-18.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610622070.XA CN106295533B (en) | 2016-08-01 | 2016-08-01 | A kind of optimization method, device and the camera terminal of self-timer image |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106295533A CN106295533A (en) | 2017-01-04 |
CN106295533B true CN106295533B (en) | 2019-07-02 |
Family
ID=57663958
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610622070.XA Active CN106295533B (en) | 2016-08-01 | 2016-08-01 | A kind of optimization method, device and the camera terminal of self-timer image |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106295533B (en) |
Families Citing this family (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2018184192A1 (en) * | 2017-04-07 | 2018-10-11 | Intel Corporation | Methods and systems using camera devices for deep channel and convolutional neural network images and formats |
CN107194361B (en) * | 2017-05-27 | 2021-04-02 | 成都通甲优博科技有限责任公司 | Two-dimensional posture detection method and device |
CN107506732B (en) * | 2017-08-25 | 2021-03-30 | 奇酷互联网络科技(深圳)有限公司 | Method, device, mobile terminal and computer storage medium for mapping |
CN107909100A (en) * | 2017-11-10 | 2018-04-13 | 广州视源电子科技股份有限公司 | Determine the method, apparatus, equipment and storage medium of distance |
CN113688737A (en) * | 2017-12-15 | 2021-11-23 | 北京市商汤科技开发有限公司 | Face image processing method, face image processing device, electronic apparatus, storage medium, and program |
CN108055461B (en) * | 2017-12-21 | 2020-01-14 | Oppo广东移动通信有限公司 | Self-photographing angle recommendation method and device, terminal equipment and storage medium |
CN109977727A (en) * | 2017-12-27 | 2019-07-05 | 广东欧珀移动通信有限公司 | Sight protectio method, apparatus, storage medium and mobile terminal |
CN108846342A (en) * | 2018-06-05 | 2018-11-20 | 四川大学 | A kind of harelip operation mark point recognition system |
CN108848405B (en) * | 2018-06-29 | 2020-10-09 | 广州酷狗计算机科技有限公司 | Image processing method and device |
CN109214343B (en) * | 2018-09-14 | 2021-03-09 | 北京字节跳动网络技术有限公司 | Method and device for generating face key point detection model |
CN109376712A (en) * | 2018-12-07 | 2019-02-22 | 广州纳丽生物科技有限公司 | A kind of recognition methods of face forehead key point |
CN112465910B (en) * | 2020-11-26 | 2021-12-28 | 成都新希望金融信息有限公司 | Target shooting distance obtaining method and device, storage medium and electronic equipment |
CN112541484B (en) * | 2020-12-28 | 2024-03-19 | 平安银行股份有限公司 | Face matting method, system, electronic device and storage medium |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101377814A (en) * | 2007-08-27 | 2009-03-04 | 索尼株式会社 | Face image processing apparatus, face image processing method, and computer program |
CN103152489A (en) * | 2013-03-25 | 2013-06-12 | 锤子科技(北京)有限公司 | Showing method and device for self-shooting image |
CN103793693A (en) * | 2014-02-08 | 2014-05-14 | 厦门美图网科技有限公司 | Method for detecting face turning and facial form optimizing method with method for detecting face turning |
CN105205779A (en) * | 2015-09-15 | 2015-12-30 | 厦门美图之家科技有限公司 | Eye image processing method and system based on image morphing and shooting terminal |
CN105205462A (en) * | 2015-09-18 | 2015-12-30 | 北京百度网讯科技有限公司 | Shooting promoting method and device |
CN105227832A (en) * | 2015-09-09 | 2016-01-06 | 厦门美图之家科技有限公司 | A kind of self-timer method based on critical point detection, self-heterodyne system and camera terminal |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101815174B (en) * | 2010-01-11 | 2015-03-04 | 北京中星微电子有限公司 | Control method and control device for camera shooting |
AU2013205535B2 (en) * | 2012-05-02 | 2018-03-15 | Samsung Electronics Co., Ltd. | Apparatus and method of controlling mobile terminal based on analysis of user's face |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |