CN106295533A - Self-portrait image optimization method, device and camera terminal - Google Patents

Self-portrait image optimization method, device and camera terminal

Info

Publication number
CN106295533A
CN106295533A (application number CN201610622070.XA)
Authority
CN
China
Prior art keywords
point
image
human face
face
characteristic point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610622070.XA
Other languages
Chinese (zh)
Other versions
CN106295533B (en)
Inventor
傅松林 (Fu Songlin)
洪炜冬 (Hong Weidong)
张伟 (Zhang Wei)
许清泉 (Xu Qingquan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Meitu Technology Co Ltd
Original Assignee
Xiamen Meitu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Meitu Technology Co Ltd filed Critical Xiamen Meitu Technology Co Ltd
Priority to CN201610622070.XA priority Critical patent/CN106295533B/en
Publication of CN106295533A publication Critical patent/CN106295533A/en
Application granted granted Critical
Publication of CN106295533B publication Critical patent/CN106295533B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation
    • G06V40/171 Local features and components; Facial parts; Occluding parts, e.g. glasses; Geometrical relationships
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G06N3/084 Backpropagation, e.g. using gradient descent
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/172 Classification, e.g. identification
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80 Camera processing pipelines; Components thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/178 Estimating age from face image; using age information for improving recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • General Physics & Mathematics (AREA)
  • Human Computer Interaction (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Mathematical Physics (AREA)
  • Evolutionary Computation (AREA)
  • Computational Linguistics (AREA)
  • Biomedical Technology (AREA)
  • Signal Processing (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention discloses a method for optimizing a self-portrait (selfie) image, suitable for execution in a camera terminal. The method includes: collecting a plurality of facial images and annotating the facial feature points therein to form a training image set; inputting the annotated training image set into a convolutional neural network for facial feature point training, obtaining a convolutional neural network model of facial feature points; inputting the selfie image to be processed into the convolutional neural network model for prediction, obtaining the facial feature points of the selfie image; obtaining the similar distance parameters of the left and right face according to these feature points; judging whether the similar distance parameter of the left face is greater than that of the right face; if so, saving the selfie image as-is, otherwise saving it mirrored. The invention also discloses a corresponding selfie image optimization device and camera terminal.

Description

Self-portrait image optimization method, device and camera terminal
Technical field
The present invention relates to the field of image processing, and in particular to a method, device and camera terminal for optimizing self-portrait (selfie) images.
Background technology
With the rapid development of mobile communications and microelectronics, the camera resolution of various camera terminals (such as cameras, video cameras, and mobile phones and tablet computers with photographing functions) has reached hundreds of thousands or even millions of pixels, prompting more and more people to get used to taking selfies with these terminals to record the many moments of their lives.
When taking a selfie, many users adjust the shooting angle according to the current lighting, photographing either the left or the right side of the face. Research has shown that photos of the left side of the face score higher in perceived joyfulness than photos of the right side, and show a larger pupil diameter, making them more likely to win the affection of viewers. If a user is dissatisfied with a right-side selfie, the storage mode of the photo can be set through the operation interface of the camera terminal, for example by selecting mirrored storage. However, this manual setting is not intelligent enough: it cannot automatically identify the left or right side of the user's face and make the corresponding adjustment.
In face recognition, facial feature point localization is particularly critical. It usually predicts a set of pre-designed facial feature points, such as the eye corners, eyebrow tips, nose and mouth corners. However, the user's face usually has a certain rotation angle when taking a selfie, which undoubtedly increases the difficulty of feature point localization.
Summary of the invention
In view of the above problems, the present invention is proposed in order to provide a selfie image optimization method, device and camera terminal that overcome the above problems or at least partially solve them.
According to one aspect of the present invention, a method for optimizing a selfie image is provided, suitable for execution in a camera terminal. The method includes: collecting a plurality of facial images and annotating the facial feature points therein to form a training image set; inputting the annotated training image set into a convolutional neural network for facial feature point training, obtaining a convolutional neural network model of facial feature points; inputting the selfie image to be processed into the model for prediction, obtaining the facial feature points of the selfie image; obtaining the similar distance parameters of the left and right face according to these feature points; judging whether the similar distance parameter of the left face is greater than that of the right face; if so, saving the selfie image as-is, otherwise saving it mirrored.
Optionally, in the method according to the invention, the facial feature points include the nose tip C, the left and right corners of the mouth E and F, and one of the following groups of eye feature points: the left- and right-eye centres A1 and B1, or the leftmost point of the left eye A2 and the rightmost point of the right eye B2; where the perpendiculars from point C meet the line A1B1 at point D and the line EF at point G.
Optionally, in the method according to the invention, the similar distance parameters of the left and right face include at least one of the following five groups of distance parameters: i. the distance A1D between points A1 and D, and the distance B1D between points B1 and D; ii. the distance A2D between points A2 and D, and the distance B2D between points B2 and D; iii. the distance EG between points E and G, and the distance FG between points F and G; iv. the sum A1C+CE of the distances from C to A1 and from C to E, and the sum B1C+CF of the distances from C to B1 and from C to F; v. the sum A2C+CE of the distances from C to A2 and from C to E, and the sum B2C+CF of the distances from C to B2 and from C to F.
Optionally, the method according to the invention also includes: performing face detection on the selfie image to be processed to obtain the face region, and cropping and scaling this face region.
Optionally, the method according to the invention also includes: calculating, from the facial feature point parameters of the selfie image, the transformation matrix for rotating the selfie image in the plane, and rotating the selfie image into a horizontal, front-facing image according to this transformation matrix.
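The in-plane alignment step can be sketched as follows. This is a minimal illustration that assumes the rotation angle is derived from the two eye centres; the patent does not specify which feature points drive the transformation matrix, so the construction and coordinates below are hypothetical.

```python
import math

def rotation_to_horizontal(left_eye, right_eye):
    """2x2 rotation matrix that maps the eye line onto the horizontal axis."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    theta = math.atan2(dy, dx)            # in-plane tilt of the face
    c, s = math.cos(-theta), math.sin(-theta)
    return [[c, -s], [s, c]]

def apply(matrix, point):
    (a, b), (c2, d) = matrix
    x, y = point
    return (a * x + b * y, c2 * x + d * y)

m = rotation_to_horizontal((0.0, 0.0), (1.0, 1.0))   # 45-degree tilt
x, y = apply(m, (1.0, 1.0))
# After rotation the right eye sits on the horizontal axis: y is numerically zero
```

In practice the same matrix would be applied to every pixel (or passed to an image-warping routine) rather than to single points.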
Optionally, the method according to the invention also includes: annotating the ethnicity, age and face rotation angle in the collected facial images to form a training image set; inputting the training image set annotated with face rotation angles into a convolutional neural network for training, outputting the face pose type corresponding to preset intervals of the face rotation angle, and obtaining a convolutional neural network model of the face rotation angle.
Optionally, in the method according to the invention, the formula for computing the probability of each preset face-rotation-angle interval is:

\sigma_i(Z) = \frac{\exp(Z_i)}{\sum_{j=1}^{m} \exp(Z_j)}

where m is the number of angle intervals, i denotes the i-th interval, \sigma_i(Z) is the probability that the output falls in the i-th interval, and Z is the output of the neural network.
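As a minimal sketch, these interval probabilities can be computed from the raw network outputs Z as a softmax; the values of Z and the number of intervals m below are illustrative, not taken from any trained network.

```python
import math

def angle_interval_probabilities(z):
    """Softmax over the m network outputs: one probability per angle interval."""
    zmax = max(z)                              # subtract max for numerical stability
    exps = [math.exp(v - zmax) for v in z]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw outputs for m = 4 rotation-angle intervals
z = [2.0, 1.0, 0.5, -1.0]
probs = angle_interval_probabilities(z)
predicted_interval = probs.index(max(probs))   # interval with the highest probability
```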
Optionally, the method according to the invention also includes: performing regression training on the facial feature point coordinates of the training image set according to the convolutional neural network model of the face rotation angle and the annotated facial feature points, the regression formula being:

D = \frac{1}{2N} \sum_{i=1}^{N} \left\| x_{1i} - x_{2i} \right\|_2^2

where N is the number of facial feature points to be output, x_{1i} is the coordinate of a feature point output by the convolutional neural network, x_{2i} is the coordinate of the corresponding manually annotated feature point, and D is the error between the coordinates output by the network and the manually annotated coordinates.
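The regression error D can be computed directly from predicted and annotated coordinates; the point values below are invented for illustration.

```python
def landmark_regression_loss(pred, gt):
    """D = (1 / 2N) * sum of squared Euclidean errors over N point pairs."""
    n = len(pred)
    total = 0.0
    for (px, py), (gx, gy) in zip(pred, gt):
        total += (px - gx) ** 2 + (py - gy) ** 2   # squared L2 norm per point
    return total / (2 * n)

pred = [(10.0, 20.0), (30.0, 40.0)]   # network output (hypothetical)
gt   = [(11.0, 20.0), (30.0, 43.0)]   # manual annotation (hypothetical)
d = landmark_regression_loss(pred, gt)   # (1 + 9) / (2 * 2) = 2.5
```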
Optionally, in the method according to the invention, the convolutional neural network includes repeatedly stacked convolutional layers, ReLU layers and down-sampling layers, with multiple output branches obtained at the final fully connected layer; each output branch corresponds to one facial attribute, and the error value of the corresponding attribute is back-propagated during model training.
Optionally, the method according to the invention also includes: if multiple faces are detected in the selfie image to be processed, processing the image as follows: obtaining, for each face, the abscissas x_left and x_right of its leftmost and rightmost feature points and the ordinates y_top and y_bottom of its topmost and bottommost feature points; calculating each face's region area from these coordinates as size = |(x_right - x_left) * (y_bottom - y_top)|; determining the face with the largest region area in the selfie image and calculating its left and right similar distance parameters; and, according to the calculated similar distance parameters, saving the selfie image mirrored or as-is.
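The largest-face selection by bounding-box area can be sketched in a few lines; the feature point coordinates below are invented for illustration.

```python
def face_area(points):
    """Area of the axis-aligned bounding box of a face's feature points."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return abs((max(xs) - min(xs)) * (max(ys) - min(ys)))

# Hypothetical feature point sets for two detected faces
faces = [
    [(10, 10), (50, 10), (30, 60)],   # bounding box 40 x 50 = 2000
    [(0, 0), (20, 0), (10, 25)],      # bounding box 20 x 25 = 500
]
largest = max(range(len(faces)), key=lambda i: face_area(faces[i]))
# The similar distance parameters would then be computed only for faces[largest]
```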
Optionally, the method according to the invention also includes: if multiple faces are detected in the selfie image to be processed, processing the image as follows: calculating the left and right similar distance parameters of each face from its feature points; summing the similar distance parameters of the left and right sides over all faces; judging whether the sum for the left face is greater than the sum for the right face; if so, saving the selfie image as-is, otherwise saving it mirrored.
According to another aspect of the invention, a selfie image optimization device is provided, suitable for residing in a camera terminal. The device includes: an image training module, adapted to collect a plurality of facial images and annotate their facial feature points to form a training image set; a model training module, adapted to input the annotated training image set into a convolutional neural network for facial feature point training, obtaining a convolutional neural network model of facial feature points; a feature point calculation module, adapted to input the selfie image to be processed into the model for prediction, obtaining the facial feature points of the selfie image; a distance calculation module, adapted to obtain the similar distance parameters of the left and right face from these feature points; and an image saving module, adapted to judge whether the similar distance parameter of the left face is greater than that of the right face and, if so, save the selfie image as-is, otherwise save it mirrored.
Optionally, in the device according to the invention, the facial feature points include the nose tip C, the left and right corners of the mouth E and F, and one of the following groups of eye feature points: the left- and right-eye centres A1 and B1, or the leftmost point of the left eye A2 and the rightmost point of the right eye B2; where the perpendiculars from point C meet the line A1B1 at point D and the line EF at point G.
Optionally, in the device according to the invention, the similar distance parameters of the left and right face include at least one of the following five groups of distance parameters: i. the distance A1D between points A1 and D, and the distance B1D between points B1 and D; ii. the distance A2D between points A2 and D, and the distance B2D between points B2 and D; iii. the distance EG between points E and G, and the distance FG between points F and G; iv. the sum A1C+CE of the distances from C to A1 and from C to E, and the sum B1C+CF of the distances from C to B1 and from C to F; v. the sum A2C+CE of the distances from C to A2 and from C to E, and the sum B2C+CF of the distances from C to B2 and from C to F.
Optionally, the device according to the invention also includes a face detection module, adapted to perform face detection on the selfie image to be processed, obtain the face region, and crop and scale this region.
Optionally, the device according to the invention also includes an image rotation module, which calculates, from the facial feature point parameters of the selfie image, the transformation matrix for rotating the selfie image in the plane, and rotates the selfie image into a horizontal, front-facing image according to this matrix.
Optionally, in the device according to the invention, the image training module is further adapted to annotate the ethnicity, age and face rotation angle in the collected facial images to form a training image set; the model training module is further adapted to input the training image set annotated with face rotation angles into a convolutional neural network for training, output the face pose type corresponding to preset intervals of the face rotation angle, and obtain a convolutional neural network model of the face rotation angle.
Optionally, in the device according to the invention, the formula for computing the probability of each preset face-rotation-angle interval is:

\sigma_i(Z) = \frac{\exp(Z_i)}{\sum_{j=1}^{m} \exp(Z_j)}

where m is the number of angle intervals, i denotes the i-th interval, \sigma_i(Z) is the probability that the output falls in the i-th interval, and Z is the output of the neural network.
Optionally, in the device according to the invention, the model training module is further adapted to perform regression training on the facial feature point coordinates of the training image set according to the convolutional neural network model of the face rotation angle and the annotated facial feature points, the regression formula being:

D = \frac{1}{2N} \sum_{i=1}^{N} \left\| x_{1i} - x_{2i} \right\|_2^2

where N is the number of facial feature points to be output, x_{1i} is the coordinate of a feature point output by the convolutional neural network, x_{2i} is the coordinate of the corresponding manually annotated feature point, and D is the error between the coordinates output by the network and the manually annotated coordinates.
Optionally, in the device according to the invention, the convolutional neural network includes repeatedly stacked convolutional layers, ReLU layers and down-sampling layers, with multiple output branches obtained at the final fully connected layer; each output branch corresponds to one facial attribute, and the error value of the corresponding attribute is back-propagated during model training.
Optionally, in the device according to the invention, the face detection module is further adapted to detect whether multiple faces exist in the selfie image to be processed; the feature point calculation module is further adapted, when multiple faces are detected, to obtain for each face the abscissas x_left and x_right of its leftmost and rightmost feature points and the ordinates y_top and y_bottom of its topmost and bottommost feature points; the distance calculation module is further adapted to calculate each face's region area as size = |(x_right - x_left) * (y_bottom - y_top)|, determine the face with the largest region area in the selfie image, and calculate its left and right similar distance parameters; the image saving module is further adapted to save the selfie image mirrored or as-is according to the calculated similar distance parameters.
Optionally, in the device according to the invention, the distance calculation module is further adapted, when the face detection module detects multiple faces, to calculate the left and right similar distance parameters of each face from its feature points and sum them over all faces; the image saving module is further adapted to judge whether the sum of the left-face similar distance parameters is greater than the sum of the right-face similar distance parameters and, if so, save the selfie image as-is, otherwise save it mirrored.
According to yet another aspect of the invention, a camera terminal is provided, including the selfie image optimization device described above.
According to the technical solution of the present invention, a convolutional neural network model of facial feature points is constructed, and the left and right face distance parameters of the selfie image to be processed are calculated from the facial feature points identified by the localization technique, thereby judging whether the user photographed the left or the right side of the face: a left-side photo is saved as-is, while a right-side photo is saved mirrored. In this way, the selfie is optimized and presented intelligently, improving the user's image. In addition, a convolutional neural network model of the face rotation angle is also constructed, and regression is performed on the facial feature points based on the rotation angle of the face, ensuring accurate localization of the feature points and effectively eliminating the influence of face tilt on feature point localization.
Brief description of the drawings
In order to achieve the above and related objects, certain illustrative aspects are described herein in conjunction with the following description and drawings. These aspects indicate various ways in which the principles disclosed herein may be practised, and all aspects and their equivalents are intended to fall within the scope of the claimed subject matter. The above and other objects, features and advantages of the disclosure will become apparent from the following detailed description read in conjunction with the drawings. Throughout the disclosure, the same reference numerals generally refer to the same parts or elements.
Fig. 1 shows a structural block diagram of a mobile terminal 100 according to an embodiment of the invention;
Fig. 2 shows a flowchart of a selfie image optimization method 200 according to an embodiment of the invention;
Fig. 3 shows a structural block diagram of a selfie image optimization device 300 according to an embodiment of the invention.
Detailed description of the invention
Exemplary embodiments of the disclosure are described more fully below with reference to the drawings. Although the drawings show exemplary embodiments of the disclosure, it should be understood that the disclosure may be implemented in various forms and should not be limited by the embodiments set forth here. Rather, these embodiments are provided so that the disclosure will be thoroughly understood and its scope fully conveyed to those skilled in the art.
The present invention provides a selfie image optimization device that may reside in a camera terminal, such as a camera, a video camera, or a mobile terminal with a photographing function. Fig. 1 is a structural block diagram of a mobile terminal 100 arranged to implement an example of the selfie image optimization device according to the present invention.
As shown in Fig. 1, the mobile terminal may include a memory interface 102, one or more data processors, image processors and/or central processing units 104, and a peripheral interface 106. The memory interface 102, the one or more processors 104 and/or the peripheral interface 106 may be discrete components or may be integrated in one or more integrated circuits. In the mobile terminal 100, the various elements may be coupled by one or more communication buses or signal lines. Sensors, devices and subsystems may be coupled to the peripheral interface 106 to help implement a variety of functions.
For example, a motion sensor 110, a light sensor 112 and a distance sensor 114 may be coupled to the peripheral interface 106 to facilitate orientation, illumination and ranging functions. Other sensors 116, such as a positioning system (e.g. GPS), a temperature sensor, a biometric sensor or other sensing devices, may likewise be connected to the peripheral interface 106 to help implement related functions.
A camera subsystem 120 and an optical sensor 122 may be used to facilitate camera functions such as recording photographs and video clips, where the optical sensor may be, for example, a charge-coupled device (CCD) or complementary metal-oxide-semiconductor (CMOS) optical sensor. Communication functions may be implemented through one or more wireless communication subsystems 124, which may include radio-frequency receivers and transmitters and/or optical (e.g. infrared) receivers and transmitters. The particular design and implementation of the wireless communication subsystem 124 may depend on the one or more communication networks supported by the mobile terminal 100. For example, the mobile terminal 100 may include a communication subsystem 124 designed to support LTE, 3G, GSM, GPRS, EDGE, Wi-Fi or WiMax, and Bluetooth™ networks. An audio subsystem 126 may be coupled to a speaker 128 and a microphone 130 to help implement voice-enabled functions, such as speech recognition, voice replication, digital recording and telephony.
An I/O subsystem 140 may include a touch-screen controller 142 and/or one or more other input controllers 144. The touch-screen controller 142 may be coupled to a touch screen 146. For example, the touch screen 146 and the touch-screen controller 142 may detect contact and its movement or pauses using any of a variety of touch-sensing technologies, including but not limited to capacitive, resistive, infrared and surface-acoustic-wave technologies. The one or more other input controllers 144 may be coupled to other input/control devices 148, such as one or more buttons, rocker switches, thumb wheels, infrared ports, USB ports, and/or pointing devices such as a stylus. The one or more buttons (not shown) may include an up/down button for controlling the volume of the speaker 128 and/or the microphone 130.
The memory interface 102 may be coupled to a memory 150. The memory 150 may include high-speed random access memory and/or non-volatile memory, such as one or more magnetic disk storage devices, one or more optical storage devices, and/or flash memory (e.g. NAND, NOR). The memory 150 may store an operating system 152, such as Android, iOS or Windows Phone. The operating system 152 may include instructions for handling basic system services and performing hardware-dependent tasks. The memory 150 may also store applications 154.
When the mobile device runs, the operating system 152 may be loaded from the memory 150 and executed by the processor 104. The applications 154 may likewise be loaded from the memory 150 at runtime and executed by the processor 104. The applications 154 run on top of the operating system and use the interfaces provided by the operating system and the underlying hardware to implement various user-desired functions, such as instant messaging, web browsing and picture management. An application 154 may be provided independently of the operating system or may be bundled with it. In addition, when an application 154 is installed in the mobile terminal 100, a driver module may also be added to the operating system.
Among the various applications 154 described above, one is the selfie image optimization device 300 according to the present invention. In some embodiments, the mobile terminal 100 is configured to perform the selfie image optimization method 200 according to the present invention.
Fig. 2 shows a selfie image optimization method 200 according to an embodiment of the invention, suitable for execution in a camera terminal. The method starts at step S210.
In step S210, a plurality of facial images are collected and their facial feature points annotated to form a training image set. Specifically, the facial feature points at least include the nose tip C, the left and right corners of the mouth E and F, and one of the following groups of eye feature points: the left- and right-eye centres A1 and B1, or the leftmost point of the left eye A2 and the rightmost point of the right eye B2; where the perpendiculars from point C meet the line A1B1 at point D and the line EF at point G. In addition, feature points of other facial regions are usually also included, such as the forehead region, the chin region, the left and right cheeks, the contour region and the periphery of the face.
In addition, when the facial feature points are annotated, the ethnicity, age and face rotation angle may also be annotated so as to obtain as many facial attribute features as possible. Of course, the faces in these images may also be given score annotations according to scoring software.
Subsequently, in step S220, the training image set annotated with facial feature points is input into a convolutional neural network for facial feature point training, obtaining a convolutional neural network model of facial feature points. The convolutional neural network includes repeatedly stacked convolutional layers, ReLU layers and down-sampling layers, with multiple output branches obtained at the final fully connected layer; each output branch corresponds to one facial attribute, and the error value of the corresponding attribute is back-propagated during model training. Specifically, the network may include: input → convolutional layer C1 → down-sampling layer P1 → convolutional layer C2 → down-sampling layer P2 → fully connected layer F1 → fully connected layer F2 → output, where the input is a training image from the set, and the output is the initial result for the facial feature points and for attribute features such as age and ethnicity of the training image.
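The layer chain above can be illustrated by propagating feature-map shapes through it. The input size, kernel sizes and filter counts below are assumptions made for the sketch, since the patent gives no concrete dimensions.

```python
def conv(h, w, k, n_filters, stride=1):
    """Output shape of a valid (no-padding) convolution."""
    return ((h - k) // stride + 1, (w - k) // stride + 1, n_filters)

def pool(h, w, k=2):
    """Output spatial size of a k x k down-sampling layer."""
    return (h // k, w // k)

h, w = 64, 64                               # hypothetical cropped face input
h, w, c = conv(h, w, k=5, n_filters=16)     # C1 -> 60 x 60 x 16
h, w = pool(h, w)                           # P1 -> 30 x 30
h, w, c = conv(h, w, k=5, n_filters=32)     # C2 -> 26 x 26 x 32
h, w = pool(h, w)                           # P2 -> 13 x 13
flat = h * w * c                            # flattened input to F1
```

The fully connected layers F1 and F2 would then map `flat` features to the feature-point coordinates and the per-attribute output branches.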
Subsequently, in step S230, pending auto heterodyne image is input to the convolutional neural networks mould of human face characteristic point Type is predicted, obtains the human face characteristic point of this auto heterodyne image.
Wherein, before this step, it is also possible to first described pending auto heterodyne image is carried out Face datection, obtain face Region, and this human face region is carried out suitable cutting and scaling process.
Subsequently, in step S240, the similar-distance parameters of the left and right halves of the face are obtained from the facial feature points of the self-portrait image. The similar-distance parameters include at least any one of the following five groups of distance parameters:

ⅰ. the distance A1D between points A1 and D, and the distance B1D between points B1 and D;

ⅱ. the distance A2D between points A2 and D, and the distance B2D between points B2 and D;

ⅲ. the distance EG between points E and G, and the distance FG between points F and G;

ⅳ. the sum of distances A1C + CE from point C to point A1 and to point E, and the sum B1C + CF from point C to point B1 and to point F;

ⅴ. the sum of distances A2C + CE from point C to point A2 and to point E, and the sum B2C + CF from point C to point B2 and to point F.
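As a rough sketch of step S240, the five groups of distance parameters can be computed once the feature points are available as 2-D coordinates. The following is a minimal illustration, assuming the points A1, B1, A2, B2, C, D, E, F, G named above have already been predicted; the function and dictionary-key names are illustrative, not from the patent:

```python
import math

def dist(p, q):
    # Euclidean distance between two 2-D points
    return math.hypot(p[0] - q[0], p[1] - q[1])

def similar_distance_params(pts, group="i"):
    """Return the (left, right) similar-distance parameters for one of the
    five groups i..v described above. `pts` maps point names to (x, y)."""
    if group == "i":        # A1D vs. B1D
        return dist(pts["A1"], pts["D"]), dist(pts["B1"], pts["D"])
    if group == "ii":       # A2D vs. B2D
        return dist(pts["A2"], pts["D"]), dist(pts["B2"], pts["D"])
    if group == "iii":      # EG vs. FG
        return dist(pts["E"], pts["G"]), dist(pts["F"], pts["G"])
    if group == "iv":       # A1C+CE vs. B1C+CF
        return (dist(pts["A1"], pts["C"]) + dist(pts["C"], pts["E"]),
                dist(pts["B1"], pts["C"]) + dist(pts["C"], pts["F"]))
    if group == "v":        # A2C+CE vs. B2C+CF
        return (dist(pts["A2"], pts["C"]) + dist(pts["C"], pts["E"]),
                dist(pts["B2"], pts["C"]) + dist(pts["C"], pts["F"]))
    raise ValueError(group)
```

For a perfectly frontal face each pair is equal; as the face turns, the nearer half of the face projects larger, which is what the comparison in step S250 exploits.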
Subsequently, in step S250, it is judged whether the similar-distance parameter of the left half of the face is greater than that of the right half. For example, it may be judged whether the distance A1D between the left-eye center A1 and point D is greater than the distance B1D between the right-eye center B1 and point D; or whether the distance A2D between the left eye's left corner A2 and point D is greater than the distance B2D between the right eye's right corner B2 and point D; the other groups of distance parameters are compared in the same way.

If the left-face distance parameter is greater than the right-face distance parameter, the self-portrait image is saved as-is in step S260; otherwise it is mirror-saved in step S270. Specifically, after receiving the captured data, the photo callback typically returns the self-portrait image in the form of a byte array; when mirror-saving, the image is subjected to a matrix transformation that flips it about the Y axis, and the flipped self-portrait image is then stored.
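The mirror-save of step S270 amounts to reversing the column order of the pixel array, which is equivalent to the Y-axis flip matrix transform described above. A minimal sketch using NumPy (an illustrative stand-in; the real implementation operates on the byte array returned by the photo callback):

```python
import numpy as np

def mirror_save(img):
    """Flip an H x W x C image about the vertical (Y) axis by reversing
    the column order -- equivalent to negating every x-coordinate."""
    return img[:, ::-1].copy()
```

Flipping twice restores the original image, so the operation is its own inverse.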
In addition, in some cases the user wishes a self-portrait taken at a certain tilt angle to be set upright. For this purpose, the present invention may also compute, from the facial feature-point parameters of the self-portrait image, a transformation matrix for an in-plane rotation of the image, and rotate the self-portrait image into an upright frontal image according to that matrix. For example, the angle between the line CD and the vertical line through point D may be computed, and the self-portrait image rotated by that angle into an upright frontal image; the method is not limited thereto.
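The angle example above can be sketched as follows. This assumes image coordinates with y increasing downward and the nose tip C lying below the perpendicular foot D on the eye line; the sign convention and function name are assumptions, not specified by the patent:

```python
import math

def upright_angle(C, D):
    """Angle in degrees between line CD and the vertical through D,
    i.e. the rotation needed to bring the nose line upright."""
    dx, dy = C[0] - D[0], C[1] - D[1]
    # atan2 of the horizontal offset against the vertical offset gives
    # the deviation of CD from the vertical axis
    return math.degrees(math.atan2(dx, dy))
```

A zero result means the face is already upright; a positive result means the image should be rotated clockwise by that angle (under the stated convention).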
According to an embodiment, the training image set labeled with face rotation angles may also be input into a convolutional neural network for training, with the output being the face-pose type corresponding to a preset interval of the face rotation angle, yielding a convolutional neural network model of the face rotation angle. Here the intervals of the face rotation angle divide the range from frontal to profile evenly into two or more intervals, each interval corresponding to one face-pose type. For example, the face rotation angle may be divided into the following five intervals: [−180°, −120°], [−120°, −60°], [−60°, +60°], [+60°, +120°], [+120°, +180°]. The probability of each preset interval of the face rotation angle is computed as:

σ_i(Z) = exp(Z_i) / Σ_{j=1}^{m} exp(Z_j)

where m is the number of angle intervals, i denotes the i-th interval, σ_i(Z) is the probability that the output falls in the i-th interval, and Z is the output vector of the neural network.
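The formula above is the standard softmax over the m angle intervals. A small pure-Python sketch (function name assumed; the max is subtracted only for numerical stability and does not change the result):

```python
import math

def interval_probs(Z):
    """Softmax over the network outputs Z, one logit per angle interval:
    sigma_i(Z) = exp(Z_i) / sum_j exp(Z_j)."""
    mx = max(Z)                          # for numerical stability
    exps = [math.exp(z - mx) for z in Z]
    s = sum(exps)
    return [e / s for e in exps]
```

The predicted face-pose type is then simply the interval with the largest probability.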
Moreover, because of the influence of the face rotation angle, the labeling of the facial feature points may not be accurate. In that case, regression training may be performed on the facial feature-point coordinates of the training image set according to the above convolutional neural network model of the face rotation angle and the labeled facial feature points, with the regression loss computed as:

D = (1 / 2N) · Σ_{i=1}^{N} ||x_{1i} − x_{2i}||₂²

where N is the number of facial feature points to be output, x_{1i} is the coordinate of a facial feature point output by the convolutional neural network, x_{2i} is the coordinate of the corresponding manually labeled feature point, and D is the error between the coordinates output by the network and the manually labeled coordinates. By reducing this error as much as possible through regression training, accurate localization of the facial feature points is effectively ensured.
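The regression loss can be sketched directly from the formula, with the predicted and ground-truth coordinates given as lists of (x, y) pairs (an illustrative implementation, not the patent's training code):

```python
def regression_loss(pred, gt):
    """D = (1 / 2N) * sum_i ||x1_i - x2_i||_2^2 over N feature points.

    pred: network-predicted feature-point coordinates, list of (x, y)
    gt:   manually labeled coordinates, same length and order
    """
    N = len(pred)
    total = sum((px - gx) ** 2 + (py - gy) ** 2
                for (px, py), (gx, gy) in zip(pred, gt))
    return total / (2 * N)
```

The loss is zero exactly when every predicted point coincides with its label, which is the target of the regression training described above.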
According to an embodiment, if multiple faces are detected in the self-portrait image to be processed, the face with the largest region area may serve as the basis for deciding how to save the image. Specifically, for each face, the leftmost and rightmost feature points and the topmost and bottommost feature points are first obtained; the leftmost and rightmost values may be taken as the abscissas x_left and x_right, and the topmost and bottommost values as the ordinates y_top and y_bottom. The face region area is then computed by the following formula:

size = |(x_right − x_left) · (y_bottom − y_top)|

That is, among the values of all feature points, the minimum horizontal value is subtracted from the maximum horizontal value, the minimum vertical value is subtracted from the maximum vertical value, and the two differences are multiplied to obtain the area of the region occupied by each face.
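The bounding-box area and the selection of the largest face can be sketched as follows (each face given as its list of feature-point coordinates; function names are illustrative):

```python
def face_area(points):
    """Bounding-box area of one face from its feature points, per
    size = |(x_right - x_left) * (y_bottom - y_top)|."""
    xs = [p[0] for p in points]
    ys = [p[1] for p in points]
    return abs((max(xs) - min(xs)) * (max(ys) - min(ys)))

def largest_face(faces):
    """Pick the face (a list of feature points) with the largest area."""
    return max(faces, key=face_area)
```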
Afterwards, the face with the largest region area in the self-portrait image is determined from the computed areas, and its left/right similar-distance parameters are computed. Finally, the self-portrait image is mirror-saved or saved as-is according to the computed parameters: if the left-face parameter is greater than the right-face parameter, the image is saved as-is; otherwise it is mirror-saved.

Alternatively, if multiple faces are detected in the self-portrait image to be processed, the summed similar-distance parameters of all faces may serve as the basis for the decision. Specifically, the left/right similar-distance parameters of each face are computed from its feature points; the left-face parameters of all faces are summed, as are the right-face parameters; it is then judged whether the left-face sum is greater than the right-face sum; if so, the self-portrait image is saved as-is, otherwise it is mirror-saved.
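The multi-face decision rule can be sketched as a small function over per-face (left, right) parameter pairs, however they were computed; the return values "keep" and "mirror" are illustrative labels, not from the patent:

```python
def save_decision(face_params):
    """face_params: list of (left, right) similar-distance parameters,
    one pair per detected face. Returns 'keep' (save as-is) when the
    summed left-face parameter exceeds the summed right-face parameter,
    else 'mirror'."""
    left_sum = sum(l for l, _ in face_params)
    right_sum = sum(r for _, r in face_params)
    return "keep" if left_sum > right_sum else "mirror"
```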
According to an embodiment, a convolutional neural network model for face scoring may also be trained from local features such as the facial features, the skin, and picture quality. The saving mode can then be adjusted more intelligently according to the scores of the left and right halves of the face in the self-portrait: if the left-face state scores higher, the image is saved as-is; if the right-face state scores higher, it is mirror-saved.
Fig. 3 shows an optimization apparatus 300 for self-portrait images according to an embodiment of the invention, suitable to reside in a camera terminal. The apparatus comprises an image training module 310, a model training module 320, a feature-point computation module 330, a distance computation module 340, and an image saving module 350.

The image training module 310 collects a plurality of face images and labels their facial feature points to form a training image set. The facial feature points include the nose tip C, the left and right lip corners E and F, and any one of the following groups of eye feature points: the left- and right-eye center points A1 and B1, or the left eye's left corner A2 together with the right eye's right corner B2; the perpendiculars dropped from point C onto the lines A1B1 and EF meet them at points D and G, respectively. In addition, the image training module 310 may also label the ethnicity, age, and face rotation angle in the collected face images, forming a richer training image set.

The model training module 320 inputs the training image set labeled with facial feature points into a convolutional neural network for facial feature-point training, obtaining a convolutional neural network model of facial feature points. The convolutional neural network comprises repeatedly stacked convolutional layers, ReLU layers, and down-sampling layers, with multiple output branches obtained from the fully connected layers stacked at the end; each output branch corresponds to one facial attribute, and the error value of the corresponding attribute is back-propagated during model training. A specific arrangement may be: input → convolutional layer C1 → down-sampling layer P1 → convolutional layer C2 → down-sampling layer P2 → fully connected layer F1 → fully connected layer F2 → output.
According to an embodiment, the model training module 320 may also input the training image set labeled with face rotation angles into a convolutional neural network for training, outputting the face-pose type corresponding to a preset interval of the face rotation angle and obtaining a convolutional neural network model of the face rotation angle. The probability of each preset interval of the face rotation angle is computed as:

σ_i(Z) = exp(Z_i) / Σ_{j=1}^{m} exp(Z_j)

where m is the number of angle intervals, i denotes the i-th interval, σ_i(Z) is the probability that the output falls in the i-th interval, and Z is the output vector of the neural network.
According to an embodiment, the model training module 320 may also perform regression training on the facial feature-point coordinates of the training image set according to the convolutional neural network model of the face rotation angle and the labeled facial feature points, with the regression loss computed as:

D = (1 / 2N) · Σ_{i=1}^{N} ||x_{1i} − x_{2i}||₂²

where N is the number of facial feature points to be output, x_{1i} is the coordinate of a facial feature point output by the convolutional neural network, x_{2i} is the coordinate of the corresponding manually labeled feature point, and D is the error between the coordinates output by the network and the manually labeled coordinates.
The feature-point computation module 330 inputs the self-portrait image to be processed into the convolutional neural network model of facial feature points for prediction, obtaining the facial feature points of that self-portrait image.

The distance computation module 340 obtains the similar-distance parameters of the left and right halves of the face from the facial feature points of the self-portrait image. The distance parameters include at least any one of the following five groups: ⅰ. the distance A1D between points A1 and D, and the distance B1D between points B1 and D; ⅱ. the distance A2D between points A2 and D, and the distance B2D between points B2 and D; ⅲ. the distance EG between points E and G, and the distance FG between points F and G; ⅳ. the sum of distances A1C + CE from point C to point A1 and to point E, and the sum B1C + CF from point C to point B1 and to point F; ⅴ. the sum of distances A2C + CE from point C to point A2 and to point E, and the sum B2C + CF from point C to point B2 and to point F.

The image saving module 350 judges whether the similar-distance parameter of the left half of the face is greater than that of the right half; if so, the self-portrait image is saved as-is, otherwise it is mirror-saved.
According to an embodiment, the optimization apparatus 300 for self-portrait images of the present invention may further comprise a face detection module that performs face detection on the self-portrait image to be processed, obtains the face region, and suitably crops and scales that region.

According to another embodiment, the apparatus may further comprise an image rotation module that computes, from the facial feature-point parameters of the self-portrait image, a transformation matrix for an in-plane rotation of the image, and rotates the self-portrait image into an upright frontal image according to that matrix.

In addition, the face detection module in the apparatus 300 is further adapted to detect whether multiple faces exist in the self-portrait image to be processed; the feature-point computation module is further adapted, when the face detection module detects multiple faces, to obtain for each face the abscissas x_left and x_right of its leftmost and rightmost feature points and the ordinates y_top and y_bottom of its topmost and bottommost feature points; the distance computation module is further adapted to compute each face's region area from these coordinate values as size = |(x_right − x_left) · (y_bottom − y_top)|, determine the face with the largest region area in the self-portrait image, and compute its left/right similar-distance parameters; and the image saving module is further adapted to mirror-save or save the self-portrait image as-is according to the computed similar-distance parameters.

According to another embodiment, in the optimization apparatus 300 for self-portrait images of the present invention, the distance computation module is further adapted, when the face detection module detects multiple faces, to compute the left/right similar-distance parameters of each face from its feature points and to sum the left-face parameters of all faces and the right-face parameters of all faces; the image saving module is further adapted to judge whether the left-face sum is greater than the right-face sum, and if so, to save the self-portrait image as-is, otherwise to mirror-save it.
Details of the optimization apparatus 300 for self-portrait images according to the present invention have been disclosed in detail in the description based on Fig. 1 and Fig. 2, and are not repeated here.

According to the technical solution of the present invention, face detection and facial feature-point localization are used to judge whether the angle of the current self-portrait image shows the left side of the face or the right side: if the left side, the original orientation is kept and the image is saved; if the right side, the current image is mirrored left-to-right to obtain a left-side view and then saved, so that all saved self-portraits show the left side of the face, improving the user's photographs. The feature-point localization builds a convolutional neural network model of facial feature points from the training image set and adds a correction factor for the face rotation angle, so that the feature points of the face can be located accurately. This method is highly accurate and robust, and the resulting trained model occupies very little space, so the optimization of self-portrait images is achieved without affecting the performance of the camera terminal.
A9. The method of A1, wherein the convolutional neural network comprises repeatedly stacked convolutional layers, ReLU layers, and down-sampling layers, with multiple output branches obtained from the fully connected layers stacked at the end; each output branch corresponds to one facial attribute, and the error value of the corresponding attribute is back-propagated during model training.

A10. The method of A4, further comprising: if multiple faces are detected in the self-portrait image to be processed, processing the image as follows: obtaining, for each face, the abscissas x_left and x_right of its leftmost and rightmost feature points and the ordinates y_top and y_bottom of its topmost and bottommost feature points; computing each face's region area from these coordinate values as size = |(x_right − x_left) · (y_bottom − y_top)|; determining the face with the largest region area in the self-portrait image and computing its left/right similar-distance parameters; and mirror-saving or saving the self-portrait image as-is according to the computed similar-distance parameters.

A11. The method of A4, further comprising: if multiple faces are detected in the self-portrait image to be processed, processing the image as follows: computing the left/right similar-distance parameters of each face from its feature points; summing the left-face parameters of all faces and the right-face parameters of all faces; judging whether the left-face sum is greater than the right-face sum; and if so, saving the self-portrait image as-is, otherwise mirror-saving it.
B13. The apparatus of B12, wherein the facial feature points include the nose tip C, the left and right lip corners E and F, and any one of the following groups of eye feature points: the left- and right-eye center points A1 and B1, or the left eye's left corner A2 together with the right eye's right corner B2; wherein the perpendiculars dropped from point C onto the lines A1B1 and EF meet them at points D and G, respectively.

B14. The apparatus of B13, wherein the left/right similar-distance parameters include any one of the following five groups of distance parameters:

ⅰ. the distance A1D between points A1 and D, and the distance B1D between points B1 and D;

ⅱ. the distance A2D between points A2 and D, and the distance B2D between points B2 and D;

ⅲ. the distance EG between points E and G, and the distance FG between points F and G;

ⅳ. the sum of distances A1C + CE from point C to point A1 and to point E, and the sum B1C + CF from point C to point B1 and to point F;

ⅴ. the sum of distances A2C + CE from point C to point A2 and to point E, and the sum B2C + CF from point C to point B2 and to point F.
B15. The apparatus of B12, further comprising a face detection module adapted to perform face detection on the self-portrait image to be processed, obtain the face region, and crop and scale that region.

B16. The apparatus of B12, further comprising an image rotation module that computes, from the facial feature-point parameters of the self-portrait image, a transformation matrix for an in-plane rotation of the image, and rotates the self-portrait image into an upright frontal image according to that matrix.

B17. The apparatus of B12, wherein the image training module is further adapted to label the ethnicity, age, and face rotation angle in the collected face images, forming the training image set; and the model training module is further adapted to input the training image set labeled with face rotation angles into a convolutional neural network for training, output the face-pose type corresponding to a preset interval of the face rotation angle, and obtain a convolutional neural network model of the face rotation angle.
B18. The apparatus of B17, wherein the probability of each preset interval of the face rotation angle is computed as:

σ_i(Z) = exp(Z_i) / Σ_{j=1}^{m} exp(Z_j)

where m is the number of angle intervals, i denotes the i-th interval, σ_i(Z) is the probability that the output falls in the i-th interval, and Z is the output vector of the neural network.
B19. The apparatus of B17, wherein the model training module is further adapted to perform regression training on the facial feature-point coordinates of the training image set according to the convolutional neural network model of the face rotation angle and the labeled facial feature points, with the regression loss computed as:

D = (1 / 2N) · Σ_{i=1}^{N} ||x_{1i} − x_{2i}||₂²

where N is the number of facial feature points to be output, x_{1i} is the coordinate of a facial feature point output by the convolutional neural network, x_{2i} is the coordinate of the corresponding manually labeled feature point, and D is the error between the coordinates output by the network and the manually labeled coordinates.
B20. The apparatus of B12, wherein the convolutional neural network comprises repeatedly stacked convolutional layers, ReLU layers, and down-sampling layers, with multiple output branches obtained from the fully connected layers stacked at the end; each output branch corresponds to one facial attribute, and the error value of the corresponding attribute is back-propagated during model training.

B21. The apparatus of B15, wherein the face detection module is further adapted to detect whether multiple faces exist in the self-portrait image to be processed; the feature-point computation module is further adapted, when the face detection module detects multiple faces, to obtain for each face the abscissas x_left and x_right of its leftmost and rightmost feature points and the ordinates y_top and y_bottom of its topmost and bottommost feature points; the distance computation module is further adapted to compute each face's region area from these coordinate values as size = |(x_right − x_left) · (y_bottom − y_top)|, determine the face with the largest region area in the self-portrait image, and compute its left/right similar-distance parameters; and the image saving module is further adapted to mirror-save or save the self-portrait image as-is according to the computed similar-distance parameters.

B22. The apparatus of B21, wherein the distance computation module is further adapted, when the face detection module detects multiple faces, to compute the left/right similar-distance parameters of each face from its feature points and to sum the left-face parameters of all faces and the right-face parameters of all faces; and the image saving module is further adapted to judge whether the left-face sum is greater than the right-face sum, and if so, to save the self-portrait image as-is, otherwise to mirror-save it.
Numerous specific details are set forth in the description provided herein. It will be understood, however, that embodiments of the invention may be practiced without these specific details. In some instances, well-known methods, structures, and techniques have not been shown in detail so as not to obscure an understanding of this description.

Similarly, it should be appreciated that, in the foregoing description of exemplary embodiments of the invention, various features of the invention are sometimes grouped together in a single embodiment, figure, or description thereof in order to streamline the disclosure and aid the understanding of one or more of the various inventive aspects. This method of disclosure, however, is not to be interpreted as reflecting an intention that the claimed invention requires more features than are expressly recited in each claim. Rather, as the following claims reflect, inventive aspects lie in less than all features of a single foregoing disclosed embodiment. Thus, the claims following the detailed description are hereby expressly incorporated into this detailed description, with each claim standing on its own as a separate embodiment of this invention.

Those skilled in the art will appreciate that the modules, units, or components of the devices in the examples disclosed herein may be arranged in a device as described in the embodiments, or alternatively may be located in one or more devices different from the devices in the examples. The modules in the foregoing examples may be combined into one module or may be divided into multiple sub-modules.

Those skilled in the art will appreciate that the modules in the device of an embodiment may be adaptively changed and arranged in one or more devices different from that embodiment. The modules or units or components of the embodiments may be combined into one module or unit or component, and they may furthermore be divided into a plurality of sub-modules or sub-units or sub-components. Except where at least some of such features and/or processes or units are mutually exclusive, all features disclosed in this specification (including the accompanying claims, abstract, and drawings) and all processes or units of any method or device so disclosed may be combined in any combination. Unless expressly stated otherwise, each feature disclosed in this specification (including the accompanying claims, abstract, and drawings) may be replaced by alternative features serving the same, equivalent, or similar purpose.

Furthermore, while some embodiments described herein include some but not other features included in other embodiments, combinations of features of different embodiments are meant to be within the scope of the invention and to form different embodiments, as would be understood by those skilled in the art. For example, in the following claims, any of the claimed embodiments may be used in any combination.

Furthermore, some of the embodiments are described herein as methods or combinations of method elements that can be implemented by a processor of a computer system or by other means of carrying out the described functions. A processor having the necessary instructions for carrying out such a method or method element therefore forms a means for carrying out the method or method element. Furthermore, an element described herein of an apparatus embodiment is an example of a means for carrying out the function performed by that element for the purpose of carrying out the invention.

As used herein, unless otherwise specified, the use of the ordinal adjectives "first", "second", "third", etc. to describe a common object merely indicates that different instances of like objects are being referred to, and is not intended to imply that the objects so described must be in a given sequence, whether temporally, spatially, in ranking, or in any other manner.

While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having the benefit of the above description, will appreciate that other embodiments can be devised within the scope of the invention thus described. Furthermore, it should be noted that the language used in this specification has been principally selected for readability and instructional purposes, and not to delineate or circumscribe the inventive subject matter. Accordingly, many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the appended claims. With respect to the scope of the invention, the disclosure made herein is illustrative and not restrictive, the scope of the invention being defined by the appended claims.

Claims (10)

1. An optimization method for self-portrait images, suitable to be executed in a camera terminal, the method comprising:
collecting a plurality of face images and labeling the facial feature points therein to form a training image set;
inputting the labeled training image set into a convolutional neural network for facial feature-point training to obtain a convolutional neural network model of facial feature points;
inputting a self-portrait image to be processed into the convolutional neural network model of facial feature points for prediction, obtaining the facial feature points of the self-portrait image;
obtaining the similar-distance parameters of the left and right halves of the face from the facial feature points of the self-portrait image;
judging whether the similar-distance parameter of the left half of the face is greater than that of the right half;
if so, saving the self-portrait image as-is, otherwise mirror-saving it.
2. The method of claim 1, wherein the facial feature points include the nose tip C, the left and right lip corners E and F, and any one of the following groups of eye feature points:
the left- and right-eye center points A1 and B1, or the left eye's left corner A2 together with the right eye's right corner B2;
wherein the perpendiculars dropped from point C onto the lines A1B1 and EF meet them at points D and G, respectively.
3. The method of claim 2, wherein the left/right similar-distance parameters include at least any one of the following five groups of distance parameters:
ⅰ. the distance A1D between points A1 and D, and the distance B1D between points B1 and D;
ⅱ. the distance A2D between points A2 and D, and the distance B2D between points B2 and D;
ⅲ. the distance EG between points E and G, and the distance FG between points F and G;
ⅳ. the sum of distances A1C + CE from point C to point A1 and to point E, and the sum B1C + CF from point C to point B1 and to point F;
ⅴ. the sum of distances A2C + CE from point C to point A2 and to point E, and the sum B2C + CF from point C to point B2 and to point F.
4. the method for claim 1, also includes:
Described pending auto heterodyne image is carried out Face datection, obtains human face region, and this human face region is carried out cutting and Scaling processes.
5. The method of claim 1, further comprising:
Calculating, from the human face feature point parameters of the selfie image, a transformation matrix for in-plane rotation of the selfie image, and rotating the selfie image into a horizontal frontal image according to the transformation matrix.
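One common way to realise the in-plane rotation matrix of claim 5 is from the roll angle of the line through the two eye centres; the sketch below is under that assumption rather than the patent's exact construction, and the landmark names are illustrative:

```python
from math import atan2, cos, sin

def roll_matrix(left_eye, right_eye):
    # Roll angle of the inter-ocular line; rotating by its negative
    # levels the face (2x2 rotation matrix, image translation omitted).
    a = -atan2(right_eye[1] - left_eye[1], right_eye[0] - left_eye[0])
    return [[cos(a), -sin(a)], [sin(a), cos(a)]]

# Eyes tilted 45 degrees: applying M to the eye vector levels it.
M = roll_matrix((0.0, 0.0), (10.0, 10.0))
vx = M[0][0] * 10 + M[0][1] * 10
vy = M[1][0] * 10 + M[1][1] * 10
```

After the rotation vy is zero, i.e. the eye line is horizontal and the face is upright.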
6. The method of claim 1, further comprising:
Annotating a plurality of collected facial images with ethnicity, age and face rotation angle to form a training image set;
Inputting the training image set annotated with face rotation angles into a convolutional neural network for training, outputting the face pose type corresponding to each preset interval range of the face rotation angle, and obtaining a convolutional neural network model of the face rotation angle.
7. The method of claim 6, wherein the formula for outputting the preset interval ranges of the face rotation angle is:

σ_i(Z) = exp(Z_i) / Σ_{j=1}^{m} exp(Z_j)

where m is the number of angle intervals, i denotes the i-th interval, σ_i(Z) is the probability that the output falls in the i-th interval, and Z is the output of the neural network.
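The formula of claim 7 is the standard softmax over m angle intervals; a direct sketch (subtracting max(z) is a numerical-stability detail not stated in the claim, and does not change the result):

```python
from math import exp

def softmax(z):
    # sigma_i(Z) = exp(Z_i) / sum_j exp(Z_j)
    m = max(z)                       # shift for numerical stability
    e = [exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

# m = 3 angle intervals: the largest logit gets the largest probability
p = softmax([1.0, 2.0, 3.0])
```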
8. The method of claim 6, further comprising:
Performing regression training on the face feature point coordinates of the training image set according to the convolutional neural network model of the face rotation angle and the annotated human face feature points, with the regression formula:

D = (1 / 2N) Σ_{i=1}^{N} ||x1i − x2i||₂²

where N is the number of human face feature points to be output, x1i is the feature-point coordinate output by the convolutional neural network, x2i is the coordinate of the manually annotated feature point, and D is the error between the coordinates output by the convolutional neural network and the manually annotated coordinates.
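The regression error of claim 8 can be computed directly from predicted and annotated (x, y) pairs; a minimal sketch (function name is illustrative):

```python
def landmark_loss(pred, gt):
    # D = 1/(2N) * sum_i ||x1_i - x2_i||_2^2 over N landmark points,
    # each given as an (x, y) coordinate pair.
    n = len(pred)
    return sum((px - gx) ** 2 + (py - gy) ** 2
               for (px, py), (gx, gy) in zip(pred, gt)) / (2 * n)

# Two predicted points, each 1 pixel off horizontally:
# D = (1 + 1) / (2 * 2) = 0.5
loss = landmark_loss([(1.0, 0.0), (2.0, 0.0)], [(0.0, 0.0), (1.0, 0.0)])
```

This is the familiar mean-squared-error objective (with the conventional 1/2 factor) used for landmark regression heads.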
9. An optimization device for selfie images, adapted to reside in a camera terminal, the device comprising:
An image training module, adapted to collect a plurality of facial images and annotate the human face feature points, forming a training image set;
A model training module, adapted to input the annotated training image set into a convolutional neural network for training on the human face feature points, obtaining a convolutional neural network model of human face feature points;
A feature point computing module, adapted to input the selfie image to be processed into the convolutional neural network model of human face feature points for prediction, obtaining the human face feature points of the selfie image;
A distance calculation module, adapted to obtain the similar distance parameters of the left and right face according to the human face feature points of the selfie image;
An image saving module, adapted to judge whether the distance parameter of the left face is greater than that of the right face; if so, saving the selfie image as-is, otherwise saving it mirrored.
10. A camera terminal, comprising the selfie-image optimization device of claim 9.
CN201610622070.XA 2016-08-01 2016-08-01 A kind of optimization method, device and the camera terminal of self-timer image Active CN106295533B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610622070.XA CN106295533B (en) 2016-08-01 2016-08-01 A kind of optimization method, device and the camera terminal of self-timer image

Publications (2)

Publication Number Publication Date
CN106295533A true CN106295533A (en) 2017-01-04
CN106295533B CN106295533B (en) 2019-07-02

Family

ID=57663958

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610622070.XA Active CN106295533B (en) 2016-08-01 2016-08-01 A kind of optimization method, device and the camera terminal of self-timer image

Country Status (1)

Country Link
CN (1) CN106295533B (en)

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101377814A (en) * 2007-08-27 2009-03-04 索尼株式会社 Face image processing apparatus, face image processing method, and computer program
CN101815174A (en) * 2010-01-11 2010-08-25 北京中星微电子有限公司 Control method and control device for camera shooting
CN103152489A (en) * 2013-03-25 2013-06-12 锤子科技(北京)有限公司 Showing method and device for self-shooting image
CN103383595A (en) * 2012-05-02 2013-11-06 三星电子株式会社 Apparatus and method of controlling mobile terminal based on analysis of user's face
CN103793693A (en) * 2014-02-08 2014-05-14 厦门美图网科技有限公司 Method for detecting face turning and facial form optimizing method with method for detecting face turning
CN105205462A (en) * 2015-09-18 2015-12-30 北京百度网讯科技有限公司 Shooting promoting method and device
CN105205779A (en) * 2015-09-15 2015-12-30 厦门美图之家科技有限公司 Eye image processing method and system based on image morphing and shooting terminal
CN105227832A (en) * 2015-09-09 2016-01-06 厦门美图之家科技有限公司 A kind of self-timer method based on critical point detection, self-heterodyne system and camera terminal

Cited By (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018184192A1 (en) * 2017-04-07 2018-10-11 Intel Corporation Methods and systems using camera devices for deep channel and convolutional neural network images and formats
US11551335B2 (en) 2017-04-07 2023-01-10 Intel Corporation Methods and systems using camera devices for deep channel and convolutional neural network images and formats
CN107194361A (en) * 2017-05-27 2017-09-22 成都通甲优博科技有限责任公司 Two-dimentional pose detection method and device
CN107194361B (en) * 2017-05-27 2021-04-02 成都通甲优博科技有限责任公司 Two-dimensional posture detection method and device
CN107506732A (en) * 2017-08-25 2017-12-22 奇酷互联网络科技(深圳)有限公司 Method, equipment, mobile terminal and the computer-readable storage medium of textures
CN107506732B (en) * 2017-08-25 2021-03-30 奇酷互联网络科技(深圳)有限公司 Method, device, mobile terminal and computer storage medium for mapping
WO2019090904A1 (en) * 2017-11-10 2019-05-16 广州视源电子科技股份有限公司 Distance determination method, apparatus and device, and storage medium
CN109934058A (en) * 2017-12-15 2019-06-25 北京市商汤科技开发有限公司 Face image processing process, device, electronic equipment, storage medium and program
CN108055461A (en) * 2017-12-21 2018-05-18 广东欧珀移动通信有限公司 Recommendation method, apparatus, terminal device and the storage medium of self-timer angle
CN109977727A (en) * 2017-12-27 2019-07-05 广东欧珀移动通信有限公司 Sight protectio method, apparatus, storage medium and mobile terminal
CN108846342A (en) * 2018-06-05 2018-11-20 四川大学 A kind of harelip operation mark point recognition system
CN108848405B (en) * 2018-06-29 2020-10-09 广州酷狗计算机科技有限公司 Image processing method and device
CN108848405A (en) * 2018-06-29 2018-11-20 广州酷狗计算机科技有限公司 Image processing method and device
CN109214343A (en) * 2018-09-14 2019-01-15 北京字节跳动网络技术有限公司 Method and apparatus for generating face critical point detection model
CN109376712A (en) * 2018-12-07 2019-02-22 广州纳丽生物科技有限公司 A kind of recognition methods of face forehead key point
CN112465910A (en) * 2020-11-26 2021-03-09 成都新希望金融信息有限公司 Target shooting distance obtaining method and device, storage medium and electronic equipment
CN112541484A (en) * 2020-12-28 2021-03-23 平安银行股份有限公司 Face matting method, system, electronic device and storage medium
CN112541484B (en) * 2020-12-28 2024-03-19 平安银行股份有限公司 Face matting method, system, electronic device and storage medium

Also Published As

Publication number Publication date
CN106295533B (en) 2019-07-02

Similar Documents

Publication Publication Date Title
CN106295533A (en) Optimization method, device and the camera terminal of a kind of image of autodyning
CN110473141B (en) Image processing method, device, storage medium and electronic equipment
JP6732317B2 (en) Face activity detection method and apparatus, and electronic device
TWI753271B (en) Resource transfer method, device and system
CN106934376B (en) A kind of image-recognizing method, device and mobile terminal
US8879803B2 (en) Method, apparatus, and computer program product for image clustering
CN109815770B (en) Two-dimensional code detection method, device and system
CN110163806B (en) Image processing method, device and storage medium
CN108171152A (en) Deep learning human eye sight estimation method, equipment, system and readable storage medium storing program for executing
US10318797B2 (en) Image processing apparatus and image processing method
CN108062526A (en) A kind of estimation method of human posture and mobile terminal
CN105608425B (en) The method and device of classification storage is carried out to photo
WO2020199611A1 (en) Liveness detection method and apparatus, electronic device, and storage medium
CN110059661A (en) Action identification method, man-machine interaction method, device and storage medium
CN109242765B (en) Face image processing method and device and storage medium
CN109726659A (en) Detection method, device, electronic equipment and the readable medium of skeleton key point
CN109815843A (en) Object detection method and Related product
CN107368810A (en) Method for detecting human face and device
CN110414428A (en) A method of generating face character information identification model
TW202006630A (en) Payment method, apparatus, and system
CN109934065A (en) A kind of method and apparatus for gesture identification
CN108537193A (en) Ethnic attribute recognition approach and mobile terminal in a kind of face character
CN106250839A (en) A kind of iris image perspective correction method, device and mobile terminal
CN110443769A (en) Image processing method, image processing apparatus and terminal device
CN107145839A (en) A kind of fingerprint image completion analogy method and its system

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant