CN108021905A - image processing method, device, terminal device and storage medium - Google Patents
- Publication number
- CN108021905A CN108021905A CN201711394859.5A CN201711394859A CN108021905A CN 108021905 A CN108021905 A CN 108021905A CN 201711394859 A CN201711394859 A CN 201711394859A CN 108021905 A CN108021905 A CN 108021905A
- Authority
- CN
- China
- Prior art keywords
- face
- facial image
- target facial
- image
- face feature
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/165—Detection; Localisation; Normalisation using facial parts and geometric relationships
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
- G06V40/171—Local features and components; Facial parts ; Occluding parts, e.g. glasses; Geometrical relationships
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Physics & Mathematics (AREA)
- Human Computer Interaction (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Geometry (AREA)
- Image Processing (AREA)
Abstract
The embodiments of the present application disclose an image processing method, a device, a terminal device and a storage medium. The method includes: obtaining the facial features of a target face image; annotating the target face image according to the facial features; and inputting the annotated target face image into a trained model to adjust the facial features of the target face image. In the image processing method provided by the embodiments of the present application, a face image annotated with facial features is input into a trained model to adjust the facial features, which not only improves the efficiency of picture processing but also improves the accuracy of the adjustment of the facial features.
Description
Technical field
The present application relates to the technical field of image processing, and in particular to an image processing method, a device, a terminal device and a storage medium.
Background
With the development of mobile communication technology and the popularization of intelligent terminals, the intelligent mobile terminal has become one of the indispensable tools in people's daily life. Terminal devices not only support taking pictures but also support picture processing, for example skin smoothing, whitening, blemish removal, face slimming, eye enlargement and eye brightening of the faces in a picture.
In the related art, when the face region of a picture is processed, the user has to manually select each function to apply. When there are many processing functions and the user applies several of them to a loaded picture, each one must be operated separately, so the workflow is cumbersome, and the result sometimes looks unnatural and fails to achieve the effect the user wants. Existing image processing methods therefore have certain defects.
Summary of the invention
The embodiments of the present application provide an image processing method, a device, a terminal device and a storage medium, to improve the efficiency of picture processing.
In a first aspect, an embodiment of the present application provides an image processing method, including:
obtaining the facial features of a target face image;
annotating the target face image according to the facial features;
inputting the annotated target face image into a trained model to adjust the facial features of the target face image.
In a second aspect, an embodiment of the present application further provides a picture processing apparatus, including:
a facial-feature acquisition module, configured to obtain the facial features of a target face image;
a target face image annotation module, configured to annotate the target face image according to the facial features;
a facial-feature adjustment module, configured to input the annotated target face image into a trained model to adjust the facial features of the target face image.
In a third aspect, an embodiment of the present application further provides a terminal device, including a processor, a memory, and a computer program stored in the memory and executable on the processor, where the processor implements the method described in the first aspect when executing the computer program.
In a fourth aspect, an embodiment of the present application further provides a storage medium on which a computer program is stored, where the program implements the method described in the first aspect when executed by a processor.
In the embodiments of the present application, the facial features of a target face image are obtained first, the target face image is then annotated according to the facial features, and the annotated target face image is finally input into a trained model to adjust its facial features. By feeding a face image annotated with facial features into a trained model to adjust the face, the method not only improves the efficiency of picture processing but also improves the accuracy of the adjustment of the facial features.
Brief description of the drawings
Fig. 1 is a flowchart of an image processing method in an embodiment of the present application;
Fig. 2 is a flowchart of another image processing method in an embodiment of the present application;
Fig. 3 is a flowchart of another image processing method in an embodiment of the present application;
Fig. 4 is a flowchart of another image processing method in an embodiment of the present application;
Fig. 5 is a structure diagram of a picture processing apparatus in an embodiment of the present application;
Fig. 6 is a structure diagram of a terminal device in an embodiment of the present application;
Fig. 7 is a structure diagram of another terminal device in an embodiment of the present application.
Detailed description of the embodiments
The application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are only used to explain the application, not to limit it. It should also be noted that, for ease of description, the drawings show only the parts relevant to the application rather than the entire structure.
Fig. 1 is a flowchart of an image processing method provided by an embodiment of the present application. This embodiment is applicable to the processing of pictures that contain face images. The method can be performed by a picture processing apparatus, which can be integrated in a terminal device such as a mobile phone or a tablet. As shown in Fig. 1, the method includes the following steps.
Step 110: obtain the facial features of the target face image.
Here, the target face image can be an image containing at least one face; it can be an image shot by the local terminal, an image downloaded from a network database, or an image sent by another terminal device and received through social software. The facial features can refer to the individual parts of the face, including the ears, eyebrows, eyes, nose and lips. A facial feature can be described by its position and its attributes. In this application scenario, only the eyebrows, eyes, nose and lips need to be considered, and the ears can be ignored. The position of a feature is its location relative to the whole face. The attributes of a feature characterize its size and type, where the size is determined by the ratio of the feature to the face. The type of a feature can be determined with existing classification schemes. For example, eyebrow types can include crescent eyebrows, unibrows, triangular eyebrows, arched eyebrows, slanted eyebrows and straight upswept eyebrows, and eyebrows can be dense or sparse; eye types can include peach-blossom eyes, phoenix eyes, drooping phoenix eyes, willow-leaf eyes, apricot eyes, fox eyes, bell eyes, longan eyes, slim eyes and fawn eyes, and eyes can be large, medium or small; nose types can include ultra-narrow, narrow, medium, wide and ultra-wide noses, and the nose bridge can be high or flat; lip types can include thin lips, full lips, downturned mouth corners, cola-shaped lips, doll-shaped lips and cupid's-bow lips.
In this embodiment, the positions of the facial features can be obtained by locating them with a localization algorithm from the related art, such as the Supervised Descent Method (SDM), the Active Appearance Model (AAM) algorithm or the Active Shape Model (ASM) algorithm. The attributes of the facial features can be obtained by comparing each feature of the face image with a standard template. For example, after a certain target face image is recognized and analyzed, the obtained facial features may be dense arched eyebrows, medium-sized slim eyes, a high-bridged nose of medium width, and thin lips.
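The attribute classification described above (size determined by the ratio of a feature to the face) can be sketched as a simple rule. The landmark-derived widths and the ratio thresholds below are illustrative assumptions, not values given in this application:

```python
# Hypothetical classification of eye size from measurements that a
# localization algorithm (e.g. SDM, AAM or ASM) could provide; the
# 0.18 / 0.24 thresholds are made up for illustration.
def eye_size_attribute(eye_width, face_width):
    """Classify eye size by its ratio to the overall face width."""
    ratio = eye_width / face_width
    if ratio < 0.18:
        return "small"
    if ratio < 0.24:
        return "medium"
    return "large"
```

The same ratio-based scheme would extend naturally to nose width or lip thickness.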
In this embodiment, the facial features of the target face image can be obtained as follows: first, face recognition is performed on the target face image and the region containing the face is extracted; then the facial features in the face are recognized to obtain the position of each feature; and each feature is analyzed, or compared with a preset template, to obtain its attributes, thereby obtaining the facial features of the target face image.
Step 120: annotate the target face image according to the facial features.
In this embodiment, annotating the target face image can mean marking the facial features on the target face image. The annotation can be done by inputting the target face image into a facial-feature annotation neural network, which annotates the facial features on the image, or by annotating the facial features manually at pixel level. Here, the facial-feature annotation neural network can be any existing neural network capable of facial-feature annotation. The manual approach means locating the facial features by hand with the help of a localization algorithm from the related art to obtain and annotate their positions, and comparing the features of the target face image with templates by hand to obtain the attributes, which are then annotated on the target face image.
Specifically, after the facial features of the target face image are obtained, the target face image is annotated according to them. For example, if the facial features of the target face image are dense arched eyebrows, medium-sized slim eyes, a high-bridged nose of medium width, and thin lips, these features are marked on the target face image.
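As one hedged sketch, the annotation produced in this step could be represented as a simple record pairing the image with its feature labels; the data layout and the file name below are illustrative assumptions, not defined by this application:

```python
# Minimal stand-in for a facial-feature annotation record.
def annotate(image_id, features):
    """Attach a facial-feature annotation to an image identifier."""
    return {"image": image_id, "features": dict(features)}

# Hypothetical annotation matching the example in the text above.
labels = annotate("target_001.jpg", {
    "eyebrows": "dense arched",
    "eyes": "medium slim",
    "nose": "high-bridged, medium width",
    "lips": "thin",
})
```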
Step 130: input the annotated target face image into the trained model to adjust the facial features of the target face image.
Here, the trained model can be a model that adjusts the facial features of a face image. It can be obtained by continuously training on a sample set with a chosen machine learning algorithm. In this embodiment, the trained model can adjust the facial features of a face image that has been annotated with them.
In this embodiment, the trained model can adjust the target face image as follows: after the annotated target face image is input, the model analyzes the facial features and, according to the attributes and positions of the features and their ratios to the whole face, adjusts the eyebrows, eyes, nose and lips respectively, and finally outputs the adjusted face image. For example, if the eyes in the target face image look small relative to the whole face, they are enlarged according to the size of the face, and if the nose bridge is low, it is raised appropriately.
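A deliberately simple, rule-based stand-in for this adjustment logic can be sketched as follows; the "pleasing" ratio ranges and the 20% adjustment step are assumptions for illustration, not values from this application:

```python
# Assumed target ranges for a feature's ratio to the whole face.
IDEAL_RATIO = {"eyes": (0.20, 0.26), "nose_bridge": (0.30, 0.40)}

def adjust(feature, ratio):
    """Nudge an out-of-range feature ratio toward its assumed range."""
    lo, hi = IDEAL_RATIO[feature]
    if ratio < lo:
        return min(ratio * 1.2, lo)   # enlarge, capped at the range floor
    if ratio > hi:
        return max(ratio * 0.8, hi)   # shrink, capped at the range ceiling
    return ratio                      # already in range: leave unchanged

adjusted_eyes = adjust("eyes", 0.15)  # small eyes get enlarged
```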
In the technical solution of this embodiment, the facial features of the target face image are obtained first, the target face image is then annotated according to them, and the annotated image is finally input into the trained model to adjust its facial features. By feeding a face image annotated with facial features into a trained model to adjust the face, the method not only improves the efficiency of picture processing but also improves the accuracy of the adjustment of the facial features.
Optionally, after the target face image is annotated according to the facial features, the following steps are further included: obtaining the face-shape feature of the target face image; and annotating the target face image according to the face-shape feature.
Here, the face shape can be the contour of the face. Face-shape features can include a round face, a rectangular face, a square face, a triangular face, an oval face, and so on. The face-shape feature of the target face image can be obtained by analyzing the contour of the face in the image, or by comparing the face with standard face-shape templates. After the face-shape feature is obtained, it is annotated on the target face image. For example, if the face-shape feature of the target face image is a round face, "round face" is marked on the target face image. Finally, the target face image annotated with both the facial features and the face-shape feature is input into the trained model to adjust its facial features.
In the technical solution of this embodiment, the face-shape feature of the target face image is obtained and the image is annotated accordingly. Inputting a target face image annotated with both the facial features and the face-shape feature into the trained model can improve the accuracy of the adjustment of the facial features.
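The face-shape classification could be sketched from coarse contour measurements as below; the thresholds are invented for illustration, since this application only names the shape categories:

```python
# Hypothetical face-shape classifier from three contour measurements.
# Aspect (height/width) and jaw/width thresholds are assumptions.
def face_shape(width, height, jaw_width):
    aspect = height / width
    if aspect < 1.1:
        return "round" if jaw_width / width > 0.8 else "square"
    if aspect > 1.4:
        return "rectangular"
    return "oval" if jaw_width / width < 0.75 else "triangular"
```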
Fig. 2 is a flowchart of another image processing method provided by an embodiment of the present application. As shown in Fig. 2, the method includes the following steps.
Step 210: obtain a face image set.
Here, the face image set can be obtained from a network database and can consist of a large number of celebrity portraits that have already been beautified. In this embodiment, the face image set can be obtained by searching the pictures under the face-image category in a network database and downloading a large number of face images in batches; or by separately searching the portrait albums of celebrities with different facial features and downloading a preset number of pictures from each album. For example, 10 portrait albums can be downloaded for each of 100 female celebrities and 100 male celebrities to form the face image set.
Step 220: annotate the face image set according to the facial features to obtain a face image sample set.
Here, the face image set can be annotated according to the facial features by inputting it into a facial-feature annotation neural network, which annotates the facial features on the images, or by annotating the facial features manually at pixel level.
The facial-feature annotation neural network can be any existing neural network with a facial-feature annotation function. The manual approach means locating the facial features by hand with the help of a localization algorithm from the related art to obtain the feature positions in every picture of the set, annotating them at pixel level, and comparing the features of each image with templates by hand to obtain the attributes, which are then annotated on the images. In this embodiment, every image in the face image set needs to be annotated, and every annotated face image serves as a sample.
Specifically, after the face image set is obtained, each face image in the set is annotated according to its own facial features, thereby forming the face image sample set.
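The sample-set construction above can be sketched as pairing every image with its annotation; the stub annotator below stands in for the neural network or manual labeling described in the text:

```python
# Every annotated image becomes one training sample.
def build_sample_set(image_ids, extract_features):
    return [{"image": i, "features": extract_features(i)} for i in image_ids]

# Hypothetical file names and a stub annotator, for illustration only.
samples = build_sample_set(
    ["star_001.jpg", "star_002.jpg"],
    lambda i: {"eyes": "large", "nose": "narrow"},
)
```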
Step 230: train the model based on a chosen machine learning algorithm according to the face image sample set.
In this embodiment, after the sample set is obtained, the model is trained with a chosen machine learning algorithm so that it learns the positional relationships between the facial features in the sample set and the combinations of feature attributes. This gives the model the ability to adjust the facial features of a face image according to its facial-feature annotation, thereby beautifying the image. Once successfully trained, the model can be used to process face images.
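As one minimal interpretation of this training step, a stand-in "model" could simply learn the average feature-to-face ratios observed in the beautified sample set and later adjust new faces toward those averages; this application does not fix the algorithm, so the sketch below is purely illustrative:

```python
# "Training" here means averaging each feature's ratio across samples.
def train(samples):
    totals, counts = {}, {}
    for s in samples:
        for feat, ratio in s.items():
            totals[feat] = totals.get(feat, 0.0) + ratio
            counts[feat] = counts.get(feat, 0) + 1
    return {feat: totals[feat] / counts[feat] for feat in totals}

# Hypothetical per-feature ratios measured from two beautified samples.
model = train([{"eyes": 0.22, "nose": 0.33},
               {"eyes": 0.26, "nose": 0.35}])
```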
Step 240: obtain the facial features of the target face image.
Step 250: annotate the target face image according to the facial features.
Step 260: input the annotated target face image into the trained model to adjust the facial features of the target face image.
Optionally, after the face image set is obtained, the following step is further included: obtaining the face-shape features of the face image set.
In this embodiment, the face-shape features of the face image set can be obtained by analyzing the face contour of every image in the set, or by comparing the face in every image with standard templates.
Optionally, annotating the face image set according to the facial features to obtain the face image sample set can be implemented as follows: the face image set is annotated according to both the facial features and the face-shape features to obtain the face image sample set.
In this embodiment, after the facial features and the face-shape feature of every image in the set are obtained, each face image is annotated with both, thereby obtaining the face image sample set. The model is trained on this sample set with the chosen machine learning algorithm so that it can adjust the facial features of a face image according to both the facial features and the face-shape feature. The advantage of annotating the face image set with face-shape features is that the model can use both the facial features and the face shape as references when adjusting the features, which improves the reliability of the model's picture processing.
In the technical solution of this embodiment, a face image set is obtained and annotated according to the facial features to obtain a face image sample set, and the model is then trained on the sample set with a chosen machine learning algorithm. Annotating the collected face image set to form a sample set and training the model on it can improve the accuracy of the model's picture processing.
Fig. 3 is a flowchart of another image processing method provided by an embodiment of the present application. As shown in Fig. 3, the method includes the following steps.
Step 310: obtain the facial features of the target face image.
Step 320: annotate the target face image according to the facial features.
Step 330: input the annotated target face image into the trained model to adjust the facial features of the target face image.
Step 340: obtain the exposure parameter of the target face image.
Here, the exposure parameter can be the exposure used by the camera when shooting the target face image. Once the camera finishes shooting a picture, the exposure of the image is fixed. In this embodiment, the exposure parameter of the target face image can be obtained by looking up the property parameters of the image and reading the exposure parameter from them.
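Reading the exposure parameter from an image's property parameters could be sketched as below; real photographs would typically carry this in EXIF metadata (e.g. an ExposureTime tag), and the plain property dictionary here is a stand-in for that:

```python
# Look up the exposure parameter in the image's property parameters.
def get_exposure(properties):
    return properties.get("ExposureTime")  # seconds, or None if absent

# Hypothetical property parameters of a target face image.
exposure = get_exposure({"ExposureTime": 1 / 60, "ISO": 200})
```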
Step 350: adjust the skin color of the face in the target face image according to the exposure parameter.
Optionally, the skin color of the face in the target face image can be adjusted according to the exposure parameter as follows: the skin color parameter of the target face image is obtained, a skin color correction parameter is determined according to the skin color parameter and the exposure parameter, and the skin color of the face is adjusted according to the correction parameter.
Here, the skin color parameter can be the color value of the pixels in the skin region of the target face image, that is, the RGB values of the skin pixels. It can be obtained by identifying the skin region of the target face image, collecting the color data of the pixels in that region, and computing the average of the color data.
The correction parameter can be the skin color parameter of the target face image after adjustment. It can be determined from the skin color parameter and the exposure parameter by looking up the preset skin color parameter corresponding to the exposure parameter in a preset mapping table, and then deriving the correction parameter from the preset skin color parameter and the image's own skin color parameter. For example, the correction parameter can be the preset skin color parameter itself, or a value between the preset skin color parameter and the image's skin color parameter, such as the average of the two or a value determined by weighting them. The preset mapping table can be a table of exposure parameters and skin color parameters obtained by analyzing a large number of beautified pictures.
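The lookup-and-blend scheme just described can be sketched as follows; the tiny mapping table and the 0.5 weight are illustrative assumptions standing in for a table mined from many beautified pictures:

```python
# Hypothetical preset skin colors (RGB) keyed by exposure level.
PRESET = {"low": (235, 205, 190), "normal": (225, 195, 180)}

def correction(exposure_level, measured_rgb, weight=0.5):
    """Blend the preset skin color with the image's measured skin color."""
    preset = PRESET[exposure_level]
    return tuple(round(weight * p + (1 - weight) * m)
                 for p, m in zip(preset, measured_rgb))

corrected = correction("normal", (205, 175, 160))
```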
Specifically, after the exposure parameter of the target face image is obtained, the skin color parameter of the image is obtained, the correction parameter is then determined from the skin color parameter and the exposure parameter, and the skin color of the target face image is finally adjusted according to the correction parameter.
In the technical solution of this embodiment, the skin color parameter of the target face image is obtained, the skin color correction parameter is determined according to the skin color parameter and the exposure parameter, and the skin color of the face is adjusted according to the correction parameter. Adjusting the skin color after the facial features of the target face image have been adjusted improves the beautification effect of the picture.
Fig. 4 is a flowchart of another image processing method provided by an embodiment of the present application, as a further explanation of the above embodiments. As shown in Fig. 4, the method includes the following steps.
Step 401: obtain a face image set, and obtain the facial features and face-shape features of the face image set.
Step 402: annotate the face image set according to the facial features and face-shape features to obtain a face image sample set.
Step 403: train the model based on a chosen machine learning algorithm according to the face image sample set.
Step 404: obtain the facial features and face-shape feature of the target face image.
Step 405: annotate the target face image according to the facial features and face-shape feature.
Step 406: input the annotated target face image into the trained model to adjust the facial features of the target face image.
Step 407: obtain the exposure parameter and skin color parameter of the target face image.
Step 408: determine the skin color correction parameter according to the skin color parameter and the exposure parameter.
Step 409: adjust the skin color of the face in the target face image according to the skin color correction parameter.
In the technical solution of this embodiment, the target face image annotated with the facial features and the face-shape feature is input into the trained model to adjust its facial features, and the skin color of the face is then adjusted according to the exposure parameter and skin color parameter of the image, thereby beautifying the target face image and improving the efficiency of picture processing.
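The Fig. 4 pipeline can be sketched end to end as a composition of stages; every stage function below is a hypothetical stand-in wired together only to show the claimed order of operations:

```python
# Orchestration of the claimed steps 404-409; the callables are supplied
# by the caller, so this sketch carries no concrete implementation.
def process(image, extract, annotate, model_adjust, exposure, skin_fix):
    features = extract(image)              # step 404: facial features
    labeled = annotate(image, features)    # step 405: annotate the image
    adjusted = model_adjust(labeled)       # step 406: trained model adjusts
    return skin_fix(adjusted, exposure(image))  # steps 407-409: skin color

# Trivial stand-in stages, purely to exercise the wiring.
result = process(
    "img.jpg",
    extract=lambda img: {"eyes": "small"},
    annotate=lambda img, f: (img, f),
    model_adjust=lambda labeled: labeled + ("adjusted",),
    exposure=lambda img: 0.5,
    skin_fix=lambda adj, exp: adj + (exp,),
)
```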
Fig. 5 is a structure diagram of a picture processing apparatus provided by an embodiment of the present application. As shown in Fig. 5, the apparatus includes a facial-feature acquisition module 510, a target face image annotation module 520 and a facial-feature adjustment module 530.
The facial-feature acquisition module 510 is configured to obtain the facial features of a target face image;
the target face image annotation module 520 is configured to annotate the target face image according to the facial features;
the facial-feature adjustment module 530 is configured to input the annotated target face image into the trained model to adjust the facial features of the target face image.
Optionally, the apparatus further includes:
a face-shape feature acquisition module, configured to obtain the face-shape feature of the target face image;
a face-shape feature annotation module, configured to annotate the target face image according to the face-shape feature.
Optionally, the apparatus further includes:
a face image set acquisition module, configured to obtain a face image set;
a face image sample set acquisition module, configured to annotate the face image set according to the facial features to obtain a face image sample set;
a model training module, configured to train the model based on a chosen machine learning algorithm according to the face image sample set.
Optionally, the face image sample set acquisition module is further configured to:
input the face image set into a facial-feature annotation neural network to annotate the facial features on the face image set; or
annotate the facial features on the face image set manually at pixel level.
Optionally, the apparatus further obtains the face-shape features of the face image set;
correspondingly, the face image sample set acquisition module is further configured to:
annotate the face image set according to the facial features and face-shape features to obtain the face image sample set.
Optionally, the apparatus further includes:
an exposure parameter acquisition module, configured to obtain the exposure parameter of the target face image;
a face skin color adjustment module, configured to adjust the skin color of the face in the target face image according to the exposure parameter.
Optionally, the face skin color adjustment module is further configured to:
obtain the skin color parameter of the target face image;
determine the skin color correction parameter according to the skin color parameter and the exposure parameter;
adjust the skin color of the face in the target face image according to the skin color correction parameter.
The above apparatus can perform the methods provided by all the foregoing embodiments of the application and has the corresponding function modules and beneficial effects for performing those methods. For technical details not described in this embodiment, refer to the methods provided by the foregoing embodiments of the application.
Fig. 6 is a structure diagram of a terminal device provided by an embodiment of the present application. As shown in Fig. 6, the terminal device 600 includes a memory 601 and a processor 602, where the processor 602 is configured to perform the following steps:
obtain the facial features of a target face image;
annotate the target face image according to the facial features;
input the annotated target face image into the trained model to adjust the facial features of the target face image.
Fig. 7 is a structure diagram of another terminal device provided by an embodiment of the present application. As shown in Fig. 7, the terminal can include: a housing (not shown), a memory 701, a central processing unit (CPU) 702 (also called a processor, hereinafter CPU), a computer program stored in the memory 701 and executable on the processor 702, a circuit board (not shown), and a power circuit (not shown). The circuit board is placed inside the space enclosed by the housing; the CPU 702 and the memory 701 are arranged on the circuit board; the power circuit supplies power to each circuit or device of the terminal; the memory 701 stores executable program code; and the CPU 702 runs the program corresponding to the executable program code by reading the code stored in the memory 701.
The terminal further includes: a peripheral interface 703, an RF (radio frequency) circuit 705, an audio circuit 706, a loudspeaker 711, a power management chip 708, an input/output (I/O) subsystem 709, a touch screen 712, other input/control devices 710 and an external port 704. These components communicate through one or more communication buses or signal lines 707.
It should be understood that the illustrated terminal device 700 is only one example of a terminal, and the terminal device 700 can have more or fewer components than shown in the figure, can combine two or more components, or can have a different component configuration. The various components shown in the figure can be implemented in hardware, software, or a combination of hardware and software, including one or more signal-processing and/or application-specific integrated circuits.
The terminal device for picture processing provided by this embodiment is described in detail below, taking a smartphone as an example.
Memory 701, the memory 701 can be accessed by CPU702, Peripheral Interface 703 etc., and the memory 701 can
Including high-speed random access memory, can also include nonvolatile memory, such as one or more disk memories,
Flush memory device or other volatile solid-state parts.
Peripheral interface 703: the peripheral interface 703 can connect the input and output peripherals of the device to the CPU 702 and the memory 701.
I/O subsystem 709: the I/O subsystem 709 can connect the input/output peripherals of the device, such as the touch screen 712 and the other input/control devices 710, to the peripheral interface 703. The I/O subsystem 709 may include a display controller 7091 and one or more input controllers 7092 for controlling the other input/control devices 710. The one or more input controllers 7092 receive electrical signals from, or send electrical signals to, the other input/control devices 710, which may include physical buttons (push buttons, rocker buttons, etc.), dials, slide switches, joysticks, and click wheels. It is worth noting that an input controller 7092 may be connected to any of the following: a keyboard, an infrared port, a USB interface, or a pointing device such as a mouse.
Classified by operating principle and by the medium used to transmit information, the touch screen 712 may be resistive, capacitive, infrared, or surface-acoustic-wave. Classified by mounting method, the touch screen 712 may be external, internal, or integrated. Classified by technical principle, the touch screen 712 may be a vector-pressure-sensing touch screen, a resistive touch screen, a capacitive touch screen, an infrared touch screen, or a surface-acoustic-wave touch screen.
Touch screen 712: the touch screen 712 is the input and output interface between the user terminal and the user, and displays visual output to the user; the visual output may include graphics, text, icons, video, and so on. Optionally, the touch screen 712 sends the electrical signal triggered by the user on the touch screen (such as the electrical signal of the contact surface) to the processor 702.
The display controller 7091 in the I/O subsystem 709 receives electrical signals from, or sends electrical signals to, the touch screen 712. The touch screen 712 detects contact on the touch screen, and the display controller 7091 converts the detected contact into interaction with the user interface objects displayed on the touch screen 712, thereby realizing human-computer interaction. The user interface objects displayed on the touch screen 712 may be, for example, icons for running games or icons for connecting to the corresponding network. It is worth noting that the device may also include an optical mouse, which is a touch-sensitive surface that does not display visual output, or an extension of the touch-sensitive surface formed by the touch screen.
RF circuit 705: mainly used to establish communication between the smart speaker and the wireless network (i.e., the network side), and to receive and send data between the smart speaker and the wireless network, for example sending and receiving short messages and e-mail.
Audio circuit 706: mainly used to receive audio data from the peripheral interface 703, convert the audio data into an electrical signal, and send the electrical signal to the loudspeaker 711.
Loudspeaker 711: used to restore the voice signal received by the smart speaker from the wireless network through the RF circuit 705 to sound, and to play that sound to the user.
Power management chip 708: used to supply power to, and manage the power of, the hardware connected through the CPU 702, the I/O subsystem, and the peripheral interface.
In this embodiment, the central processing unit 702 is configured to:
obtain the facial features of a target face image;
label the target face image according to the facial features;
input the labeled target face image into a training model, and adjust the facial features of the target face image.
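The three steps above can be sketched as a simple pipeline. The patent does not specify a detector or model, so `detect_landmarks`, the annotation format, and the identity `model` below are hypothetical stand-ins:

```python
import numpy as np

def detect_landmarks(image):
    """Hypothetical facial-feature detector: returns 68 (x, y) points
    covering eyes, brows, nose, mouth, and jawline."""
    h, w = image.shape[:2]
    rng = np.random.default_rng(0)  # placeholder for a real detector
    return rng.uniform([0, 0], [w, h], size=(68, 2))

def annotate(image, landmarks):
    """Label the target face image with its facial-feature points."""
    return {"image": image, "landmarks": np.asarray(landmarks)}

def adjust_faces(sample, model):
    """Feed the labeled image to the trained model, which returns the
    image with its facial features adjusted."""
    return model(sample)

# Usage: an identity model stands in for the trained network.
image = np.zeros((256, 256, 3), dtype=np.uint8)
sample = annotate(image, detect_landmarks(image))
adjusted = adjust_faces(sample, model=lambda s: s["image"])
```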
Further, after labeling the target face image according to the facial features, the method further includes:
obtaining the facial-feature shapes of the target face image;
labeling the target face image according to the facial-feature shapes.
Further, before obtaining the facial features of the target face image, the method further includes:
obtaining a face image set;
labeling the face image set according to facial features to obtain a face image sample set;
training the training model on the face image sample set based on a set machine learning algorithm.
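The patent does not name the "set machine learning algorithm", so as an illustration only, the training step could look like the following, with a least-squares regressor standing in for the unspecified model:

```python
import numpy as np

def build_sample_set(images, label_fn):
    """Label each face image with facial-feature annotations."""
    return [(img, label_fn(img)) for img in images]

def train_model(sample_set):
    """Fit a linear map from flattened images to flattened labels --
    a minimal stand-in for the patent's unspecified training model."""
    X = np.stack([img.ravel() for img, _ in sample_set])
    Y = np.stack([lbl.ravel() for _, lbl in sample_set])
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W

rng = np.random.default_rng(1)
images = [rng.random((8, 8)) for _ in range(10)]
# label_fn is hypothetical: here, 5 random landmark points per image
sample_set = build_sample_set(images, label_fn=lambda im: rng.random((5, 2)))
W = train_model(sample_set)
```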
Further, labeling the face image set according to facial features includes:
inputting the face image set into a facial-feature labeling neural network to perform facial-feature labeling on the face image set; or,
performing facial-feature labeling on the face image set manually and at the pixel level.
Further, after obtaining the face image set, the method further includes:
obtaining the facial-feature shapes of the face image set.
Correspondingly, labeling the face image set according to facial features to obtain the face image sample set includes:
labeling the face image set according to the facial features and the facial-feature shapes to obtain the face image sample set.
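Combining the two label sources into one sample might look like the following sketch; the bounding-box shape descriptor is a hypothetical choice, not one the patent specifies:

```python
import numpy as np

def feature_shape(landmarks):
    """Hypothetical shape descriptor: the (width, height) extent of a
    group of facial-feature landmark points."""
    pts = np.asarray(landmarks)
    return pts.max(axis=0) - pts.min(axis=0)

def build_sample(image, landmarks):
    """Label one face image with both its facial-feature points and
    their shapes, as the combined annotation described above."""
    return {"image": image,
            "landmarks": np.asarray(landmarks),
            "shape": feature_shape(landmarks)}

sample = build_sample(np.zeros((32, 32)), [(4, 6), (10, 8), (7, 14)])
# the extent of the three points is (10-4, 14-6) = (6, 8)
```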
Further, after adjusting the facial features of the target face image, the method further includes:
obtaining an exposure parameter of the target face image;
adjusting the facial complexion of the target face image according to the exposure parameter.
Further, adjusting the facial complexion of the target face image according to the exposure parameter includes:
obtaining a skin color parameter of the target face image;
determining a skin color correction parameter according to the skin color parameter and the exposure parameter;
adjusting the facial complexion of the target face image according to the skin color correction parameter.
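The patent leaves the correction formula unspecified; the sketch below assumes a simple multiplicative gain derived from mean brightness, with `target_brightness` as a hypothetical parameter:

```python
import numpy as np

def exposure_parameter(image):
    """Estimate exposure as mean luminance normalized to 0..1."""
    return float(image.mean()) / 255.0

def skin_color_parameter(image, face_mask):
    """Mean color of the pixels inside the face region."""
    return image[face_mask].mean(axis=0)

def correction_parameter(skin, exposure, target_brightness=0.55):
    """Gain pulling the face region toward a target brightness --
    an assumed formula, not the one from the patent."""
    return target_brightness / max(exposure, 1e-6)

def adjust_complexion(image, face_mask, gain):
    """Apply the correction gain to the face region only."""
    out = image.astype(np.float32)
    out[face_mask] = np.clip(out[face_mask] * gain, 0, 255)
    return out.astype(np.uint8)

# Usage on a uniform gray face region
img = np.full((4, 4, 3), 100, dtype=np.uint8)
mask = np.ones((4, 4), dtype=bool)
gain = correction_parameter(skin_color_parameter(img, mask),
                            exposure_parameter(img))
result = adjust_complexion(img, mask, gain)
```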
An embodiment of the present application also provides a storage medium containing terminal-device-executable instructions, which, when executed by a terminal device processor, are used to perform a picture processing method.
The computer storage medium of the embodiments of the present application may adopt any combination of one or more computer-readable media. A computer-readable medium may be a computer-readable signal medium or a computer-readable storage medium. A computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples (a non-exhaustive list) of computer-readable storage media include: an electrical connection with one or more wires, a portable computer diskette, a hard disk, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In this document, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by, or in connection with, an instruction execution system, apparatus, or device.
A computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code. Such a propagated data signal may take various forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by, or in connection with, an instruction execution system, apparatus, or device.
The program code contained on a computer-readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wire, optical cable, RF, and so on, or any suitable combination of the above.
Computer program code for carrying out the operations of the present application may be written in one or more programming languages or a combination thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. Where a remote computer is involved, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
Of course, in the storage medium containing computer-executable instructions provided by the embodiments of the present application, the computer-executable instructions are not limited to the picture processing operations described above, and may also perform related operations in the picture processing method provided by any embodiment of the present application.
Note that the above are only preferred embodiments of the present application and the technical principles applied. Those skilled in the art will understand that the present application is not limited to the specific embodiments described here; various obvious changes, readjustments, and substitutions can be made without departing from the protection scope of the present application. Therefore, although the present application has been described in further detail through the above embodiments, it is not limited to the above embodiments, and may include other equivalent embodiments without departing from the concept of the present application; the scope of the present application is determined by the scope of the appended claims.
Claims (10)
- 1. An image processing method, characterized by comprising: obtaining the facial features of a target face image; labeling the target face image according to the facial features; inputting the labeled target face image into a training model, and adjusting the facial features of the target face image.
- 2. The method according to claim 1, characterized in that, after labeling the target face image according to the facial features, the method further comprises: obtaining the facial-feature shapes of the target face image; labeling the target face image according to the facial-feature shapes.
- 3. The method according to claim 1, characterized in that, before obtaining the facial features of the target face image, the method further comprises: obtaining a face image set; labeling the face image set according to facial features to obtain a face image sample set; training the training model on the face image sample set based on a set machine learning algorithm.
- 4. The method according to claim 3, characterized in that labeling the face image set according to facial features comprises: inputting the face image set into a facial-feature labeling neural network to perform facial-feature labeling on the face image set; or, performing facial-feature labeling on the face image set manually and at the pixel level.
- 5. The method according to claim 3, characterized in that, after obtaining the face image set, the method further comprises: obtaining the facial-feature shapes of the face image set; correspondingly, labeling the face image set according to facial features to obtain the face image sample set comprises: labeling the face image set according to the facial features and the facial-feature shapes to obtain the face image sample set.
- 6. The method according to claim 1, characterized in that, after adjusting the facial features of the target face image, the method further comprises: obtaining an exposure parameter of the target face image; adjusting the facial complexion of the target face image according to the exposure parameter.
- 7. The method according to claim 6, characterized in that adjusting the facial complexion of the target face image according to the exposure parameter comprises: obtaining a skin color parameter of the target face image; determining a skin color correction parameter according to the skin color parameter and the exposure parameter; adjusting the facial complexion of the target face image according to the skin color correction parameter.
- 8. An image processing apparatus, characterized by comprising: a facial-feature acquisition module for obtaining the facial features of a target face image; a target face image labeling module for labeling the target face image according to the facial features; and a facial-feature adjustment module for inputting the labeled target face image into a training model and adjusting the facial features of the target face image.
- 9. A terminal device, characterized by comprising: a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method according to any one of claims 1-7 when executing the computer program.
- 10. A storage medium having a computer program stored thereon, characterized in that the program, when executed by a processor, implements the method according to any one of claims 1-7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711394859.5A CN108021905A (en) | 2017-12-21 | 2017-12-21 | image processing method, device, terminal device and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN108021905A true CN108021905A (en) | 2018-05-11 |
Family
ID=62074379
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711394859.5A Pending CN108021905A (en) | 2017-12-21 | 2017-12-21 | image processing method, device, terminal device and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108021905A (en) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108776983A (en) * | 2018-05-31 | 2018-11-09 | 北京市商汤科技开发有限公司 | Based on the facial reconstruction method and device, equipment, medium, product for rebuilding network |
CN108985215A (en) * | 2018-07-09 | 2018-12-11 | Oppo(重庆)智能科技有限公司 | A kind of image processing method, picture processing unit and terminal device |
CN110136054A (en) * | 2019-05-17 | 2019-08-16 | 北京字节跳动网络技术有限公司 | Image processing method and device |
CN110136054B (en) * | 2019-05-17 | 2024-01-09 | 北京字节跳动网络技术有限公司 | Image processing method and device |
CN110927806A (en) * | 2019-10-29 | 2020-03-27 | 清华大学 | Magnetotelluric inversion method and magnetotelluric inversion system based on supervised descent method |
CN110927806B (en) * | 2019-10-29 | 2021-03-23 | 清华大学 | Magnetotelluric inversion method and magnetotelluric inversion system based on supervised descent method |
CN110866469A (en) * | 2019-10-30 | 2020-03-06 | 腾讯科技(深圳)有限公司 | Human face facial features recognition method, device, equipment and medium |
CN111598818A (en) * | 2020-04-17 | 2020-08-28 | 北京百度网讯科技有限公司 | Face fusion model training method and device and electronic equipment |
US11830288B2 (en) | 2020-04-17 | 2023-11-28 | Beijing Baidu Netcom Science And Technology Co., Ltd. | Method and apparatus for training face fusion model and electronic device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109495688B (en) | Photographing preview method of electronic equipment, graphical user interface and electronic equipment | |
CN108021905A (en) | image processing method, device, terminal device and storage medium | |
CN110121118B (en) | Video clip positioning method and device, computer equipment and storage medium | |
CN108594997B (en) | Gesture skeleton construction method, device, equipment and storage medium | |
CN110147805B (en) | Image processing method, device, terminal and storage medium | |
KR102173123B1 (en) | Method and apparatus for recognizing object of image in electronic device | |
CN110059685B (en) | Character area detection method, device and storage medium | |
CN105229673B (en) | Apparatus and associated method | |
US20220237812A1 (en) | Item display method, apparatus, and device, and storage medium | |
US11461949B2 (en) | Electronic device for providing avatar and operating method thereof | |
CN106303029A (en) | The method of controlling rotation of a kind of picture, device and mobile terminal | |
CN110827195B (en) | Virtual article adding method and device, electronic equipment and storage medium | |
CN111062276A (en) | Human body posture recommendation method and device based on human-computer interaction, machine readable medium and equipment | |
CN107967667A (en) | Generation method, device, terminal device and the storage medium of sketch | |
CN110427108A (en) | Photographic method and Related product based on eyeball tracking | |
CN112052897A (en) | Multimedia data shooting method, device, terminal, server and storage medium | |
CN108491780B (en) | Image beautification processing method and device, storage medium and terminal equipment | |
JP2023510375A (en) | Image processing method, device, electronic device and storage medium | |
CN112581358A (en) | Training method of image processing model, image processing method and device | |
CN111325220B (en) | Image generation method, device, equipment and storage medium | |
CN108055461B (en) | Self-photographing angle recommendation method and device, terminal equipment and storage medium | |
WO2019218879A1 (en) | Photographing interaction method and apparatus, storage medium and terminal device | |
CN110991445A (en) | Method, device, equipment and medium for identifying vertically arranged characters | |
CN112149599B (en) | Expression tracking method and device, storage medium and electronic equipment | |
CN108960213A (en) | Method for tracking target, device, storage medium and terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20180511 |