CN109584153A - Method, device and system for retouching the eye region - Google Patents

Method, device and system for retouching the eye region

Info

Publication number
CN109584153A
CN109584153A CN201811491092.2A CN201811491092A
Authority
CN
China
Prior art keywords
channel
image
region
pixel
eye
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201811491092.2A
Other languages
Chinese (zh)
Inventor
廖声洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd filed Critical Beijing Megvii Technology Co Ltd
Priority to CN201811491092.2A priority Critical patent/CN109584153A/en
Publication of CN109584153A publication Critical patent/CN109584153A/en
Pending legal-status Critical Current


Classifications

    • G06T3/04
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Abstract

The present invention provides a method, device and system for retouching the eye region. The method comprises: acquiring image data of the face of a target object; detecting facial feature points of the target object from the image data; determining, from the facial feature points, the region of the eye to be retouched; and retouching that region according to preset retouching parameters to obtain processed image data. The invention retouches the eye region of the target object automatically, making the retouching operation more convenient while improving the retouching effect, thereby improving the user experience.

Description

Method, device and system for retouching the eye region
Technical field
The present invention relates to the technical field of image processing, and in particular to a method, device and system for retouching the eye region.
Background art
With rising aesthetic standards and growing attention to personal image, users often wish to post-process the photos or videos they shoot, for example whitening the skin, slimming the face, removing a double chin, or adjusting the face shape. In existing approaches, the user typically has to process the photo manually with third-party image-editing software, using erase, smudge and blur tools; retouching the eye region is equally cumbersome in practice. A user with little experience in operating such software finds it hard to control the degree of retouching, the result often fails to meet the user's aesthetic expectations, and the user experience is poor.
Summary of the invention
In view of this, an object of the present invention is to provide a method, device and system for retouching the eye region that retouch the eye region of a target object automatically, making the retouching operation more convenient while improving the retouching effect, thereby improving the user experience.
In a first aspect, an embodiment of the present invention provides a method for retouching the eye region, comprising: acquiring image data of the face of a target object; detecting facial feature points of the target object from the image data; determining, from the facial feature points, the region of the eye to be retouched; and retouching that region according to preset retouching parameters to obtain processed image data.
In a preferred embodiment, acquiring the image data of the face of the target object comprises: capturing a preview frame image with an image capture device; performing face detection on the preview frame image with a preset face detection model; and, if a face is detected in the preview frame image, acquiring the image data of the face of the target object.
In a preferred embodiment, determining the region of the eye to be retouched comprises: determining it from the positions of the individual facial feature points. The region to be retouched includes one or more of: the pupil region, upper eyelid region, lower eyelid region, eye-bag region, left eye-corner region, right eye-corner region, left eyebrow region, right eyebrow region, left eyelash region, right eyelash region and eyeliner region.
In a preferred embodiment, determining the region to be retouched from the positions of the feature points comprises: dividing the facial feature points into groups according to their positions, and, for each group, fitting a curve through the group's feature points to obtain the corresponding contour line; the area enclosed by the contour line is taken as the region to be retouched for that group.
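As a sketch of this curve-fitting step, the snippet below fits a smooth closed contour through one group of feature points and returns sampled contour coordinates. The eight eye-contour points, the polar-interpolation fit and all function names are illustrative assumptions, not taken from the patent, which does not specify a fitting method.

```python
import numpy as np

def fit_contour(points, num_samples=100):
    """Fit a smooth closed contour through one group of feature points
    by interpolating the radius as a function of angle around the
    group's centroid (a simple stand-in for curve fitting)."""
    pts = np.asarray(points, dtype=float)
    center = pts.mean(axis=0)
    rel = pts - center
    angles = np.arctan2(rel[:, 1], rel[:, 0])
    radii = np.hypot(rel[:, 0], rel[:, 1])
    order = np.argsort(angles)
    angles, radii = angles[order], radii[order]
    # append the first point shifted by 2*pi so interpolation wraps around
    angles = np.concatenate([angles, angles[:1] + 2 * np.pi])
    radii = np.concatenate([radii, radii[:1]])
    t = np.linspace(angles[0], angles[0] + 2 * np.pi, num_samples)
    r = np.interp(t, angles, radii)
    return center + np.stack([r * np.cos(t), r * np.sin(t)], axis=1)

# eight hypothetical eye-contour feature points, roughly elliptical
eye_points = [(30, 50), (40, 44), (52, 42), (64, 44),
              (74, 50), (64, 56), (52, 58), (40, 56)]
contour = fit_contour(eye_points)
```

The enclosed area of the sampled contour could then be rasterised into a mask marking the region to be retouched.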
In a preferred embodiment, the preset retouching parameters include skin-smoothing parameters, and the retouching step comprises: obtaining the R-channel, G-channel and B-channel images of the region to be retouched; for each of the R, G and B channel images, adjusting the value of every pixel according to the smoothing parameters to obtain an adjusted channel image; and merging the adjusted R, G and B channel images to obtain the processed region.
In a preferred embodiment, the smoothing parameters include a set of specified pixel positions associated with the current pixel position. Adjusting the pixel values of a channel image then comprises, for each pixel in the image: obtaining the specified pixel positions associated with the current pixel, computing a weighted average of the values at those positions, and taking the result as the value of the current pixel.
In a preferred embodiment, the specified pixel positions are the positions adjacent to the current pixel above, below, to the left and to the right; alternatively, they comprise the positions of all eight adjacent pixels: above, below, left, right, upper-left, lower-left, upper-right and lower-right.
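The per-channel smoothing described above (split the region into R, G and B channel images, replace each pixel with a weighted average of itself and its four adjacent neighbours, then merge the channels) might be sketched as follows. The specific weights and the edge-padding policy are illustrative assumptions, since the patent leaves the concrete smoothing parameters open.

```python
import numpy as np

def smooth_channel(channel, weights=(0.4, 0.15, 0.15, 0.15, 0.15)):
    """Replace each pixel with a weighted average of itself and its
    four adjacent neighbours (above, below, left, right). Out-of-bounds
    neighbours reuse the edge pixel's own value (edge padding)."""
    padded = np.pad(channel.astype(float), 1, mode="edge")
    center = padded[1:-1, 1:-1]
    up, down = padded[:-2, 1:-1], padded[2:, 1:-1]
    left, right = padded[1:-1, :-2], padded[1:-1, 2:]
    w_c, w_u, w_d, w_l, w_r = weights
    return w_c * center + w_u * up + w_d * down + w_l * left + w_r * right

def smooth_region(region):
    """Split an H x W x 3 region into its R, G and B channel images,
    smooth each channel independently, then merge them back."""
    return np.stack([smooth_channel(region[:, :, c]) for c in range(3)], axis=2)

demo = np.zeros((5, 5, 3))
demo[2, 2, :] = 100.0  # one bright pixel gets spread to its neighbours
out = smooth_region(demo)
```

Because the weights sum to 1, the smoothing redistributes intensity without changing the overall brightness of the channel.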
In a preferred embodiment, the preset retouching parameters include one or more of eye-shadow colouring parameters, pupil colouring parameters, eyelash colouring parameters and eyeliner colouring parameters. The retouching step then comprises: obtaining the R, G and B channel images of the region to be retouched and the R, G and B channel images of the corresponding colouring parameters; for each channel of the region, computing a weighted fusion of the region's channel image with the corresponding channel image of the colouring parameters according to preset weights, and taking the fusion result as that channel image; and merging the fused R, G and B channel images to obtain the processed region.
In a preferred embodiment, the weighted fusion comprises, for each pixel of the current channel image of the region to be retouched: locating the first pixel in the corresponding channel image of the colouring parameters whose position matches that of the current pixel; fusing the value of the current pixel with the value of the first pixel according to the preset weights; and taking the fusion result as the new value of the current pixel. When every pixel of the channel image has been processed, the fused channel image is obtained.
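A minimal sketch of this channel-by-channel weighted fusion: each channel of the region is alpha-blended with the matching channel of a colouring-parameter image. The weight of 0.25 and the array shapes are assumptions chosen for illustration only.

```python
import numpy as np

def blend_region(region, makeup, weight=0.3):
    """Weighted fusion of the region-to-retouch with a colouring-parameter
    image (e.g. an eye-shadow colour map), channel by channel:
    result = (1 - weight) * original + weight * makeup."""
    fused_channels = []
    for c in range(3):  # R, G, B
        fused = (1.0 - weight) * region[:, :, c] + weight * makeup[:, :, c]
        fused_channels.append(fused)
    return np.stack(fused_channels, axis=2)

skin = np.full((4, 4, 3), 200.0)          # flat skin-coloured patch
shadow = np.zeros((4, 4, 3))
shadow[:, :, 0] = 120.0                   # reddish eye-shadow colour map
result = blend_region(skin, shadow, weight=0.25)
```

A larger weight gives the colouring parameters more influence, which is one plausible way the degree of retouching could be controlled.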
In a preferred embodiment, the retouching step further comprises: feathering or uniformly blurring the region after retouching to obtain the final region.
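The feathering step could be approximated as below: a binary region mask is softened by repeated neighbour averaging so that the retouched region blends into its surroundings. The 5-point kernel and the number of passes are assumptions; the patent does not fix a feathering algorithm.

```python
import numpy as np

def feather_mask(mask, passes=2):
    """Soften the boundary of a binary region mask by repeated
    4-neighbour averaging, a crude stand-in for feathering or
    uniform blurring of the retouched region's edge."""
    m = mask.astype(float)
    for _ in range(passes):
        padded = np.pad(m, 1, mode="edge")
        m = (padded[1:-1, 1:-1] + padded[:-2, 1:-1] + padded[2:, 1:-1]
             + padded[1:-1, :-2] + padded[1:-1, 2:]) / 5.0
    return m

mask = np.zeros((7, 7))
mask[2:5, 2:5] = 1.0          # hard-edged 3x3 region to be retouched
soft = feather_mask(mask)     # edges now fall off gradually
```

The softened mask would then weight how strongly the retouched pixels replace the originals near the region boundary.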
In a second aspect, an embodiment of the present invention provides a device for retouching the eye region, comprising: a data acquisition module for acquiring image data of the face of a target object; a feature point detection module for detecting facial feature points of the target object from the image data; a region determination module for determining, from the facial feature points, the region of the eye to be retouched; and a retouching module for retouching that region according to preset retouching parameters to obtain processed image data.
In a third aspect, an embodiment of the present invention provides a system for retouching the eye region, comprising: an image capture device, a processing device and a storage device. The image capture device acquires preview frame images or image data; the storage device stores a computer program which, when run by the processing device, performs the method for retouching the eye region described above.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing a computer program which, when run by a processing device, performs the steps of the method for retouching the eye region described above.
Embodiments of the present invention provide the following beneficial effects:
In the method, device and system for retouching the eye region provided by the embodiments, after the image data of the face of the target object is acquired, facial feature points of the target object are detected from the image data, the region of the eye to be retouched is determined from those feature points, and the region is then retouched according to preset retouching parameters to obtain processed image data. This approach retouches the eye region of the target object automatically, is convenient to operate, produces a good retouching effect, and thereby improves the user experience.
Further features and advantages of the present invention are set out in the following description; some of them can be deduced or unambiguously determined from the specification, or learnt by practising the techniques described above.
To make the above objects, features and advantages of the present invention clearer and easier to understand, preferred embodiments are described in detail below with reference to the accompanying drawings.
Brief description of the drawings
To describe the specific embodiments of the present invention or the technical solutions of the prior art more clearly, the drawings needed for their description are briefly introduced below. The drawings described below depict some embodiments of the present invention; those of ordinary skill in the art can derive further drawings from them without creative effort.
Fig. 1 is a schematic structural diagram of an electronic system according to an embodiment of the present invention;
Fig. 2 is a flowchart of a method for retouching the eye region according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of facial feature points according to an embodiment of the present invention;
Fig. 4 is a flowchart of another method for retouching the eye region according to an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of a device for retouching the eye region according to an embodiment of the present invention.
Detailed description of the embodiments
To make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions of the invention are described below clearly and completely with reference to the drawings. The described embodiments are plainly some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art from these embodiments without creative effort fall within the scope of protection of the present invention.
Considering that existing eye retouching must be done with third-party software, is cumbersome, offers a retouching effect that is hard to control, and gives a poor user experience, embodiments of the present invention provide a method, device and system for retouching the eye region. The technique can be applied in terminal devices such as cameras, mobile phones and tablet computers, and can be implemented with corresponding software and hardware. The embodiments of the present invention are described in detail below.
Embodiment one:
First, an example electronic system 100 for implementing the method, device and system for retouching the eye region of the embodiments of the present invention is described with reference to Fig. 1.
As shown in the structural diagram of Fig. 1, the electronic system 100 comprises one or more processing devices 102, one or more storage devices 104, an input device 106, an output device 108 and one or more image capture devices 110, interconnected by a bus system 112 and/or another form of connection mechanism (not shown). It should be noted that the components and structure of the electronic system 100 shown in Fig. 1 are illustrative, not restrictive; the electronic system may have other components and structures as needed.
The processing device 102 may be a gateway, an intelligent terminal, or a device comprising a central processing unit (CPU) or another form of processing unit with data-processing and/or instruction-execution capability. It can process data from the other components of the electronic system 100 and can also control those components to perform desired functions.
The storage device 104 may comprise one or more computer program products, which may include various forms of computer-readable storage media such as volatile and/or non-volatile memory. Volatile memory may include, for example, random access memory (RAM) and/or cache memory; non-volatile memory may include, for example, read-only memory (ROM), a hard disk or flash memory. One or more computer program instructions may be stored on the computer-readable storage medium, and the processing device 102 can run those instructions to realise the client functionality (as implemented by the processing device) of the embodiments of the present invention described below and/or other desired functions. Various applications and various data, such as the data used and/or produced by the applications, may also be stored on the computer-readable storage medium.
The input device 106 may be a device the user uses to input instructions, and may include one or more of a keyboard, a mouse, a microphone, a touch screen, and so on.
The output device 108 can output various information (for example, images or sound) to the outside (for example, the user), and may include one or more of a display, a loudspeaker, and so on.
The image capture device 110 can acquire preview frame images or image data and store the acquired preview frame images or image data in the storage device 104 for use by the other components.
Illustratively, the devices of the example electronic system for implementing the method, device and system for retouching the eye region according to the embodiments may be integrated in one unit or distributed; for example, the processing device 102, storage device 104, input device 106 and output device 108 may be integrated in one unit, with the image capture device 110 placed at a designated position where it can capture the target object. When the devices of the electronic system are integrated, the system may be implemented as an intelligent terminal such as a camera, smartphone, tablet computer or computer.
Embodiment two:
This embodiment provides a method for retouching the eye region, executed by the processing device of the electronic system described above. The processing device can be any device with data-processing capability, such as a host computer, a local server or a cloud server. The processing device can process received information independently, or work together with a connected server to analyse and process the information, and upload the processing result to the cloud.
As shown in Fig. 2, the method for retouching the eye region comprises the following steps:
Step S202: acquire image data of the face of the target object.
The image data may be a single frame image or multiple frame images. In general, when a shooting instruction issued by the user is received (for example, pressing a shutter button, or issuing a voice or gesture command), shooting or recording of image data can start; the face contained in that image data is regarded as the face of the target object.
Step S204: detect facial feature points of the target object from the image data.
The facial feature points detected from the image data may include the position of each feature point and its type. The positions can be marked in the image data with identifiers, each identifier being associated with the type of its feature point. The types may include eyebrow contour point, eye contour point, nose contour point, upper-lip contour point, lower-lip contour point, chin contour point, and so on; the facial feature points may of course also include points of other types. In general, each feature point's position corresponds to its type.
Fig. 3 shows an example of facial feature points; the dashed box is the face detection result. Each feature point is marked as a dot in the image data of the target object's face, with points of each type at the corresponding locations; for example, eyebrow contour points lie near the target object's eyebrows and eye contour points lie near the eyes. The type of each feature point is saved in association with its dot, so the user can click any feature point in the detection result and its type is displayed at a designated position for reference.
If a frame of image data contains several target objects, the image data can first be segmented into several local images, each containing one target object, and facial feature points then detected for each target object separately. The detection of facial feature points can be realised by a pre-trained feature point detection model; this model may be implemented with a neural network, or with other artificial intelligence or machine learning approaches. The feature point detection model can be trained on a large number of image samples annotated with facial feature points.
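The segment-then-detect flow for multi-face frames might look like the following sketch, where `stub_model` merely stands in for the trained feature point detection model (whose architecture the patent leaves open); the face boxes and the single-landmark stub are purely illustrative.

```python
import numpy as np

def detect_landmarks_per_face(image, face_boxes, landmark_model):
    """For an image containing several target objects, crop a local
    image per detected face box, run the landmark model on each crop,
    and map the points back to full-image coordinates."""
    results = []
    for (x, y, w, h) in face_boxes:
        crop = image[y:y + h, x:x + w]
        points = landmark_model(crop)           # (N, 2) points in crop coords
        results.append(points + np.array([x, y]))
    return results

def stub_model(crop):
    """Placeholder detector returning one 'landmark' at the crop centre."""
    h, w = crop.shape[:2]
    return np.array([[w // 2, h // 2]])

img = np.zeros((100, 100, 3))
faces = [(10, 20, 40, 40), (60, 60, 30, 30)]  # hypothetical detection boxes
landmarks = detect_landmarks_per_face(img, faces, stub_model)
```

In a real pipeline the stub would be replaced by the trained model, and each result list entry would hold that face's full set of typed feature points.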
Step S206: determine, from the facial feature points, the region of the eye to be retouched.
In general, the eye region of the target object includes several parts such as the eyebrows, pupils, upper eyelids, eye bags and eyelashes; the region to be retouched for each part can be determined from the facial feature points located at the corresponding positions. Taking the eyebrows as an example, the eyebrow contour points among the facial feature points can be used to determine the eyebrow region to be retouched: a curve is fitted through the eyebrow contour points to form a contour curve, and the area the curve encloses is taken as the eyebrow region to be retouched.
The region of the eye to be retouched may cover all the parts of the eye listed above, or only some of them, for example only the pupil and upper eyelid. The exact extent of the region is not limited here and can be set according to actual needs; it can of course also be determined by a setting command from the user.
Step S208: retouch the region according to preset retouching parameters to obtain processed image data.
The retouching may specifically be adjusting the skin tone, adjusting eyebrow shape or eyebrow colour, rendering eye shadow or eyeliner, adding eyelashes, applying a cosmetic pupil effect, and so on. Different regions usually correspond to relatively fixed retouching processes: retouching the upper eyelid generally involves rendering eye shadow and eyeliner, while retouching the eyeball generally involves adding a cosmetic pupil effect. Different retouching processes also require different parameters: rendering eye shadow generally requires the colours of the various parts of the upper eyelid, and these colouring parameters can be saved in the form of an image; adjusting the skin tone generally requires the specific parameters for the adjustment and the degree of change of the skin tone.
Thus, in practice, the preset retouching parameters can contain, for each part of the eye, the parameters its retouching process requires; once a region to be retouched is obtained, the parameters needed to retouch the corresponding eye part are extracted from the preset retouching parameters.
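This per-region parameter lookup could be organised as a simple mapping, sketched below; every key, file name and weight is a hypothetical placeholder, since the patent does not prescribe a storage format for the retouching parameters.

```python
# hypothetical mapping from eye sub-region to the parameters its
# retouching process needs; all names and values are illustrative
MODIFICATION_PARAMS = {
    "upper_eyelid": {"op": "eye_shadow", "color_map": "shadow.png", "weight": 0.3},
    "eye_pupil":    {"op": "cosmetic_pupil", "color_map": "pupil.png", "weight": 0.5},
    "eye_bag":      {"op": "smooth", "weights": (0.4, 0.15, 0.15, 0.15, 0.15)},
}

def params_for_region(region_name):
    """Extract the retouching parameters required for one region."""
    return MODIFICATION_PARAMS[region_name]

upper = params_for_region("upper_eyelid")
```

Keeping one entry per sub-region keeps the fixed pairing between region and retouching process explicit and easy to extend.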
In the method for retouching the eye region provided by this embodiment, after the image data of the face of the target object is acquired, facial feature points of the target object are detected from the image data, the region of the eye to be retouched is determined from those feature points, and the region is then retouched according to preset retouching parameters to obtain processed image data. This approach retouches the eye region of the target object automatically, is convenient to operate, produces a good retouching effect, and thereby improves the user experience.
Embodiment three:
The previous embodiment noted that facial feature points of the target object can be detected from the image data with a feature point detection model; this embodiment therefore first describes how that model is trained. Specifically, the feature point detection model can be trained as follows:
Step 11: obtain a training sample set. The training sample set contains a set number of face images, each carrying annotations of facial feature points; an annotation includes the position and the type of each feature point.
The number of face images in the training sample set can be preset, for example 100,000. Understandably, the more face images, the better the performance and capability of the trained feature point detection model and the more accurate its detection. The face images can be obtained from a general face image database, or detected in a video stream by face detection. The facial feature points can be annotated on the face images manually by engineers, or annotated automatically by annotation software and then adjusted by engineers. The more accurate the annotation of the facial feature points, the better the detection accuracy of the resulting model. Facial feature points are also called face key points.
To annotate a face image manually, feature points can be added to the image in the form of identifiers such as dots or asterisks, and the type of each point entered through its input box, for example eyebrow contour point, eye contour point or chin contour point. The types can be refined further: eyebrow contour points can be subdivided into inner-brow contour points, brow-peak contour points and so on, and eye contour points into iris contour points, sclera contour points and so on.
In this embodiment, since mainly the eye region of the target object is detected, the image data in the training sample set may be annotated only with eye contour points and with the subdivided eyebrow and eye contour points described above.
Step 12: divide the training sample set into a training subset and a validation subset according to a first division ratio.
The first division ratio can be a single percentage, for example 30%, in which case 30% of the face images in the training sample set and their annotations form the training subset and another 30% of the face images and their annotations form the validation subset. The first division ratio can also be a pair of percentages, for example 30% and 40%, in which case 30% of the face images and their annotations form the training subset and 40% of the face images and their annotations form the validation subset.
As can be seen from the above, the percentages of the training sample set used for the training and validation subsets may be the same or different, and the face images in the two subsets may be entirely different or may partly overlap. For example, if the training and validation subsets are both drawn from the sample set at random, they may contain identical face images; whereas if the training subset is drawn first and the validation subset is then drawn from the remaining face images in the sample set, the face images in the two subsets are entirely different.
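The two division strategies (possibly overlapping random draws versus disjoint subsets) can be illustrated as follows; the 30%/40% ratios follow the example in the text, while the function name, seed and sample set are assumptions for the demonstration.

```python
import random

def split_dataset(samples, train_frac=0.3, val_frac=0.4, disjoint=True, seed=0):
    """Divide a training sample set into a training subset and a
    validation subset by the given proportions. With disjoint=True the
    two subsets share no sample; otherwise the validation subset is
    drawn independently and may overlap the training subset."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_train = int(len(shuffled) * train_frac)
    n_val = int(len(shuffled) * val_frac)
    train = shuffled[:n_train]
    if disjoint:
        val = shuffled[n_train:n_train + n_val]   # drawn from the remainder
    else:
        val = rng.sample(samples, n_val)          # independent draw, may overlap
    return train, val

data = list(range(100))   # stand-ins for annotated face images
train, val = split_dataset(data, train_frac=0.3, val_frac=0.4)
```

With `disjoint=True` the split matches the second strategy in the text: the validation subset comes entirely from images the training subset did not take.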
Step 13: build an initial neural network model and set initial training parameters.
In general, the training parameters of a neural network model include the network nodes, the initial weights, the minimum training rate, momentum parameters, the allowable error, the number of iterations, and so on.
Step 14: train the neural network model with the training subset and the training parameters, then validate the trained neural network model with the validation subset.
In practice, the face images and corresponding annotations in the training and validation subsets can each be divided into several groups. A group of face images and annotations from the training subset is first fed into the neural network model for training; after training, a group of face images from the validation subset is fed into the trained model for feature point detection, the detection result is compared with that group's annotations, and the detection accuracy of the current model is obtained. This accuracy is the validation result.
Step 15: if the validation result does not meet a preset accuracy threshold, adjust the training parameters according to the validation result.

To improve the detection accuracy of the neural network model, the validation result can be analyzed to find the cause of the low detection accuracy and the training parameters that need adjustment, so as to optimize the model and its training procedure.

Step 16: continue training the neural network model with the training subset and the adjusted training parameters until the validation result of the model meets the accuracy threshold, yielding the feature point detection model.
As the above steps show, training and validation of the neural network model proceed in alternation: each training pass uses one group of face images and corresponding annotations from the training subset, and each validation pass uses one group of face images and corresponding annotations from the validation subset. Training and validation are repeated until the validation result of the model meets the accuracy threshold, at which point the feature point detection model is obtained.

If every group of face images and annotations in the training subset has been used but the validation result still fails to meet the accuracy threshold, the groups in the training subset can be reused, or a new training subset can be partitioned from the training sample set to continue training.
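The alternating train/validate loop of steps 14–16, including the group reuse just described, can be sketched as follows. This is a minimal sketch only: `train_step` and `evaluate` are stand-ins for the real neural-network calls, which the patent leaves abstract.

```python
from itertools import cycle

def cross_train(train_groups, val_groups, train_step, evaluate,
                threshold, max_rounds=100):
    """Alternate training and validation group by group (steps 14-16).

    Groups are cycled, so training groups are reused once exhausted,
    as the text above permits."""
    acc = 0
    train_iter, val_iter = cycle(train_groups), cycle(val_groups)
    for _ in range(max_rounds):
        train_step(next(train_iter))    # one group of images + annotations
        acc = evaluate(next(val_iter))  # validation result for this round
        if acc >= threshold:            # accuracy threshold met: model done
            break
    return acc

# Toy stand-ins for the real model calls: each "training" pass simply
# bumps a fake accuracy counter.
state = {"acc": 0}
def fake_train(group): state["acc"] += 1
def fake_eval(group): return state["acc"]

result = cross_train([1, 2], [3], fake_train, fake_eval, threshold=3)
```

In a real implementation the loop body would also apply step 15, adjusting the training parameters whenever the validation result stalls below the threshold.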
Furthermore, a test subset with a second division proportion can also be partitioned from the training sample set. To guarantee the accuracy of the test results, the face images in the test subset are usually entirely different from those in the training and validation subsets, i.e., there is no overlap. The test subset can be used to test the trained feature point detection model comprehensively, measuring the performance and capability of the model, and an evaluation report of the model can be generated. In actual implementation, multiple feature point detection models can be trained, each with different performance and capability; the model that best matches the actual feature point detection requirements, such as detection accuracy and detection speed, can then be selected.

In this embodiment, the feature point detection model trained in the above way has high feature point detection accuracy, so the feature points of the eye of the target object can be detected accurately from the image data and the eye can then be retouched, which helps improve the user experience.
Embodiment four:

This embodiment of the invention provides another method of retouching the eye, implemented on the basis of the above embodiments. This embodiment focuses on the process of determining the region of the target object's eye to be retouched, and the process of retouching that region. As shown in Figure 4, the method includes the following steps:
Step S402: when the image capture device starts, capture a preview frame image through the image capture device, and perform face detection on the preview frame image with a preset face detection model.

The image capture device may specifically be a camera, which can be an independent device communicatively connected with a remote processing device, or can be integrated into a device such as a mobile phone or a tablet computer. After the user starts the image capture device, it can capture preview frame images.

Step S404: judge whether a face is present in the preview frame image; if so, execute step S406; if not, execute step S402.

The above face detection model can be trained in advance with a neural network. Specifically, the preview frame image can be input into the face detection model, which identifies whether a face is present in the frame. If a face is present, a target object exists in the preview frame image, and the model outputs the specific location of the face, which can be marked by a face detection box. The image data inside the face detection box is the image data of the face, and generally comprises the complete face image of the target object.
Step S406: obtain the image data of the face of the target object.

In actual implementation, the process by which the processing device obtains the image data of the target object's face can be triggered by the user, for example by pressing the shutter button. In some cases, when a face is detected in the preview video, the processing device can automatically capture the face image data of the corresponding target object through the image capture device; the face image data can of course also be captured by other devices. In another mode, the processing device may execute step S402 only after receiving a trigger instruction from the user, i.e., capture the preview frame image through the image capture device and then execute the subsequent flow.
Step S408: detect the facial feature points of the target object from the image data through the feature point detection model trained in advance.

Step S410: determine the region of the eye to be retouched according to the position of each of the facial feature points.

For example, the feature point type of each facial feature point can be checked one by one; if the type contains a keyword such as "brow", "eye", or "lash" (for example, brow-head contour point, brow-tail contour point, eye-corner contour point), the feature point can be extracted. The feature points whose types contain such keywords form a feature point set. The keywords can be configured according to the specific eye parts to be retouched; for example, if only the eyebrow is to be retouched, the keywords can include only "brow". The region to be retouched includes one or more of the pupil region, upper eyelid region, lower eyelid region, eye bag region, left eye-corner region, right eye-corner region, left eyebrow region, right eyebrow region, left eyelash region, right eyelash region, and eyeliner region.
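The keyword filter described above can be sketched as follows. The landmark names and keyword list here are illustrative assumptions; the patent does not fix an exact naming scheme.

```python
# Hypothetical landmark list: (type name, x, y) tuples. Names are made up
# for illustration -- only the keyword-matching idea comes from the text.
LANDMARKS = [
    ("brow_head", 120, 80), ("brow_tail", 180, 78),
    ("eye_inner_corner", 130, 100), ("eye_outer_corner", 175, 102),
    ("nose_tip", 150, 140), ("mouth_left", 135, 180),
]

# Configurable per the parts to retouch, e.g. ("brow",) for eyebrows only.
KEYWORDS = ("brow", "eye", "lash")

def select_eye_points(landmarks, keywords=KEYWORDS):
    """Keep only landmarks whose type name contains an eye-related keyword."""
    return [p for p in landmarks if any(k in p[0] for k in keywords)]

eye_points = select_eye_points(LANDMARKS)
```

With the list above, the nose and mouth points are dropped and the four brow/eye points form the feature point set.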
Referring again to Figure 3, each feature point in the eye feature point set generally lies on the edge of an eye part. For example, the feature points of the eyebrow generally include the brow-head feature point, the upper eyebrow contour points, the lower eyebrow contour points, and the brow-tail feature point; based on these points, the region corresponding to the eyebrow can be determined, and that region is the eyebrow region to be retouched. As another example, the feature points of the eye generally include the inner eye-corner feature point, the upper eyelid contour points, the eye-tail feature point, and the lower eye-bag contour points; based on these, the region corresponding to the eye can be determined. Based on the inner eye-corner, upper eyelid contour, and eye-tail feature points together with the brow-head, lower eyebrow contour, and brow-tail feature points, the upper eyelid region to be retouched can be determined.
In actual implementation, step S410 above can be realized through the following steps 02 to 04:

Step 02: divide the facial feature points into multiple groups according to the position of each feature point.

For example, the brow-head feature point, upper eyebrow contour points, lower eyebrow contour points, and brow-tail feature point above are divided into one group, which is used to determine the eyebrow region to be retouched; the inner eye-corner feature point, upper eyelid contour points, and eye-tail feature point together with the brow-head, lower eyebrow contour, and brow-tail feature points are divided into another group, which is used to determine the upper eyelid region to be retouched. Since the eye parts adjoin and even overlap one another, specific points may be shared between groups, though they need not be.

Step 04: for every group of feature points, perform curve fitting on the current group to obtain the contour line corresponding to the group, and determine the area enclosed by the contour line as the region to be retouched corresponding to the group.
In one mode, a curve represented by a suitable exponential or logarithmic function can be chosen and fitted to the feature points in each group; by continually adjusting the parameters of the exponential or logarithmic function, the curve can approximate the trend between the feature points of the group as closely as possible, yielding a smoother contour line. In another mode, adjacent feature points in each group can simply be connected pairwise to obtain the corresponding contour line.
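Once a group's feature points have been joined into a closed contour, deciding which pixels fall inside the enclosed area (and therefore belong to the region to be retouched) is a point-in-polygon test. A sketch using ray casting, assuming the simpler connect-adjacent-points polyline rather than a fitted curve:

```python
def point_in_polygon(x, y, polygon):
    """Ray-casting test: is (x, y) inside the closed polyline `polygon`?

    `polygon` is a list of (x, y) vertices, e.g. a group's feature points
    connected pairwise; the last vertex closes back to the first."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):                 # edge crosses the scanline
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:                      # crossing is to the right
                inside = not inside
    return inside

# A square "contour" standing in for a fitted eyebrow/eyelid outline.
contour = [(0, 0), (4, 0), (4, 4), (0, 4)]
```

Every pixel for which this returns `True` would be treated as part of the group's region to be retouched.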
After the region to be retouched is determined, retouching of the region begins. With reference to actual eye make-up steps, a base is usually applied to the eye first, to adjust the skin tone of the eye area and increase its smoothness. Therefore, in the eye retouching method of this embodiment, a skin-smoothing operation also needs to be performed on the region to be retouched first, realized specifically through the following steps.

Step S412: obtain the images of the R channel, G channel, and B channel of the region to be retouched.

By performing channel decomposition on the local image corresponding to the region to be retouched, the R channel, G channel, and B channel images of the region can be obtained.
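Channel decomposition and the later channel merge of step S416 are inverse operations. A minimal sketch on a list-of-lists image, where each pixel is an (r, g, b) tuple:

```python
def split_channels(img):
    """Decompose an RGB image (rows of (r, g, b) tuples) into R, G, B planes."""
    r = [[p[0] for p in row] for row in img]
    g = [[p[1] for p in row] for row in img]
    b = [[p[2] for p in row] for row in img]
    return r, g, b

def merge_channels(r, g, b):
    """Inverse of split_channels: recombine the three planes (step S416)."""
    return [[(rv, gv, bv) for rv, gv, bv in zip(rr, gr, br)]
            for rr, gr, br in zip(r, g, b)]

img = [[(10, 20, 30), (40, 50, 60)]]
r, g, b = split_channels(img)
```

In practice a library routine (e.g. an image-processing package's split/merge) would be used; this only shows the data movement the steps describe.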
Step S414: perform the following processing on the images of the R channel, G channel, and B channel respectively: adjust the pixel value of each pixel in the image of the current channel according to the skin-smoothing parameters among the preset modification parameters, obtaining the adjusted image of the current channel.

To achieve a skin-smoothing effect, the variance of pixel values between pixels in each channel image usually needs to be reduced, for example by adjusting the pixel values through filtering, uniform blurring, feathering, and the like. Accordingly, the skin-smoothing parameters above usually include the specific formula for adjusting pixel values and the parameters of that formula. In addition, the pixel values of all pixels in the image can be adjusted globally, such as brightening or darkening the whole image, to achieve the purpose of adjusting the skin tone.

In another mode, the pixel value of each pixel can be adjusted according to its neighboring pixels, so as to reduce the variance between the pixel and its surroundings. Based on this, step S414 above can also be realized through the following steps 12 to 14:
Step 12: perform the following processing on each pixel in the image of the current channel: obtain the specified pixel positions associated with the current pixel.

The specified pixel positions can be preset. In one mode, the specified pixel positions comprise the pixel positions adjacent to the current pixel above it, below it, to its left, and to its right; that is, the four pixel positions adjacent to the current pixel. In another mode, the specified pixel positions comprise the pixel positions adjacent to the current pixel above it, below it, to its left, to its right, and at its upper-left, lower-left, upper-right, and lower-right; that is, the eight pixel positions adjacent to the current pixel.

An edge pixel may not have four or eight adjacent pixel positions, so only a subset of the specified pixel positions can be obtained for adjusting its pixel value; alternatively, the specified pixel positions of edge pixels can be left unobtained and the pixel values of edge pixels left unadjusted.

In addition, the specified pixel positions need not be adjacent to the current pixel; a specified number of pixel positions can be obtained at a preset interval from the current pixel. For example, with an interval of two pixels, the pixel positions two pixels above, below, to the left of, and to the right of the current pixel are obtained.
Step 14: perform weighted averaging on the pixel values at the specified pixel positions, and take the result as the pixel value of the current pixel.

For example, let the position of the current pixel be (x, y). If weighted averaging is performed over the four adjacent specified pixel positions, the result is f(x, y) = (f(x-1, y) + f(x+1, y) + f(x, y-1) + f(x, y+1))/4, where f denotes the pixel value at a position. If weighted averaging is performed over the eight adjacent specified pixel positions, the result is f(x, y) = (f(x-1, y) + f(x+1, y) + f(x, y-1) + f(x, y+1) + f(x-1, y-1) + f(x+1, y+1) + f(x+1, y-1) + f(x-1, y+1))/8.
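The four-neighborhood formula above can be sketched as a per-channel pass. Edge pixels are left unadjusted, which is one of the edge-handling options the text allows; this is an illustrative sketch, not the patent's exact implementation.

```python
def smooth_channel(ch):
    """Skin-smoothing pass on one channel (list of rows of pixel values):
    replace each interior pixel by the mean of its four neighbours,
    f(x, y) = (f(x-1, y) + f(x+1, y) + f(x, y-1) + f(x, y+1)) / 4.
    Edge pixels are left unchanged."""
    h, w = len(ch), len(ch[0])
    out = [row[:] for row in ch]          # read from ch, write into out
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = (ch[y][x - 1] + ch[y][x + 1]
                         + ch[y - 1][x] + ch[y + 1][x]) / 4
    return out

# A bright outlier at the centre is pulled toward its neighbours.
channel = [[1, 1, 1],
           [1, 9, 1],
           [1, 1, 1]]
smoothed = smooth_channel(channel)
```

The eight-neighborhood variant only changes the sum to include the four diagonal neighbours and the divisor to 8. Running the same pass on the R, G, and B planes and merging them (step S416) gives the smoothed region.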
Step S416: merge the adjusted images of the R channel, G channel, and B channel to obtain the processed region to be retouched.

By performing channel merging on the adjusted images of the R channel, G channel, and B channel, the processed region to be retouched can be obtained.

After the skin-smoothing operation on the region to be retouched is completed, colouring operations such as eye shadow, pupil, eyelash, and eyeliner colouring usually also need to be performed. Different types of colouring operation generally require corresponding modification parameters; thus the preset modification parameters in this embodiment include one or more of eye shadow colouring parameters, pupil colouring parameters, eyelash colouring parameters, and eyeliner colouring parameters. Whichever colouring operation is used, it can be realized through the following steps:
Step S418: obtain the images of the R, G, and B channels of the region to be retouched, and the images of the R, G, and B channels of the modification parameters corresponding to the region.

The R, G, and B channel images of the region to be retouched and of the corresponding modification parameters can both be obtained by channel decomposition.

Step S420: perform the following processing on the R, G, and B channel images of the region to be retouched respectively: weight and fuse the image of the current channel with the image of the corresponding channel in the modification parameters according to preset weights, and take the fusion result as the image of the current channel.

In general, if a more obvious retouching effect is required, i.e., heavy eye make-up, the weight of the corresponding channel image in the modification parameters during the weighted fusion is larger; if a subtler effect is required, i.e., light eye make-up, that weight is smaller.
In the process of weighting and fusing the image of the current channel with the image of the corresponding channel in the modification parameters, the fusion can be performed on pairs of pixels at mutually corresponding positions in the two images, thereby realizing the weighted fusion of the whole images. Based on this, step S420 above can also be realized through the following steps 22 to 26:

Step 22: perform the following processing on each pixel in the image of the current channel of the region to be retouched: in the image of the corresponding channel of the modification parameters, obtain the first pixel whose position corresponds to that of the current pixel in the image of the current channel of the region to be retouched.

Considering that the shape of the region to be retouched differs at the same location across target objects, the channel images of the corresponding modification parameters can be stretched or compressed based on the shape of the current target object's region to be retouched, so that the image of the corresponding channel of the modification parameters has the same shape as the image of the current channel of the region to be retouched, with feature points in mutually corresponding positions.
Step 24: weight and fuse the pixel value of the current pixel with the pixel value of the first pixel according to the preset weights, and take the fusion result as the pixel value of the current pixel.

In this step, the weighted fusion of the two pixels can be realized by the following formula: g'(x, y) = g(x, y) * alpha1 + k(x, y) * alpha2, where (x, y) denotes the position of the pixel, g(x, y) is the pixel value of the current pixel, k(x, y) is the pixel value of the first pixel in the channel parameters, g'(x, y) is the pixel value of the current pixel after weighted fusion, and alpha1 and alpha2 are the corresponding weights; alpha1 and alpha2 usually sum to 1.
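Applied over a whole channel plane, the formula above becomes a per-pixel blend of the region channel with the (already shape-aligned) material channel. A minimal sketch:

```python
def blend_channel(region, material, alpha1=0.7, alpha2=0.3):
    """Per-pixel weighted fusion of one retouch-region channel with the
    matching modification-parameter (material) channel:
    g'(x, y) = g(x, y) * alpha1 + k(x, y) * alpha2, with alpha1 + alpha2 = 1.
    `region` and `material` are same-shaped lists of rows of pixel values."""
    assert abs(alpha1 + alpha2 - 1.0) < 1e-9
    return [[g * alpha1 + k * alpha2
             for g, k in zip(row_g, row_k)]
            for row_g, row_k in zip(region, material)]

fused = blend_channel([[100, 100]], [[200, 0]], alpha1=0.5, alpha2=0.5)
```

Raising alpha2 toward 1 gives the heavy make-up described above; lowering it gives light make-up. The same call is made once each for the R, G, and B channels, and the results are merged in step S422.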
Step 26: after every pixel in the image of the current channel of the region to be retouched has been processed, the image of the current channel is obtained.

Step S422: merge the fused images of the R, G, and B channels to obtain the processed region to be retouched.

In a further mode, the weighted fusion of the current channel image with the corresponding channel image of the modification parameters can also be realized by way of layers. The region to be retouched is set as the preview layer, which can be expressed as src1(r1, g1, b1, alpha1); the image corresponding to the modification parameters is set as the material layer, which can be expressed as src2(r2, g2, b2, alpha2). The final coloured layer is color = src1(r1, g1, b1, alpha1) * alpha1 + src2(r2, g2, b2, alpha2) * alpha2, where alpha1 + alpha2 = 1. In addition, during colouring it is usually also necessary to judge whether the current position in the layer lies within the region to be retouched: if so, the coloured value at the current position is the color above; if not, the coloured value at the current position is color = src1(r1, g1, b1, alpha1).
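The layer-blending mode, including the inside/outside-region test, can be sketched per channel as follows. Pixels here are single channel values rather than full (r, g, b, alpha) tuples; a complete implementation would repeat this for each channel, as the assumed simplification.

```python
def composite(preview, material, mask, alpha1=0.6, alpha2=0.4):
    """Layer-style colouring on one channel plane: inside the retouch
    region (mask[y][x] truthy) blend preview and material layers with
    color = src1 * alpha1 + src2 * alpha2 (alpha1 + alpha2 = 1);
    outside the region, keep the preview pixel unchanged."""
    h, w = len(preview), len(preview[0])
    return [[preview[y][x] * alpha1 + material[y][x] * alpha2
             if mask[y][x] else preview[y][x]
             for x in range(w)]
            for y in range(h)]

# One row: the left pixel is inside the retouch region, the right is not.
out = composite([[100, 100]], [[200, 200]], [[1, 0]], alpha1=0.5, alpha2=0.5)
```

The mask plays the role of the "is the current position inside the region to be retouched" judgement; it could be produced by the point-in-polygon test of step 04.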
Step S424: perform feathering or uniform blurring on the retouched region, obtaining the final region to be retouched.

Feathering can specifically be understood as blurring the junction between the retouched region and its adjacent image regions (which can also be understood as the edge of the region to be retouched), so that the edge of the region and the adjacent image regions transition naturally. In this embodiment in particular, since the region to be retouched has been coloured in the steps above, its edges would usually join the rest of the image data rather stiffly; the feathering above can weaken problems such as breaks and twisting distortion produced at the edge of the region.

The uniform blurring above can blur the region to be retouched as a whole. Since fairly obvious colour changes have occurred inside the region during the retouching above, uniform blurring can be applied inside the region to alleviate visually unnatural results.
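One common way to realize feathering, sketched here as an assumption rather than the patent's stated method, is to soften the binary region mask itself: box-blurring the mask turns the hard 0/1 boundary into a gradual blend weight, so the coloured region fades into its surroundings.

```python
def feather_mask(mask):
    """Soften a binary retouch mask with a 3x3 box blur, so blend weights
    fall off gradually at the region edge instead of cutting off hard.
    Border cells average over whatever neighbours exist."""
    h, w = len(mask), len(mask[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            vals = [mask[ny][nx]
                    for ny in range(max(0, y - 1), min(h, y + 2))
                    for nx in range(max(0, x - 1), min(w, x + 2))]
            out[y][x] = sum(vals) / len(vals)   # fractional blend weight
    return out

soft = feather_mask([[0, 0, 0],
                     [0, 1, 0],
                     [0, 0, 0]])
```

Using `soft[y][x]` in place of the hard 0/1 mask during compositing gives the natural edge transition that step S424 aims for; repeating the blur widens the feathered band.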
In the above method of retouching the eye, after the region of the target object's eye to be retouched is determined according to the facial feature points, the region is smoothed, coloured, and so on according to the preset modification parameters, yielding the processed image data. This approach can retouch the eye of the target object automatically; the operation is convenient and the retouching effect is good, thereby improving the user experience.
Embodiment five:
Based on the method of retouching the eye provided by the above embodiments, this embodiment provides a specific application scenario: taking a photo with an intelligent terminal and realizing the above method during the photographing process. The method includes the following steps:

Step 32: open the photographing mode with the eye-retouching function.

Step 34: load the default parameter table for eye retouching. The table generally contains the number of feature points detected by the feature point detection model, the feature point types, the eye shadow modification parameters (also called the eye shadow material template), the pupil modification parameters (also called the pupil material template), the eyelash modification parameters (also called the eyelash material template), the eyebrow modification parameters (also called the eyebrow material template), the eye retouching patterns (such as the colour scheme of each parameter, the colour rendering direction, and the feathering degree of the colour transitions), and so on. Before or after photographing, the user can of course also manually adjust the parameters in the table.
Step 36: open the image capture device (such as a mobile phone camera) and obtain a preview frame image.

Step 38: receive the user's photographing instruction.

Step 40: input the preview frame image into the face detection model, which performs face detection on the image to judge whether a face is present in the preview frame image.

Step 42: if a face is present, capture the image data and input it into the feature point detection model to detect the facial feature points of the target object's face in the image data; if no face is present, end the current flow after capturing the image data.

Step 44: according to the detected facial feature points, determine the eye make-up retouching (make-up) regions of the target object's face, such as the pupil area, upper eyelid area, lower eyelid area, eye bag area, left and right eye-corner areas, left and right eyebrow areas, left and right eyelash areas, and eyeliner area.
Step 46: perform a uniform skin-smoothing operation on the whole of the above eye make-up retouching regions.

Specifically, the R channel, G channel, and B channel of the eye make-up region image are processed separately: for each pixel, the pixel value of the current point is weighted with those of the neighboring pixels, and the result replaces the current pixel. For example, in four-neighborhood mode: f(x, y) = (f(x-1, y) + f(x+1, y) + f(x, y-1) + f(x, y+1))/4, where f denotes the pixel value at each position; or in eight-neighborhood mode: f(x, y) = (f(x-1, y) + f(x+1, y) + f(x, y-1) + f(x, y+1) + f(x-1, y-1) + f(x+1, y+1) + f(x+1, y-1) + f(x-1, y+1))/8.
Step 48: combined with the eye shadow material template above, perform eye shadow colouring on the smoothed eye make-up regions.

First, the layer blending function is opened, where the preview layer src1(r1, g1, b1, alpha1) is the layer corresponding to the eye make-up regions and the material layer src2(r2, g2, b2, alpha2) is the layer corresponding to the eye shadow material template. Judge whether the current point lies within the eye make-up regions: if so, the colouring result is color = src1(r1, g1, b1, alpha1) * alpha1 + src2(r2, g2, b2, alpha2) * alpha2, with alpha1 + alpha2 = 1; if not, the colouring result is color = src1(r1, g1, b1, alpha1).
Step 50: combined with the pupil material template above, perform pupil colouring on the eye make-up regions.

First, the layer blending function is opened, where the preview layer src1(r1, g1, b1, alpha1) is the layer corresponding to the eye make-up regions and the material layer src2(r2, g2, b2, alpha2) is the layer corresponding to the pupil material template. With the pupil centre A as the centre of a circle and the distance from A to a pupil edge point B as the radius, the circular domain of pupil retouching is determined. The distance d from the current point to the centre A is calculated: if d <= |AB|, the colouring result is color = src1(r1, g1, b1, alpha1) * alpha1 + src2(r2, g2, b2, alpha2) * alpha2, with alpha1 + alpha2 = 1; if d > |AB|, the colouring result is color = src1(r1, g1, b1, alpha1).
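The circular-domain test of step 50 can be sketched per channel value as follows; the pixel values are single channel scalars here for brevity, an assumed simplification of the full layer blend.

```python
import math

def colour_pupil_pixel(point, centre, edge, src_val, mat_val,
                       alpha1=0.5, alpha2=0.5):
    """Pupil colouring per step 50: the circle is centred at pupil
    centre A with radius |AB|, A-to-pupil-edge-point-B distance.
    Inside the circle (d <= |AB|) the preview and material values are
    blended; outside it the preview value is kept unchanged."""
    radius = math.dist(centre, edge)             # |AB|
    d = math.dist(point, centre)                 # distance to centre A
    if d <= radius:
        return src_val * alpha1 + mat_val * alpha2
    return src_val

# Centre A at the origin, edge point B at (3, 0), so the radius is 3.
inside = colour_pupil_pixel((1, 1), (0, 0), (3, 0), 100, 200)
outside = colour_pupil_pixel((5, 5), (0, 0), (3, 0), 100, 200)
```

The same predicate, evaluated once per pixel of the eye make-up region, restricts the pupil material template to the iris area only.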
Step 52: continue to perform eyebrow colouring, eyelash colouring, and eyeliner colouring on the eye make-up regions in the manner described above, obtaining the preliminary eye make-up result A.

Step 54: perform edge feathering and uniform blending on the preliminary eye make-up result A, so that the transitions between the processed regions appear natural, obtaining the final result B.

Step 56: deliver the final result B to the display terminal, completing this processing operation.

In the above approach, image data with retouched eyes is obtained as soon as the user takes a photo; manual retouching is no longer needed, the operation is more convenient, and the retouching effect is good. This satisfies the what-you-see-is-what-you-get shooting demands of today's intelligent terminal users, has broad applicability, gives a better user experience, increases the appeal of the intelligent terminal, and also helps improve the economic benefit of the manufacturer.
Embodiment six:
Corresponding to the above method embodiments, Figure 5 shows a structural schematic diagram of a device for retouching the eye. The device includes:

a data acquisition module 50, for obtaining the image data of the face of the target object;

a feature point detection module 51, for detecting the facial feature points of the target object from the image data;

a region determination module 52, for determining the region of the target object's eye to be retouched according to the facial feature points;

a retouching module 53, for retouching the region to be retouched according to the preset modification parameters, obtaining the processed image data.

With the device for retouching the eye provided by this embodiment of the invention, after the image data of the face of the target object is obtained, the facial feature points of the target object are detected from the image data, and the region of the target object's eye to be retouched is determined according to the facial feature points; the region is then retouched according to the preset modification parameters, yielding the processed image data. This approach can retouch the eye of the target object automatically; the operation is convenient and the retouching effect is good, thereby improving the user experience.
Further, the data acquisition module above is used to: capture a preview frame image through the image capture device; perform face detection on the preview frame image with the preset face detection model; and, if a face is detected in the preview frame image, obtain the image data of the face of the target object.

Further, the region determination module above is used to determine the region of the eye to be retouched according to the position of each of the facial feature points; the region to be retouched includes one or more of the pupil region, upper eyelid region, lower eyelid region, eye bag region, left eye-corner region, right eye-corner region, left eyebrow region, right eyebrow region, left eyelash region, right eyelash region, and eyeliner region.

Further, the region determination module above is used to: divide the facial feature points into multiple groups according to the position of each feature point; and, for every group of feature points, perform curve fitting on the current group to obtain the contour line corresponding to the group, and determine the area enclosed by the contour line as the region to be retouched corresponding to the group.
Further, the preset modification parameters above include skin-smoothing parameters; the retouching module above is used to: obtain the images of the R, G, and B channels of the region to be retouched; perform the following processing on the images of the R, G, and B channels respectively: adjust the pixel value of each pixel in the image of the current channel according to the skin-smoothing parameters, obtaining the adjusted image of the current channel; and merge the adjusted images of the R, G, and B channels to obtain the processed region to be retouched.

Further, the skin-smoothing parameters above include the specified pixel positions associated with the current pixel; the retouching module above is used to perform the following processing on each pixel in the image of the current channel: obtain the specified pixel positions associated with the current pixel, perform weighted averaging on the pixel values at the specified pixel positions, and take the result as the pixel value of the current pixel.

Further, the specified pixel positions above comprise the pixel positions adjacent to the current pixel above it, below it, to its left, and to its right; alternatively, the specified pixel positions comprise the pixel positions adjacent to the current pixel above it, below it, to its left, to its right, and at its upper-left, lower-left, upper-right, and lower-right.
Further, the above-mentioned preset modification parameters include one or more of an eye-shadow coloring parameter, a pupil coloring parameter, an eyelash coloring parameter and an eyeliner coloring parameter, and the above-mentioned modification processing module is configured to: obtain R-channel, G-channel and B-channel images of the region to be modified and R-channel, G-channel and B-channel images of the modification parameter corresponding to the region to be modified; perform the following processing on each of the R-channel, G-channel and B-channel images of the region to be modified: weight and fuse, according to a preset weight, the image of the current channel with the image of the corresponding channel of the modification parameter, using the fusion result as the image of the current channel; and merge the fused R-channel, G-channel and B-channel images to obtain the processed region to be modified.
Further, the above-mentioned modification processing module is configured to perform the following processing on each pixel in the image of the current channel of the region to be modified: obtain, in the image of the corresponding channel of the modification parameter, a first pixel whose position corresponds to that of the current pixel in the image of the current channel of the region to be modified; perform, according to the preset weight, weighted fusion of the pixel value of the current pixel and the pixel value of the first pixel, using the fusion result as the pixel value of the current pixel; and obtain the image of the current channel after every pixel in the image of the current channel of the region to be modified has been processed.
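The per-channel weighted fusion in the two paragraphs above amounts to a per-pixel linear blend. The sketch below assumes a single scalar weight shared by all channels; the patent allows the preset weight to differ per channel or per pixel, and the coloring-parameter image (eye shadow, pupil, lash or eyeliner) is illustrative.

```python
import numpy as np

def blend_makeup(region_rgb, makeup_rgb, weight=0.4):
    """Per-channel weighted fusion of the region to be modified with
    the coloring parameter's image: for each channel, each output
    pixel is (1 - weight) * region + weight * makeup, i.e. the pixel
    value of the current pixel fused with its 'first pixel'."""
    fused = []
    for c in range(3):                                # R, G, B in turn
        region_ch = region_rgb[..., c].astype(float)
        makeup_ch = makeup_rgb[..., c].astype(float)
        fused.append((1.0 - weight) * region_ch + weight * makeup_ch)
    return np.stack(fused, axis=-1)                   # merge fused channels

region = np.full((2, 2, 3), 100.0)   # toy skin patch
shadow = np.full((2, 2, 3), 200.0)   # toy eye-shadow color image
out = blend_makeup(region, shadow, weight=0.5)
```

With weight 0.5 every output pixel sits midway between the two inputs, which makes the fusion easy to sanity-check.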
Further, the above-mentioned modification processing module is configured to perform feathering or uniform blurring on the region to be modified after the modification processing, to obtain the final region to be modified.
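One common reading of the feathering step above is to soften the boundary of the region mask before compositing the processed region back into the image, so the edit has no hard edge. The sketch below uses a uniform (box) blur of the binary mask; the kernel choice and radius are assumptions, not the patent's specification.

```python
import numpy as np

def feather_edges(region, background, mask, radius=1):
    """Composite the processed region over the background using a
    feathered mask: box-blurring the binary mask softens its boundary,
    so the blend fades out over `radius` pixels at the region edge."""
    soft = mask.astype(float)
    k = 2 * radius + 1
    padded = np.pad(soft, radius, mode="edge")
    acc = np.zeros_like(soft)
    for dy in range(k):                 # accumulate the k x k box window
        for dx in range(k):
            acc += padded[dy:dy + soft.shape[0], dx:dx + soft.shape[1]]
    soft = acc / (k * k)                # blurred mask in [0, 1]
    return soft * region + (1.0 - soft) * background

mask = np.zeros((9, 9))
mask[3:6, 3:6] = 1.0                    # 3 x 3 region to be modified
out = feather_edges(np.full((9, 9), 255.0), np.zeros((9, 9)), mask)
```

Interior pixels keep the processed value, pixels far outside keep the background, and boundary pixels take intermediate values, which is the feathered transition the paragraph describes.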
The technical effects, implementation principles and results of the device provided by this embodiment are the same as those of the foregoing embodiments; for brevity, where the device embodiment is silent, reference may be made to the corresponding content of the foregoing method embodiments.
Embodiment seven:
An embodiment of the present invention provides a system for modifying an eye region. The system includes an image acquisition device, a processing device and a storage device. The image acquisition device is used to obtain a preview frame image or image data. A computer program is stored on the storage device, and when run by the processing device, the computer program executes the above method for modifying an eye region.
It will be apparent to those skilled in the art that, for convenience and brevity of description, the specific working process of the system described above may refer to the corresponding process in the foregoing method embodiments, and is not repeated here.
Further, this embodiment also provides a computer-readable storage medium storing a computer program which, when run by a processing device, performs the steps of the above method for modifying an eye region.
The computer program product of the method, device and system for modifying an eye region provided by the embodiments of the present invention includes a computer-readable storage medium storing program code, and the instructions included in the program code can be used to execute the method described in the foregoing method embodiments. For the specific implementation, reference may be made to the method embodiments, and it is not repeated here.
In addition, in the description of the embodiments of the present invention, unless otherwise explicitly specified and limited, the terms "installation", "connected to" and "connection" shall be understood broadly: for example, a connection may be fixed, detachable or integral; it may be mechanical or electrical; and it may be direct, indirect through an intermediary, or internal between two elements. For those of ordinary skill in the art, the specific meanings of the above terms in the present invention can be understood according to the specific circumstances.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence the part that contributes over the prior art, or the technical solution itself, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes instructions that cause a computer device (which may be a personal computer, a server, a network device or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc.
In the description of the present invention, it should be noted that terms indicating orientation or positional relationships, such as "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner" and "outer", are based on the orientations or positional relationships shown in the drawings; they are used only to facilitate and simplify the description of the present invention, and do not indicate or imply that the device or element referred to must have a particular orientation or be constructed and operated in a particular orientation, and therefore should not be construed as limiting the present invention. In addition, the terms "first", "second" and "third" are used for descriptive purposes only and should not be understood as indicating or implying relative importance.
Finally, it should be noted that the embodiments described above are only specific embodiments of the present invention, used to illustrate rather than limit its technical solution, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the technical field can, within the technical scope disclosed by the present invention, still modify the technical solutions described in the foregoing embodiments, readily conceive of variations, or make equivalent replacements of some of the technical features; and such modifications, variations or replacements do not cause the essence of the corresponding technical solution to depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and shall all be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (13)

1. A method for modifying an eye region, characterized in that the method comprises:
obtaining image data of a face of a target object;
detecting facial feature points of the target object from the image data;
determining a region to be modified of an eye of the target object according to the facial feature points; and
performing modification processing on the region to be modified according to preset modification parameters to obtain processed image data.
2. The method according to claim 1, characterized in that the step of obtaining image data of a face of a target object comprises:
acquiring a preview frame image by an image acquisition device;
performing face detection on the preview frame image by a preset face detection model; and
if a face is detected in the preview frame image, obtaining the image data of the face of the target object.
3. The method according to claim 1, characterized in that the step of determining the region to be modified of the eye of the target object according to the facial feature points comprises:
determining the region to be modified of the eye according to a position of each of the facial feature points, the region to be modified including one or more of a pupil region, an upper-eyelid region, a lower-eyelid region, an eye-bag region, a left eye-corner region, a right eye-corner region, a left eyebrow region, a right eyebrow region, a left eyelash region, a right eyelash region and an eyeliner region.
4. The method according to claim 3, characterized in that the step of determining the region to be modified of the eye according to the position of each of the facial feature points comprises:
dividing the facial feature points into multiple groups according to the position of each of the facial feature points; and
for each group of feature points: performing curve-fitting processing on the current group of feature points to obtain a contour line corresponding to the current group of feature points; and determining the region enclosed by the contour line as the region to be modified corresponding to the current group of feature points.
5. The method according to claim 1, characterized in that the preset modification parameters include a skin-smoothing parameter; and
the step of performing modification processing on the region to be modified according to the preset modification parameters to obtain the processed image data comprises:
obtaining images of an R channel, a G channel and a B channel of the region to be modified;
performing the following processing on each of the R-channel, G-channel and B-channel images: adjusting, according to the skin-smoothing parameter, a pixel value of each pixel in the image of the current channel to obtain an adjusted image of the current channel; and
merging the adjusted R-channel, G-channel and B-channel images to obtain the processed region to be modified.
6. The method according to claim 5, characterized in that the skin-smoothing parameter includes specified pixel positions associated with a current pixel; and
the step of adjusting, according to the skin-smoothing parameter, the pixel value of each pixel in the image of the current channel to obtain the adjusted image of the current channel comprises:
performing the following processing on each pixel in the image of the current channel:
obtaining the specified pixel positions associated with the current pixel; and
performing weighted averaging on the pixel values at the specified pixel positions, and using the result as the pixel value of the current pixel.
7. The method according to claim 6, characterized in that the specified pixel positions include the pixel positions adjacent to the current pixel above, below, to the left of and to the right of the current pixel;
alternatively, the specified pixel positions include the pixel positions adjacent to the current pixel above, below, to the left of, to the right of, to the upper left of, to the lower left of, to the upper right of and to the lower right of the current pixel.
8. The method according to claim 1, characterized in that the preset modification parameters include one or more of an eye-shadow coloring parameter, a pupil coloring parameter, an eyelash coloring parameter and an eyeliner coloring parameter; and
the step of performing modification processing on the region to be modified according to the preset modification parameters to obtain the processed image data comprises:
obtaining R-channel, G-channel and B-channel images of the region to be modified and R-channel, G-channel and B-channel images of the modification parameter corresponding to the region to be modified;
performing the following processing on each of the R-channel, G-channel and B-channel images of the region to be modified: performing, according to a preset weight, weighted fusion of the image of the current channel with the image of the corresponding channel of the modification parameter, and using the fusion result as the image of the current channel; and
merging the fused R-channel, G-channel and B-channel images to obtain the processed region to be modified.
9. The method according to claim 8, characterized in that the step of performing, according to the preset weight, weighted fusion of the image of the current channel with the image of the corresponding channel of the modification parameter, and using the fusion result as the image of the current channel, comprises:
performing the following processing on each pixel in the image of the current channel of the region to be modified: obtaining, in the image of the corresponding channel of the modification parameter, a first pixel whose position corresponds to that of the current pixel in the image of the current channel of the region to be modified; and performing, according to the preset weight, weighted fusion of the pixel value of the current pixel and the pixel value of the first pixel, and using the fusion result as the pixel value of the current pixel; and
obtaining the image of the current channel after every pixel in the image of the current channel of the region to be modified has been processed.
10. The method according to claim 1, characterized in that the step of performing modification processing on the region to be modified according to the preset modification parameters further comprises:
performing feathering or uniform blurring on the region to be modified after the modification processing to obtain the final region to be modified.
11. A device for modifying an eye region, characterized in that the device comprises:
a data acquisition module for obtaining image data of a face of a target object;
a feature point detection module for detecting facial feature points of the target object from the image data;
a region determining module for determining a region to be modified of an eye of the target object according to the facial feature points; and
a modification processing module for performing modification processing on the region to be modified according to preset modification parameters to obtain processed image data.
12. A system for modifying an eye region, characterized in that the system comprises an image acquisition device, a processing device and a storage device;
the image acquisition device is configured to obtain a preview frame image or image data; and
a computer program is stored on the storage device, the computer program, when run by the processing device, executing the method according to any one of claims 1 to 10.
13. A computer-readable storage medium storing a computer program, characterized in that the computer program, when run by a processing device, performs the steps of the method according to any one of claims 1 to 10.
CN201811491092.2A 2018-12-06 2018-12-06 Modify the methods, devices and systems of eye Pending CN109584153A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811491092.2A CN109584153A (en) 2018-12-06 2018-12-06 Modify the methods, devices and systems of eye

Publications (1)

Publication Number Publication Date
CN109584153A true CN109584153A (en) 2019-04-05

Family

ID=65927671

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811491092.2A Pending CN109584153A (en) 2018-12-06 2018-12-06 Modify the methods, devices and systems of eye

Country Status (1)

Country Link
CN (1) CN109584153A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110135315A (en) * 2019-05-07 2019-08-16 厦门欢乐逛科技股份有限公司 Eye pupil replacement method and device based on human eye key point
CN110136054A (en) * 2019-05-17 2019-08-16 北京字节跳动网络技术有限公司 Image processing method and device
CN110766631A (en) * 2019-10-21 2020-02-07 北京旷视科技有限公司 Face image modification method and device, electronic equipment and computer readable medium
CN111583102A (en) * 2020-05-14 2020-08-25 北京字节跳动网络技术有限公司 Face image processing method and device, electronic equipment and computer storage medium
CN112347979A (en) * 2020-11-24 2021-02-09 郑州阿帕斯科技有限公司 Eye line drawing method and device
CN113596314A (en) * 2020-04-30 2021-11-02 北京达佳互联信息技术有限公司 Image processing method and device and electronic equipment
CN113781359A (en) * 2021-09-27 2021-12-10 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103236066A (en) * 2013-05-10 2013-08-07 苏州华漫信息服务有限公司 Virtual trial make-up method based on human face feature analysis
CN105139438A (en) * 2014-09-19 2015-12-09 电子科技大学 Video face cartoon animation generation method
CN106846240A (en) * 2015-12-03 2017-06-13 阿里巴巴集团控股有限公司 A kind of method for adjusting fusion material, device and equipment
CN107578372A (en) * 2017-10-31 2018-01-12 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and electronic equipment
CN107578380A (en) * 2017-08-07 2018-01-12 北京金山安全软件有限公司 Image processing method and device, electronic equipment and storage medium
CN108038836A (en) * 2017-11-29 2018-05-15 维沃移动通信有限公司 A kind of image processing method, device and mobile terminal
CN105701773B (en) * 2014-11-28 2018-08-17 联芯科技有限公司 A kind of method and device of quick processing image


Cited By (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110135315A (en) * 2019-05-07 2019-08-16 厦门欢乐逛科技股份有限公司 Eye pupil replacement method and device based on human eye key point
CN110135315B (en) * 2019-05-07 2023-04-07 厦门稿定股份有限公司 Eye pupil replacement method and device based on key points of human eyes
CN110136054A (en) * 2019-05-17 2019-08-16 北京字节跳动网络技术有限公司 Image processing method and device
CN110136054B (en) * 2019-05-17 2024-01-09 北京字节跳动网络技术有限公司 Image processing method and device
CN110766631A (en) * 2019-10-21 2020-02-07 北京旷视科技有限公司 Face image modification method and device, electronic equipment and computer readable medium
WO2021218118A1 (en) * 2020-04-30 2021-11-04 北京达佳互联信息技术有限公司 Image processing method and apparatus
CN113596314A (en) * 2020-04-30 2021-11-02 北京达佳互联信息技术有限公司 Image processing method and device and electronic equipment
CN113596314B (en) * 2020-04-30 2022-11-11 北京达佳互联信息技术有限公司 Image processing method and device and electronic equipment
CN111583102B (en) * 2020-05-14 2023-05-16 抖音视界有限公司 Face image processing method and device, electronic equipment and computer storage medium
CN111583102A (en) * 2020-05-14 2020-08-25 北京字节跳动网络技术有限公司 Face image processing method and device, electronic equipment and computer storage medium
CN112347979A (en) * 2020-11-24 2021-02-09 郑州阿帕斯科技有限公司 Eye line drawing method and device
CN112347979B (en) * 2020-11-24 2024-03-15 郑州阿帕斯科技有限公司 Eye line drawing method and device
CN113781359A (en) * 2021-09-27 2021-12-10 北京市商汤科技开发有限公司 Image processing method and device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN109584153A (en) Modify the methods, devices and systems of eye
CN105184249B (en) Method and apparatus for face image processing
CN101779218B (en) Makeup simulation system, makeup simulation apparatus, makeup simulation method, and makeup simulation program
CN107123083B (en) Face edit methods
CN105426827B (en) Living body verification method, device and system
CN106056064B (en) A kind of face identification method and face identification device
CN108985172A (en) A kind of Eye-controlling focus method, apparatus, equipment and storage medium based on structure light
CN106469302A (en) A kind of face skin quality detection method based on artificial neural network
CN109784281A (en) Products Show method, apparatus and computer equipment based on face characteristic
CN105787878A (en) Beauty processing method and device
CN104463938A (en) Three-dimensional virtual make-up trial method and device
CN103914699A (en) Automatic lip gloss image enhancement method based on color space
CN105518708A (en) Method and equipment for verifying living human face, and computer program product
CN106897746A (en) Data classification model training method and device
CN109191508A (en) A kind of simulation beauty device, simulation lift face method and apparatus
CN102567734B (en) Specific value based retina thin blood vessel segmentation method
CN108053398A (en) A kind of melanoma automatic testing method of semi-supervised feature learning
CN110235169A (en) Evaluation system of making up and its method of operating
CN109684959A (en) The recognition methods of video gesture based on Face Detection and deep learning and device
CN109147023A (en) Three-dimensional special efficacy generation method, device and electronic equipment based on face
CN107705240A (en) Virtual examination cosmetic method, device and electronic equipment
CN108537126A (en) A kind of face image processing system and method
CN108024719A (en) Gloss Evaluation device, Gloss Evaluation method and the Gloss Evaluation program of skin
CN109685713A (en) Makeup analog control method, device, computer equipment and storage medium
CN113850169B (en) Face attribute migration method based on image segmentation and generation countermeasure network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination