CN110443765A - Image processing method, device and electronic equipment - Google Patents

Image processing method, device and electronic equipment

Info

Publication number
CN110443765A
CN110443765A (application CN201910710509.8A)
Authority
CN
China
Prior art keywords
subgraph
facial image
face key
key point
wrinkle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910710509.8A
Other languages
Chinese (zh)
Inventor
邱添羽
田先润
吕仰铭
李骈臻
张伟
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xiamen Meitu Technology Co Ltd
Original Assignee
Xiamen Meitu Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xiamen Meitu Technology Co., Ltd.
Priority to CN201910710509.8A
Publication of CN110443765A
Legal status: Pending (current)

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 — Image enhancement or restoration
    • G06T5/77 — Retouching; Inpainting; Scratch removal
    • G06T7/00 — Image analysis
    • G06T7/10 — Segmentation; Edge detection
    • G06T7/11 — Region-based segmentation
    • G06T2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T2207/20 — Special algorithmic details
    • G06T2207/20081 — Training; Learning
    • G06T2207/30 — Subject of image; Context of image processing
    • G06T2207/30196 — Human being; Person
    • G06T2207/30201 — Face

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The image processing method, apparatus and electronic device provided by the embodiments of the present application relate to the technical field of image processing. The image processing method includes: dividing a facial image to be processed into multiple sub-images; for each sub-image, detecting the sub-image with a wrinkle detection model trained in advance for that sub-image, to obtain the wrinkle information in each sub-image; and, for each sub-image, performing wrinkle removal processing on the sub-image according to the wrinkle information of the sub-image. The above method improves the unnatural results that arise when wrinkle removal is performed with the prior art.

Description

Image processing method, device and electronic equipment
Technical field
The present application relates to the technical field of image processing, and in particular to an image processing method, an image processing apparatus and an electronic device.
Background art
Smartphone cameras today offer a wide range of beautification options, and new beautification applications keep emerging, yet users increasingly prefer a natural look or only slight beautification. Lowering the beautification level can handle simple issues such as skin tone and acne marks, but it cannot remove wrinkles cleanly.
However, the inventors have found that in the prior art wrinkle detection is performed on the facial image directly with a single complete face model, so that wrinkle removal based on the detection result produces unnatural-looking results.
Summary of the invention
In view of this, the purpose of the present application is to provide an image processing method, apparatus and electronic device to address the above problem in the prior art.
To achieve the above object, the embodiments of the present application adopt the following technical solutions.
An image processing method, comprising:
dividing a facial image to be processed into multiple sub-images;
for each sub-image, detecting the sub-image with a wrinkle detection model trained in advance for that sub-image, to obtain the wrinkle information in each sub-image;
for each sub-image, performing wrinkle removal processing on the sub-image according to the wrinkle information of the sub-image.
In a preferred option of the embodiments of the present application, the step of dividing the facial image to be processed into multiple sub-images comprises:
forming a rectangular region according to multiple predetermined first target key points in the facial image to be processed, wherein the multiple first target key points are a subset of the face key points included in the facial image, determined based on the position information of the forehead;
moving the top edge and the bottom edge of the rectangular region by a predetermined first distance and a predetermined second distance respectively, forming a new rectangular region;
removing the eyebrow region determined based on the face key points from the new rectangular region to obtain a forehead sub-image.
In a preferred option of the embodiments of the present application, the step of dividing the facial image to be processed into multiple sub-images comprises:
forming an eye region according to multiple predetermined second target key points in the facial image to be processed, wherein the multiple second target key points are a subset of the face key points included in the facial image, determined based on the position information of the eyes;
removing the eyeball region determined based on the face key points from the eye region to obtain an eye sub-image.
In a preferred option of the embodiments of the present application, the step of dividing the facial image to be processed into multiple sub-images comprises:
forming a cheek region according to multiple predetermined third target key points in the facial image to be processed, wherein the multiple third target key points are a subset of the face key points included in the facial image, determined based on the position information of the cheeks;
removing the mouth and nose regions determined based on the face key points from the cheek region to obtain a cheek sub-image.
In a preferred option of the embodiments of the present application, before the step of dividing the facial image to be processed into multiple sub-images is executed, the image processing method further comprises:
performing face key point localization on the facial image to be processed to obtain current coordinate information of multiple face key points;
calculating the tilt angle of the face in the facial image according to the current coordinate information, and judging whether the tilt angle exceeds a predetermined angle;
if the tilt angle exceeds the predetermined angle, updating the current coordinate information so that the tilt angle calculated from the updated coordinate information is less than the predetermined angle.
In a preferred option of the embodiments of the present application, the step of calculating the tilt angle of the face in the facial image according to the current coordinate information comprises:
calculating left-eye coordinate information from the coordinate information of the face key points belonging to the left-eye region among the multiple face key points, and calculating right-eye coordinate information from the coordinate information of the face key points belonging to the right-eye region among the multiple face key points;
calculating the tilt angle of the face in the facial image according to the left-eye coordinate information and the right-eye coordinate information.
The embodiments of the present application also provide an image processing apparatus, comprising:
a segmentation module, configured to divide a facial image to be processed into multiple sub-images;
a detection module, configured to, for each sub-image, detect the sub-image with a wrinkle detection model trained in advance for that sub-image, to obtain the wrinkle information in each sub-image;
a wrinkle removal module, configured to, for each sub-image, perform wrinkle removal processing on the sub-image according to the wrinkle information of the sub-image.
In a preferred option of the embodiments of the present application, the segmentation module comprises:
a rectangular-region forming submodule, configured to form a rectangular region according to multiple predetermined first target key points in the facial image to be processed, wherein the multiple first target key points are a subset of the face key points included in the facial image, determined based on the position information of the forehead;
a moving submodule, configured to move the top edge and the bottom edge of the rectangular region by a predetermined first distance and a predetermined second distance respectively, forming a new rectangular region;
a region removal submodule, configured to remove the eyebrow region determined based on the face key points from the new rectangular region to obtain a forehead sub-image.
In a preferred option of the embodiments of the present application, the image processing apparatus further comprises:
a face key point localization module, configured to perform face key point localization on the facial image to be processed to obtain current coordinate information of multiple face key points;
a tilt angle calculation module, configured to calculate the tilt angle of the face in the facial image according to the current coordinate information and judge whether the tilt angle exceeds a predetermined angle;
a coordinate information update module, configured to, when the tilt angle exceeds the predetermined angle, update the current coordinate information so that the tilt angle calculated from the updated coordinate information is less than the predetermined angle.
With the image processing method, apparatus and electronic device provided by the embodiments of the present application, the sub-images into which the facial image to be processed is divided are detected by multiple wrinkle detection models trained in advance on those sub-images, and wrinkle removal processing is performed separately according to the detected wrinkle information of each sub-image. This avoids the prior-art practice of performing wrinkle detection on the facial image directly with a single complete face model, in which wrinkle removal based on the detection result looks unnatural, and thereby improves the unnatural results that arise when wrinkle removal is performed with the prior art.
Brief description of the drawings
To more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and therefore should not be regarded as limiting its scope. Those of ordinary skill in the art can obtain other related drawings from these drawings without creative effort.
Fig. 1 is a structural block diagram of the electronic device provided by the embodiments of the present application.
Fig. 2 is a flow diagram of the image processing method provided by the embodiments of the present application.
Fig. 3 is a flow diagram of step S110 provided by the embodiments of the present application.
Fig. 4 is a schematic diagram of the 118 face key points provided by the embodiments of the present application.
Fig. 5 is a schematic diagram of the forehead sub-image provided by the embodiments of the present application.
Fig. 6 is a schematic diagram of the left-eye sub-image provided by the embodiments of the present application.
Fig. 7 is a schematic diagram of the right-eye sub-image provided by the embodiments of the present application.
Fig. 8 is a schematic diagram of the left-cheek sub-image provided by the embodiments of the present application.
Fig. 9 is a schematic diagram of the right-cheek sub-image provided by the embodiments of the present application.
Fig. 10 is another flow diagram of the image processing method provided by the embodiments of the present application.
Fig. 11 is a structural block diagram of the image processing apparatus provided by the embodiments of the present application.
Reference numerals: 10 - electronic device; 12 - memory; 14 - processor; 100 - image processing apparatus; 110 - segmentation module; 120 - detection module; 130 - wrinkle removal module.
Detailed description of the embodiments
To make the purposes, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present application, not all of them. The components of the embodiments of the present application, as generally described and illustrated in the accompanying drawings, can be arranged and designed in a variety of different configurations.
Therefore, the following detailed description of the embodiments of the present application provided in the accompanying drawings is not intended to limit the claimed scope of the present application, but merely represents selected embodiments of the present application. Based on the embodiments in the present application, all other embodiments obtained by those of ordinary skill in the art without creative effort fall within the scope of protection of the present application.
It should also be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item has been defined in one drawing, it does not need to be further defined and explained in subsequent drawings.
As shown in Fig. 1, the embodiments of the present application provide an electronic device 10. The electronic device 10 may include a memory 12, a processor 14 and an image processing apparatus 100.
The specific type of the electronic device 10 is not restricted and can be configured according to practical application requirements. For example, it may include, but is not limited to, electronic devices such as a computer, a tablet computer or a mobile phone.
In detail, the memory 12 and the processor 14 are electrically connected, directly or indirectly, to realize data transmission or interaction; for example, they can be electrically connected to each other through one or more communication buses or signal lines. The image processing apparatus 100 includes at least one software functional module that can be stored in the memory 12 in the form of software or firmware. The processor 14 is configured to execute the executable computer programs stored in the memory 12, for example the software functional modules and computer programs included in the image processing apparatus 100, so as to implement the image processing method provided by the embodiments of the present application.
The memory 12 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), etc.
The processor 14 may be an integrated circuit chip with signal processing capability. The processor 14 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), a system on chip (SoC), etc.
It can be understood that the structure shown in Fig. 1 is only illustrative; the electronic device 10 may also include more or fewer components than shown in Fig. 1, or have a configuration different from that shown in Fig. 1.
With reference to Fig. 2, the embodiments of the present application also provide an image processing method that can be applied to the above electronic device 10. The method steps defined in the flow of the image processing method can be implemented by the electronic device 10. The detailed flow shown in Fig. 2 is described below.
Step S110: divide the facial image to be processed into multiple sub-images.
Optionally, the number of sub-images is not restricted and can be configured according to practical application requirements. For example, in this embodiment, the number of sub-images can be 5: a forehead sub-image, a left-eye sub-image, a right-eye sub-image, a left-cheek sub-image and a right-cheek sub-image.
The forehead sub-image is left-right symmetric, but wrinkles may run across its middle; if the forehead sub-image were split further, wrinkles in the middle would be cut off, which would affect the accuracy of wrinkle detection. In contrast, there are generally no wrinkles between the left-eye sub-image and the right-eye sub-image, or between the left-cheek sub-image and the right-cheek sub-image, so these regions can be split. The left-eye sub-image and the right-eye sub-image are mirror-symmetric, as are the left-cheek sub-image and the right-cheek sub-image.
At equal accuracy, the smaller the sub-image, the faster the wrinkle detection model can run. In this embodiment, by dividing the eye region into the left-eye sub-image and the right-eye sub-image, and the cheek region into the left-cheek sub-image and the right-cheek sub-image, each sub-image can be processed by its corresponding wrinkle detection model, which improves the efficiency of wrinkle detection.
Step S120: for each sub-image, detect the sub-image with a wrinkle detection model trained in advance for that sub-image, and obtain the wrinkle information in each sub-image.
Optionally, the number of wrinkle detection models is not restricted and can be set according to practical application requirements. For example, in one alternative example, five corresponding wrinkle detection models can be trained separately on the forehead sub-images, left-eye sub-images, right-eye sub-images, left-cheek sub-images and right-cheek sub-images.
In another alternative example, a forehead wrinkle detection model can be trained on the forehead sub-images, an eye wrinkle detection model on the left-eye and right-eye sub-images, and a cheek wrinkle detection model on the left-cheek and right-cheek sub-images, so as to reduce the amount of computation for model training.
Step S130: for each sub-image, perform wrinkle removal processing on the sub-image according to the wrinkle information of the sub-image.
Optionally, the specific manner of wrinkle removal processing is not restricted and can be set according to practical application requirements. For example, in one embodiment, wrinkle removal can be performed by skin smoothing. As another example, in this embodiment, wrinkle removal can be performed by filling in the image at the wrinkle positions according to the image surrounding the wrinkle positions.
With the above method, the sub-images are detected by multiple wrinkle detection models trained in advance on the sub-images into which the facial image to be processed is divided, and wrinkle removal processing is performed separately according to the wrinkle information detected in each sub-image. This avoids the prior-art practice of performing wrinkle detection on the facial image directly with a single complete face model, in which wrinkle removal based on the detection result looks unnatural, and thereby improves the unnatural results of wrinkle removal.
For step S110, the specific steps differ depending on the sub-image concerned. With reference to Fig. 3, when the sub-image is the forehead sub-image, step S110 may include step S111, step S112 and step S113.
Step S111: form a rectangular region according to multiple predetermined first target key points in the facial image to be processed.
The multiple first target key points are the subset of the face key points included in the facial image that is determined based on the position of the forehead. Optionally, the number of face key points is not restricted and can be configured according to practical application requirements. For example, in this embodiment, with reference to Fig. 4, the number of face key points can be 118.
Since the 118 face key points do not include key points in the forehead region, a rectangular region can first be determined from the key points of the eye and eyebrow regions and then shifted upwards to obtain the forehead sub-image. In this embodiment, the predetermined first target key points may include key points 0, 1, 2, 3, 29, 30, 31, 32, 33, 38, 46, 50, 76 and 84 (see Fig. 4).
Step S112: move the top edge and the bottom edge of the rectangular region by a predetermined first distance and a predetermined second distance respectively, forming a new rectangular region.
Optionally, the first distance and the second distance can be configured according to practical application requirements. For example, in one alternative example where the forehead region of the facial image is relatively large, the first distance and the second distance can be 1.2 times and 0.9 times the width of the rectangular region, respectively.
In another alternative example where the forehead region of the facial image is relatively small, the first distance and the second distance can each be 1 times the width of the rectangular region.
Step S113: remove the eyebrow region determined based on the face key points from the new rectangular region to obtain the forehead sub-image.
In this embodiment, the eyebrow region includes a left eyebrow region and a right eyebrow region. The left eyebrow region includes the region enclosed by key points 33, 34, 35, 36, 37, 38, 39, 40, 41 and 33 (see Fig. 4), and the right eyebrow region includes the region enclosed by key points 42, 43, 44, 45, 46, 47, 48, 49, 50 and 42 (see Fig. 4). With reference to Fig. 5, after the eyebrow region determined based on the face key points is removed from the new rectangular region, the new rectangular region is scaled to 128*256 to obtain the forehead sub-image.
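A minimal sketch of this forehead cropping (steps S111 to S113) is given below for illustration; the landmark array indexed as in Fig. 4, the upward direction of the edge shift and the (width, height) reading of 128*256 are assumptions rather than the application's exact implementation:

    import cv2
    import numpy as np

    # Key points used to form the initial rectangle in step S111 (see Fig. 4).
    FOREHEAD_BOX_IDX = [0, 1, 2, 3, 29, 30, 31, 32, 33, 38, 46, 50, 76, 84]
    # Eyebrow polygons removed in step S113 (key points 33-41 and 42-50).
    LEFT_BROW_IDX = list(range(33, 42))
    RIGHT_BROW_IDX = list(range(42, 51))

    def extract_forehead(image, landmarks, first_distance, second_distance):
        """Rectangle from eye/eyebrow key points, top and bottom edges shifted
        upwards by first_distance/second_distance, eyebrows masked out, and
        the result scaled to the 128x256 forehead model input."""
        pts = landmarks[FOREHEAD_BOX_IDX]
        x0, y0 = pts.min(axis=0).astype(int)
        x1, y1 = pts.max(axis=0).astype(int)
        # Step S112: shift both edges upwards to cover the forehead.
        new_y0 = max(int(y0 - first_distance), 0)
        new_y1 = max(int(y1 - second_distance), new_y0 + 1)
        crop = image[new_y0:new_y1, x0:x1].copy()
        # Step S113: mask out the left and right eyebrow regions.
        for idx in (LEFT_BROW_IDX, RIGHT_BROW_IDX):
            poly = (landmarks[idx] - [x0, new_y0]).astype(np.int32)
            cv2.fillPoly(crop, [poly], 0)
        return cv2.resize(crop, (256, 128))   # assumed width 256 x height 128

    # Example: first/second distance as 1.2x and 0.9x the rectangle width, per the text.
    # w = landmarks[FOREHEAD_BOX_IDX][:, 0].ptp()
    # forehead = extract_forehead(img, lm, 1.2 * w, 0.9 * w)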
With reference to Fig. 6 and Fig. 7, when the sub-image is an eye sub-image, step S110 may include:
first, forming an eye region according to multiple predetermined second target key points in the facial image to be processed; second, removing the eyeball region determined based on the face key points from the eye region to obtain the eye sub-image.
The multiple second target key points are the subset of the face key points included in the facial image that is determined based on the position of the eyes. In this embodiment, the eye region may include a left-eye region and a right-eye region. The predetermined second target key points may include key points 0, 1, 2, 3, 76, 72, 71, 38, 39, 40, 41 and 33 corresponding to the left-eye region (see Fig. 4), and key points 71, 72, 84, 29, 30, 31, 32, 46, 47, 48, 49 and 50 corresponding to the right-eye region (see Fig. 4).
In this embodiment, the eyeball region may include a left-eyeball region and a right-eyeball region. The left-eyeball region may include the region enclosed by key points 51, 52, 53, 54, 55, 56, 57 and 58 (see Fig. 4), and the right-eyeball region may include the region enclosed by key points 61, 62, 63, 64, 65, 66, 67 and 68 (see Fig. 4). After the left-eyeball region and the right-eyeball region determined based on the face key points are removed from the left-eye region and the right-eye region respectively, the left-eye region and the right-eye region are scaled to 128*128 to obtain the left-eye sub-image and the right-eye sub-image.
With reference to Fig. 8 and Fig. 9, when the sub-image is a cheek sub-image, step S110 may include:
first, forming a cheek region according to multiple predetermined third target key points in the facial image to be processed; second, removing the mouth and nose regions determined based on the face key points from the cheek region to obtain the cheek sub-image.
The multiple third target key points are the subset of the face key points included in the facial image that is determined based on the position of the cheeks.
For example, in one alternative example, the cheek region may include a left-cheek region and a right-cheek region. The predetermined third target key points may include key points 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 95, 80 and 76 corresponding to the left-cheek region (see Fig. 4), and key points 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 84, 80 and 95 corresponding to the right-cheek region (see Fig. 4).
In this embodiment, the mouth region may include the region enclosed by key points 86, 87, 88, 89, 90, 91, 92, 93, 94, 95, 96 and 97 (see Fig. 4). The nose region may include a left-nose region and a right-nose region; the left-nose region includes the region enclosed by key points 76, 77, 78 and 80 (see Fig. 4), and the right-nose region includes the region enclosed by key points 80, 82, 83 and 84 (see Fig. 4). After the mouth region and the nose region determined based on the face key points are removed from the left-cheek region and the right-cheek region respectively, the left-cheek region and the right-cheek region are scaled to 128*128 to obtain the left-cheek sub-image and the right-cheek sub-image.
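The eye and cheek sub-images can be illustrated with one generic helper: crop the bounding box of the region's key points, mask out the excluded polygons (eyeball, or mouth and nose), and scale to 128*128. The sketch below follows that reading; the index lists are taken from the text (see Fig. 4), while the helper name and the zero-filled masking are assumptions:

    import cv2
    import numpy as np

    def extract_region(image, landmarks, region_idx, exclude_polys, size=(128, 128)):
        """Crop the bounding box of region_idx, mask out each polygon in
        exclude_polys (lists of key-point indices), then scale to `size`."""
        pts = landmarks[region_idx]
        x0, y0 = pts.min(axis=0).astype(int)
        x1, y1 = pts.max(axis=0).astype(int)
        crop = image[y0:y1, x0:x1].copy()
        for poly_idx in exclude_polys:
            poly = (landmarks[poly_idx] - [x0, y0]).astype(np.int32)
            cv2.fillPoly(crop, [poly], 0)      # remove eyeball / mouth / nose
        return cv2.resize(crop, size)

    LEFT_EYE     = [0, 1, 2, 3, 76, 72, 71, 38, 39, 40, 41, 33]
    LEFT_EYEBALL = list(range(51, 59))         # key points 51-58
    LEFT_CHEEK   = [3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 95, 80, 76]
    MOUTH        = list(range(86, 98))         # key points 86-97
    LEFT_NOSE    = [76, 77, 78, 80]

    # left_eye   = extract_region(img, lm, LEFT_EYE,   [LEFT_EYEBALL])
    # left_cheek = extract_region(img, lm, LEFT_CHEEK, [MOUTH, LEFT_NOSE])
    # The right-eye and right-cheek sub-images use the mirrored index lists from the text.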
For step S120, the procedure for training the wrinkle detection models for the sub-images in advance is not restricted and can be configured according to practical application requirements. For example, in this embodiment, it may include:
first, obtaining multiple training pictures, in each of which the positions of facial wrinkles are marked manually; second, dividing each training picture into multiple sub-images and, for each sub-image, training the wrinkle detection model corresponding to that sub-image.
The number of training pictures is not restricted and can be configured according to practical application requirements. For example, in this embodiment, 2000 training pictures can be used to guarantee a sufficient amount of training data, so that the wrinkle detection models perform wrinkle detection with higher precision.
Moreover, in this embodiment, the wrinkle detection model is built on a U-Net convolutional neural network. To reduce the amount of computation so that the wrinkle detection model can run on devices with limited computing power, such as mobile phones, the size of the input image, the number of channels and the number of convolutional layers of the network can be reduced depending on the specific device.
For example, in this embodiment, when the electronic device 10 is a mobile phone, max pooling (which takes the maximum value in a local receptive field) can be replaced with a stride-2 convolution, and deconvolution can be replaced with linear up-sampling, so that the mobile phone can perform wrinkle detection in real time. Accordingly, the size of the forehead sub-image can be 128*256, and the sizes of the left-eye sub-image, the right-eye sub-image, the left-cheek sub-image and the right-cheek sub-image can be 128*128.
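For illustration, a lightweight U-Net-style network of this kind might be sketched as follows (assuming PyTorch); the channel counts and depth are illustrative only and are not specified by the application:

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def conv_block(cin, cout):
        return nn.Sequential(
            nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(cout, cout, 3, padding=1), nn.ReLU(inplace=True))

    class TinyWrinkleUNet(nn.Module):
        """U-Net-style model with stride-2 convolutions instead of max pooling
        and bilinear up-sampling instead of deconvolution, per the embodiment."""
        def __init__(self, channels=(8, 16, 32)):
            super().__init__()
            c1, c2, c3 = channels
            self.enc1 = conv_block(3, c1)                               # RGB sub-image input
            self.down1 = nn.Conv2d(c1, c2, 3, stride=2, padding=1)      # replaces max pooling
            self.enc2 = conv_block(c2, c2)
            self.down2 = nn.Conv2d(c2, c3, 3, stride=2, padding=1)
            self.bottleneck = conv_block(c3, c3)
            self.dec2 = conv_block(c3 + c2, c2)
            self.dec1 = conv_block(c2 + c1, c1)
            self.head = nn.Conv2d(c1, 1, 1)                             # per-pixel wrinkle score

        def forward(self, x):
            e1 = self.enc1(x)
            e2 = self.enc2(self.down1(e1))
            b = self.bottleneck(self.down2(e2))
            u2 = F.interpolate(b, scale_factor=2, mode="bilinear", align_corners=False)
            d2 = self.dec2(torch.cat([u2, e2], dim=1))
            u1 = F.interpolate(d2, scale_factor=2, mode="bilinear", align_corners=False)
            d1 = self.dec1(torch.cat([u1, e1], dim=1))
            return torch.sigmoid(self.head(d1))                         # wrinkle mask in [0, 1]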
For step S130, the way in which wrinkle removal processing is performed on each sub-image according to its wrinkle information is not restricted and can be configured according to practical application requirements. For example, in this embodiment, it may include:
First, the detected wrinkle information is marked as wrinkles to be repaired. Second, a gradient map is computed and divided into blocks of size BlockSize*BlockSize. Then, for each gradient block, whether or not it contains a wrinkle, the variance of the corresponding block in the original image is computed from the pixel values using the variance formula, and blocks marked as containing wrinkles have their variance set to a maximum value. Next, all blocks marked as wrinkles are traversed in a spiral order from the outside inwards, and each is replaced with the gradient block having the smallest variance among its eight surrounding blocks, with the corresponding variance replaced at the same time, yielding an adjusted gradient map. Finally, the Poisson equation is solved by means of the Fourier transform to reconstruct the repaired sub-image from the adjusted gradient map.
In detail, BlockSize can be determined from the size of the sub-image; the calculation formula can be expressed as BlockSize = max(3, min(25, min(height, width) * 0.01)) * 2 - 1, where max denotes the larger of two values, min denotes the smaller of two values, and height and width denote the height and width of the sub-image.
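The block-size formula and the per-block variance labelling can be sketched as follows for illustration; the gradient adjustment by spiral traversal and the Fourier-domain Poisson solve are omitted, and the helper names are assumptions:

    import numpy as np

    def block_size(height, width):
        # BlockSize = max(3, min(25, min(height, width) * 0.01)) * 2 - 1
        return int(max(3, min(25, min(height, width) * 0.01)) * 2 - 1)

    def block_variances(sub_image, wrinkle_mask):
        """Variance of each original-image block; blocks containing marked
        wrinkles are assigned a maximum value so that they are always replaced."""
        h, w = sub_image.shape[:2]
        bs = block_size(h, w)
        variances = {}
        for y in range(0, h - bs + 1, bs):
            for x in range(0, w - bs + 1, bs):
                block = sub_image[y:y + bs, x:x + bs]
                if wrinkle_mask[y:y + bs, x:x + bs].any():
                    variances[(y, x)] = np.finfo(np.float64).max
                else:
                    variances[(y, x)] = float(block.var())
        return bs, variances

    # bs, var_map = block_variances(forehead_crop, forehead_wrinkle_mask)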
In this embodiment, in order to obtain the coordinate information of the face key points in the facial image, before step S110 is executed, the image processing method may further include step S140: performing face key point localization on the facial image to be processed to obtain the current coordinate information of multiple face key points.
Specifically, face detection is first performed on the facial image to be processed, and face key point localization is performed only after a face has been detected in the image. If no face is detected, the facial image to be processed is not processed. When multiple faces are detected in the facial image to be processed, face key point localization is performed on each face, and the faces are processed in parallel to improve processing efficiency.
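A minimal sketch of this detect-then-localize flow with parallel handling of multiple faces is shown below; detect_faces and locate_keypoints are placeholders for whatever face detector and 118-point landmark model are actually used:

    from concurrent.futures import ThreadPoolExecutor

    def localize_keypoints(image, detect_faces, locate_keypoints):
        """Step S140 sketch: skip images without faces, otherwise localize the
        key points of every detected face in parallel."""
        faces = detect_faces(image)
        if not faces:
            return []                  # no face detected: leave the image unprocessed
        with ThreadPoolExecutor() as pool:
            # One set of 118 key points (Fig. 4) per detected face box.
            return list(pool.map(lambda box: locate_keypoints(image, box), faces))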
In this embodiment, the face in the facial image may be tilted, and the image processing method may further include step S150 to judge whether the tilt angle of the face exceeds a predetermined angle.
Step S150: calculate the tilt angle of the face in the facial image according to the current coordinate information, and judge whether the tilt angle exceeds the predetermined angle.
Depending on the specific way in which the tilt angle of the face is calculated, step S150 may include different steps.
For example, in this embodiment, step S150 may include:
first, calculating left-eye coordinate information from the coordinate information of the face key points belonging to the left-eye region among the multiple face key points, and calculating right-eye coordinate information from the coordinate information of the face key points belonging to the right-eye region; second, calculating the tilt angle of the face in the facial image from the left-eye coordinate information and the right-eye coordinate information.
In detail, the left-eye coordinate information refers to the average coordinates of the face key points in the left-eye region, and the right-eye coordinate information refers to the average coordinates of the face key points in the right-eye region. From the left-eye and right-eye coordinate information, the angle between the line connecting the two eyes and the horizontal direction can be calculated; this angle is the tilt angle of the face in the facial image.
Optionally, the specific definitions of the left-eye region and the right-eye region are not restricted and can be configured according to practical application requirements. For example, in this embodiment, the left-eye region may include the region formed by the four key points 51, 53, 55 and 57 (see Fig. 4), and the right-eye region may include the region formed by the four key points 61, 63, 65 and 67 (see Fig. 4). The left-eye coordinate information can be expressed as (x_left, y_left), with x_left = (x_51 + x_53 + x_55 + x_57) / 4 and y_left = (y_51 + y_53 + y_55 + y_57) / 4. The right-eye coordinate information can be expressed as (x_right, y_right), with x_right = (x_61 + x_63 + x_65 + x_67) / 4 and y_right = (y_61 + y_63 + y_65 + y_67) / 4. The tilt angle of the face in the facial image can then be calculated as θ = atan2(y_right - y_left, x_right - x_left) * 180 / π.
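The eye-centre averaging and the atan2 formula above translate directly into code; a sketch (key-point indices per Fig. 4, function name assumed) is:

    import math

    def face_tilt_degrees(landmarks):
        """Tilt angle of the face: angle between the eye-to-eye line and the
        horizontal, from the averaged key points 51/53/55/57 and 61/63/65/67."""
        left = landmarks[[51, 53, 55, 57]].mean(axis=0)    # (x_left, y_left)
        right = landmarks[[61, 63, 65, 67]].mean(axis=0)   # (x_right, y_right)
        return math.atan2(right[1] - left[1], right[0] - left[0]) * 180 / math.pi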
When the tilt angle does not exceed the predetermined angle, the face in the facial image can be considered not tilted. Depending on the required precision, the predetermined angle can take different values. For example, in one alternative example with high precision requirements, the predetermined angle can be 0.
In another alternative example, the predetermined angle can be 0.1°; since such a tilt angle is very small and does not affect the subsequent division, the facial image can be considered not tilted and can be divided according to the current coordinate information.
With reference to Fig. 10, when the tilt angle exceeds the predetermined angle, the image processing method may further include step S160, in which the facial image is rotated so as to obtain the coordinate information of the face key points when the tilt angle of the face in the facial image to be processed is less than the predetermined angle.
Step S160: update the current coordinate information so that the tilt angle calculated from the updated coordinate information is less than the predetermined angle.
In detail, if the tilt angle exceeds the predetermined angle, the facial image to be processed can be rotated clockwise by the angle θ about the left-eye coordinates (x_left, y_left) as the origin to obtain a new facial image; updated coordinate information is then obtained from the new facial image so that the division can be performed according to the updated coordinate information.
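For illustration, the rotation about the left-eye centre and the corresponding key-point update might be sketched with OpenCV as follows; whether a positive or negative angle corresponds to "clockwise" depends on the coordinate convention, so the sign of θ here is an assumption:

    import cv2
    import numpy as np

    def deskew_face(image, landmarks, theta_deg):
        """Rotate the image by theta about the left-eye centre and transform
        the key points with the same affine matrix."""
        x_left, y_left = landmarks[[51, 53, 55, 57]].mean(axis=0)
        h, w = image.shape[:2]
        # The sign of theta_deg may need to be negated depending on convention.
        M = cv2.getRotationMatrix2D((float(x_left), float(y_left)), theta_deg, 1.0)
        rotated = cv2.warpAffine(image, M, (w, h))
        # Apply the same transform to the key points to obtain the updated coordinates.
        ones = np.ones((len(landmarks), 1))
        new_landmarks = np.hstack([landmarks, ones]) @ M.T
        return rotated, new_landmarks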
When the tilt angle does not exceed the predetermined angle, the facial image to be processed is divided into multiple sub-images according to the current coordinate information.
Further, with reference to Fig. 11, the embodiments of the present application also provide an image processing apparatus 100 that can be applied to the above electronic device 10. The image processing apparatus 100 may include a segmentation module 110, a detection module 120 and a wrinkle removal module 130.
The segmentation module 110 is configured to divide the facial image to be processed into multiple sub-images. In this embodiment, the segmentation module 110 can be used to execute step S110 shown in Fig. 2; for details of the segmentation module 110, reference can be made to the description of step S110 above.
The detection module 120 is configured to, for each sub-image, detect the sub-image with the wrinkle detection model trained in advance for that sub-image and obtain the wrinkle information in each sub-image. In this embodiment, the detection module 120 can be used to execute step S120 shown in Fig. 2; for details of the detection module 120, reference can be made to the description of step S120 above.
The wrinkle removal module 130 is configured to, for each sub-image, perform wrinkle removal processing on the sub-image according to the wrinkle information of the sub-image. In this embodiment, the wrinkle removal module 130 can be used to execute step S130 shown in Fig. 2; for details of the wrinkle removal module 130, reference can be made to the description of step S130 above.
Further, the segmentation module 110 may include a rectangular-region forming submodule, a moving submodule and a region removal submodule.
The rectangular-region forming submodule is configured to form a rectangular region according to multiple predetermined first target key points in the facial image to be processed. In this embodiment, the rectangular-region forming submodule can be used to execute step S111 shown in Fig. 3; for details of the rectangular-region forming submodule, reference can be made to the description of step S111 above.
The moving submodule is configured to move the top edge and the bottom edge of the rectangular region by the predetermined first distance and second distance respectively, forming a new rectangular region. In this embodiment, the moving submodule can be used to execute step S112 shown in Fig. 3; for details of the moving submodule, reference can be made to the description of step S112 above.
The region removal submodule is configured to remove the eyebrow region determined based on the face key points from the new rectangular region to obtain the forehead sub-image. In this embodiment, the region removal submodule can be used to execute step S113 shown in Fig. 3; for details of the region removal submodule, reference can be made to the description of step S113 above.
Further, the image processing apparatus 100 may also include a face key point localization module, a tilt angle calculation module and a coordinate information update module.
The face key point localization module is configured to perform face key point localization on the facial image to be processed to obtain the current coordinate information of multiple face key points. In this embodiment, the face key point localization module can be used to execute step S140 shown in Fig. 10; for details of the face key point localization module, reference can be made to the description of step S140 above.
The tilt angle calculation module is configured to calculate the tilt angle of the face in the facial image according to the current coordinate information and judge whether the tilt angle exceeds the predetermined angle. In this embodiment, the tilt angle calculation module can be used to execute step S150 shown in Fig. 10; for details of the tilt angle calculation module, reference can be made to the description of step S150 above.
The coordinate information update module is configured to, when the tilt angle exceeds the predetermined angle, update the current coordinate information so that the tilt angle calculated from the updated coordinate information is less than the predetermined angle. In this embodiment, the coordinate information update module can be used to execute step S160 shown in Fig. 10; for details of the coordinate information update module, reference can be made to the description of step S160 above.
In conclusion image processing method provided by the embodiments of the present application, device and electronic equipment 10, by being directed in advance Multiple wrinkle detection models that multiple subgraphs training of facial image segmentation to be processed obtains detect subgraph, and The wrinkle information of the multiple subgraphs obtained according to detection carries out wrinkle Processing for removing respectively, to avoid direct root in the prior art Wrinkle detection is carried out to facial image according to a complete faceform, so that carrying out wrinkle dispelling processing based on testing result and depositing In the unnatural problem of effect, so as to improve asking when carrying out wrinkle dispelling processing using the prior art there are effect is unnatural Topic.
The foregoing is merely preferred embodiment of the present application, are not intended to limit this application, for the skill of this field For art personnel, various changes and changes are possible in this application.Within the spirit and principles of this application, made any to repair Change, equivalent replacement, improvement etc., should be included within the scope of protection of this application.

Claims (10)

1. An image processing method, characterized by comprising:
dividing a facial image to be processed into multiple sub-images;
for each sub-image, detecting the sub-image with a wrinkle detection model trained in advance for that sub-image, to obtain the wrinkle information in each sub-image;
for each sub-image, performing wrinkle removal processing on the sub-image according to the wrinkle information of the sub-image.
2. The image processing method according to claim 1, characterized in that the step of dividing the facial image to be processed into multiple sub-images comprises:
forming a rectangular region according to multiple predetermined first target key points in the facial image to be processed, wherein the multiple first target key points are a subset of the face key points included in the facial image, determined based on the position information of the forehead;
moving the top edge and the bottom edge of the rectangular region by a predetermined first distance and a predetermined second distance respectively, forming a new rectangular region;
removing the eyebrow region determined based on the face key points from the new rectangular region to obtain a forehead sub-image.
3. The image processing method according to claim 1, characterized in that the step of dividing the facial image to be processed into multiple sub-images comprises:
forming an eye region according to multiple predetermined second target key points in the facial image to be processed, wherein the multiple second target key points are a subset of the face key points included in the facial image, determined based on the position information of the eyes;
removing the eyeball region determined based on the face key points from the eye region to obtain an eye sub-image.
4. The image processing method according to claim 1, characterized in that the step of dividing the facial image to be processed into multiple sub-images comprises:
forming a cheek region according to multiple predetermined third target key points in the facial image to be processed, wherein the multiple third target key points are a subset of the face key points included in the facial image, determined based on the position information of the cheeks;
removing the mouth and nose regions determined based on the face key points from the cheek region to obtain a cheek sub-image.
5. The image processing method according to any one of claims 1 to 4, characterized in that, before the step of dividing the facial image to be processed into multiple sub-images is executed, the image processing method further comprises:
performing face key point localization on the facial image to be processed to obtain current coordinate information of multiple face key points;
calculating the tilt angle of the face in the facial image according to the current coordinate information, and judging whether the tilt angle exceeds a predetermined angle;
if the tilt angle exceeds the predetermined angle, updating the current coordinate information so that the tilt angle calculated from the updated coordinate information is less than the predetermined angle.
6. The image processing method according to claim 5, characterized in that the step of calculating the tilt angle of the face in the facial image according to the current coordinate information comprises:
calculating left-eye coordinate information from the coordinate information of the face key points belonging to the left-eye region among the multiple face key points, and calculating right-eye coordinate information from the coordinate information of the face key points belonging to the right-eye region among the multiple face key points;
calculating the tilt angle of the face in the facial image according to the left-eye coordinate information and the right-eye coordinate information.
7. An image processing apparatus, characterized by comprising:
a segmentation module, configured to divide a facial image to be processed into multiple sub-images;
a detection module, configured to, for each sub-image, detect the sub-image with a wrinkle detection model trained in advance for that sub-image, to obtain the wrinkle information in each sub-image;
a wrinkle removal module, configured to, for each sub-image, perform wrinkle removal processing on the sub-image according to the wrinkle information of the sub-image.
8. The image processing apparatus according to claim 7, characterized in that the segmentation module comprises:
a rectangular-region forming submodule, configured to form a rectangular region according to multiple predetermined first target key points in the facial image to be processed, wherein the multiple first target key points are a subset of the face key points included in the facial image, determined based on the position information of the forehead;
a moving submodule, configured to move the top edge and the bottom edge of the rectangular region by a predetermined first distance and a predetermined second distance respectively, forming a new rectangular region;
a region removal submodule, configured to remove the eyebrow region determined based on the face key points from the new rectangular region to obtain a forehead sub-image.
9. The image processing apparatus according to claim 7, characterized by further comprising:
a face key point localization module, configured to perform face key point localization on the facial image to be processed to obtain current coordinate information of multiple face key points;
a tilt angle calculation module, configured to calculate the tilt angle of the face in the facial image according to the current coordinate information and judge whether the tilt angle exceeds a predetermined angle;
a coordinate information update module, configured to, when the tilt angle exceeds the predetermined angle, update the current coordinate information so that the tilt angle calculated from the updated coordinate information is less than the predetermined angle.
10. An electronic device, characterized by comprising a memory and a processor, wherein the processor is configured to execute the executable computer programs stored in the memory to implement the image processing method according to any one of claims 1 to 6.
CN201910710509.8A 2019-08-02 2019-08-02 Image processing method, device and electronic equipment Pending CN110443765A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910710509.8A CN110443765A (en) 2019-08-02 2019-08-02 Image processing method, device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910710509.8A CN110443765A (en) 2019-08-02 2019-08-02 Image processing method, device and electronic equipment

Publications (1)

Publication Number Publication Date
CN110443765A (en) 2019-11-12

Family

ID=68432885

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910710509.8A Pending CN110443765A (en) 2019-08-02 2019-08-02 Image processing method, device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110443765A (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103827916A (en) * 2011-09-22 2014-05-28 富士胶片株式会社 Wrinkle detection method, wrinkle detection device and wrinkle detection program, as well as wrinkle evaluation method, wrinkle evaluation device and wrinkle evaluation program
CN103824087A (en) * 2012-11-16 2014-05-28 广州三星通信技术研究有限公司 Detection positioning method and system of face characteristic points
CN105550671A (en) * 2016-01-28 2016-05-04 北京麦芯科技有限公司 Face recognition method and device
CN105872447A (en) * 2016-05-26 2016-08-17 努比亚技术有限公司 Video image processing device and method
CN108876727A (en) * 2018-01-12 2018-11-23 迈格威科技有限公司 Image processing method, image processing apparatus and non-volatile memory medium
CN108324247A (en) * 2018-01-29 2018-07-27 杭州美界科技有限公司 A kind of designated position wrinkle of skin appraisal procedure and system
CN109086688A (en) * 2018-07-13 2018-12-25 北京科莱普云技术有限公司 Face wrinkles' detection method, device, computer equipment and storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
刘耕云: "Research on image-based facial wrinkle extraction technology" (基于图像的人脸皱纹提取技术研究), China Master's Theses Full-text Database, Information Science and Technology Series *
甘俊英 et al.: "Facial beauty prediction based on a lightweight convolutional neural network" (基于轻量级卷积神经网络的人脸美丽预测), Journal of Wuyi University (Natural Science Edition) *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111476117A (en) * 2020-03-25 2020-07-31 中建科技有限公司深圳分公司 Safety helmet wearing detection method and device and terminal
CN114612994A (en) * 2022-03-23 2022-06-10 深圳伯德睿捷健康科技有限公司 Method and device for training wrinkle detection model and method and device for detecting wrinkles

Similar Documents

Publication Publication Date Title
JP6900516B2 (en) Gaze point determination method and devices, electronic devices and computer storage media
CN104268591B (en) A kind of facial critical point detection method and device
CN108876879B (en) Method and device for realizing human face animation, computer equipment and storage medium
WO2018028546A1 (en) Key point positioning method, terminal, and computer storage medium
CN112384127B (en) Eyelid sagging detection method and system
CN106803067A (en) A kind of quality of human face image appraisal procedure and device
CN108230383A (en) Hand three-dimensional data determines method, apparatus and electronic equipment
CN107395958B (en) Image processing method and device, electronic equipment and storage medium
CN108229301B (en) Eyelid line detection method and device and electronic equipment
WO2021238410A1 (en) Image processing method and apparatus, electronic device, and medium
WO2020252969A1 (en) Eye key point labeling method and apparatus, and training method and apparatus for eye key point detection model
CN109711268B (en) Face image screening method and device
CN106778660B (en) A kind of human face posture bearing calibration and device
CN110443765A (en) Image processing method, device and electronic equipment
CN108256454A (en) A kind of training method based on CNN models, human face posture estimating and measuring method and device
CN110415285A (en) Image processing method, device and electronic equipment
CN115601811B (en) Face acne detection method and device
CN111488836A (en) Face contour correction method, device, equipment and storage medium
CN113379623B (en) Image processing method, device, electronic equipment and storage medium
CN111476151A (en) Eyeball detection method, device, equipment and storage medium
CN109035380B (en) Face modification method, device and equipment based on three-dimensional reconstruction and storage medium
CN111275610B (en) Face aging image processing method and system
CN104156689B (en) Method and device for positioning feature information of target object
CN110473281A (en) Threedimensional model retouches side processing method, device, processor and terminal
Azar et al. Real time eye detection using edge detection and Euclidean distance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication (application publication date: 20191112)