CN106558039B - Facial image processing method and device - Google Patents

Facial image processing method and device

Info

Publication number
CN106558039B
CN106558039B (application CN201510612934.5A)
Authority
CN
China
Prior art keywords
leg
head
pixel
width
mean breadth
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201510612934.5A
Other languages
Chinese (zh)
Other versions
CN106558039A (en)
Inventor
谭国富
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Tencent Cloud Computing Beijing Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201510612934.5A
Publication of CN106558039A
Application granted
Publication of CN106558039B
Legal status: Active (Current)
Anticipated expiration

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10052Images from lightfield camera

Landscapes

  • Processing Or Creating Images (AREA)

Abstract

The embodiments of the present invention disclose a facial image processing method and device. The method is implemented as follows: obtaining an original portrait picture and displaying the original portrait picture in a user interface; performing face detection on the original portrait picture, and determining the head width of the head region and the average leg width; determining, with the head width as a reference, that the average leg width is too wide, and then correcting the leg region so that the average leg width is narrowed; and displaying the corrected portrait picture in the user interface. The head width and the average leg width of the portrait picture are obtained by means of face detection, whether the average leg width is too wide is determined according to the head width, and the leg region is corrected when the average leg width is too wide; no manual operation by the user is needed, which is simple and easy to use.

Description

Facial image processing method and device
Technical field
The present invention relates to the field of computer technology, and in particular to a facial image processing method and device.
Background technique
At present, with the popularization of digital cameras, mobile phones, cameras and the like, more and more pictures are generated. However, due to lighting, the imaging apparatus, personal appearance, shooting angle, shooting posture, lens distortion and other reasons, the effect of some portrait pictures after shooting, especially of the legs, is often unsatisfactory. In particular, some girls always feel that their legs look thick and short, which affects the overall effect of the photo. Therefore, some professionals process the pictures with software such as Photoshop (image processing software) to make the legs thinner, so that the person looks slim and graceful.
However, with the current method of editing with such software, not only is the learning cost very high for the operator, but the operation is also relatively cumbersome, and ordinary users find it difficult to master.
Summary of the invention
The embodiments of the present invention provide a facial image processing method and device, which are used to provide a simple and easy-to-use scheme for correcting the legs of a portrait.
A facial image processing method includes:
obtaining an original portrait picture, and displaying the original portrait picture in a user interface;
performing face detection on the original portrait picture, and determining the head width of the head region and the average leg width;
determining, with the head width as a reference, that the average leg width is too wide, and then correcting the leg region so that the average leg width is narrowed; and displaying the corrected portrait picture in the user interface.
A facial image processing device includes:
a picture acquiring unit, configured to obtain an original portrait picture;
a display control unit, configured to display the original portrait picture in a user interface;
a detection unit, configured to perform face detection on the original portrait picture, and determine the head width of the head region and the average leg width;
a correction unit, configured to determine, with the head width as a reference, that the average leg width is too wide, and then correct the leg region so that the average leg width is narrowed;
the display control unit being further configured to display the corrected portrait picture in the user interface.
As can be seen from the above technical solutions, the embodiments of the present invention have the following advantage: the head width and the average leg width of a portrait picture are obtained by means of face detection, whether the average leg width is too wide is determined according to the head width, and the leg region is corrected when the average leg width is too wide; no manual operation by the user is needed, which is simple and easy to use.
Brief description of the drawings
In order to explain the technical solutions in the embodiments of the present invention more clearly, the drawings required for describing the embodiments are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention, and those of ordinary skill in the art can obtain other drawings from these drawings without any creative effort.
Fig. 1 is a flow chart of a method according to an embodiment of the present invention;
Fig. 2 is a schematic diagram of a standing portrait picture according to an embodiment of the present invention;
Fig. 3 is a schematic diagram of a system structure according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of a subsystem structure according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of the structure of a picture encoding and decoding subsystem according to an embodiment of the present invention;
Fig. 6 is a schematic diagram of the structure of an automatic leg-slimming beautification subsystem according to an embodiment of the present invention;
Fig. 7 is a flow chart of a method according to an embodiment of the present invention;
Fig. 8 is a schematic diagram of the structure of a facial image processing device according to an embodiment of the present invention;
Fig. 9 is a schematic diagram of the structure of a facial image processing device according to an embodiment of the present invention;
Fig. 10 is a schematic diagram of the structure of a facial image processing device according to an embodiment of the present invention;
Fig. 11 is a schematic diagram of the structure of a facial image processing device according to an embodiment of the present invention.
Detailed description of the embodiments
In order to make the objectives, technical solutions and advantages of the present invention clearer, the present invention is described in further detail below with reference to the accompanying drawings. Obviously, the described embodiments are only some of the embodiments of the present invention, rather than all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
The embodiment of the present invention proposes a method for automatically slimming the legs when beautifying a portrait photo, which can intelligently detect the position of a person's legs, identify whether the legs are thick or thin, and naturally slim legs that are too thick, so that the leg shape of the person in the picture looks more lively and lovely, and the person therefore looks more beautiful, while no abnormal result is introduced. It has concrete and practical application in multiple scenes such as selfies, life photos, art photos and wedding photography.
The embodiment of the present invention provides a facial image processing method, as shown in Fig. 1, including:
101: obtaining an original portrait picture, and displaying the original portrait picture in a user interface;
In the embodiment of the present invention, the original portrait picture is original with respect to the subsequently corrected portrait picture. Whether the original portrait picture was modified before does not affect the implementation of the embodiment of the present invention, so the embodiment of the present invention does not care whether the original portrait picture has ever been modified. The original portrait picture may come from the local device, or may be obtained by taking a photo or by receiving it from outside; its concrete form differs according to the application scenario.
102: performing face detection on the original portrait picture, and determining the head width of the head region and the average leg width;
The face detection in this embodiment can be based on existing face recognition technology, which is not further described in this embodiment. The head region is the region occupied by the head in the picture, and this region has the width parameter. In this embodiment, the average leg width may be the average width of a single leg or the average width of both legs; no unique limitation is needed.
The embodiment of the present invention also provides a specific way of calculating the average leg width, as follows: the determining of the head width of the head region and the average leg width includes:
determining the position of the head in the original portrait picture and the width and height of the head;
calculating the leg region from the face position and the face height, and calculating the average leg width of the leg region.
This embodiment calculates the leg region based on the face height, and then, after the leg region has been obtained, calculates the average leg width based on the leg region. The leg region can be determined according to the body drawing rules discovered and proposed by Leonardo da Vinci. For example: in the proportions of a standard human body, the head is 1/8 of the height, the shoulder width is 1/4 of the height, and, with the navel as the boundary, the ratio of the upper body to the lower body is 5:8, conforming to the "golden section" law.
Referring to the front-standing portrait example shown in Fig. 2, this embodiment further provides a more detailed calculation, as follows: if the original portrait picture is a front-standing portrait picture, the determining of the position of the head in the original portrait picture and the width and height of the head, the calculating of the leg region from the face position and the face height, and the calculating of the average leg width of the leg region include:
denoting the center point of the face as (fx, fy), and the width and height of the face as (wf, hf);
denoting the parameters of the leg region as follows:
the lower line of the leg region: hby = yf - 7.5*hf;
the upper line of the leg region: hty = hby + 4*hf;
the center line of the leg region: hcy = 2*hty/3;
the width of the leg region: wc = 2*wf;
performing edge detection in the vertical direction on the leg region, and extracting the longest edges as the leg contour, thereby calculating the average leg width.
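For illustration only (not part of the claimed embodiment), the following Python sketch shows how the leg region could be derived from a detected face box using the proportions above; the function name leg_region_from_face and the clamping to the picture width are assumptions added for this example, and the sign of the vertical offsets may need to be flipped depending on the image coordinate convention.

```python
def leg_region_from_face(fx, fy, wf, hf, img_w):
    """Estimate the leg region of a front-standing portrait from the face box.

    (fx, fy): face center; wf, hf: face width and height; img_w: picture width.
    Follows the formulas above; if the image y axis grows downward, the sign
    of the 7.5*hf offset may need to be flipped.
    """
    yf = fy                       # face center height, as used in the formulas
    hby = yf - 7.5 * hf           # lower line of the leg region
    hty = hby + 4 * hf            # upper line of the leg region
    hcy = 2 * hty / 3             # center line of the leg region
    wc = 2 * wf                   # width of the leg region
    left = max(0, int(fx - wc / 2))        # horizontal extent, centered on the face
    right = min(img_w, int(fx + wc / 2))
    return hby, hty, hcy, (left, right)
```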
103: determining, with the head width as a reference, that the average leg width is too wide, then correcting the leg region so that the average leg width is narrowed; and displaying the corrected portrait picture in the user interface.
The embodiment of the present invention obtains the head width and the average leg width of a portrait picture by means of face detection, determines whether the average leg width is too wide according to the head width, and corrects the leg region when the average leg width is too wide; no manual operation by the user is needed, which is simple and easy to use.
In this embodiment, the head width is used as the reference to judge whether the legs are too wide. A slight adjustment can be made depending on which parameter is used as the average leg width. For example, if the average width of both legs is used, the head width can be multiplied by 2 and then increased by, or multiplied by, a coefficient greater than or equal to 1; if the average width of a single leg is used, the head width can be used directly. The details are as follows: the determining, with the head width as a reference, that the average leg width is too wide includes:
determining that the average leg width is greater than the head width, and thereby determining that the average leg width is too wide, the average leg width being the average width of a single leg.
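For illustration, the width test described above can be expressed as a small helper; the coefficient argument for the both-legs case is an assumption introduced for this sketch.

```python
def legs_too_wide(leg_avg_width, head_width, both_legs=False, coeff=1.0):
    """Return True if the measured leg width exceeds the head-width threshold.

    For a single-leg average width the threshold is the head width itself;
    for a both-legs average width the head width is doubled and scaled by a
    coefficient >= 1, as suggested above.
    """
    threshold = head_width * (2 * coeff if both_legs else 1.0)
    return leg_avg_width > threshold
```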
The embodiment of the present invention further provides a specific implementation of the correction, as follows: the correcting of the leg region so that the average leg width is narrowed includes:
calculating a smoothing formula f(r), where r is the horizontal distance between the pixel and the above center point and d is the width of the head;
for each pixel (x, y) in the leg region, letting the distance between the pixel (x, y) and the leg center point (xf, hcy) be r;
if the distance from the leg center point (xf, hcy) is less than 2d and the pixel is on the left side of the leg center point (xf, hcy), taking the bilinear interpolation of the four pixels around [x, hcy-f(r)] as the pixel value of the pixel;
if the distance from the leg center point (xf, hcy) is less than 2d and the pixel is on the right side of the leg center point (xf, hcy), taking the bilinear interpolation of the four pixels around [x, hcy+f(r)] as the pixel value of the pixel.
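For illustration, the following Python sketch implements one possible reading of this correction as a pinch-style warp with bilinear interpolation around the leg center point. Since the exact smoothing formula is only outlined above (it depends on pi, r and d), a cosine falloff is assumed here, and the offset is applied radially toward the leg center rather than strictly along [x, hcy ± f(r)]; the strength parameter and both function names are assumptions for this example.

```python
import math
import numpy as np

def bilinear_sample(img, x, y):
    """Sample img (H x W x C) at a fractional position using bilinear interpolation."""
    h, w = img.shape[:2]
    x0, y0 = int(math.floor(x)), int(math.floor(y))
    x1, y1 = min(x0 + 1, w - 1), min(y0 + 1, h - 1)
    x0, y0 = max(x0, 0), max(y0, 0)
    ax, ay = x - x0, y - y0
    top = (1 - ax) * img[y0, x0] + ax * img[y0, x1]
    bottom = (1 - ax) * img[y1, x0] + ax * img[y1, x1]
    return (1 - ay) * top + ay * bottom

def slim_legs(img, xf, hcy, d, strength=0.15):
    """Pinch-style warp around the leg center (xf, hcy): pixels within 2*d of the
    center are resampled from positions slightly farther out, which compresses the
    leg content toward the center. f(r) is assumed to be a cosine falloff scaled by d."""
    out = img.astype(np.float32).copy()
    h, w = img.shape[:2]
    for y in range(h):
        for x in range(w):
            r = math.hypot(x - xf, y - hcy)
            if r == 0 or r >= 2 * d:
                continue
            # assumed smoothing function: largest near the center, zero at r = 2*d
            f_r = strength * d * (1 + math.cos(math.pi * r / (2 * d))) / 2
            scale = (r + f_r) / r            # sample farther from the center
            sx = min(max(xf + (x - xf) * scale, 0), w - 1)
            sy = min(max(hcy + (y - hcy) * scale, 0), h - 1)
            out[y, x] = bilinear_sample(img, sx, sy)
    return out.astype(img.dtype)
```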
Since moving pixels may make the image unbalanced, in order to eliminate this effect the embodiment of the present invention also provides a solution, as follows: the above method further includes:
performing histogram equalization processing on the corrected portrait picture.
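For illustration, one common way to apply this step is to equalize only the luma channel so that colors are not shifted; the OpenCV-based helper below is a sketch of that choice, not necessarily the exact variant used in the embodiment.

```python
import cv2

def equalize_contrast(img_bgr):
    """Histogram-equalize the luma (Y) channel of a BGR image."""
    ycrcb = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YCrCb)
    ycrcb[:, :, 0] = cv2.equalizeHist(ycrcb[:, :, 0])
    return cv2.cvtColor(ycrcb, cv2.COLOR_YCrCb2BGR)
```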
The following embodiment illustrates the embodiment of the present invention by taking a front-standing portrait as an example. The embodiment of the present invention proposes an implementation of automatic leg-slimming beautification of a portrait photo, which can intelligently detect the shape and contour of a person's legs, identify whether the legs are thick or thin, and naturally slim legs that are too thick, so that the leg shape of the person in the picture looks more lively and lovely, and the person therefore looks more beautiful, while no abnormal result is introduced.
The system provided by the embodiment of the present invention, as shown in Fig. 3, includes three parts: a UI (User Interface) display subsystem, a picture encoding and decoding subsystem, and an automatic leg-slimming beautification subsystem.
The UI display subsystem is responsible for displaying pictures and the operation interface;
the picture encoding and decoding subsystem is responsible for encoding and decoding pictures;
the automatic leg-slimming beautification subsystem is responsible for automatically slimming the legs in the picture.
(1) As shown in Fig. 4, the modules of the UI display subsystem are described as follows:
Picture display interface module: responsible for displaying the decoded image.
Reader operation bar module: responsible for displaying the operation buttons of the reader window, including a one-key beautification button.
(2) As shown in Fig. 5, the modules of the picture encoding and decoding subsystem are described as follows:
Picture decoding module: parses the encoding of the picture into an original image information stream, i.e., image data in RGB (red, green, blue) format.
Picture encoding module: encodes the original image information stream into a picture format such as JPEG (Joint Photographic Experts Group).
(3) As shown in Fig. 6, the modules of the automatic leg-slimming beautification subsystem are described as follows:
Face detection module: detects the position of the face in the picture.
Leg locating module: detects the leg position region and judges whether the legs are too thick.
Leg image transformation module: transforms the image of legs that are too thick.
Image beautification adjustment module: adjusts the transformed image as a whole.
Based on the above system, the detailed processing flow of the automatic leg-slimming portrait beautification method provided by the embodiment of the present invention is described as follows, as shown in Fig. 7, including:
after a picture is input, decoding, face detection, leg region locating, judging the leg thickness coefficient, calculating the transformation template, performing the narrowing transformation on both legs and histogram equalization of the image are carried out in turn, and finally the beautified photo is displayed on the interface.
Step 1: decoding: the input picture is decoded into RGB format.
Step 2: face detection: using the YouTu face detection technology [1], the position of the face in the photo is detected; the center point of the face is denoted as (fx, fy), and the width and height of the face as (wf, hf), as shown in Fig. 2.
Step 3: calculating the human leg region: according to the body drawing rules discovered and proposed by Leonardo da Vinci (in the proportions of a standard human body, the head is 1/8 of the height, the shoulder width is 1/4 of the height, and, with the navel as the boundary, the ratio of the upper body to the lower body is 5:8, conforming to the "golden section" law), and taking a front portrait as an example, the calculation formulas for the parameters of the lower-body region are as follows:
Total height of the person: high = 8*hf
Lower line of the leg region, i.e., the coordinate at about 7 head-lengths of the person: hby = yf - 7.5*hf
Upper line of the leg region, i.e., half of the body height: hty = hby + 4*hf
Center line of the leg region, at the 2/3 position of the legs: hcy = 2*hty/3
Width of the leg region, 2 times the head width: wc = 2*wf
The positions of these parameters are shown in Fig. 2: the leg region is the region between the upper line and the lower line, centered horizontally on the face center, with width wc.
Step 4: detecting whether the person's legs are wide or thin: in the region obtained in the previous step, edge detection in the vertical direction is performed and the longest edges are extracted as the leg contours, from which the average widths w1 and w2 of the legs are calculated, as shown; if w1 or w2 is greater than one head width wf, adjustment is needed.
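For illustration, this measurement step could be sketched as follows with a Sobel filter for vertical edges and a per-row width estimate inside the leg region; the threshold value and helper name are assumptions, and this simplified version measures the overall edge span rather than separating w1 and w2 for the two legs.

```python
import cv2
import numpy as np

def average_leg_width(gray, top, bottom, left, right, edge_thresh=60):
    """Estimate the average leg width inside the leg region of a grayscale image.

    Vertical edges (horizontal intensity gradients) are detected with a Sobel
    filter; for each row the span between the outermost edge columns is taken
    as that row's width, and the spans are averaged.
    """
    roi = gray[top:bottom, left:right]
    if roi.size == 0:
        return 0.0
    grad_x = np.abs(cv2.Sobel(roi, cv2.CV_32F, 1, 0, ksize=3))  # responds to vertical edges
    edges = grad_x > edge_thresh
    widths = [cols[-1] - cols[0]
              for cols in (np.flatnonzero(row) for row in edges)
              if cols.size >= 2]
    return float(np.mean(widths)) if widths else 0.0
```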
Step 5: calculating the transformation template: a smooth transformation formula f(r) is used, where r is the horizontal distance between the pixel and the center point and d is the width of the head. The smooth transformation formula was tuned through repeated testing to ensure that the change at the edges is small while the transformation in the middle is large.
Step 6: transforming the leg region of the person: for each pixel (x, y) in the leg region, let its distance from the leg center point (xf, hcy) be r; if the distance from the leg center point (xf, hcy) is less than 2d and the pixel is on the left side of the leg center point (xf, hcy), the pixel value of the pixel is taken as the bilinear interpolation of the four pixels around [x, hcy-f(r)]; if the distance from the leg center point (xf, hcy) is less than 2d and the pixel is on the right side of the leg center point (xf, hcy), the pixel value of the pixel is taken as the bilinear interpolation of the four pixels around [x, hcy+f(r)].
Step 7: performing histogram equalization on the image: histogram equalization is performed on the entire picture to improve the contrast of the image.
Step 8: displaying on the interface: the final result is displayed on the interface.
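Putting the steps together, the flow of Fig. 7 could look roughly like the driver below; face_detect is a stand-in for whatever face detection technology is used, and the other helpers are the illustrative sketches given earlier, so the whole block is an assumption-laden outline rather than the embodiment itself.

```python
import cv2

def beautify_legs(path_in, path_out):
    """Rough end-to-end flow of Fig. 7: decode, detect the face, locate the legs,
    judge their thickness, warp, equalize, and write out the result."""
    img = cv2.imread(path_in)                                    # step 1: decode
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    fx, fy, wf, hf = face_detect(img)                            # step 2: placeholder detector
    hby, hty, hcy, (left, right) = leg_region_from_face(fx, fy, wf, hf, img.shape[1])
    top, bottom = sorted((int(hty), int(hby)))                   # step 3: leg region bounds
    top, bottom = max(top, 0), min(bottom, img.shape[0])
    w_avg = average_leg_width(gray, top, bottom, left, right)    # step 4: measure width
    if legs_too_wide(w_avg, wf):                                 # steps 5-6: template + warp
        img = slim_legs(img, fx, hcy, wf)
    img = equalize_contrast(img)                                 # step 7: histogram equalization
    cv2.imwrite(path_out, img)                                   # step 8: show/save the result
    return img
```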
The embodiment of the present invention can intelligently detect the approximate position of a person's legs, identify whether the legs are thick or thin, and naturally slim legs that are too thick, so that the leg shape of the person in the picture looks more slender and lovely, and the person therefore looks more beautiful, while no abnormal result is introduced. Although the above technical solution is illustrated with a standing portrait, the transformation method of the embodiment of the present invention can also be used for portraits in other postures.
The embodiment of the present invention also provides a facial image processing device, as shown in Fig. 8, including:
a picture acquiring unit 801, configured to obtain an original portrait picture;
a display control unit 802, configured to display the original portrait picture in a user interface;
a detection unit 803, configured to perform face detection on the original portrait picture, and determine the head width of the head region and the average leg width;
a correction unit 804, configured to determine, with the head width as a reference, that the average leg width is too wide, and then correct the leg region so that the average leg width is narrowed;
the display control unit 802 being further configured to display the corrected portrait picture in the user interface.
In the embodiment of the present invention, the original portrait picture is original with respect to the subsequently corrected portrait picture. Whether the original portrait picture was modified before does not affect the implementation of the embodiment of the present invention, so the embodiment of the present invention does not care whether the original portrait picture has ever been modified. The original portrait picture may come from the local device, or may be obtained by taking a photo or by receiving it from outside; its concrete form differs according to the application scenario.
The face detection in this embodiment can be based on existing face recognition technology, which is not further described in this embodiment. The head region is the region occupied by the head in the picture, and this region has the width parameter. In this embodiment, the average leg width may be the average width of a single leg or the average width of both legs; no unique limitation is needed.
The embodiment of the present invention obtains the head width and the average leg width of a portrait picture by means of face detection, determines whether the average leg width is too wide according to the head width, and corrects the leg region when the average leg width is too wide; no manual operation by the user is needed, which is simple and easy to use.
Optionally, the embodiment of the present invention also provides a specific way of calculating the average leg width, as follows: the detection unit 803 is configured to determine the position of the head in the original portrait picture and the width and height of the head, calculate the leg region from the face position and the face height, and calculate the average leg width of the leg region.
This embodiment calculates the leg region based on the face height, and then, after the leg region has been obtained, calculates the average leg width based on the leg region. The leg region can be determined according to the body drawing rules discovered and proposed by Leonardo da Vinci. For example: in the proportions of a standard human body, the head is 1/8 of the height, the shoulder width is 1/4 of the height, and, with the navel as the boundary, the ratio of the upper body to the lower body is 5:8, conforming to the "golden section" law.
Optionally, in this embodiment, the head width is used as the reference to judge whether the legs are too wide. A slight adjustment can be made depending on which parameter is used as the average leg width. For example, if the average width of both legs is used, the head width can be multiplied by 2 and then increased by, or multiplied by, a coefficient greater than or equal to 1; if the average width of a single leg is used, the head width can be used directly. The details are as follows: the correction unit 804 is configured to determine that the average leg width is greater than the head width, and thereby determine that the average leg width is too wide, the average leg width being the average width of a single leg.
Optionally, referring to the front-standing portrait example shown in Fig. 2, this embodiment further provides a specific calculation, as follows: the correction unit 804 is configured to, if the original portrait picture is a front-standing portrait picture, denote the center point of the face as (fx, fy) and the width and height of the face as (wf, hf);
denote the parameters of the leg region as follows:
the lower line of the leg region: hby = yf - 7.5*hf;
the upper line of the leg region: hty = hby + 4*hf;
the center line of the leg region: hcy = 2*hty/3;
the width of the leg region: wc = 2*wf;
and perform edge detection in the vertical direction on the leg region, and extract the longest edges as the leg contour, thereby calculating the average leg width.
Optionally, the embodiment of the present invention further provides a specific implementation of the correction, as follows:
the correction unit 804 is configured to calculate a smoothing formula f(r), where r is the horizontal distance between the pixel and the above center point and d is the width of the head;
for each pixel (x, y) in the leg region, let the distance between the pixel (x, y) and the leg center point (xf, hcy) be r;
if the distance from the leg center point (xf, hcy) is less than 2d and the pixel is on the left side of the leg center point (xf, hcy), take the bilinear interpolation of the four pixels around [x, hcy-f(r)] as the pixel value of the pixel;
if the distance from the leg center point (xf, hcy) is less than 2d and the pixel is on the right side of the leg center point (xf, hcy), take the bilinear interpolation of the four pixels around [x, hcy+f(r)] as the pixel value of the pixel.
Further, since moving pixels may make the image unbalanced, in order to eliminate this effect the embodiment of the present invention also provides a solution, as follows: as shown in Fig. 9, the above facial image processing device further includes:
an equalization processing unit 901, configured to perform histogram equalization processing on the portrait picture corrected by the correction unit 804.
The embodiment of the present invention also provides another facial image processing device, as shown in Fig. 10, including:
a receiver 1001, a transmitter 1002, a processor 1003, a memory 1004 and a display 1005; the memory 1004 can be used to store pictures and the cache required by the processor 1003 during data processing;
the processor 1003 being configured to obtain an original portrait picture and display the original portrait picture in a user interface; perform face detection on the original portrait picture, and determine the head width of the head region and the average leg width; determine, with the head width as a reference, that the average leg width is too wide, and then correct the leg region so that the average leg width is narrowed; and display the corrected portrait picture in the user interface.
In the embodiment of the present invention, the original portrait picture is original with respect to the subsequently corrected portrait picture. Whether the original portrait picture was modified before does not affect the implementation of the embodiment of the present invention, so the embodiment of the present invention does not care whether the original portrait picture has ever been modified. The original portrait picture may come from the local device, or may be obtained by taking a photo or by receiving it from outside; its concrete form differs according to the application scenario.
The face detection in this embodiment can be based on existing face recognition technology, which is not further described in this embodiment. The head region is the region occupied by the head in the picture, and this region has the width parameter. In this embodiment, the average leg width may be the average width of a single leg or the average width of both legs; no unique limitation is needed.
The embodiment of the present invention obtains the head width and the average leg width of a portrait picture by means of face detection, determines whether the average leg width is too wide according to the head width, and corrects the leg region when the average leg width is too wide; no manual operation by the user is needed, which is simple and easy to use.
The embodiment of the present invention also provides a specific way of calculating the average leg width, as follows: the processor 1003 being configured to determine the head width of the head region and the average leg width includes:
determining the position of the head in the original portrait picture and the width and height of the head;
calculating the leg region from the face position and the face height, and calculating the average leg width of the leg region.
This embodiment calculates the leg region based on the face height, and then, after the leg region has been obtained, calculates the average leg width based on the leg region. The leg region can be determined according to the body drawing rules discovered and proposed by Leonardo da Vinci. For example: in the proportions of a standard human body, the head is 1/8 of the height, the shoulder width is 1/4 of the height, and, with the navel as the boundary, the ratio of the upper body to the lower body is 5:8, conforming to the "golden section" law.
Referring to the front-standing portrait example shown in Fig. 2, this embodiment further provides a more detailed calculation, as follows: if the original portrait picture is a front-standing portrait picture, the processor 1003 being configured to determine the position of the head in the original portrait picture and the width and height of the head, calculate the leg region from the face position and the face height, and calculate the average leg width of the leg region includes:
denoting the center point of the face as (fx, fy), and the width and height of the face as (wf, hf);
denoting the parameters of the leg region as follows:
the lower line of the leg region: hby = yf - 7.5*hf;
the upper line of the leg region: hty = hby + 4*hf;
the center line of the leg region: hcy = 2*hty/3;
the width of the leg region: wc = 2*wf;
performing edge detection in the vertical direction on the leg region, and extracting the longest edges as the leg contour, thereby calculating the average leg width.
In this embodiment, the head width is used as the reference to judge whether the legs are too wide. A slight adjustment can be made depending on which parameter is used as the average leg width. For example, if the average width of both legs is used, the head width can be multiplied by 2 and then increased by, or multiplied by, a coefficient greater than or equal to 1; if the average width of a single leg is used, the head width can be used directly. The details are as follows: the processor 1003 being configured to determine, with the head width as a reference, that the average leg width is too wide includes:
determining that the average leg width is greater than the head width, and thereby determining that the average leg width is too wide, the average leg width being the average width of a single leg.
The embodiment of the present invention further provides a specific implementation of the correction, as follows: the processor 1003 being configured to correct the leg region so that the average leg width is narrowed includes:
calculating a smoothing formula f(r), where r is the horizontal distance between the pixel and the above center point and d is the width of the head;
for each pixel (x, y) in the leg region, letting the distance between the pixel (x, y) and the leg center point (xf, hcy) be r;
if the distance from the leg center point (xf, hcy) is less than 2d and the pixel is on the left side of the leg center point (xf, hcy), taking the bilinear interpolation of the four pixels around [x, hcy-f(r)] as the pixel value of the pixel;
if the distance from the leg center point (xf, hcy) is less than 2d and the pixel is on the right side of the leg center point (xf, hcy), taking the bilinear interpolation of the four pixels around [x, hcy+f(r)] as the pixel value of the pixel.
Since moving pixels may make the image unbalanced, in order to eliminate this effect the embodiment of the present invention also provides a solution, as follows: the processor 1003 is further configured to perform histogram equalization processing on the corrected portrait picture.
The embodiment of the present invention also provides another facial image processing device, as shown in Fig. 11. For ease of description, only the part related to the embodiment of the present invention is shown; for specific technical details that are not disclosed, please refer to the method part of the embodiments of the present invention. The device is described by taking a terminal as an example, and the terminal can be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, and the like. The following takes a mobile phone as an example:
Fig. 11 shows a block diagram of part of the structure of a mobile phone related to the terminal provided by the embodiment of the present invention. Referring to Fig. 11, the mobile phone includes components such as a radio frequency (RF) circuit 1110, a memory 1120, an input unit 1130, a display unit 1140, a sensor 1150, an audio circuit 1160, a wireless fidelity (WiFi) module 1170, a processor 1180 and a power supply 1190. Those skilled in the art will understand that the mobile phone structure shown in Fig. 11 does not constitute a limitation on the mobile phone, which may include more or fewer components than illustrated, combine certain components, or have a different arrangement of components.
Each component of the mobile phone is described in detail below with reference to Fig. 11:
The RF circuit 1110 can be used for receiving and sending signals during information transmission and reception or during a call; in particular, after receiving downlink information from a base station, it passes the information to the processor 1180 for processing, and it sends uplink data to the base station. In general, the RF circuit 1110 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1110 can also communicate with networks and other devices through wireless communication. The wireless communication can use any communication standard or protocol, including but not limited to the Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), and the like.
The memory 1120 can be used to store software programs and modules, and the processor 1180 executes the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1120. The memory 1120 may mainly include a program storage area and a data storage area, where the program storage area can store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the data storage area can store data created according to the use of the mobile phone (such as audio data, a phone book, etc.), and the like. In addition, the memory 1120 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device.
The input unit 1130 can be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 1130 may include a touch panel 1131 and other input devices 1132. The touch panel 1131, also referred to as a touch screen, can collect touch operations of a user on or near it (such as operations performed by the user on or near the touch panel 1131 with a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a preset program. Optionally, the touch panel 1131 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch orientation of the user, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 1180, and can receive and execute commands sent by the processor 1180. In addition, the touch panel 1131 can be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 1131, the input unit 1130 may also include other input devices 1132. Specifically, the other input devices 1132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as a volume control key, a switch key, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 1140 can be used to display information input by the user or information provided to the user and the various menus of the mobile phone. The display unit 1140 may include a display panel 1141, and optionally the display panel 1141 can be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch panel 1131 can cover the display panel 1141; when the touch panel 1131 detects a touch operation on or near it, it transmits the operation to the processor 1180 to determine the type of the touch event, and the processor 1180 then provides a corresponding visual output on the display panel 1141 according to the type of the touch event. Although in Fig. 11 the touch panel 1131 and the display panel 1141 are two independent components for realizing the input and output functions of the mobile phone, in some embodiments the touch panel 1131 and the display panel 1141 can be integrated to realize the input and output functions of the mobile phone.
The mobile phone may also include at least one sensor 1150, such as an optical sensor, a motion sensor and other sensors. Specifically, the optical sensor may include an ambient light sensor and a proximity sensor, where the ambient light sensor can adjust the brightness of the display panel 1141 according to the brightness of the ambient light, and the proximity sensor can turn off the display panel 1141 and/or the backlight when the mobile phone is moved to the ear. As a kind of motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when stationary; it can be used in applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games, magnetometer posture calibration), vibration-recognition related functions (such as a pedometer, tapping), and the like. The mobile phone can also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer and an infrared sensor, which are not described in detail here.
The audio circuit 1160, a loudspeaker 1161 and a microphone 1162 can provide an audio interface between the user and the mobile phone. The audio circuit 1160 can transmit the electrical signal converted from the received audio data to the loudspeaker 1161, which converts it into a sound signal for output; on the other hand, the microphone 1162 converts the collected sound signal into an electrical signal, which is received by the audio circuit 1160 and converted into audio data; after the audio data is processed by the processor 1180, it is sent through the RF circuit 1110 to, for example, another mobile phone, or is output to the memory 1120 for further processing.
WiFi belongs to short-range wireless transmission technology. Through the WiFi module 1170 the mobile phone can help the user send and receive e-mails, browse web pages, access streaming media and the like; it provides the user with wireless broadband Internet access. Although Fig. 11 shows the WiFi module 1170, it can be understood that it is not a necessary component of the mobile phone and can be omitted as needed within the scope of not changing the essence of the invention.
The processor 1180 is the control center of the mobile phone. It connects all parts of the whole mobile phone through various interfaces and lines, and executes the various functions of the mobile phone and processes data by running or executing the software programs and/or modules stored in the memory 1120 and calling the data stored in the memory 1120, thereby monitoring the mobile phone as a whole. Optionally, the processor 1180 may include one or more processing units; preferably, the processor 1180 can integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, application programs and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 1180.
The mobile phone also includes a power supply 1190 (such as a battery) that supplies power to all components. Preferably, the power supply can be logically connected to the processor 1180 through a power management system, so as to realize functions such as charging, discharging and power consumption management through the power management system.
Although not shown, the mobile phone may also include a camera, a Bluetooth module and the like, which are not described in detail here.
In the embodiments of the present invention, the processor 1180 included in the terminal also has the function of executing the above method flows; the execution of the above method flows can be based on the hardware structure of the above mobile phone.
It is worth noting that the units included in the above facial image processing device embodiments are only divided according to functional logic, but the division is not limited to the above as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from each other and are not intended to limit the protection scope of the present invention.
In addition, those of ordinary skill in the art can understand that all or part of the steps in the above method embodiments can be completed by instructing the relevant hardware through a program, and the corresponding program can be stored in a computer-readable storage medium; the storage medium mentioned above can be a read-only memory, a magnetic disk, an optical disc, or the like.
The above are only preferred specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement that can readily occur to those familiar with the technical field within the technical scope disclosed by the embodiments of the present invention shall be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (11)

1. A facial image processing method, characterized by comprising:
obtaining an original portrait picture, and displaying the original portrait picture in a user interface;
performing face detection on the original portrait picture, determining the head width of the head region, determining the leg region according to the Leonardo da Vinci body drawing rules, and calculating the average leg width based on the leg region;
determining, with the head width as a reference, that the average leg width is greater than the head width, then correcting the leg region so that the average leg width is narrowed, and displaying the corrected portrait picture in the user interface, the average leg width being the average width of a single leg;
wherein the correcting of the leg region so that the average leg width is narrowed comprises:
for each pixel (x, y) in the leg region, letting the distance between the pixel (x, y) and the leg center point (xf, hcy) be r;
if the distance between the pixel (x, y) and the leg center point (xf, hcy) is less than 2d and the pixel is on the left side of the leg center point (xf, hcy), taking the bilinear interpolation of the four pixels around [x, hcy-f(r)] as the pixel value of the pixel;
if the distance between the pixel (x, y) and the leg center point (xf, hcy) is less than 2d and the pixel is on the right side of the leg center point (xf, hcy), taking the bilinear interpolation of the four pixels around [x, hcy+f(r)] as the pixel value of the pixel;
wherein hcy is the center line of the leg region, calculated based on the face position and the face height; f(r) is calculated based on pi, the horizontal distance between the pixel and the leg center point, and the width of the head; and d is the width of the head.
2. The method according to claim 1, characterized in that the determining of the head width of the head region and the average leg width comprises:
determining the position of the head in the original portrait picture and the width and height of the head;
calculating the leg region from the face position and the face height, and calculating the average leg width of the leg region.
3. The method according to claim 2, characterized in that the original portrait picture is a front-standing portrait picture, and the determining of the position of the head in the original portrait picture and the width and height of the head, the calculating of the leg region from the face position and the face height, and the calculating of the average leg width of the leg region comprise:
denoting the center point of the face as (fx, fy), the width of the head as wf, and the height of the head as hf;
denoting the parameters of the leg region as follows:
the lower line of the leg region: hby = yf - 7.5*hf;
the upper line of the leg region: hty = hby + 4*hf;
the center line of the leg region: hcy = 2*hty/3;
the width of the leg region: wc = 2*wf;
where yf is the height of the face center;
performing edge detection in the vertical direction on the leg region, and extracting the longest edges as the leg contour, thereby calculating the average leg width.
4. The method according to claim 3, characterized in that f(r) being calculated based on pi, the horizontal distance between the pixel and the center point, and the width of the head comprises:
calculating the smoothing formula f(r) as follows:
5. The method according to claim 4, characterized in that the method further comprises:
performing histogram equalization processing on the corrected portrait picture.
6. A facial image processing device, characterized by comprising:
a picture acquiring unit, configured to obtain an original portrait picture;
a display control unit, configured to display the original portrait picture in a user interface;
a detection unit, configured to perform face detection on the original portrait picture, determine the head width of the head region, determine the leg region according to the Leonardo da Vinci body drawing rules, and calculate the average leg width based on the leg region;
a correction unit, configured to determine, with the head width as a reference, that the average leg width is greater than the head width, and then correct the leg region so that the average leg width is narrowed, the average leg width being the average width of a single leg;
the display control unit being further configured to display the corrected portrait picture in the user interface;
wherein the correction unit is configured to: for each pixel (x, y) in the leg region, let the distance between the pixel (x, y) and the leg center point (xf, hcy) be r; if the distance between the pixel (x, y) and the leg center point (xf, hcy) is less than 2d and the pixel is on the left side of the leg center point (xf, hcy), take the bilinear interpolation of the four pixels around [x, hcy-f(r)] as the pixel value of the pixel; if the distance between the pixel (x, y) and the leg center point (xf, hcy) is less than 2d and the pixel is on the right side of the leg center point (xf, hcy), take the bilinear interpolation of the four pixels around [x, hcy+f(r)] as the pixel value of the pixel; wherein hcy is the center line of the leg region, calculated based on the face position and the face height; f(r) is calculated based on pi, the horizontal distance between the pixel and the leg center point, and the width of the head; and d is the width of the head.
7. The facial image processing device according to claim 6, characterized in that
the detection unit is configured to determine the position of the head in the original portrait picture and the width and height of the head, calculate the leg region from the face position and the face height, and calculate the average leg width of the leg region.
8. The facial image processing device according to claim 7, characterized in that
the correction unit is configured to, if the original portrait picture is a front-standing portrait picture, denote the center point of the face as (fx, fy), the width of the head as wf, and the height of the head as hf;
denote the parameters of the leg region as follows:
the lower line of the leg region: hby = yf - 7.5*hf;
the upper line of the leg region: hty = hby + 4*hf;
the center line of the leg region: hcy = 2*hty/3;
the width of the leg region: wc = 2*wf;
where yf is the height of the face center;
and perform edge detection in the vertical direction on the leg region, and extract the longest edges as the leg contour, thereby calculating the average leg width.
9. The facial image processing device according to claim 8, characterized in that
f(r) being calculated based on pi, the horizontal distance between the pixel and the center point, and the width of the head comprises: calculating the smoothing formula f(r) as follows:
10. The facial image processing device according to claim 9, characterized in that the facial image processing device further comprises:
an equalization processing unit, configured to perform histogram equalization processing on the portrait picture corrected by the correction unit.
11. A storage medium, characterized in that a computer program is stored in the storage medium, and the computer program is used to execute the facial image processing method according to any one of claims 1-5.
CN201510612934.5A 2015-09-23 2015-09-23 Facial image processing method and device Active CN106558039B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510612934.5A CN106558039B (en) 2015-09-23 2015-09-23 Facial image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510612934.5A CN106558039B (en) 2015-09-23 2015-09-23 Facial image processing method and device

Publications (2)

Publication Number Publication Date
CN106558039A CN106558039A (en) 2017-04-05
CN106558039B (en) 2019-11-19

Family

ID=58413612

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510612934.5A Active CN106558039B (en) 2015-09-23 2015-09-23 Facial image processing method and device

Country Status (1)

Country Link
CN (1) CN106558039B (en)

Families Citing this family (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108830783B (en) * 2018-05-31 2021-07-02 北京市商汤科技开发有限公司 Image processing method and device and computer storage medium
CN110852933A (en) * 2018-08-21 2020-02-28 北京市商汤科技开发有限公司 Image processing method and apparatus, image processing device, and storage medium
CN110852934A (en) * 2018-08-21 2020-02-28 北京市商汤科技开发有限公司 Image processing method and apparatus, image device, and storage medium
CN110852932B (en) * 2018-08-21 2024-03-08 北京市商汤科技开发有限公司 Image processing method and device, image equipment and storage medium
CN110880155A (en) * 2018-09-05 2020-03-13 武汉斗鱼网络科技有限公司 Method, storage medium, device and system for realizing long leg special effect
CN110880156A (en) * 2018-09-05 2020-03-13 武汉斗鱼网络科技有限公司 Long-leg special effect implementation method, storage medium, equipment and system
CN109658355A (en) * 2018-12-19 2019-04-19 维沃移动通信有限公司 A kind of image processing method and device
CN110136051A (en) * 2019-04-30 2019-08-16 北京市商汤科技开发有限公司 A kind of image processing method, device and computer storage medium
CN110264430B (en) * 2019-06-29 2022-04-15 北京字节跳动网络技术有限公司 Video beautifying method and device and electronic equipment
CN112380990A (en) * 2020-11-13 2021-02-19 咪咕文化科技有限公司 Picture adjusting method, electronic device and readable storage medium

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20020065452A1 (en) * 2000-11-29 2002-05-30 Roland Bazin Process for diagnosing conditions of external body portions and features of products applied thereto
JP5166102B2 (en) * 2008-04-22 2013-03-21 株式会社東芝 Image processing apparatus and method
JP5849558B2 (en) * 2011-09-15 2016-01-27 オムロン株式会社 Image processing apparatus, image processing method, control program, and recording medium

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH06195431A (en) * 1992-12-25 1994-07-15 Casio Comput Co Ltd Montage preparation device
CN101616236A (en) * 2008-06-23 2009-12-30 松下电器产业株式会社 Image adjusts big or small processing unit and image is adjusted big or small processing method
CN102027505A (en) * 2008-07-30 2011-04-20 泰塞拉技术爱尔兰公司 Automatic face and skin beautification using face detection
CN103679175A (en) * 2013-12-13 2014-03-26 电子科技大学 Fast 3D skeleton model detecting method based on depth camera
CN103810750A (en) * 2014-01-16 2014-05-21 北京航空航天大学 Human body section ring based parametric deformation method
CN104537608A (en) * 2014-12-31 2015-04-22 深圳市中兴移动通信有限公司 Image processing method and device

Also Published As

Publication number Publication date
CN106558039A (en) 2017-04-05

Similar Documents

Publication Publication Date Title
CN106558039B (en) Facial image processing method and device
CN108540724A (en) A kind of image pickup method and mobile terminal
CN107580184A (en) A kind of image pickup method and mobile terminal
CN109040643A (en) The method, apparatus of mobile terminal and remote group photo
CN106027108B (en) A kind of indoor orientation method, device and wearable device and mobile terminal
CN108495056A (en) Photographic method, mobile terminal and computer readable storage medium
CN108038825B (en) Image processing method and mobile terminal
CN108073343A (en) A kind of display interface method of adjustment and mobile terminal
CN108195390A (en) A kind of air navigation aid, device and mobile terminal
CN108347558A (en) A kind of method, apparatus and mobile terminal of image optimization
CN107506732A (en) Method, equipment, mobile terminal and the computer-readable storage medium of textures
CN108132752A (en) A kind of method for editing text and mobile terminal
CN104519269B (en) A kind of the view-finder display methods and device of camera installation
CN106504303B (en) A kind of method and apparatus playing frame animation
CN109544486A (en) A kind of image processing method and terminal device
CN110113528A (en) A kind of parameter acquiring method and terminal device
CN106851119B (en) Picture generation method and equipment and mobile terminal
CN108040209A (en) A kind of image pickup method and mobile terminal
CN108234894A (en) A kind of exposure adjustment method and terminal device
CN107124556A (en) Focusing method, device, computer-readable recording medium and mobile terminal
CN107920384A (en) A kind of method, terminal and computer-readable recording medium for searching for communication network
CN108198127A (en) A kind of image processing method, device and mobile terminal
CN109741269A (en) Image processing method, device, computer equipment and storage medium
CN109377531A (en) Image color cast method of adjustment, device, mobile terminal and readable storage medium storing program for executing
CN108196753A (en) A kind of interface switching method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant
TR01 Transfer of patent right

Effective date of registration: 20230710

Address after: Floor 35, Tencent Building, No. 1 High-tech Zone, Nanshan District, Shenzhen, Guangdong Province, 518000

Patentee after: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.

Patentee after: TENCENT CLOUD COMPUTING (BEIJING) Co.,Ltd.

Address before: Room 403, East Block 2, SEG Science and Technology Park, Zhenxing Road, Futian District, Shenzhen, Guangdong, 518000

Patentee before: TENCENT TECHNOLOGY (SHENZHEN) Co.,Ltd.