CN107845076A - Image processing method, device, computer-readable recording medium and computer equipment - Google Patents
Image processing method, device, computer-readable recording medium and computer equipment
- Publication number
- CN107845076A CN107845076A CN201711040347.9A CN201711040347A CN107845076A CN 107845076 A CN107845076 A CN 107845076A CN 201711040347 A CN201711040347 A CN 201711040347A CN 107845076 A CN107845076 A CN 107845076A
- Authority
- CN
- China
- Prior art keywords
- image
- face
- color
- parameter
- image to be processed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Links
- 238000003672 processing method Methods 0.000 title claims abstract description 26
- 238000012545 processing Methods 0.000 claims abstract description 71
- 238000000034 method Methods 0.000 claims description 16
- 210000001747 pupil Anatomy 0.000 claims description 15
- 238000004590 computer program Methods 0.000 claims description 13
- 238000004364 calculation method Methods 0.000 claims description 3
- 230000000007 visual effect Effects 0.000 abstract description 11
- 238000003384 imaging method Methods 0.000 description 15
- 238000010586 diagram Methods 0.000 description 5
- 238000012937 correction Methods 0.000 description 4
- 230000015572 biosynthetic process Effects 0.000 description 3
- 238000004891 communication Methods 0.000 description 3
- 238000001514 detection method Methods 0.000 description 3
- 230000006870 function Effects 0.000 description 3
- 238000013139 quantization Methods 0.000 description 3
- 238000001228 spectrum Methods 0.000 description 3
- 238000003786 synthesis reaction Methods 0.000 description 3
- 238000004422 calculation algorithm Methods 0.000 description 2
- 230000009977 dual effect Effects 0.000 description 2
- 230000000694 effects Effects 0.000 description 2
- 238000005516 engineering process Methods 0.000 description 2
- 210000000744 eyelid Anatomy 0.000 description 2
- 230000004927 fusion Effects 0.000 description 2
- 238000012986 modification Methods 0.000 description 2
- 230000004048 modification Effects 0.000 description 2
- 238000012805 post-processing Methods 0.000 description 2
- 230000002194 synthesizing effect Effects 0.000 description 2
- 230000002087 whitening effect Effects 0.000 description 2
- 208000003351 Melanosis Diseases 0.000 description 1
- 238000004458 analytical method Methods 0.000 description 1
- 230000003255 anti-acne Effects 0.000 description 1
- 230000008901 benefit Effects 0.000 description 1
- 239000000872 buffer Substances 0.000 description 1
- 230000008859 change Effects 0.000 description 1
- 239000003086 colorant Substances 0.000 description 1
- 238000013500 data storage Methods 0.000 description 1
- 238000013461 design Methods 0.000 description 1
- 230000001815 facial effect Effects 0.000 description 1
- 238000001914 filtration Methods 0.000 description 1
- 238000005286 illumination Methods 0.000 description 1
- 230000010354 integration Effects 0.000 description 1
- 238000010801 machine learning Methods 0.000 description 1
- 239000011159 matrix material Substances 0.000 description 1
- 230000008569 process Effects 0.000 description 1
- 238000002310 reflectometry Methods 0.000 description 1
- 230000006641 stabilisation Effects 0.000 description 1
- 238000011105 stabilization Methods 0.000 description 1
- 230000003068 static effect Effects 0.000 description 1
- 230000001360 synchronised effect Effects 0.000 description 1
- 238000012546 transfer Methods 0.000 description 1
- 238000002834 transmittance Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention relates to an image processing method and apparatus, a computer-readable storage medium, and a computer device. The image processing method includes: obtaining the color characteristic of an image to be processed, and determining the dominant hue of the image according to the color characteristic; selecting a corresponding beautification parameter according to the dominant hue; and performing beautification processing on the portrait region in the image to be processed according to the beautification parameter. By obtaining the dominant hue of the image to be processed, obtaining the corresponding beautification parameter according to the dominant hue, and performing beautification processing on the portrait region according to that parameter, the result of the beautification better matches the overall style of the whole image, improving the visual effect of the image.
Description
Technical field
The present application relates to the field of image processing, and in particular to an image processing method and apparatus, a computer-readable storage medium, and a computer device.
Background
With the popularization of camera functions, more and more users are accustomed to shooting the scenery or people around them with a mobile terminal that has a shooting function and sharing the photos with others. Users may apply a beautification effect when taking selfies or photographing companions. The traditional beautification approach applies fixed beautification parameters, and the visual effect of the resulting images is poor.
Summary of the invention
The embodiments of the present application provide an image processing method and apparatus, a computer-readable storage medium, and a computer device, which can improve the visual effect of an image.
An image processing method includes:
Obtaining the color characteristic of an image to be processed;
Determining the dominant hue of the image to be processed according to the color characteristic;
Selecting a corresponding beautification parameter according to the dominant hue;
Performing beautification processing on the portrait region in the image to be processed according to the beautification parameter.
An image processing apparatus includes:
A feature extraction module, configured to obtain the color characteristic of an image to be processed;
A hue determination module, configured to determine the dominant hue of the image to be processed according to the color characteristic;
A parameter selection module, configured to select a corresponding beautification parameter according to the dominant hue;
A beautification module, configured to perform beautification processing on the portrait region in the image to be processed according to the beautification parameter.
A computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the image processing method described above.
A computer device includes a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the steps of the image processing method described above.
With the image processing method and apparatus, computer-readable storage medium, and computer device of the embodiments of the present application, the dominant hue of the image to be processed is obtained, the corresponding beautification parameter is obtained according to the dominant hue, and beautification processing is performed on the portrait region in the image to be processed according to that parameter, so that the result of the beautification better matches the overall style of the whole image and the visual effect of the image is improved.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings required for describing the embodiments or the prior art are briefly introduced below. Apparently, the drawings in the following description show only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a flowchart of an image processing method in one embodiment;
Fig. 2 is a color histogram generated in one embodiment;
Fig. 3 is a schematic diagram of obtaining depth information in one embodiment;
Fig. 4 is a flowchart of an image processing method in another embodiment;
Fig. 5 is a structural block diagram of an image processing apparatus in one embodiment;
Fig. 6 is a schematic diagram of the internal structure of a computer device in one embodiment;
Fig. 7 is a schematic diagram of an image processing circuit in one embodiment.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present application clearer, the application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the application and are not intended to limit it.
It can be understood that the terms "first", "second", and the like used in this application may describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, without departing from the scope of the present application, a first acquisition module may be referred to as a second acquisition module and, similarly, a second acquisition module may be referred to as a first acquisition module. The first acquisition module and the second acquisition module are both acquisition modules, but they are not the same acquisition module.
Fig. 1 is the flow chart of image processing method in one embodiment.As shown in figure 1, a kind of image processing method, bag
Include:
Step 102, the color characteristic of pending image is obtained.
Specifically, computer equipment can obtain pending image from local or server, and can obtain pending image
Color characteristic.Computer equipment is by imaging first-class imaging device shooting image, and using the image of shooting as pending image,
Also image can be obtained from local photograph album storehouse, as pending image.Furthermore computer equipment is from the internet that server obtains
Image in image or individual subscriber network album, as pending image.Wherein, computer equipment can be mobile terminal,
Desktop computer etc..Mobile terminal can be smart mobile phone, tablet personal computer, personal digital assistant, Wearable etc..
Step 104: determine the dominant hue of the image to be processed according to the color characteristic.
The computer device determines the dominant hue of the image to be processed according to the color characteristic. The color characteristic is a visual feature widely used in image retrieval. It may be an RGB (Red Green Blue) color characteristic, an HIS (Hue, Intensity, Saturation) color characteristic, an HSV (Hue, Saturation, Value) color characteristic, or the like. The dominant hue is the color with the largest proportion among the various hues in the image. After obtaining the color characteristic of the image to be processed, the computer device may use statistics such as a histogram to obtain the proportion of each color in the whole image and select the color with the largest proportion as the dominant hue of the image to be processed.
Step 106: select the corresponding beautification parameter according to the dominant hue.
Specifically, a correspondence between dominant hues and beautification parameters is established in advance. For example, under a warm tone the portrait tends to look yellowish, so the corresponding beautification parameter value leans toward a whitening effect; under a cool tone the portrait tends to look pale, so the corresponding beautification parameter value leans toward a rosy effect, and so on. Warm tones may be red, orange, or yellow. Cool tones may be green, blue, or black.
Step 108: perform beautification processing on the portrait region in the image to be processed according to the beautification parameter.
Specifically, the image to be processed is obtained and the portrait region in it is identified. After obtaining the beautification parameter, the computer device performs beautification processing on the portrait region in the image to be processed to obtain the beautified image.
The computer device may perform face recognition on the image to be processed, determine the face region in the image and the depth information corresponding to the face region, and obtain the corresponding portrait region according to the face region and its depth information. The face region is the region where the face of a person is located in the image to be processed, and the portrait region is the region where the whole person is located.
The computer device may extract image features of the image to be processed, analyze them with a preset face recognition model, and determine whether the image contains a face; if so, the corresponding face region is determined. The image features may include shape features, spatial features, edge features, and so on, where shape features refer to local shapes in the image to be processed, spatial features refer to the mutual spatial positions or relative directional relationships between the regions segmented from the image, and edge features refer to the boundary pixels between two regions in the image.
The face recognition model may be a decision model built in advance through machine learning. When building the face recognition model, a large number of sample images may be obtained, including face images and images without people. Each sample image is labeled according to whether it contains a face, and the labeled sample images are used as the input of the face recognition model, which is trained through machine learning to obtain the face recognition model.
In addition, the face region in the image to be processed may also be obtained by a face detection algorithm. The face detection algorithm may include a detection method based on geometric features, an eigenface detection method, a linear discriminant analysis method, a detection method based on a hidden Markov model, and so on.
When an image is captured by the camera, a depth map corresponding to the image may be obtained at the same time; the pixels in the depth map correspond to the pixels in the image. A pixel in the depth map represents the depth information of the corresponding pixel in the image, that is, the distance from the object corresponding to that pixel to the image acquisition device. For example, the depth information may be obtained with dual cameras, and the depth information corresponding to a pixel may be 1 meter, 2 meters, 3 meters, and so on. It is generally assumed that the portrait lies on the same vertical plane as the face, so the depth information from the portrait to the image acquisition device falls within the same range as the depth information from the face to the image acquisition device.
A region-growing method may also be used to identify the portrait region in the image to be processed: pixels with similar properties are grouped together to form regions, the image to be processed is segmented, and the portrait region is obtained.
With the image processing method in this embodiment, the dominant hue of the image to be processed is obtained, the corresponding beautification parameter is obtained according to the dominant hue, and beautification processing is performed on the portrait region in the image according to that parameter, so that the result of the beautification better matches the overall style of the whole image and the visual effect of the image is improved.
In one embodiment, obtaining the color characteristic of the image to be processed and determining the dominant hue of the image according to the color characteristic includes: performing color feature extraction on the image to be processed to obtain a color histogram, and determining the dominant hue of the image according to the color histogram.
The color histogram may be an RGB color histogram, an HSV color histogram, a YUV color histogram, or the like. A color histogram describes the proportion of each color in the image to be processed: the color space is divided into a number of small color intervals, and the number of pixels falling into each interval is counted, yielding the color histogram. In YUV, Y represents luminance, and U and V represent chrominance.
In one embodiment, the computer device may generate an HSV color histogram of the image to be processed. The image may first be converted from the RGB color space to the HSV color space, whose components are H (hue), S (saturation), and V (value). H is measured as an angle ranging from 0° to 360°, counted counterclockwise from red: red is 0°, green is 120°, and blue is 240°. S represents how close the color is to the pure spectral color; the larger the proportion of the spectral color, the closer the color is to it and the higher its saturation, and highly saturated colors are generally deep and vivid. V represents brightness: for a light-source color it is related to the luminance of the emitter, while for an object color it is related to the transmittance or reflectivity of the object. V usually ranges from 0% (black) to 100% (white).
The computer device may quantize the H, S, and V components separately and combine the quantized components into a one-dimensional feature vector. The value of the feature vector may range from 0 to 255, giving 256 values in total; that is, the HSV color space is divided into 256 color intervals, each interval corresponding to one value of the feature vector. For example, the H component may be quantized into 16 levels and the S and V components into 4 levels each, and the one-dimensional feature vector may be computed as shown in formula (1):
L = H * Q_S * Q_V + S * Q_V + V    (1)
where L is the one-dimensional feature vector synthesized from the quantized H, S, and V components, Q_S is the number of quantization levels of the S component, and Q_V is the number of quantization levels of the V component. According to the HSV value of each pixel in the face region, the computer device determines the quantization levels of the H, S, and V components, computes the feature vector of each pixel, counts the number of pixels whose feature vector falls on each of the 256 values, and generates the color histogram.
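A small sketch of this quantization and histogram step, using the 16/4/4 level split and formula (1); the NumPy-based helper below and its input conventions (H in degrees, S and V normalized to [0, 1]) are assumptions for illustration.

```python
import numpy as np

def dominant_hue_bin(h_deg: np.ndarray, s: np.ndarray, v: np.ndarray) -> int:
    """Quantize HSV, build the 256-bin histogram of L = H*Qs*Qv + S*Qv + V,
    and return the bin with the largest pixel count (the dominant hue bin).

    h_deg: hue in degrees [0, 360); s, v: saturation and value in [0, 1].
    The 16/4/4 quantization follows the example in the description.
    """
    QS, QV = 4, 4
    h_q = np.minimum((h_deg / 360.0 * 16).astype(int), 15)   # 16 hue levels
    s_q = np.minimum((s * QS).astype(int), QS - 1)           # 4 saturation levels
    v_q = np.minimum((v * QV).astype(int), QV - 1)           # 4 value levels
    feature = h_q * QS * QV + s_q * QV + v_q                 # formula (1), values 0..255
    hist = np.bincount(feature.ravel(), minlength=256)       # the color histogram
    return int(hist.argmax())

# Example: an image that is mostly a saturated red
h = np.zeros((100, 100))
s = np.full((100, 100), 0.9)
v = np.full((100, 100), 0.8)
print(dominant_hue_bin(h, s, v))   # prints the bin index of the dominant color
```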
Fig. 2 is a color histogram generated in one embodiment. As shown in Fig. 2, the horizontal axis of the color histogram represents the feature vector in the HSV color space, that is, the color intervals into which the HSV color space is divided, and the vertical axis represents the number of pixels. The color histogram contains a peak 202 with a peak value of 850, and the color interval corresponding to this peak may be the value 150.
In other embodiments, an RGB color histogram or the like may also be generated.
In one embodiment, selecting the corresponding beautification parameter according to the dominant hue includes: selecting a corresponding face skin-color adjustment parameter and a corresponding body skin-color adjustment parameter according to the dominant hue.
Specifically, a correspondence between dominant hues and face skin-color adjustment parameters, and a correspondence between dominant hues and body skin-color adjustment parameters, may be established in advance. After obtaining the dominant hue of the image to be processed, the computer device obtains the corresponding face skin-color adjustment parameter from the former correspondence and the corresponding body skin-color adjustment parameter from the latter.
Further, performing beautification processing on the portrait region in the image to be processed according to the beautification parameter includes: performing skin-color adjustment on the face region in the portrait region according to the face skin-color adjustment parameter, and performing skin-color adjustment on the body skin region in the portrait region other than the face region according to the body skin-color adjustment parameter.
In this embodiment, the face skin-color adjustment parameter and the body skin-color adjustment parameter are obtained separately according to the dominant hue, and the face skin color and the body skin color are adjusted with their respective parameters, so that the skin colors of the face and the body in the portrait region match the environment the person is in, improving the visual effect of the image.
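As a rough illustration of applying separate parameters to the two regions, the sketch below shifts pixel values inside a face mask and a body-skin mask independently; the additive RGB-shift adjustment model, the masks, and the function name are assumptions, not the application's actual adjustment.

```python
import numpy as np

def adjust_skin(image: np.ndarray, face_mask: np.ndarray, body_mask: np.ndarray,
                face_shift: np.ndarray, body_shift: np.ndarray) -> np.ndarray:
    """Apply separate skin-color adjustments to the face and to the rest of the body.

    image:      HxWx3 float array in [0, 255].
    face_mask:  HxW bool mask of the face region.
    body_mask:  HxW bool mask of skin outside the face (torso, arms, ...).
    face_shift, body_shift: per-channel RGB offsets (illustrative adjustment model).
    """
    out = image.astype(np.float32).copy()
    out[face_mask] += face_shift                 # e.g. a whitening-leaning shift
    out[body_mask & ~face_mask] += body_shift    # body skin uses its own parameter
    return np.clip(out, 0, 255)

# Usage with tiny synthetic masks
img = np.full((4, 4, 3), 128.0)
face = np.zeros((4, 4), bool); face[1:3, 1:3] = True
body = np.zeros((4, 4), bool); body[3, :] = True
print(adjust_skin(img, face, body, np.array([10., 5., 0.]), np.array([0., 5., 5.]))[1, 1])
```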
In one embodiment, the image processing method further includes: obtaining multiple frames of images shot continuously, fusing the multiple frames to obtain the image to be processed, and identifying the portrait region in the image to be processed.
Specifically, the computer device shoots multiple consecutive frames of the same scene with the camera and fuses them into one clear image, which is used as the image to be processed. For example, each frame may be divided into nine regions; for each region, the frames are compared and the clearest version of that region is selected, and the clearest regions are then combined into the final image.
In one embodiment, obtaining the multiple frames shot continuously and fusing them to obtain the image to be processed includes: obtaining the multiple frames shot continuously and obtaining the eye state in the frames; if there is one open-eye frame among the frames, using that frame as the image to be processed; and if there are multiple open-eye frames, fusing the open-eye frames to obtain the image to be processed.
Specifically, the eye state may include a closed-eye state and an open-eye state. It may be judged whether the distance between the upper and lower eyelids is smaller than a preset threshold: if so, the eye is closed; if not, the eye is open. Alternatively, it may be judged whether a pupil region or a white-of-the-eye region is detected: if so, the eye is open; otherwise, it is closed. An open-eye image is an image in which the eyes of the person are in the open state.
If one open-eye frame is detected among the multiple frames, that frame is used directly as the image to be processed. If multiple open-eye frames are detected, they may be fused to obtain the image to be processed. For example, each open-eye frame may be divided into nine regions; for each region, the frames are compared and the clearest version is selected, and the clearest regions are then combined into the final image.
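The sketch below illustrates this selection-and-fusion logic under simple assumptions: the eye state is judged from a precomputed eyelid gap, sharpness is approximated by gradient variance, and each frame is fused over a 3x3 grid of regions. All helper names, thresholds, and the sharpness metric are hypothetical.

```python
import numpy as np

def eyes_open(eyelid_gap_px: float, threshold_px: float = 4.0) -> bool:
    """Assumed test: the eye counts as open if the upper/lower eyelid gap exceeds a threshold."""
    return eyelid_gap_px >= threshold_px

def sharpness(tile: np.ndarray) -> float:
    """Simple sharpness proxy: variance of horizontal gradients (illustrative only)."""
    return float(np.var(np.diff(tile.astype(np.float32), axis=1)))

def fuse_open_eye_frames(frames, eyelid_gaps, grid: int = 3) -> np.ndarray:
    """Keep the open-eye frames; if several remain, fuse them region by region,
    taking the sharpest version of each of the grid x grid regions."""
    open_frames = [f for f, g in zip(frames, eyelid_gaps) if eyes_open(g)]
    if not open_frames:            # no open-eye frame: fall back to all frames
        open_frames = list(frames)
    if len(open_frames) == 1:
        return open_frames[0]
    h, w = open_frames[0].shape[:2]
    out = open_frames[0].copy()
    for i in range(grid):
        for j in range(grid):
            ys = slice(i * h // grid, (i + 1) * h // grid)
            xs = slice(j * w // grid, (j + 1) * w // grid)
            best = max(open_frames, key=lambda f: sharpness(f[ys, xs]))
            out[ys, xs] = best[ys, xs]
    return out

# Usage with random frames: frames 0 and 2 count as open-eye (gap >= 4 px)
rng = np.random.default_rng(0)
frames = [rng.random((6, 6)) for _ in range(3)]
print(fuse_open_eye_frames(frames, eyelid_gaps=[5.0, 2.0, 6.0]).shape)
```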
In one embodiment, selecting the corresponding beautification parameter according to the dominant hue includes: obtaining at least one of the lip color and the pupil color of the face in the portrait region, and selecting, according to the dominant hue, a lip-color adjustment parameter corresponding to the lip color and/or a pupil-color adjustment parameter corresponding to the pupil color.
Specifically, the computer device may obtain at least one of the lip color and the pupil color of the face in the portrait region of the image to be processed. A correspondence between dominant hues and lip colors, and a correspondence between dominant hues and pupil colors, may be established in advance. After obtaining the lip color of the face, the computer device obtains the corresponding lip-color adjustment parameter from the correspondence between dominant hues and lip colors; after obtaining the pupil color of the face, it obtains the corresponding pupil-color adjustment parameter from the correspondence between dominant hues and pupil colors.
By adjusting the lip color and the pupil color, the visual effect of the image can be further improved.
In one embodiment, the image processing method further includes: obtaining the depth-of-field information of the portrait region.
Specifically, the depth information may be obtained with dual cameras.
Fig. 3 is a schematic diagram of obtaining depth information in one embodiment. As shown in Fig. 3, the distance Tc between the first camera 302 and the second camera 304 is known. The first camera 302 and the second camera 304 each capture an image of the object 306, from which a first included angle A1 and a second included angle A2 can be obtained. The perpendicular from the object 306 to the horizontal line between the first camera 302 and the second camera 304 meets it at the intersection point 308. Assume the distance from the first camera 302 to the intersection point 308 is Tx; then the distance from the intersection point 308 to the second camera 304 is Tc - Tx, and the depth information of the object 306 is its vertical distance Ts to the intersection point 308. From the right triangle formed by the first camera 302, the object 306, and the intersection point 308, the following formula can be obtained:
tan(A1) = Ts / Tx
Similarly, from the triangle formed by the second camera 304, the object 306, and the intersection point 308:
tan(A2) = Ts / (Tc - Tx)
Combining the two formulas, the depth information of the object 306 is:
Ts = Tc * tan(A1) * tan(A2) / (tan(A1) + tan(A2))
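A short numeric sketch of this triangulation, reading A1 and A2 as the angles at the two cameras between the baseline and the lines of sight (the standard reading of the geometry above); the units and example values are illustrative.

```python
import math

def depth_from_dual_camera(tc: float, a1_deg: float, a2_deg: float) -> float:
    """Depth Ts of the object computed from the two viewing angles.

    tc:      baseline distance between the two cameras (same unit as the result).
    a1_deg:  angle at the first camera between the baseline and the line of sight.
    a2_deg:  angle at the second camera between the baseline and the line of sight.

    From tan(A1) = Ts/Tx and tan(A2) = Ts/(Tc - Tx):
        Ts = Tc * tan(A1) * tan(A2) / (tan(A1) + tan(A2))
    """
    t1 = math.tan(math.radians(a1_deg))
    t2 = math.tan(math.radians(a2_deg))
    return tc * t1 * t2 / (t1 + t2)

# Example: cameras 0.10 m apart, both lines of sight at 80 degrees to the baseline
print(round(depth_from_dual_camera(0.10, 80.0, 80.0), 3))   # ~0.284 m
```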
Further, selecting the corresponding beautification parameter according to the dominant hue includes: selecting the corresponding beautification parameter according to the depth-of-field information and the dominant hue.
Specifically, a correspondence between depth-of-field information and beautification degree coefficients may be established in advance, where the beautification degree coefficient indicates the degree of beautification. For example, when the depth of field is 1 to 3 meters the beautification degree coefficient is 100%, and when it is 3 to 6 meters the coefficient is 90%, and so on. The computer device may first select a first beautification parameter according to the dominant hue, then obtain the corresponding beautification degree coefficient according to the depth-of-field information, and multiply the first beautification parameter by the coefficient to obtain a second beautification parameter. Beautification processing is performed on the portrait region in the image to be processed according to the second beautification parameter.
Fig. 4 is a flowchart of an image processing method in another embodiment. As shown in Fig. 4, the image processing method includes:
Step 402: obtain the image to be processed and identify the portrait region in it.
Step 404: perform color feature extraction on the image to be processed to obtain a color histogram, and determine the dominant hue of the image according to the color histogram.
Step 406: select a corresponding first beautification parameter according to the dominant hue.
Step 408: obtain the depth-of-field information of the portrait region and obtain the corresponding beautification degree coefficient according to the depth-of-field information.
Step 410: obtain a second beautification parameter from the first beautification parameter and the beautification degree coefficient.
Step 412: perform beautification processing on the portrait region in the image to be processed according to the second beautification parameter.
With the image processing method in this embodiment, color feature extraction is performed on the image to be processed to obtain a color histogram, the dominant hue is determined from the histogram, the corresponding first beautification parameter is selected according to the correspondence between dominant hues and beautification parameters, the corresponding beautification degree coefficient is obtained from the depth-of-field information of the portrait region, the second beautification parameter is obtained by combining the first parameter with the coefficient, and the portrait region is beautified with the second parameter, so that the image as a whole is better optimized and the overall visual effect is improved.
Fig. 5 is a structural block diagram of an image processing apparatus in one embodiment. As shown in Fig. 5, an image processing apparatus includes a feature extraction module 502, a hue determination module 504, a parameter selection module 506, and a beautification module 508, where:
The feature extraction module 502 is configured to obtain the color characteristic of the image to be processed.
The hue determination module 504 is configured to determine the dominant hue of the image to be processed according to the color characteristic.
The parameter selection module 506 is configured to select the corresponding beautification parameter according to the dominant hue.
The beautification module 508 is configured to perform beautification processing on the portrait region in the image to be processed according to the beautification parameter.
With the image processing apparatus in this embodiment, the dominant hue of the image to be processed is obtained, the corresponding beautification parameter is obtained according to the dominant hue, and beautification processing is performed on the portrait region according to that parameter, so that the result of the beautification better matches the overall style of the whole image and the visual effect of the image is improved.
In one embodiment, the feature extraction module 502 is further configured to perform color feature extraction on the image to be processed to obtain a color histogram, and the hue determination module 504 determines the dominant hue of the image according to the color histogram.
In one embodiment, the parameter selection module 506 is further configured to select the corresponding face skin-color adjustment parameter and body skin-color adjustment parameter according to the dominant hue; the beautification module 508 is further configured to perform skin-color adjustment on the face region in the portrait region according to the face skin-color adjustment parameter, and to perform skin-color adjustment on the body skin region in the portrait region according to the body skin-color adjustment parameter.
In one embodiment, the image processing apparatus further includes a portrait recognition module. The portrait recognition module is configured to obtain the image to be processed and identify the portrait region in it.
In one embodiment, the portrait recognition module is further configured to obtain multiple frames shot continuously, fuse the frames to obtain the image to be processed, and identify the portrait region in the image.
In one embodiment, the portrait recognition module is further configured to obtain the multiple frames shot continuously and the eye state in the frames; if there is one open-eye frame among the frames, use that frame as the image to be processed; and if there are multiple open-eye frames, fuse the open-eye frames to obtain the image to be processed.
In one embodiment, the parameter selection module 506 is further configured to obtain at least one of the lip color and the pupil color of the face in the portrait region, and to select, according to the dominant hue, a lip-color adjustment parameter corresponding to the lip color and a pupil-color adjustment parameter corresponding to the pupil color.
In one embodiment, the image processing apparatus further includes a depth-of-field acquisition module configured to obtain the depth-of-field information of the portrait region. The beautification module 508 is further configured to select the corresponding beautification parameter according to the depth-of-field information and the dominant hue.
In one embodiment, the parameter selection module 506 is further configured to obtain the corresponding beautification degree coefficient according to the depth-of-field information, select the corresponding first beautification parameter according to the dominant hue, and obtain the second beautification parameter from the first beautification parameter and the beautification degree coefficient. The beautification module 508 is further configured to perform beautification processing on the portrait region in the image to be processed according to the second beautification parameter.
With the image processing apparatus in this embodiment, color feature extraction is performed on the image to be processed to obtain a color histogram, the dominant hue is determined from the histogram, the corresponding first beautification parameter is selected according to the correspondence between dominant hues and beautification parameters, the corresponding beautification degree coefficient is obtained from the depth-of-field information of the portrait region, the second beautification parameter is obtained by combining the first parameter with the coefficient, and the portrait region is beautified with the second parameter, so that the image as a whole is better optimized and the overall visual effect is improved.
The division of the modules in the above image processing apparatus is only for illustration. In other embodiments, the image processing apparatus may be divided into different modules as required to complete all or part of its functions.
An embodiment of the present application provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the image processing method described above are implemented.
Fig. 6 is a schematic diagram of the internal structure of a computer device in one embodiment. As shown in Fig. 6, the computer device includes a processor, a memory, and a network interface connected through a system bus. The processor provides computing and control capability and supports the operation of the whole computer device. The memory is used to store data, programs, and so on; at least one computer program is stored on the memory and can be executed by the processor to implement the wireless network communication method applicable to the computer device provided in the embodiments of the present application. The memory may include a non-volatile storage medium such as a magnetic disk, an optical disk, or a read-only memory (ROM), or a random-access memory (RAM). For example, in one embodiment the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program, which can be executed by the processor to implement a wireless network communication method provided in the following embodiments. The internal memory provides a cached running environment for the operating system and the computer program in the non-volatile storage medium. The network interface may be an Ethernet card, a wireless network card, or the like, and is used to communicate with external computer devices. The computer device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, etc.
A computer program product containing instructions, when run on a computer, causes the computer to perform the image processing method described above.
An embodiment of the present application also provides a computer device. The computer device includes an image processing circuit, which may be implemented with hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Fig. 7 is a schematic diagram of an image processing circuit in one embodiment. As shown in Fig. 7, for ease of description, only the aspects of the image processing technology related to the embodiments of the present application are shown.
As shown in Fig. 7, the image processing circuit includes an ISP processor 740 and control logic 750. Image data captured by the imaging device 710 is first processed by the ISP processor 740, which analyzes the image data to capture image statistics that can be used to determine one or more control parameters of the imaging device 710. The imaging device 710 may include a camera with one or more lenses 712 and an image sensor 714. The image sensor 714 may include a color filter array (such as a Bayer filter); it can obtain the light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that can be processed by the ISP processor 740. A sensor 720 (such as a gyroscope) may provide image processing parameters (such as anti-shake parameters) to the ISP processor 740 based on the interface type of the sensor 720. The sensor 720 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of these interfaces.
In addition, the image sensor 714 may also send the raw image data to the sensor 720; the sensor 720 may then provide the raw image data to the ISP processor 740 based on the sensor 720 interface type, or store the raw image data in the image memory 730.
The ISP processor 740 processes the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits; the ISP processor 740 may perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be performed with the same or different bit-depth precision.
The ISP processor 740 may also receive image data from the image memory 730. For example, the sensor 720 interface sends the raw image data to the image memory 730, and the raw image data in the image memory 730 is then provided to the ISP processor 740 for processing. The image memory 730 may be part of a memory device, a storage device, or a separate dedicated memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
When receiving raw image data from the image sensor 714 interface, from the sensor 720 interface, or from the image memory 730, the ISP processor 740 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 730 for further processing before being displayed. The ISP processor 740 may also receive processed data from the image memory 730 and perform image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to the display 780 for viewing by the user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 740 may also be sent to the image memory 730, and the display 780 may read image data from the image memory 730. In one embodiment, the image memory 730 may be configured to implement one or more frame buffers. In addition, the output of the ISP processor 740 may be sent to the encoder/decoder 770 to encode/decode the image data. The encoded image data may be saved and decompressed before being displayed on the display 780.
The steps of processing the image data by the ISP processor 740 include: performing VFE (Video Front End) processing and CPP (Camera Post Processing) on the image data. The VFE processing may include correcting the contrast or brightness of the image data, modifying digitally recorded lighting-condition data, performing compensation processing (such as white balance, automatic gain control, gamma correction, etc.) on the image data, filtering the image data, and so on. The CPP processing may include scaling the image and providing a preview frame and a record frame to each path, where the CPP may use different codecs to process the preview frame and the record frame. The image data processed by the ISP processor 740 may be sent to the beautification module 760 to perform beautification processing on the image before it is displayed. The beautification processing performed by the beautification module 760 on the image data may include whitening, freckle removal, skin smoothing, face slimming, acne removal, eye enlargement, and so on. The beautification module 760 may be a CPU (Central Processing Unit), a GPU, a coprocessor, or the like in the mobile terminal. The data processed by the beautification module 760 may be sent to the encoder/decoder 770 to encode/decode the image data. The encoded image data may be saved and decompressed before being displayed on the display 780. The beautification module 760 may also be located between the encoder/decoder 770 and the display 780, that is, the beautification module performs beautification processing on the image that has already been imaged. The encoder/decoder 770 may be a CPU, a GPU, a coprocessor, or the like in the mobile terminal.
The statistics determined by the ISP processor 740 may be sent to the control logic 750. For example, the statistics may include statistical information of the image sensor 714 such as automatic exposure, automatic white balance, automatic focus, flicker detection, black-level compensation, and lens 712 shading correction. The control logic 750 may include a processor and/or microcontroller that executes one or more routines (such as firmware), and the routines may determine the control parameters of the imaging device 710 and of the ISP processor 740 according to the received statistics. For example, the control parameters of the imaging device 710 may include sensor 720 control parameters (such as gain and integration time of the exposure control), camera flash control parameters, lens 712 control parameters (such as focus or zoom focal length), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (for example, during RGB processing), and lens 712 shading correction parameters.
The image processing method described above can be implemented with the image processing technology in Fig. 7.
Any reference to a memory, storage, database, or other medium used in this application may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random-access memory (RAM), which serves as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The embodiments described above express only several implementations of the present application, and their descriptions are specific and detailed, but they should not be construed as limiting the scope of the patent claims. It should be noted that a person of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, and these all fall within the protection scope of the application. Therefore, the protection scope of this patent application shall be subject to the appended claims.
Claims (10)
- 1. An image processing method, characterized by comprising: obtaining the color characteristic of an image to be processed; determining the dominant hue of the image to be processed according to the color characteristic; selecting a corresponding beautification parameter according to the dominant hue; and performing beautification processing on the portrait region in the image to be processed according to the beautification parameter.
- 2. The method according to claim 1, characterized in that obtaining the color characteristic of the image to be processed comprises: performing color feature extraction on the image to be processed to obtain a color histogram; and determining the dominant hue of the image to be processed according to the color characteristic comprises: determining the dominant hue of the image to be processed according to the color histogram.
- 3. The method according to claim 1 or 2, characterized in that selecting the corresponding beautification parameter according to the dominant hue comprises: selecting a corresponding face skin-color adjustment parameter and a corresponding body skin-color adjustment parameter according to the dominant hue; and performing beautification processing on the portrait region in the image to be processed according to the beautification parameter comprises: performing skin-color adjustment on the face region in the portrait region according to the face skin-color adjustment parameter, and performing skin-color adjustment on the body skin region in the portrait region other than the face region according to the body skin-color adjustment parameter.
- 4. The method according to claim 1 or 2, characterized in that the method further comprises: obtaining multiple frames of images shot continuously, fusing the multiple frames to obtain the image to be processed, and identifying the portrait region in the image to be processed.
- 5. The method according to claim 4, characterized in that obtaining the multiple frames shot continuously and fusing them to obtain the image to be processed comprises: obtaining the multiple frames shot continuously and obtaining the eye state in the multiple frames; if there is one open-eye frame among the multiple frames, using the open-eye frame as the image to be processed; and if there are multiple open-eye frames among the multiple frames, fusing the open-eye frames to obtain the image to be processed.
- 6. The method according to claim 1 or 2, characterized in that selecting the corresponding beautification parameter according to the dominant hue comprises: obtaining at least one of the lip color and the pupil color of the face in the portrait region; and selecting, according to the dominant hue, a lip-color adjustment parameter corresponding to the lip color and/or a pupil-color adjustment parameter corresponding to the pupil color.
- 7. The method according to claim 1 or 2, characterized in that the method further comprises: obtaining the depth-of-field information of the portrait region; and selecting the corresponding beautification parameter according to the dominant hue comprises: selecting the corresponding beautification parameter according to the depth-of-field information and the dominant hue.
- 8. An image processing apparatus, characterized by comprising: a feature extraction module, configured to obtain the color characteristic of an image to be processed; a hue determination module, configured to determine the dominant hue of the image to be processed according to the color characteristic; a parameter selection module, configured to select a corresponding beautification parameter according to the dominant hue; and a beautification module, configured to perform beautification processing on the portrait region in the image to be processed according to the beautification parameter.
- 9. A computer-readable storage medium on which a computer program is stored, characterized in that, when the computer program is executed by a processor, the steps of the image processing method according to any one of claims 1 to 7 are implemented.
- 10. A computer device, comprising a memory and a processor, the memory storing a computer program, characterized in that, when the computer program is executed by the processor, the processor is caused to perform the steps of the image processing method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711040347.9A CN107845076A (en) | 2017-10-31 | 2017-10-31 | Image processing method, device, computer-readable recording medium and computer equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711040347.9A CN107845076A (en) | 2017-10-31 | 2017-10-31 | Image processing method, device, computer-readable recording medium and computer equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107845076A true CN107845076A (en) | 2018-03-27 |
Family
ID=61681084
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711040347.9A Pending CN107845076A (en) | 2017-10-31 | 2017-10-31 | Image processing method, device, computer-readable recording medium and computer equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107845076A (en) |
- 2017
- 2017-10-31 CN CN201711040347.9A patent/CN107845076A/en active Pending
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103605975A (en) * | 2013-11-28 | 2014-02-26 | 小米科技有限责任公司 | Image processing method and device and terminal device |
CN103929629A (en) * | 2014-04-24 | 2014-07-16 | 厦门美图网科技有限公司 | Image processing method based on image major colors |
CN106991654A (en) * | 2017-03-09 | 2017-07-28 | 广东欧珀移动通信有限公司 | Human body beautification method and apparatus and electronic installation based on depth |
CN107038680A (en) * | 2017-03-14 | 2017-08-11 | 武汉斗鱼网络科技有限公司 | The U.S. face method and system that adaptive optical shines |
CN107194869A (en) * | 2017-05-23 | 2017-09-22 | 腾讯科技(上海)有限公司 | A kind of image processing method and terminal, computer-readable storage medium, computer equipment |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109559274A (en) * | 2018-11-30 | 2019-04-02 | 深圳市脸萌科技有限公司 | Image processing method, device, electronic equipment and computer readable storage medium |
CN111369461A (en) * | 2020-03-03 | 2020-07-03 | 北京字节跳动网络技术有限公司 | Beauty parameter adjusting method and device and electronic equipment |
CN111831193A (en) * | 2020-07-27 | 2020-10-27 | 北京思特奇信息技术股份有限公司 | Automatic skin changing method, device, electronic equipment and storage medium |
WO2022156196A1 (en) * | 2021-01-25 | 2022-07-28 | 北京达佳互联信息技术有限公司 | Image processing method and image processing apparatus |
Similar Documents
Publication | Title | Publication Date |
---|---|---|
CN107730444A (en) | Image processing method, device, readable storage medium storing program for executing and computer equipment | |
CN107808136A (en) | Image processing method, device, readable storage medium storing program for executing and computer equipment | |
CN108009999A (en) | Image processing method, device, computer-readable recording medium and electronic equipment | |
CN107945107A (en) | Image processing method, device, computer-readable recording medium and electronic equipment | |
CN107730445A (en) | Image processing method, device, storage medium and electronic equipment | |
CN107798652A (en) | Image processing method, device, readable storage medium storing program for executing and electronic equipment | |
CN107886484A (en) | U.S. face method, apparatus, computer-readable recording medium and electronic equipment | |
CN107680128A (en) | Image processing method, device, electronic equipment and computer-readable recording medium | |
CN107742274A (en) | Image processing method, device, computer-readable recording medium and electronic equipment | |
CN107862663A (en) | Image processing method, device, readable storage medium storing program for executing and computer equipment | |
CN108805103A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN107862657A (en) | Image processing method, device, computer equipment and computer-readable recording medium | |
CN110149482A (en) | Focusing method, device, electronic equipment and computer readable storage medium | |
CN107993209A (en) | Image processing method, device, computer-readable recording medium and electronic equipment | |
CN107800965B (en) | Image processing method, device, computer readable storage medium and computer equipment | |
CN107766831A (en) | Image processing method, device, mobile terminal and computer-readable recording medium | |
CN107818305A (en) | Image processing method, device, electronic equipment and computer-readable recording medium | |
CN107911625A (en) | Light measuring method, device, readable storage medium storing program for executing and computer equipment | |
CN108537155A (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
CN107800966A (en) | Method, apparatus, computer-readable recording medium and the electronic equipment of image procossing | |
CN107945135A (en) | Image processing method, device, storage medium and electronic equipment | |
CN108022207A (en) | Image processing method, device, storage medium and electronic equipment | |
CN107862274A (en) | U.S. face method, apparatus, electronic equipment and computer-readable recording medium | |
CN107862658A (en) | Image processing method, device, computer-readable recording medium and electronic equipment | |
CN107730446A (en) | Image processing method, device, computer equipment and computer-readable recording medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20180327 |