CN107909542A - Image processing method, device, computer-readable recording medium and electronic equipment - Google Patents
- Publication number
- CN107909542A CN107909542A CN201711244125.9A CN201711244125A CN107909542A CN 107909542 A CN107909542 A CN 107909542A CN 201711244125 A CN201711244125 A CN 201711244125A CN 107909542 A CN107909542 A CN 107909542A
- Authority
- CN
- China
- Prior art keywords
- face
- parameter
- clarity
- image
- model
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T3/04
Landscapes
- Image Processing (AREA)
Abstract
This application relates to an image processing method and apparatus, a computer-readable storage medium, and an electronic device. The method includes: obtaining an image to be processed and its corresponding clarity; obtaining a corresponding beautification parameter model according to the clarity, the beautification parameter model being a model for calculating beautification parameters; obtaining corresponding target beautification parameters according to the clarity and the beautification parameter model; and performing beautification processing on the image to be processed according to the target beautification parameters. The above image processing method and apparatus, computer-readable storage medium, and electronic device improve the accuracy of image processing.
Description
Technical field
This application relates to the field of image processing technology, and in particular to an image processing method and apparatus, a computer-readable storage medium, and an electronic device.
Background technology
Whether at work or in daily life, taking photos is an essential skill. To take a satisfying photo, it is necessary not only to tune the shooting parameters during capture, but also to retouch the photo itself after shooting is complete. Beautification processing refers to a class of methods for retouching photos; after beautification processing, the people in a photo look more consistent with human aesthetics.
Summary of the invention
Embodiments of the present application provide an image processing method and apparatus, a computer-readable storage medium, and an electronic device, which can improve the accuracy of image processing.
An image processing method, the method including:
obtaining an image to be processed and its corresponding clarity;
obtaining a corresponding beautification parameter model according to the clarity, the beautification parameter model being a model for calculating beautification parameters;
obtaining corresponding target beautification parameters according to the clarity and the beautification parameter model; and
performing beautification processing on the image to be processed according to the target beautification parameters.
An image processing apparatus, the apparatus including:
an image acquisition module, configured to obtain an image to be processed and its corresponding clarity;
a model acquisition module, configured to obtain a corresponding beautification parameter model according to the clarity, the beautification parameter model being a model for calculating beautification parameters;
a parameter acquisition module, configured to obtain corresponding target beautification parameters according to the clarity and the beautification parameter model; and
a beautification processing module, configured to perform beautification processing on the image to be processed according to the target beautification parameters.
A computer-readable storage medium on which a computer program is stored, the computer program, when executed by a processor, implementing the following steps:
obtaining an image to be processed and its corresponding clarity;
obtaining a corresponding beautification parameter model according to the clarity, the beautification parameter model being a model for calculating beautification parameters;
obtaining corresponding target beautification parameters according to the clarity and the beautification parameter model; and
performing beautification processing on the image to be processed according to the target beautification parameters.
An electronic device, including a memory and a processor, the memory storing computer-readable instructions which, when executed by the processor, cause the processor to perform the following steps:
obtaining an image to be processed and its corresponding clarity;
obtaining a corresponding beautification parameter model according to the clarity, the beautification parameter model being a model for calculating beautification parameters;
obtaining corresponding target beautification parameters according to the clarity and the beautification parameter model; and
performing beautification processing on the image to be processed according to the target beautification parameters.
With the above image processing method and apparatus, computer-readable storage medium, and electronic device, a corresponding beautification parameter model is obtained according to the clarity of the image to be processed, target beautification parameters are obtained according to the clarity and the beautification parameter model, and beautification processing is performed on the image to be processed according to the obtained target beautification parameters. Images of different clarity can thus receive different beautification processing, which improves the accuracy of image processing and optimizes the beautification result.
Brief description of the drawings
To describe the technical solutions in the embodiments of the present application or in the prior art more clearly, the accompanying drawings required in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the application; for those of ordinary skill in the art, other drawings can be derived from these drawings without creative effort.
Fig. 1 is a diagram of the application environment of the image processing method in one embodiment;
Fig. 2 is a flowchart of the image processing method in one embodiment;
Fig. 3 is a flowchart of the image processing method in another embodiment;
Fig. 4 is a color histogram generated in one embodiment;
Fig. 5 is a curve of the beautification coefficient in one embodiment;
Fig. 6 is a flowchart of the image processing method in yet another embodiment;
Fig. 7 is a curve of the beautification coefficient in another embodiment;
Fig. 8 is a structural diagram of the image processing apparatus in one embodiment;
Fig. 9 is a schematic diagram of the image processing circuit in one embodiment.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present application clearer, the application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only intended to explain the application and are not intended to limit it.
It can be understood that the terms "first", "second", and the like used in this application may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, without departing from the scope of the present application, a first acquisition module could be termed a second acquisition module, and similarly a second acquisition module could be termed a first acquisition module. The first acquisition module and the second acquisition module are both acquisition modules, but they are not the same acquisition module.
Fig. 1 is a diagram of the application environment of the image processing method in one embodiment. As shown in Fig. 1, the application environment includes a user terminal 102 and a server 104. The user terminal 102 may be used to capture the image to be processed and send it to the server 104. After receiving the image to be processed, the server 104 obtains the image and its corresponding clarity; obtains a corresponding beautification parameter model according to the clarity, the beautification parameter model being a model for calculating beautification parameters; obtains corresponding target beautification parameters according to the clarity and the beautification parameter model; and performs beautification processing on the image to be processed according to the target beautification parameters. Finally, the server 104 returns the beautified image to the user terminal 102. It can be understood that the server may instead send the obtained target beautification parameters to the user terminal 102, and the user terminal 102 performs the beautification processing on the image according to the target beautification parameters. Here, the user terminal 102 sits at the outermost edge of the computer network and is an electronic device mainly used to input user information and output processing results; it may be, for example, a personal computer, a mobile terminal, a personal digital assistant, or a wearable electronic device. The server 104 is a device used to respond to service requests and provide computing services, and may be, for example, one or more computers. It can be understood that in other embodiments provided by the application, the application environment may include only the user terminal 102; that is, the user terminal 102 captures the image to be processed and performs the beautification processing on it.
Fig. 2 is a flowchart of the image processing method in one embodiment. As shown in Fig. 2, the image processing method includes steps 202 to 208.
Step 202: obtain an image to be processed and its corresponding clarity.
In one embodiment, the image to be processed is an image that needs beautification processing. It may be captured by a mobile terminal. A camera for shooting is installed on the mobile terminal; the user can initiate a shooting instruction through the mobile terminal, and after detecting the shooting instruction, the mobile terminal captures an image through the camera. The mobile terminal stores the captured images to form an image collection. It can be understood that the image to be processed may also be obtained in other ways, which are not limited here. For example, it may be downloaded from a web page, or imported from an external storage device. Obtaining the image to be processed may specifically include: receiving a beautification instruction input by the user, and obtaining the image to be processed according to the beautification instruction, the beautification instruction containing an image identifier. The image identifier is a unique identifier distinguishing different images to be processed, and the image to be processed is obtained according to it. For example, the image identifier may be one or more of an image name, an image code, an image storage address, and the like. Specifically, after obtaining the image to be processed, the mobile terminal may perform the beautification processing locally, or may send the image to a server for beautification processing. Clarity refers to the legibility of textures and boundaries in an image: the higher the clarity, the more clearly the detailed textures in the image can be seen.
Step 204: obtain a corresponding beautification parameter model according to the clarity, the beautification parameter model being a model for calculating beautification parameters.
Specifically, the beautification parameter model is a model for calculating beautification parameters. In general, it can be expressed as a function model that represents the functional relationship between an input variable and an output result. It can be understood that this relationship may be linear or nonlinear. Feeding an input variable into the function model yields the corresponding output result. For example, the function model may be expressed as Y = X + 1, where Y is the output result and X is the input variable; when the input variable X is 1, the corresponding output result Y is 2. A correspondence between clarity and beautification parameter models is established in advance, so that the corresponding beautification parameter model can be obtained from the clarity. When the clarity differs, the obtained beautification parameter model may differ, so that for different clarities, the target beautification parameters are computed with different models.
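The function model just illustrated can be written directly as code. This only reproduces the text's toy example Y = X + 1 (clarity in as X, beautification parameter out as Y); real models may be linear or nonlinear.

```python
def beauty_parameter_model(x):
    """Illustrative function model Y = X + 1 from the text."""
    return x + 1

assert beauty_parameter_model(1) == 2  # the text's example: X = 1 gives Y = 2
```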
Step 206: obtain corresponding target beautification parameters according to the clarity and the beautification parameter model.
In general, clarity indicates how legible the detailed information in an image is; if the clarity of an image is too low, much of that detail has already been lost. Beautification processing often affects exactly this detail. For example, skin smoothing makes the skin texture in the image smoother, but it also blurs the texture of the hairline, or makes the facial features less distinct. If the image itself is not very sharp, beautification processing is more likely to discard further detail and distort the image severely, making it lose its appeal. Therefore, to beautify the image more accurately, a correspondence between image clarity and target beautification parameters can be established in advance, and the corresponding target beautification parameters obtained according to the clarity.
In the embodiments provided by this application, the target beautification parameters are the parameters used to perform beautification processing on the image to be processed. Beautification processing is a method of embellishing an image: for example, it may whiten or smooth the skin of a portrait, or apply processing such as virtual makeup, face slimming, or body slimming. The beautification parameter model can represent the functional relationship between clarity and beautification parameters. After the clarity and the beautification parameter model are obtained, the clarity is used as the input variable of the model, the model is evaluated, and the resulting output is the target beautification parameters.
Step 208: perform beautification processing on the image to be processed according to the target beautification parameters.
After the target beautification parameters are obtained, beautification processing is performed on the image to be processed accordingly. It can be understood that the processing may be applied to the whole image, or only to a certain region of it. For example, whitening may raise the brightness of the whole image or only of the skin region, while face slimming is applied only to the face region. It can be understood that the image to be processed consists of a number of pixels, and each pixel may consist of multiple color channels, each channel representing one color component. For example, an image may consist of the three RGB (Red, Green, Blue) channels, of the three HSV (Hue, Saturation, Value) channels, or of the three CMY (Cyan, Magenta, Yellow) channels. When beautifying the image, each color channel can therefore be processed separately, and the degree of processing may differ per channel. Specifically: obtain the clarity of each color channel of the image to be processed; obtain the corresponding beautification parameter model according to each channel's clarity; obtain each channel's target beautification parameters from its clarity and model; and perform beautification processing on each color channel according to its target beautification parameters.
In the image processing method provided by the above embodiment, a beautification parameter model is obtained according to the clarity of the image to be processed, target beautification parameters are obtained according to the clarity and the model, and the image is beautified according to the obtained target beautification parameters. Images of different clarity can thus receive different beautification processing, which improves the accuracy of image processing and optimizes the beautification result.
Fig. 3 is a flowchart of the image processing method in another embodiment. As shown in Fig. 3, the image processing method includes steps 302 to 310.
Step 302: obtain the target area in the image to be processed, and calculate the clarity of the target area.
If the beautification processing is performed on a server, each mobile terminal can send its image to the server, and the server beautifies the image after receiving it. When the mobile terminal sends the image, it also sends its corresponding terminal identifier; after processing is complete, the server looks up the corresponding mobile terminal by the terminal identifier and sends the processed image back to it. The terminal identifier is the unique identifier of the user terminal; for example, it may be at least one of an IP (Internet Protocol) address, a MAC (Media Access Control) address, and the like.
The target area is usually the region the user cares about, and may specifically be the region of the image to be processed that needs beautification. For example, the target area may be the face region, portrait region, skin region, lip region, hair region, and so on, without limitation here. It can be understood that the image to be processed consists of a number of pixels, and the target area is likewise formed from pixels of the image. The area of the target region refers to how much space it occupies; it can be expressed as the total number of pixels contained in the target area, or as the area ratio of the target area to the whole image. A target area usually appears as an independent connected region in the image, a connected region being a closed region. For example, each face in the image can correspond to one independent connected region; when there are multiple faces in the image, there are multiple connected regions, one per face.
Obtaining the target area may include: detecting the face region in the image to be processed, and obtaining the corresponding target area according to the face region. Specifically, the target area may be the face region, portrait region, skin region, lip region, hair region, facial-feature region, and so on, without limitation in this embodiment. The face region is the region of the image where a face lies. It can be obtained with a face detection algorithm, which may include geometric-feature-based detection, eigenface methods, linear discriminant analysis, hidden-Markov-model-based detection, and the like, without limitation here. The portrait region is the region containing the whole person in the image. Obtaining the portrait region may specifically include: obtaining the depth information of the image to be processed; detecting the face region in the image; and obtaining the portrait region from the face region and the depth information. It can be understood that when an image is captured by an image acquisition device, a corresponding depth map can be obtained at the same time, with pixels of the depth map corresponding one-to-one to pixels of the image. Each depth-map pixel records the depth information of the corresponding image pixel, i.e., the distance from the corresponding object to the image acquisition device. It is generally assumed that the portrait and the face lie in the same vertical plane, so the depths of the portrait and of the face from the acquisition device fall in the same range. Therefore, after the face region is obtained, the depth information of the face region can be read from the depth map, the depth information of the portrait region derived from it, and the portrait region in the image obtained from the portrait region's depth information.
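The depth-based segmentation just described can be sketched as below. This is a hedged illustration: the use of the median and the tolerance value are assumptions introduced here, not fixed by the patent.

```python
import numpy as np

def portrait_mask(depth_map, face_box, tolerance=0.3):
    """depth_map: H x W array of distances; face_box: (top, bottom, left, right)."""
    top, bottom, left, right = face_box
    # Representative depth of the detected face (median is an assumption).
    face_depth = float(np.median(depth_map[top:bottom, left:right]))
    low, high = face_depth * (1 - tolerance), face_depth * (1 + tolerance)
    # Pixels at roughly the face's distance are taken as the portrait region.
    return (depth_map >= low) & (depth_map <= high)
```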
Specifically, the skin region is the region where skin lies. It can be divided into the facial skin region and the portrait skin region: the facial skin region is the region where the facial skin lies, while the portrait skin region covers the regions of both facial and body skin. Obtaining the facial skin region from the face region may specifically include: generating a color histogram from the color information of the face region; obtaining the peak of the color histogram and its corresponding color interval; dividing out a skin-color interval from that color interval; and taking the part of the face region falling in the skin-color interval as the facial skin region. The part of the portrait region falling in the skin-color interval is then taken as the portrait skin region. The color histogram describes the proportions of different colors in the face region; color information refers to the parameters used to represent the colors of an image. For example, in the HSV color space, the color information may include the H (hue), S (saturation), and V (value) of the colors in the image. The color information of the face region is obtained and divided into many small color intervals, and the number of face-region pixels falling into each interval is counted, yielding the color histogram. The histogram may be an RGB color histogram, an HSV color histogram, a YUV color histogram, or the like, without limitation. In the HSV color space, the components are H (hue), S (saturation), and V (value). H is measured as an angle ranging from 0° to 360°, counted counterclockwise from red: red is 0°, green is 120°, and blue is 240°. S expresses how close a color is to a pure spectral color: the larger the spectral component's proportion, the closer the color is to the spectral color and the higher its saturation; highly saturated colors are generally deep and vivid. V expresses brightness: for a light-source color it relates to the luminance of the emitter, and for an object color it relates to the transmittance or reflectivity of the object; V usually ranges from 0% (black) to 100% (white). Specifically, generating the HSV color histogram may include: converting the face region from the RGB color space to the HSV color space; quantizing the three components H, S, and V separately and combining the quantized components into a one-dimensional feature vector; determining each face-region pixel's quantization level on H, S, and V from its value in the HSV color space; computing each pixel's feature vector from its quantization levels and counting the pixels distributed on each quantization level; and generating the color histogram from the counts.
The feature vector can take values from 0 to 255, 256 values in all, so the HSV color space can likewise be divided into 256 color intervals, each corresponding to one feature-vector value. For example, the H component can be quantized to 16 levels, and the S and V components to 4 levels each; the combined one-dimensional feature vector is then given by:
L = H*Qs*Qv + S*Qv + V;
where L is the one-dimensional feature vector combining the quantized H, S, and V components, Qs is the number of quantization levels of the S component, and Qv is the number of quantization levels of the V component. A crest is a maximum of the wave amplitude in the curve formed by the color histogram, and can be located via the first-order difference at each point of the histogram; the peak is the maximum value on the crest. After the peak of the color histogram is obtained, the corresponding quantized color interval is obtained, which is the feature-vector value in the HSV color space associated with the peak. Fig. 4 is a color histogram generated in one embodiment. As shown in Fig. 4, the vertical axis of the histogram represents the distribution of pixels, i.e., the number of pixels in each color interval, and the horizontal axis represents the feature vectors of the HSV color space, i.e., the color intervals into which the HSV color space is divided. The color histogram in Fig. 4 contains a crest 402; the peak corresponding to crest 402 is 850, and the color interval corresponding to that peak is 150. That is, 850 pixels of the image have the feature-vector value 150.
The skin-color interval of the face region is determined from the color interval corresponding to the peak of the color histogram: a value range for the skin-color interval can be preset, and the skin-color interval is then computed from the peak's color interval and the preset range. The part of the face region falling in the skin-color interval is taken as the facial skin region. Alternatively, the computer device can multiply the peak's color interval by the preset range: the preset range may include an upper limit and a lower limit, and multiplying the peak's color interval by each yields the skin-color interval. For example, the computer device may preset the value range of the skin-color interval as 80% to 120%; if the color interval at the histogram's peak is the value 150, the skin-color interval is computed as 120 to 180. The computer device can then obtain each face-region pixel's feature vector in the HSV color space and check whether it falls in the skin-color interval; if it does, the corresponding pixel can be classified as a pixel of the facial skin region. For example, with a computed skin-color interval of 120 to 180, the computer device can classify face-region pixels whose HSV feature vector lies between 120 and 180 as pixels of the facial skin region.
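The skin-color-interval rule above is small enough to sketch directly; the 150 → 120 to 180 figures reproduce the text's own example, and the function names are illustrative.

```python
def skin_interval(peak_interval, low_ratio=0.8, high_ratio=1.2):
    # Scale the peak's color interval by the preset 80%-120% range.
    return peak_interval * low_ratio, peak_interval * high_ratio

def is_skin_pixel(feature, interval):
    low, high = interval
    return low <= feature <= high

low, high = skin_interval(150)
assert abs(low - 120) < 1e-9 and abs(high - 180) < 1e-9  # the text's example
assert is_skin_pixel(150, (low, high)) and not is_skin_pixel(200, (low, high))
```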
Step 304: determine the parameter interval in which the clarity lies, and obtain the beautification parameter model corresponding to that interval.
The corresponding clarity in target area is obtained, and target U.S. face parameter is obtained according to the clarity of target area.Generally
For, target area is the region that user compares concern, more accurate according to the U.S. face parameter that the clarity of target area obtains.
If the clarity of target area than relatively low, easilys lead to the loss of detailed information after U.S. face processing, U.S. of image is reduced
Sense.Carrying out that the degree for correspondingly mitigating U.S. face processing can be contemplated when U.S. face processing, it is thin so preferably to retain some
Information is saved, the distortion of image will not be caused.Specifically, clarity is divided into different parameter sections, when clarity is in difference
Parameter section when, it is corresponding to carry out different degrees of U.S. face processing.For example, the value of clarity can be 0~1, value is bigger,
Represent that image is more clear.Clarity is carried out to be divided into different sections, each section corresponds to a U.S. face parameter model.It is false
If clarity is divided into 0~0.2,0.2~0.6,0.6~0.8 and 0.8~1 etc. four section, correspond to respectively model 1, model 2,
Model 3 and model 4.Then when the clarity of target area is 0.5, the U.S. face parameter model of acquisition is just model 2.
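The interval-to-model lookup of Step 304 can be sketched as below, using the four example intervals from the text; the model objects are stand-in strings.

```python
def select_model(clarity):
    """Map a clarity value in [0, 1] to the beautification parameter model
    for the interval it falls in (interval edges from the example above)."""
    if not 0.0 <= clarity <= 1.0:
        raise ValueError("clarity must lie in [0, 1]")
    for upper, model in [(0.2, "model 1"), (0.6, "model 2"),
                         (0.8, "model 3"), (1.0, "model 4")]:
        if clarity <= upper:
            return model
```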
Step 306, obtain the beautification base parameters.
Step 308, compute the beautification coefficient from the clarity and the beautification parameter model, and obtain the target beautification parameters from the base parameters and the coefficient.
Specifically, the image to be processed may contain multiple target regions; for example, if it contains multiple faces, the region occupied by each face is treated as an independent target region, so multiple faces correspond to multiple target regions. A beautification base parameter is a reference value for beautification processing; it may be a parameter value preset by the user, or it may correspond to the target region. When base parameters are obtained per target region, different target regions may share the same base parameters or have different ones. The beautification coefficient is computed from the clarity and the beautification parameter model, and the target beautification parameters are then obtained from the base parameters and the coefficient. The beautification coefficient is the weight used to derive the beautification parameters.
As an example, let the beautification base parameter be Param, the beautification coefficient be Factor, and the computed clarity of the target region be Clarity. The clarity axis is divided into three parameter regions: Clarity < stdClarity_min, stdClarity_min < Clarity < stdClarity_max, and Clarity > stdClarity_max. Consistent with the three-stage curve of Fig. 5, the coefficient is computed piecewise, with constant plateau values Factor_min and Factor_max:

Factor = Factor_min, when Clarity < stdClarity_min
Factor = Factor_min + (Factor_max − Factor_min) · (Clarity − stdClarity_min) / (stdClarity_max − stdClarity_min), when stdClarity_min ≤ Clarity ≤ stdClarity_max
Factor = Factor_max, when Clarity > stdClarity_max

The beautification coefficient is computed from the formula above, and the target beautification parameter adjustParam is then computed from the base parameter and the coefficient:

adjustParam = Param * Factor^T
Fig. 5 shows the curve of the beautification coefficient in one embodiment. As shown in Fig. 5, the coefficient grows in a stepped fashion over three stages. First stage: when Clarity < stdClarity_min, the coefficient is constant. Second stage: when stdClarity_min < Clarity < stdClarity_max, the coefficient increases linearly with clarity. Third stage: when Clarity > stdClarity_max, the coefficient remains constant.
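Under the three-stage shape of Fig. 5, the coefficient and the resulting target parameter can be sketched as below; the plateau values and interval edges are illustrative assumptions, not values from the patent.

```python
def beauty_coefficient(clarity, std_min=0.3, std_max=0.7, f_lo=0.5, f_hi=1.0):
    """Three-stage coefficient: constant, then linear in clarity, then constant."""
    if clarity < std_min:
        return f_lo
    if clarity > std_max:
        return f_hi
    t = (clarity - std_min) / (std_max - std_min)
    return f_lo + t * (f_hi - f_lo)

def target_param(base_param, clarity):
    """adjustParam = Param * Factor, for a single scalar parameter."""
    return base_param * beauty_coefficient(clarity)
```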
Step 310, perform beautification on the image to be processed according to the target beautification parameters.
Specifically, beautification is applied to the target regions of the image to be processed according to the target beautification parameters. After the target regions have been obtained, a region identifier may be assigned to each target region, and a relation established between the region identifier, the position coordinates and the target beautification parameters; the corresponding target region is then located from the position coordinates and beautified with its target beautification parameters. For example, suppose three face regions, face 1, face 2 and face 3, are detected in the image to be processed "pic.jpg", with face identifiers face1, face2 and face3; their target beautification parameters may be level-1 whitening, level-2 whitening and level-1 blemish removal respectively.
In the image processing method provided by the above embodiment, the target region in the image to be processed is first obtained, and the beautification parameter model is obtained according to the clarity of that target region. The target beautification parameters are then obtained from the clarity and the model, and the image is beautified according to those parameters. Different beautification can thus be applied according to the clarity of the target region, improving the accuracy of the image processing and optimizing the beautification.
Fig. 6 is a flowchart of the image processing method in another embodiment. As shown in Fig. 6, the method includes steps 602 to 612.
Step 602, obtain the image to be processed and its clarity.
The image to be processed may contain one or more target regions; for example, it may contain one face or several, the region occupied by each face being a target region. It is also possible that the image contains no target region. Target regions may be obtained from region markers, or detected directly in the image. A region marker is a mark used to indicate the extent of a target region in the image; for example, a target region may be marked with a red rectangle, the area inside the rectangle being taken as the target region. Each target region is extracted individually, and a correspondence is established with the image identifier and position coordinates of the image to be processed; if the image contains multiple target regions, each one separately establishes such a correspondence. The position coordinates express the position of the target region within the image; they may be, for example, the coordinates of the centre of the target region in the image, or of its upper-left corner. After a target region has been processed, the corresponding image is found via the image identifier, the exact position of the region in that image is found via the position coordinates, and the processed region is written back.
Step 604, obtain the beautification parameter model corresponding to the clarity; the beautification parameter model is a model used to compute beautification parameters.
In one embodiment, the algorithms for computing clarity may include spatial-domain gradient algorithms, frequency-domain analysis and others. Common spatial-domain gradient algorithms include the Brenner, Tenengrad and SMD algorithms. Frequency-domain analysis computes clarity from the high-frequency components of the spectrum: the stronger the high-frequency content, the sharper the image. Taking the Tenengrad algorithm as an example, the Sobel gradient operator is used to compute the gradients in the horizontal and vertical directions, and the Tenengrad-based image clarity is defined as:

D(f) = Σ_y Σ_x G(x, y)², for G(x, y) > T, with G(x, y) = √(Gx(x, y)² + Gy(x, y)²)

where T is a given edge-detection threshold, and Gx and Gy are the convolutions of the pixel (x, y) with the horizontal and vertical Sobel edge-detection operators. Edges can be detected with the following Sobel gradient operator templates:

Gx: [−1 0 1; −2 0 2; −1 0 1]    Gy: [−1 −2 −1; 0 0 0; 1 2 1]
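A minimal NumPy sketch of the Tenengrad measure described above; the pure-Python convolution loop is for clarity, not speed.

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = np.array([[-1, -2, -1], [0, 0, 0], [1, 2, 1]], dtype=float)

def _conv2_valid(img, kernel):
    """'Valid' 2-D cross-correlation with a 3x3 kernel."""
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(h - 2):
        for j in range(w - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

def tenengrad(img, threshold=0.0):
    """Sum of squared gradient magnitudes above the edge threshold T."""
    gx = _conv2_valid(img, SOBEL_X)
    gy = _conv2_valid(img, SOBEL_Y)
    g2 = gx ** 2 + gy ** 2           # G(x, y)^2
    return float(np.sum(g2[g2 > threshold ** 2]))
```

A flat image scores 0 and the score grows with edge strength, matching the "sharper image, larger value" behaviour in the text.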
Step 606, obtain the person attribute features corresponding to the face region in the image to be processed, and obtain the corresponding beautification base parameters from those features; the base parameters include first beautification base parameters and second beautification base parameters.
A person attribute feature is a feature expressing an attribute of a person in the image; for example, it may be one or more of a gender feature, an age feature, an ethnicity feature, and so on. Specifically, the face region in the image to be processed is obtained, and the corresponding person attribute features are recognized from it. Further, the features may be obtained from the face region by a feature recognition model: a model for recognizing person attribute features, trained on a face sample set. A face sample set is an image collection composed of several face images; the feature recognition model is trained on it, and in general the more face images the set contains, the more accurate the trained model. For example, in supervised learning, each face image in the sample set is given a label marking its type, and the feature recognition model is obtained by training on the labelled set. The model can classify the face region and output the corresponding attribute feature. For example, face regions may be classified as Asian, Black or White, so the resulting attribute feature is one of those classes. That is, classification through a feature recognition model is over a uniform set of classes; attribute features along different dimensions of the face region can be obtained by different feature recognition models.
Specifically, the person attribute features may include ethnicity, gender, age, skin-color, skin-quality, face-shape and makeup feature parameters, without limitation. For example, the ethnicity feature parameter of the face region is obtained by an ethnicity recognition model, the age feature parameter by an age recognition model, and the gender feature parameter by a gender recognition model. A relation between person attribute features and beautification base parameters is established in advance, and the base parameters are then looked up from the features. For example, the attribute features may include male and female: when a face is recognized as male, the corresponding base parameter is skin-smoothing processing; when female, whitening processing. The correspondence between attribute features and base parameters may be set by the user, or learned by the system from big data.
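The attribute-to-base-parameter lookup might be sketched as a simple table; the attribute names and numeric values here are illustrative assumptions, since the patent leaves them to user settings or big-data learning.

```python
# Hypothetical mapping from recognized gender to base parameters:
# smoothing for male faces, whitening for female faces, per the example above.
BASE_PARAMS = {
    "male":   {"soften": 0.6, "skin_brighten": 0.0},
    "female": {"soften": 0.2, "skin_brighten": 0.5},
}

def base_params(gender, user_overrides=None):
    """Look up base parameters; user-set values take precedence."""
    params = dict(BASE_PARAMS[gender])
    if user_overrides:
        params.update(user_overrides)
    return params
```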
Step 608, compute the beautification coefficient from the clarity and the beautification parameter model; obtain the corresponding first beautification parameters from the first base parameters and the coefficient, and take the second base parameters as the second beautification parameters.
Beautification may include skin smoothing, face slimming, eye enlargement, blemish removal, dark-circle removal, skin brightening, sharpening, lip tinting, eye brightening and other processing. Part of this processing can cause a loss of detail and is therefore affected by clarity, while the rest does not cause a loss of detail and has no direct link to clarity. For example, skin smoothing, face slimming, eye enlargement, blemish removal and dark-circle removal change the appearance of the face and alter the image's detail; if the clarity is low and smoothing or slimming is still applied to the face, more image detail may be lost and the image seriously distorted. Skin brightening, sharpening, lip tinting and eye brightening, on the other hand, operate on color information and do not affect the image's detail. The target beautification parameters therefore comprise first beautification parameters, which are affected by clarity and correspond to it, and second beautification parameters, which are not affected by clarity. Specifically, the base parameters include first and second base parameters: the beautification coefficient is computed from the clarity and the model, the first parameters are obtained from the first base parameters and the coefficient, and the second base parameters are taken directly as the second parameters.
Further, when the clarity is below some value, the target region may be left unbeautified, or only partially processed; for example, only a slight whitening may be applied, which guarantees that the processed image is not too badly distorted. A parameter threshold may then be set, and the coefficient split into a first and a second beautification coefficient. When the clarity exceeds the threshold, the first coefficient is computed from the clarity and the model, the first parameters are obtained from the first base parameters and the first coefficient, and the second base parameters are taken as the second parameters. When the clarity is below the threshold, both coefficients are obtained: the first parameters from the first coefficient and the first base parameters, and the second parameters from the second coefficient and the second base parameters. In this case the first and second coefficients are set to small values, so the resulting first and second parameters are small values as well.
Step 610, obtain the target beautification parameters from the first and second beautification parameters.
In one embodiment, let the beautification base parameter vector be defaultParam, split into the first base parameters adjustParam and the second base parameters unchangedParam. Beautification may include skin smoothing, face slimming, eye enlargement, blemish removal, dark-circle removal, skin brightening, sharpening, lip tinting, eye brightening and so on; then:
defaultParam = [adjustParam | unchangedParam];
adjustParam = [softenP, faceSlenderP, eyeLargerP, deblemishP, depouchP];
unchangedParam = [skinBrightenP, sharpP, lipP, eyeBrightenP];
where softenP denotes the skin-smoothing base parameter, faceSlenderP the face-slimming base parameter, eyeLargerP the eye-enlargement base parameter, deblemishP the blemish-removal base parameter, and depouchP the dark-circle-removal base parameter; skinBrightenP denotes the skin-brightening base parameter, sharpP the sharpening base parameter, lipP the lip-tinting base parameter, and eyeBrightenP the eye-brightening base parameter.
Let the beautification coefficient vector be Factor, comprising the first coefficient adjustFactor and the second coefficient unchangedFactor. The first beautification parameters are obtained from the first coefficient and the first base parameters, and the second parameters from the second coefficient and the second base parameters.
Factor = [adjustFactor | unchangedFactor];
adjustFactor = [softenF, faceSlenderF, eyeLargerF, deblemishF, depouchF];
unchangedFactor = [skinBrightenF, sharpF, lipF, eyeBrightenF];
where softenF denotes the skin-smoothing coefficient, faceSlenderF the face-slimming coefficient, eyeLargerF the eye-enlargement coefficient, deblemishF the blemish-removal coefficient, and depouchF the dark-circle-removal coefficient; skinBrightenF denotes the skin-brightening coefficient, sharpF the sharpening coefficient, lipF the lip-tinting coefficient, and eyeBrightenF the eye-brightening coefficient.
Let the clarity of the image to be processed be Clarity. The value range of the best clarity may be obtained by big-data analysis, or defined by the user. Suppose the best-clarity range is defined as [stdClarity_min, stdClarity_max]. When the clarity lies in this range, the first base parameters may be used directly as the first beautification parameters, i.e. the first coefficient adjustFactor is simply [1, 1, 1, 1, 1]. When the clarity falls outside the range, the coefficient is obtained from a function: in general, the smaller the clarity, the smaller the first coefficient, and the larger the clarity, the larger the first coefficient. To avoid beautifying too heavily or too lightly, a lower limit minClarity and an upper limit maxClarity may be set for the clarity; when the clarity exceeds the upper limit or falls below the lower limit, the first coefficient stays constant. The limits may be user-defined; suppose they are defined as minClarity = 0.25 · stdClarity_min and maxClarity = 1.75 · stdClarity_max. The first coefficient is then obtained piecewise, consistent with the four-stage curve of Fig. 7 (Factor_max denotes the constant value of the upper plateau):

adjustFactor = (Clarity − minClarity) / (stdClarity_min − minClarity), when minClarity < Clarity < stdClarity_min
adjustFactor = 1, when stdClarity_min ≤ Clarity ≤ stdClarity_max
adjustFactor = 1 + (Factor_max − 1) · (Clarity − stdClarity_max) / (maxClarity − stdClarity_max), when stdClarity_max < Clarity < maxClarity
adjustFactor = Factor_max, when Clarity ≥ maxClarity

When Clarity is greater than minClarity, the first coefficient is obtained from the formula above, the first beautification parameters are obtained from that coefficient and the first base parameters, and the second base parameters are taken directly as the second parameters, i.e. the second coefficient is [1, 1, 1, 1]. When Clarity is less than minClarity, the target region may be left unbeautified, or beautified only to a small degree: the first and second coefficients are defined as small values, the first parameters are obtained from the first coefficient and the first base parameters, the second parameters from the second coefficient and the second base parameters, and the resulting first and second parameters are small values as well. For example, when Clarity is below minClarity, the first coefficient may be set to adjustFactor = [0.5, 0, 0, 0, 0] and the second to unchangedFactor = [0, 0, 0, 0]. The target beautification parameters are obtained from the coefficients and the base parameters:

adjustParam = defaultParam * Factor^T
Fig. 7 shows the curve of the beautification coefficient in another embodiment. As shown in Fig. 7, when Clarity is greater than minClarity, the first coefficient grows in a stepped fashion over four stages. First stage: when minClarity < Clarity < stdClarity_min, the first coefficient increases linearly with clarity. Second stage: when stdClarity_min < Clarity < stdClarity_max, it remains constant. Third stage: when stdClarity_max < Clarity < maxClarity, it again increases linearly. Fourth stage: when Clarity > maxClarity, it remains constant.
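The four-stage curve of Fig. 7 can be sketched as follows, with minClarity = 0.25 · stdClarity_min and maxClarity = 1.75 · stdClarity_max as in the text; the values of stdClarity_min/stdClarity_max and the height of the upper plateau are illustrative assumptions.

```python
def first_coefficient(clarity, std_min=0.4, std_max=0.6, f_top=1.2):
    """Four-stage adjustFactor for Clarity > minClarity (Fig. 7 shape)."""
    min_c = 0.25 * std_min           # minClarity, as defined above
    max_c = 1.75 * std_max           # maxClarity
    if clarity <= min_c:
        raise ValueError("below minClarity: use the reduced-degree branch")
    if clarity < std_min:            # stage 1: linear ramp up to 1
        return (clarity - min_c) / (std_min - min_c)
    if clarity <= std_max:           # stage 2: plateau at 1
        return 1.0
    if clarity < max_c:              # stage 3: linear ramp up to f_top
        return 1.0 + (f_top - 1.0) * (clarity - std_max) / (max_c - std_max)
    return f_top                     # stage 4: upper plateau
```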
Step 612, perform beautification on the image to be processed according to the target beautification parameters.
The system may contain multiple beautification modules, each performing one kind of beautification. For example, it may include a skin-smoothing module, a whitening module, an eye-enlargement module, a face-slimming module and a skin-tone adjustment module, which respectively apply smoothing, whitening, eye enlargement, face slimming and skin-tone adjustment to the image to be processed. In one embodiment, each beautification module may be a code function module through which its beautification processing is realized; each code function module corresponds to a flag bit, which decides whether the corresponding processing is performed. For example, each module corresponds to a flag bit Stat: when the value of Stat is 1 or true, the module's beautification is required; when the value of Stat is 0 or false, it is not.
Specifically, the flag bit of each module is assigned according to the target beautification parameters, the modules to be used for beautification are determined from the flag bits, and the target parameters are fed into those modules to beautify the image. For example, if the target parameters include whitening the face, the whitening module's flag is set to 1; if eye enlargement is not required, the eye-enlargement module's flag is set to 0. During beautification, each module is traversed and its flag decides whether its processing is performed. The processing performed by the individual modules is independent and does not interfere; if an image needs several kinds of beautification, it can be passed through the modules in turn to obtain the final beautified image.
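The flag-bit dispatch above can be sketched as follows; the module set and the dict-based image representation are stand-ins for illustration.

```python
def apply_beautification(image, target_params):
    """Run each beautification module whose flag is set; modules are independent."""
    modules = {
        "whiten": lambda img, p: {**img, "brightness": img["brightness"] + p},
        "soften": lambda img, p: {**img, "noise": img["noise"] * (1.0 - p)},
    }
    for name, run in modules.items():
        stat = name in target_params     # flag Stat: present means apply
        if stat:
            image = run(image, target_params[name])
    return image
```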
When the image to be processed contains multiple target regions, each target region can be traversed, its target beautification parameters obtained, and beautification applied per region according to those parameters. If only the target regions are beautified and the remaining area of the image is not, an obvious difference may appear between the target regions and the remainder after processing; for example, after whitening a target region, its brightness is clearly higher than that of the remaining area, making the image look unnatural. The borders of the target regions can therefore be given a transition treatment when generating the beautified image, so that the resulting image looks more natural.
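One way to realize the border transition is to blend the beautified region back into the original through a feathered mask; the box-blur feathering below is an illustrative choice, not the patent's method.

```python
import numpy as np

def feather_blend(original, beautified, mask, radius=1):
    """Blend beautified pixels into the original through a softened mask.
    All inputs are 2-D float arrays; mask is 1 inside the target region."""
    soft = mask.astype(float)
    for _ in range(radius):          # crude 5-point blur to soften the edge
        soft = (soft +
                np.roll(soft, 1, axis=0) + np.roll(soft, -1, axis=0) +
                np.roll(soft, 1, axis=1) + np.roll(soft, -1, axis=1)) / 5.0
    return soft * beautified + (1.0 - soft) * original
```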
In the image processing method provided by the above embodiment, the clarity of the image to be processed is first obtained and the beautification parameter model is obtained from it. The target beautification parameters are then obtained from the clarity and the model, and the image is beautified according to them. Different beautification can thus be applied according to the clarity of the image, improving the accuracy of the image processing and optimizing the beautification.
Fig. 8 is a structural diagram of the image processing apparatus in one embodiment. As shown in Fig. 8, the image processing apparatus 800 includes an image acquisition module 802, a model acquisition module 804, a parameter acquisition module 806 and a beautification processing module 808, where:
the image acquisition module 802 is used to obtain the image to be processed and its clarity;
the model acquisition module 804 is used to obtain the beautification parameter model corresponding to the clarity, the beautification parameter model being a model used to compute beautification parameters;
the parameter acquisition module 806 is used to obtain the target beautification parameters from the clarity and the beautification parameter model;
the beautification processing module 808 is used to beautify the image to be processed according to the target beautification parameters.
In the image processing apparatus provided by the above embodiment, the beautification parameter model is obtained from the clarity of the image to be processed, the target beautification parameters are obtained from the clarity and the model, and the image is beautified according to them. Different beautification can thus be applied to images of different clarity, improving the accuracy of the image processing and optimizing the beautification.
In one embodiment, the image acquisition module 802 is further used to obtain the target region in the image to be processed and to compute its clarity.
In one embodiment, the image acquisition module 802 is further used to detect the face region in the image to be processed and to obtain the corresponding target region from it.
In one embodiment, the model acquisition module 804 is further used to determine the parameter interval in which the clarity falls and to obtain the beautification parameter model corresponding to that interval.
In one embodiment, the parameter acquisition module 806 is further used to obtain the beautification base parameters, compute the beautification coefficient from the clarity and the beautification parameter model, and obtain the target beautification parameters from the base parameters and the coefficient.
In one embodiment, the parameter acquisition module 806 is further used to obtain the person attribute features corresponding to the face region in the image to be processed and to obtain the corresponding beautification base parameters from those features.
In one embodiment, the parameter acquisition module 806 is further used to compute the beautification coefficient from the clarity and the beautification parameter model, obtain the corresponding first beautification parameters from the first base parameters and the coefficient, take the second base parameters as the second beautification parameters, and obtain the target beautification parameters from the first and second parameters.
The division into the above modules is only an example; in other embodiments, the image processing apparatus may be divided into different modules as required, to complete all or part of the functions of the apparatus.
The embodiments of the present application also provide a computer-readable storage medium: one or more non-volatile computer-readable storage media containing a computer program which, when executed by one or more processors, causes the processors to perform the following steps:
obtaining the image to be processed and its clarity;
obtaining the beautification parameter model corresponding to the clarity, the beautification parameter model being a model used to compute beautification parameters;
obtaining the target beautification parameters from the clarity and the beautification parameter model;
performing beautification on the image to be processed according to the target beautification parameters.
In one embodiment, the obtaining of the image to be processed and its clarity performed by the processor includes: obtaining the target region in the image to be processed, and computing its clarity.
In one embodiment, the obtaining of the target region performed by the processor includes: detecting the face region in the image to be processed, and obtaining the corresponding target region from it.
In one embodiment, the obtaining of the beautification parameter model from the clarity performed by the processor includes: determining the parameter interval in which the clarity falls, and obtaining the model corresponding to that interval.
In one embodiment, the obtaining of the target beautification parameters from the clarity and the model performed by the processor includes: obtaining the beautification base parameters; computing the beautification coefficient from the clarity and the model, and obtaining the target parameters from the base parameters and the coefficient.
In one embodiment, the obtaining of the beautification base parameters performed by the processor includes: obtaining the person attribute features corresponding to the face region in the image to be processed, and obtaining the corresponding base parameters from those features.
In one embodiment, the obtaining of the target beautification parameters from the clarity and the model performed by the processor includes: computing the beautification coefficient from the clarity and the model; obtaining the corresponding first beautification parameters from the first base parameters and the coefficient, and taking the second base parameters as the second beautification parameters; and obtaining the target parameters from the first and second parameters.
The embodiments of the present application also provide an electronic device. The electronic device includes an image processing circuit, which may be implemented with hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Fig. 9 is a schematic diagram of the image processing circuit in one embodiment; as shown in Fig. 9, for ease of illustration only the aspects of the image processing technique relevant to the embodiments of the present application are shown.
As shown in Fig. 9, the image processing circuit includes an ISP processor 940 and control logic 950. Image data captured by the imaging device 910 is first processed by the ISP processor 940, which analyses the image data to capture image statistics usable for determining one or more control parameters of the imaging device 910. The imaging device 910 may include a camera with one or more lenses 912 and an image sensor 914. The image sensor 914 may include a color filter array (such as a Bayer filter); it can obtain the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the ISP processor 940. A sensor 920 (such as a gyroscope) may supply image processing parameters (such as stabilization parameters) to the ISP processor 940 based on the sensor 920 interface type. The sensor 920 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.
In addition, the image sensor 914 may also send raw image data to the sensor 920; the sensor 920 can supply the raw image data to the ISP processor 940 based on the sensor 920 interface type, or store the raw image data in the image memory 930.
The ISP processor 940 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits; the ISP processor 940 can perform one or more image processing operations on the raw image data and collect statistics about the image data. The image processing operations can be carried out at the same or at different bit-depth precision.
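A minimal sketch of handling the different raw bit depths mentioned above: scaling each frame to a single working precision before later pixel-wise operations. The NumPy usage and the normalization step are illustrative assumptions, not taken from the patent.

```python
import numpy as np

def normalize_raw(raw, bit_depth):
    """Scale a raw frame of the given bit depth (8, 10, 12, or 14 bits)
    to [0, 1] floats so subsequent operations can run at one precision."""
    max_val = (1 << bit_depth) - 1   # e.g. 1023 for 10-bit data
    return raw.astype(np.float32) / max_val

raw10 = np.array([[0, 512, 1023]], dtype=np.uint16)  # 10-bit samples
norm = normalize_raw(raw10, bit_depth=10)
```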
The ISP processor 940 can also receive image data from the image memory 930. For example, the sensor 920 interface sends raw image data to the image memory 930, and the raw image data in the image memory 930 is then provided to the ISP processor 940 for processing. The image memory 930 can be part of a memory device or storage device, or an independent dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
When receiving raw image data from the image sensor 914 interface, from the sensor 920 interface, or from the image memory 930, the ISP processor 940 can perform one or more image processing operations, such as temporal filtering. The processed image data can be sent to the image memory 930 for further processing before being displayed. The ISP processor 940 can also receive processing data from the image memory 930 and perform image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to the display 980 for viewing by the user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 940 can also be sent to the image memory 930, and the display 980 can read image data from the image memory 930. In one embodiment, the image memory 930 can be configured to implement one or more frame buffers. Moreover, the output of the ISP processor 940 can be sent to the encoder/decoder 970 in order to encode/decode the image data. The encoded image data can be saved and decompressed before being shown on the display 980.
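The temporal filtering mentioned above can be illustrated with a simple recursive (IIR) frame blend; the blend factor and the function itself are assumptions for illustration, not the patent's algorithm.

```python
import numpy as np

def temporal_filter(prev_filtered, new_frame, alpha=0.8):
    """Recursive temporal filter: blend the incoming frame with the
    previously filtered frame to suppress frame-to-frame noise."""
    return alpha * prev_filtered + (1.0 - alpha) * new_frame

f0 = np.zeros((2, 2))          # previously filtered frame
f1 = np.full((2, 2), 1.0)      # new noisy frame
out = temporal_filter(f0, f1, alpha=0.8)
```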
The steps by which the ISP processor 940 processes image data include performing VFE (Video Front End) processing and CPP (Camera Post Processing) processing on the image data. The VFE processing of the image data may include correcting the contrast or brightness of the image data, modifying digitally recorded illumination-condition data, performing compensation processing on the image data (such as white balance, automatic gain control, γ correction, etc.), filtering the image data, and so on. The CPP processing of the image data may include scaling the image and providing a preview frame and a record frame to each path; the CPP can process the preview frame and the record frame with different codecs. The image data processed by the ISP processor 940 can be sent to the beauty module 960 so that beautification can be applied to the image before it is displayed. The beautification that the beauty module 960 performs on the image data may include: whitening, freckle removal, skin smoothing, face slimming, blemish removal, eye enlargement, etc. The beauty module 960 can be the CPU (Central Processing Unit), GPU, or a coprocessor in a mobile terminal. The data processed by the beauty module 960 can be sent to the encoder/decoder 970 in order to encode/decode the image data, and the encoded image data can be saved and decompressed before being shown on the display 980. The beauty module 960 may alternatively be located between the encoder/decoder 970 and the display 980, i.e., the beauty module performs beautification on the already-imaged picture. The encoder/decoder 970 can be the CPU, GPU, or a coprocessor in the mobile terminal.
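A rough illustration of the kind of per-pixel operations the beauty module 960 applies. Whitening as a brightness lift toward white and skin smoothing as a small mean filter are simplified stand-ins chosen for illustration; the patent does not specify the actual algorithms.

```python
import numpy as np

def whiten(img, strength=0.2):
    """Simplified whitening: lift each pixel toward white by `strength`.
    Expects a grayscale image with values in [0, 1]."""
    return img + (1.0 - img) * strength

def smooth_skin(img, kernel=3):
    """Simplified skin smoothing: a small box (mean) filter."""
    pad = kernel // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img)
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            out[y, x] = padded[y:y + kernel, x:x + kernel].mean()
    return out

face = np.full((4, 4), 0.5, dtype=np.float32)  # toy grayscale face patch
result = smooth_skin(whiten(face, strength=0.2))
```

In practice an edge-preserving filter (e.g. a bilateral filter) is typically preferred over a plain box blur for skin smoothing, since it keeps facial contours sharp.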
The statistics determined by the ISP processor 940 can be sent to the control logic 950. For example, the statistics may include image sensor 914 statistics such as automatic exposure, automatic white balance, automatic focusing, flicker detection, black level compensation, and lens 912 shading correction. The control logic 950 may include a processor and/or microcontroller that executes one or more routines (such as firmware); according to the received statistics, the one or more routines can determine control parameters of the imaging device 910 and control parameters of the ISP processor 940. For example, the control parameters of the imaging device 910 may include sensor 920 control parameters (such as gain and the integration time of exposure control), camera flash control parameters, lens 912 control parameters (such as focus or zoom focal length), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices used for automatic white balance and color adjustment (for example, during RGB processing), as well as lens 912 shading correction parameters.
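As one concrete illustration of how control logic 950 might turn collected statistics into ISP control parameters, the gray-world white-balance gains below are a standard textbook technique, not a formula taken from the patent.

```python
def awb_gains(r_avg, g_avg, b_avg):
    """Gray-world automatic white balance: per-channel gains that equalize
    the red and blue channel averages against the green channel."""
    return (g_avg / r_avg, 1.0, g_avg / b_avg)

# Hypothetical channel averages collected by the ISP statistics engine.
gains = awb_gains(r_avg=100.0, g_avg=120.0, b_avg=80.0)
```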
The image processing method provided by the above embodiments can be implemented with the image processing techniques of Fig. 9.
A computer program product containing instructions which, when run on a computer, cause the computer to perform the image processing method provided by the above embodiments.
Any reference to memory, storage, a database, or other media used in this application may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which serves as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchronous-link (Synchlink) DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The embodiments described above express only several embodiments of the application, and their description is comparatively specific and detailed, but they cannot therefore be construed as limiting the scope of the claims of the application. It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the application, and these belong to the protection scope of the application. Therefore, the protection scope of this patent application should be determined by the appended claims.
Claims (10)
- 1. An image processing method, characterised in that the method includes: obtaining an image to be processed and its corresponding clarity; obtaining a corresponding beauty parameter model according to the clarity, the beauty parameter model referring to a model used to calculate beauty parameters; obtaining a corresponding target beauty parameter according to the clarity and the beauty parameter model; and performing beautification on the image to be processed according to the target beauty parameter.
- 2. The image processing method according to claim 1, characterised in that the obtaining of the image to be processed and its corresponding clarity includes: obtaining a target area in the image to be processed, and calculating the clarity corresponding to the target area.
- 3. The image processing method according to claim 2, characterised in that the obtaining of the target area in the image to be processed includes: detecting a face region in the image to be processed, and obtaining the corresponding target area according to the face region.
- 4. The image processing method according to claim 1, characterised in that the obtaining of the corresponding beauty parameter model according to the clarity includes: determining the parameter interval in which the clarity falls, and obtaining the beauty parameter model corresponding to that parameter interval.
- 5. The image processing method according to any one of claims 1 to 4, characterised in that the obtaining of the corresponding target beauty parameter according to the clarity and the beauty parameter model includes: obtaining a base beauty parameter; and calculating a beauty coefficient according to the clarity and the beauty parameter model, and obtaining the target beauty parameter according to the base beauty parameter and the beauty coefficient.
- 6. The image processing method according to claim 5, characterised in that the obtaining of the base beauty parameter includes: obtaining the character attribute features corresponding to the face region in the image to be processed, and obtaining the corresponding base beauty parameter according to the character attribute features.
- 7. The image processing method according to claim 5, characterised in that the base beauty parameter includes a first base beauty parameter and a second base beauty parameter; and the obtaining of the corresponding target beauty parameter according to the clarity and the beauty parameter model includes: calculating a beauty coefficient according to the clarity and the beauty parameter model, obtaining the corresponding first beauty parameter according to the first base beauty parameter and the beauty coefficient, taking the second base beauty parameter as the second beauty parameter, and obtaining the target beauty parameter according to the first beauty parameter and the second beauty parameter.
- 8. a kind of image processing apparatus, it is characterised in that described device includes:Image collection module, for obtaining pending image and corresponding clarity;Model acquisition module, for obtaining corresponding U.S. face parameter model according to the clarity, U.S.'s face parameter model is Refer to the model for being used for calculating U.S. face parameter;Parameter acquisition module, for obtaining corresponding target U.S. face parameter according to the clarity and U.S. face parameter model;U.S. face processing module, for carrying out U.S. face processing to the pending image according to target U.S. face parameter.
- 9. a kind of computer-readable recording medium, is stored thereon with computer program, it is characterised in that the computer program quilt The image processing method as any one of claim 1 to 7 is realized when processor performs.
- 10. a kind of electronic equipment, including memory and processor, computer-readable instruction is stored in the memory, it is described When instruction is performed by the processor so that the processor performs the image procossing as any one of claim 1 to 7 Method.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711244125.9A CN107909542A (en) | 2017-11-30 | 2017-11-30 | Image processing method, device, computer-readable recording medium and electronic equipment |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107909542A true CN107909542A (en) | 2018-04-13 |
Family
ID=61848352
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711244125.9A Pending CN107909542A (en) | 2017-11-30 | 2017-11-30 | Image processing method, device, computer-readable recording medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107909542A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPS6413679A (en) * | 1987-07-07 | 1989-01-18 | Sharp Kk | Writing control circuit for picture memory |
CN103208129A (en) * | 2013-04-23 | 2013-07-17 | 董昊程 | Portable editor and edition method for processing camera photo |
CN104537630A (en) * | 2015-01-22 | 2015-04-22 | 厦门美图之家科技有限公司 | Method and device for image beautifying based on age estimation |
CN104967784A (en) * | 2015-07-02 | 2015-10-07 | 广东欧珀移动通信有限公司 | Method of calling underlayer effect mode of shooting function of mobile terminal, and mobile terminal |
CN106161962A (en) * | 2016-08-29 | 2016-11-23 | 广东欧珀移动通信有限公司 | A kind of image processing method and terminal |
CN106210516A (en) * | 2016-07-06 | 2016-12-07 | 广东欧珀移动通信有限公司 | One is taken pictures processing method and terminal |
US20170213327A1 (en) * | 2016-01-21 | 2017-07-27 | Astral Images Corporation | Method and system for processing image content for enabling high dynamic range (uhd) output thereof and computer-readable program product comprising uhd content created using same |
Non-Patent Citations (2)
Title |
---|
SRUTHY SURAN et al.: "Automatic aesthetic quality assessment of photographic images using deep convolutional neural network", 2016 International Conference on Information Science (ICIS) * |
OUYANG Jiechen et al.: "基于Android人脸美化App的研究与实现" (Research and Implementation of a Face Beautification App Based on Android), 《计算机技术与发展》 (Computer Technology and Development) * |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109685741A (en) * | 2018-12-28 | 2019-04-26 | 北京旷视科技有限公司 | A kind of image processing method, device and computer storage medium |
CN109685741B (en) * | 2018-12-28 | 2020-12-11 | 北京旷视科技有限公司 | Image processing method and device and computer storage medium |
CN112118457A (en) * | 2019-06-20 | 2020-12-22 | 腾讯科技(深圳)有限公司 | Live broadcast data processing method and device, readable storage medium and computer equipment |
CN112118457B (en) * | 2019-06-20 | 2022-09-09 | 腾讯科技(深圳)有限公司 | Live broadcast data processing method and device, readable storage medium and computer equipment |
CN111031239A (en) * | 2019-12-05 | 2020-04-17 | Oppo广东移动通信有限公司 | Image processing method and apparatus, electronic device, and computer-readable storage medium |
CN111031239B (en) * | 2019-12-05 | 2021-06-18 | Oppo广东移动通信有限公司 | Image processing method and apparatus, electronic device, and computer-readable storage medium |
CN112561822A (en) * | 2020-12-17 | 2021-03-26 | 苏州科达科技股份有限公司 | Beautifying method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||