CN107808137A - Image processing method, device, electronic equipment and computer-readable recording medium - Google Patents
- Publication number: CN107808137A (application CN201711045718.2A)
- Authority: CN (China)
- Prior art keywords: region, limbs, parameter, image, electronic equipment
- Legal status: Pending (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/103—Static body considered as a whole, e.g. static pedestrian or occupant recognition
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
Abstract
The invention relates to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium. The method includes: performing human body recognition on an image to be processed, and determining the limb regions of the image to be processed; extracting limb features from the limb regions; determining adjustment regions within the limb regions according to the limb features, and selecting a beautification parameter corresponding to each adjustment region; and performing beautification processing on the corresponding adjustment regions of the limb regions according to the beautification parameters. With the above image processing method, apparatus, electronic device, and computer-readable storage medium, the beautification parameters can be selected adaptively according to a person's limb features, improving the beautification effect and the visual appearance of the image.
Description
Technical field
The present application relates to the field of image processing technology, and in particular to an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium.
Background
After an electronic device captures a portrait image with a camera or similar imaging device, it can perform beautification processing on the captured image, where beautification may include skin whitening, skin smoothing, eye enlargement, face slimming, body slimming, and the like. Traditional beautification uses fixed, preset beautification parameters, and every captured portrait image is processed uniformly with those fixed parameters.
Summary of the invention
The embodiments of the present application provide an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium that can adaptively select beautification parameters according to a person's limb features, improving the beautification effect and the visual appearance of the image.
An image processing method, including:
performing human body recognition on an image to be processed, and determining the limb regions of the image to be processed;
extracting limb features from the limb regions;
determining adjustment regions within the limb regions according to the limb features, and selecting a beautification parameter corresponding to each adjustment region; and
performing beautification processing on the corresponding adjustment regions of the limb regions according to the beautification parameters.
An image processing apparatus, including:
an identification module, configured to perform human body recognition on an image to be processed and determine the limb regions of the image to be processed;
a feature extraction module, configured to extract limb features from the limb regions;
a parameter selection module, configured to determine adjustment regions within the limb regions according to the limb features, and to select a beautification parameter corresponding to each adjustment region; and
a processing module, configured to perform beautification processing on the corresponding adjustment regions of the limb regions according to the beautification parameters.
An electronic device, including a memory and a processor, the memory storing a computer program that, when executed by the processor, causes the processor to implement the method described above.
A computer-readable storage medium storing a computer program that, when executed by a processor, implements the method described above.
With the above image processing method, apparatus, electronic device, and computer-readable storage medium, human body recognition is performed on an image to be processed to determine its limb regions, limb features are extracted from the limb regions, adjustment regions within the limb regions are determined according to the limb features, a beautification parameter corresponding to each adjustment region is selected, and beautification processing is performed on the corresponding adjustment regions according to the beautification parameters. Because the adjustment regions are determined and the beautification parameters are selected adaptively according to a person's limb features, the beautification effect and the visual appearance of the image are improved.
Brief description of the drawings
Fig. 1 is a block diagram of an electronic device in one embodiment;
Fig. 2 is an architecture diagram of an image processing method in one embodiment;
Fig. 3 is a flowchart of an image processing method in one embodiment;
Fig. 4 is a flowchart of obtaining the limb regions corresponding to a face region in one embodiment;
Fig. 5 is a schematic diagram of calculating depth-of-field information in one embodiment;
Fig. 6 is a flowchart of determining slimming regions and selecting slimming parameters in one embodiment;
Fig. 7 is a flowchart of selecting a whitening parameter according to skin color features in one embodiment;
Fig. 8 is a flowchart of selecting a whitening parameter according to skin color features in another embodiment;
Fig. 9 is a schematic diagram of the relationship between brightness and the whitening parameter in one embodiment;
Fig. 10 is a block diagram of an image processing apparatus in one embodiment;
Fig. 11 is a schematic diagram of an image processing circuit in one embodiment.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of the present application clearer, the application is further described below with reference to the drawings and embodiments. It should be understood that the specific embodiments described here only explain the application and are not intended to limit it.
It will be appreciated that the terms "first", "second", and the like used in this application can describe various elements, but these elements are not limited by the terms; the terms are only used to distinguish one element from another. For example, without departing from the scope of the present application, a first client could be called a second client and, similarly, a second client could be called a first client. The first client and the second client are both clients, but they are not the same client.
Fig. 1 is a block diagram of an electronic device in one embodiment. As shown in Fig. 1, the electronic device includes a processor, a memory, a display screen, and an input apparatus connected by a system bus. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium of the electronic device stores an operating system and a computer program that, when executed by the processor, implements an image processing method provided in the embodiments of the present application. The processor provides the computing and control capability that supports the operation of the whole electronic device. The internal memory provides an environment for running the computer program stored in the non-volatile storage medium. The display screen of the electronic device may be a liquid crystal display, an electronic ink display, or the like; the input apparatus may be a touch layer covering the display screen, a button, trackball, or trackpad on the housing of the electronic device, or an external keyboard, trackpad, or mouse. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like. Those skilled in the art will understand that the structure shown in Fig. 1 is only a block diagram of the parts related to the present solution and does not limit the electronic device to which the solution is applied; a specific electronic device may include more or fewer parts than shown, combine certain parts, or arrange the parts differently.
Fig. 2 is an architecture diagram of an image processing method in one embodiment. As shown in Fig. 2, the beautification processing in the electronic device may involve a three-layer architecture: a feature layer, an adaptation layer, and a processing layer. The feature layer extracts portrait feature information from the input image; the portrait feature information may include, but is not limited to, skin color features, skin quality features, age features, gender features, body shape features, and the like.
The adaptation layer adaptively selects beautification parameters according to the extracted portrait feature information and the physical shooting parameters of the capture, where the physical shooting parameters may include, but are not limited to, shooting distance, ambient brightness, color temperature, exposure, and the like. Optionally, a parameter selection model may be pre-stored in the adaptation layer; the parameter selection model analyzes the portrait feature information, the physical shooting parameters, and so on, and selects the beautification parameters corresponding to them. The parameter selection model can be built by machine deep learning. The beautification parameters selected by the adaptation layer may include, but are not limited to, a whitening parameter, a skin smoothing parameter, an eye enlargement parameter, a face slimming parameter, a body slimming parameter, and the like.
The processing layer calls the corresponding beautification components to beautify the input image according to the beautification parameters selected by the adaptation layer. The beautification components may include, but are not limited to, a skin smoothing component, a whitening component, an eye enlargement component, a face slimming component, a body slimming component, and the like; each component processes the image according to its own type of beautification parameter. For example, the skin smoothing component smooths the input image according to the skin smoothing parameter, and the whitening component whitens it according to the whitening parameter. The beautification components can be independent of one another. In one embodiment, the processing layer can obtain a flag bit corresponding to each beautification component: a component whose flag bit has a first value needs to beautify the input image, while a component whose flag bit has a second value does not. The first and second flag values can be set as required, for example 1 for the first value and 0 for the second, but are not limited to these. When a beautification component needs to process the input image, it obtains the beautification parameter of the corresponding type and beautifies the image accordingly. After the processing layer has beautified the input image according to the parameters selected by the adaptation layer, it can output the processed image and show it on the display screen.
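The flag-bit dispatch described above can be sketched as follows. This is a minimal illustrative sketch, not the patent's implementation: all names (apply_pipeline, the toy components, the scalar "image") are assumptions, and real components would operate on pixel arrays.

```python
# Sketch of the processing layer's flag-bit dispatch: each beautification
# component has a flag bit (1 = process, 0 = skip), and enabled components
# are applied in sequence with their own parameters from the adaptation layer.

def apply_pipeline(image, components, flags, params):
    """Apply each enabled beautification component to the image in order.

    components: dict name -> function(image, param) -> image
    flags:      dict name -> 1 (first flag value, process) or 0 (second, skip)
    params:     dict name -> parameter chosen by the adaptation layer
    """
    for name, func in components.items():
        if flags.get(name, 0) == 1:          # first flag value: process
            image = func(image, params[name])
    return image

# Toy components operating on a scalar "image" for illustration only
components = {
    "whitening": lambda img, p: img + p,     # brighten by whitening parameter
    "slimming":  lambda img, p: img * p,     # scale by slimming parameter
}
flags = {"whitening": 1, "slimming": 0}      # slimming disabled via flag bit
result = apply_pipeline(10, components, flags, {"whitening": 5, "slimming": 2})
print(result)  # 15: only the whitening component was applied
```

Keeping the components behind independent flag bits means any subset of whitening, smoothing, slimming, etc. can be toggled per image without the components knowing about each other.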
As shown in Fig. 3, in one embodiment, an image processing method is provided, including the following steps:
Step 310: perform human body recognition on an image to be processed, and determine the limb regions of the image to be processed.
The electronic device can obtain an image to be processed, which may be a preview image captured by an imaging device such as a camera and previewed on the display screen, or an image that has already been generated and stored. The electronic device can perform human body recognition on the image to determine its limb regions, where a limb region refers to a body-part region of a person, such as the limbs or the torso.
In one embodiment, the electronic device can extract image features of the image to be processed and analyze them with a body recognition model to identify body parts such as the limbs and torso of the portrait, thereby obtaining the limb regions. The image features may include shape features, spatial features, edge features, and the like: a shape feature refers to a local shape in the image to be processed; a spatial feature refers to the mutual spatial positions or relative directional relationships between the regions segmented from the image; an edge feature refers to the boundary pixels between two regions. The body recognition model can be built by machine deep learning: a large number of sample images, including both portrait images and images without people, are obtained; each sample image is labeled according to whether it contains a portrait; the labeled sample images are used as the input of the body recognition model, and the model is obtained by machine learning training. The electronic device can use the body recognition model to judge whether the image to be processed contains a portrait and, if so, determine the limb regions of the portrait.
In one embodiment, the electronic device can also first perform face recognition on the image to be processed to determine the face region, and then determine the limb regions from the face region. Optionally, the electronic device can extract image features of the image and analyze them with a preset face recognition model to judge whether the image contains a face; if so, the corresponding face region is determined. The face recognition model can be built by machine deep learning. Optionally, after obtaining the face region, the electronic device can obtain the portrait region from it: the depth-of-field information, color information, and the like of the face region can be obtained, and the portrait region can be determined with a region growing algorithm, a matting algorithm, or the like. The depth of field refers to the range of subject distances over which the lens or other imaging device can form a sharp image; in this embodiment, the depth-of-field information can be understood as the distance from each object in the image to the lens of the electronic device, that is, object distance information. After obtaining the portrait region from the face region, the electronic device can determine the limb regions as the portrait region excluding the face region.
Step 320: extract the limb features of the limb regions.
Having determined the limb regions of the image to be processed, the electronic device can extract limb features from them. The limb features may include, but are not limited to, body shape features and skin color features of the limb regions. The body shape features describe the build of the portrait in the image, such as tall, short, fat, or thin, and may include feature point information representing the shape, position, and size of each body part such as the limbs or torso. The skin color features of the limb regions refer to the color and brightness of the person's skin as presented in the image, and may include the brightness and color features of the skin area.
Step 330: determine adjustment regions within the limb regions according to the limb features, and select a beautification parameter corresponding to each adjustment region.
The electronic device can determine the adjustment regions within the limb regions according to the extracted limb features. An adjustment region is a region that needs beautification processing, for example a skin area that needs whitening, an arm region that needs slimming, or a leg region that needs lengthening. In one embodiment, the electronic device can analyze the limb features with a preset limb model and determine the adjustment regions, where the limb model can be built in advance by machine learning. In one embodiment, the electronic device can build the limb model in advance: a large number of sample images are obtained, the adjustment regions that need beautification are labeled in each sample image, the labeled sample images are used as the input of the limb model, and the model is built by training with machine learning or similar methods.
After determining the adjustment regions within the limb regions, the electronic device can select a beautification parameter for each adjustment region according to its limb features; the limb features of different adjustment regions can correspond to different beautification parameters. For example, by analyzing the limb features with the limb model, the electronic device may determine that the adjustment regions include the waist and a thigh: if the build corresponding to the limb features of the waist is relatively thin, a smaller slimming parameter can be selected to slim the waist, and if the build corresponding to the limb features of the thigh is relatively thick, a larger slimming parameter can be selected to slim the thigh, though the selection is not limited to this.
Step 340: perform beautification processing on the corresponding adjustment regions of the limb regions according to the beautification parameters.
The electronic device can beautify each adjustment region according to its beautification parameter. The beautification processing may include, but is not limited to, whitening, slimming, and lengthening. Whitening can adjust, according to the whitening parameter, the color values of the pixels in the skin areas of the limb regions, where the color values may be values in a color space such as RGB (red, green, blue), HSV (hue, saturation, value), or YUV (luma, chroma). Slimming and lengthening can select a target window of a preset radius and deform the adjustment region through the target window, though they are not limited to this.
In one embodiment, the electronic device can first obtain the types of beautification processing that the limb regions need, judging whether whitening, slimming, lengthening, or other processing is required. Optionally, the electronic device can obtain the skin area of the limb regions and judge whether this area is greater than a preset value: if it is, the limb regions need whitening; if it is less than or equal to the preset value, whitening can be skipped. The electronic device can also obtain the moment at which the image to be processed was captured: if that moment falls within a preset time period, whitening of the limb regions can be skipped. The preset time period can be a period during which large areas of skin are not exposed, for example December to January in winter. Optionally, the electronic device can obtain the size ratio of each body part in the limb regions; if the ratio meets a preset ratio, slimming and lengthening can be skipped. These are not the only options: other ways of obtaining the required types of beautification processing can also be used.
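The whitening decision above can be sketched as two checks: skin area against a preset value, and capture time against a preset period. The threshold, the month set, and all names below are illustrative assumptions.

```python
# Sketch: whitening is applied only if the exposed skin area exceeds a preset
# value, and is skipped when the image was captured in a period when little
# skin is exposed (e.g. the December-January winter period from the text).

WINTER_MONTHS = {12, 1}        # example "preset time period"
SKIN_AREA_THRESHOLD = 5000     # example preset value, in pixels

def needs_whitening(skin_area_pixels, capture_month):
    if capture_month in WINTER_MONTHS:
        return False           # skin rarely exposed: skip whitening
    return skin_area_pixels > SKIN_AREA_THRESHOLD

print(needs_whitening(8000, 7))   # True: summer image with a large skin area
print(needs_whitening(8000, 12))  # False: within the preset winter period
```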
In this embodiment, human body recognition is performed on an image to be processed to determine its limb regions, limb features are extracted from the limb regions, adjustment regions within the limb regions are determined according to the limb features, a beautification parameter corresponding to each adjustment region is selected, and beautification processing is performed on the corresponding adjustment regions according to the beautification parameters. Since the adjustment regions are determined and the beautification parameters are selected adaptively according to a person's limb features, the beautification effect and the visual appearance of the image are improved.
As shown in Fig. 4, in one embodiment, the step of determining the limb regions of the image to be processed includes the following steps:
Step 402: obtain the depth-of-field information of the image to be processed.
The electronic device can obtain the depth-of-field information of each pixel in the image to be processed. In one embodiment, the electronic device has two cameras on its back, a first camera and a second camera, which may be arranged side by side on the same horizontal line or one above the other on the same vertical line. In this embodiment, the first camera and the second camera can have different resolutions: the first camera can be the higher-resolution camera, used mainly for imaging, while the second camera can be a lower-resolution auxiliary depth-of-field camera, used to obtain the depth-of-field information of the captured image.
Further, the electronic device can capture a first image of a scene with the first camera while capturing a second image of the same scene with the second camera; the first and second images can first be corrected and calibrated, and the corrected and calibrated images can then be synthesized to obtain the image to be processed. The electronic device can generate a disparity map from the corrected and calibrated first and second images, and then generate a depth map of the image to be processed from the disparity map; the depth map can contain the depth-of-field information of each pixel of the image to be processed. In the depth map, regions with similar depth-of-field information can be filled with the same color, and color changes reflect changes in the depth of field. In one embodiment, the electronic device can compute calibration parameters from quantities such as the distance between the optical centers of the first and second cameras, the height difference of the optical centers in the horizontal direction, and the lens height difference of the two cameras, and correct and calibrate the first and second images according to the calibration parameters.
The electronic device calculates the parallax of the same object between the first image and the second image, and obtains the object's depth-of-field information in the image to be processed from the parallax, where parallax refers to the difference in viewing direction when the same target is observed from two points. Fig. 5 is a schematic diagram of calculating depth-of-field information in one embodiment. As shown in Fig. 5, the first camera and the second camera lie on the same horizontal line with the main optical axes of the two cameras parallel; OL and OR are the optical centers of the first camera and the second camera respectively, and the shortest distance from each optical center to its image plane is the focal length f. If P is a point in the world coordinate system, its imaging points on the left and right image planes are PL and PR, and the distances from PL and PR to the left edges of their respective image planes are XL and XR respectively, then the parallax of P is XL − XR (or XR − XL). With b the distance between the optical center OL of the first camera and the optical center OR of the second camera, the depth of field Z of the point P can be calculated from b, the focal length f, and the parallax XL − XR (or XR − XL), as shown in formula (1):

Z = (b × f) / (XL − XR), or equivalently Z = (b × f) / (XR − XL)    (1)
The electronic device can perform feature point matching between the first and second images: for a feature point of the first image, the best matching point is found in the corresponding row of the second image. The feature point of the first image and its best matching point can be regarded as the imaging points of the same world point in the first and second images respectively, so their parallax can be calculated and the disparity map generated, and the depth-of-field information of each pixel in the image to be processed can then be calculated with formula (1).
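The depth calculation of formula (1) can be sketched numerically. The baseline, focal length, and coordinate values below are arbitrary illustrations, not values from the patent.

```python
# Sketch of formula (1): with baseline b between the two optical centres and
# focal length f, the depth of a point is Z = b * f / |XL - XR|, where XL and
# XR are the positions of its imaging points on the left and right images.

def depth_from_disparity(baseline_b, focal_f, xl, xr):
    disparity = abs(xl - xr)
    if disparity == 0:
        return float("inf")    # zero parallax: point at infinity
    return baseline_b * focal_f / disparity

# baseline 0.1 m, focal length 500 px, disparity 20 px -> depth 2.5 m
print(depth_from_disparity(0.1, 500.0, 120.0, 100.0))  # 2.5
```

A larger disparity means a closer point, which is why the nearby portrait separates cleanly from the distant background in the depth map.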
In other embodiments, the depth-of-field information of the image to be processed can also be obtained in other ways, for example by structured light or TOF (time of flight) ranging, and is not limited to the above.
Step 404: obtain the face region of the image to be processed, and calculate the average depth of field of the face region according to the depth-of-field information.
After determining the face region of the image to be processed, the electronic device can obtain the depth-of-field information of each pixel in the face region and calculate the average depth of field of the face region.
Step 406: obtain the limb regions corresponding to the face region according to the average depth of field.
The electronic device can first obtain a rough portrait region from the average depth of field of the face region, and then use the similarity of adjacent pixels to obtain the precise portrait contour of the portrait region. The similarity of adjacent pixels means that, within a given area, the color information and the like of adjacent pixels are close and do not change abruptly. The electronic device can extract the pixels whose depth-of-field information differs from the average depth of field of the face region by less than a first value to obtain the rough portrait region, and then calculate the difference between the RGB values of adjacent pixels in the rough portrait region: if the RGB difference of two adjacent pixels is less than a second value, they belong to the same area; if it is greater than or equal to the second value, they do not. The pixels in the rough portrait region whose RGB difference from an adjacent pixel is greater than or equal to the second value can be extracted to form the portrait contour of the portrait region. Optionally, the grayscale difference of adjacent pixels or other measures can also be calculated instead of the RGB difference. Once the portrait region is obtained, the electronic device can obtain the limb regions as the portrait region excluding the face region.
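The two-stage segmentation above can be sketched in simplified one-dimensional form: first keep pixels whose depth is within the first value of the face's average depth, then mark contour pixels where the color difference to a neighbour reaches the second value. The 1-D layout, single-channel "RGB" values, thresholds, and function names are all illustrative simplifications.

```python
# Stage 1: rough portrait region by depth similarity to the face region.
def rough_portrait(depths, avg_face_depth, first_value):
    """Indices whose depth differs from the face's average by < first_value."""
    return [i for i, d in enumerate(depths)
            if abs(d - avg_face_depth) < first_value]

# Stage 2: contour pixels where the color difference between consecutive
# region pixels reaches the second value (an abrupt change = a boundary).
def contour_pixels(colors, region, second_value):
    contour = []
    for a, b in zip(region, region[1:]):
        if abs(colors[a] - colors[b]) >= second_value:
            contour.append(b)
    return contour

depths = [1.0, 1.1, 1.2, 3.0, 1.15]    # pixel 3 is distant background
region = rough_portrait(depths, avg_face_depth=1.1, first_value=0.3)
print(region)  # [0, 1, 2, 4]: the background pixel is excluded

colors = [100, 102, 150, 0, 104]       # abrupt jumps at indices 2 and 4
print(contour_pixels(colors, region, second_value=30))  # [2, 4]
```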
In this embodiment, the limb regions can be obtained accurately from the depth-of-field information of the image to be processed, making the extracted limb features and the selected beautification parameters more accurate, which improves the beautification effect and the visual appearance of the image.
As shown in Fig. 6, in one embodiment, step 330, which determines the adjustment regions within the limb regions according to the limb features and selects a beautification parameter corresponding to each adjustment region, includes the following steps:
Step 602: select a corresponding target build according to the body shape features.
The electronic device extracts the limb features of the limb regions; the limb features may include body shape features, which may include feature point information representing the shape, position, and size of each body part such as the limbs or torso. From the extracted body shape features, the electronic device can determine the build information of the portrait in the image to be processed; the build information may include, but is not limited to, the size ratios and shapes of the body parts of the portrait, for example the size ratios of the chest, waist, and hips, or the shape and breadth of the shoulders.
The electronic device can select a target build according to the body shape features, where the target build is a build with a better display effect than the original build of the portrait. The target build can be obtained from build images collected from a large number of beautified portrait images, or from build images or three-dimensional models generated in advance from set build parameters; the target build can contain the size, size ratio, shape, and so on of each body part of the portrait.
Alternatively, the electronic device can extract an age feature, a gender feature, and so on of the person in the image to be processed, and select the target body shape according to the age feature, the gender feature, and the body-shape features. The age feature can be extracted from texture information, edge information, and the like of the face region in the image, and the gender feature can be extracted from the facial features of the face region, the body-shape features, and so on.
In one embodiment, different age ranges and different genders can correspond to different target body shapes. For example, the age ranges may be divided into under 12, 12 to 18, 19 to 35, 36 to 50, 50 to 70, over 70, and so on, and the genders may include male and female; persons of different genders and in different age ranges can correspond to different target body shapes. For the same age range and the same gender, different body-shape information can also correspond to different target body shapes; for example, an obese person and a slender person can correspond to different target body shapes.
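In code, the age-and-gender selection above can be sketched as a small lookup table. This is a minimal sketch: the bracket boundaries follow the example ranges in the text, while the `(bracket, gender)` keys and the ratio values are illustrative assumptions, not values given in this application.

```python
# Hypothetical lookup of a target body shape from age and gender features.
AGE_BRACKETS = [(0, 11), (12, 18), (19, 35), (36, 50), (51, 70), (71, 200)]

TARGET_SHAPES = {  # (bracket index, gender) -> target body-shape info (made up)
    (2, "female"): {"waist_hip_ratio": 0.7},
    (2, "male"): {"waist_hip_ratio": 0.9},
}

def age_bracket(age):
    """Return the index of the age range the person falls into."""
    for i, (lo, hi) in enumerate(AGE_BRACKETS):
        if lo <= age <= hi:
            return i
    raise ValueError("age out of supported range")

def select_target_shape(age, gender):
    """Select the target body shape matching the age range and gender."""
    return TARGET_SHAPES[(age_bracket(age), gender)]
```

A real implementation would key the table on body-shape information as well, since the text notes that different body shapes within the same age range and gender can map to different targets.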
Step 604: compare the target body shape with the limb region proportionally, and generate comparison data.
The electronic device can compare the selected target body shape with the limb region in the image to be processed at the same scale. For example, taking the face region of the image as the reference, the limb region can be scaled to a 1:1 ratio with the target body shape, and each body part of the target body shape can then be compared with the corresponding body part of the limb region, yielding comparison data such as the size difference, position difference, and shape difference of each body part.
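A minimal sketch of step 604, assuming the face width is the scaling reference and each body part is summarized by a single size value (both simplifications of the application's size, position, and shape differences):

```python
def comparison_data(face_width_img, face_width_target, sizes_img, sizes_target):
    """Scale the measured part sizes so the face regions are 1:1, then
    return the per-part size difference against the target body shape."""
    scale = face_width_target / face_width_img
    return {part: sizes_img[part] * scale - sizes_target[part]
            for part in sizes_img}
```

With a measured waist of 40 at half the target's face width and a target waist of 72, the scaled size difference is 8, matching the waist example in the text.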
Step 606: determine slimming regions according to the comparison data, and select a slimming parameter corresponding to each slimming region.
The electronic device can determine the slimming regions according to the comparison data. Alternatively, the size difference between each body part of the limb region and the target body shape can be obtained, and a body part whose size difference exceeds a preset first threshold can be determined as a slimming region. For example, if the size difference between the waist of the limb region and the waist of the target body shape is 8 and the preset first threshold is 5, the waist can be determined as a slimming region. Alternatively, different body parts can correspond to different first thresholds; for example, the first threshold for the waist may be 5, while the first threshold for the thigh may be 3.
After determining the slimming regions according to the comparison data, the electronic device can select a slimming parameter corresponding to each slimming region according to the comparison data: the larger the comparison data of a body part (its size difference, position difference, shape difference, and so on), the larger the corresponding slimming parameter can be. Alternatively, the size difference between a body part of the limb region and the target body shape can be positively correlated with the slimming parameter; further, the positive correlation can be linear, the slimming parameter increasing as the size difference increases. In one embodiment, a maximum slimming parameter can be set: when the size difference exceeds a preset second threshold, the selected slimming parameter can be that maximum.
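The clamped linear rule above can be sketched as follows; the concrete thresholds and maximum are placeholders, not values fixed by the application:

```python
def slimming_parameter(size_diff, first_threshold, second_threshold, max_param):
    """Slimming parameter that is 0 below the first threshold, grows
    linearly with the size difference, and is capped at max_param once
    the second threshold is reached."""
    if size_diff <= first_threshold:
        return 0.0                      # not a slimming region
    if size_diff >= second_threshold:
        return max_param                # capped at the maximum parameter
    # linear positive correlation between the two thresholds
    return max_param * (size_diff - first_threshold) / (second_threshold - first_threshold)
```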
The electronic device can perform slimming processing on each slimming region according to the selected slimming parameter. Alternatively, the electronic device can select a window radius corresponding to the slimming parameter and apply deformation processing to the corresponding slimming region through a target window with that window radius.
In this embodiment, a corresponding target body shape is selected according to the body-shape features, and the slimming regions and the slimming parameter of each slimming region are determined according to the target body shape. The slimming regions and parameters are thus selected adaptively according to the body-shape features, which makes them fit the body-shape information of the person in the image more closely and makes the beautified image more natural and realistic.
In one embodiment, step 330, determining the adjustment regions in the limb region according to the limb features and selecting a beautification parameter corresponding to each adjustment region, includes: calculating body proportions according to the body-shape features, determining stretch regions according to the body proportions, and selecting a stretch parameter corresponding to each stretch region.
The electronic device can calculate the body proportions from the body-shape features of the limb region. A body proportion is the size ratio between body parts, such as the length ratio of the upper body to the legs, or the size ratio of the waist to the hips. The electronic device can determine the stretch regions according to the calculated proportions. Alternatively, a calculated proportion can be compared with a preset proportion. The preset proportion can be a golden ratio of the human body, for example an upper-body-to-leg ratio of 5:8 or a calf-to-thigh ratio of 5:3, but is not limited thereto, and different body parts can correspond to different preset proportions. When a calculated proportion equals or is close to the preset proportion, the corresponding parts need not be stretched. When the difference between a calculated proportion and the preset proportion exceeds a proportion threshold, the part that accounts for the larger share of the preset proportion can be determined as a stretch region. For example, if the calculated upper-body-to-leg ratio is 1:1 and its difference from the preset ratio of 5:8 exceeds the proportion threshold of 0.25, the electronic device can determine the legs, the part with the larger share of the preset ratio, as a stretch region.
The electronic device can select the stretch parameter of each stretch region according to the difference between the calculated proportion and the preset proportion: the closer the calculated proportion is to the preset proportion, the smaller the selected stretch parameter can be. The electronic device can then perform stretch processing on the corresponding stretch regions according to the selected stretch parameters, so that the body proportions of the person in the image to be processed approach the preset proportions.
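The stretch-region rule above can be sketched as follows, assuming each proportion is stored as a single smaller-part-to-larger-part ratio keyed by the part pair (a representation chosen here purely for illustration):

```python
def stretch_regions(measured, presets, threshold):
    """Each key is a (smaller_part, larger_part) pair mapped to the ratio
    smaller:larger. When the measured ratio deviates from the preset by
    more than the threshold, the larger-share part becomes a stretch region."""
    regions = []
    for pair, preset in presets.items():
        if abs(measured[pair] - preset) > threshold:
            regions.append(pair[1])     # stretch the part with the larger share
    return regions
```

For the example in the text, a measured upper-body-to-leg ratio of 1.0 against the preset 5:8 = 0.625 deviates by 0.375 > 0.25, so the legs are selected.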
In one embodiment, when the electronic device performs beautification processing on an adjustment region, for example slimming processing on a slimming region according to a slimming parameter or stretch processing on a stretch region according to a stretch parameter, it can create a new layer. The electronic device can cut the adjustment region to be beautified out of the image to be processed along its contour and place it into the newly created layer. The electronic device can then beautify the adjustment region in the new layer according to the beautification parameter. After the processing is complete, background compensation can first be applied to the image to be processed, and the beautified adjustment region can then be pasted back into it. This avoids distorting the background when the limb region of the person is slimmed, stretched, or otherwise beautified.
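A toy sketch of this cut, compensate, and paste-back flow on a single-channel float image. Filling the hole with the mean of the remaining pixels stands in for real background compensation (inpainting), which is an obvious simplification:

```python
import numpy as np

def beautify_on_layer(image, mask, transform):
    """Lift the masked adjustment region onto its own layer, compensate
    the background it leaves behind, transform the layer, and composite
    the result back over the background."""
    layer = np.where(mask, image, 0.0)        # region cut onto a new layer
    background = image.copy()
    background[mask] = image[~mask].mean()    # crude background compensation
    adjusted = transform(layer)               # e.g. a slimming/stretch warp
    out = background.copy()
    region = adjusted > 0
    out[region] = adjusted[region]            # paste the beautified region back
    return out
```

With an identity transform the composite reproduces the original image, since the pasted region exactly covers the compensated hole.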
In this embodiment, the body proportions are calculated from the body-shape features, and the stretch regions and the corresponding stretch parameters are selected according to the body proportions. This makes the selected stretch regions and stretch parameters more accurate, improves the beautification effect, and gives the image a better visual display effect.
As shown in Fig. 7, in one embodiment, the image processing method described above further includes the following steps.
Step 702: obtain the skin region in the limb region, and extract a skin color feature of the skin region.
After the electronic device determines the limb region corresponding to the face region of the image to be processed, it can obtain the skin region within the limb region. The skin region can be obtained from the color value of each pixel in the limb region, where a color value can be a pixel's value in a color space such as RGB (red, green, blue), HSV (hue, saturation, value), or YUV (luminance, chrominance). In one embodiment, the electronic device can define in advance the range of color values that belongs to skin, and determine the pixels of the limb region whose color values fall within this predefined range as the skin region.
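A minimal sketch of such a predefined color-value range in RGB; the numeric bounds and the r > g > b ordering heuristic are illustrative assumptions, not ranges given in this application:

```python
import numpy as np

def skin_mask(rgb, lo=(95, 40, 20), hi=(255, 220, 200)):
    """Boolean mask of pixels whose RGB color values fall inside a
    predefined 'skin' range (hypothetical bounds)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    in_range = ((r >= lo[0]) & (r <= hi[0]) &
                (g >= lo[1]) & (g <= hi[1]) &
                (b >= lo[2]) & (b <= hi[2]))
    return in_range & (r > g) & (g > b)   # skin tones tend to satisfy r > g > b
```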
The electronic device can extract a skin color feature of the skin region in the limb region; the skin color feature may include a brightness feature, a color feature, and so on of the skin region. In one embodiment, the electronic device can convert the image to be processed from a first color space to a second color space. In this embodiment the first color space can be the RGB color space and the second color space can be the YUV color space or another color space, without limitation. The YUV color space includes a luminance signal Y and two chrominance signals, B−Y (that is, U) and R−Y (that is, V). The Y component represents luminance and can be a gray value, while U and V represent chrominance and can be used to describe the color and saturation of the image; in the YUV color space, the luminance signal Y and the chrominance signals U and V are separate. The electronic device can convert the image to be processed from the first color space to the second color space according to a conversion formula. The electronic device can then calculate the mean of each second-color-space component over the pixels contained in the skin region of the limb region. For example, the YUV color space includes the Y, U, and V components, so the electronic device can calculate the means of the Y component, the U component, and the V component over all pixels contained in the skin region, and take these means as the skin color feature of the skin region, where the mean of the Y component can serve as the brightness feature of the skin region and the means of the U and V components as its color feature.
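A sketch of this feature extraction using the BT.601 RGB-to-YUV formulas; the application does not name a specific conversion formula, so BT.601 is an assumption:

```python
import numpy as np

def skin_color_feature(rgb, mask):
    """Convert RGB to YUV (BT.601) and average each component over the
    skin pixels: mean Y is the brightness feature, mean U and mean V
    together form the color feature."""
    r, g, b = (rgb[..., i].astype(float) for i in range(3))
    y = 0.299 * r + 0.587 * g + 0.114 * b   # luminance
    u = 0.492 * (b - y)                     # chrominance B - Y
    v = 0.877 * (r - y)                     # chrominance R - Y
    return y[mask].mean(), u[mask].mean(), v[mask].mean()
```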
Step 704: select a whitening parameter corresponding to the skin region according to the skin color feature.
After extracting the skin color feature of the skin region of the limb region, the electronic device can select a whitening parameter corresponding to the skin color feature. The whitening parameter may include adjustment parameters for the components of a color space such as RGB, HSV, or YUV, a brightening parameter, and so on. Different skin color features can correspond to different whitening parameters, so the corresponding whitening parameter is selected adaptively according to the skin color feature of the skin region. For example, when the brightness feature of the skin region is larger, a smaller brightening parameter can be selected; when the brightness feature of the skin region is smaller, a larger brightening parameter can be selected; but the selection is not limited thereto.
The electronic device can adjust the skin region of the limb region in the image to be processed according to the selected whitening parameter, that is, perform whitening processing on the skin region. Whitening processing may include raising the brightness of the skin region according to the brightening parameter, adjusting the color value of each pixel in the skin region according to the component adjustment parameters of a color space such as RGB, HSV, or YUV, and so on, but is not limited thereto.
In this embodiment, a suitable whitening parameter is selected for whitening processing according to the skin color feature of the skin region, so the image achieves a better visual display effect.
As shown in Fig. 8, in one embodiment, step 704, selecting a whitening parameter corresponding to the skin region according to the skin color feature, includes the following steps.
Step 802: determine the brightness interval in which the brightness feature lies, and obtain the parameter correspondence matched with that brightness interval.
The skin color feature of the skin region in the limb region can include a brightness feature of the skin region, which can be the mean of the Y component, in the YUV color space, of the pixels contained in the skin region. The electronic device can obtain preset brightness intervals and the parameter correspondence matched with each brightness interval. A parameter correspondence describes the relation between the brightness feature and the whitening parameter, and different brightness intervals can match different parameter correspondences. The electronic device can determine the brightness interval in which the brightness feature of the skin region lies, and calculate the whitening parameter according to the parameter correspondence matched with that brightness interval.
Step 804: calculate the whitening parameter corresponding to the brightness feature according to the parameter correspondence.
In one embodiment, the electronic device can preset multiple brightness thresholds and divide multiple brightness intervals according to them. For example, a first brightness threshold and a second brightness threshold can be preset, dividing three brightness intervals: a first brightness interval at or below the first threshold, a second brightness interval above the first threshold and below the second threshold, and a third brightness interval at or above the second threshold. Understandably, the brightness intervals can also be divided in other ways, without limitation.
Alternatively, when the brightness feature of the skin region is greater than the first brightness threshold and less than the second brightness threshold, the brightness feature can be determined to lie in the second brightness interval. The parameter correspondence matched with the second interval can be a negatively correlated linear relation: the brightness feature is linearly related to the whitening parameter, and further to the brightening parameter, which decreases as the brightness feature increases.
Fig. 9 is a schematic diagram of the relation between the brightness feature and the whitening parameter in one embodiment. As shown in Fig. 9, three brightness intervals can be divided: a first brightness interval 910 at or below the first brightness threshold, a second brightness interval 920 above the first threshold and below the second threshold, and a third brightness interval 930 at or above the second threshold. When the brightness of the skin region is in the first interval 910, the corresponding brightening parameter can be a first fixed value. When it is in the second interval 920, the brightness and the brightening parameter can have a negatively correlated linear relation, the brightening parameter decreasing as the brightness increases. When it is in the third interval 930, the corresponding brightening parameter can be a second fixed value. Understandably, the brightness of the skin region and the brightening parameter can also follow other parameter correspondences, not limited to those shown in Fig. 9.
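The piecewise correspondence of Fig. 9 can be sketched as follows; the thresholds and the two fixed parameter values are placeholders:

```python
def brightening_parameter(brightness, t1, t2, p_high, p_low):
    """Fig. 9 style correspondence: a first fixed parameter up to t1,
    a negatively correlated linear ramp between t1 and t2, and a second
    fixed parameter from t2 upward."""
    if brightness <= t1:
        return p_high                   # first fixed parameter
    if brightness >= t2:
        return p_low                    # second fixed parameter
    frac = (brightness - t1) / (t2 - t1)
    return p_high + frac * (p_low - p_high)   # decreases as brightness grows
```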
In this embodiment, the parameter correspondence is obtained according to the brightness interval in which the brightness of the skin region lies, and the whitening parameter is calculated according to the parameter correspondence. Selecting the whitening parameter adaptively according to the brightness makes the selected whitening parameter more accurate and gives the image a better visual display effect.
In one embodiment, an image processing method is provided, including the following steps.
Step (1): perform human body recognition on the image to be processed, and determine the limb region of the image to be processed.
Alternatively, determining the limb region of the image to be processed includes: obtaining depth-of-field information of the image to be processed; obtaining the face region of the image to be processed and calculating the average depth of field of the face region according to the depth-of-field information; and obtaining the limb region corresponding to the face region according to the average depth of field.
Step (2): extract the limb features of the limb region.
Step (3): determine the adjustment regions in the limb region according to the limb features, and select a beautification parameter corresponding to each adjustment region.
Alternatively, the limb features include body-shape features, and step (3) includes: selecting a corresponding target body shape according to the body-shape features; comparing the target body shape with the limb region proportionally to generate comparison data; determining slimming regions according to the comparison data; and selecting a slimming parameter corresponding to each slimming region.
Alternatively, the limb features include body-shape features, and step (3) includes: calculating body proportions according to the body-shape features; determining stretch regions according to the body proportions; and selecting a stretch parameter corresponding to each stretch region.
Alternatively, step (2) includes: obtaining the skin region in the limb region, and extracting a skin color feature of the skin region; and step (3) includes: selecting a whitening parameter corresponding to the skin region according to the skin color feature.
Alternatively, extracting the skin color feature of the skin region includes: converting the skin region from a first color space to a second color space; calculating the mean of each second-color-space component over the pixels contained in the skin region; and taking the means of the components as the skin color feature of the skin region.
Alternatively, the skin color feature includes a brightness feature, and selecting the whitening parameter corresponding to the skin region according to the skin color feature includes: determining the brightness interval in which the brightness feature lies, and obtaining the parameter correspondence matched with that brightness interval; and calculating the whitening parameter corresponding to the brightness feature according to the parameter correspondence.
Step (4): perform beautification processing on the corresponding adjustment regions of the limb region according to the beautification parameters.
In this embodiment, human body recognition is performed on the image to be processed to determine its limb region, the limb features of the limb region are extracted, the adjustment regions of the limb region are determined according to the limb features and a beautification parameter is selected for each adjustment region, and the corresponding adjustment regions of the limb region are beautified according to the beautification parameters. The adjustment regions and beautification parameters are thus determined adaptively according to a person's limb features, which improves the beautification effect and gives the image a better visual display effect.
As shown in Fig. 10, in one embodiment, an image processing apparatus 1000 is provided, including an identification module 1010, a feature extraction module 1020, a parameter selection module 1030, and a processing module 1040.
The identification module 1010 is configured to perform human body recognition on the image to be processed and determine the limb region of the image to be processed.
The feature extraction module 1020 is configured to extract the limb features of the limb region.
The parameter selection module 1030 is configured to determine the adjustment regions in the limb region according to the limb features, and select a beautification parameter corresponding to each adjustment region.
The processing module 1040 is configured to perform beautification processing on the corresponding adjustment regions of the limb region according to the beautification parameters.
In this embodiment, human body recognition is performed on the image to be processed to determine its limb region, the limb features of the limb region are extracted, the adjustment regions of the limb region are determined according to the limb features and a beautification parameter is selected for each adjustment region, and the corresponding adjustment regions of the limb region are beautified according to the beautification parameters. The adjustment regions and beautification parameters are thus determined adaptively according to a person's limb features, which improves the beautification effect and gives the image a better visual display effect.
In one embodiment, the identification module 1010 includes a depth-of-field acquisition unit, a depth-of-field calculation unit, and a region acquisition unit.
The depth-of-field acquisition unit is configured to obtain the depth-of-field information of the image to be processed.
The depth-of-field calculation unit is configured to obtain the face region of the image to be processed and calculate the average depth of field of the face region according to the depth-of-field information.
The region acquisition unit is configured to obtain the limb region corresponding to the face region according to the average depth of field.
In this embodiment, the limb region is obtained accurately according to the depth-of-field information of the image to be processed, which makes the extracted limb features and the selected beautification parameters more accurate, improves the beautification effect, and gives the image a better visual display effect.
In one embodiment, the limb features include body-shape features. The parameter selection module 1030 includes a shape selection unit, a generation unit, and a parameter selection unit.
The shape selection unit is configured to select a corresponding target body shape according to the body-shape features.
The generation unit is configured to compare the target body shape with the limb region proportionally and generate comparison data.
The parameter selection unit is configured to determine slimming regions according to the comparison data and select a slimming parameter corresponding to each slimming region.
In this embodiment, a corresponding target body shape is selected according to the body-shape features, and the slimming regions and the slimming parameter of each slimming region are determined according to the target body shape. The slimming regions and parameters are thus selected adaptively according to the body-shape features, which makes them fit the body-shape information of the person in the image more closely and makes the beautified image more natural and realistic.
In one embodiment, the limb features include body-shape features. In addition to the shape selection unit, the generation unit, and the parameter selection unit, the parameter selection module 1030 further includes a proportion calculation unit.
The proportion calculation unit is configured to calculate body proportions according to the body-shape features.
The parameter selection unit is further configured to determine stretch regions according to the body proportions and select a stretch parameter corresponding to each stretch region.
In this embodiment, the body proportions are calculated from the body-shape features, and the stretch regions and the corresponding stretch parameters are selected according to the body proportions. This makes the selected stretch regions and stretch parameters more accurate, improves the beautification effect, and gives the image a better visual display effect.
In one embodiment, the feature extraction module 1020 is further configured to obtain the skin region in the limb region and extract the skin color feature of the skin region.
Alternatively, the feature extraction module 1020 includes a conversion unit and a mean calculation unit.
The conversion unit is configured to convert the skin region from a first color space to a second color space.
The mean calculation unit is configured to calculate the mean of each second-color-space component over the pixels contained in the skin region, and take the means of the components as the skin color feature of the skin region.
The parameter selection module 1030 is further configured to select a whitening parameter corresponding to the skin region according to the skin color feature.
In this embodiment, a suitable whitening parameter is selected for whitening processing according to the skin color feature of the skin region, so the image achieves a better visual display effect.
In one embodiment, in addition to the shape selection unit, the generation unit, the parameter selection unit, and the proportion calculation unit, the parameter selection module 1030 further includes an interval determination unit.
The interval determination unit is configured to determine the brightness interval in which the brightness feature lies and obtain the parameter correspondence matched with that brightness interval.
The parameter selection unit is further configured to calculate the whitening parameter corresponding to the brightness feature according to the parameter correspondence.
In this embodiment, the parameter correspondence is obtained according to the brightness interval in which the brightness of the skin region lies, and the whitening parameter is calculated according to the parameter correspondence. Selecting the whitening parameter adaptively according to the brightness makes the selected whitening parameter more accurate and gives the image a better visual display effect.
An embodiment of the present application also provides an electronic device. The electronic device includes an image processing circuit, which can be implemented with hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Fig. 11 is a schematic diagram of an image processing circuit in one embodiment. As shown in Fig. 11, for ease of illustration, only the aspects of the image processing technique related to the embodiments of the present application are shown.
As shown in Fig. 11, the image processing circuit includes an ISP processor 1140 and a control logic 1150. Image data captured by an imaging device 1110 is first processed by the ISP processor 1140, which analyzes the image data to capture image statistics usable to determine one or more control parameters of the imaging device 1110. The imaging device 1110 may include a camera with one or more lenses 1112 and an image sensor 1114. The image sensor 1114 may include a color filter array (such as a Bayer filter); it can obtain the light intensity and wavelength information captured by each of its imaging pixels and provide a set of raw image data that can be processed by the ISP processor 1140. A sensor 1120 (such as a gyroscope) can provide collected image processing parameters (such as stabilization parameters) to the ISP processor 1140 based on the interface type of the sensor 1120. The sensor 1120 interface can be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.
In addition, the image sensor 1114 can also send the raw image data to the sensor 1120, which can either provide the raw image data to the ISP processor 1140 based on its interface type or store the raw image data into an image memory 1130.
The ISP processor 1140 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel can have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 1140 can perform one or more image processing operations on the raw image data and collect statistics about the image data, where the image processing operations can be performed at the same or different bit-depth precisions.
The ISP processor 1140 can also receive image data from the image memory 1130. For example, the sensor 1120 interface sends raw image data to the image memory 1130, and the raw image data in the image memory 1130 is then provided to the ISP processor 1140 for processing. The image memory 1130 can be part of a memory device, a storage device, or an independent dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the image sensor 1114 interface, the sensor 1120 interface, or the image memory 1130, the ISP processor 1140 can perform one or more image processing operations, such as temporal filtering. The processed image data can be sent to the image memory 1130 for further processing before being displayed. The ISP processor 1140 can also receive processed data from the image memory 1130 and perform image data processing on that data in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 1180 for viewing by the user and/or further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 1140 can also be sent to the image memory 1130, and the display 1180 can read image data from the image memory 1130. In one embodiment, the image memory 1130 can be configured to implement one or more frame buffers. Furthermore, the output of the ISP processor 1140 can be sent to an encoder/decoder 1170 to encode/decode the image data; the encoded image data can be saved and decompressed before being shown on the display 1180.
The steps in which the ISP processor 1140 processes the image data include VFE (Video Front End) processing and CPP (Camera Post Processing) processing. VFE processing of the image data may include correcting the contrast or brightness of the image data, modifying illumination state data recorded in a digital manner, compensation processing of the image data (such as white balance, automatic gain control, and gamma correction), filtering of the image data, and so on. CPP processing of the image data may include scaling the image and providing a preview frame and a record frame to each path, where the CPP can use different codecs to process the preview frame and the record frame.
The image data processed by the ISP processor 1140 can be sent to a beautification module 1160 so that the image can be beautified before being displayed. The beautification processing performed by the beautification module 1160 on the image data may include whitening, skin smoothing, face slimming, blemish removal, eye enlargement, body slimming, and so on. The beautification module 1160 can be the CPU (Central Processing Unit), GPU, coprocessor, or the like of the electronic device. The data processed by the beautification module 1160 can be sent to the encoder/decoder 1170 to encode/decode the image data, and the encoded image data can be saved and decompressed before being shown on the display 1180. Alternatively, the beautification module 1160 can also be located between the encoder/decoder 1170 and the display 1180, that is, the beautification module beautifies the already-imaged image. The encoder/decoder 1170 can be the CPU, GPU, coprocessor, or the like of the electronic device.
The statistical data determined by the ISP processor 1140 can be sent to the control logic 1150. For example, the statistical data may include statistics from the image sensor 1114 such as automatic exposure, automatic white balance, automatic focus, flicker detection, black level compensation, and shading correction of the lens 1112. The control logic 1150 may include a processor and/or microcontroller executing one or more routines (such as firmware), and the one or more routines can determine, according to the received statistical data, control parameters of the imaging device 1110 and control parameters of the ISP processor 1140. For example, the control parameters of the imaging device 1110 may include control parameters of the sensor 1120 (such as the gain and the integration time of the exposure control), camera flash control parameters, control parameters of the lens 1112 (such as the focal length for focusing or zooming), or combinations of these parameters. The ISP control parameters may include the gain levels and the color correction matrix used for automatic white balance and color adjustment (for example, during RGB processing), as well as the shading correction parameters of the lens 1112.
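To make the ISP control parameters concrete, the sketch below applies AWB gain levels and then a 3x3 color correction matrix to an RGB image, as the paragraph above describes. The row-vector matrix layout (rows of `ccm` map to output channels) and the clipping to [0, 1] are assumptions.

```python
import numpy as np

def apply_awb_and_ccm(rgb, gains, ccm):
    """Apply AWB gain levels, then a 3x3 color correction matrix.
    `rgb` is an HxWx3 float array in [0, 1]; `gains` is a per-channel
    triple; the matrix layout is an assumed convention."""
    balanced = rgb * np.asarray(gains, dtype=np.float64)
    corrected = balanced @ np.asarray(ccm).T  # per-pixel 3x3 multiply
    return np.clip(corrected, 0.0, 1.0)
```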
In the present embodiment, the image processing method described above can be implemented with the image processing techniques in Figure 11.
In one embodiment, an electronic device is provided, including a memory and a processor. A computer program is stored in the memory, and when the computer program is executed by the processor, the processor performs the following steps:
carrying out human body recognition on an image to be processed, and determining a limb region of the image to be processed;
extracting limb features of the limb region;
determining adjustment regions of the limb region according to the limb features, and choosing a beautification parameter corresponding to each adjustment region; and
beautifying the corresponding adjustment regions of the limb region according to the beautification parameters.
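The four steps above can be sketched as a pipeline skeleton. The four callables and their signatures are placeholders supplied by the caller, not part of the patent:

```python
def beautify_image(image, recognize, extract, choose, apply):
    """Skeleton of the four claimed steps; each stage is a
    caller-supplied callable with an assumed signature."""
    limb_region = recognize(image)        # step 1: body recognition
    features = extract(limb_region)       # step 2: limb features
    plan = choose(limb_region, features)  # step 3: regions + parameters
    return apply(image, plan)             # step 4: beautification
```

Keeping the stages as separate callables mirrors the module split in the apparatus claim (recognition, feature extraction, parameter choosing, processing).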
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. The computer program, when executed by a processor, implements the image processing method described above.
In one embodiment, a computer program product including a computer program is provided. When the product runs on an electronic device, the electronic device implements the image processing method described above.
One of ordinary skill in the art will appreciate that all or part of the flow of the methods in the above embodiments can be completed by a computer program instructing related hardware. The program can be stored in a non-volatile computer-readable storage medium and, when executed, may include the flow of the embodiments of each of the above methods. The storage medium may be a magnetic disk, an optical disk, a read-only memory (Read-Only Memory, ROM), or the like.
Any reference to memory, storage, a database, or another medium as used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. Volatile memory may include random access memory (RAM), which acts as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The technical features of the embodiments described above can be combined arbitrarily. To keep the description concise, not all possible combinations of the technical features in the above embodiments have been described; however, as long as a combination of these technical features contains no contradiction, it is considered to be within the scope recorded in this specification.
The embodiments described above express only several implementations of the application, and their description is relatively specific and detailed, but they cannot therefore be construed as limiting the scope of the patent. It should be pointed out that, for one of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the application, and these all belong to the protection scope of the application. Therefore, the protection scope of the application patent shall be determined by the appended claims.
Claims (10)
- 1. An image processing method, characterized by comprising: carrying out human body recognition on an image to be processed, and determining a limb region of the image to be processed; extracting limb features of the limb region; determining adjustment regions of the limb region according to the limb features, and choosing a beautification parameter corresponding to each adjustment region; and beautifying the corresponding adjustment regions of the limb region according to the beautification parameters.
- 2. The method according to claim 1, characterized in that determining the limb region of the image to be processed comprises: obtaining depth-of-field information of the image to be processed; obtaining a face region of the image to be processed, and calculating an average depth of field of the face region according to the depth-of-field information; and obtaining the limb region corresponding to the face region according to the average depth of field.
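A minimal sketch of the depth-of-field logic in claim 2, assuming a per-pixel depth map, a face bounding box, and an arbitrary tolerance threshold as the criterion for "corresponding to" the average depth; the claim itself does not fix these details:

```python
import numpy as np

def limb_region_from_depth(depth_map, face_box, tolerance=0.15):
    """Average the depth over the face box, then mark pixels whose
    depth lies within `tolerance` of that average as the candidate
    limb region. `tolerance` is an assumed threshold."""
    x0, y0, x1, y1 = face_box
    avg_depth = float(depth_map[y0:y1, x0:x1].mean())
    mask = np.abs(depth_map - avg_depth) <= tolerance
    return mask, avg_depth
```

The intuition is that a person's limbs sit at roughly the same camera distance as the face, so thresholding around the face's average depth separates the body from the background.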
- 3. The method according to claim 1 or 2, characterized in that the limb features include body-shape features; and determining the adjustment regions of the limb region according to the limb features and choosing the beautification parameter corresponding to each adjustment region comprises: choosing a corresponding target body shape according to the body-shape features; comparing the target body shape with the limb region in proportion to generate comparison data; and determining slimming regions according to the comparison data, and choosing a slimming parameter corresponding to each slimming region.
- 4. The method according to claim 1 or 2, characterized in that the limb features include body-shape features; and determining the adjustment regions of the limb region according to the limb features and choosing the beautification parameter corresponding to each adjustment region comprises: calculating body proportions according to the body-shape features; and determining stretch regions according to the body proportions, and choosing a stretch parameter corresponding to each stretch region.
- 5. The method according to claim 1 or 2, characterized in that extracting the limb features of the limb region comprises: obtaining a skin region of the limb region, and extracting skin-color features of the skin region; and determining the adjustment regions of the limb region according to the limb features and choosing the beautification parameter corresponding to each adjustment region comprises: choosing a whitening parameter corresponding to the skin region according to the skin-color features.
- 6. The method according to claim 4, characterized in that extracting the skin-color features of the skin region comprises: converting the skin region from a first color space to a second color space; and calculating the mean of each component, in the second color space, of the pixels included in the skin region, and taking the mean of each component as the skin-color features of the skin region.
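One plausible reading of claim 6, assuming RGB as the first color space and BT.601 YCrCb as the second; the conversion coefficients are the standard BT.601 ones, and the choice of spaces is an assumption, since the claim names neither:

```python
import numpy as np

def skin_color_feature(rgb):
    """Convert a skin region from RGB to YCrCb (BT.601, full range,
    8-bit offsets) and return the mean of each component as the
    skin-color feature, per claim 6."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = (r - y) * 0.713 + 128.0
    cb = (b - y) * 0.564 + 128.0
    return float(y.mean()), float(cr.mean()), float(cb.mean())
```

YCrCb is a common choice here because luminance (Y) and chroma (Cr, Cb) separate, so the mean Y can drive whitening while the mean chroma characterizes skin tone.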
- 7. The method according to claim 4 or 5, characterized in that the skin-color features include brightness; and choosing the whitening parameter corresponding to the skin region according to the skin-color features comprises: determining the brightness interval in which the brightness falls, and obtaining a parameter correspondence matching the brightness interval; and calculating the whitening parameter corresponding to the brightness according to the parameter correspondence.
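A hedged sketch of the interval lookup in claim 7: the brightness intervals and the linear parameter correspondences in the table below are invented for illustration; the claim only requires that some correspondence be matched per interval and evaluated:

```python
def whitening_parameter(brightness, table=None):
    """Find the interval the brightness falls in, look up the matching
    correspondence, and compute the whitening parameter. Interval
    bounds and linear mappings are assumed values."""
    # (low, high, slope, intercept): parameter = slope * brightness + intercept
    table = table or [
        (0, 85, -0.004, 0.9),     # dark tones: stronger whitening
        (85, 170, -0.002, 0.6),   # mid tones: moderate whitening
        (170, 256, -0.001, 0.3),  # bright tones: light whitening
    ]
    for low, high, slope, intercept in table:
        if low <= brightness < high:
            return slope * brightness + intercept
    raise ValueError("brightness outside the tabled intervals")
```

A piecewise mapping like this lets darker skin receive a stronger lift while already-bright skin is barely touched, matching the adaptive-parameter idea in the abstract.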
- 8. An image processing apparatus, characterized by comprising: a recognition module, configured to carry out human body recognition on an image to be processed and determine a limb region of the image to be processed; a feature extraction module, configured to extract limb features of the limb region; a parameter choosing module, configured to determine adjustment regions of the limb region according to the limb features and choose a beautification parameter corresponding to each adjustment region; and a processing module, configured to beautify the corresponding adjustment regions of the limb region according to the beautification parameters.
- 9. An electronic device, including a memory and a processor, wherein a computer program is stored in the memory, and the computer program, when executed by the processor, causes the processor to implement the method according to any one of claims 1 to 7.
- 10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711045718.2A CN107808137A (en) | 2017-10-31 | 2017-10-31 | Image processing method, device, electronic equipment and computer-readable recording medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201711045718.2A CN107808137A (en) | 2017-10-31 | 2017-10-31 | Image processing method, device, electronic equipment and computer-readable recording medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN107808137A true CN107808137A (en) | 2018-03-16 |
Family
ID=61582984
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201711045718.2A Pending CN107808137A (en) | 2017-10-31 | 2017-10-31 | Image processing method, device, electronic equipment and computer-readable recording medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107808137A (en) |
Citations (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103325089A (en) * | 2012-03-21 | 2013-09-25 | 腾讯科技(深圳)有限公司 | Method and device for processing skin color in image |
CN103632165A (en) * | 2013-11-28 | 2014-03-12 | 小米科技有限责任公司 | Picture processing method, device and terminal equipment |
CN104902177A (en) * | 2015-05-26 | 2015-09-09 | 广东欧珀移动通信有限公司 | Intelligent photographing method and terminal |
CN105513007A (en) * | 2015-12-11 | 2016-04-20 | 惠州Tcl移动通信有限公司 | Mobile terminal based photographing beautifying method and system, and mobile terminal |
CN105530435A (en) * | 2016-02-01 | 2016-04-27 | 深圳市金立通信设备有限公司 | Shooting method and mobile terminal |
CN105657287A (en) * | 2015-08-24 | 2016-06-08 | 宇龙计算机通信科技(深圳)有限公司 | Backlighting scene detection method and device and imaging device |
CN106056552A (en) * | 2016-05-31 | 2016-10-26 | 努比亚技术有限公司 | Image processing method and mobile terminal |
CN106991654A (en) * | 2017-03-09 | 2017-07-28 | 广东欧珀移动通信有限公司 | Human body beautification method and apparatus and electronic installation based on depth |
CN107274354A (en) * | 2017-05-22 | 2017-10-20 | 奇酷互联网络科技(深圳)有限公司 | image processing method, device and mobile terminal |
CN107277299A (en) * | 2017-07-27 | 2017-10-20 | 广东欧珀移动通信有限公司 | Image processing method, device, mobile terminal and computer-readable recording medium |
- 2017-10-31: CN CN201711045718.2A patent/CN107808137A/en active Pending
Cited By (42)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108540716A (en) * | 2018-03-29 | 2018-09-14 | 广东欧珀移动通信有限公司 | Image processing method, device, electronic equipment and computer readable storage medium |
WO2019227915A1 (en) * | 2018-05-31 | 2019-12-05 | 北京市商汤科技开发有限公司 | Image processing method and apparatus, electronic device, and storage medium |
CN108765274A (en) * | 2018-05-31 | 2018-11-06 | 北京市商汤科技开发有限公司 | A kind of image processing method, device and computer storage media |
CN108830784A (en) * | 2018-05-31 | 2018-11-16 | 北京市商汤科技开发有限公司 | A kind of image processing method, device and computer storage medium |
CN108830783A (en) * | 2018-05-31 | 2018-11-16 | 北京市商汤科技开发有限公司 | A kind of image processing method, device and computer storage medium |
US11410268B2 (en) | 2018-05-31 | 2022-08-09 | Beijing Sensetime Technology Development Co., Ltd | Image processing methods and apparatuses, electronic devices, and storage media |
US11288796B2 (en) * | 2018-05-31 | 2022-03-29 | Beijing Sensetime Technology Development Co., Ltd. | Image processing method, terminal device, and computer storage medium |
US11216904B2 (en) | 2018-05-31 | 2022-01-04 | Beijing Sensetime Technology Development Co., Ltd. | Image processing method and apparatus, electronic device, and storage medium |
CN108830783B (en) * | 2018-05-31 | 2021-07-02 | 北京市商汤科技开发有限公司 | Image processing method and device and computer storage medium |
CN110555806A (en) * | 2018-05-31 | 2019-12-10 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
CN110555794A (en) * | 2018-05-31 | 2019-12-10 | 北京市商汤科技开发有限公司 | image processing method and device, electronic equipment and storage medium |
CN109376575A (en) * | 2018-08-20 | 2019-02-22 | 奇酷互联网络科技(深圳)有限公司 | Method, mobile terminal and the storage medium that human body in image is beautified |
CN109166082A (en) * | 2018-08-22 | 2019-01-08 | Oppo广东移动通信有限公司 | Image processing method, device, electronic equipment and computer readable storage medium |
CN109191396A (en) * | 2018-08-22 | 2019-01-11 | Oppo广东移动通信有限公司 | Facial image processing method and apparatus, electronic equipment, computer readable storage medium |
CN109190533A (en) * | 2018-08-22 | 2019-01-11 | Oppo广东移动通信有限公司 | Image processing method and device, electronic equipment, computer readable storage medium |
CN109191396B (en) * | 2018-08-22 | 2021-01-08 | Oppo广东移动通信有限公司 | Portrait processing method and device, electronic equipment and computer readable storage medium |
CN109035177A (en) * | 2018-08-27 | 2018-12-18 | 三星电子(中国)研发中心 | A kind of photo processing method and device |
CN109242868A (en) * | 2018-09-17 | 2019-01-18 | 北京旷视科技有限公司 | Image processing method, device, electronic equipment and storage medium |
CN109325907A (en) * | 2018-09-18 | 2019-02-12 | 北京旷视科技有限公司 | Image landscaping treatment method, apparatus and system |
CN109461124A (en) * | 2018-09-21 | 2019-03-12 | 维沃移动通信(杭州)有限公司 | A kind of image processing method and terminal device |
CN109447896A (en) * | 2018-09-21 | 2019-03-08 | 维沃移动通信(杭州)有限公司 | A kind of image processing method and terminal device |
CN109447896B (en) * | 2018-09-21 | 2023-07-25 | 维沃移动通信(杭州)有限公司 | Image processing method and terminal equipment |
CN110942422A (en) * | 2018-09-21 | 2020-03-31 | 北京市商汤科技开发有限公司 | Image processing method and device and computer storage medium |
JP7090169B2 (en) | 2018-09-21 | 2022-06-23 | ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド | Image processing methods, equipment and computer storage media |
JP2021515313A (en) * | 2018-09-21 | 2021-06-17 | ベイジン センスタイム テクノロジー デベロップメント カンパニー, リミテッド | Image processing methods, devices and computer storage media |
CN109658355A (en) * | 2018-12-19 | 2019-04-19 | 维沃移动通信有限公司 | A kind of image processing method and device |
WO2020134891A1 (en) * | 2018-12-26 | 2020-07-02 | 华为技术有限公司 | Photo previewing method for electronic device, graphical user interface and electronic device |
CN109495688A (en) * | 2018-12-26 | 2019-03-19 | 华为技术有限公司 | Method for previewing of taking pictures, graphic user interface and the electronic equipment of electronic equipment |
CN109872283A (en) * | 2019-01-18 | 2019-06-11 | 维沃移动通信有限公司 | A kind of image processing method and mobile terminal |
CN110136051A (en) * | 2019-04-30 | 2019-08-16 | 北京市商汤科技开发有限公司 | A kind of image processing method, device and computer storage medium |
US11501407B2 (en) | 2019-04-30 | 2022-11-15 | Beijing Sensetime Technology Development Co., Ltd. | Method and apparatus for image processing, and computer storage medium |
CN110290395B (en) * | 2019-06-14 | 2021-05-25 | 北京奇艺世纪科技有限公司 | Image processing method and device and computer readable storage medium |
CN110290395A (en) * | 2019-06-14 | 2019-09-27 | 北京奇艺世纪科技有限公司 | A kind of image processing method, device and computer readable storage medium |
CN110751668B (en) * | 2019-09-30 | 2022-12-27 | 北京迈格威科技有限公司 | Image processing method, device, terminal, electronic equipment and readable storage medium |
CN110751668A (en) * | 2019-09-30 | 2020-02-04 | 北京迈格威科技有限公司 | Image processing method, device, terminal, electronic equipment and readable storage medium |
CN112488965A (en) * | 2020-12-23 | 2021-03-12 | 联想(北京)有限公司 | Image processing method and device |
CN112651956A (en) * | 2020-12-30 | 2021-04-13 | 深圳云天励飞技术股份有限公司 | Image processing method, image processing device, electronic equipment and storage medium |
CN112651956B (en) * | 2020-12-30 | 2024-05-03 | 深圳云天励飞技术股份有限公司 | Image processing method, device, electronic equipment and storage medium |
CN113194267A (en) * | 2021-04-29 | 2021-07-30 | 北京达佳互联信息技术有限公司 | Image processing method and device and photographing method and device |
CN113194267B (en) * | 2021-04-29 | 2023-03-24 | 北京达佳互联信息技术有限公司 | Image processing method and device and photographing method and device |
CN113572955A (en) * | 2021-06-25 | 2021-10-29 | 维沃移动通信(杭州)有限公司 | Image processing method and device and electronic equipment |
CN113763286A (en) * | 2021-09-27 | 2021-12-07 | 北京市商汤科技开发有限公司 | Image processing method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107808137A (en) | Image processing method, device, electronic equipment and computer-readable recording medium | |
CN107730446B (en) | Image processing method, image processing device, computer equipment and computer readable storage medium | |
CN107680128A (en) | Image processing method, device, electronic equipment and computer-readable recording medium | |
CN107862657A (en) | Image processing method, device, computer equipment and computer-readable recording medium | |
CN107808136B (en) | Image processing method, image processing device, readable storage medium and computer equipment | |
CN108537155B (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
CN108537749B (en) | Image processing method, image processing device, mobile terminal and computer readable storage medium | |
CN107730444B (en) | Image processing method, image processing device, readable storage medium and computer equipment | |
CN107945135B (en) | Image processing method, image processing apparatus, storage medium, and electronic device | |
CN107766831A (en) | Image processing method, device, mobile terminal and computer-readable recording medium | |
CN107509031A (en) | Image processing method, device, mobile terminal and computer-readable recording medium | |
CN107818305A (en) | Image processing method, device, electronic equipment and computer-readable recording medium | |
CN107909057A (en) | Image processing method, device, electronic equipment and computer-readable recording medium | |
CN107862653B (en) | Image display method, image display device, storage medium and electronic equipment | |
CN108009999A (en) | Image processing method, device, computer-readable recording medium and electronic equipment | |
CN107993209B (en) | Image processing method, image processing device, computer-readable storage medium and electronic equipment | |
CN108540716A (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
CN107945107A (en) | Image processing method, device, computer-readable recording medium and electronic equipment | |
CN108055452A (en) | Image processing method, device and equipment | |
CN107742274A (en) | Image processing method, device, computer-readable recording medium and electronic equipment | |
CN107451969A (en) | Image processing method, device, mobile terminal and computer-readable recording medium | |
CN108022206A (en) | Image processing method, device, electronic equipment and computer-readable recording medium | |
CN109191403A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN107800965B (en) | Image processing method, device, computer readable storage medium and computer equipment | |
CN108616700B (en) | Image processing method and device, electronic equipment and computer readable storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20180316 |