CN109255763A - Image processing method, device, equipment and storage medium - Google Patents
- Publication number: CN109255763A
- Application number: CN201810985872.6A
- Authority
- CN
- China
- Prior art keywords
- region
- brightness
- face
- light
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/77
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
Abstract
The present invention provides an image processing method, apparatus, device and storage medium. The method comprises: determining the face range in an image; dividing the face range into multiple regions according to the facial features; obtaining the brightness value of each region; and, according to the brightness values, applying different light compensation processing to each region. The present invention can apply different degrees of light compensation processing to the different regions of a face in an image, so that the beautified image looks more natural and realistic.
Description
Technical field
The present invention relates to the technical field of image processing, and in particular to an image processing method, apparatus, device and storage medium.
Background technique
With the development of terminal technology, more and more terminals are equipped with a camera function. After shooting a photo, a user can also apply beautification processing to a portrait image through the terminal, to improve the shooting effect of the portrait.
Currently, most terminals with a camera function can install beautification applications, and the beautification tools in these applications are used to apply beautification processing to photos taken by the terminal.
However, existing beautification applications apply beautification processing to the image as a whole; if the user wants to beautify only part of the image, a series of complex manual operations is required, the operating efficiency is low, and the beautification effect is poor.
Summary of the invention
The present invention provides an image processing method, apparatus, device and storage medium, which can apply different degrees of light compensation processing to the different regions of a face in an image, so that the beautified image looks more natural.
In a first aspect, an embodiment of the present invention provides an image processing method, comprising:
determining the face range in an image;
dividing the face range into multiple regions according to the facial features;
obtaining the brightness value of each region;
according to the brightness values, applying different light compensation processing to each region.
In a possible design, determining the face range in the image comprises:
performing feature extraction on the image to obtain image feature information, where the image feature information includes the coordinate positions of the eyes, nose, lips and chin in the image;
inputting the image feature information into a target model, and outputting the coordinate position of the face range in the image through the target model; wherein the target model is a machine learning model obtained by training with test images containing faces as input and the ground-truth face-range coordinate positions marked in the test images as the target output.
In a possible design, dividing the face range into multiple regions according to the facial features comprises:
dividing the face range, according to the distribution of the facial features, into a forehead region, a left eye region, a right eye region, a nose region, a left cheek region, a right cheek region, a lip region and a chin region.
In a possible design, obtaining the brightness value of each region comprises:
projecting test light onto the face range, and receiving the light reflected from each region;
determining the brightness value of each region according to the intensity of the reflected light.
In a possible design, obtaining the brightness value of each region comprises:
calculating the average brightness value of the pixels within each region, and taking the average brightness value as the brightness value of the corresponding region.
In a possible design, applying different light compensation processing to each region according to the brightness values comprises:
converting the brightness value of each region into the brightness score of the corresponding region according to a preset conversion rule;
applying different light compensation processing to each region according to the brightness scores.
In a possible design, converting the brightness value of each region into the brightness score of the corresponding region according to the preset conversion rule comprises:
determining, according to a preset conversion table, the threshold interval to which the brightness value of each region belongs, where different threshold intervals correspond to different brightness scores;
taking the brightness score corresponding to the threshold interval as the brightness score of the region.
In a possible design, applying different light compensation processing to each region according to the brightness scores comprises:
determining the compensation light intensity of a region according to the brightness score of the region, where the higher the brightness score, the higher the corresponding compensation light intensity;
applying light compensation processing to the region according to the compensation light intensity of the region.
In a possible design, before applying different light compensation processing to each region according to the brightness values, the method further comprises:
determining the light source type and/or filter color of the compensation light according to operation information input by the user; wherein the light source type includes a cold light source, a warm light source and lamp light.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, comprising:
a determining module, configured to determine the face range in an image;
a dividing module, configured to divide the face range into multiple regions according to the facial features;
an obtaining module, configured to obtain the brightness value of each region;
a compensating module, configured to apply different light compensation processing to each region according to the brightness values.
In a possible design, the determining module is specifically configured to:
perform feature extraction on the image to obtain image feature information, where the image feature information includes the coordinate positions of the eyes, nose, lips and chin in the image;
input the image feature information into a target model, and output the coordinate position of the face range in the image through the target model; wherein the target model is a machine learning model obtained by training with test images containing faces as input and the ground-truth face-range coordinate positions marked in the test images as the target output.
In a possible design, the dividing module is specifically configured to:
divide the face range, according to the distribution of the facial features, into a forehead region, a left eye region, a right eye region, a nose region, a left cheek region, a right cheek region, a lip region and a chin region.
In a possible design, the obtaining module is specifically configured to:
project test light onto the face range, and receive the light reflected from each region;
determine the brightness value of each region according to the intensity of the reflected light.
In a possible design, the obtaining module is further configured to:
calculate the average brightness value of the pixels within each region, and take the average brightness value as the brightness value of the corresponding region.
In a possible design, the compensating module is specifically configured to:
convert the brightness value of each region into the brightness score of the corresponding region according to a preset conversion rule;
apply different light compensation processing to each region according to the brightness scores.
In a possible design, converting the brightness value of each region into the brightness score of the corresponding region according to the preset conversion rule comprises:
determining, according to a preset conversion table, the threshold interval to which the brightness value of each region belongs, where different threshold intervals correspond to different brightness scores;
taking the brightness score corresponding to the threshold interval as the brightness score of the region.
In a possible design, applying different light compensation processing to each region according to the brightness scores comprises:
determining the compensation light intensity of a region according to the brightness score of the region, where the higher the brightness score, the higher the corresponding compensation light intensity;
applying light compensation processing to the region according to the compensation light intensity of the region.
In a possible design, the apparatus further comprises:
an interactive module, configured to determine, before different light compensation processing is applied to each region according to the brightness values, the light source type and/or filter color of the compensation light according to operation information input by the user; wherein the light source type includes a cold light source, a warm light source and lamp light.
In a third aspect, an embodiment of the present invention provides an image processing device, comprising a processor and a memory, where the memory stores instructions executable by the processor; the processor is configured to execute the executable instructions to perform the image processing method of any one of the first aspect.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium on which a computer program is stored; when the program is executed by a processor, the image processing method of any one of the first aspect is implemented.
In a fifth aspect, an embodiment of the present invention provides a program product, where the program product comprises a computer program stored in a readable storage medium; at least one processor of a server can read the computer program from the readable storage medium, and the at least one processor executes the computer program to cause the server to implement the image processing method of any one of the embodiments of the first aspect.
In a sixth aspect, an embodiment of the present invention provides a photographing terminal, comprising a camera, a processor and a memory; the camera is configured to shoot photos and send the shot photos to the processor;
the processor calls the computer program in the memory to execute the image processing method of the first aspect, so as to process the photos taken by the camera.
The image processing method, apparatus, device and storage medium provided by the present invention determine the face range in an image; divide the face range into multiple regions according to the facial features; obtain the brightness value of each region; and, according to the brightness values, apply different light compensation processing to each region. The present invention can apply different degrees of light compensation processing to the different regions of a face in an image, so that the beautified image looks more natural. In addition, in an optional scheme, the present invention can also determine the light source type and/or filter color of the compensation light according to operation information input by the user, so as to improve the effect of light compensation.
Detailed description of the invention
In order to explain the technical solutions in the embodiments of the present invention or in the prior art more clearly, the accompanying drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings based on these drawings without creative effort.
Fig. 1(a) is a schematic diagram of the image before processing in an application scenario of the present invention;
Fig. 1(b) is a schematic diagram of the image during processing in an application scenario of the present invention;
Fig. 1(c) is a schematic diagram of the image after processing in an application scenario of the present invention;
Fig. 2 is a flowchart of the image processing method provided by Embodiment 1 of the present invention;
Fig. 3 is a flowchart of the image processing method provided by Embodiment 2 of the present invention;
Fig. 4 is a flowchart of the image processing method provided by Embodiment 3 of the present invention;
Fig. 5 is a schematic structural diagram of the image processing apparatus provided by Embodiment 4 of the present invention;
Fig. 6 is a schematic structural diagram of the image processing apparatus provided by Embodiment 5 of the present invention;
Fig. 7 is a schematic structural diagram of the image processing device provided by Embodiment 6 of the present invention.
The above drawings show specific embodiments of the present disclosure, which are described in more detail hereinafter. These drawings and the accompanying text are not intended to limit the scope of the concept of the present disclosure in any way, but to illustrate the concept of the present disclosure to those skilled in the art by reference to specific embodiments.
Specific embodiment
To make the objectives, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative effort shall fall within the protection scope of the present invention.
The terms "first", "second", "third", "fourth", etc. (if present) in the description, the claims and the above drawings are used to distinguish similar objects, and are not used to describe a particular order or sequence. It should be understood that the data used in this way are interchangeable under appropriate circumstances, so that the embodiments of the present invention described herein can be implemented, for example, in an order other than those illustrated or described herein. In addition, the terms "comprise" and "have" and any variations thereof are intended to cover non-exclusive inclusion; for example, a process, method, system, product or device that comprises a series of steps or units is not necessarily limited to the steps or units explicitly listed, but may include other steps or units that are not explicitly listed or that are inherent to such process, method, product or device.
The technical solution of the present invention is described in detail below with specific embodiments. The following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments.
Fig. 1(a), Fig. 1(b) and Fig. 1(c) show the image processing flow of an application scenario. As shown in Fig. 1(a), a stored photo (which may also be a photo obtained by direct shooting) containing a face area is first obtained. By performing feature extraction on the photo, the feature information of the photo is obtained, and the face range is located according to the feature information. For example, positions with special shapes, such as the eyes and lips, can be rapidly extracted from the photo, and the coverage of the eye and lip positions is then locked; the coverage is determined as the face range in the photo. As shown in Fig. 1(b), after the face range 10 is determined, the face range 10 is divided into multiple regions according to the facial features, such as a left eye region 11, a right eye region 12, a left cheek region 13, a nose region 14, a lip region 15, etc. It should be noted that the specific area of each region is not limited in this embodiment. The brightness value of each region is then obtained; according to the brightness values, different light compensation processing is applied to each region; finally, a beautified image in which the different regions of the face have received different degrees of light compensation is obtained, where the processed image is shown in Fig. 1(c). The beautified image obtained by the above method is more natural and realistic.
How the technical solutions of the present invention and of the present application solve the above technical problems is described in detail below with specific embodiments. The following specific embodiments can be combined with each other, and the same or similar concepts or processes may not be repeated in some embodiments. The embodiments of the present invention are described below with reference to the accompanying drawings.
Fig. 2 is a flowchart of the image processing method provided by Embodiment 1 of the present invention. As shown in Fig. 2, the method in this embodiment may include:
S101, determining the face range in the image.
In this embodiment, a stored photo (which may also be a photo obtained by direct shooting) containing a face area can be obtained. By performing feature extraction on the photo, the feature information of the photo is obtained, and the face range is located according to the feature information. For example, positions with special shapes, such as the eyes and lips, can be rapidly extracted from the photo, and the coverage of the eye and lip positions is then locked; the coverage is determined as the face range in the photo.
In an optional embodiment, feature extraction can be performed on the image to obtain image feature information, where the image feature information includes the coordinate positions of the eyes, nose, lips and chin in the image; the image feature information is input into a target model, and the coordinate position of the face range in the image is output through the target model; the target model is a machine learning model obtained by training with test images containing faces as input and the ground-truth face-range coordinate positions marked in the test images as the target output.
In another optional embodiment, a face selection box can also be displayed on the display interface of the terminal. The user can drag the face selection box on the display interface of the terminal, so as to customize the selected face range in the image. Optionally, the user can control the size of the face selection box through a key, or by generating a touch signal on the terminal interface. It should be noted that the number and specific shape of the face selection boxes are not limited in this embodiment. For example, the face selection box can be a rectangle, a circle, an ellipse, etc.
S102, dividing the face range into multiple regions according to the facial features.
In an optional embodiment, the face range can be divided, according to the distribution of the facial features, into a forehead region, a left eye region, a right eye region, a nose region, a left cheek region, a right cheek region, a lip region and a chin region.
In another optional embodiment, a face region division layer can also be displayed on the display interface of the terminal. The face region division layer contains multiple region frames arranged in a preset layout. The user can drag the face region division layer onto the face range determined in step S101, and then manually fine-tune the size and position of each region frame, so that the face range can be divided into regions more accurately.
Specifically, taking the photo shown in Fig. 1 as an example, after the face range 10 is determined, the face range 10 is divided into: a left eye region 11, a right eye region 12, a left cheek region 13, a nose region 14 and a lip region 15. At this point, the divided region frames (shown as dashed lines in Fig. 1) can be displayed on the terminal interface of the user, and the user can manually adjust the size and position of the region frames.
Further, when the face image is backlit or the face range is partially occluded, the number of region frames can be increased or decreased as needed, so as to avoid beautification processing of non-face areas, which would make the photo less realistic.
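The region division of step S102 can be sketched as follows, assuming (purely for illustration) that the region frames are cut from the face range with fixed proportions; in the embodiment the user may fine-tune each frame manually, so these ratios are not prescribed by the patent.

```python
def divide_face_range(box):
    """box: (left, top, right, bottom). Returns dict of region boxes."""
    left, top, right, bottom = box
    w, h = right - left, bottom - top

    def sub(x0, y0, x1, y1):  # sub-box given as fractions of the face box
        return (left + x0 * w, top + y0 * h, left + x1 * w, top + y1 * h)

    # Illustrative split ratios for the eight regions named in the claims.
    return {
        "forehead":    sub(0.0, 0.00, 1.0, 0.25),
        "left_eye":    sub(0.0, 0.25, 0.4, 0.45),
        "right_eye":   sub(0.6, 0.25, 1.0, 0.45),
        "nose":        sub(0.4, 0.25, 0.6, 0.65),
        "left_cheek":  sub(0.0, 0.45, 0.4, 0.75),
        "right_cheek": sub(0.6, 0.45, 1.0, 0.75),
        "lips":        sub(0.2, 0.65, 0.8, 0.85),
        "chin":        sub(0.0, 0.85, 1.0, 1.00),
    }

regions = divide_face_range((100, 72, 220, 293))
```

Each returned frame could then be rendered as a draggable box for the manual adjustment described above.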
S103, obtaining the brightness value of each region.
In an optional embodiment, test light can be projected onto the face range, and the light reflected from each region is received; the brightness value of each region is determined according to the intensity of the reflected light. The intensity of the reflected light is proportional to the brightness value of the region: the stronger the reflected light, the larger the brightness value of the corresponding region.
In another optional embodiment, the average brightness value of the pixels within each region can be calculated, and the average brightness value is taken as the brightness value of the corresponding region. Specifically, the face image can be converted to a grayscale image, the average brightness value of all pixels in each region is obtained, and this average value is taken as the final brightness value of the region.
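The grayscale-average variant of step S103 can be sketched as below. The Rec. 601 luma weights used for the gray conversion are an assumption; the patent does not name a particular conversion.

```python
def region_brightness(pixels):
    """pixels: list of (r, g, b) tuples for one region, each 0..255.
    Converts each pixel to a gray value and returns the region mean."""
    gray = [0.299 * r + 0.587 * g + 0.114 * b for r, g, b in pixels]
    return sum(gray) / len(gray)

# A uniformly mid-gray region averages to its own gray level.
value = region_brightness([(128, 128, 128)] * 4)
```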
S104, applying different light compensation processing to each region according to the brightness values.
In an optional embodiment, the brightness value of each region can be converted into the brightness score of the corresponding region according to a preset conversion rule; different light compensation processing is applied to each region according to the brightness scores.
Optionally, the threshold interval to which the brightness value of each region belongs can be determined according to a preset conversion table, where different threshold intervals correspond to different brightness scores; the brightness score corresponding to the threshold interval is taken as the brightness score of the region.
It should be noted that the conversion table in this embodiment is obtained in advance from the results of many tests; the brightness values in the conversion table are divided into multiple threshold intervals, and each threshold interval corresponds to one brightness score. In practical applications, the conversion table is not fixed; for example, it can be adjusted according to the shooting mode or the background color of the photo.
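The conversion-table lookup can be sketched as follows. The interval boundaries and scores below are illustrative assumptions; the patent derives the real table from test results.

```python
import bisect

# Half-open intervals [lower, upper) over the 0..255 brightness scale.
UPPER_BOUNDS = [32, 64, 96, 128, 160, 192, 224, 256]  # interval upper edges
SCORES       = [ 1,  2,  3,   4,   5,   6,   7,   8]  # one score per interval

def brightness_score(brightness):
    """Return the score of the threshold interval containing `brightness`."""
    return SCORES[bisect.bisect_right(UPPER_BOUNDS, brightness)]

score = brightness_score(128.0)  # 128 falls in [128, 160), giving score 5
```

Adjusting the table for a different shooting mode would simply mean swapping in different `UPPER_BOUNDS` and `SCORES` lists.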
Optionally, the compensation light intensity of a region can be determined according to the brightness score of the region, where the higher the brightness score, the higher the corresponding compensation light intensity; light compensation processing is applied to the region according to the compensation light intensity of the region.
Specifically, light compensation processing is applied to the different regions according to their brightness scores. For example: if the brightness score of the eyes is 8 and the brightness score of the chin is 2, then 80% of the compensation light is applied to the eyes and 20% of the compensation light is applied to the chin.
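One way to read the eyes-8 / chin-2 example, consistent with the rule that a higher brightness score means a higher compensation intensity, is that the total compensation light is shared among regions in proportion to their scores. This sketch assumes that reading; the patent does not spell out the exact formula.

```python
def compensation_shares(scores):
    """scores: dict region -> brightness score.
    Returns region -> fraction of the total compensation light."""
    total = sum(scores.values())
    return {region: s / total for region, s in scores.items()}

shares = compensation_shares({"eyes": 8, "chin": 2})
# eyes receive 0.8 of the compensation light, chin 0.2
```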
Further, light compensation processing can also be applied to the facial areas within the face range that have been subdivided into the individual regions. Specifically, the brightness values of these facial areas can be determined from the gray values, and light compensation processing is applied to the corresponding regions according to the brightness values.
In practical applications, light compensation processing can be performed with the GrayWorld color balance algorithm. The GrayWorld color balance algorithm adjusts the respective average values of the three color components R (red), G (green) and B (blue) toward the same gray value. Specifically, the average values avgR, avgG and avgB of the three color components of the original image and the average gray value avgGray of the original image are first calculated; then the R, G and B values of each pixel are adjusted separately, so that the average values of the three color components of the adjusted image all approximate the average gray value avgGray.
In practical applications, light compensation processing can also be performed with an algorithm based on reference white: the pixels of the image are sorted by brightness from high to low, and the top 5% of the pixels are extracted. If there are enough of these pixels (for example, more than 100), their brightness is taken as the reference white, their three color components R (red), G (green) and B (blue) are adjusted to the maximum value 255, and the color components of the other pixels of the whole image are scaled in the same proportion, so that the R, G and B values of the non-reference-white pixels also increase correspondingly.
In an application scenarios, face range can also be divided according to face region and non-face region, such as:
Using the non-face region above eye areas as forehead region, using the non-face region below lip region as chin area
Domain, using the non-face region in addition to forehead region, chin area as Zhongting region.It is identified by multistage edge detection algorithm
Hair line, and be multiple regions by hair line forehead region division below.Then by equal resolution algorithms by hair line with
Under forehead region division be multiple regions.Further, the wing of nose and lower canthus are identified using multistage edge detection algorithm, and
It is multiple regions by the Zhongting region division in addition to the wing of nose and lower canthus.Similarly, it is identified using multistage edge detection algorithm
Lip edge, and the chin area in addition to the lip region defined by the lip edge is divided into multiple regions.Specifically, in order to
It further accurately identifies chin area, lip edge can also be identified using multistage edge detection algorithm.Then use etc.
Chin area in addition to the lip region defined by the lip edge is divided into multiple regions by resolution algorithm.Finally, can be right
Make light compensation deals in the region of division.Specifically, the brightness value in corresponding region can be determined according to gray value, and according to bright
Angle value makees light compensation deals to corresponding region.
It should be noted that the method in this embodiment can also be combined with the functions of existing beautification applications. For example, after the light compensation is completed, a skin smoothing score of the corresponding region is determined according to the gray values, and skin smoothing processing is applied to the corresponding region according to the skin smoothing score.
In this embodiment, the face range in the image is determined; the face range is divided into multiple regions according to the facial features; the brightness value of each region is obtained; and different light compensation processing is applied to each region according to the brightness values. The present invention can apply different degrees of light compensation processing to the different regions of a face in an image, so that the beautified image looks more natural.
Fig. 3 is a flowchart of the image processing method provided by Embodiment 2 of the present invention. As shown in Fig. 3, the method in this embodiment may include:
S201, determining the face range in the image.
S202, dividing the face range into multiple regions according to the facial features.
S203, obtaining the brightness value of each region.
In this embodiment, the specific implementation process and technical principle of steps S201 to S203 are described in steps S101 to S103 of the method shown in Fig. 2, and are not repeated here.
S204, determining the light source type and/or filter color of the compensation light according to the operation information input by the user.
In the present embodiment, before carrying out different light compensation deals to each region, user can show from terminal
The light source type and/or filter color of selection compensation light in toolbar on interface.Due to image color information often by
Influence to factors such as light source, the deviations of color for acquiring equipment is mobile to a direction so as to cause color on the whole, such as:
Phenomena such as coloration of photo is colder, and photo is partially yellow.Color is seen for the ease of existing in this whole image of processing counteracting of image
Deviation needs to carry out light compensation to image conducive to the development of subsequent image processing.It specifically, can when photo tone is colder
To select warm light source to carry out light compensation.
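As a rough illustration of the cold-tone case above, a warm compensating light can be approximated by shifting the red and blue channels in opposite directions; the blue-versus-red dominance test and the channel offset (15) are assumptions for illustration only.

```python
# Hypothetical sketch: detect a blue-dominant (cold) cast and offset it with
# a "warm" shift. The dominance test and the offset are assumptions.

def mean_channel(pixels, idx):
    return sum(p[idx] for p in pixels) / len(pixels)

def pick_light_source(pixels):
    """Suggest a warm source when blue outweighs red (cold cast)."""
    return "warm" if mean_channel(pixels, 2) > mean_channel(pixels, 0) else "cool"

def apply_warm_shift(pixels, shift=15):
    """Raise red and lower blue to counter a cold cast."""
    return [(min(255, r + shift), g, max(0, b - shift)) for r, g, b in pixels]

cold_photo = [(100, 110, 140), (90, 100, 150)]  # RGB triples, blue-heavy
print(pick_light_source(cold_photo))            # warm
print(apply_warm_shift(cold_photo)[0])          # (115, 110, 125)
```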
S205: performing different light compensation processing on each region according to the brightness value.
In this embodiment, the specific implementation process and technical principle of step S205 are described in step S105 of the method shown in Fig. 2, and are not repeated here.
In this embodiment, the face range in the image is determined; the face range is divided into multiple regions according to the facial features; the brightness value of each region is obtained; and different light compensation processing is performed on each region according to its brightness value. The present invention can thus apply different degrees of light compensation to different regions of the face in the image, so that the beautified image looks more natural. In addition, in this embodiment the light source type and/or filter color of the compensating light can also be determined according to operation information input by the user, thereby improving the effect of the light compensation.
Fig. 4 is a flowchart of the image processing method provided by Embodiment 3 of the present invention. As shown in Fig. 4, the method in this embodiment may include:
S301: determining the face range in the image.
S302: dividing the face range into multiple regions according to the facial features.
S303: obtaining the brightness value of each region.
S304: performing different light compensation processing on each region according to the brightness value.
In this embodiment, the specific implementation process and technical principle of steps S301 to S304 are described in steps S101 to S104 of the method shown in Fig. 2, and are not repeated here.
S305: after the light compensation processing, displaying the light compensation percentage of each region within the face range in the image.
In this embodiment, taking a terminal with a camera function as an example, the light compensation percentage of each region within the face range can be displayed in any area around the image shown on the terminal. This embodiment does not limit the presentation form of each region's light compensation percentage; for example, it may be displayed in the form of a progress bar. Here, the light compensation percentage refers to the conversion rate of the light intensity used for light compensation; for example, a light compensation percentage of 80% means that light compensation processing is applied to the region at 80% of the light intensity.
Further, the light compensation percentages of the regions within the face range can be shown or hidden on the display interface according to double-click or tap information from the user.
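The 80% example above amounts to scaling the full compensation delta of a region by the user-chosen percentage, as dragging a progress bar would; the numbers below are illustrative.

```python
# Hypothetical sketch: the light compensation percentage scales the full
# compensation delta for a region. Values are illustrative.

def apply_percentage(base_brightness, full_delta, percent):
    """Apply `percent` of the full compensation delta to the base brightness."""
    return base_brightness + full_delta * percent / 100.0

# A region 40 gray levels below its target, compensated at 80%:
print(apply_percentage(100, 40, 80))   # 132.0
# Dragging the progress bar to 0% leaves the region unchanged:
print(apply_percentage(100, 40, 0))    # 100.0
```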
S306: adjusting the percentage of any one or more regions according to operation information input by the user.
In this embodiment, the user can adjust the light compensation percentage of each region within the face range. Specifically, when the percentages are displayed as progress bars, the user can drag a progress bar to change the light compensation percentage of the corresponding region.
S307: performing different light compensation processing on each region again according to the adjusted light compensation percentages.
In this embodiment, when the user changes a region's light compensation percentage, the image effect after light compensation at the adjusted percentage is displayed in real time, so that the user can further adjust each region of the image, improving processing accuracy and meeting individual needs.
Further, light compensation processing can also be applied to facial areas subdivided from the regions within the face range. Specifically, the brightness values of these facial areas can be determined from their gray values, and light compensation processing is applied to each area according to its brightness value.
In this embodiment, the face range in the image is determined; the face range is divided into multiple regions according to the facial features; the brightness value of each region is obtained; and different light compensation processing is performed on each region according to its brightness value. The present invention can thus apply different degrees of light compensation to different regions of the face in the image, so that the beautified image looks more natural. In addition, in this embodiment, after the light compensation processing, the light compensation percentage of each region within the face range in the image can be displayed; the percentage of any one or more regions can be adjusted according to operation information input by the user; and different light compensation processing is performed on each region again according to the adjusted percentages. Thus, after different degrees of light compensation have been applied automatically to the different regions of the face, the image can be further adjusted, improving processing accuracy and meeting individual needs.
Fig. 5 is a structural schematic diagram of the image processing apparatus provided by Embodiment 4 of the present invention. As shown in Fig. 5, the image processing apparatus of this embodiment may include:
a determining module 21, configured to determine the face range in an image;
a division module 22, configured to divide the face range into multiple regions according to the facial features;
an obtaining module 23, configured to obtain the brightness value of each region;
a compensating module 24, configured to perform different light compensation processing on each region according to the brightness value.
In one possible design, the determining module 21 is specifically configured to:
perform feature extraction on the image to obtain image feature information, the image feature information including the coordinate positions of the eyes, nose, lips and chin in the image;
input the image feature information into a target model, which outputs the coordinate position of the face range in the image; wherein the target model is a machine learning model obtained by training with test images containing faces as input and the truthfully labeled face-range coordinate positions in the test images as the target output.
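The patent specifies the target model only as "a trained machine learning model" mapping image feature information to face-range coordinates. As a stand-in that shows the same input/output shape, here is a toy 1-nearest-neighbour regressor; the feature vectors and coordinate tuples are invented for illustration and are not from the patent.

```python
# Toy stand-in for the "target model": memorise (features, face-range) pairs
# and predict by nearest neighbour. A real system would train a proper model.

def train(examples):
    """examples: list of (feature_vector, face_range_box); 1-NN just memorises them."""
    return list(examples)

def predict(model, features):
    def sq_dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(model, key=lambda ex: sq_dist(ex[0], features))[1]

model = train([
    ((10, 20), (0, 0, 50, 60)),     # (eye-x, eye-y) -> (x, y, w, h); made up
    ((200, 180), (40, 30, 90, 95)),
])
print(predict(model, (12, 22)))     # (0, 0, 50, 60)
```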
In one possible design, the division module 22 is specifically configured to:
divide the face range, according to the distribution positions of the facial features, into: a forehead region, a left eye region, a right eye region, a nose region, a left cheek region, a right cheek region, a lip region and a chin region.
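The eight-region split can be sketched with fixed fractions of the face bounding box; a real implementation would place the boundaries from the detected landmark positions, and the fractions used here are assumptions.

```python
# Hypothetical sketch: split a face bounding box into the eight regions named
# above using assumed fixed fractions (a real system would use landmarks).

def divide_face(x, y, w, h):
    """Return region name -> (x, y, w, h) for the eight regions."""
    third = w // 3
    return {
        "forehead":    (x, y, w, h // 4),
        "left_eye":    (x, y + h // 4, w // 2, h // 5),
        "right_eye":   (x + w // 2, y + h // 4, w - w // 2, h // 5),
        "nose":        (x + third, y + h // 4, third, h // 3),
        "left_cheek":  (x, y + h // 2, third, h // 4),
        "right_cheek": (x + 2 * third, y + h // 2, third, h // 4),
        "lips":        (x + third, y + 3 * h // 4, third, h // 8),
        "chin":        (x + third, y + 7 * h // 8, third, h // 8),
    }

regions = divide_face(0, 0, 120, 160)
print(len(regions))          # 8
print(regions["forehead"])   # (0, 0, 120, 40)
```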
In one possible design, the obtaining module 23 is specifically configured to:
project test light onto the face range, and receive the light reflected from each region;
determine the brightness value of each region according to the intensity of the reflected light.
In one possible design, the obtaining module 23 is further configured to:
calculate the average brightness value of the pixels within each region, and take the average brightness value as the brightness value of the corresponding region.
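The pixel-averaging design above can be sketched directly for a grayscale image stored as rows of gray values (a stand-in for a real image buffer):

```python
# Mean gray value of the pixels inside one region's bounding box.

def average_brightness(image, box):
    """Mean gray value inside box=(x, y, w, h); image is a list of rows."""
    x, y, w, h = box
    values = [image[row][col] for row in range(y, y + h) for col in range(x, x + w)]
    return sum(values) / len(values)

img = [[10, 20],
       [30, 40]]
print(average_brightness(img, (0, 0, 2, 2)))  # 25.0
print(average_brightness(img, (0, 1, 2, 1)))  # 35.0
```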
In one possible design, the compensating module 24 is specifically configured to:
convert the brightness value of each region into the brightness score of the corresponding region according to a preset conversion rule;
perform different light compensation processing on each region according to the brightness score.
In one possible design, converting the brightness value of each region into the brightness score of the corresponding region according to the preset conversion rule comprises:
determining, according to a preset conversion list, the threshold interval to which the brightness value of each region belongs, wherein different threshold intervals correspond to different brightness scores;
taking the brightness score corresponding to the threshold interval as the brightness score of the region.
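One possible form of the preset conversion list is a table of threshold intervals mapped to brightness scores. The interval boundaries and score values below are assumptions; darker regions are given higher scores so that they receive stronger compensation.

```python
# Hypothetical conversion list: (low, high, score) threshold intervals for
# the brightness value. Boundaries and scores are assumptions.

CONVERSION_LIST = [
    (0, 64, 4),     # very dark  -> highest score
    (64, 128, 3),
    (128, 192, 2),
    (192, 256, 1),  # bright     -> lowest score
]

def brightness_score(brightness_value):
    for low, high, score in CONVERSION_LIST:
        if low <= brightness_value < high:
            return score
    raise ValueError("brightness value out of range")

print(brightness_score(50))    # 4
print(brightness_score(150))   # 2
```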
In one possible design, performing different light compensation processing on each region according to the brightness score comprises:
determining the compensating light intensity of the region according to the brightness score of the region, wherein the higher the brightness score, the higher the corresponding compensating light intensity;
performing light compensation processing on the region according to the intensity of its compensating light.
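The score-to-intensity rule above ("the higher the brightness score, the higher the compensating light intensity") can be sketched with an assumed linear step; the 0.1 gain per score point is illustrative, not from the patent.

```python
# Hypothetical sketch: map a brightness score to an extra gain and apply it.
# The linear step (0.1 gain per score point) is an assumption.

def compensation_gain(score, step=0.1):
    """Higher score -> larger gain (assumed linear)."""
    return 1.0 + score * step

def compensate(gray_pixels, score):
    gain = compensation_gain(score)
    return [min(255, int(p * gain)) for p in gray_pixels]

print(compensate([100, 120], 3))   # [130, 156]
```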
The image processing apparatus of this embodiment can execute the technical solution of the method shown in Fig. 2; its implementation principle and technical effect are similar and are not repeated here.
Fig. 6 is a structural schematic diagram of the image processing apparatus provided by Embodiment 5 of the present invention. As shown in Fig. 6, on the basis of the apparatus shown in Fig. 5, the image processing apparatus of this embodiment may further include:
an interactive module 25, configured to determine, before different light compensation processing is performed on each region according to the brightness value, the light source type and/or filter color of the compensating light according to operation information input by the user; wherein the light source type includes: a cold light source, a warm light source and lamp light.
The image processing apparatus of this embodiment can execute the technical solutions of the methods shown in Figs. 2 to 4; its implementation principle and technical effect are similar and are not repeated here.
Fig. 7 is a structural schematic diagram of the image processing device provided by Embodiment 6 of the present invention. As shown in Fig. 7, the image processing device 30 of this embodiment may include: a processor 31 and a memory 32.
The memory 32 is used to store computer programs (such as the application programs and functional modules that implement the above image processing method), computer instructions and the like; the above computer programs, computer instructions, data and so on may be stored in partitions across one or more memories 32 and may be called by the processor 31.
The processor 31 is used to execute the computer program stored in the memory 32 to implement the steps of the methods of the above embodiments; for details, refer to the related descriptions in the foregoing method embodiments.
The processor 31 and the memory 32 may be separate structures or may be integrated into one structure. When the processor 31 and the memory 32 are separate structures, the memory 32 and the processor 31 may be coupled through a bus 33.
The server of this embodiment can execute the technical solution of any of the above method embodiments; its implementation principle and technical effect are similar and are not repeated here.
In addition, embodiments of the present application also provide a computer-readable storage medium storing computer-executable instructions; when at least one processor of a user device executes the computer-executable instructions, the user device performs any of the above possible methods.
Computer-readable media include computer storage media and communication media, where communication media include any medium that facilitates the transfer of a computer program from one place to another. A storage medium may be any available medium that can be accessed by a general-purpose or special-purpose computer. An exemplary storage medium is coupled to the processor, so that the processor can read information from, and write information to, the storage medium; of course, the storage medium may also be an integral part of the processor. The processor and the storage medium may reside in an ASIC, and the ASIC may reside in the user equipment; of course, the processor and the storage medium may also be present in a communication device as discrete components.
The present application also provides a program product comprising a computer program stored in a readable storage medium. At least one processor of a server can read the computer program from the readable storage medium, and the at least one processor executes the computer program so that the server implements the image processing method of any embodiment of the present invention.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be completed by hardware related to program instructions. The aforementioned program may be stored in a computer-readable storage medium; when the program is executed, it performs the steps of the above method embodiments. The aforementioned storage medium includes various media that can store program code, such as a ROM, a RAM, a magnetic disk or an optical disc.
Finally, it should be noted that the above embodiments are only used to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that they may still modify the technical solutions described in the foregoing embodiments, or make equivalent replacements of some or all of the technical features therein; such modifications or replacements do not cause the essence of the corresponding technical solutions to depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (20)
1. An image processing method, characterized by comprising:
determining a face range in an image;
dividing the face range into multiple regions according to facial features;
obtaining a brightness value of each region; and
performing different light compensation processing on each region according to the brightness value.
2. The method according to claim 1, wherein determining the face range in the image comprises:
performing feature extraction on the image to obtain image feature information, the image feature information comprising coordinate positions of eyes, a nose, lips and a chin in the image; and
inputting the image feature information into a target model, the target model outputting a coordinate position of the face range in the image; wherein the target model is a machine learning model obtained by training with test images containing faces as input and truthfully labeled face-range coordinate positions in the test images as target output.
3. The method according to claim 1, wherein dividing the face range into multiple regions according to facial features comprises:
dividing the face range, according to distribution positions of the facial features, into: a forehead region, a left eye region, a right eye region, a nose region, a left cheek region, a right cheek region, a lip region and a chin region.
4. The method according to claim 1, wherein obtaining the brightness value of each region comprises:
projecting test light onto the face range, and receiving light reflected from each region; and
determining the brightness value of each region according to the intensity of the reflected light.
5. The method according to claim 1, wherein obtaining the brightness value of each region comprises:
calculating an average brightness value of pixels within each region, and taking the average brightness value as the brightness value of the corresponding region.
6. The method according to any one of claims 1-5, wherein performing different light compensation processing on each region according to the brightness value comprises:
converting the brightness value of each region into a brightness score of the corresponding region according to a preset conversion rule; and
performing different light compensation processing on each region according to the brightness score.
7. The method according to claim 6, wherein converting the brightness value of each region into the brightness score of the corresponding region according to the preset conversion rule comprises:
determining, according to a preset conversion list, a threshold interval to which the brightness value of each region belongs, wherein different threshold intervals correspond to different brightness scores; and
taking the brightness score corresponding to the threshold interval as the brightness score of the region.
8. The method according to claim 6, wherein performing different light compensation processing on each region according to the brightness score comprises:
determining a compensating light intensity of the region according to the brightness score of the region, wherein the higher the brightness score, the higher the corresponding compensating light intensity; and
performing light compensation processing on the region according to the intensity of the compensating light of the region.
9. The method according to claim 1, further comprising, before performing different light compensation processing on each region according to the brightness value:
determining a light source type and/or a filter color of compensating light according to operation information input by a user; wherein the light source type includes: a cold light source, a warm light source and lamp light.
10. An image processing apparatus, characterized by comprising:
a determining module, configured to determine a face range in an image;
a division module, configured to divide the face range into multiple regions according to facial features;
an obtaining module, configured to obtain a brightness value of each region; and
a compensating module, configured to perform different light compensation processing on each region according to the brightness value.
11. The apparatus according to claim 10, wherein the determining module is specifically configured to:
perform feature extraction on the image to obtain image feature information, the image feature information comprising coordinate positions of eyes, a nose, lips and a chin in the image; and
input the image feature information into a target model, the target model outputting a coordinate position of the face range in the image; wherein the target model is a machine learning model obtained by training with test images containing faces as input and truthfully labeled face-range coordinate positions in the test images as target output.
12. The apparatus according to claim 10, wherein the division module is specifically configured to:
divide the face range, according to distribution positions of the facial features, into: a forehead region, a left eye region, a right eye region, a nose region, a left cheek region, a right cheek region, a lip region and a chin region.
13. The apparatus according to claim 10, wherein the obtaining module is specifically configured to:
project test light onto the face range, and receive light reflected from each region; and
determine the brightness value of each region according to the intensity of the reflected light.
14. The apparatus according to claim 10, wherein the obtaining module is further configured to:
calculate an average brightness value of pixels within each region, and take the average brightness value as the brightness value of the corresponding region.
15. The apparatus according to any one of claims 10-14, wherein the compensating module is specifically configured to:
convert the brightness value of each region into a brightness score of the corresponding region according to a preset conversion rule; and
perform different light compensation processing on each region according to the brightness score.
16. The apparatus according to claim 15, wherein converting the brightness value of each region into the brightness score of the corresponding region according to the preset conversion rule comprises:
determining, according to a preset conversion list, a threshold interval to which the brightness value of each region belongs, wherein different threshold intervals correspond to different brightness scores; and
taking the brightness score corresponding to the threshold interval as the brightness score of the region.
17. The apparatus according to claim 15, wherein performing different light compensation processing on each region according to the brightness score comprises:
determining a compensating light intensity of the region according to the brightness score of the region, wherein the higher the brightness score, the higher the corresponding compensating light intensity; and
performing light compensation processing on the region according to the intensity of the compensating light of the region.
18. The apparatus according to claim 10, further comprising:
an interactive module, configured to determine, before different light compensation processing is performed on each region according to the brightness value, a light source type and/or a filter color of compensating light according to operation information input by a user; wherein the light source type includes: a cold light source, a warm light source and lamp light.
19. An image processing device, characterized by comprising: a memory and a processor, the memory storing instructions executable by the processor; wherein the processor is configured to perform the image processing method according to any one of claims 1-9 by executing the executable instructions.
20. A computer-readable storage medium on which a computer program is stored, characterized in that the program, when executed by a processor, implements the image processing method according to any one of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810985872.6A CN109255763A (en) | 2018-08-28 | 2018-08-28 | Image processing method, device, equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810985872.6A CN109255763A (en) | 2018-08-28 | 2018-08-28 | Image processing method, device, equipment and storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109255763A true CN109255763A (en) | 2019-01-22 |
Family
ID=65049647
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810985872.6A Pending CN109255763A (en) | 2018-08-28 | 2018-08-28 | Image processing method, device, equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109255763A (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20140348399A1 (en) * | 2013-05-22 | 2014-11-27 | Asustek Computer Inc. | Image processing system and method of improving human face recognition |
CN104978710A (en) * | 2015-07-02 | 2015-10-14 | 广东欧珀移动通信有限公司 | Method and device for identifying and adjusting human face luminance based on photographing |
CN106530361A (en) * | 2016-11-16 | 2017-03-22 | 上海市东方医院 | Color correction method for color face image |
CN107578380A (en) * | 2017-08-07 | 2018-01-12 | 北京金山安全软件有限公司 | Image processing method and device, electronic equipment and storage medium |
CN108307101A (en) * | 2017-05-16 | 2018-07-20 | 腾讯科技(深圳)有限公司 | A kind of image processing method and electronic equipment, server |
-
2018
- 2018-08-28 CN CN201810985872.6A patent/CN109255763A/en active Pending
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2021036853A1 (en) * | 2019-08-31 | 2021-03-04 | 华为技术有限公司 | Image processing method and electronic apparatus |
CN112700396A (en) * | 2019-10-17 | 2021-04-23 | 中国移动通信集团浙江有限公司 | Illumination evaluation method and device for face picture, computing equipment and storage medium |
CN111598813A (en) * | 2020-05-25 | 2020-08-28 | 北京字节跳动网络技术有限公司 | Face image processing method and device, electronic equipment and computer readable medium |
CN113673268A (en) * | 2021-08-11 | 2021-11-19 | 广州爱格尔智能科技有限公司 | Identification method, system and equipment for different brightness |
CN113673268B (en) * | 2021-08-11 | 2023-11-14 | 广州爱格尔智能科技有限公司 | Identification method, system and equipment for different brightness |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN109255763A (en) | Image processing method, device, equipment and storage medium | |
CN1475969B (en) | Method and system for intensify human image pattern | |
CN110838084B (en) | Method and device for transferring style of image, electronic equipment and storage medium | |
US10964070B2 (en) | Augmented reality display method of applying color of hair to eyebrows | |
EP3772038A1 (en) | Augmented reality display method of simulated lip makeup | |
CN109255760A (en) | Distorted image correction method and device | |
CN110288614A (en) | Image processing method, device, equipment and storage medium | |
CN110310247A (en) | Image processing method, device, terminal and computer readable storage medium | |
CN110232326A (en) | A kind of D object recognition method, device and storage medium | |
CN106780313A (en) | Image processing method and device | |
WO2023005743A1 (en) | Image processing method and apparatus, computer device, storage medium, and computer program product | |
US10957092B2 (en) | Method and apparatus for distinguishing between objects | |
CN107730568B (en) | Coloring method and device based on weight learning | |
US10810775B2 (en) | Automatically selecting and superimposing images for aesthetically pleasing photo creations | |
CN107133932A (en) | Retina image preprocessing method and device and computing equipment | |
CN111080754B (en) | Character animation production method and device for connecting characteristic points of head and limbs | |
CN113610723B (en) | Image processing method and related device | |
CN110288663A (en) | Image processing method, image processing device, mobile terminal and storage medium | |
US11900564B2 (en) | Storage medium storing program, image processing apparatus, and training method of machine learning model | |
WO2023273111A1 (en) | Image processing method and apparatus, and computer device and storage medium | |
CN115423724A (en) | Underwater image enhancement method, device and medium for reinforcement learning parameter optimization | |
JP2009251634A (en) | Image processor, image processing method, and program | |
CN105654541B (en) | Video in window treating method and apparatus | |
CN115565213A (en) | Image processing method and device | |
CN110766079B (en) | Training data generation method and device for screen abnormal picture detection |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20190122 |