CN109300186A - Image processing method and device, storage medium, electronic equipment - Google Patents
- Publication number
- CN109300186A (application numbers CN201811142984.1A / CN201811142984A)
- Authority
- CN
- China
- Prior art keywords
- image
- character image
- face region
- human face
- shade
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Links
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/50—Lighting effects
- G06T15/60—Shadow generation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2215/00—Indexing scheme for image rendering
- G06T2215/12—Shadow map, environment map
Landscapes
- Engineering & Computer Science (AREA)
- Computer Graphics (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Processing (AREA)
Abstract
This application relates to an image processing method and apparatus, an electronic device, and a computer-readable storage medium. A person image is matted out of an image to be processed, the direction of the current light is obtained from the face region of the person image, and a shadow is added to the person image according to that direction, yielding the person image with the shadow added. When a shadow resembling one in a real scene is added to the matted person image, the direction of the current light is first obtained from the face region of the matted person image. Because the face region is generally the focal point of an image and contains a large amount of lighting information, the direction of the current light across the whole image to be processed can be obtained intuitively and accurately from the face region alone, which improves both the efficiency and the accuracy of obtaining that direction. A shadow is then added to the person image according to the direction of the current light, so that the final shadowed person image better matches the real scene.
Description
Technical field
This application relates to the field of computer technology, and in particular to an image processing method and apparatus, a storage medium, and an electronic device.
Background technique
With the popularization of mobile terminals and the rapid development of the mobile Internet, mobile terminals are used ever more heavily. Taking photos and making videos have become some of the most common functions on mobile terminals. In the process of taking photos or making videos, users often find that, under natural lighting conditions, the lighting effects of the captured photos or produced videos cannot meet their individual needs.
Summary of the invention
The embodiments of the present application provide an image processing method and apparatus, a storage medium, and an electronic device that can add a shadow to an image, thereby achieving different lighting effects.
An image processing method, comprising:
matting a person image out of an image to be processed;
obtaining the direction of the current light from the face region of the person image; and
adding a shadow to the person image according to the direction of the current light, obtaining the person image with the shadow added.
An image processing apparatus, the apparatus comprising:
a matting module, configured to matte a person image out of an image to be processed;
a light-direction obtaining module, configured to obtain the direction of the current light from the face region of the person image; and
a shadow-adding module, configured to add a shadow to the person image according to the direction of the current light, obtaining the person image with the shadow added.
A computer-readable storage medium having a computer program stored thereon, the computer program, when executed by a processor, implementing the steps of the image processing method described above.
An electronic device comprising a memory and a processor, the memory storing a computer program runnable on the processor, the processor, when executing the computer program, performing the steps of the image processing method described above.
With the image processing method and apparatus, storage medium, and electronic device described above, a person image is matted out of an image to be processed, the direction of the current light is obtained from the face region of the person image, and a shadow is added to the person image according to that direction, obtaining the person image with the shadow added. In the embodiments of the present application, when a shadow resembling one in a real scene is added to the matted person image, the direction of the current light is first obtained from the face region of the matted person image. Because the face region is generally the focal point of an image and contains a large amount of lighting information, the direction of the current light across the whole image to be processed can be obtained intuitively and accurately from the face region alone, improving both the efficiency and the accuracy of obtaining that direction. A shadow is then added to the person image according to the direction of the current light, obtaining the person image with the shadow added, so that the final shadowed person image better matches the real scene.
Brief description of the drawings
To illustrate the technical solutions in the embodiments of the present application or in the prior art more clearly, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present application; for those of ordinary skill in the art, other drawings can be obtained from these drawings without creative effort.
Fig. 1 is an internal structure diagram of an electronic device in one embodiment;
Fig. 2 is a flowchart of an image processing method in one embodiment;
Fig. 3 is a flowchart of the method in Fig. 2 for obtaining the direction of the current light from the face region of the person image;
Fig. 4 is a flowchart of the method in Fig. 3 for obtaining the value (V) component from the HSV image corresponding to the face region of the person image;
Fig. 5 is a schematic diagram of dividing the face region of the person image in one embodiment;
Fig. 6 is a schematic diagram of dividing the face region of the person image in another embodiment;
Fig. 7 is a flowchart of one method in Fig. 2 for adding a shadow to the person image according to the direction of the current light and obtaining the person image with the shadow added;
Fig. 8 is a flowchart of another method in Fig. 2 for adding a shadow to the person image according to the direction of the current light and obtaining the person image with the shadow added;
Fig. 9 is a structural schematic diagram of an image processing apparatus in one embodiment;
Fig. 10 is a schematic diagram of an image processing circuit in one embodiment.
Specific embodiments
To make the objects, technical solutions, and advantages of the present application clearer, the application is further elaborated below with reference to the drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the application and are not intended to limit it.
Fig. 1 is a schematic diagram of the internal structure of an electronic device in one embodiment. As shown in Fig. 1, the electronic device includes a processor, a memory, and a network interface connected through a system bus. The processor provides computing and control capability and supports the operation of the entire electronic device. The memory stores data, programs, and the like; at least one computer program is stored on the memory, and this computer program can be executed by the processor to implement the image processing method provided in the embodiments of the present application. The memory may include a non-volatile storage medium such as a magnetic disk, an optical disc, or a read-only memory (Read-Only Memory, ROM), or a random-access memory (Random-Access Memory, RAM). For example, in one embodiment the memory includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The computer program can be executed by the processor to implement the image processing method provided by the following embodiments. The internal memory provides a cached running environment for the operating system and the computer program in the non-volatile storage medium. The network interface may be an Ethernet card, a wireless network card, or the like, and is used for communicating with an external electronic device. The electronic device may be a mobile phone, a tablet computer, a personal digital assistant, a wearable device, or the like.
In one embodiment, as shown in Fig. 2, an image processing method is provided. Taking its application to the electronic device in Fig. 1 as an example, the method comprises:
Step 220: matte a person image out of an image to be processed.
The image to be processed may be a camera preview frame, a photo saved on the electronic device after shooting, or a photo obtained elsewhere and saved to the electronic device. Matting the person image out of the image to be processed can specifically be done by completely extracting the person image from the image to be processed using any of a variety of matting methods.
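As a minimal sketch of this matting step (the binary person mask is assumed to come from some upstream segmentation or matting method; the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def matte_person(image, mask):
    """Extract the person pixels from `image` using a binary `mask`.

    `mask` is assumed to be produced by an upstream matting/segmentation
    step (e.g. a portrait-segmentation network); pixels outside the
    person are zeroed out.
    """
    if image.shape[:2] != mask.shape:
        raise ValueError("mask must match image height/width")
    # Broadcast the (H, W) mask over the RGB channel axis.
    return image * mask[..., np.newaxis]

# Toy example: a 4x4 image whose "person" is the centre 2x2 block.
img = np.full((4, 4, 3), 200, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1:3, 1:3] = 1
person = matte_person(img, mask)
```

The matted `person` array keeps the original pixel values inside the mask and zeros elsewhere, which is the input the following steps operate on.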
Step 240: obtain the direction of the current light from the face region of the person image.
Every image to be processed has a current light direction; therefore, the person image matted out of the image to be processed also reflects the direction of the current light in the image to be processed. The person image may include a head region (including the face region) and a body region, and the direction of the current light could be extracted from either; here, it is specifically obtained from the face region of the person image. Because the face region is generally the focal point of an image and contains a large amount of lighting information, the direction of the current light across the whole image to be processed can be obtained intuitively and accurately from the face region alone, improving both the efficiency and the accuracy of obtaining that direction.
Lighting intensity here refers to the value of V after converting the image's RGB model to the HSV model: the larger the V value, the stronger the lighting, and the smaller the V value, the weaker the lighting. In the HSV model the color parameters are hue (H), saturation (S), and value (V). Value indicates brightness; for a light-source color, the value is related to the brightness of the illuminant. Its usual range is 0% (black) to 100% (white).
Step 260: add a shadow to the person image according to the direction of the current light, obtaining the person image with the shadow added.
After the direction of the current light has been obtained from the face region of the person image, a shadow can be added to the person image according to that direction, obtaining the person image with the shadow added. Because the obtained direction of the current light is consistent with the light direction in the image to be processed, a shadow added according to it blends very naturally with the person image and better matches the real scene.
In the embodiments of the present application, a person image is matted out of an image to be processed, the direction of the current light is obtained from the face region of the person image, and a shadow is added to the person image according to that direction, obtaining the person image with the shadow added. When a shadow resembling one in a real scene is added to the matted person image, the direction of the current light is first obtained from the face region of the matted person image. Because the face region is generally the focal point of an image and contains a large amount of lighting information, the direction of the current light across the whole image to be processed can be obtained intuitively and accurately from the face region alone, improving both the efficiency and the accuracy of obtaining that direction. A shadow is then added to the person image according to the direction of the current light, so that the final shadowed person image better matches the real scene.
In one embodiment, as shown in Fig. 3, step 240 — obtaining the direction of the current light from the face region of the person image — comprises:
Step 242: convert the red-green-blue (RGB) image corresponding to the face region of the person image from RGB space to hue-saturation-value (HSV) space, obtaining the HSV image corresponding to the face region of the person image.
The RGB and HSV color models of an image are referred to here; the two models are explained below.
The RGB color model produces the various colors by varying the red (Red, R), green (Green, G), and blue (Blue, B) color channels and superimposing them on one another. The RGB color model covers almost all colors perceivable by human vision and is currently one of the most widely used color systems; the images people usually view or shoot with a terminal are typically RGB images. In the RGB model each of the R, G, and B components of a pixel is assigned an intensity value in the range 0-255. For example, pure blue has R = 0, G = 0, B = 255; a grey has equal R, G, B values (excluding 0 and 255); white has R, G, B all equal to 255; and black has R, G, B all equal to 0. Using only three channels mixed in different proportions, an RGB image can display 16,777,216 colors on screen.
The HSV color model is a color space created according to the intuitive properties of color; its parameters are hue (Hue, H), saturation (Saturation, S), and value (Value, V). H is measured as an angle ranging from 0° to 360°, counted counterclockwise from red: red is 0°, green is 120°, and blue is 240°; their complementary colors are yellow at 60°, cyan at 180°, and magenta at 300°. S indicates how close a color is to a pure spectral color. A color can be regarded as the result of mixing a spectral color with white: the larger the proportion of the spectral color, the closer the color is to it and the higher its saturation. Highly saturated colors are deep and vivid; when the white-light component is 0, the saturation is at its highest. The usual range of S is 0%-100%, with larger values meaning more saturated colors. V indicates brightness: for a light-source color, the value is related to the brightness of the illuminant; for an object color, it is related to the transmittance or reflectance of the object. Its usual range is 0% (black) to 100% (white).
An RGB image can be converted to an HSV image, and an HSV image can be converted back to an RGB image. In practice, the RGB image corresponding to the face region of the person image can be converted to an HSV image according to the following steps:
Step 1: rescale the R, G, B values from the range 0-255 to the range 0-1.
This can be done with formulas (1-1), (1-2), and (1-3):
R' = R/255 (1-1);
where R is the R value of each pixel in the RGB image to be processed, and R' is the R value in the 0-1 range.
G' = G/255 (1-2);
where G is the G value of each pixel in the RGB image to be processed, and G' is the G value in the 0-1 range.
B' = B/255 (1-3);
where B is the B value of each pixel in the RGB image to be processed, and B' is the B value in the 0-1 range.
Step 2: determine the maximum and minimum of the rescaled values R', G', B', and the difference between them.
The maximum of R', G', B' is determined according to formula (1-4):
Cmax = max(R', G', B') (1-4);
where Cmax is the maximum of R', G', B' and max() is the maximum function.
The minimum of R', G', B' is determined according to formula (1-5):
Cmin = min(R', G', B') (1-5);
where Cmin is the minimum of R', G', B' and min() is the minimum function.
The difference between the maximum and the minimum is determined according to formula (1-6):
Δ = Cmax − Cmin (1-6);
where Δ is the difference between the maximum and the minimum.
Step 3: calculate the H value according to formula (1-7):
H = 0° if Δ = 0; H = 60° × ((G' − B')/Δ mod 6) if Cmax = R'; H = 60° × ((B' − R')/Δ + 2) if Cmax = G'; H = 60° × ((R' − G')/Δ + 4) if Cmax = B' (1-7);
where H is the hue value in HSV and mod is the modulo function.
Step 4: calculate the S value according to formula (1-8):
S = 0 if Cmax = 0; S = Δ/Cmax otherwise (1-8);
where S is the saturation value in HSV.
Step 5: calculate the V value according to formula (1-9):
V = Cmax (1-9);
where V is the value (brightness) component in HSV.
After the corresponding H, S, V values have been calculated for the RGB image of the face region of the person image according to the conversion above, the HSV image corresponding to the face region of the person image is obtained.
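Steps 1-5 can be sketched per pixel as follows (a minimal illustration of formulas (1-1) through (1-9); the branch structure of formula (1-7) follows the standard RGB-to-HSV conversion, since the patent text itself names only the modulo function):

```python
def rgb_to_hsv(r, g, b):
    """Convert one 0-255 RGB pixel to (H, S, V), per formulas (1-1)-(1-9).

    H is returned in degrees [0, 360); S and V as fractions in [0, 1].
    """
    rp, gp, bp = r / 255.0, g / 255.0, b / 255.0   # (1-1), (1-2), (1-3)
    cmax = max(rp, gp, bp)                          # (1-4)
    cmin = min(rp, gp, bp)                          # (1-5)
    delta = cmax - cmin                             # (1-6)
    if delta == 0:                                  # (1-7), case by case
        h = 0.0
    elif cmax == rp:
        h = 60 * (((gp - bp) / delta) % 6)
    elif cmax == gp:
        h = 60 * ((bp - rp) / delta + 2)
    else:
        h = 60 * ((rp - gp) / delta + 4)
    s = 0.0 if cmax == 0 else delta / cmax          # (1-8)
    v = cmax                                        # (1-9)
    return h, s, v

# Pure blue (R=0, G=0, B=255) gives H=240°, S=1, V=1, matching the
# hue angles listed in the HSV model description above.
print(rgb_to_hsv(0, 0, 255))  # → (240.0, 1.0, 1.0)
```

Applying this function to every pixel of the face-region RGB image yields the HSV image used in the following steps; only the V output is needed for step 244.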
Step 244: obtain the value (V) component from the HSV image corresponding to the face region of the person image.
Once the HSV image corresponding to the face region of the person image has been obtained, the values of the V component on that HSV image can be read directly.
Step 246: determine the direction of the current light according to the V component.
In the HSV color model the color parameters are hue (Hue, H), saturation (Saturation, S), and value (Value, V). H is measured as an angle ranging from 0° to 360°; S indicates how close a color is to a spectral color, usually ranging from 0% to 100%, with larger values meaning more saturated, deeper, and more vivid colors. V indicates how bright a color is, so the V component can reflect the direction of the light. Specifically, brightness is determined not only by the illumination of the object but also by the reflectance of its surface: if the light seen comes directly from a light source, the brightness is determined by the intensity of that source; if the light seen is reflected from an object's surface, the brightness is jointly determined by the intensity of the illuminating source and the reflectance of the surface.
As stated above, value indicates brightness; for a light-source color, the value is related to the brightness of the illuminant, with a usual range of 0% (black) to 100% (white). In other words, the larger the V value, the stronger the lighting, and the smaller the V value, the weaker the lighting. Therefore, the direction of the current light can be determined from the V component of the HSV image corresponding to the face region of the person image.
In the embodiments of the present application, after the RGB image of the face region of the person image is obtained, it is converted to an HSV image, so that the V component can be read directly from the HSV image and the direction of the current light determined from it. Determining the direction of the current light from the V component embodied in the HSV image is more accurate than other methods.
In one embodiment, as shown in Fig. 4, step 244 — obtaining the V component from the HSV image corresponding to the face region of the person image — comprises:
Step 244a: divide the HSV image corresponding to the face region of the person image down the middle into a first face region and a second face region, divide the first face region into a preset number of subregions, and divide the second face region into a preset number of subregions.
Step 244b: calculate the average of the V component in each subregion.
Specifically, as shown in Fig. 5, the left-hand figures (a1) and (a2) show the same face, turned somewhat to the right; the right-hand figures (b1) and (b2) show the same face, facing straight out of the page. As shown in (b1) of Fig. 5, the HSV image corresponding to the face region of the person image is divided vertically down the middle into a first face region 501 and a second face region 502. Alternatively, as shown in (d1) of Fig. 6, it can be divided horizontally to obtain a first face region 601 and a second face region 602. Here, the first face region 501 and second face region 502 in (b1) of Fig. 5, and the first face region 601 and second face region 602 in (d1) of Fig. 6, all refer to the portion of the face region inside the rectangular frame, excluding any part that is inside the frame but outside the face region. The first face region is then divided into a preset number of subregions, and likewise the second face region. The two preset numbers may be the same or different. When they are the same, as shown in (b2) of Fig. 5, the first face region 501 is divided horizontally into 3 subregions (503, 505, 507), and the second face region 502 is likewise divided horizontally into 3 subregions (504, 506, 508). Again, a subregion here refers to the portion of the face region inside the rectangular frame, excluding any part inside the frame but outside the face region. The left-hand figures (a1) and (a2) of Fig. 5 are divided using the same method, which is not repeated here. As shown in (d2) of Fig. 6, the first face region 601 is divided into 2 subregions (603, 605), and the second face region 602 is likewise divided into 2 subregions (604, 606). Of course, other division schemes are also possible. After the subregions have been divided, the average of the V component in each subregion is calculated.
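Steps 244a/244b can be sketched as follows (a minimal sketch assuming a vertical split as in Fig. 5 (b1) and three horizontal subregions per half; the function and array names are illustrative):

```python
import numpy as np

def subregion_v_means(v_channel, n_sub=3):
    """Split the face-region V channel vertically into left/right halves,
    then split each half horizontally into `n_sub` subregions, and return
    the mean V of every subregion (cf. Fig. 5 (b1)/(b2)).
    """
    h, w = v_channel.shape
    left, right = v_channel[:, :w // 2], v_channel[:, w // 2:]

    def means(half):
        # Split the half into horizontal bands and average V in each.
        return [float(band.mean()) for band in np.array_split(half, n_sub, axis=0)]

    return means(left), means(right)

# Toy face region: the left half is uniformly brighter than the right,
# as would happen when the light comes from the left.
v = np.zeros((6, 4))
v[:, :2] = 0.8   # left half
v[:, 2:] = 0.4   # right half
left_means, right_means = subregion_v_means(v)
```

The resulting per-subregion means are the inputs to the ratio comparisons in the following embodiments.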
In the embodiments of the present application, when the V component is obtained from the HSV image corresponding to the face region of the person image, the HSV image can first be divided into a preset number of subregions, the V component of each subregion calculated, and the V component of the whole HSV image ultimately represented by the per-subregion V components. The per-subregion V components obtained this way reflect more of the lighting information in the HSV image, so that the subsequent comparison of V components to determine the direction of the current light is more accurate.
In one embodiment, determining the direction of the current light according to the V component comprises:
calculating the ratio of the average V component of the first face region to the average V component of the second face region; and
when the ratio falls within a preset range, determining that the direction of the current light is parallel light.
After the average of the V component has been calculated for each subregion, the direction of the current light at the face region still needs to be determined. Specifically, the average V component of the first face region 501 and the average V component of the second face region 502 can be calculated, for example by averaging the subregion averages computed above for each face region. That is, the averages of the V component are calculated for the subregions (503, 505, 507) of the first face region, and the mean of these three averages is taken as the average V component of the first face region 501; the averages of the V component are calculated for the subregions (504, 506, 508) of the second face region, and the mean of these three averages is taken as the average V component of the second face region 502. Finally, the average V component of the first face region 501 is divided by the average V component of the second face region 502, giving the ratio of the two averages.
Next, it is judged whether the ratio falls within a preset range. The preset range can be a numerical interval obtained from experience, such as 0.9-1.1; of course, other reasonable values are also possible. A ratio in this range means the V component is distributed fairly uniformly over the face region. When the ratio is judged to fall within the preset range, the direction of the current light is determined to be parallel light. Parallel light refers to a beam whose wavefront remains planar as the light propagates. Sunlight originates from what is essentially a point source emitting spherical wavefronts in all directions, which is not parallel light; but because sunlight travels such a great distance before reaching the Earth, the divergence of the beam is very small, and each beam of sunlight can be considered approximately parallel. Sunlight is thus commonly treated as parallel light.
Alternatively, the ratio of the average V components of each horizontal pair of subregions in (b2) of Fig. 5 can be calculated separately: for example, the average V component of subregion 503 divided by that of subregion 504 gives a first ratio; the average V component of subregion 505 divided by that of subregion 506 gives a second ratio; and the average V component of subregion 507 divided by that of subregion 508 gives a third ratio. If the first, second, and third ratios all fall within the preset range 0.9-1.1, the direction of the current light is determined to be parallel light.
Likewise, the ratio of the average V components of each vertical pair of subregions in (d2) of Fig. 6 can be calculated: the average V component of subregion 603 divided by that of subregion 604 gives a fourth ratio, and the average V component of subregion 605 divided by that of subregion 606 gives a fifth ratio. If the fourth and fifth ratios both fall within the preset range 0.9-1.1, the direction of the current light is determined to be parallel light.
In the embodiments of the present application, the average of the V component is calculated in each subregion, and then the ratio of the average V component of the first face region to that of the second face region is calculated. It is then judged whether this ratio falls within the preset range; if so, the current light direction is parallel light, and parallel light produces no shadow.
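The parallel-light test can be sketched as follows (the 0.9-1.1 interval is the example range given in the description; the function name is illustrative):

```python
def is_parallel_light(first_mean_v, second_mean_v, lo=0.9, hi=1.1):
    """Decide whether the current light is (approximately) parallel.

    The ratio of the average V of the first face region to that of the
    second must fall inside the preset range [lo, hi], i.e. the V
    component is distributed fairly uniformly over the face region.
    """
    ratio = first_mean_v / second_mean_v
    return lo <= ratio <= hi

# Evenly lit face: ratio near 1, so the light is parallel and no
# directional shadow needs to be cast.
assert is_parallel_light(0.62, 0.60)
# Strongly side-lit face: ratio well outside the range, so the light
# has a definite direction (handled by the next embodiment).
assert not is_parallel_light(0.8, 0.4)
```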
In one embodiment, after calculating the ratio of the average V component of the first face region to the average V component of the second face region, the method comprises:
when the ratio does not fall within the preset range, obtaining, from the ratio of the two averages, the face region with the larger average V component;
obtaining, from that region, the subregion with the largest average V component; and
determining the direction of the current light according to the position of that subregion.
Specifically, when the ratio is judged not to fall within the preset range, the face region with the larger average V component is obtained from the ratio of the average V component of the first face region to that of the second face region.
If the average V component of the first face region was obtained by averaging the subregion averages computed above for the first face region, and the average V component of the second face region likewise, then whichever of the first and second face regions has the larger value is taken directly. The subregion with the largest average V component is then obtained from that region, and the direction of the current light is determined from the position of that subregion. For example, in Fig. 5, suppose the first face region 501 is the one with the larger average, and within it subregion 503 has the largest average V component. The direction of the current light is then determined according to the position of subregion 503. Subregion 503 is located at the upper left of the whole face region, so the current light is determined to come from the upper left. When the person image is subsequently lit, the lighting can likewise be cast from the upper left, so that the resulting shadow better matches the real scene the person image originally belonged to. If the face region is divided into six subregions here, the obtained direction of the current light correspondingly has six cases: lighting from the upper left, from the middle left, from the lower left, from the lower right, from the middle right, and from the upper right. If the face region is divided into a different number of subregions by another method, the corresponding lighting directions become more diverse as well.
If, instead, the ratio of the average V components is computed for each horizontal row (or each vertical column) of sub-regions, then each ratio is judged against the preset range. As soon as any ratio falls outside the preset range, the region with the larger average V component — the first face region or the second face region — is obtained. For example, in (b2) of Fig. 5, the ratio of the average V components of each horizontal row of sub-regions is computed: dividing the average V component of sub-region 503 by that of sub-region 504 gives a first ratio of 1.5; dividing the average V component of sub-region 505 by that of sub-region 506 gives a second ratio of 1; dividing the average V component of sub-region 507 by that of sub-region 508 gives a third ratio of 1.1. From these three ratios, the region with the larger average V component is the first face region. The sub-region with the largest average V component among the sub-regions of the first face region is then found to be sub-region 503, and the direction of the current light is determined from its position. Since sub-region 503 lies at the upper left of the whole face region, the current light is determined to come from the upper left. When light is subsequently applied to the character image, it can be applied from the upper left, so that the resulting shadow better matches the real scene the character image originally belonged to.
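The sub-region logic described above can be sketched in a few lines of NumPy. This is an illustrative sketch under stated assumptions, not the patent's implementation: the function name, the three-row split of each half, and the ±10 % preset range are choices made here for illustration. V is taken directly as max(R, G, B) per pixel, which equals the value component of a full RGB-to-HSV conversion.

```python
import numpy as np

def light_direction_from_face(face_rgb, rows=3, tol=0.1):
    """Estimate the light direction from a face crop (H x W x 3, RGB).

    The face is split down the middle into a first (left) and second
    (right) region; each half is split into `rows` stacked sub-regions.
    """
    v = face_rgb.astype(np.float64).max(axis=2)           # HSV value component
    h, w = v.shape
    left, right = v[:, : w // 2], v[:, w // 2:]
    # mean V of each sub-region in each half
    left_means = [seg.mean() for seg in np.array_split(left, rows, axis=0)]
    right_means = [seg.mean() for seg in np.array_split(right, rows, axis=0)]
    ratio = np.mean(left_means) / np.mean(right_means)
    if abs(ratio - 1.0) <= tol:                           # within the preset range
        return "parallel"                                 # treat as parallel light
    side = "left" if ratio > 1.0 else "right"             # brighter half
    means = left_means if ratio > 1.0 else right_means
    row = int(np.argmax(means))                           # brightest sub-region
    vert = ("upper", "middle", "lower")[min(row, 2)]
    return f"{vert}-{side}"
```

With three rows per half this yields the six directions enumerated above (upper/middle/lower × left/right); more sub-regions would yield a finer set of directions.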
In this embodiment of the present application, the average V component of each sub-region is computed, and then the ratio of the average V component of the first face region to that of the second face region. Whether that ratio lies within the preset range is then judged; if not, the region with the larger average V component is obtained from the ratio. The sub-region with the largest average V component is then obtained from that region, and the direction of the current light is determined from the position of that sub-region. The direction of the current light on the character image is thus determined with relatively high accuracy, and the image is then processed according to the obtained light direction, so that the processed result closely matches the real scene the character image originally belonged to and does not look abrupt or unnatural.
In one embodiment, as shown in Fig. 7, step 260 — adding a shadow to the character image according to the direction of the current light to obtain the character image with the shadow added — comprises:
Step 262, applying light to the character image from the direction of the current light using a lighting template to form a shadow;
Step 264, superimposing the shadow on the character image to generate the character image with the shadow added.
In this embodiment of the present application, after the direction of the current light is obtained by the above method, the lighting direction in the lighting template is set to the direction of the current light, and light is applied to the character image from that direction using the lighting template to form a shadow. The shadow is then superimposed on the character image to generate the character image with the shadow added. Because the shadow is formed according to the original illumination direction of the character image, the character image with the shadow added better matches the real scene it originally belonged to and does not look abrupt or unnatural. The lighting template is stored in the electronic device in advance, ready for use at any time.
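A minimal sketch of the shadow-forming and superimposing steps, assuming the lighting template reduces to a per-direction pixel offset of the person's silhouette (the `OFFSETS` table, the `darkness` factor, and the function names are hypothetical illustrations, not from the patent):

```python
import numpy as np

# Hypothetical mapping from light direction to the (row, col) shift of the
# cast shadow: light from the upper left throws the shadow down and right.
OFFSETS = {"upper-left": (6, 6), "upper-right": (6, -6),
           "lower-left": (-6, 6), "lower-right": (-6, -6)}

def add_shadow(person_rgba, background_rgb, direction="upper-left", darkness=0.5):
    """Composite the matted person onto the background with a cast shadow.

    The shadow is the person's alpha silhouette shifted away from the
    light and used to darken the background; the person layer is then
    composited on top.
    """
    alpha = person_rgba[..., 3].astype(np.float64) / 255.0
    dy, dx = OFFSETS[direction]
    shadow = np.roll(np.roll(alpha, dy, axis=0), dx, axis=1)
    out = background_rgb.astype(np.float64)
    out *= 1.0 - darkness * shadow[..., None]        # darken where the shadow falls
    a = alpha[..., None]
    out = a * person_rgba[..., :3].astype(np.float64) + (1.0 - a) * out
    return out.clip(0, 255).astype(np.uint8)
```

Because the shadow follows the estimated original light direction, the composite stays consistent with the scene the person was photographed in.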
In one embodiment, as shown in Fig. 8, after step 262 — applying light to the character image from the direction of the current light using the lighting template to form the shadow — the method comprises:
Step 266, offsetting the shadow according to the direction of the current light to obtain a gradient shadow sequence;
Step 268, superimposing the gradient shadow sequence on the character image to generate the character image with the shadow added.
In this embodiment of the present application, after light is applied to the character image from the direction of the current light using the lighting template to form the shadow, the shadow can further be offset according to the direction of the current light to obtain a gradient shadow sequence, i.e. multiple shadows are formed. The gradient shadow sequence is superimposed on the character image to generate the character image with the shadow added. This creates a dynamic lighting effect, making the processed image more varied and engaging.
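The gradient shadow sequence can be sketched as repeated offsets of a shadow mask along the light direction with a fading weight. The step size and the linear fade curve below are assumptions made for illustration; the patent does not specify them:

```python
import numpy as np

def shadow_sequence(shadow_mask, direction=(1, 1), steps=4):
    """Offset a shadow mask step by step along the light direction,
    fading each copy, to build a gradient shadow sequence."""
    dy, dx = direction
    seq = []
    for k in range(1, steps + 1):
        shifted = np.roll(np.roll(shadow_mask, k * dy, axis=0), k * dx, axis=1)
        seq.append(shifted * (1.0 - k / (steps + 1)))   # fade with distance
    return seq
```

Superimposing each member of the sequence on the character image in turn yields a set of frames that together give the dynamic lighting effect described above.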
In one embodiment, step 266 — offsetting the shadow according to the direction of the current light to obtain the gradient shadow sequence — comprises:
extracting the RGB-A alpha channel from the image to be processed;
offsetting the shadow through the RGB-A alpha channel according to the direction of the current light to obtain the gradient shadow sequence.
Specifically, the RGB-A alpha channel is an 8-bit grayscale channel that records the transparency information of the image in 256 gray levels, defining transparent, opaque and translucent areas: black denotes fully transparent, white denotes opaque, and gray denotes translucent. It should be noted that this transparency and opacity are not directly visible to the human eye. When material images carrying an RGB-A channel are composited, they can simply be stacked in a given layer order; the unwanted parts of a material image become transparent automatically, revealing the image below. Compositing is thus straightforward, and the quality of the composited image is good.
The RGB-A alpha channel is extracted from the image to be processed; the shadow formed above — by applying light to the character image from the direction of the current light using the lighting template — is then offset through the RGB-A alpha channel according to the direction of the current light, yielding the gradient shadow sequence.
In this embodiment of the present application, the RGB-A alpha channel is first extracted from the image to be processed. Because material images carrying an RGB-A channel can be composited simply by stacking them in a given layer order — the unwanted parts becoming transparent automatically and revealing the image below — compositing is straightforward and of good quality. Offsetting the shadow through the RGB-A alpha channel according to the direction of the current light therefore yields a gradient shadow sequence that is both simple to produce and lifelike.
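The alpha-compositing behaviour described above — stacking layers so that the transparent parts of the upper material image automatically reveal the layer below — can be illustrated in a few lines of NumPy. The function name `over` is chosen here for illustration; the 0 = transparent, 255 = opaque convention follows the 8-bit alpha channel described above:

```python
import numpy as np

def over(top_rgba, bottom_rgb):
    """Alpha-composite a material image over a lower layer: pixels with
    alpha 0 disappear automatically and show the layer below; alpha 255
    pixels fully cover it; intermediate values blend."""
    a = top_rgba[..., 3:4].astype(np.float64) / 255.0
    out = a * top_rgba[..., :3] + (1.0 - a) * bottom_rgb
    return out.astype(np.uint8)
```

Applying this per member of the offset shadow sequence composites the faded shadows and the character layer onto the background without any explicit masking step.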
In one embodiment, as shown in Fig. 9, an image processing apparatus 900 is provided, comprising a matting module 920, a light direction obtaining module 940 and a shadow adding module 960. Wherein:
the matting module 920 is configured to extract a character image from an image to be processed;
the light direction obtaining module 940 is configured to obtain the direction of the current light from the face region of the character image;
the shadow adding module 960 is configured to add a shadow to the character image according to the direction of the current light, obtaining the character image with the shadow added.
In one embodiment, the light direction obtaining module 940 includes an HSV image conversion module, configured to convert the RGB image corresponding to the face region of the character image from RGB space into hue-saturation-value (HSV) space, obtaining the HSV image corresponding to the face region of the character image;
a value (V) component obtaining module, configured to obtain the V component from the HSV image corresponding to the face region of the character image;
and a light direction determining module, configured to determine the direction of the current light according to the V component.
In one embodiment, the V component obtaining module is further configured to divide the HSV image corresponding to the face region of the character image down the middle into a first face region and a second face region, divide the first face region into a preset number of sub-regions, divide the second face region into a preset number of sub-regions, and compute the average V component of each of the sub-regions separately.
The light direction determining module is further configured to compute the ratio of the average V component of the first face region to that of the second face region, and, when the ratio is within the preset range, determine that the current light is parallel light.
In one embodiment, the light direction determining module is further configured to compute the ratio of the average V component of the first face region to that of the second face region; when the ratio is not within the preset range, obtain the region with the larger average V component from that ratio; obtain the sub-region with the largest average V component from that region; and determine the direction of the current light from the position of that sub-region.
In one embodiment, the shadow adding module 960 is further configured to apply light to the character image from the direction of the current light using a lighting template to form a shadow, and to superimpose the shadow on the character image to generate the character image with the shadow added.
In one embodiment, the shadow adding module 960 is further configured to offset the shadow according to the direction of the current light to obtain a gradient shadow sequence, and to superimpose the gradient shadow sequence on the character image to generate the character image with the shadow added.
In one embodiment, the shadow adding module 960 is further configured to extract the RGB-A alpha channel from the image to be processed and to offset the shadow through the RGB-A alpha channel according to the direction of the current light, obtaining the gradient shadow sequence.
The division of the modules in the above image processing apparatus is only illustrative; in other embodiments, the image processing apparatus may be divided into different modules as required, to carry out all or part of the functions of the above image processing apparatus.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when the computer program is executed by a processor, the steps of the image processing method provided by the above embodiments are implemented.
In one embodiment, an electronic device is provided, including a memory, a processor, and a computer program stored in the memory and runnable on the processor; the processor implements the steps of the image processing method provided by the above embodiments when executing the computer program.
An embodiment of the present application also provides a computer program product which, when run on a computer, causes the computer to execute the steps of the image processing method provided by the above embodiments.
An embodiment of the present application also provides an electronic device. The electronic device may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales) terminal, a vehicle-mounted computer, a wearable device, and the like. Taking a mobile phone as an example, the electronic device includes an image processing circuit, which may be implemented with hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Figure 10 is a schematic diagram of the image processing circuit in one embodiment. As shown in Figure 10, for ease of illustration, only the aspects of the image processing technique relevant to this embodiment of the present application are shown.
As shown in Figure 10, the image processing circuit includes a first ISP processor 1030, a second ISP processor 1040 and a control logic 1050. A first camera 1010 includes one or more first lenses 1012 and a first image sensor 1014. The first image sensor 1014 may include a color filter array (such as a Bayer filter); it can obtain the light intensity and wavelength information captured by each of its imaging pixels and provide a set of image data that can be processed by the first ISP processor 1030. A second camera 1020 includes one or more second lenses 1022 and a second image sensor 1024. The second image sensor 1024 may include a color filter array (such as a Bayer filter); it can obtain the light intensity and wavelength information captured by each of its imaging pixels and provide a set of image data that can be processed by the second ISP processor 1040.
The first image collected by the first camera 1010 is transmitted to the first ISP processor 1030 for processing. After processing the first image, the first ISP processor 1030 may send statistical data of the first image (such as image brightness, image contrast, image color, and so on) to the control logic 1050, and the control logic 1050 may determine control parameters of the first camera 1010 from the statistical data, so that the first camera 1010 can perform operations such as auto-focus and auto-exposure according to the control parameters. The first image may be stored in an image memory 1060 after being processed by the first ISP processor 1030, and the first ISP processor 1030 may also read the image stored in the image memory 1060 for processing. In addition, the first image may be sent directly to a display 1070 for display after being processed by the ISP processor 1030, and the display 1070 may also read the image in the image memory 1060 for display.
The first ISP processor 1030 processes the image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12 or 14 bits; the first ISP processor 1030 may perform one or more image processing operations on the image data and collect statistical information about the image data. The image processing operations may be carried out with the same or different bit-depth computational precision.
The image memory 1060 may be part of a memory device, a storage device, or an independent dedicated memory within the electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving data from the interface of the first image sensor 1014, the first ISP processor 1030 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 1060 for further processing before being displayed. The first ISP processor 1030 receives the processed data from the image memory 1060 and performs image data processing on it in the RGB and YCbCr color spaces. The image data processed by the first ISP processor 1030 may be output to the display 1070 for viewing by the user and/or for further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the first ISP processor 1030 may also be sent to the image memory 1060, and the display 1070 may read image data from the image memory 1060. In one embodiment, the image memory 1060 may be configured to implement one or more frame buffers.
The statistical data determined by the first ISP processor 1030 may be sent to the control logic 1050. The statistical data may include, for example, statistical information of the first image sensor 1014 such as auto-exposure, auto white balance, auto-focus, flicker detection, black level compensation, and shading correction of the first lens 1012. The control logic 1050 may include a processor and/or microcontroller executing one or more routines (such as firmware); the one or more routines may determine control parameters of the first camera 1010 and control parameters of the first ISP processor 1030 according to the received statistical data. For example, the control parameters of the first camera 1010 may include gain, integration time for exposure control, image stabilization parameters, flash control parameters, control parameters of the first lens 1012 (such as focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for auto white balance and color adjustment (e.g. during RGB processing), as well as shading correction parameters of the first lens 1012.
Similarly, the second image collected by the second camera 1020 is transmitted to the second ISP processor 1040 for processing. After processing the second image, the second ISP processor 1040 may send statistical data of the second image (such as image brightness, image contrast, image color, and so on) to the control logic 1050, and the control logic 1050 may determine control parameters of the second camera 1020 from the statistical data, so that the second camera 1020 can perform operations such as auto-focus and auto-exposure according to the control parameters. The second image may be stored in the image memory 1060 after being processed by the second ISP processor 1040, and the second ISP processor 1040 may also read the image stored in the image memory 1060 for processing. In addition, the second image may be sent directly to the display 1070 for display after being processed by the ISP processor 1040, and the display 1070 may also read the image in the image memory 1060 for display. The second camera 1020 and the second ISP processor 1040 may also implement the processing described for the first camera 1010 and the first ISP processor 1030.
The image processing method provided by the above embodiments can be implemented using the image processing circuit shown in Figure 10.
Any reference to memory, storage, a database or other media used in this application may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM), which serves as an external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), and Rambus dynamic RAM (RDRAM).
The above embodiments express only several implementations of the present application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the present application, and these all fall within the protection scope of the present application. Therefore, the protection scope of this patent shall be subject to the appended claims.
Claims (10)
1. An image processing method, characterized by comprising:
extracting a character image from an image to be processed by matting;
obtaining the direction of the current light from the face region of the character image;
adding a shadow to the character image according to the direction of the current light, obtaining the character image with the shadow added.
2. The method according to claim 1, characterized in that obtaining the direction of the current light from the face region of the character image comprises:
converting the RGB image corresponding to the face region of the character image from RGB space into hue-saturation-value (HSV) space, obtaining the HSV image corresponding to the face region of the character image;
obtaining the value (V) component from the HSV image corresponding to the face region of the character image;
determining the direction of the current light according to the V component.
3. The method according to claim 2, characterized in that obtaining the V component from the HSV image corresponding to the face region of the character image comprises:
dividing the HSV image corresponding to the face region of the character image down the middle into a first face region and a second face region, dividing the first face region into a preset number of sub-regions, and dividing the second face region into a preset number of sub-regions;
computing the average V component of each of the sub-regions separately;
and determining the direction of the current light according to the V component comprises:
computing the ratio of the average V component of the first face region to the average V component of the second face region;
when the ratio is within a preset range, determining that the current light is parallel light.
4. The method according to claim 3, characterized in that, after computing the ratio of the average V component of the first face region to the average V component of the second face region, the method comprises:
when the ratio is not within the preset range, obtaining the region with the larger average V component from the ratio of the average V component of the first face region to that of the second face region;
obtaining the sub-region with the largest average V component from that region;
determining the direction of the current light from the position of the sub-region.
5. The method according to claim 3 or 4, characterized in that adding the shadow to the character image according to the direction of the current light, obtaining the character image with the shadow added, comprises:
applying light to the character image from the direction of the current light using a lighting template to form a shadow;
superimposing the shadow on the character image to generate the character image with the shadow added.
6. The method according to claim 5, characterized in that, after applying light to the character image from the direction of the current light using the lighting template to form the shadow, the method comprises:
offsetting the shadow according to the direction of the current light to obtain a gradient shadow sequence;
superimposing the gradient shadow sequence on the character image to generate the character image with the shadow added.
7. The method according to claim 6, characterized in that offsetting the shadow according to the direction of the current light to obtain the gradient shadow sequence comprises:
extracting the RGB-A alpha channel from the image to be processed;
offsetting the shadow through the RGB-A alpha channel according to the direction of the current light to obtain the gradient shadow sequence.
8. An image processing apparatus, characterized in that the apparatus comprises:
a matting module, configured to extract a character image from an image to be processed;
a light direction obtaining module, configured to obtain the direction of the current light from the face region of the character image;
a shadow adding module, configured to add a shadow to the character image according to the direction of the current light, obtaining the character image with the shadow added.
9. A computer-readable storage medium on which a computer program is stored, characterized in that, when the computer program is executed by a processor, the steps of the image processing method according to any one of claims 1 to 7 are implemented.
10. An electronic device, including a memory, a processor, and a computer program stored in the memory and runnable on the processor, characterized in that the processor implements the steps of the image processing method according to any one of claims 1 to 7 when executing the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811142984.1A CN109300186B (en) | 2018-09-28 | 2018-09-28 | Image processing method and device, storage medium and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811142984.1A CN109300186B (en) | 2018-09-28 | 2018-09-28 | Image processing method and device, storage medium and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109300186A true CN109300186A (en) | 2019-02-01 |
CN109300186B CN109300186B (en) | 2023-03-31 |
Family
ID=65164902
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811142984.1A Active CN109300186B (en) | 2018-09-28 | 2018-09-28 | Image processing method and device, storage medium and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109300186B (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109920045A (en) * | 2019-02-02 | 2019-06-21 | 珠海金山网络游戏科技有限公司 | A kind of scene shade drafting method and device calculate equipment and storage medium |
CN113592753A (en) * | 2021-07-23 | 2021-11-02 | 深圳思谋信息科技有限公司 | Image processing method and device based on industrial camera shooting and computer equipment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100054584A1 (en) * | 2008-08-28 | 2010-03-04 | Microsoft Corporation | Image-based backgrounds for images |
JP2013042428A (en) * | 2011-08-18 | 2013-02-28 | Fujifilm Corp | Imaging device and image processing method |
CN104123743A (en) * | 2014-06-23 | 2014-10-29 | 联想(北京)有限公司 | Image shadow adding method and device |
CN107909057A (en) * | 2017-11-30 | 2018-04-13 | 广东欧珀移动通信有限公司 | Image processing method, device, electronic equipment and computer-readable recording medium |
CN108537749A (en) * | 2018-03-29 | 2018-09-14 | 广东欧珀移动通信有限公司 | Image processing method, device, mobile terminal and computer readable storage medium |
Non-Patent Citations (1)
Title |
---|
郭娟娟等: "基于移动区域的快速车辆检测", 《计算机应用》 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109920045A (en) * | 2019-02-02 | 2019-06-21 | 珠海金山网络游戏科技有限公司 | A kind of scene shade drafting method and device calculate equipment and storage medium |
CN113592753A (en) * | 2021-07-23 | 2021-11-02 | 深圳思谋信息科技有限公司 | Image processing method and device based on industrial camera shooting and computer equipment |
CN113592753B (en) * | 2021-07-23 | 2024-05-07 | 深圳思谋信息科技有限公司 | Method and device for processing image shot by industrial camera and computer equipment |
Also Published As
Publication number | Publication date |
---|---|
CN109300186B (en) | 2023-03-31 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108024055B (en) | Method, apparatus, mobile terminal and the storage medium of white balance processing | |
CN107431790B (en) | Method and non-transitory computer-readable medium for image procossing | |
EP3542347B1 (en) | Fast fourier color constancy | |
CN103430551B (en) | Use the imaging system and its operating method of the lens unit with axial chromatic aberration | |
CN105933617B (en) | A kind of high dynamic range images fusion method for overcoming dynamic problem to influence | |
CN107977940A (en) | background blurring processing method, device and equipment | |
CN108055452A (en) | Image processing method, device and equipment | |
CN107396079B (en) | White balance adjustment method and device | |
CN109191403A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN108537155A (en) | Image processing method, device, electronic equipment and computer readable storage medium | |
CN108024056B (en) | Imaging method and device based on dual camera | |
CN108712608A (en) | Terminal device image pickup method and device | |
CN102663741B (en) | Method for carrying out visual stereo perception enhancement on color digit image and system thereof | |
CN108024054A (en) | Image processing method, device and equipment | |
CN107509031A (en) | Image processing method, device, mobile terminal and computer-readable recording medium | |
CN108154514A (en) | Image processing method, device and equipment | |
CN107872631A (en) | Image capturing method, device and mobile terminal based on dual camera | |
CN109242794B (en) | Image processing method, image processing device, electronic equipment and computer readable storage medium | |
CN108024057A (en) | Background blurring processing method, device and equipment | |
CN108616700A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN107580205B (en) | White balance adjustment method and device | |
CN108156369A (en) | Image processing method and device | |
CN109360254A (en) | Image processing method and device, electronic equipment, computer readable storage medium | |
CN108810406A (en) | Portrait light efficiency processing method, device, terminal and computer readable storage medium | |
CN108053438A (en) | Depth of field acquisition methods, device and equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||