CN107909058A - Image processing method, device, electronic equipment and computer-readable recording medium - Google Patents

Image processing method, device, electronic equipment and computer-readable recording medium

Info

Publication number
CN107909058A
Authority
CN
China
Prior art keywords
image
catchlight
eye region
region
face
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201711242759.0A
Other languages
Chinese (zh)
Inventor
欧阳丹 (Ouyang Dan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201711242759.0A priority Critical patent/CN107909058A/en
Publication of CN107909058A publication Critical patent/CN107909058A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G06T5/77
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30196 Human being; Person
    • G06T2207/30201 Face

Abstract

The invention relates to an image processing method, apparatus, electronic device, and computer-readable storage medium. The method includes: performing face recognition on an image to be processed to determine a face region; collecting feature points of the face region and locating an eye region according to the feature points; detecting whether the eye region contains a catchlight, and if so, enhancing the catchlight; if not, adding a catchlight figure to the eye region. The image processing method, apparatus, electronic device, and computer-readable storage medium can make the people in an image appear more vivid and lifelike and improve the visual effect of the image.

Description

Image processing method, apparatus, electronic device, and computer-readable storage medium
Technical field
This application relates to the field of image processing technologies, and in particular to an image processing method, apparatus, electronic device, and computer-readable storage medium.
Background
When capturing images of people with an imaging device such as a camera, photographers often take care to capture catchlights. A catchlight is the highlight formed on a subject's eye during shooting; catchlights can make the people in a photograph appear more vivid and lifelike.
Summary
Embodiments of the present application provide an image processing method, apparatus, electronic device, and computer-readable storage medium that can make the people in an image more vivid and lifelike and improve the visual effect of the image.
An image processing method includes:
performing face recognition on an image to be processed to determine a face region;
collecting feature points of the face region, and locating an eye region according to the feature points;
detecting whether the eye region contains a catchlight, and if so, enhancing the catchlight;
if not, adding a catchlight figure to the eye region.
An image processing apparatus includes:
a recognition module, configured to perform face recognition on an image to be processed and determine a face region;
an eye locating module, configured to collect feature points of the face region and locate an eye region according to the feature points;
a catchlight detection module, configured to detect whether the eye region contains a catchlight;
an enhancement module, configured to enhance the catchlight if the eye region contains one;
an adding module, configured to add a catchlight figure to the eye region if the eye region contains no catchlight.
An electronic device includes a memory and a processor. The memory stores a computer program which, when executed by the processor, causes the processor to implement the method described above.
A computer-readable storage medium stores a computer program which, when executed by a processor, implements the method described above.
With the above image processing method, apparatus, electronic device, and computer-readable storage medium, face recognition is performed on an image to be processed to determine a face region; feature points of the face region are collected and an eye region is located according to the feature points; whether the eye region contains a catchlight is detected; if so, the catchlight is enhanced, and if not, a catchlight figure is added to the eye region. Enhancing the catchlights of the eye region, or automatically adding a catchlight figure, makes the people in the image more vivid and lifelike and improves the visual effect of the image.
Brief description of the drawings
Fig. 1 is a block diagram of an electronic device in one embodiment;
Fig. 2 is a flowchart of an image processing method in one embodiment;
Fig. 3 is a flowchart of judging whether an eye region contains a catchlight in one embodiment;
Fig. 4 is a flowchart of adding a catchlight figure to an eye region in one embodiment;
Fig. 5 is a flowchart of detecting the light source direction in one embodiment;
Fig. 6 is a schematic diagram of dividing a face region into subregions in one embodiment;
Fig. 7 is a flowchart of calculating the gaze direction in one embodiment;
Fig. 8 is a flowchart of adding a catchlight figure to an eye region in another embodiment;
Fig. 9 is a block diagram of an image processing apparatus in one embodiment;
Fig. 10 is a schematic diagram of an image processing circuit in one embodiment.
Detailed description
To make the objectives, technical solutions, and advantages of the present application clearer, the application is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here serve only to explain the application and are not intended to limit it.
It should be understood that the terms "first", "second", and so on used in this application may describe various elements, but the elements are not limited by these terms; the terms serve only to distinguish one element from another. For example, without departing from the scope of the application, a first client could be called a second client, and similarly a second client could be called a first client. The first client and the second client are both clients, but they are not the same client.
Fig. 1 is a block diagram of an electronic device in one embodiment. As shown in Fig. 1, the electronic device includes a processor, a memory, a display screen, and an input unit connected through a system bus. The memory may include a non-volatile storage medium and an internal memory. The non-volatile storage medium of the electronic device stores an operating system and a computer program which, when executed by the processor, implements the image processing method provided in the embodiments of the present application. The processor provides the computing and control capabilities that support the operation of the entire electronic device. The internal memory of the electronic device provides an environment for running the computer program stored in the non-volatile storage medium. The display screen of the electronic device may be a liquid crystal display, an electronic-ink display, or the like; the input unit may be a touch layer covering the display screen, a button, trackball, or trackpad on the housing of the electronic device, or an external keyboard, trackpad, or mouse. The electronic device may be a mobile phone, tablet computer, personal digital assistant, wearable device, or the like. Those skilled in the art will understand that the structure shown in Fig. 1 is only a block diagram of the part of the structure relevant to the present solution and does not limit the electronic devices to which the solution is applied; a specific electronic device may include more or fewer components than shown, combine certain components, or arrange the components differently.
As shown in Fig. 2, in one embodiment an image processing method is provided, comprising the following steps.
Step 210: perform face recognition on the image to be processed, and determine the face region.
The electronic device can obtain an image to be processed, which may be a preview image captured by an imaging device such as a camera and shown on the display screen, or an image that has already been generated and stored. The electronic device can perform face recognition on the image to be processed to determine the face region in it. The electronic device can extract image features of the image to be processed and analyze them with a preset face recognition model to judge whether the image contains a face and, if so, determine the corresponding face region. The image features may include shape features, spatial features, edge features, and so on, where shape features refer to local shapes in the image to be processed, spatial features refer to the mutual spatial positions or relative directional relationships among multiple regions segmented from the image, and edge features refer to the boundary pixels between two regions in the image.
In one embodiment, the face recognition model may be a decision model built in advance through machine learning. To build the face recognition model, a large number of sample images can be obtained, including both face images and images without people; each sample image can be labeled according to whether it contains a face, and the labeled sample images are used as the input of the face recognition model, which is trained through machine learning.
Step 220: collect the feature points of the face region, and locate the eye region according to the feature points.
The electronic device can collect the feature points of the face region. The feature points can describe information such as the positions and shapes of the facial features in the face region. Each feature point can include a coordinate value, which can be expressed as the corresponding pixel position, for example the pixel at column X, row Y.
The electronic device can first collect the feature points of the face region coarsely and analyze the coarsely collected feature points with a preset analysis model. The analysis model can correct the collected feature points iteratively, gradually reducing the error between the coordinate values of the collected feature points and the true facial feature points of the face region in the image to be processed, and finally output accurate feature points of the face region; the accurate feature points can form the face contour and the contours of the facial features. The electronic device can then locate the eye region in the face region according to the output accurate feature points.
Step 230: detect whether the eye region contains a catchlight; if so, perform step 240; if not, perform step 250.
The electronic device can detect whether the eye region contains a catchlight. Optionally, the electronic device can first obtain the pupil region within the eye region and the luminance information of each pixel in the pupil region. A luminance interval for catchlights can be preset; if the pupil region contains multiple pixels whose luminance falls in the preset interval, the eye region can be judged to contain a catchlight, and the pixels whose luminance falls in the preset interval can be identified as the catchlight. It should be understood that the electronic device can also obtain the color value of each pixel in the pupil region and detect catchlights according to the color values, where a color value may be the value of a pixel in a color space such as RGB (red, green, blue) or HSV (hue, saturation, value); a color-value range for catchlights can be preset, and pixels whose color values fall into the preset range are identified as the catchlight.
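The luminance-interval test described above can be sketched in a few lines of Python. The interval bounds and the minimum pixel count are illustrative assumptions, since the patent leaves the preset values unspecified.

```python
# A minimal sketch of the luminance-interval test: a catchlight is reported
# when enough pupil pixels fall inside a preset brightness interval. The pupil
# region is modeled as a flat list of per-pixel luminances (0-255); lo, hi,
# and min_pixels are illustrative choices, not values from the patent.
def detect_catchlight(pupil_luminances, lo=200, hi=255, min_pixels=3):
    """Return (found, pixel_indices) for pixels inside the preset interval."""
    hits = [i for i, v in enumerate(pupil_luminances) if lo <= v <= hi]
    return len(hits) >= min_pixels, hits

bright_pupil = [40, 50, 230, 245, 250, 60]   # three bright specular pixels
dark_pupil = [40, 50, 55, 60, 45, 52]        # no highlight present
print(detect_catchlight(bright_pupil)[0])    # True
print(detect_catchlight(dark_pupil)[0])      # False
```

The same structure would work for the color-value variant: replace the scalar interval with a per-channel range test in RGB or HSV.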
Step 240: enhance the catchlight.
If the eye region contains a catchlight, the electronic device can enhance it: the luminance of the pixels identified as the catchlight can be increased to make the catchlight of the eye region more prominent, and the transparency and size of the catchlight can also be adjusted to enlarge it, making the person appear more vivid and lifelike.
Step 250: add a catchlight figure to the eye region.
If the eye region contains no catchlight, the electronic device can add a catchlight figure to it. In one embodiment, the electronic device can detect the light source direction of the image to be processed and add the catchlight figure to the eye region according to that direction, where the light source direction is the direction in which the light source in the image emits light. The electronic device can select an addition region within the eye region according to the light source direction and add the catchlight figure there. Optionally, the electronic device can select an addition region within the pupil region that roughly agrees with the light source direction; for example, if the light source direction is to the upper right and the gaze direction is also toward the upper right, the upper-right part of the pupil region can be selected as the addition region. The shape of the catchlight figure can be preselected by the user, such as a crescent, circle, or rectangle; the light source type can also be analyzed and the figure chosen accordingly, for example a rectangle for an incandescent lamp and a circle for sunlight.
In this embodiment, face recognition is performed on the image to be processed to determine the face region; the feature points of the face region are collected and the eye region is located according to them; whether the eye region contains a catchlight is detected; if so, the catchlight is enhanced, and if not, a catchlight figure is added to the eye region. Enhancing the catchlights of the eye region or automatically adding a catchlight figure makes the people in the image more vivid and lifelike and improves the visual effect of the image.
As shown in Fig. 3, in one embodiment, detecting in step 230 whether the eye region contains a catchlight includes the following steps.
Step 302: obtain the pupil region of the eye region according to the feature points.
The electronic device can perform edge detection on the eye region according to the feature points to obtain the orbital edge and the pupil edge of the eye region. An edge indicates where one feature region ends and another begins, and typically exists between object and object, object and background, or region and region. Optionally, edge detection can use any of various edge operators, such as the Roberts cross operator, Prewitt operator, Sobel operator, Kirsch operator, or compass operator.
The electronic device can compute the first or second derivative of the grayscale image corresponding to the eye region; from the first or second derivative, pixels where the gray value changes abruptly or forms a ridge can be found. When performing edge detection on the eye region, the electronic device can first filter the eye region to reduce the error that noise introduces into edge detection. After filtering and noise reduction, the electronic device can enhance the pixels whose gray values change significantly, detect the edge points in the eye region according to a gradient magnitude threshold on the first derivative of the gray values, then localize the edge pixels and obtain information such as their positions and orientations.
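The gradient-threshold step can be illustrated on a single scanline. The sketch below uses a simple forward difference as a stand-in for a full 2-D operator such as Sobel, and the threshold value is an assumed, illustrative number.

```python
# A toy illustration of gradient-threshold edge detection: mark a pixel as an
# edge point when the magnitude of the first derivative (here a simple
# horizontal forward difference) exceeds a threshold. Real systems would apply
# a 2-D operator after smoothing; the threshold is an illustrative assumption.
def edge_points_1d(row, threshold=50):
    """Indices i where |row[i+1] - row[i]| exceeds the gradient threshold."""
    return [i for i in range(len(row) - 1)
            if abs(row[i + 1] - row[i]) > threshold]

scanline = [10, 12, 11, 200, 205, 203, 15, 14]  # dark -> bright -> dark
print(edge_points_1d(scanline))  # [2, 5]: the two intensity transitions
```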
Step 304: generate the luminance histogram of the pupil region, and judge according to it whether the pupil region contains a catchlight.
After the electronic device detects the orbital edge and the pupil edge of the eye region, it can obtain the pupil region from the pupil edge, obtain the luminance information of each pixel in the pupil region, compile statistics on that information, and generate the luminance histogram of the pupil region, which describes how the pixels of the pupil region are distributed across luminance levels. The luminance information of each pixel can be converted to a corresponding gray level; optionally, the luminance histogram can include 256 gray levels, 0 to 255, and the number of pixels belonging to each gray level in the pupil region can be counted to generate the histogram.
The electronic device can judge from the generated luminance histogram whether the pupil region contains a catchlight, based on the luminance distribution of the pixels. If more than a preset number of pixels are distributed in gray levels above a preset level in the histogram, the pupil region contains some high-luminance pixels and can be judged to contain a catchlight. If no gray levels above the preset level together hold more than the preset number of pixels, that is, almost all pixels fall into the lower gray levels, the pupil region can be judged to contain no catchlight.
In one embodiment, if the eye region contains a catchlight, the electronic device can detect the pixels in the pupil region whose luminance difference from neighboring pixels exceeds a preset value, determine the edge of the catchlight, obtain the catchlight region from that edge, and enhance the pixels belonging to the catchlight region.
In this embodiment, whether a catchlight is present can be judged from the luminance histogram of the pupil region, so the catchlights contained in the eye region can be detected accurately and enhanced, making the people in the image more vivid and lifelike and improving the visual effect of the image.
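The histogram test of step 304 can be sketched as follows; the preset gray level and pixel count are assumed values for illustration.

```python
# A sketch of the histogram test from step 304: build a 256-bin gray-level
# histogram of the pupil region and report a catchlight when the bins above
# `level` together hold more than `min_count` pixels. Both thresholds are
# illustrative assumptions, not values from the patent.
def histogram_has_catchlight(pupil_gray, level=200, min_count=2):
    hist = [0] * 256
    for g in pupil_gray:
        hist[g] += 1
    return sum(hist[level + 1:]) > min_count

pupil = [30, 35, 40, 38, 250, 252, 248, 33]   # three specular pixels
print(histogram_has_catchlight(pupil))         # True
print(histogram_has_catchlight([30, 35, 40]))  # False
```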
As shown in Fig. 4, in one embodiment, adding a catchlight figure to the eye region in step 250 includes the following steps.
Step 402: detect the light source direction of the image to be processed according to the face region.
The electronic device detects the light source direction of the image to be processed. Optionally, the electronic device can obtain the luminance information of the face region and detect the light source direction from it. In one embodiment, the electronic device can analyze the distribution of bright and dark areas of the face region according to its luminance information and estimate the light source direction from that distribution. For example, when little of the face region is dark, the light source direction can be estimated as facing the face; when the dark areas are on the left and the bright areas on the right, the light source direction can be estimated as to the right, and so on. The light source direction generally agrees with the direction in which the bright areas are distributed.
Step 404: calculate the gaze direction according to the feature points of the eye region.
The feature points of the eye region may include the feature points forming the eye contour and the pupil contour, and the electronic device can obtain the orbital edge and the pupil region from them. The gaze direction may involve three vertical angles and three horizontal directions: the vertical angles may be looking up, looking level, and looking down, and the horizontal directions may be looking left, looking right, and looking toward the center. In one embodiment, the electronic device can calculate the eye contour size from the orbital edge and distinguish the vertical angle by the contour size, then distinguish the horizontal direction by the position of the pupil region within the eye region, thereby obtaining the gaze direction.
In one embodiment, the electronic device can also obtain the upper orbital edge and the inner and outer eye corners from the feature points of the eye region, determine a circle center from them, and calculate the central angle of the eyelid; the vertical angle is distinguished by this central angle, with different vertical angles corresponding to different central-angle ranges. For example, a central angle of 85 to 95 degrees can be set to mean looking level, below 85 degrees looking down, and above 95 degrees looking up, but this is not limiting. After distinguishing the vertical angle, the electronic device can establish a principal axis through the determined circle center and the center of the pupil region, and distinguish the horizontal direction from the deviation of the principal axis from the vertical, thereby obtaining the gaze direction. It should be understood that the gaze direction can also be calculated in other ways; it is not limited to the above.
Step 406: add the catchlight figure to the eye region according to the light source direction and the gaze direction.
The electronic device can select an addition region within the eye region according to the light source direction of the image to be processed and the gaze direction of the face, and add the catchlight figure there. Optionally, when the light source direction and the gaze direction agree, the electronic device can select an addition region within the pupil region that roughly agrees with the light source direction; for example, if the light source direction is to the upper right and the gaze direction is also toward the upper right, the upper-right part of the pupil region can be selected as the addition region. In one embodiment, if the light source direction and the gaze direction disagree, different weights can be set for them and the addition region selected according to the weights; for example, the weight of the light source direction can be set to 7 and the weight of the gaze direction to 3, and the relative position of the addition region within the pupil region is calculated from the weighted light source direction and gaze direction. The weights of the light source direction and the gaze direction can be preset, or determined from the light intensity of the image to be processed, the deflection angle of the face, and so on; for example, when the light intensity is higher, the weight of the light source direction can be set higher, but this is not limiting.
In this embodiment, if the eye region contains no catchlight, a catchlight figure can be added to the eye region according to the light source direction and the gaze direction, making the people in the image more vivid and lifelike and improving the visual effect of the image.
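The weighted placement rule above can be sketched as a weighted average of two direction vectors. Representing directions as unit offsets within the pupil is an assumption of this sketch; the 7:3 weights follow the example in the text.

```python
# A sketch of the weighted placement rule: combine the light source direction
# and the gaze direction as (dx, dy) offsets within the pupil (x right, y up)
# using the 7:3 weights from the text. The vector representation itself is an
# illustrative assumption, not specified by the patent.
def addition_offset(light_dir, gaze_dir, w_light=7, w_gaze=3):
    """Weighted average of the two direction vectors, normalized by weight sum."""
    total = w_light + w_gaze
    return ((w_light * light_dir[0] + w_gaze * gaze_dir[0]) / total,
            (w_light * light_dir[1] + w_gaze * gaze_dir[1]) / total)

print(addition_offset((1, 1), (1, 1)))   # (1.0, 1.0): directions agree, upper right
print(addition_offset((1, 1), (-1, 0)))  # (0.4, 0.7): pulled mostly toward the light
```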
As shown in Fig. 5, in one embodiment, detecting the light source direction of the image to be processed according to the face region in step 402 includes the following steps.
Step 502: divide the face region into several subregions.
The electronic device can divide the face region into several subregions in a predetermined manner; the manner of division and the number of subregions can be set in advance according to actual needs. For example, the face region can be divided into four subregions: the electronic device can locate the nose region from the feature points of the face region, use the nose region as a vertical center line to divide the face region into left and right halves, and then use the position of the nose wings as a horizontal center line to divide the face region into upper and lower halves, thereby dividing the face region into four subregions, but this is not limiting.
Step 504: extract and compare the luminance information of each subregion, and obtain a comparison result.
The electronic device can extract the luminance information of each subregion of the face region; it can calculate the average luminance of each subregion and use the average luminance as the subregion's luminance information. The electronic device can compare the luminance information of the subregions, judge their relative magnitudes, and obtain a comparison result, which can include the luminance ordering of the subregions.
Step 506: determine the light source direction of the image to be processed according to the comparison result.
The electronic device can obtain from the comparison result the subregion of the face region with higher luminance, that is, the subregion whose luminance information is greater than that of the other, adjacent subregions. The electronic device can determine the light source direction of the image to be processed from the position of that subregion within the face region; for example, if the brightest subregion is at the upper right of the face region, the light source direction can be determined to be upper right, and if it is at the lower left, lower left, and so on.
Fig. 6 is a schematic diagram of dividing the face region into subregions in one embodiment. As shown in Fig. 6, the electronic device can divide the face region into four subregions 610, 620, 630, and 640, extract and compare their luminance information, and judge which is brightest. The electronic device can determine the light source direction of the image to be processed from the comparison result: for example, if the luminance of subregion 610 is greater than that of the other subregions and subregion 610 is at the upper right of the face region, the light source direction can be determined to be upper right; if the luminance of subregion 640 is greater than that of the other subregions and subregion 640 is at the upper left of the face region, the light source direction can be determined to be upper left, and so on. When the luminance differences among the subregions are within a preset range and every luminance exceeds a preset first threshold, the face region is uniformly and brightly lit, and the light source direction can be determined to be facing the face region. When the differences are within the preset range but every luminance is below a preset second threshold, the face region is uniformly and dimly lit, and the light source direction can be determined to be facing away from the face region, that is, backlighting.
In this embodiment, the light source direction can be detected according to the luminance information of each subregion of the face region, making the detected light source direction more accurate and the added catchlight more natural and lifelike.
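Steps 502 to 506 can be sketched as follows under simplifying assumptions: each quadrant of the face region is summarized by its mean luminance, and the uniform-lighting thresholds are illustrative numbers, not values from the patent.

```python
# A sketch of the subregion comparison: the light source direction is read off
# the position of the brightest quadrant, with near-uniform lighting resolved
# by the first (bright) and second (dark) thresholds. All threshold values are
# illustrative assumptions.
def light_source_direction(quadrants, spread=15, bright=180, dark=60):
    """quadrants: mean luminances keyed by position, e.g. 'upper right'."""
    values = list(quadrants.values())
    if max(values) - min(values) <= spread:      # near-uniform lighting
        return "frontal" if min(values) > bright else \
               "backlit" if max(values) < dark else "ambiguous"
    return max(quadrants, key=quadrants.get)     # brightest quadrant's position

q = {"upper left": 90, "upper right": 160, "lower left": 80, "lower right": 120}
print(light_source_direction(q))                    # upper right
print(light_source_direction({k: 200 for k in q}))  # frontal
```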
As shown in Fig. 7, in one embodiment, calculating the gaze direction according to the feature points of the eye region in step 404 includes the following steps.
Step 702: detect the orbital edge and the pupil region of the eye region according to the feature points.
The electronic device can perform edge detection on the eye region according to the feature points to obtain the orbital edge and the pupil edge of the eye region.
Step 704: obtain the pupil center of the pupil region.
After the electronic device detects the orbital edge and the pupil edge of the eye region, it can obtain the pupil region from the pupil edge and select the pupil center, which can be the central point of the pupil region.
Step 706: calculate the distances from the pupil center to the orbital edge in each direction.
In one embodiment, the orbital edge may include edges in different directions, such as the upper orbital edge, lower orbital edge, inner orbital edge, and outer orbital edge. The inner orbital edge is the edge close to the nose and can be a single feature point representing the position of the inner eye corner; the outer orbital edge is the edge away from the nose and can also be a single feature point representing the position of the outer eye corner. The inner and outer orbital edges can be the junction points of the upper and lower orbital edges.
The electronic device can calculate the distances from the pupil center to the orbital edge in each direction, that is, the distances from the pupil center to the upper orbital edge, the lower orbital edge, the inner orbital edge, and the outer orbital edge.
Step 708, direction of visual lines is determined according to the distance of pupil center and the orbital border of each different azimuth.
After electronic equipment calculates the distance of the orbital border of pupil center and each different azimuth, can according to pupil center with The distance ratio of the orbital border of each different azimuth determines direction of visual lines.Further, electronic equipment can calculate pupil center Distance ratio with the distance at superior orbit edge and with inferior orbit edge, then calculate pupil center and interior orbital border distance and With the distance ratio of outer orbital border, direction of visual lines is determined according to two distance ratios, different direction of visual lines can correspond to difference Distance ratio scope.
In one embodiment, after electronic equipment calculates the distance of the orbital border of pupil center and each different azimuth, Direction of visual lines can be calculated by default line-of-sight detection model, which can be built by machine learning.Structure , can be by largely marking the sample image having to learn, progressively determining difference when building line-of-sight detection model The corresponding distance ratio scope of direction of visual lines.
In the present embodiment, sight can be calculated by calculating pupil center and the distance of the orbital border of each different azimuth Direction, can make the direction of visual lines that detects more accurate, make the catchlights of addition more natural, lively.
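Steps 702 through 708 can be sketched as a ratio test over the four edge distances. The ratio ranges below are illustrative assumptions; the patent leaves the concrete ranges to a preset or learned gaze detection model.

```python
# Minimal sketch of steps 702-708: derive a coarse gaze direction from
# the ratios of pupil-center-to-orbit-edge distances. The ratio cutoffs
# (0.8 and 1.25) are assumed values, not taken from this disclosure.

import math

def distance(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def gaze_direction(pupil_center, upper, lower, inner, outer):
    """upper/lower/inner/outer are orbit-edge points as (x, y) tuples."""
    vert = distance(pupil_center, upper) / distance(pupil_center, lower)
    horiz = distance(pupil_center, inner) / distance(pupil_center, outer)

    # A ratio near 1 means the pupil is centered along that axis.
    def classify(r, low_label, high_label):
        if r < 0.8:
            return low_label    # pupil closer to the first edge
        if r > 1.25:
            return high_label   # pupil closer to the second edge
        return "center"

    v = classify(vert, "up", "down")
    h = classify(horiz, "inward", "outward")
    if v == h == "center":
        return "straight"
    return "-".join(x for x in (v, h) if x != "center")
```

For example, a pupil center displaced toward the upper orbit edge while equidistant from the inner and outer corners classifies as "up".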
As shown in Fig. 8, in one embodiment, step 406 of adding the catchlight figure in the eye region according to the light source direction and the gaze direction includes the following steps:
Step 802: select an adding region in the pupil region according to the light source direction and the gaze direction.
The electronic device may select an adding region in the eye region according to the light source direction of the image to be processed and the gaze direction of the face, and add the catchlight figure in the adding region. Alternatively, the electronic device may divide the pupil region into several fixed adding regions in advance and select one of them according to the light source direction and the gaze direction. The adding region may also be a region that is not fixed in advance, in which case the electronic device still selects it according to the light source direction and the gaze direction: when the light source direction is consistent with the gaze direction, the electronic device may select, in the pupil region of the eye region, an adding region consistent with the light source direction; when they are inconsistent, different weights may be assigned to the two, the position of the adding region relative to the pupil region may be computed from the weights, and the adding region selected accordingly.
Step 804: calculate the deviation angle of the adding region relative to the pupil center.
After selecting the adding region, the electronic device may calculate its deviation angle relative to the pupil center, where the deviation angle may refer to the angle and direction formed between the adding region and the horizontal or vertical line through the pupil center. For example, if the selected adding region is located at the upper right of the pupil region, its deviation angle relative to the pupil center may be 45 degrees toward the upper right, although this example is not limiting.
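The deviation angle of step 804 can be sketched with a two-argument arctangent; the screen-coordinate convention (y growing downward, angle measured from the horizontal through the pupil center) is an assumption for illustration only.

```python
# Sketch of step 804: deviation angle of the adding region relative to
# the pupil center, measured from the horizontal line through it.

import math

def deviation_angle(pupil_center, adding_region_center):
    dx = adding_region_center[0] - pupil_center[0]
    # Negate dy so that "up" in image coordinates yields a positive angle.
    dy = pupil_center[1] - adding_region_center[1]
    return math.degrees(math.atan2(dy, dx))
```

With the pupil center at (50, 50), an adding region centered at (60, 40) lies to the upper right and yields a deviation angle of 45 degrees.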
Step 806: obtain a catchlight template matching the deviation angle, and add the catchlight figure in the adding region according to the catchlight template.
Different deviation angles may match different catchlight templates, and a catchlight template may define parameters such as the shape, color, transparency and size of the catchlight. The electronic device may obtain the catchlight template matching the deviation angle and add the catchlight figure in the adding region according to the shape, color, transparency, size and other parameters defined in the template. For example, when the deviation angle of the adding region relative to the pupil center is 45 degrees toward the upper right, the corresponding catchlight template may define the catchlight shape as a crescent with 50% transparency; when the deviation angle is 0, i.e., the adding region is located at the pupil center, the corresponding template may define the catchlight shape as a circle with 30% transparency, although these examples are not limiting.
In one embodiment, when the electronic device adds the catchlight figure in the adding region of the pupil region, it may obtain the white-of-the-eye region of the eye region and extract its color values, where the color values are values of the pixels in a color space such as RGB (red, green, blue) or HSV (hue, saturation, value). The electronic device may use the extracted color values of the white-of-the-eye region as the color parameter of the catchlight, and add the catchlight figure in the adding region according to these color values and the catchlight template.
In this embodiment, the adding region is selected in the pupil region according to the light source direction and the gaze direction, and different catchlight templates are selected according to the position of the adding region, which makes the person in the image more vivid and expressive and improves the visual display effect of the image.
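Step 806 can be sketched as a template lookup followed by an alpha blend of the template into the pupil pixels. The template contents, angle buckets and blend rule below are illustrative assumptions, not the disclosed parameters.

```python
# Illustrative sketch of step 806: pick a catchlight template by
# deviation angle, then blend the template brightness into a pupil
# pixel using the template's transparency as an alpha value.

TEMPLATES = {
    "center":      {"shape": "circle",   "transparency": 0.30},
    "upper-right": {"shape": "crescent", "transparency": 0.50},
}

def pick_template(deviation_angle_deg):
    """Angle 0 means the adding region sits at the pupil center."""
    if deviation_angle_deg == 0:
        return TEMPLATES["center"]
    if 30 <= deviation_angle_deg <= 60:
        return TEMPLATES["upper-right"]
    return TEMPLATES["center"]  # fallback for unmodeled angles

def blend_catchlight(pupil_value, catchlight_value, transparency):
    # Higher transparency keeps more of the underlying pupil pixel.
    alpha = 1.0 - transparency
    return round(alpha * catchlight_value + transparency * pupil_value)
```

Per the embodiment above, the catchlight value passed to the blend could be sampled from the white-of-the-eye region so that the added highlight matches the eye's own tone.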
In one embodiment, the image processing method above further includes: recognizing the facial expression of the face region according to the feature points; selecting a catchlight template according to the facial expression, and adding in the eye region, according to the light source direction and the gaze direction, a catchlight figure matching the catchlight template.
The electronic device collects the feature points of the face region and may recognize the facial expression of the face region according to the feature points, for example by analyzing the feature points with a preset expression recognition model built in advance by machine learning. In one embodiment, the electronic device may build the expression recognition model in advance from a large number of sample images, each labeled with a facial expression. Alternatively, the facial expressions may include laughing, smiling, serious, calm, sad and crying. The electronic device may take the sample images as the input of the expression recognition model and train it by machine learning or similar means to build the model.
In one embodiment, during training the electronic device may map each sample image to a high-dimensional feature space and train a support vector set representing the facial feature points of each sample image, forming in the expression recognition model a discriminant function for judging the facial expression to which the feature points belong. After collecting the feature points of the face region in the image to be processed, the electronic device inputs the feature points into the expression recognition model, which may map them to the high-dimensional feature space and determine the facial expression of the face region according to each discriminant function.
Different facial expressions may match different catchlight templates, and a catchlight template may define parameters such as the shape, color, transparency and size of the catchlight. After recognizing the facial expression, the electronic device may select the catchlight template matching the expression, select the adding region in the pupil region according to the light source direction of the image to be processed and the gaze direction of the person, and then add the catchlight figure in the adding region according to the template. For example, when the recognized expression is a smile, the corresponding template may define a crescent catchlight with 50% transparency; when the recognized expression is serious, the corresponding template may define a rectangular catchlight with 55% transparency, although these examples are not limiting.
In this embodiment, the catchlight template is selected according to the facial expression, which makes the person in the image more vivid and expressive and improves the visual display effect of the image.
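The expression-to-template matching can be sketched as a simple lookup keyed by the recognized expression label. The labels and template parameters are assumptions echoing the examples above; a neutral fallback is added for expressions the table does not model.

```python
# Sketch of expression-driven template selection: the recognized
# expression label selects the catchlight template. Labels and
# parameters are illustrative assumptions.

EXPRESSION_TEMPLATES = {
    "smile":   {"shape": "crescent",  "transparency": 0.50},
    "serious": {"shape": "rectangle", "transparency": 0.55},
}
DEFAULT_TEMPLATE = {"shape": "circle", "transparency": 0.30}

def template_for_expression(expression):
    """Fall back to a neutral template for unmodeled expressions."""
    return EXPRESSION_TEMPLATES.get(expression, DEFAULT_TEMPLATE)
```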
In one embodiment, before performing face recognition on the image to be processed and determining the face region in step 210, the method further includes the following steps:
Step (a): if several images shot continuously exist, select the open-eye images from them according to the human eye state.
Images shot continuously are images shot rapidly and without interruption from the same orientation and the same angle; in normal cases they are highly similar. These continuously shot images may be images captured by the electronic device or images obtained by the electronic device through network transmission. After obtaining the continuously shot face images, the electronic device may extract the facial feature points in them, such as the feature points of the facial features. The electronic device may mark the positions of the facial features according to the feature points, for example recognizing the eye region according to the eyeball feature points. After obtaining the feature points of the face region, the electronic device may extract the human eye features of the face and determine the open-eye images according to them. An open-eye image is an image in which the human eyes are in the open state. The human eye features may include eyeball shape, eyeball position, eyeball area, gaze direction, pupil height, white-of-the-eye area, etc. Judgment conditions corresponding to the human eye features may be preset in the electronic device; after obtaining the features, the electronic device may compare them one by one with the preset judgment conditions to judge whether a face image is an open-eye image. For example, when the eyeball area of the face detected in the image is greater than a first threshold, the face is judged to be in the open-eye state and the image is an open-eye image; or when the detected pupil height of the face is within a preset range, the face is judged to be in the open-eye state and the image is an open-eye image.
Step (b): if there are multiple open-eye images among the images, synthesize them and take the synthesized image as the image to be processed.
When multiple open-eye images exist among the continuously shot images, the electronic device may synthesize them and take the synthesized image as the image to be processed. Synthesizing the images reduces noise and improves image quality.
Step (c): if there is only one open-eye image among the images, take it as the image to be processed.
If only one open-eye image exists among the continuously shot images, that image may be taken as the image to be processed, and the catchlight figure is added in it.
In this embodiment, when continuously shot images exist, selecting the open-eye images according to the human eye state as the image to be processed improves the quality of the image and makes its visual display effect better.
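Steps (a) through (c) can be sketched as a filter-then-average pass over the burst. The eye-feature name, the area threshold and the per-pixel averaging used for "synthesis" are assumptions for illustration; the disclosure leaves the exact judgment conditions and synthesis method open.

```python
# Sketch of steps (a)-(c): keep frames whose eye features pass an
# open-eye test, then average multiple open-eye frames to reduce noise.

def is_open_eye(eye_features, area_thresh=30):
    # One of the preset judgment conditions: eyeball area above a threshold.
    return eye_features["eyeball_area"] > area_thresh

def select_pending_image(frames):
    """Each frame: {'pixels': [...], 'eye': {...}}. Returns a pixel list."""
    open_frames = [f for f in frames if is_open_eye(f["eye"])]
    if not open_frames:
        return None  # no open-eye frame; nothing to process
    if len(open_frames) == 1:
        return open_frames[0]["pixels"]
    # Synthesize by averaging pixel values across the open-eye frames.
    stacks = zip(*(f["pixels"] for f in open_frames))
    return [round(sum(vals) / len(vals)) for vals in stacks]
```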
In one embodiment, an image processing method is provided, including the following steps:
Step (1): perform face recognition on the image to be processed and determine the face region.
Optionally, before step (1), the method further includes: if several images shot continuously exist, selecting the open-eye images from them according to the human eye state; if there are multiple open-eye images, synthesizing them and taking the synthesized image as the image to be processed; if there is only one open-eye image, taking it as the image to be processed.
Step (2): collect the feature points of the face region and locate the eye region according to the feature points.
Step (3): detect whether the eye region contains a catchlight.
Optionally, step (3) includes: obtaining the pupil region of the eye region according to the feature points; generating the brightness histogram of the pupil region, and judging whether the pupil region contains a catchlight according to the brightness histogram.
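The histogram test in step (3) can be sketched as follows: a pupil normally reads dark, so a visible cluster of very bright pixels suggests a catchlight is already present. The bin count, the bright-bin cutoff and the minimum fraction are assumed values for illustration.

```python
# Sketch of the brightness-histogram catchlight test in step (3).

def brightness_histogram(pixels, bins=8):
    """Histogram of 0-255 grayscale values over `bins` equal-width bins."""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    return hist

def has_catchlight(pupil_pixels, bright_bin_start=6, min_fraction=0.05):
    # A catchlight shows up as a non-trivial share of pixels in the
    # brightest histogram bins of an otherwise dark pupil region.
    hist = brightness_histogram(pupil_pixels)
    bright = sum(hist[bright_bin_start:])
    return bright / len(pupil_pixels) >= min_fraction
```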
Step (4): if the eye region contains a catchlight, perform enhancement processing on the catchlight.
Step (5): if the eye region does not contain a catchlight, add a catchlight figure in the eye region.
Optionally, step (5) includes: detecting the light source direction of the image to be processed according to the face region; computing the gaze direction according to the feature points of the eye region; adding the catchlight figure in the eye region according to the light source direction and the gaze direction.
Optionally, detecting the light source direction of the image to be processed according to the face region includes: dividing the face region into several subregions; extracting and comparing the brightness information of each subregion to obtain a comparison result; determining the light source direction of the image to be processed according to the comparison result.
Optionally, computing the gaze direction according to the feature points of the eye region includes: detecting the orbit edge and the pupil region of the eye region according to the feature points; obtaining the pupil center of the pupil region; calculating the distances from the pupil center to the orbit edges in the different directions; determining the gaze direction according to these distances.
Optionally, adding the catchlight figure in the eye region according to the light source direction and the gaze direction includes: selecting the adding region in the pupil region according to the light source direction and the gaze direction; calculating the deviation angle of the adding region relative to the pupil center; obtaining the catchlight template matching the deviation angle, and adding the catchlight figure in the adding region according to the template.
In this embodiment, face recognition is performed on the image to be processed to determine the face region; the feature points of the face region are collected and the eye region is located according to them; whether the eye region contains a catchlight is detected; if so, the catchlight is enhanced, and if not, a catchlight figure is added in the eye region. The catchlight in the eye region can thus be enhanced, or a catchlight figure added automatically, making the person in the image more vivid and expressive and improving the visual display effect of the image.
As shown in Fig. 9, in one embodiment, an image processing apparatus 900 is provided, including an identification module 910, an eye locating module 920, a catchlight detection module 930, an enhancement module 940 and an adding module 950.
Identification module 910: performs face recognition on the image to be processed and determines the face region.
Eye locating module 920: collects the feature points of the face region and locates the eye region according to the feature points.
Catchlight detection module 930: detects whether the eye region contains a catchlight.
Enhancement module 940: if the eye region contains a catchlight, performs enhancement processing on the catchlight.
Adding module 950: if the eye region does not contain a catchlight, adds a catchlight figure in the eye region.
In this embodiment, face recognition is performed on the image to be processed to determine the face region; the feature points of the face region are collected and the eye region is located according to them; whether the eye region contains a catchlight is detected; if so, the catchlight is enhanced, and if not, a catchlight figure is added in the eye region. The catchlight in the eye region can thus be enhanced, or a catchlight figure added automatically, making the person in the image more vivid and expressive and improving the visual display effect of the image.
In one embodiment, the catchlight detection module 930 includes a region acquisition unit and a generation unit.
Region acquisition unit: obtains the pupil region of the eye region according to the feature points.
Generation unit: generates the brightness histogram of the pupil region and judges whether the pupil region contains a catchlight according to the brightness histogram.
In this embodiment, whether a catchlight is present is judged from the brightness histogram of the pupil region, so an existing catchlight in the eye region can be accurately detected and enhanced, making the person in the image more vivid and expressive and improving the visual display effect of the image.
In one embodiment, the adding module 950 includes a light source direction detection unit, a gaze computation unit and an adding unit.
Light source direction detection unit: detects the light source direction of the image to be processed according to the face region.
Gaze computation unit: computes the gaze direction according to the feature points of the eye region.
Adding unit: adds the catchlight figure in the eye region according to the light source direction and the gaze direction.
In this embodiment, if the eye region contains no catchlight, a catchlight figure is added in the eye region according to the light source direction and the gaze direction, making the person in the image more vivid and expressive and improving the visual display effect of the image.
In one embodiment, the light source direction detection unit includes a dividing subunit, a comparing subunit and a light source determining subunit.
Dividing subunit: divides the face region into several subregions.
Comparing subunit: extracts and compares the brightness information of each subregion to obtain a comparison result.
Light source determining subunit: determines the light source direction of the image to be processed according to the comparison result.
In this embodiment, the light source direction is detected according to the brightness information of each subregion of the face region, which makes the detected light source direction more accurate and the added catchlights more natural and vivid.
In one embodiment, the gaze computation unit includes an edge detection subunit, a center acquisition subunit, a distance computation subunit and a gaze determination subunit.
Edge detection subunit: detects the orbit edge and the pupil region of the eye region according to the feature points.
Center acquisition subunit: obtains the pupil center of the pupil region.
Distance computation subunit: calculates the distances from the pupil center to the orbit edges in the different directions.
Gaze determination subunit: determines the gaze direction according to the distances from the pupil center to the orbit edges in the different directions.
In this embodiment, the gaze direction is computed from the distances between the pupil center and the orbit edges in the different directions, which makes the detected gaze direction more accurate and the added catchlights more natural and vivid.
In one embodiment, the adding unit includes a selecting subunit, an offset computation subunit and an adding subunit.
Selecting subunit: selects the adding region in the pupil region according to the light source direction and the gaze direction.
Offset computation subunit: calculates the deviation angle of the adding region relative to the pupil center.
Adding subunit: obtains the catchlight template matching the deviation angle and adds the catchlight figure in the adding region according to the template.
In this embodiment, the adding region is selected in the pupil region according to the light source direction and the gaze direction, and different catchlight templates are selected according to the position of the adding region, which makes the person in the image more vivid and expressive and improves the visual display effect of the image.
In one embodiment, the image processing apparatus 900 above further includes, in addition to the identification module 910, the eye locating module 920, the catchlight detection module 930, the enhancement module 940 and the adding module 950, an image selecting module and a synthesis module.
Image selecting module: if several images shot continuously exist, selects the open-eye images from them according to the human eye state.
Synthesis module: if there are multiple open-eye images among the images, synthesizes them and takes the synthesized image as the image to be processed.
The image selecting module is further configured to, if there is only one open-eye image among the images, take it as the image to be processed.
In this embodiment, when continuously shot images exist, selecting the open-eye images according to the human eye state as the image to be processed improves the quality of the image and makes its visual display effect better.
The embodiment of the present application also provides an electronic device. The electronic device includes an image processing circuit, which may be implemented with hardware and/or software components and may include various processing units defining an ISP (Image Signal Processing) pipeline. Fig. 10 is a schematic diagram of the image processing circuit in one embodiment. As shown in Fig. 10, for ease of illustration, only the aspects of the image processing technique related to the embodiment of the present application are shown.
As shown in Fig. 10, the image processing circuit includes an ISP processor 1040 and a control logic device 1050. The image data captured by the imaging device 1010 is first processed by the ISP processor 1040, which analyzes the image data to capture image statistics usable for determining one or more control parameters of the imaging device 1010. The imaging device 1010 may include a camera with one or more lenses 1012 and an image sensor 1014. The image sensor 1014 may include a color filter array (such as a Bayer filter); it may obtain the light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that can be processed by the ISP processor 1040. The sensor 1020 (e.g., a gyroscope) may supply collected image processing parameters (such as stabilization parameters) to the ISP processor 1040 based on the sensor 1020 interface type. The sensor 1020 interface may be an SMIA (Standard Mobile Imaging Architecture) interface, another serial or parallel camera interface, or a combination of the above.
In addition, the image sensor 1014 may also send the raw image data to the sensor 1020, which may supply it to the ISP processor 1040 based on the sensor 1020 interface type, or store it in the image memory 1030.
The ISP processor 1040 processes the raw image data pixel by pixel in various formats. For example, each image pixel may have a bit depth of 8, 10, 12 or 14 bits; the ISP processor 1040 may perform one or more image processing operations on the raw image data and collect statistical information about the image data. The image processing operations may be performed with the same or different bit depth precision.
The ISP processor 1040 may also receive image data from the image memory 1030. For example, the sensor 1020 interface sends the raw image data to the image memory 1030, and the raw image data in the image memory 1030 is then provided to the ISP processor 1040 for processing. The image memory 1030 may be part of a memory device, a storage device or an independent dedicated memory in the electronic device, and may include a DMA (Direct Memory Access) feature.
When receiving raw image data from the image sensor 1014 interface, from the sensor 1020 interface or from the image memory 1030, the ISP processor 1040 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to the image memory 1030 for further processing before being displayed. The ISP processor 1040 may also receive processed data from the image memory 1030 and perform image data processing on it in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to the display 1080 for viewing by the user and/or further processing by a graphics engine or GPU (Graphics Processing Unit). In addition, the output of the ISP processor 1040 may also be sent to the image memory 1030, and the display 1080 may read image data from the image memory 1030. In one embodiment, the image memory 1030 may be configured to implement one or more frame buffers. In addition, the output of the ISP processor 1040 may be sent to the encoder/decoder 1070 to encode/decode the image data. The encoded image data may be saved and decompressed before being displayed on the display 1080.
The ISP processor 1040 processes the image data in the following steps: performing VFE (Video Front End) processing and CPP (Camera Post Processing) processing on the image data. The VFE processing of the image data may include correcting the contrast or brightness of the image data, modifying digitally recorded illumination state data, compensating the image data (e.g., white balance, automatic gain control, gamma correction), filtering the image data, etc. The CPP processing of the image data may include scaling the image and providing a preview frame and a record frame to each path; the CPP may use different codecs to process the preview frame and the record frame.
The image data processed by the ISP processor 1040 may be sent to the beautification module 1060 to beautify the image before it is displayed. The beautification processing performed by the beautification module 1060 on the image data may include whitening, freckle removal, skin smoothing, face slimming, acne removal, eye enlargement, etc. The beautification module 1060 may be the CPU (Central Processing Unit), the GPU or a coprocessor of the electronic device. The data processed by the beautification module 1060 may be sent to the encoder/decoder 1070 to encode/decode the image data; the encoded image data may be saved and decompressed before being displayed on the display 1080. The beautification module 1060 may also be located between the encoder/decoder 1070 and the display 1080, i.e., the beautification module beautifies the already-imaged image. The encoder/decoder 1070 may be the CPU, the GPU or a coprocessor of the electronic device.
The statistics determined by the ISP processor 1040 may be sent to the control logic device 1050. For example, the statistics may include image sensor 1014 statistical information such as automatic exposure, automatic white balance, automatic focusing, flicker detection, black level compensation and lens 1012 shading correction. The control logic device 1050 may include a processor and/or microcontroller executing one or more routines (such as firmware), and the one or more routines may determine the control parameters of the imaging device 1010 and of the ISP processor 1040 according to the received statistics. For example, the control parameters of the imaging device 1010 may include sensor 1020 control parameters (e.g., gain, integration time of the exposure control), camera flash control parameters, lens 1012 control parameters (e.g., focusing or zooming focal length), or combinations of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), and lens 1012 shading correction parameters.
In this embodiment, the image processing method above can be implemented with the image processing technique of Fig. 10.
In one embodiment, an electronic device is provided, including a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to perform the following steps:
performing face recognition on the image to be processed, and determining the face region;
collecting the feature points of the face region, and locating the eye region according to the feature points;
detecting whether the eye region contains a catchlight, and if so, performing enhancement processing on the catchlight;
if not, adding a catchlight figure in the eye region.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored; when executed by a processor, the computer program implements the image processing method above.
In one embodiment, a computer program product including a computer program is provided; when it runs on an electronic device, the electronic device implements the image processing method above.
Those of ordinary skill in the art will appreciate that all or part of the flow in the methods of the embodiments above can be completed by a computer program instructing the relevant hardware. The program may be stored in a non-volatile computer-readable storage medium, and when executed may include the flows of the embodiments of the methods above. The storage medium may be a magnetic disk, an optical disc, a read-only memory (ROM), etc.
Any reference to memory, storage, a database or other media as used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM), which serves as external cache. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM) and Rambus dynamic RAM (RDRAM).
The technical features of the above embodiments may be combined arbitrarily. For brevity, not all possible combinations of the technical features in the above embodiments are described; however, as long as a combination of these technical features contains no contradiction, it should be considered within the scope recorded in this specification.
The above embodiments express only several implementations of the application, and their description is relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that, for those of ordinary skill in the art, various modifications and improvements can be made without departing from the concept of the application, and these all fall within the protection scope of the application. Therefore, the protection scope of this patent shall be determined by the appended claims.

Claims (10)

  1. An image processing method, characterized by comprising:
    performing face recognition on an image to be processed to determine a face region;
    collecting feature points of the face region, and locating an eye region according to the feature points;
    detecting whether the eye region contains catchlights, and if so, enhancing the catchlights;
    if not, adding a catchlight figure to the eye region.
  2. The method according to claim 1, characterized in that detecting whether the eye region contains catchlights comprises:
    obtaining a pupil region of the eye region according to the feature points;
    generating a brightness histogram of the pupil region, and judging whether the pupil region contains catchlights according to the brightness histogram.
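The histogram test of claim 2 can be sketched as below: build a 256-bin brightness histogram of the pupil patch and report a catchlight when enough pixels land in the brightest bins. The 220-intensity cutoff and three-pixel minimum are illustrative assumptions, not values from the patent.

```python
import numpy as np

def pupil_has_catchlight(pupil: np.ndarray, cutoff: int = 220, min_pixels: int = 3) -> bool:
    """Judge catchlight presence from the bright tail of the pupil's brightness histogram."""
    hist, _ = np.histogram(pupil, bins=256, range=(0, 256))
    return int(hist[cutoff:].sum()) >= min_pixels
```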
  3. The method according to claim 1, characterized in that adding a catchlight figure to the eye region comprises:
    detecting a light source direction of the image to be processed according to the face region;
    calculating a gaze direction according to the feature points of the eye region;
    adding a catchlight figure to the eye region according to the light source direction and the gaze direction.
  4. The method according to claim 3, characterized in that detecting the light source direction of the image to be processed according to the face region comprises:
    dividing the face region into several subregions;
    extracting the brightness information of each subregion and comparing it to obtain a comparison result;
    determining the light source direction of the image to be processed according to the comparison result.
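A minimal sketch of claim 4's subregion comparison, using just two subregions (left and right face halves) and mean brightness; the two-way split and the 5-level tolerance are assumptions for illustration, since the patent does not fix the number of subregions.

```python
import numpy as np

def light_source_direction(face: np.ndarray) -> str:
    """Compare mean brightness of the left and right face halves to
    estimate which side the light comes from."""
    h, w = face.shape
    left = face[:, : w // 2].mean()
    right = face[:, w // 2:].mean()
    if abs(left - right) < 5:      # nearly equal: treat as frontal light
        return "front"
    return "left" if left > right else "right"
```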
  5. The method according to claim 3, characterized in that calculating the gaze direction according to the feature points of the eye region comprises:
    detecting the orbital border and the pupil region of the eye region according to the feature points;
    obtaining the pupil center of the pupil region;
    calculating the distance between the pupil center and the orbital border in each direction;
    determining the gaze direction according to the distances between the pupil center and the orbital border in the different directions.
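Claim 5's distance comparison can be sketched in one dimension: the pupil center sits closer to the orbital border on the side being looked at. The one-pixel tolerance and the left/center/right labels are illustrative assumptions.

```python
def gaze_direction(pupil_center, orbit_left, orbit_right):
    """Compare the pupil center's horizontal distances to the left and
    right orbital border points; the smaller distance marks the gaze side."""
    d_left = abs(pupil_center[0] - orbit_left[0])
    d_right = abs(orbit_right[0] - pupil_center[0])
    if abs(d_left - d_right) <= 1:   # roughly equidistant: looking ahead
        return "center"
    return "left" if d_left < d_right else "right"
```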
  6. The method according to claim 3, characterized in that adding a catchlight figure to the eye region according to the light source direction and the gaze direction comprises:
    selecting an adding region within the pupil region according to the light source direction and the gaze direction;
    calculating a deviation angle of the adding region relative to the pupil center;
    obtaining a catchlight template matching the deviation angle, and adding a catchlight figure to the adding region according to the catchlight template.
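The deviation-angle and template-matching steps of claim 6 can be sketched as below. The angle convention (degrees, 0 = to the right, counter-clockwise in mathematical coordinates) and the four quadrant templates are assumptions; the patent does not specify how templates are keyed to angles.

```python
import math

def deviation_angle(pupil_center, add_point) -> float:
    """Angle of the adding region relative to the pupil center, in [0, 360)."""
    dx = add_point[0] - pupil_center[0]
    dy = add_point[1] - pupil_center[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0

def pick_template(angle: float, templates=("right", "up", "left", "down")) -> str:
    """Map the angle onto one of four illustrative quadrant templates."""
    return templates[int(((angle + 45.0) % 360.0) // 90.0)]
```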
  7. The method according to any one of claims 1 to 6, characterized by further comprising, before performing face recognition on the image to be processed:
    if multiple continuously captured frames exist, selecting open-eye images from the frames according to the human eye state;
    if multiple open-eye frames exist among the frames, synthesizing the open-eye frames and using the synthesized image as the image to be processed;
    if a single open-eye frame exists among the frames, using that frame as the image to be processed.
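The burst-selection logic of claim 7 can be sketched as below, using a plain per-pixel average as the "synthesis" step; the patent does not specify the synthesis method, and the eye-state flags are assumed to come from an upstream detector.

```python
import numpy as np

def pick_image_to_process(frames, eye_open_flags):
    """From a continuously captured burst, return the single open-eye frame,
    the average of several open-eye frames, or None if every eye is closed."""
    open_frames = [f for f, is_open in zip(frames, eye_open_flags) if is_open]
    if not open_frames:
        return None                       # no usable frame in the burst
    if len(open_frames) == 1:
        return open_frames[0]
    stack = np.stack([f.astype(np.float32) for f in open_frames])
    return stack.mean(axis=0).astype(np.uint8)
```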
  8. An image processing apparatus, characterized by comprising:
    an identification module for performing face recognition on an image to be processed to determine a face region;
    an eye locating module for collecting feature points of the face region and locating an eye region according to the feature points;
    a catchlight detection module for detecting whether the eye region contains catchlights;
    an enhancement module for enhancing the catchlights if the eye region contains catchlights;
    an adding module for adding a catchlight figure to the eye region if the eye region does not contain catchlights.
  9. An electronic device comprising a memory and a processor, the memory storing a computer program which, when executed by the processor, causes the processor to implement the method according to any one of claims 1 to 7.
  10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 7.
CN201711242759.0A 2017-11-30 2017-11-30 Image processing method, device, electronic equipment and computer-readable recording medium Pending CN107909058A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711242759.0A CN107909058A (en) 2017-11-30 2017-11-30 Image processing method, device, electronic equipment and computer-readable recording medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201711242759.0A CN107909058A (en) 2017-11-30 2017-11-30 Image processing method, device, electronic equipment and computer-readable recording medium

Publications (1)

Publication Number Publication Date
CN107909058A true CN107909058A (en) 2018-04-13

Family

ID=61848294

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711242759.0A Pending CN107909058A (en) 2017-11-30 2017-11-30 Image processing method, device, electronic equipment and computer-readable recording medium

Country Status (1)

Country Link
CN (1) CN107909058A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035136A (en) * 2018-07-26 2018-12-18 北京小米移动软件有限公司 Image processing method and device, storage medium
CN109255796A (en) * 2018-09-07 2019-01-22 浙江大丰实业股份有限公司 Stage equipment security solution platform
CN109544444A (en) * 2018-11-30 2019-03-29 深圳市脸萌科技有限公司 Image processing method, device, electronic equipment and computer storage medium
WO2021139382A1 (en) * 2020-01-06 2021-07-15 北京字节跳动网络技术有限公司 Face image processing method and apparatus, readable medium, and electronic device
CN113228097A (en) * 2018-12-29 2021-08-06 浙江大华技术股份有限公司 Image processing method and system
CN113361463A (en) * 2021-06-30 2021-09-07 深圳市斯博科技有限公司 Optimal salient region determining method and device, computer equipment and storage medium
WO2022148248A1 (en) * 2021-01-06 2022-07-14 腾讯科技(深圳)有限公司 Image processing model training method, image processing method and apparatus, electronic device, and computer program product
WO2022246606A1 (en) * 2021-05-24 2022-12-01 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Electrical device, method of generating image data, and non-transitory computer readable medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1949822A (en) * 2005-10-14 2007-04-18 三星电子株式会社 Apparatus, media and method for facial image compensating
CN101344919A (en) * 2008-08-05 2009-01-14 华南理工大学 Sight tracing method and disabled assisting system using the same
CN106023104A (en) * 2016-05-16 2016-10-12 厦门美图之家科技有限公司 Human face eye area image enhancement method and system and shooting terminal

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1949822A (en) * 2005-10-14 2007-04-18 三星电子株式会社 Apparatus, media and method for facial image compensating
CN101344919A (en) * 2008-08-05 2009-01-14 华南理工大学 Sight tracing method and disabled assisting system using the same
CN106023104A (en) * 2016-05-16 2016-10-12 厦门美图之家科技有限公司 Human face eye area image enhancement method and system and shooting terminal

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
形色主义-鹤: "点亮你的双眸:用PS打造无敌眼神光" ["Light up your eyes: creating catchlights with Photoshop"], 《HTTP://WWW.360DOC.COM/CONTENT/06/1108/17/11821_253375.SHTML》 *
郝群 (HAO QUN) et al.: "基于图像处理的人眼注视方向检测研究" ["Research on detecting human eye gaze direction based on image processing"], 《光学技术》 [Optical Technique] *

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109035136B (en) * 2018-07-26 2023-05-09 北京小米移动软件有限公司 Image processing method and device and storage medium
CN109035136A (en) * 2018-07-26 2018-12-18 北京小米移动软件有限公司 Image processing method and device, storage medium
CN109255796A (en) * 2018-09-07 2019-01-22 浙江大丰实业股份有限公司 Stage equipment security solution platform
CN109255796B (en) * 2018-09-07 2022-01-28 浙江大丰实业股份有限公司 Safety analysis platform for stage equipment
CN109544444A (en) * 2018-11-30 2019-03-29 深圳市脸萌科技有限公司 Image processing method, device, electronic equipment and computer storage medium
CN113228097A (en) * 2018-12-29 2021-08-06 浙江大华技术股份有限公司 Image processing method and system
CN113228097B (en) * 2018-12-29 2024-02-02 浙江大华技术股份有限公司 Image processing method and system
WO2021139382A1 (en) * 2020-01-06 2021-07-15 北京字节跳动网络技术有限公司 Face image processing method and apparatus, readable medium, and electronic device
GB2599036A (en) * 2020-01-06 2022-03-23 Beijing Bytedance Network Tech Co Ltd Face image processing method and apparatus, readable medium, and electronic device
US11887325B2 (en) 2020-01-06 2024-01-30 Beijing Bytedance Network Technology Co., Ltd. Face image processing method and apparatus, readable medium, and electronic device
WO2022148248A1 (en) * 2021-01-06 2022-07-14 腾讯科技(深圳)有限公司 Image processing model training method, image processing method and apparatus, electronic device, and computer program product
WO2022246606A1 (en) * 2021-05-24 2022-12-01 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Electrical device, method of generating image data, and non-transitory computer readable medium
CN113361463B (en) * 2021-06-30 2024-02-02 深圳万兴软件有限公司 Optimal salient region determination method, device, computer equipment and storage medium
CN113361463A (en) * 2021-06-30 2021-09-07 深圳市斯博科技有限公司 Optimal salient region determining method and device, computer equipment and storage medium

Similar Documents

Publication Publication Date Title
CN107909057A (en) Image processing method, device, electronic equipment and computer-readable recording medium
CN107909058A (en) Image processing method, device, electronic equipment and computer-readable recording medium
CN108537155B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111402135B (en) Image processing method, device, electronic equipment and computer readable storage medium
CN107680128A (en) Image processing method, device, electronic equipment and computer-readable recording medium
CN107808137A (en) Image processing method, device, electronic equipment and computer-readable recording medium
CN107730446B (en) Image processing method, image processing device, computer equipment and computer readable storage medium
CN107945135B (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN108022206A (en) Image processing method, device, electronic equipment and computer-readable recording medium
CN107451969B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107818305A (en) Image processing method, device, electronic equipment and computer-readable recording medium
CN108012080A (en) Image processing method, device, electronic equipment and computer-readable recording medium
CN107862657A (en) Image processing method, device, computer equipment and computer-readable recording medium
CN107730445A (en) Image processing method, device, storage medium and electronic equipment
CN108537749A (en) Image processing method, device, mobile terminal and computer readable storage medium
CN107886484A Face beautification method, apparatus, computer-readable recording medium and electronic equipment
CN107945107A (en) Image processing method, device, computer-readable recording medium and electronic equipment
CN107766831A (en) Image processing method, device, mobile terminal and computer-readable recording medium
CN108009999A (en) Image processing method, device, computer-readable recording medium and electronic equipment
CN107509031A (en) Image processing method, device, mobile terminal and computer-readable recording medium
KR20130108456A (en) Image processing device, image processing method, and control program
CN107730444A Image processing method, device, readable storage medium and computer equipment
CN108024107A (en) Image processing method, device, electronic equipment and computer-readable recording medium
CN107911625A Light metering method, device, readable storage medium and computer equipment
CN107277356A Face region processing method and apparatus for backlit scenes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20180413