CN109658331B - Image processing method, device, system and computer storage medium - Google Patents


Publication number
CN109658331B
Authority
CN
China
Prior art keywords
iris
eye
image data
region
pixel point
Legal status
Active
Application number
CN201811540162.9A
Other languages
Chinese (zh)
Other versions
CN109658331A (en)
Inventor
刘思遥
Current Assignee
Beijing Kuangshi Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Application filed by Beijing Kuangshi Technology Co Ltd filed Critical Beijing Kuangshi Technology Co Ltd
Priority to CN201811540162.9A priority Critical patent/CN109658331B/en
Publication of CN109658331A publication Critical patent/CN109658331A/en
Application granted granted Critical
Publication of CN109658331B publication Critical patent/CN109658331B/en

Classifications

    • G06T3/04
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/193 Preprocessing; Feature extraction
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/18 Eye characteristics, e.g. of the iris
    • G06V40/197 Matching; Classification

Abstract

The invention provides an image processing method, an image processing device, an image processing system and a computer storage medium, relating to the technical field of image processing. The method comprises the following steps: acquiring image data of an eye region of a target object; determining image data of an iris region contained in the image data of the eye region; and performing diffusion processing on the image data of the eye region to obtain the processed image data of the eye region, wherein the degree of positional deviation of the image data of the iris region after the diffusion processing is smaller than that of the image data of the eye region other than the iris region. The invention can better improve the eye modification effect and effectively enhance the user experience.

Description

Image processing method, device, system and computer storage medium
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method, apparatus, system, and computer storage medium.
Background
With the development of image processing technology, more and more users choose to retouch their faces after taking a photo in order to improve their appearance. Eye enlargement is a key means of face retouching, and many beauty APPs provide an eye enlargement function.
Most existing eye enlargement approaches uniformly diffuse the pixels of the eye radially outward, so the iris is visibly enlarged along with the rest of the eye (since the human eye is not circular). However, the iris areas of different people do not actually differ much; the apparent difference in iris area between large and small eyes is generally caused by a larger eyeball or a wider eyelid opening. After existing eye enlargement processing, the visibly enlarged iris therefore looks jarring and unnatural, so the eye modification effect is poor and the user experience is low.
Disclosure of Invention
Accordingly, the present invention is directed to an image processing method, apparatus, system and computer storage medium, which can better improve the eye modification effect and effectively enhance the user experience.
In order to achieve the above object, the technical solutions adopted by the embodiments of the present invention are as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, including: acquiring image data of an eye region of a target object; determining image data of an iris region contained in the image data of the eye region; and performing diffusion processing on the image data of the eye region to obtain the processed image data of the eye region; wherein the degree of positional deviation of the image data of the iris region after the diffusion processing is smaller than that of the image data of the eye region other than the iris region.
Further, the step of acquiring image data of an eye region of the target object includes: acquiring a target image to be processed; performing face detection on the target image to determine a face region of a target object in the target image; determining eye feature points in the face region; image data of an eye region of the target object is determined based on the eye feature points.
Further, the step of determining the image data of the iris region included in the image data of the eye region includes: extracting iris characteristic points from the eye characteristic points; image data of the iris region is determined based on the iris feature points.
Further, the step of performing diffusion processing on the image data of the eye region to obtain the processed image data of the eye region includes: determining the basic offset of each pixel point according to the current position of each pixel point of the eye region and the positions of the iris center feature point, the first iris feature point and the second iris feature point, wherein the first iris feature point is the pixel point with the smallest abscissa in the iris region, the second iris feature point is the pixel point with the largest abscissa in the iris region, and both are located on the orbit; calculating the offset protection coefficient of each pixel point according to the current position of each pixel point and the positions of key feature points, wherein the key feature points include the first iris feature point, the second iris feature point, the iris center feature point, the outer canthus feature point and the inner canthus feature point; and determining the position of each pixel point after the diffusion processing based on the current position, the basic offset and the offset protection coefficient of each pixel point.
Further, the step of determining the basic offset of each pixel point according to the current position of each pixel point of the eye region and the positions of the iris center feature point, the first iris feature point and the second iris feature point includes: acquiring the abscissa x0 and the ordinate y0 of the pixel point of the eye region; acquiring the abscissa x_eye-center of the iris center feature point, the abscissa x_1t and the ordinate y_1t of the first iris feature point, and the abscissa x_2t of the second iris feature point; determining the longitudinal basic offset of the pixel point as Δy = y0 − y_1t; if x0 < x_eye-center, determining the lateral basic offset of the pixel point as Δx = x0 − x_1t; if x0 ≥ x_eye-center, determining the lateral basic offset of the pixel point as Δx = x0 − x_2t.
Further, the step of calculating the offset protection coefficient of each pixel point according to the current position of each pixel point and the position of the key feature point includes: calculating an offset protection coefficient of each pixel point according to the following formula:
wherein protection_x1 is the lateral first offset protection coefficient, protection_x2 is the lateral second offset protection coefficient, protection_x is the lateral third offset protection coefficient, and protection_y is the longitudinal offset protection coefficient; x0 and y0 are the abscissa and ordinate of the pixel point of the eye region; x_1c is the abscissa of the outer canthus feature point; x_2c is the abscissa of the inner canthus feature point; x_eye-center and y_eye-center are the abscissa and ordinate of the iris center feature point; x_1t and y_1t are the abscissa and ordinate of the first iris feature point; T is a preset constant; exp() denotes the exponential function with base e; and max() denotes the maximum-value selection function.
Further, the step of determining the position of each pixel point after the diffusion processing based on the current position, the basic offset and the offset protection coefficient of each pixel point includes: if x0 < x_eye-center, calculating the abscissa x0' and the ordinate y0' of the pixel point after the diffusion processing as:
x0' = x0 + Δx * protection_x1 * protection_y; y0' = y0 + Δy * protection_x;
if x0 ≥ x_eye-center, calculating the abscissa x0' and the ordinate y0' of the pixel point after the diffusion processing as:
x0' = x0 + Δx * protection_x2 * protection_y; y0' = y0 + Δy * protection_x.
Further, before determining the eye feature points in the face region, the method further includes: judging whether the central axis of the face region is parallel to a preset reference longitudinal axis; if not, determining the deflection angle between the central axis and the reference longitudinal axis; and deflecting the image data of the face region according to the deflection angle so that the central axis of the deflected face region is parallel to the reference longitudinal axis.
Further, the method further comprises: and carrying out deflection processing on the processed image data of the eye region according to the deflection angle.
In a second aspect, an embodiment of the present invention further provides an image processing apparatus, including: the eye image acquisition module is used for acquiring image data of an eye region of a target object; an iris image determining module for determining image data of an iris region included in the image data of the eye region; the diffusion processing module is used for performing diffusion processing on the image data of the eye region to obtain the processed image data of the eye region; wherein the degree of positional deviation of the image data of the iris region after the diffusion processing is smaller than the degree of positional deviation of the image data of the eye region other than the iris region.
In a third aspect, an embodiment of the present invention provides an image processing system, including: the device comprises an image acquisition device, a processor and a storage device; the image acquisition device is used for acquiring image data; the storage means has stored thereon a computer program which, when executed by the processor, performs the method according to any of the first aspects.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of any of the first aspects described above.
The embodiments of the present invention provide an image processing method, apparatus, system and computer storage medium, which can determine the image data of the iris region contained in the acquired image data of the eye region and perform diffusion processing on it, wherein the degree of positional deviation of the image data of the iris region after the diffusion processing is smaller than that of the image data of the eye region other than the iris region. The iris region is thus protected when the eye region is enlarged: by making the positional deviation of the iris region smaller than that of the rest of the eye region, the abnormal iris enlargement caused by the iris region being enlarged synchronously with the whole eye region is effectively avoided. The embodiments of the present invention thereby better improve the eye modification effect and effectively enhance the user experience.
Additional features and advantages of embodiments of the invention will be set forth in the description which follows, or in part will be obvious from the description, or may be learned by practice of the embodiments of the invention.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
Fig. 1 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 2 is a flowchart of an image processing method according to an embodiment of the present invention;
fig. 3 shows a schematic view of an eye provided by an embodiment of the present invention;
fig. 4 shows a schematic view of a face image according to an embodiment of the present invention;
fig. 5 shows a schematic view of a face image after being deflected according to an embodiment of the present invention;
FIG. 6 illustrates a deflection calculation schematic provided by an embodiment of the present invention;
fig. 7 is a block diagram showing the structure of an image processing apparatus according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
The inventors found during research that most existing eye enlargement approaches do not consider the influence of the iris: the whole eye region is enlarged, so the iris region is enlarged correspondingly along with it. However, the iris areas of eyes of different sizes usually differ little, so an obviously enlarged iris is difficult to make look realistic, and the eye modification effect is poor. To address this problem, embodiments of the present invention provide an image processing method, apparatus, system and computer storage medium; the technique can be applied to any scene where an eye image needs to be modified, such as various beauty APPs and beauty tools. Embodiments of the present invention are described in detail below.
Embodiment one:
first, an example electronic apparatus 100 for implementing an image processing method, apparatus, system, and computer storage medium of an embodiment of the present invention is described with reference to fig. 1.
As shown in fig. 1, an electronic device 100 includes one or more processors 102, one or more storage devices 104, an input device 106, an output device 108, and an image capture device 110, which are interconnected by a bus system 112 and/or other forms of connection mechanisms (not shown). It should be noted that the components and structures of the electronic device 100 shown in fig. 1 are exemplary only and not limiting, as the electronic device may have other components and structures as desired.
The processor 102 may be implemented in at least one hardware form of a digital signal processor (DSP), a field-programmable gate array (FPGA) or a programmable logic array (PLA); the processor 102 may be a central processing unit (CPU) or another form of processing unit with data processing and/or instruction execution capabilities, or a combination of several of these, and may control other components in the electronic device 100 to perform desired functions.
The storage 104 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random Access Memory (RAM) and/or cache memory (cache), and the like. The non-volatile memory may include, for example, read Only Memory (ROM), hard disk, flash memory, and the like. One or more computer program instructions may be stored on the computer readable storage medium that can be executed by the processor 102 to implement client functions and/or other desired functions in embodiments of the present invention as described below. Various applications and various data, such as various data used and/or generated by the applications, may also be stored in the computer readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, mouse, microphone, touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The image capture device 110 may capture images (e.g., photographs, videos, etc.) desired by the user and store the captured images in the storage device 104 for use by other components.
For example, example electronic devices for implementing the image processing methods, apparatus, systems, and computer storage media according to embodiments of the invention may be implemented as devices having data processing capabilities, such as smartphones, tablets, computers, and the like.
Embodiment two:
referring to a flowchart of an image processing method shown in fig. 2, the method may be executed by the electronic device, and when implemented, may be executed by a processor of the electronic device, and may specifically include the following steps:
step S202, acquiring image data of an eye region of a target object. The image data may be 2D image data or 3D image data. In particular, the image data may be characterized by pixel coordinates.
In one embodiment, a target image to be processed may first be acquired; the target image may be an image captured by a camera of the electronic device, or image data uploaded manually. In the case of 3D image data, 3D modeling may also be employed. Face detection is then performed on the target image to determine the face region of the target object in the target image; in specific implementation, this can be realized with a known face detection algorithm, and details are not repeated here. If a face is detected, eye feature points may be determined in the detected face region; there may be a plurality of eye feature points, such as the inner canthus, the outer canthus, points on the orbit, and the pupil center. Finally, the image data of the eye region of the target object is determined based on the eye feature points; for example, the region enclosed by the plurality of eye feature points may be regarded as the eye region of the target object.
Step S204, determining the image data of the iris region included in the image data of the eye region. For example, iris feature points may be extracted from the eye feature points, and the image data of the iris region determined based on them. There may be a plurality of iris feature points, and the region enclosed by them may be regarded as the iris region. The iris is generally circular. In one embodiment, points on the outer arc of the iris may be selected as iris feature points. In another embodiment, to make the iris feature points easier to mark, the intersection points with the orbit of the square into which the circular iris exactly fits (i.e., the side length of the square equals the diameter of the iris, and their centers coincide) may be determined as iris feature points; in addition, the center point of the iris (the pupil) may be determined as an iris feature point. The iris feature points may be arranged in various ways to represent the iris region, which is not limited herein.
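As a rough illustration of the square-based marking described above, the following Python sketch derives the leftmost, rightmost and center iris feature points from an estimated iris circle. Treating the iris as a perfect circle, as well as the function name and signature, are simplifying assumptions and not the patent's exact procedure.

```python
def iris_feature_points(cx: float, cy: float, r: float):
    """Given an estimated iris center (cx, cy) and radius r, return the
    (first, second, center) feature points, where 'first' has the smallest
    abscissa in the iris region and 'second' the largest."""
    first = (cx - r, cy)    # leftmost point of the iris circle
    second = (cx + r, cy)   # rightmost point of the iris circle
    center = (cx, cy)       # iris center (pupil) feature point
    return first, second, center
```

These three points correspond to A, B and E in the eye schematic used later in this embodiment.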
Step S206, performing diffusion processing on the image data of the eye region to obtain the processed image data of the eye region, wherein the degree of positional deviation of the image data of the iris region after the diffusion processing is smaller than that of the image data of the eye region other than the iris region. That is, the image data of the eye region is diffused by shifting the pixel points of the eye region, with the pixel points in the iris region diffusing outward to a smaller extent than the pixel points of the eye region outside the iris. This processing may also be called anisotropic diffusion enlargement processing (i.e., diffusion to different extents), which avoids the eye distortion that excessive diffusion of the iris pixels might cause.
According to the eye enlargement method provided by the embodiment of the present invention, the image data of the iris region contained in the acquired image data of the eye region can be determined and subjected to diffusion processing, wherein the degree of positional deviation of the image data of the iris region after the diffusion processing is smaller than that of the image data of the eye region other than the iris region. The iris region is thus protected when the eye region is enlarged: by making the positional deviation of the iris region smaller than that of the rest of the eye region, abnormal iris enlargement caused by the iris region being enlarged synchronously with the whole eye region is effectively avoided. This better improves the eye modification effect and effectively enhances the user experience.
This embodiment provides a specific implementation of performing diffusion processing on the image data of the eye region to obtain the processed image data of the eye region, taking the left eye as an example and referring to the eye schematic diagram shown in FIG. 3, which marks the first iris feature point A, the second iris feature point B, the iris center feature point E, the outer canthus feature point C and the inner canthus feature point D. The implementation specifically includes the following steps:
step one: determining basic offset of each pixel point according to the current position of each pixel point of the eye region, the iris center feature point, the first iris feature point and the second iris feature point; the first iris characteristic point is a pixel point with the smallest abscissa in the iris area, and the second iris characteristic point is a pixel point with the largest abscissa in the iris area; and the first iris feature point and the second iris feature point are both located on the orbit. In practice, the base offset may include a lateral base offset and a longitudinal base offset.
In specific implementation, the following (1) to (5) may be used to determine the basic offset of each pixel point:
(1) acquiring the abscissa x0 and the ordinate y0 of the pixel point of the eye region;
(2) acquiring the abscissa x_eye-center of the iris center feature point, the abscissa x_1t and the ordinate y_1t of the first iris feature point, and the abscissa x_2t of the second iris feature point;
(3) determining the longitudinal basic offset of the pixel point as Δy = y0 − y_1t;
(4) if x0 < x_eye-center, determining the lateral basic offset of the pixel point as Δx = x0 − x_1t;
(5) if x0 ≥ x_eye-center, determining the lateral basic offset of the pixel point as Δx = x0 − x_2t.
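The five steps above can be sketched in Python as follows; the function name and argument order are illustrative assumptions, not part of the patent.

```python
def basic_offset(x0, y0, x_eye_center, x1t, y1t, x2t):
    """Compute the (lateral, longitudinal) basic offset of one eye-region pixel.

    x1t, y1t: coordinates of the first iris feature point (smallest abscissa
    in the iris region); x2t: abscissa of the second iris feature point
    (largest abscissa); x_eye_center: abscissa of the iris center feature point."""
    dy = y0 - y1t                  # step (3): longitudinal basic offset
    if x0 < x_eye_center:          # step (4): outer-canthus side of a left eye
        dx = x0 - x1t
    else:                          # step (5): inner-canthus side
        dx = x0 - x2t
    return dx, dy
```

The sign of dx differs on the two sides of the iris center, which is what lets the subsequent diffusion push pixels outward in both lateral directions.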
Step two: calculating the offset protection coefficient of each pixel point according to the current position of each pixel point and the positions of the key feature points; the key feature points include the first iris feature point, the second iris feature point, the iris center feature point, the outer canthus feature point and the inner canthus feature point.
In this embodiment, different degrees of enlargement may be set according to the positions of the pixel points. For the left eye, for example, the enlargement toward the outer canthus on the left of the iris is preferably greater than the enlargement toward the inner canthus on the right, which makes the enlarged eye look more vivid. On this basis, protection coefficients can be set with the boundary of the iris region as the dividing line, so as to control the degree of enlargement of the pixels in the inner canthus region and the outer canthus region separately. In specific implementation, the offset protection coefficient of each pixel point can be calculated according to the following formula:
wherein protection_x1 is the lateral first offset protection coefficient, protection_x2 is the lateral second offset protection coefficient, protection_x is the lateral third offset protection coefficient, and protection_y is the longitudinal offset protection coefficient; x0 and y0 are the abscissa and ordinate of the pixel point of the eye region; x_1c is the abscissa of the outer canthus feature point; x_2c is the abscissa of the inner canthus feature point; x_eye-center and y_eye-center are the abscissa and ordinate of the iris center feature point; x_1t and y_1t are the abscissa and ordinate of the first iris feature point; T is a preset constant; exp() denotes the exponential function with base e; and max() denotes the maximum-value selection function. In practical applications, the value of T may be set manually to adjust the degree of pixel offset (i.e., how much the eye can be enlarged).
It can be understood that pixel points at different positions of the eye may have different offset protection coefficients. For the left eye, for example, if the abscissa of a pixel point satisfies x0 < x_eye-center, the pixel point lies toward the left outer canthus of the iris and can be regarded as belonging to the outer canthus region, so the lateral first offset protection coefficient protection_x1 is selected when calculating its lateral offset position after the diffusion processing; if x0 ≥ x_eye-center, the pixel point lies toward the right inner canthus of the iris and can be regarded as belonging to the inner canthus region, so the lateral second offset protection coefficient protection_x2 is selected.
Step three: and determining the positions of the pixels after diffusion processing based on the current positions of the pixels, the basic offset and the offset protection coefficient.
This embodiment provides one implementation of calculating the abscissa x0' and the ordinate y0' of each pixel point after the diffusion processing:
if x0 < x_eye-center, the coordinates after the diffusion processing are calculated as:
x0' = x0 + Δx * protection_x1 * protection_y; y0' = y0 + Δy * protection_x;
if x0 ≥ x_eye-center, the coordinates after the diffusion processing are calculated as:
x0' = x0 + Δx * protection_x2 * protection_y;
y0' = y0 + Δy * protection_x.
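As a minimal sketch, the per-pixel update above can be written as a small Python function. The protection coefficients are taken here as precomputed inputs, because their defining formula appears only as an image in the original document and is not reproduced; the function and parameter names are illustrative, not from the patent.

```python
def diffused_position(x0, y0, dx, dy, x_eye_center,
                      prot_x1, prot_x2, prot_x, prot_y):
    """Shift one eye-region pixel per the branch above: pixels on the
    outer-canthus side (x0 < x_eye_center) use prot_x1, pixels on the
    inner-canthus side use prot_x2; the vertical shift always uses prot_x."""
    lateral = prot_x1 if x0 < x_eye_center else prot_x2
    x_new = x0 + dx * lateral * prot_y
    y_new = y0 + dy * prot_x
    return x_new, y_new
```

With all protection coefficients equal to 1 this reduces to a plain uniform diffusion; coefficients below 1 inside the iris region realize the iris "protection" the method describes.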
Considering that the acquired 2D image data is sometimes not an upright (forward-facing) image, as shown in the face image schematic of FIG. 4, the central axis of the face forms an angle θ with the preset reference longitudinal axis. To enlarge the eye region more conveniently under the preset reference coordinate axes, before the eye feature points are determined in the face region, the eye enlargement method provided in this embodiment may further include:
(1) And judging whether the central axis of the face area is parallel to a preset reference longitudinal axis or not. The central axis may be determined according to the feature points on the face region, and in one manner, at least one pair of symmetrical points (i.e., two symmetrical key points) on the face region may be found, where the pair of symmetrical points may be left and right pupil feature points, left and right mouth corner feature points, symmetrical points on two sides of the nose bridge, and the like, and the central axis is then determined as a perpendicular bisector between the symmetrical points.
(2) If not, a yaw angle between the central axis and the reference longitudinal axis is determined. The deflection angle is the angle between the central axis and the reference axis.
(3) And carrying out deflection processing on the image data of the face area according to the deflection angle so that the central axis of the face area after the deflection processing is parallel to the reference longitudinal axis. The face image in fig. 4 is rotated clockwise by an angle θ to obtain a face image schematic diagram after deflection processing as shown in fig. 5.
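Under the assumption that the symmetric point pair is the two pupil feature points and that ordinary image coordinates are used, the deflection angle of step (2) can be estimated as below. The helper name and signature are illustrative, not from the patent; the central axis is the perpendicular bisector of the pupil pair, so its angle to the vertical reference axis equals the angle of the inter-pupil line to the horizontal.

```python
import math

def deflection_angle(p_left, p_right):
    """Angle (radians) between the face's central axis and the vertical
    reference axis, computed from a symmetric key-point pair such as the
    left and right pupil feature points; 0 for an upright face."""
    dx = p_right[0] - p_left[0]
    dy = p_right[1] - p_left[1]
    return math.atan2(dy, dx)
```

Any of the symmetric pairs mentioned above (pupils, mouth corners, points on both sides of the nose bridge) could be substituted for the pupil pair here.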
When the image data of the face region is subjected to deflection processing according to the deflection angle, the offset position of each pixel point of the face region can be calculated. For example, with reference to the deflection calculation schematic diagram shown in fig. 6, the new position P′(s, t) to which a pixel point P(x, y) is rotated through the deflection angle θ can be calculated by the following deflection formulas:
s = x·cos(θ) − y·sin(θ)
t = x·sin(θ) + y·cos(θ)
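A minimal sketch of these deflection formulas, together with the inverse rotation used afterwards to restore the image data to its original direction (function names are illustrative):

```python
import math

def rotate_point(x, y, theta):
    """Rotate P(x, y) about the origin by the deflection angle theta,
    following the deflection formulas:
        s = x*cos(theta) - y*sin(theta)
        t = x*sin(theta) + y*cos(theta)"""
    s = x * math.cos(theta) - y * math.sin(theta)
    t = x * math.sin(theta) + y * math.cos(theta)
    return s, t

def unrotate_point(s, t, theta):
    """Inverse rotation: restore a rotated point to its original
    direction by rotating it back through -theta."""
    return rotate_point(s, t, -theta)
```

Composing the two functions returns a pixel to its starting position, which is exactly the restore step applied after the diffusion processing.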
If the image data has been subjected to deflection processing in advance, the above method provided by this embodiment further includes, after the diffusion processing of the image data: performing deflection processing on the processed image data of the eye region according to the deflection angle, i.e., restoring the whole image data to its original direction, such as by rotating the resulting coordinates (x0′, y0′) back through the angle θ. For example, if the entire face image was rotated clockwise by the angle θ, the image data obtained at this point is rotated counterclockwise by the angle θ.
In summary, the image processing method provided in this embodiment can effectively avoid the phenomenon in which the iris region is enlarged synchronously with the enlargement of the whole eye region, causing abnormal enlargement of the iris. The embodiment of the invention can thus improve the eye modification effect and effectively enhance the user experience.
Embodiment four:
for the image processing method provided in the second embodiment, an embodiment of the present invention provides an image processing apparatus, referring to a block diagram of an image processing apparatus shown in fig. 7, the apparatus includes the following modules:
an eye image acquisition module 702, configured to acquire image data of an eye region of a target object;
an iris image determination module 704 for determining image data of an iris region included in the image data of an eye region;
the diffusion processing module 706 is configured to perform diffusion processing on the image data of the eye area, so as to obtain processed image data of the eye area; wherein the degree of positional deviation of the image data of the iris region after the diffusion process is smaller than the degree of positional deviation of the image data of the eye region other than the iris region.
The image processing device provided by the embodiment of the invention can protect the iris region when the eye region is enlarged: by making the positional deviation degree of the iris region smaller than that of the other eye regions, it effectively avoids the phenomenon in which the iris region is enlarged synchronously with the whole eye region, causing abnormal enlargement of the iris. The embodiment of the invention can thus improve the eye modification effect and effectively enhance the user experience.
In one embodiment, the ocular image acquisition module 702 is configured to: acquiring a target image to be processed; performing face detection on the target image to determine a face region of a target object in the target image; determining eye feature points in a face region; image data of an eye region of the target object is determined based on the eye feature points.
In one embodiment, iris image determination module 704 is configured to: extracting iris characteristic points from the eye characteristic points; image data of an iris region is determined based on iris feature points.
In one embodiment, the diffusion processing module 706 is configured to: determining basic offset of each pixel point according to the current position of each pixel point of the eye region, the iris center feature point, the first iris feature point and the second iris feature point; the first iris characteristic point is a pixel point with the smallest abscissa in the iris area, and the second iris characteristic point is a pixel point with the largest abscissa in the iris area; the first iris characteristic point and the second iris characteristic point are both positioned on the eye orbit; calculating an offset protection coefficient of each pixel point according to the current position of each pixel point, the basic offset of each pixel point and the position of the key feature point; the key feature points comprise a plurality of first iris feature points, second iris feature points, iris center feature points, outer corner feature points and inner corner feature points; and determining the positions of the pixels after diffusion processing based on the current positions of the pixels, the basic offset and the offset protection coefficient.
In one embodiment, the diffusion processing module 706 is specifically configured to: acquire the abscissa x0 and the ordinate y0 of each pixel point of the eye region; acquire the abscissa x_eye-center of the iris center feature point, the abscissa x1t and the ordinate y1t of the first iris feature point, and the abscissa x2t of the second iris feature point; determine the longitudinal base offset of the pixel point as Δy = y0 − y1t; if x0 < x_eye-center, determine the lateral base offset of the pixel point as Δx = x0 − x1t; if x0 ≥ x_eye-center, determine the lateral base offset of the pixel point as Δx = x0 − x2t.
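The base-offset rule above can be sketched as follows; it is a plain translation of the piecewise definitions of Δx and Δy, with illustrative names.

```python
def base_offset(x0, y0, x_eye_center, x1t, y1t, x2t):
    """Base offset (dx, dy) of one eye-region pixel.
    x1t, y1t: first iris feature point (smallest abscissa in the iris region);
    x2t:      second iris feature point (largest abscissa in the iris region)."""
    # Longitudinal base offset is measured from the first iris feature point.
    dy = y0 - y1t
    # Lateral base offset depends on which side of the iris center the pixel is.
    dx = (x0 - x1t) if x0 < x_eye_center else (x0 - x2t)
    return dx, dy
```

The base offset is then scaled by the protection coefficients, so that pixels near the iris move far less than pixels near the eye corners.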
In one embodiment, the diffusion processing module 706 is specifically configured to: calculating the offset protection coefficient of each pixel point according to the following formula:
wherein protection_x1 is the lateral first offset protection coefficient, protection_x2 is the lateral second offset protection coefficient, protection_x is the lateral third offset protection coefficient, and protection_y is the longitudinal offset protection coefficient; x0 is the abscissa and y0 the ordinate of a pixel point of the eye region; x1c is the abscissa of the outer canthus feature point; x2c is the abscissa of the inner canthus feature point; x_eye-center is the abscissa of the iris center feature point; y_eye-center is the ordinate of the iris center feature point; x1t is the abscissa of the first iris feature point; y1t is the ordinate of the first iris feature point; t is a preset constant; exp() denotes the exponential function with base e; max() denotes the maximum value selection function.
In one embodiment, the diffusion processing module 706 is specifically configured to: if x0 < x_eye-center, calculate the abscissa x0′ and the ordinate y0′ of each pixel point after diffusion processing according to the following formulas:
x0′ = x0 + Δx * protection_x1 * protection_y;
y0′ = y0 + Δy * protection_x;
if x0 ≥ x_eye-center, calculate the abscissa x0′ and the ordinate y0′ of each pixel point after diffusion processing according to the following formulas:
x0′ = x0 + Δx * protection_x2 * protection_y;
y0′ = y0 + Δy * protection_x.
In one embodiment, the apparatus further comprises:
the judging module is used for judging whether the central axis of the face area is parallel to a preset reference longitudinal axis or not;
the deflection angle determining module is used for determining the deflection angle between the central axis and the reference longitudinal axis if the judging result of the judging module is negative;
the first deflection module is used for carrying out deflection processing on the image data of the face area according to the deflection angle so that the central axis of the face area after the deflection processing is parallel to the reference longitudinal axis.
In another embodiment, the apparatus further comprises: and the second deflection module is used for carrying out deflection processing on the processed image data of the eye region according to the deflection angle.
The device provided in this embodiment has the same implementation principle and technical effects as the foregoing embodiment; for brevity, for any part of the device embodiment not described here, reference may be made to the corresponding content in the foregoing method embodiment.
Fifth embodiment:
an embodiment of the present invention provides an image processing system, including: the device comprises an image acquisition device, a processor and a storage device; the image acquisition device is used for acquiring image data; the storage means has stored thereon a computer program which, when run by a processor, performs a method as provided by the foregoing method embodiments.
It will be clear to those skilled in the art that, for convenience and brevity of description, reference may be made to the corresponding process in the foregoing method embodiment for the specific working process of the above-described system, which is not described herein again.
Further, the present embodiment also provides a computer readable storage medium, on which a computer program is stored, which when being executed by a processor, performs the steps of the method provided by the foregoing method embodiments.
The computer program product of the image processing method, apparatus, system and computer storage medium provided in the embodiments of the present invention includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the method described in the foregoing method embodiment, and for the specific implementation reference may be made to the method embodiment, which is not repeated herein.
In addition, in the description of embodiments of the present invention, unless explicitly stated and limited otherwise, the terms "mounted," "connected," and "connected" are to be construed broadly, and may be, for example, fixedly connected, detachably connected, or integrally connected; can be mechanically or electrically connected; can be directly connected or indirectly connected through an intermediate medium, and can be communication between two elements. The specific meaning of the above terms in the present invention will be understood in specific cases by those of ordinary skill in the art.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art or in a part of the technical solution, in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a random access Memory (RAM, random Access Memory), a magnetic disk, or an optical disk, or other various media capable of storing program codes.
In the description of the present invention, it should be noted that the directions or positional relationships indicated by the terms "center", "upper", "lower", "left", "right", "vertical", "horizontal", "inner", "outer", etc. are based on the directions or positional relationships shown in the drawings, are merely for convenience of describing the present invention and simplifying the description, and do not indicate or imply that the devices or elements referred to must have a specific orientation, be configured and operated in a specific orientation, and thus should not be construed as limiting the present invention. Furthermore, the terms "first," "second," and "third" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance.
Finally, it should be noted that the above examples are only specific embodiments of the present invention, intended to illustrate rather than limit its technical solutions, and the protection scope of the present invention is not limited thereto. Although the present invention has been described in detail with reference to the foregoing examples, any person skilled in the art may, within the technical scope disclosed herein, modify or readily conceive of changes to the technical solutions described in the foregoing embodiments, or make equivalent substitutions of some of the technical features; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention and are intended to be covered by the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (11)

1. An image processing method, the method comprising:
acquiring image data of an eye region of a target object;
determining image data of an iris region contained in the image data of the eye region;
performing diffusion treatment on the image data of the eye region to obtain the image data of the eye region after treatment; wherein the degree of positional deviation of the image data of the iris region after the diffusion processing is smaller than the degree of positional deviation of the image data of the eye region other than the iris region;
the step of performing diffusion processing on the image data of the eye region to obtain the processed image data of the eye region comprises the following steps:
determining basic offset of each pixel point according to the current position of each pixel point of the eye region, the iris center characteristic point, the first iris characteristic point and the second iris characteristic point; the first iris characteristic point is a pixel point with the smallest abscissa in the iris region, and the second iris characteristic point is a pixel point with the largest abscissa in the iris region; the first iris characteristic points and the second iris characteristic points are positioned on the eye sockets;
calculating the offset protection coefficient of each pixel point according to the current position of each pixel point and the position of the key feature point; wherein the key feature points comprise a plurality of the first iris feature points, the second iris feature points, the iris center feature points, the outer canthus feature points and the inner canthus feature points;
and determining the position of each pixel point after diffusion processing based on the current position of each pixel point, the basic offset and the offset protection coefficient.
2. The method of claim 1, wherein the step of acquiring image data of the eye region of the target object comprises:
acquiring a target image to be processed;
performing face detection on the target image to determine a face region of a target object in the target image;
determining eye feature points in the face region;
image data of an eye region of the target object is determined based on the eye feature points.
3. The method according to claim 2, wherein the step of determining the image data of the iris region contained in the image data of the eye region includes:
extracting iris characteristic points from the eye characteristic points;
image data of the iris region is determined based on the iris feature points.
4. The method of claim 1, wherein the step of determining the base offset for each pixel point of the eye region based on the current location of the pixel point, the iris center feature point, the first iris feature point, and the second iris feature point comprises:
acquiring the abscissa x0 and the ordinate y0 of the pixel point of the eye region;
acquiring the abscissa x_eye-center of the iris center feature point, the abscissa x1t and the ordinate y1t of the first iris feature point, and the abscissa x2t of the second iris feature point;
determining a longitudinal base offset Δy = y0 − y1t of the pixel point;
if x0 < x_eye-center, determining a lateral base offset Δx = x0 − x1t of the pixel point;
if x0 ≥ x_eye-center, determining a lateral base offset Δx = x0 − x2t of the pixel point.
5. The method of claim 1, wherein the step of calculating the offset protection factor for each of the pixels based on the current location of each of the pixels and the location of the key feature point comprises:
calculating an offset protection coefficient of each pixel point according to the following formula:
wherein protection_x1 is the lateral first offset protection coefficient, protection_x2 is the lateral second offset protection coefficient, protection_x is the lateral third offset protection coefficient, and protection_y is the longitudinal offset protection coefficient; x0 is the abscissa and y0 the ordinate of a pixel point of the eye region; x1c is the abscissa of the outer canthus feature point; x2c is the abscissa of the inner canthus feature point; x_eye-center is the abscissa of the iris center feature point; y_eye-center is the ordinate of the iris center feature point; x1t is the abscissa of the first iris feature point; y1t is the ordinate of the first iris feature point; t is a preset constant; exp() denotes the exponential function with base e; max() denotes the maximum value selection function.
6. The method of claim 5, wherein the step of determining the diffused positions of the pixels based on the current positions of the pixels, the base offset, and the offset protection factor comprises:
if x0 < x_eye-center, calculating the abscissa x0′ and the ordinate y0′ of each pixel point after diffusion processing according to the following formulas:
x0′ = x0 + Δx * protection_x1 * protection_y;
y0′ = y0 + Δy * protection_x;
if x0 ≥ x_eye-center, calculating the abscissa x0′ and the ordinate y0′ of each pixel point after diffusion processing according to the following formulas:
x0′ = x0 + Δx * protection_x2 * protection_y;
y0′ = y0 + Δy * protection_x.
7. The method of claim 2, wherein prior to determining the eye feature points in the face region, the method further comprises:
judging whether the central axis of the face area is parallel to a preset reference longitudinal axis or not;
if not, determining a deflection angle between the central axis and the reference longitudinal axis;
and carrying out deflection processing on the image data of the face area according to the deflection angle so that the central axis of the face area after the deflection processing is parallel to the reference longitudinal axis.
8. The method of claim 7, wherein the method further comprises:
and carrying out deflection processing on the processed image data of the eye region according to the deflection angle.
9. An image processing apparatus, characterized in that the apparatus comprises:
the eye image acquisition module is used for acquiring image data of an eye region of a target object;
an iris image determining module for determining image data of an iris region included in the image data of the eye region;
the diffusion processing module is used for performing diffusion processing on the image data of the eye region to obtain the processed image data of the eye region; wherein the degree of positional deviation of the image data of the iris region after the diffusion processing is smaller than the degree of positional deviation of the image data of the eye region other than the iris region.
10. An image processing system, the system comprising: the device comprises an image acquisition device, a processor and a storage device;
the image acquisition device is used for acquiring image data;
the storage means has stored thereon a computer program which, when executed by the processor, performs the method of any of claims 1 to 8.
11. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, performs the steps of the method of any of the preceding claims 1 to 8.
CN201811540162.9A 2018-12-14 2018-12-14 Image processing method, device, system and computer storage medium Active CN109658331B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811540162.9A CN109658331B (en) 2018-12-14 2018-12-14 Image processing method, device, system and computer storage medium

Publications (2)

Publication Number Publication Date
CN109658331A CN109658331A (en) 2019-04-19
CN109658331B true CN109658331B (en) 2023-07-21

Family

ID=66114345

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811540162.9A Active CN109658331B (en) 2018-12-14 2018-12-14 Image processing method, device, system and computer storage medium

Country Status (1)

Country Link
CN (1) CN109658331B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111862287A (en) * 2020-07-20 2020-10-30 广州市百果园信息技术有限公司 Eye texture image generation method, texture mapping method, device and electronic equipment

Citations (3)

Publication number Priority date Publication date Assignee Title
CN105205779A (en) * 2015-09-15 2015-12-30 厦门美图之家科技有限公司 Eye image processing method and system based on image morphing and shooting terminal
CN107833177A (en) * 2017-10-31 2018-03-23 维沃移动通信有限公司 A kind of image processing method and mobile terminal
CN108288248A (en) * 2018-01-02 2018-07-17 腾讯数码(天津)有限公司 A kind of eyes image fusion method and its equipment, storage medium, terminal

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US9053524B2 (en) * 2008-07-30 2015-06-09 Fotonation Limited Eye beautification under inaccurate localization


Non-Patent Citations (1)

Title
Image magnification by a gradient prior model coupled with improved complex diffusion; Hai Tao et al.; Computer Engineering and Design; 2017-09-16 (No. 09); full text *

Also Published As

Publication number Publication date
CN109658331A (en) 2019-04-19

Similar Documents

Publication Publication Date Title
CN108961303B (en) Image processing method and device, electronic equipment and computer readable medium
WO2022134337A1 (en) Face occlusion detection method and system, device, and storage medium
JP6629513B2 (en) Liveness inspection method and apparatus, and video processing method and apparatus
US11176355B2 (en) Facial image processing method and apparatus, electronic device and computer readable storage medium
CN107944420B (en) Illumination processing method and device for face image
JP5671533B2 (en) Perspective and parallax adjustment in stereoscopic image pairs
WO2021189807A1 (en) Image processing method, apparatus and system, and electronic device
US10956733B2 (en) Image processing apparatus and image processing method
CN105243371A (en) Human face beauty degree detection method and system and shooting terminal
WO2020019504A1 (en) Robot screen unlocking method, apparatus, smart device and storage medium
CN112330527A (en) Image processing method, image processing apparatus, electronic device, and medium
CN110503704B (en) Method and device for constructing three-dimensional graph and electronic equipment
US20190205635A1 (en) Process for capturing content from a document
WO2020164284A1 (en) Method and apparatus for recognising living body based on planar detection, terminal, and storage medium
CN109658331B (en) Image processing method, device, system and computer storage medium
CN109726613B (en) Method and device for detection
JP4659722B2 (en) Human body specific area extraction / determination device, human body specific area extraction / determination method, human body specific area extraction / determination program
WO2022095318A1 (en) Character detection method and apparatus, electronic device, storage medium, and program
US9584510B2 (en) Image capture challenge access
CN111476741B (en) Image denoising method, image denoising device, electronic equipment and computer readable medium
CN111091031A (en) Target object selection method and face unlocking method
CN111062279B (en) Photo processing method and photo processing device
EP3699865B1 (en) Three-dimensional face shape derivation device, three-dimensional face shape deriving method, and non-transitory computer readable medium
JP6677980B2 (en) Panorama video data processing device, processing method and processing program
KR101825321B1 (en) System and method for providing feedback of real-time optimal shooting composition using mobile camera recognition technology

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant