CN110827204B - Image processing method and device and electronic equipment - Google Patents


Info

Publication number
CN110827204B
CN110827204B (application CN201810923248.3A)
Authority
CN
China
Prior art keywords
face
pixel
original image
probability
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810923248.3A
Other languages
Chinese (zh)
Other versions
CN110827204A (en)
Inventor
宋子奇
吕江靖
李晓波
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alibaba Group Holding Ltd
Original Assignee
Alibaba Group Holding Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alibaba Group Holding Ltd filed Critical Alibaba Group Holding Ltd
Priority to CN201810923248.3A priority Critical patent/CN110827204B/en
Publication of CN110827204A publication Critical patent/CN110827204A/en
Application granted granted Critical
Publication of CN110827204B publication Critical patent/CN110827204B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • G06T5/94Dynamic range modification of images or parts thereof based on local image properties, e.g. for local contrast enhancement
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/64Computer-aided capture of images, e.g. transfer from script file into camera, check of taken image quality, advice or proposal for image composition or decision on when to take image
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An embodiment of the invention provides an image processing method, an image processing apparatus, and an electronic device. The method includes: acquiring an original image to be processed and determining a face region in the original image; determining an offset position corresponding to each pixel in the original image; and blurring the face region by updating the value of each pixel to the value at its corresponding offset position. The method, apparatus, and device combine face detection with image blurring: by replacing each pixel at its original position with the pixel at its offset position, the face is blurred quickly and effectively, the privacy of the photographed person is protected, and image and video security is improved.

Description

Image processing method and device and electronic equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, and an electronic device.
Background
With the continuous development of mobile terminals and Internet technologies, live video is used ever more widely. To protect the privacy of the photographed person, there are many live-video scenes in which a person's appearance must be hidden.

The traditional way to hide a person's appearance is to use props and a specific shooting angle so that the face is occluded during shooting, for example by placing occluding objects such as plant leaves in front of the face during a live broadcast.

The drawback of this approach is that the relative positions of the camera, the occluding object, and the photographed person must remain fixed; none of them can move freely. This greatly limits shooting flexibility and makes the approach hard to apply to live broadcasts with a moving camera position.
Disclosure of Invention
In view of this, embodiments of the present invention provide an image processing method, an image processing apparatus, and an electronic device, so as to protect the privacy of a photographed person more quickly and conveniently and to improve video security.
In a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring an original image to be processed, and determining a face region in the original image;

determining an offset position corresponding to each pixel in the original image; and

blurring the face region by updating the value of each pixel to the value at its corresponding offset position.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including:
an acquisition module, configured to acquire an original image to be processed and determine a face region in the original image;

a determining module, configured to determine the offset position corresponding to each pixel in the original image; and

a processing module, configured to blur the face region by updating the value of each pixel to the value at its corresponding offset position.
In a third aspect, an embodiment of the present invention provides an electronic device, including a processor and a memory, where the memory is used to store one or more computer instructions, and when the one or more computer instructions are executed by the processor, the electronic device implements the image processing method in the first aspect. The electronic device may also include a communication interface for communicating with other devices or a communication network.
An embodiment of the present invention provides a computer storage medium for storing a computer program, where the computer program is used to enable a computer to implement the image processing method in the first aspect when executed.
With the image processing method, apparatus, and electronic device provided by the embodiments of the invention, the face region in the original image to be processed is determined first; the offset position corresponding to each pixel is then calculated, and the value of each pixel in the face region is updated to the value at its corresponding offset position, yielding a blurred face image. Face detection is thus combined with image blurring: replacing each original pixel with the pixel at its offset position blurs the face quickly and effectively, protecting the privacy of the photographed person and improving image and video security. In addition, because the blurring is performed directly on the face region detected in the image, no props need to be set up and no specific camera position or angle needs to be maintained during shooting. This lowers shooting requirements, reduces shooting cost, and improves shooting convenience.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed for the description of the embodiments are briefly introduced below. The drawings described here show only some embodiments of the invention; those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a first embodiment of an image processing method according to the present invention;
fig. 2 is a schematic diagram of a face region according to an embodiment of the present invention;
fig. 3 is a schematic flowchart of a method for determining an offset position according to an embodiment of the present invention;
fig. 4 is a schematic flowchart of a second embodiment of an image processing method according to the present invention;
fig. 5 is a schematic flow chart of a method for calculating a face skin probability according to an embodiment of the present invention;
fig. 6 is a schematic flowchart of a skin probability calculation method according to an embodiment of the present invention;
fig. 7 is a schematic diagram of a face contour according to an embodiment of the present invention;
fig. 8 is a schematic flow chart of a face probability calculation method according to an embodiment of the present invention;
fig. 9 is a schematic flowchart of a third embodiment of an image processing method according to the present invention;
fig. 10 is a schematic flowchart of a fourth embodiment of an image processing method according to the present invention;
FIG. 11 is a schematic diagram of an interface display according to an embodiment of the present invention;
FIG. 12 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
fig. 13 is a schematic structural diagram of an electronic device corresponding to the image processing apparatus provided in the embodiment shown in fig. 12.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to limit the invention. As used in the embodiments of the invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise; "a plurality of" generally means at least two, but does not exclude the case of at least one.
It should be understood that the term "and/or" used herein merely describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates an "or" relationship between the objects before and after it.
Depending on the context, the word "if" as used herein may be interpreted as "when", "upon", "in response to determining", or "in response to detecting". Similarly, the phrases "if it is determined" or "if (a stated condition or event) is detected" may be interpreted as "when it is determined", "in response to determining", "when (a stated condition or event) is detected", or "in response to detecting (a stated condition or event)", depending on the context.
It should also be noted that the terms "comprises", "comprising", and any variation thereof are intended to cover a non-exclusive inclusion, so that an article or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to the article or system. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of additional identical elements in the article or system that comprises the element.
In addition, the sequence of steps in the embodiments of the methods described below is merely an example, and is not strictly limited.
The invention provides a new solution to the need for real-time face occlusion during image or video playback; possible application scenarios include identity protection in live interviews, face effects in various entertainment scenes, and the like.
Fig. 1 is a schematic flowchart of an image processing method according to a first embodiment of the present invention. The method in the embodiment of the present invention may be executed by a user device such as a mobile phone, a tablet computer, or a computer, or by a server or a server cluster. As shown in fig. 1, the image processing method in this embodiment may include:
step 101, obtaining an original image to be processed, and determining a face area in the original image.
Specifically, the original image may be a frame of a video or of an animated image, or a single still image. The face region in the original image is the region where a face is located; it may be a rectangular box around the face, or a circle or any other shape containing the face.
Fig. 2 is a schematic diagram of a face region according to an embodiment of the present invention. As shown in fig. 2, a box 1 where a face is located may be used as the face region.
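As a minimal illustrative sketch (not part of the patent), a rectangular face region like box 1 in fig. 2 can be represented by a corner and a size, together with a membership test for pixels; the class and field names below are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class FaceRegion:
    # top-left corner plus size, in pixel coordinates
    left: int
    top: int
    width: int
    height: int

    def contains(self, x: int, y: int) -> bool:
        """True if pixel (x, y) lies inside the box."""
        return (self.left <= x < self.left + self.width
                and self.top <= y < self.top + self.height)

box = FaceRegion(left=40, top=30, width=64, height=64)
```

A circular or free-form region would only change the `contains` test; the rest of the pipeline could stay the same.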
It will be understood by those skilled in the art that an image may be composed of a plurality of pixels, each having its corresponding pixel value and position information such as horizontal and vertical coordinates. According to the pixel value of each pixel point in the image, any image processing algorithm can be adopted to determine the face region in the image, which is not limited in this embodiment.
Optionally, the Viola-Jones algorithm may be used to determine the face region. The Viola-Jones algorithm takes the pixel values of the image as input, builds an integral image so that Haar-like features can be computed efficiently, classifies the features with AdaBoost, and locates the face region with a cascade of classifiers.
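The integral image at the heart of this detector can be sketched in a few lines of NumPy: the sum over any rectangle then takes four lookups, which is what makes Haar-like feature computation cheap. This is a sketch of the data structure only (classifier training and cascading are omitted), and the function names are illustrative:

```python
import numpy as np

def integral_image(img):
    # ii[y, x] = sum of img[:y, :x]; a leading row/column of zeros
    # makes the rectangle-sum formula uniform at the borders.
    return np.pad(img, ((1, 0), (1, 0))).cumsum(0).cumsum(1)

def rect_sum(ii, top, left, h, w):
    # sum of the h-by-w rectangle whose top-left corner is (top, left),
    # read off the integral image in four lookups
    return (ii[top + h, left + w] - ii[top, left + w]
            - ii[top + h, left] + ii[top, left])

img = np.ones((4, 6), dtype=np.int64)
ii = integral_image(img)
# a two-rectangle Haar-like feature: left half minus right half
feature = rect_sum(ii, 0, 0, 4, 3) - rect_sum(ii, 0, 3, 4, 3)
```

On this uniform test image the two halves sum to the same value, so the feature is zero; on a real face window the dark/bright contrast gives a nonzero response.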
Step 102: determine the offset position corresponding to each pixel in the original image.

Specifically, the pixels to be processed may be selected from the original image, and an offset position is determined for each of them. The pixels to be processed may include all pixels in the original image, or only some of them.

The offset position corresponding to a pixel may be the position of any other pixel in the original image, and may be determined by a random function.

The purpose of the offset position is that the value of a pixel can be replaced by the value at its corresponding offset position, which produces the blurring effect. The distance between a pixel and its offset position is called the offset; to ensure a good blurring effect, different pixels may have unequal offsets.
Step 103: blur the face region by updating the value of each pixel to the value at its corresponding offset position.

In the embodiment of the invention, the value of each of at least one pixel in the original image may be updated to the value at that pixel's offset position; the more pixels are updated, the stronger the blur tends to be. It can be understood that blurring a region of an image is the process of updating the pixel values within that region.

In an alternative embodiment, only the pixels in the face region are updated. Specifically, the value of each pixel in the face region is updated to the value at its corresponding offset position in the original image, yielding the blurred face image. Regions other than the face remain identical to the original image and are not blurred, so the privacy of the photographed person is protected without hiding the rest of the scene from viewers.

In another alternative embodiment, the value of every pixel in the original image may be updated to the value at its corresponding offset position to obtain a fully blurred image, and the original image and the blurred image are then fused according to the face-region information to obtain a face-blurred image with a good blurring effect.

In other alternative embodiments, only some of the pixels in the face region, or some of the pixels in the whole image, may be updated; the embodiments of the invention do not limit this.
In practice, in scenes where the privacy of the photographed person must be protected, in entertainment scenes where viewers should not learn what the photographed person looks like, or in scenes that simply add interest, the image processing method provided by the embodiment of the invention can be used to obtain and display an image with a blurred face, effectively occluding the face without affecting the display of information in other regions.
Optionally, the method in the embodiment of the present invention may be applied to video, and in particular to live video. During a live broadcast, the video shot by the camera can be obtained in real time, and each frame can be blurred before being output, achieving real-time face blurring and protecting the privacy of the photographed person.
If at least two face regions exist in the original image, each of them is blurred by updating the values of its pixels to the values at the corresponding offset positions. Specifically, the following operation may be performed for each face region: update the value of each pixel in the face region to the value at its corresponding offset position in the original image, thereby blurring that face region.
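For illustration, the per-region update can be sketched as follows, using NumPy's random generator for the offsets rather than any specific formula from the patent; the function and variable names are assumptions:

```python
import numpy as np

def blur_face_regions(image, boxes, rng=None):
    """boxes: list of (top, left, height, width). Returns a blurred copy
    in which every pixel inside a box takes the value of a pixel at a
    pseudo-random offset position in the original image."""
    rng = np.random.default_rng(0) if rng is None else rng
    out = image.copy()
    h, w = image.shape[:2]
    for top, left, bh, bw in boxes:
        ys, xs = np.mgrid[top:top + bh, left:left + bw]
        # random offset positions, clipped so they stay inside the image
        oy = np.clip(ys + rng.integers(-5, 6, ys.shape), 0, h - 1)
        ox = np.clip(xs + rng.integers(-5, 6, xs.shape), 0, w - 1)
        out[ys, xs] = image[oy, ox]
    return out

img = np.arange(100 * 100, dtype=np.float64).reshape(100, 100)
blurred = blur_face_regions(img, [(10, 10, 20, 20), (60, 60, 20, 20)])
```

Each box is processed independently against the original image, so any number of faces is handled by the same loop.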
In summary, the image processing method provided by this embodiment determines the face region in the original image to be processed, calculates the offset position corresponding to each pixel, and updates the values of the pixels in the face region to the values at their corresponding offset positions, obtaining a blurred face image. By combining face detection with image blurring and replacing each original pixel with the pixel at its offset position, face blurring is achieved quickly and effectively, protecting the privacy of the photographed person and improving image and video security.
Based on the technical solutions provided in the foregoing embodiments, the offset position corresponding to a pixel may be determined in multiple ways. For example, an offset position may be selected randomly for each pixel, or the position information of the pixel may be fed into a function whose output is taken as the position information of the offset position. The position information may include a horizontal coordinate and a vertical coordinate. The function can be chosen according to actual needs, and different functions may produce different blurring effects.
The embodiment of the invention also provides a method for determining the offset position from a continuous function. Fig. 3 is a flowchart of a method for determining an offset position according to an embodiment of the present invention. As shown in fig. 3, the method for determining an offset position in this embodiment may include:

Step 301: obtain the position information of a pixel in the original image.

Step 302: feed the position information into a continuous function, and truncate part of the continuous function's output value.

Step 303: determine the corresponding offset position from the truncated value and the blur radius.

The continuous function is used to achieve a smooth blurring effect without obvious discontinuities; truncating part of the output value adds a degree of randomness, which improves the blurring effect and prevents the face from remaining roughly recognizable.
In an alternative embodiment, the continuous function may be a sine function, and the truncation may take the fractional part of its output. In this case, the offset position (x′, y′) corresponding to the pixel (x, y) may be calculated according to the following formulas:
x′=blurRadius*fract(a*sin(b*x+c*y))/h (1)
y′=blurRadius*fract(a*sin(b*y+c*x))/w (2)
where blurRadius is the blur radius, an integer greater than 0; fract() takes the fractional part of its argument; x and y are the horizontal and vertical coordinates of the pixel, and x′ and y′ are the horizontal and vertical coordinates of the offset position; a, b, and c are constants; h is the number of rows of the original image and w is the number of columns of the original image.
In equations (1) and (2), the sine function may be replaced with a cosine function, an exponential function, or the like. fract() truncates the fractional part of the sine function's output; for example, when the output is 17.86, fract(17.86) = 0.86. fract() may also be replaced with other functions, such as one that keeps the integer part or one that keeps one or more middle digits.

The values of a, b, and c can be set according to actual needs. Optionally, a may be 6791.0, b may be 47.0, and c may be 9973.0; experiments show that these values produce a good blurring effect.

h and w are the number of rows and columns of the original image, respectively. For example, if the original image is 100 × 80, that is, 100 pixels per row and 80 pixels per column, then h is 80 and w is 100.
blurRadius is the blur radius: the larger it is, the stronger the blur; the smaller it is, the lighter the blur and the closer the result is to the original image. The blur radius may be preset or determined from user input.

Optionally, blur-degree information entered by the user may be obtained, and the blur radius determined from it. For example, the user may enter the blur-degree information through a touch screen, keyboard, mouse, button, microphone, and so on.

After the blur-degree information is obtained, the numerical value it contains may be used directly as the blur radius. Alternatively, a correspondence between blur-degree information and blur radius may be stored in advance, the blur radius looked up from the information entered by the user, and the offset position of each pixel then determined from that blur radius.
After the offset position of each pixel is determined, the value of each pixel may be updated to the value at its corresponding offset position in the original image. For example, suppose that in the original image the value of pixel A is (1, 1, 1) and the value of pixel B is (12, 12, 12), and equations (1) and (2) give the position of B as the offset position of A; then the value of A is updated to (12, 12, 12).
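A literal NumPy transcription of equations (1) and (2) might look as follows; the constants are those suggested above, the coordinates are assumed to be normalized to [0, 1] as discussed later in the description, and the helper names are ours, not the patent's:

```python
import numpy as np

def fract(v):
    """Fractional part, e.g. fract(17.86) = 0.86."""
    return v - np.floor(v)

def offset_positions(x, y, blur_radius, h, w, a=6791.0, b=47.0, c=9973.0):
    """Equations (1) and (2) exactly as printed in the description."""
    x_off = blur_radius * fract(a * np.sin(b * x + c * y)) / h
    y_off = blur_radius * fract(a * np.sin(b * y + c * x)) / w
    return x_off, y_off

# fract() keeps the results inside [0, blurRadius/h) and [0, blurRadius/w)
xp, yp = offset_positions(np.array([0.5]), np.array([0.25]), 5, 80, 100)
```

Because fract() maps the large, rapidly varying sine term into [0, 1), nearby pixels get effectively pseudo-random offsets while the underlying function stays continuous, which is the property the method relies on.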
After all pixels requiring blurring have been processed, the overall visual brightness of the blurred image may change, because darker areas of the face, such as eyes, eyebrows, and hair, are blended with brighter areas such as skin. The brightness of the image can therefore be adjusted after the blur processing.
Optionally, after the value of a pixel has been updated to the value at its corresponding offset position in the original image, the brightness of the pixel may be adjusted according to a luminance coefficient.

Specifically, the updated pixel value may be divided by the luminance coefficient to obtain the new pixel value. The luminance coefficient can be set according to actual needs, for example to 1.2, in which case a pixel value of (12, 12, 12) before the adjustment becomes (10, 10, 10) after it.
In general, the larger the pixel value, the higher the brightness, and the smaller the pixel value, the lower the brightness. Therefore, the luminance coefficient may be between 0 and 1 when the blurred portion needs to be brightened, and greater than 1 when the blurred portion needs to be dimmed.

Alternatively, the luminance coefficient may be added to or subtracted from the pixel value, or other operations may be performed on it, so long as the brightness is adjusted.
The brightness coefficient may be preset, or may be adjusted according to information input by a user, which is not limited in this embodiment.
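For the example coefficient of 1.2 given above, the luminance adjustment reduces to a single division; this is a sketch, and the function name is an assumption:

```python
import numpy as np

def adjust_brightness(pixel, luminance_coeff=1.2):
    """Divide the updated pixel value by the luminance coefficient."""
    return np.asarray(pixel, dtype=np.float64) / luminance_coeff

adjusted = adjust_brightness([12, 12, 12])   # approximately (10, 10, 10)
```

A coefficient below 1 brightens the blurred portion and one above 1 dims it, matching the rule stated above.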
The original image according to the embodiment of the present invention may be any type of image, such as a grayscale image, an RGB image, a YUV image, and the like, and the pixel value of each pixel point may also be represented in a corresponding form, for example, for a grayscale image, the corresponding pixel value may be represented by a grayscale value; for an RGB image, the corresponding pixel values may include a red (R) component, a green (G) component, a blue (B) component; for a YUV image, the corresponding pixel values may include a Y-channel component, a U-channel component, and a V-channel component.
Optionally, some image or video output rules require the horizontal and vertical coordinates of each pixel to be normalized before output. The coordinates may therefore be normalized to the range [0, 1] before the offset position is calculated, or the normalization may be performed after the offset position is calculated and before the image is output.
In summary, in the method for determining the offset position provided by the embodiment of the invention, the offset-position function is built from a continuous function and a truncation function, balancing the smoothness of the blurred region with the strength of the blur. Brightness adjustment after blurring keeps the output picture at a proper brightness, and allowing the user to set the blur degree and the luminance coefficient makes the method suitable for many different scenes and for users' individual preferences.
Fig. 4 is a flowchart illustrating a second embodiment of an image processing method according to the present invention. Building on the technical solution of any of the above embodiments, this embodiment calculates the probability that each pixel in the original image belongs to face skin according to the face region, and fuses the fully blurred image with the original image according to that probability to obtain the final face-blurred image. As shown in fig. 4, the image processing method in this embodiment may include:

Step 401: obtain an original image to be processed, and determine the face region in the original image.

Step 402: determine the offset position corresponding to each pixel in the original image.

Step 403: update the value of each pixel in the original image to the value at its corresponding offset position in the original image, obtaining the fully blurred image.

Step 404: according to the face region, calculate the probability that each pixel in the original image belongs to face skin.

Step 405: fuse the fully blurred image with the original image according to the probability that each pixel belongs to face skin, obtaining the face-blurred image.
In this embodiment, the entire original image is first blurred to obtain a fully blurred image, which is then fused with the original image according to the probability that each pixel belongs to face skin.

There are many ways to calculate the probability that a pixel belongs to face skin. Optionally, a face-skin model may be built through big-data analysis in an earlier stage and stored; in this embodiment, the probability that each pixel in the original image belongs to face skin can then be calculated from the pre-stored face-skin model.

Once the probability is determined for each pixel, the fully blurred image and the original image can be fused according to those probabilities to obtain the face-blurred image.
Optionally, the value of each pixel in the face-blurred image is the weighted sum of its value in the original image and its value in the fully blurred image, where the weight of the fully blurred image is the probability that the pixel belongs to face skin and the weight of the original image is the probability that it does not.

Specifically, for each pixel: multiply the probability that the pixel belongs to face skin by its value in the fully blurred image to obtain a first product; multiply the probability that the pixel does not belong to face skin by its value in the original image to obtain a second product; and add the two products to obtain the pixel's value in the face-blurred image.
The probability that a pixel does not belong to face skin may be taken as 1 minus the probability that it does belong to face skin. For example, if the probability that a pixel belongs to face skin is 0.8, the probability that it does not is 0.2.
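The weighted fusion of step 405 can be sketched per pixel as out = p * blurred + (1 - p) * original, where p is the face-skin probability; the function name below is an assumption:

```python
import numpy as np

def fuse(original, blurred, skin_prob):
    """Weighted sum: the skin probability weights the fully blurred
    image, and (1 - probability) weights the original image."""
    p = np.asarray(skin_prob, dtype=np.float64)
    return p * blurred + (1.0 - p) * original

orig = np.full((2, 2), 100.0)   # original pixel values
blur = np.full((2, 2), 20.0)    # fully blurred pixel values
p = np.array([[0.8, 0.0], [1.0, 0.5]])
fused = fuse(orig, blur, p)
```

A pixel with p = 0.8 gets 0.8 * 20 + 0.2 * 100 = 36, mostly the blurred value, as intended for skin; a pixel with p = 0 keeps its original value, which is what makes the transition between face and background smooth.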
In summary, the image processing method provided in this embodiment blurs the entire original image to obtain an overall blurred image, calculates the probability that each pixel belongs to the face skin, and fuses the original image with the overall blurred image according to that probability. The greater the probability that a pixel belongs to the face skin, the closer its pixel value is to the corresponding value in the blurred image; the smaller the probability, the closer its pixel value is to the corresponding value in the original image. Blurring of the face is thus effectively implemented without affecting the normal display of other areas, and the transition between the face and other areas is smooth and natural, which has high application value.
Fig. 5 is a schematic flow chart of a face skin probability calculation method according to an embodiment of the present invention. As shown in fig. 5, the calculating, according to the face region, a probability that each pixel point in the original image belongs to a face skin in step 403 may include:
step 501, determining position information of a plurality of key points of the face according to the face area.
Optionally, 68 key points of the face may be located using the SDM (Supervised Descent Method) algorithm according to the face area, so as to obtain the position information of the 68 key points.
Step 502, for each pixel in the original image, calculating the probability that the pixel belongs to the skin according to the pixel value corresponding to the pixel and the pixel values corresponding to the plurality of key points; and/or calculating the probability that the pixel belongs to the face according to the position information of the pixel and the position information of the plurality of key points.
Step 503, calculating the probability that each pixel point belongs to the skin of the human face according to the probability that each pixel point belongs to the skin and/or the probability that each pixel point belongs to the human face.
In each embodiment of the present invention, the probability of belonging to a face/skin/face skin refers to the probability that a pixel in an image is one of the pixels of a face/skin/face skin; it is abbreviated in this way to simplify the description. For example, saying that a certain pixel belongs to a face means that the pixel is one of the pixels constituting the face in the image.
After the position information of the plurality of key points of the face is obtained, the probability that each pixel point in the original image belongs to the skin and/or the probability that each pixel point belongs to the face can be calculated according to the position information of the plurality of key points, and then the probability that each pixel point belongs to the face skin is calculated according to the probability that each pixel point belongs to the skin and/or the probability that each pixel point belongs to the face.
In an optional embodiment, the probability that a pixel belongs to the skin can be directly used as the probability that the pixel belongs to the skin of the human face, that is, the whole blurred image and the original image can be directly fused according to the probability that the pixel belongs to the skin.
Alternatively, the probability that a pixel belongs to the face skin can be determined from the probability that it belongs to the skin together with its position. Specifically, if the pixel lies outside the face region, its probability of belonging to the face skin is taken to be 0 or a small value close to 0; if it lies within the face region, its probability of belonging to the face skin is taken to equal its probability of belonging to the skin.
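This piecewise rule can be sketched in a few lines; the concrete value of `eps` standing in for "a small value close to 0" is an assumption for illustration:

```python
def face_skin_from_skin(p_skin, inside_face_region, eps=0.01):
    """Face-skin probability from the skin probability plus position:
    ~0 outside the face region, equal to P(skin) inside it.
    eps is an assumed stand-in for 'a small value close to 0'."""
    return p_skin if inside_face_region else eps

print(face_skin_from_skin(0.7, True), face_skin_from_skin(0.7, False))  # 0.7 0.01
```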
In another optional implementation, the probability that the pixel belongs to the face may be directly used as the probability that the pixel belongs to the face skin, that is, the whole blurred image and the original image may be directly fused according to the probability that the pixel belongs to the face.
In yet another optional embodiment, the probability that the pixel belongs to the skin of the face can be comprehensively calculated according to the probability that the pixel belongs to the skin and the probability that the pixel belongs to the face.
Optionally, the probability that each pixel belongs to the face skin can be calculated according to the following formula:
P_faceskin = d_1 · P_skin · P_face + d_2 · P_face (3)

where P_faceskin is the probability that the pixel belongs to the face skin, P_skin is the probability that the pixel belongs to the skin, P_face is the probability that the pixel belongs to the face, and d_1 and d_2 are constants; optionally, d_1 may be 0.2 and d_2 may be 0.8.
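Formula (3) is a one-line computation; the sketch below uses the optional constants given above:

```python
def face_skin_probability(p_skin, p_face, d1=0.2, d2=0.8):
    """Formula (3): P_faceskin = d1 * P_skin * P_face + d2 * P_face.
    d1 and d2 default to the optional values given in the text."""
    return d1 * p_skin * p_face + d2 * p_face

# A pixel that is both skin-coloured and well inside the face:
print(round(face_skin_probability(0.9, 0.95), 3))  # 0.931
```

Note that with d_2 = 0.8, a pixel inside the face keeps a substantial face-skin probability even if its skin probability is low, which matches the weighting of the face term in formula (3).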
In summary, the face skin probability calculation method provided by this embodiment of the invention calculates the probability that each pixel belongs to the face and/or the probability that each pixel belongs to the skin from the position information of a plurality of key points of the face, and either probability alone may represent the probability of belonging to the face skin; the steps are simple and easy to implement. When the probability of belonging to the face skin is determined jointly from the face probability and the skin probability, it is more accurate and better reflects whether the pixel belongs to the face skin, so that the degree of blurring of the pixel is determined more reasonably. This avoids blurring other skin-colored areas (which happens when only the skin probability is considered) and avoids a poor blurring effect in areas such as the forehead that lie outside the key-point contour (which happens when only the face probability is considered).
Fig. 6 is a schematic flow chart of a skin probability calculation method according to an embodiment of the present invention. The process shown in fig. 6 is used to calculate the probability that a pixel belongs to the skin. As shown in fig. 6, the calculating, in step 502, the probability that the pixel point belongs to the skin according to the pixel value corresponding to the pixel point and the pixel values corresponding to the plurality of key points may include:
step 601, determining the outline of the human face according to the position information of the plurality of key points.
Specifically, the plurality of key points may be connected by straight lines or smooth curves to obtain the contour of the face. Fig. 7 is a schematic view of a face contour according to an embodiment of the present invention. As shown in fig. 7, the edge of the white region is a face contour, and the white region in the face contour represents a region belonging to the skin.
Step 602, calculating a skin model corresponding to the face according to pixel values corresponding to pixel points in the contour of the face.
As described above, the area where the skin is located in the face can be determined according to the face contour, and the skin model corresponding to the face in the original image can be calculated according to the pixel value corresponding to the pixel point of the area where the skin is located.
Optionally, a gaussian skin color model may be used to calculate a skin model corresponding to the human face.
Specifically, let X denote the vector formed by the pixel values corresponding to all pixels within the face contour (i.e., all pixels in the white area in fig. 7): X = [x_0, x_1, …, x_N], where N is the number of pixels within the face contour and the element x_i represents the pixel value corresponding to the i-th pixel.
The skin model comprises: M = E(X), C = E((X − M)(X − M)^T), where M is the mean of the pixel values corresponding to all pixels within the face contour and C is the covariance matrix of those pixel values.
In practical application, the pixel value corresponding to a pixel may be one-dimensional or multi-dimensional. Optionally, when a YUV image is processed, the skin model may be calculated from the U-channel component and the V-channel component corresponding to each pixel, that is, x_i = [U_i, V_i]^T.
In the YUV color coding method, the Y-channel component represents luminance (i.e., the gray value), while the U-channel and V-channel components represent chrominance, describing the color and saturation of a pixel. The skin color model can therefore be calculated accurately from the U-channel and V-channel components without being influenced by brightness.
Alternatively, the skin model may be calculated from the Cb and Cr values corresponding to each pixel, that is, x_i = [Cb_i, Cr_i]^T.
step 603, determining the probability that the pixel point belongs to the skin according to the skin model.
Specifically, according to the obtained mean M and covariance matrix C, the probability P_skin that each pixel in the original image belongs to the skin can be calculated by a two-dimensional Gaussian function:

P_skin = exp(−(1/2)(x − M)^T C^(−1) (x − M)) (4)

In formula (4), x is the pixel value corresponding to the pixel and P_skin is the probability that the pixel belongs to the skin.
In summary, the skin probability calculation method provided by this embodiment of the invention calculates the mean M and covariance matrix C of the skin model from the pixel values of the pixels belonging to the skin within the face contour, which effectively reflects the skin model corresponding to the face in the image; the probability that each pixel in the image belongs to the skin is then calculated with a two-dimensional Gaussian function, so that skin regions in the image can be detected quickly and accurately.
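Steps 602 and 603 can be sketched with NumPy. The U/V sample values below are made up for illustration, and the un-normalised two-dimensional Gaussian follows formula (4):

```python
import numpy as np

def fit_skin_model(uv_pixels):
    """Step 602: mean M and covariance matrix C of the U/V values of
    the pixels inside the face contour (uv_pixels has shape (N, 2))."""
    M = uv_pixels.mean(axis=0)
    C = np.cov(uv_pixels, rowvar=False)
    return M, C

def skin_probability(x, M, C):
    """Step 603 / formula (4): un-normalised 2-D Gaussian giving the
    probability that a pixel with U/V value x belongs to the skin."""
    d = x - M
    return float(np.exp(-0.5 * d @ np.linalg.inv(C) @ d))

# Illustrative U/V samples from inside the face contour (made-up values):
samples = np.array([[110.0, 150.0], [112.0, 152.0], [108.0, 149.0],
                    [111.0, 151.0], [109.0, 148.0]])
M, C = fit_skin_model(samples)
print(skin_probability(M, M, C))  # a pixel exactly at the mean -> 1.0
```

A pixel whose U/V value sits at the model mean gets probability 1, and the probability falls off as the value moves away from the mean along the directions described by C.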
In other alternative embodiments, a pre-stored skin model may also be used to calculate the probability that each pixel belongs to the skin. Optionally, the skin model may be calculated from a number of existing skin samples, where each sample consists of pixel values of skin portions selected from human images; for example, skin portions may be extracted from 500 human images, and the mean M and covariance matrix C calculated from these 500 samples and stored. When an original image is to be processed, the pre-stored skin model can be obtained directly and the probability that each pixel belongs to the skin calculated according to formula (4), which effectively improves the speed of image processing.
Fig. 8 is a schematic flow chart of a face probability calculation method according to an embodiment of the present invention. The process shown in fig. 8 is used to calculate the probability that a pixel belongs to a face. As shown in fig. 8, the calculating, according to the position information of the pixel point and the position information of the plurality of key points in step 502, the probability that the pixel point belongs to a face may include:
step 801, determining feature information of the face according to the position information of the plurality of key points, wherein the feature information includes position information of the center of the face, height information of the face and width information of the face.
Specifically, the plurality of key points may be a plurality of key points that constitute a face contour, and the position information of the center of the face may be obtained by averaging the position information of the plurality of key points; subtracting the minimum abscissa from the maximum abscissa of the abscissas of the plurality of key points to obtain the width information of the face; and subtracting the minimum ordinate from the maximum ordinate in the ordinates of the plurality of key points to obtain the height information of the face.
Step 802, calculating the probability that the pixel point belongs to the face according to the position information of the pixel point and the feature information of the face.
An alternative calculation is given below. Suppose a total of 68 key points of the face are obtained, denoted as (fp_ix, fp_iy), i = 1, 2, …, 68, where fp_ix is the abscissa of the i-th key point and fp_iy is its ordinate. From these, the coordinates (fc_x, fc_y) of the face center can be obtained:

fc_x = (1/68) Σ_{i=1}^{68} fp_ix, fc_y = (1/68) Σ_{i=1}^{68} fp_iy (5)

where fc_x and fc_y represent the abscissa and ordinate of the face center, respectively.
Face width f_width:

f_width = max_i fp_ix − min_i fp_ix (6)

Face height f_height:

f_height = max_i fp_iy − min_i fp_iy (7)
where max_i indicates taking the maximum over i and min_i indicates taking the minimum over i.
For any point (p_x, p_y) on the original image, the probability P_face(p_x, p_y) that it belongs to the face is computed from the position of the point together with the face center, face width and face height:

[formula (8), rendered as an image in the source]

where p_x and p_y are the abscissa and ordinate of the pixel, respectively, and k, m and l are constants; optionally, k = 0.3, m = 0.2 and l = 0.6.
To sum up, the face probability calculation method provided by the embodiment of the present invention can calculate the position information of the center of the face, the height information of the face, and the width information of the face according to the position information of a plurality of key points of the face, and then process the position information of each pixel point according to the position information of the center of the face, the height information of the face, and the width information of the face, so as to obtain the probability that each pixel point belongs to the face.
In other alternative embodiments, other methods may be used to calculate the probability that each pixel belongs to a face. For example, the probability may be calculated only from the distance between the pixel and the face center, without considering the face width and height: the closer the pixel is to the face center, the larger the probability, and the farther away, the smaller. As another example, the probability may be calculated from the distance between the pixel and the face contour. Besides computing the probability with a formula, a correspondence between distance and probability may be stored in advance, for example in a table, so that the probability is obtained by directly reading the table; and so on.
Fig. 9 is a schematic flowchart of a third embodiment of an image processing method according to the present invention. The embodiment is based on the technical scheme provided by any embodiment, and each frame of image in the video is processed and output in real time. As shown in fig. 9, the image processing method in the present embodiment may include:
step 901, determining an original image to be processed according to video data shot in real time, where the original image to be processed is an original image to be played in a next frame.
And step 902, determining a face area in the original image.
Step 903, determining the offset position corresponding to the pixel point in the original image.
Step 904, blurring the face region by updating the pixel value of each pixel to the pixel value of its corresponding offset position.
And step 905, displaying the blurred image on a playing interface.
Specifically, in the video playing process, especially in the live video broadcasting process, the face blurring processing can be performed on each frame of image, the specific processing method can be seen in any of the above embodiments, after the blurring processing is completed, the blurred images can be displayed on the playing interface, and the continuously played images form a video, so that the user can watch the video conveniently.
Optionally, when processing an image, the image may be converted into a YUV format, and then, the image may be divided into two branches to perform different processing, where the first branch is used to implement a blurring process to obtain an overall blurred image; the second branch is used for realizing the face region detection and probability calculation.
When determining the face region, only the Y channel component may be used, and the UV channel component is not needed, for example, the Y channel component may be directly input into the Viola-Jones face detector, so as to obtain the face region.
After the face region is determined, the probability calculation can be performed using the UV channel components. Specifically, the U-channel component and the V-channel component may be used as the pixel values of the pixels, and the probability calculation may be performed using the methods shown in fig. 5 to 8. Since only the Y-channel component is used when determining the face region and only the UV-channel components are used in the probability calculation, the face blurring can be realized quickly and accurately.
In summary, the image processing method provided in this embodiment performs face blurring processing on each frame of image in a video, and displays the blurred image to a user on a playing interface after the blurring processing is completed, so that face blurring of the video can be effectively achieved, and a privacy protection requirement of video playing or live broadcasting is met.
Fig. 10 is a flowchart illustrating a fourth embodiment of an image processing method according to the present invention. In this embodiment, on the basis of the technical solution provided in any of the above embodiments, a face blur switch is displayed to the user, and the user controls whether it is turned on. As shown in fig. 10, the image processing method in this embodiment may include:
step 1001, determining an original image to be processed according to video data shot in real time, wherein the original image to be processed is an original image to be played in a next frame.
Step 1002, judging whether the face blur switch is turned on; if so, executing step 1003, and if not, executing step 1004.
Alternatively, the face blur switch may be turned on or off in response to an operation event of the user. In the embodiments of the present invention, the user may be a video photographer, a video viewer, a background manager, or the like.
For example, during a live broadcast, the broadcaster may choose to turn the face blur switch on or off, or a viewer of the live broadcast may do so, or a manager of the video broadcast platform may do so.
The face blur switch can be turned on or off in various ways. For example, the switch may be displayed on the playing interface corresponding to a video viewer, on the shooting interface corresponding to the video photographer, or on the management interface corresponding to a background manager, and the user may turn it on or off by clicking or similar operations.
In other embodiments, the user may turn the face blur switch on or off by other means, such as shaking the device, pressing a key, voice input, etc.
Step 1003, determining the face region in the original image, calculating the offset positions corresponding to the pixels in the original image, and blurring the face region by updating the pixel value of each pixel to the pixel value of its corresponding offset position.
And step 1004, displaying the original image on a playing interface.
Specifically, when the face blur switch is turned on, the face blur function is activated to output the blurred image to the user, and when the face blur switch is turned off, the face blur function is deactivated to output the original image to the user.
In summary, the image processing method provided by this embodiment sets the face blur switch for the user, and the user can turn on or turn off the face blur switch according to actual needs, so as to enable or disable the face blur function, meet video playing requirements in different scenes, and improve user experience.
Further, an interface for adjusting the degree of blur may also be provided; the blur degree information input by the user on the interface is acquired, and the blur radius is determined according to the blur degree information, so that the user can conveniently adjust the degree of blur. Fig. 11 is a schematic interface display diagram according to an embodiment of the present invention. As shown in fig. 11, a face blur switch and a control bar for the blur degree information may be displayed on an interface, where the interface may face any type of user and may be, for example, a video playing interface corresponding to a viewer.
In other alternative embodiments, an input box of the blur degree information may also be displayed, and a user inputs a specific numerical value or text (high, medium, and low), where different numerical values or texts represent different blur degrees, and a corresponding blur radius may be determined according to the blur degree information.
Alternatively, the input box may not be displayed, and the corresponding blur level information may be determined directly in response to a trigger event of the user, for example, the user presses the screen for a long time to increase the blur level, or shakes the electronic device to increase the blur level, and so on.
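The text does not specify how blur degree information maps to a blur radius; the table and the linear slider mapping below are assumptions for illustration only:

```python
def blur_radius_from_degree(degree):
    """Map user-facing blur-degree info (text or a 0-100 slider value)
    to a blur radius. The concrete mapping is not specified in the
    source; both the table and the linear scale are assumptions."""
    table = {"low": 3, "medium": 7, "high": 15}
    if isinstance(degree, str):
        return table[degree.lower()]
    # Numeric slider value in [0, 100] mapped linearly onto [1, 25].
    return 1 + round(degree / 100 * 24)

print(blur_radius_from_degree("high"), blur_radius_from_degree(50))  # 15 13
```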
On the basis of the solutions provided by the above embodiments, after the image is blurred, the blurred image may be displayed on a playing interface, or the blurred image may be output. For example, after the server performs the blurring processing on the image, the image may be sent to the user equipment and displayed by the user equipment. Or after the first user equipment blurs the image, the image can be sent to the second user equipment, and the second user equipment displays the image to the user.
An image processing apparatus according to one or more embodiments of the present invention will be described in detail below. Those skilled in the art will appreciate that these image processing apparatuses can be configured by the steps taught in the present embodiment using commercially available hardware components.
Fig. 12 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention. As shown in fig. 12, the apparatus may include:
the acquisition module 11 is configured to acquire an original image to be processed and determine a face area in the original image;
a determining module 12, configured to determine an offset position corresponding to a pixel point in an original image;
and the processing module 13 is configured to perform blurring processing on the face region by updating the pixel value of the pixel point to the pixel value of the corresponding offset position.
Optionally, the processing module 13 may be specifically configured to: and updating the pixel values of the pixel points of the face area into the pixel values of the corresponding offset positions in the original image to obtain a face blurred image.
Optionally, the processing module 13 may include: an updating unit for updating the pixel values of the pixels in the original image to the pixel values of the corresponding offset positions in the original image to obtain an overall blurred image; a calculating unit for calculating, according to the face area, the probability that the pixels in the original image belong to the face skin; and a fusion unit for fusing the overall blurred image and the original image according to the probability that the pixels belong to the face skin to obtain a face blurred image.
Optionally, the determining module 12 may be specifically configured to: acquire the position information of a pixel in the original image; input the position information into a continuous function and intercept a partial value from the output of the continuous function; and determine the corresponding offset position according to the intercepted partial value and the blur radius.
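The text does not pin down which continuous function is used or which part of its output is intercepted. The sketch below uses sin() with the fractional part as the intercepted partial value; the function choice and all constants are assumptions for illustration only:

```python
import math

def offset_position(x, y, blur_radius, seed=12345.678):
    """Sketch of the determining module: feed the pixel position into a
    continuous function, keep part of the output (here the fractional
    part), and scale it by the blur radius to get an offset position.
    The choice of sin() and the constants are illustrative assumptions."""
    def frac(v):
        return v - math.floor(v)  # intercepted "partial value" in [0, 1)
    dx = (frac(math.sin(x * 12.9898 + y * 78.233) * seed) * 2 - 1) * blur_radius
    dy = (frac(math.sin(x * 39.3467 + y * 11.135) * seed) * 2 - 1) * blur_radius
    return x + dx, y + dy

# The offset position stays within blur_radius of the original pixel,
# and the same input always yields the same offset (deterministic).
ox, oy = offset_position(120, 85, blur_radius=8)
```

Because the function is deterministic, the same pixel always maps to the same offset position, while nearby pixels scatter pseudo-randomly within the blur radius, producing the blurring effect when pixel values are replaced by the values at their offset positions.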
Optionally, the processing module 13 may be specifically configured to: and if at least two face areas exist in the original image, blurring the at least two face areas according to the offset positions corresponding to the pixel points.
Optionally, the processing module 13 may be further configured to: and after the pixel value of the pixel point is updated to the pixel value of the corresponding offset position in the original image, brightness adjustment is carried out on the pixel point according to the brightness coefficient.
Optionally, the processing module 13 may be further configured to: providing an interface for adjusting the degree of blur; acquiring fuzzy degree information input by a user on the interface; and determining the fuzzy radius according to the fuzzy degree information.
Optionally, the computing unit may include: a determining subunit for determining the position information of a plurality of key points of the face according to the face region; a first calculating subunit for calculating, for each pixel in the original image, the probability that the pixel belongs to the skin according to the pixel value corresponding to the pixel and the pixel values corresponding to the plurality of key points; a second calculating subunit for calculating, for each pixel in the original image, the probability that the pixel belongs to the face according to the position information of the pixel and the position information of the plurality of key points; and a third calculating subunit for calculating the probability that each pixel belongs to the face skin according to the probability that the pixel belongs to the skin and/or the probability that the pixel belongs to the face.
Optionally, the second calculating subunit may be specifically configured to: determining feature information of the face according to the position information of the plurality of key points, wherein the feature information comprises position information of the center of the face, height information of the face and width information of the face; and calculating the probability of the pixel point belonging to the face according to the position information of the pixel point and the feature information of the face.
Optionally, the first calculating subunit may be specifically configured to: determining the outline of the face according to the position information of the plurality of key points; calculating a skin model corresponding to the face according to pixel values corresponding to pixel points in the contour of the face; and determining the probability of the pixel point belonging to the skin according to the skin model.
Optionally, the fusion unit may be specifically configured to: perform a weighted summation of the pixel value of each pixel in the original image and its pixel value in the overall blurred image to obtain the pixel value of the pixel in the face blurred image, where the weight corresponding to the overall blurred image is the probability that the pixel belongs to the face skin and the weight corresponding to the original image is the probability that the pixel does not belong to the face skin.
Optionally, the obtaining module 11 may be specifically configured to: determining an original image to be processed according to video data shot in real time, wherein the original image to be processed is an original image to be played in the next frame; and determining the face area in the original image.
Correspondingly, the processing module 13 may be further configured to: and after blurring the face region by updating the pixel value of the pixel point to the pixel value of the corresponding offset position, displaying the blurred image on a playing interface.
Optionally, the apparatus may further include: a response module for turning the face blur switch on or off in response to an operation event of the user.
Correspondingly, the obtaining module 11 may specifically be configured to: determine the original image to be processed according to video data shot in real time, where the original image to be processed is the original image to be played in the next frame; and determine the face area in the original image when the face blur switch is turned on.
The apparatus may further include: a display module for displaying the original image on the playing interface when the face blur switch is turned off.
The apparatus shown in fig. 12 can execute the image processing method provided by any of the foregoing embodiments, and reference may be made to the related description of the foregoing embodiments for a part of this embodiment that is not described in detail. The implementation process and technical effect of the technical solution refer to the description in the foregoing embodiments, and are not described herein again.
The internal functions and structures of the image processing apparatus are described above, and in one possible design, the structure of the image processing apparatus may be implemented as an electronic device, such as a user device, e.g., a mobile phone, a tablet computer, a computer, etc., or may be a server or a cluster of servers, etc. As shown in fig. 13, the electronic device may include: a processor 21 and a memory 22. Wherein the memory 22 is used for storing a program for supporting an electronic device to execute the image processing method provided by any one of the foregoing embodiments, and the processor 21 is configured to execute the program stored in the memory 22.
The program comprises one or more computer instructions which, when executed by the processor 21, are capable of performing the steps of:
acquiring an original image to be processed, and determining a face area in the original image;
determining an offset position corresponding to a pixel point in an original image;
and carrying out fuzzy processing on the face region by updating the pixel value of the pixel point to the pixel value of the corresponding offset position.
Optionally, the processor 21 is further configured to perform all or part of the steps in the embodiments shown in fig. 1 to 10.
The electronic device may further include a communication interface 23 for communicating with other devices or a communication network.
Additionally, embodiments of the present invention provide a computer-readable storage medium storing computer instructions that, when executed by a processor, cause the processor to perform acts comprising:
acquiring an original image to be processed, and determining a face region in the original image;
determining an offset position corresponding to each pixel point in the original image; and
blurring the face region by updating the pixel value of each pixel point to the pixel value at the corresponding offset position.
In addition, the computer instructions, when executed by a processor, may further cause the processor to perform all or part of the steps involved in the image processing method in the above embodiments.
The above-described apparatus embodiments are merely illustrative. The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units; they may be located in one place or distributed across a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment, which those of ordinary skill in the art can understand and implement without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, or by a combination of hardware and software. Based on this understanding, the above technical solutions, or the portions thereof that contribute over the prior art, may be embodied in the form of a computer program product, which may be carried on one or more computer-usable storage media (including but not limited to disk storage, CD-ROM, and optical storage) having computer-usable program code embodied therein.
The present invention has been described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems), and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flow diagrams and/or block diagrams, and combinations of flows and/or blocks in the flow diagrams and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable image processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable image processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable image processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable image processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
In a typical configuration, a computing device includes one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may store information by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
Finally, it should be noted that the above embodiments are only intended to illustrate the technical solutions of the present invention, not to limit them. Although the present invention has been described in detail with reference to the foregoing embodiments, those of ordinary skill in the art should understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features may be equivalently replaced, and that such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (11)

1. An image processing method, comprising:
acquiring an original image to be processed, and determining a face region in the original image;
determining an offset position corresponding to each pixel point in the original image; and
blurring the face region by updating the pixel value of each pixel point to the pixel value at the corresponding offset position;
wherein blurring the face region by updating the pixel value of each pixel point to the pixel value at the corresponding offset position comprises: updating the pixel values of the pixel points in the original image to the pixel values at the corresponding offset positions in the original image to obtain an overall blurred image; calculating, according to the face region, the probability that each pixel point in the original image belongs to face skin; and fusing the overall blurred image and the original image according to the probability that each pixel point belongs to face skin, to obtain a face-blurred image;
wherein calculating, according to the face region, the probability that each pixel point in the original image belongs to face skin comprises:
determining position information of a plurality of key points of the face according to the face region, the plurality of key points forming a face contour;
for each pixel point in the original image, calculating the probability that the pixel point belongs to skin according to the pixel value of the pixel point and the pixel values of the plurality of key points, and calculating the probability that the pixel point belongs to the face according to the position information of the pixel point and the position information of the plurality of key points; and
calculating the probability that the pixel point belongs to face skin according to the probability that the pixel point belongs to skin and the probability that the pixel point belongs to the face;
and wherein determining the offset position corresponding to each pixel point in the original image comprises: acquiring position information of the pixel point in the original image; inputting the position information into a continuous function, and extracting a partial value from the output value of the continuous function; and determining the corresponding offset position according to the extracted partial value and a blur radius.
2. The method of claim 1, wherein blurring the face region by updating the pixel value of each pixel point to the pixel value at the corresponding offset position comprises:
if at least two face regions exist in the original image, blurring the at least two face regions by updating the pixel values of the pixel points to the pixel values at the corresponding offset positions.
3. The method of claim 1, further comprising, after updating the pixel value of a pixel point to the pixel value at the corresponding offset position in the original image:
adjusting the brightness of the pixel point according to a brightness coefficient.
4. The method of claim 1, further comprising:
providing an interface for adjusting the degree of blur;
acquiring blur-degree information input by a user on the interface; and
determining the blur radius according to the blur-degree information.
5. The method of claim 1, wherein calculating the probability that the pixel point belongs to the face according to the position information of the pixel point and the position information of the plurality of key points comprises:
determining feature information of the face according to the position information of the plurality of key points, the feature information comprising position information of the center of the face, height information of the face, and width information of the face; and
calculating the probability that the pixel point belongs to the face according to the position information of the pixel point and the feature information of the face.
6. The method of claim 1, wherein calculating the probability that the pixel point belongs to skin according to the pixel value of the pixel point and the pixel values of the plurality of key points comprises:
determining the contour of the face according to the position information of the plurality of key points;
calculating a skin model corresponding to the face according to the pixel values of the pixel points within the contour of the face; and
determining the probability that the pixel point belongs to skin according to the skin model.
7. The method of claim 1, wherein fusing the overall blurred image and the original image according to the probability that the pixel point belongs to face skin, to obtain the face-blurred image, comprises:
performing a weighted summation of the pixel value of each pixel point in the original image and its pixel value in the overall blurred image, to obtain the pixel value of the pixel point in the face-blurred image;
wherein the weight corresponding to the overall blurred image is the probability that the pixel point belongs to face skin, and the weight corresponding to the original image is the probability that the pixel point does not belong to face skin.
8. The method of claim 1, wherein acquiring the original image to be processed comprises:
determining the original image to be processed from video data shot in real time, wherein the original image to be processed is the image to be played in the next frame;
and wherein, after blurring the face region by updating the pixel value of each pixel point to the pixel value at the corresponding offset position, the method further comprises: displaying the blurred image on a playing interface.
9. The method of claim 8, further comprising:
turning a face-blur switch on or off in response to an operation event of a user;
wherein determining the face region in the original image comprises: when the face-blur switch is turned on, determining the face region in the original image; and
the method further comprises: when the face-blur switch is turned off, displaying the original image on the playing interface.
10. An image processing apparatus, comprising:
an acquisition module, configured to acquire an original image to be processed and determine a face region in the original image;
a determining module, configured to determine an offset position corresponding to each pixel point in the original image; and
a processing module, configured to blur the face region by updating the pixel value of each pixel point to the pixel value at the corresponding offset position;
wherein the processing module comprises: an updating unit, configured to update the pixel values of the pixel points in the original image to the pixel values at the corresponding offset positions in the original image, to obtain an overall blurred image; a calculating unit, configured to calculate, according to the face region, the probability that each pixel point in the original image belongs to face skin; and a fusion unit, configured to fuse the overall blurred image and the original image according to the probability that each pixel point belongs to face skin, to obtain a face-blurred image;
wherein the calculating unit comprises:
a determining subunit, configured to determine, according to the face region, position information of a plurality of key points of the face, the plurality of key points forming a face contour;
a first calculating subunit, configured to calculate, for each pixel point in the original image, the probability that the pixel point belongs to skin according to the pixel value of the pixel point and the pixel values of the plurality of key points; a second calculating subunit, configured to calculate, for each pixel point in the original image, the probability that the pixel point belongs to the face according to the position information of the pixel point and the position information of the plurality of key points; and
a third calculating subunit, configured to calculate the probability that each pixel point belongs to face skin according to the probability that the pixel point belongs to skin and the probability that the pixel point belongs to the face;
and wherein the determining module is specifically configured to: acquire position information of each pixel point in the original image; input the position information into a continuous function, and extract a partial value from the output value of the continuous function; and determine the corresponding offset position according to the extracted partial value and a blur radius.
11. An electronic device, comprising: a memory and a processor; wherein
the memory is to store one or more computer instructions, wherein the one or more computer instructions, when executed by the processor, implement the image processing method of any of claims 1 to 9.
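The fusion described in claims 1 and 7 amounts to a per-pixel weighted sum (alpha blend) of the original and the overall blurred image. A minimal numpy sketch, assuming the skin-probability map has already been computed by the key-point and skin-model steps of claim 1:

```python
import numpy as np

def fuse_by_skin_probability(original: np.ndarray,
                             blurred: np.ndarray,
                             skin_prob: np.ndarray) -> np.ndarray:
    """Blend the overall blurred image with the original, per pixel.
    The blurred image is weighted by the probability that each pixel
    belongs to face skin; the original image is weighted by the
    complement of that probability, as in claim 7."""
    # Broadcast the (H, W) probability map over color channels if needed.
    p = skin_prob[..., np.newaxis] if original.ndim == 3 else skin_prob
    out = p * blurred.astype(np.float64) + (1.0 - p) * original.astype(np.float64)
    return np.clip(out, 0, 255).astype(original.dtype)
```

Where the skin probability is 1 the output is fully blurred, where it is 0 the original pixel passes through unchanged, and intermediate probabilities yield a smooth transition, which avoids a hard edge around the blurred face.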
CN201810923248.3A 2018-08-14 2018-08-14 Image processing method and device and electronic equipment Active CN110827204B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810923248.3A CN110827204B (en) 2018-08-14 2018-08-14 Image processing method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810923248.3A CN110827204B (en) 2018-08-14 2018-08-14 Image processing method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN110827204A CN110827204A (en) 2020-02-21
CN110827204B true CN110827204B (en) 2022-10-04

Family

ID=69547262

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810923248.3A Active CN110827204B (en) 2018-08-14 2018-08-14 Image processing method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN110827204B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111539008B (en) * 2020-05-22 2023-04-11 蚂蚁金服(杭州)网络技术有限公司 Image processing method and device for protecting privacy
CN112788359B (en) * 2020-12-30 2023-05-09 北京达佳互联信息技术有限公司 Live broadcast processing method and device, electronic equipment and storage medium
CN112507988B (en) * 2021-02-04 2021-05-25 联仁健康医疗大数据科技股份有限公司 Image processing method and device, storage medium and electronic equipment
CN113573086A (en) * 2021-07-22 2021-10-29 哈尔滨徙木科技有限公司 Live social platform
CN113965695A (en) * 2021-09-07 2022-01-21 福建库克智能科技有限公司 Method, system, device, display unit and medium for image display

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106296576A (en) * 2016-08-05 2017-01-04 厦门美图之家科技有限公司 Image processing method and image processing apparatus
CN107895352A (en) * 2017-10-30 2018-04-10 维沃移动通信有限公司 A kind of image processing method and mobile terminal

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8031961B2 (en) * 2007-05-29 2011-10-04 Hewlett-Packard Development Company, L.P. Face and skin sensitive image enhancement
CN106611429B (en) * 2015-10-26 2019-02-05 腾讯科技(深圳)有限公司 Detect the method for skin area and the device of detection skin area
CN107305686A (en) * 2016-04-20 2017-10-31 掌赢信息科技(上海)有限公司 A kind of image processing method and electronic equipment
CN106530241B (en) * 2016-10-31 2020-08-11 努比亚技术有限公司 Image blurring processing method and device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106296576A (en) * 2016-08-05 2017-01-04 厦门美图之家科技有限公司 Image processing method and image processing apparatus
CN107895352A (en) * 2017-10-30 2018-04-10 维沃移动通信有限公司 A kind of image processing method and mobile terminal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Image blurring (pixel offset) [图像的模糊化(像素偏移)]; qq_2773878606; 《https://blog.csdn.net/qq_18343569/article/details/47112851》; 20150728; page 1 *

Also Published As

Publication number Publication date
CN110827204A (en) 2020-02-21

Similar Documents

Publication Publication Date Title
CN110827204B (en) Image processing method and device and electronic equipment
CN105122302B (en) Generation without ghost image high dynamic range images
KR101725884B1 (en) Automatic processing of images
US20180350043A1 (en) Shallow Depth Of Field Rendering
CN110366001B (en) Method and device for determining video definition, storage medium and electronic device
EP3238213B1 (en) Method and apparatus for generating an extrapolated image based on object detection
CN107948733B (en) Video image processing method and device and electronic equipment
WO2022033485A1 (en) Video processing method and electronic device
US20140341425A1 (en) Providing visual effects for images
CN112351195B (en) Image processing method, device and electronic system
US20180173957A1 (en) Methods, systems, and media for detecting two-dimensional videos placed on a sphere in abusive spherical video content by tiling the sphere
CN107564085B (en) Image warping processing method and device, computing equipment and computer storage medium
CN114862735A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN114845158A (en) Video cover generation method, video publishing method and related equipment
CN113573044B (en) Video data processing method and device, computer equipment and readable storage medium
CN112257729A (en) Image recognition method, device, equipment and storage medium
WO2023103813A1 (en) Image processing method and apparatus, device, storage medium, and program product
CN112435173A (en) Image processing and live broadcasting method, device, equipment and storage medium
CN113610723B (en) Image processing method and related device
CN113592753B (en) Method and device for processing image shot by industrial camera and computer equipment
CN111866573B (en) Video playing method and device, electronic equipment and storage medium
CN113240760A (en) Image processing method and device, computer equipment and storage medium
US10237614B2 (en) Content viewing verification system
CN113628122A (en) Image processing method, model training method, device and equipment
US20190197747A1 (en) Automatic obfuscation engine for computer-generated digital images

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant