CN109727193B - Image blurring method and device and electronic equipment - Google Patents

Image blurring method and device and electronic equipment

Info

Publication number
CN109727193B
CN109727193B (application CN201910026279.3A)
Authority
CN
China
Prior art keywords
image
focus
sub
contour
processed
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910026279.3A
Other languages
Chinese (zh)
Other versions
CN109727193A (en)
Inventor
Liao Shengyang (廖声洋)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Kuangshi Technology Co Ltd
Original Assignee
Beijing Kuangshi Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Kuangshi Technology Co Ltd filed Critical Beijing Kuangshi Technology Co Ltd
Priority to CN201910026279.3A
Publication of CN109727193A
Application granted
Publication of CN109727193B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides an image blurring method, an image blurring device and electronic equipment, relating to the technical field of image processing. Contour detection is performed on an image to be processed to obtain at least one sub-contour, a focus of the image is determined, and a focus sub-contour containing the focus is selected from the at least one sub-contour. The image area corresponding to the focus sub-contour in the image to be processed is taken as a focal plane area, and the image to be processed is blurred based on the focal plane area. The method can highlight the object where the focus is located, enhance the display effect of the image and thereby improve the user experience.

Description

Image blurring method and device and electronic equipment
Technical Field
The present invention relates to the field of image processing technologies, and in particular, to an image blurring method, an image blurring device, and an electronic device.
Background
With the development of science and technology and the industrialization of these technologies, electronic devices such as mobile terminals have become increasingly powerful, and most mobile terminals are equipped with high-performance cameras so that users can take photos or record videos. Even so, the images captured by such devices still need post-processing to improve their quality.
In practice, a user who blurs an image usually intends to highlight the object where the focus is located and to blur the rest of the image. Current image blurring methods determine the area containing the focus as an area within a predetermined range, and this range is usually a shape with a regular contour. However, the object where the focus is located may be a person, a vehicle, an animal, a plant or another object whose outline is irregular, so such methods cannot properly highlight the object, and the user experience is poor.
Disclosure of Invention
In view of the above, the present invention aims to provide an image blurring method, an image blurring device and an electronic device, which alleviate the problem that the existing image blurring method cannot highlight the object where the focus is located, and improve the user experience.
In order to achieve the above object, the technical solutions adopted by the embodiments of the invention are as follows:
in a first aspect, an embodiment of the present invention provides an image blurring method, including:
performing contour detection on an image to be processed to obtain at least one sub-contour contained in the image to be processed;
determining a focus of the image to be processed;
selecting a focus sub-contour containing the focus from the at least one sub-contour, and taking an image area corresponding to the focus sub-contour in the image to be processed as a focal plane area;
performing blurring processing on the image to be processed to obtain a blurred image;
and carrying out fusion processing on the blurred image and the focal plane area of the image to be processed to obtain an image after blurring.
With reference to the first aspect, an embodiment of the present invention provides a first possible implementation manner of the first aspect, where the step of determining a focus of the image to be processed includes:
receiving a focus selected by a user; or taking the center point of the image to be processed as a focus.
With reference to the first aspect, the embodiment of the present invention provides a second possible implementation manner of the first aspect, where the step of selecting a focal sub-contour including the focal point from the at least one sub-contour includes:
taking each sub-contour in the at least one sub-contour as a current sub-contour one by one; and if the current sub-contour includes at least two vertical intersection points with the same abscissa as the focus and at least two horizontal intersection points with the same ordinate as the focus, where the at least two vertical intersection points are located on two sides of the focus and the at least two horizontal intersection points are also located on two sides of the focus, taking the current sub-contour as the focus sub-contour containing the focus.
With reference to the first aspect, an embodiment of the present invention provides a third possible implementation manner of the first aspect, where the step of selecting a focal sub-contour including the focal point from the at least one sub-contour includes:
selecting a sub-contour comprising a vertical intersection point of the focus and a horizontal intersection point of the focus from the at least one sub-contour to form a candidate sub-contour set; the vertical intersection point and the focus have the same abscissa; the horizontal intersection point and the focus have the same ordinate;
taking each sub-contour in the candidate sub-contour set as a current sub-contour one by one, judging whether at least two vertical intersection points on the current sub-contour are positioned on two sides of the focus, and judging whether at least two horizontal intersection points on the current sub-contour are positioned on two sides of the focus;
and if so, taking the current sub-contour as a focus sub-contour containing the focus.
With reference to the second or third possible implementation manner of the first aspect, the embodiment of the present invention provides a fourth possible implementation manner of the first aspect, where the method includes:
calculating the difference between the ordinate of the focus and the ordinate of each vertical intersection point;
calculating the product of any two of these differences;
judging whether a negative value exists among the products of the ordinate differences;
if yes, determining that at least two vertical intersection points are located on two sides of the focus;
calculating the difference between the abscissa of the focus and the abscissa of each horizontal intersection point;
calculating the product of any two of these differences;
judging whether a negative value exists among the products of the abscissa differences;
if so, determining that at least two horizontal intersection points are located on two sides of the focus.
With reference to the first aspect, an embodiment of the present invention provides a fifth possible implementation manner of the first aspect, where the step of performing fusion processing on the blurred image and the focal plane area of the image to be processed to obtain an image after blurring includes:
acquiring a focal plane image corresponding to the focal plane area of the image to be processed, and an out-of-focal-plane area image excluding the focal plane area in the blurred image;
and obtaining the image after blurring according to the focal plane image and the out-of-focal-plane area image.
With reference to the fifth possible implementation manner of the first aspect, an embodiment of the present invention provides a sixth possible implementation manner of the first aspect, where the step of acquiring the focal plane image corresponding to the focal plane area of the image to be processed and the out-of-focal-plane area image excluding the focal plane area in the blurred image includes:
constructing a mask image according to the focus sub-contour; the mask image is a binary image with the same size as the image to be processed, the region inside the focus sub-contour in the mask image has a first pixel value, and the region outside the focus sub-contour in the mask image has a second pixel value;
acquiring the focal plane image in the image to be processed through the mask image;
and acquiring the out-of-focal-plane area image in the blurred image through the mask image.
With reference to the sixth possible implementation manner of the first aspect, an embodiment of the present invention provides a seventh possible implementation manner of the first aspect, where the step of obtaining the image after blurring according to the focal plane image and the out-of-focal-plane area image includes:
determining the fusion coefficient corresponding to each pixel point in the focal plane image according to the mask image;
and determining the pixel value of each pixel point in the image after blurring according to the product of the pixel value of each pixel point in the focal plane image and the corresponding fusion coefficient, and the pixel value of each pixel point in the out-of-focal-plane area image.
With reference to the seventh possible implementation manner of the first aspect, an embodiment of the present invention provides an eighth possible implementation manner of the first aspect, where the step of determining, according to the mask image, the fusion coefficient corresponding to each pixel point in the focal plane image includes:
calculating the centroid of the mask image according to the pixel value of each pixel point in the mask image;
and calculating the fusion coefficient corresponding to each pixel point in the focal plane image according to the centroid of the mask image and the pixel values of the pixel points in the region inside the focus sub-contour of the mask image.
In a second aspect, an embodiment of the present invention further provides an image blurring apparatus, including:
the contour detection module is used for carrying out contour detection on the image to be processed to obtain at least one sub-contour contained in the image to be processed;
a focus acquisition module, configured to determine a focus of the image to be processed;
the focus area determining module is used for selecting a focus sub-contour containing the focus from the at least one sub-contour, and taking the image area corresponding to the focus sub-contour in the image to be processed as a focal plane area;
the blur processing module is used for performing blurring processing on the image to be processed to obtain a blurred image;
and the blurring processing module is used for performing fusion processing on the blurred image and the focal plane area of the image to be processed to obtain an image after blurring.
In a third aspect, an embodiment of the present invention further provides an electronic device, including a processor and a storage device; the storage means has stored thereon a computer program which, when executed by the processor, performs the method according to any of the first aspects.
In a fourth aspect, embodiments of the present invention provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method of any of the first aspects described above.
According to the image blurring method, the image blurring device and the electronic equipment provided by the invention, at least one sub-contour contained in an image to be processed is obtained through contour detection of the image to be processed, a focus of the image to be processed is determined, and a focus sub-contour containing the focus is selected from the at least one sub-contour, the focus sub-contour being the outer contour of the object where the focus is located. The image area corresponding to the focus sub-contour in the image to be processed is taken as a focal plane area, and the image to be processed is blurred based on the focal plane area. The method can highlight the object where the focus is located, enhance the display effect of the image and thereby improve the user experience.
Additional features and advantages of the invention will be set forth in the description which follows, or in part will be obvious from the description, or may be learned by practice of the invention.
In order to make the above objects, features and advantages of the present invention more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly described below. It is obvious that the drawings described below show some embodiments of the present invention, and that a person skilled in the art can obtain other drawings from them without inventive effort.
Fig. 1 shows a schematic structural diagram of an electronic device according to an embodiment of the present invention;
FIG. 2 is a flow chart of an image blurring method according to an embodiment of the present invention;
FIG. 3 shows a schematic diagram of selecting a focus sub-contour from sub-contours according to an embodiment of the present invention;
FIG. 4 is a flowchart of another image blurring method according to an embodiment of the present invention;
FIG. 5 shows a schematic diagram of a mask image;
fig. 6 shows a schematic structural diagram of an image blurring apparatus according to an embodiment of the present invention.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the embodiments of the present invention more apparent, the technical solutions of the present invention will be clearly and completely described below with reference to the accompanying drawings, and it is apparent that the described embodiments are some embodiments of the present invention, but not all embodiments. All other embodiments, which can be made by those skilled in the art based on the embodiments of the invention without making any inventive effort, are intended to be within the scope of the invention.
In view of the prior-art problem that an image can only be blurred based on a region containing the focus within a predetermined range, the image blurring method, device and electronic equipment provided by the embodiments of the invention alleviate the inability of existing image blurring methods to highlight the object where the focus is located, and improve the user experience. Embodiments of the present invention are described in detail below.
Embodiment one:
first, an example electronic device 100 for implementing the image blurring method and apparatus of an embodiment of the present invention is described with reference to fig. 1.
As shown in fig. 1, electronic device 100 includes one or more processors 102, one or more storage devices 104, an input device 106, an output device 108, and an image capture device 110, which are interconnected by a bus system 112 and/or other forms of connection mechanisms (not shown). It should be noted that the components and structures of the electronic device 100 shown in fig. 1 are exemplary only and not limiting, as the electronic device may have other components and structures as desired.
The processor 102 may be a central processing unit (CPU), a graphics processing unit (GPU) or another form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 100 to perform desired functions.
The storage 104 may include one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), hard disks, flash memory, and the like. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 102 to implement the client functions and/or other desired functions in the embodiments of the present invention described below. Various applications and various data, such as data used and/or generated by the applications, may also be stored in the computer-readable storage medium.
The input device 106 may be a device used by a user to input instructions and may include one or more of a keyboard, mouse, microphone, touch screen, and the like.
The output device 108 may output various information (e.g., images or sounds) to the outside (e.g., a user), and may include one or more of a display, a speaker, and the like.
The image capture device 110 may capture images (e.g., pictures, videos, etc.) desired by the user and store the captured images in the storage device 104 for use by other components. The image capture device 110 may also capture a video stream of the pre-shot scene for the user to preview before taking the image. In an alternative embodiment, the image capture device 110 may comprise a color camera, which can acquire color images of objects within its field of view.
Exemplary electronic devices for implementing the image blurring method and apparatus according to embodiments of the present invention may be implemented on a computer or a server, or may be implemented on a mobile terminal such as an electronic camera, a smart phone, a tablet computer, or the like.
Embodiment two:
The present embodiment provides an image blurring method, and fig. 2 shows a flowchart of this image blurring method. It should be noted that the steps illustrated in the flowchart of fig. 2 may be performed in a computer system, such as a set of computer-executable instructions, and that, although a logical order is shown in the flowchart, in some cases the steps may be performed in an order different from that described here. The present embodiment is described in detail below.
As shown in fig. 2, the method comprises the steps of:
step S202, performing contour detection on an image to be processed to obtain at least one sub-contour contained in the image to be processed.
The image to be processed may be an image acquired by the image acquisition device in real time, for example, a picture taken by the image acquisition device in real time, or an image of a pre-shot scene captured by the image acquisition device before taking the picture. The image to be processed may also be a pre-stored image, for example, an image stored in advance in a storage device of the electronic device, or an image downloaded by the electronic device from another device via a network or other means. In addition, the image to be processed may be an image in a picture format or an image frame in a video; the embodiments of the present invention are not limited in this respect.
A sub-contour may be the outer contour of an object contained in the image to be processed, and may be expressed in the form of an outer peripheral contour line or a frame. The object may be a person, a vehicle, an animal, a plant or any other object with an outer boundary. Contour detection detects the outer contours of all objects contained in the image to be processed, that is, all sub-contours contained in the image. For example, in an image containing flowers and leaves, the detected sub-contours may include the contours of a plurality of flowers and the contours of a plurality of leaves.
Alternatively, an edge detection algorithm may be used to detect the contour of the image to be processed, where the edge detection algorithm determines the sub-contour included in the image to be processed by detecting points in the image to be processed where the brightness change is significant.
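For illustration only, a minimal Python sketch of this step is given below (not part of the original disclosure; the use of OpenCV, the Canny detector and the thresholds are assumptions):

```python
import cv2

def detect_sub_contours(image_path, low_thresh=50, high_thresh=150):
    """Contour detection: return the sub-contours contained in an image."""
    img = cv2.imread(image_path)
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # Edge detection finds the points where brightness changes significantly.
    edges = cv2.Canny(gray, low_thresh, high_thresh)
    # Each retrieved external contour corresponds to one sub-contour.
    contours, _ = cv2.findContours(edges, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    return img, contours
```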
Step S204, determining a focus of the image to be processed.
In an alternative embodiment, the image to be processed may be presented to the user, who may view the image to be processed through a man-machine interaction interface, and may select the focus on the image to be processed by clicking or the like. The electronic device may receive a user-selected focus.
In another alternative embodiment, the center point of the image to be processed may be taken as the focal point. For example, if the electronic device does not receive the focus selected by the user within the preset time, or does not provide the user with a function of selecting the focus, the center point of the image to be processed may be regarded as the focus.
Step S206, selecting a focus sub-contour containing a focus from at least one sub-contour, and taking an image area corresponding to the focus sub-contour in the image to be processed as a focus plane area.
Each of the at least one sub-contour is taken as the current sub-contour one by one. If the current sub-contour includes at least two vertical intersection points having the same abscissa as the focus and at least two horizontal intersection points having the same ordinate as the focus, and the at least two vertical intersection points are located on two sides of the focus while the at least two horizontal intersection points are also located on two sides of the focus, the current sub-contour may be regarded as the focus sub-contour containing the focus.
As shown in fig. 3, the image to be processed contains a sub-contour A, a sub-contour B and a sub-contour C. Sub-contour A has two vertical intersections with respect to the focus P, namely intersection A3 and intersection A4, and two horizontal intersections, namely intersection A1 and intersection A2. However, intersection A1 and intersection A2 are located on the same side of the focus P, so sub-contour A is not a focus sub-contour. Sub-contour B has two horizontal intersections with respect to the focus P, namely intersection B1 and intersection B2, but no vertical intersection, so sub-contour B is not a focus sub-contour either. Sub-contour C has two vertical intersections, namely intersection C2 and intersection C4, and two horizontal intersections, namely intersection C1 and intersection C3. Intersection C2 and intersection C4 are located on two sides of the focus P, and intersection C1 and intersection C3 are also located on two sides of the focus P, so sub-contour C is the focus sub-contour.
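As a loose sketch of this selection step (an illustration, not the patent's exact procedure; exact pixel equality is used for simplicity, whereas a practical implementation would tolerate small offsets):

```python
def is_focus_sub_contour(contour, focus):
    """Side test for one sub-contour against the focus point.

    `contour` is an (N, 1, 2) point array as returned by cv2.findContours,
    `focus` is an (fx, fy) pixel coordinate.
    """
    pts = contour.reshape(-1, 2)
    fx, fy = focus
    # Vertical intersection points share the focus's abscissa.
    vert_ys = [int(y) for x, y in pts if x == fx]
    # Horizontal intersection points share the focus's ordinate.
    horiz_xs = [int(x) for x, y in pts if y == fy]
    # The intersections must lie on both sides of the focus in each direction.
    vert_ok = any(y < fy for y in vert_ys) and any(y > fy for y in vert_ys)
    horiz_ok = any(x < fx for x in horiz_xs) and any(x > fx for x in horiz_xs)
    return vert_ok and horiz_ok
```

Applied to the sub-contours of fig. 3, this test would reject sub-contours A and B and accept sub-contour C.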
Step S208, blurring processing is carried out on the image to be processed, and a blurred image is obtained.
Illustratively, any one of an exponential blur algorithm, a box blur algorithm (also called a block blur algorithm), a stack blur algorithm, a Gaussian blur algorithm, or the like may be used to blur the image to be processed to obtain a blurred image.
For example, in an alternative embodiment, the image to be processed may be blurred by Gaussian blur to obtain the blurred image. Gaussian blur can be understood as taking the pixel value of each pixel (the center pixel) to be a weighted average of the pixel values of its surrounding pixels. The larger the range over which the weighted average is computed, the more blurred the image. The size of this range is determined by a preset Gaussian kernel radius: the larger the radius, the more blurred the image. The Gaussian kernel radius may be set to 1, 3 or 5 as needed. When computing the pixel value of a center pixel, the surrounding pixels at different positions carry different weights relative to the center pixel; the distribution of these weights follows a normal distribution and can be represented by a weight matrix. For example, when the Gaussian kernel radius is 1, the weight matrix may be as follows.
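One commonly used set of weights (an illustrative assumption, computed from the two-dimensional normal distribution with σ ≈ 1.5 and normalized so that the nine weights sum to 1; not necessarily the exact values intended by the source) is:

    0.0947  0.1183  0.0947
    0.1183  0.1478  0.1183
    0.0947  0.1183  0.0947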
The weight matrix can also be called a Gaussian convolution kernel. Gaussian-blurring the image to be processed can be understood as convolving this Gaussian convolution kernel with the image; the blurred image is obtained after the convolution.
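A minimal sketch of this blurring step (assuming OpenCV; the kernel radius and σ below are illustrative choices):

```python
import cv2

def blur_image(img, radius=1, sigma=1.5):
    """Gaussian-blur the image to be processed; kernel size = 2 * radius + 1."""
    ksize = 2 * radius + 1
    # Convolving the Gaussian convolution kernel with the image
    # yields the blurred image.
    return cv2.GaussianBlur(img, (ksize, ksize), sigma)
```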
Step S208 need not be performed after step S206. For example, step S208 may be performed before step S206, step S204, or even step S202; the embodiments of the present invention are not limited in this respect.
Step S210, fusion processing is performed on the blurred image and the focal plane area of the image to be processed to obtain the image after blurring.
This step can be understood as replacing the area in the blurred image that corresponds to the focal plane area with the focal plane area of the image to be processed, thereby synthesizing the image after blurring.
In summary, according to the image blurring method provided by the embodiment of the invention, contour detection is performed on the image to be processed to obtain at least one sub-contour contained in it, a focus of the image to be processed is determined, and a focus sub-contour containing the focus is selected from the at least one sub-contour, the focus sub-contour being the outer contour of the object where the focus is located. The image area corresponding to the focus sub-contour in the image to be processed is taken as the focal plane area; the image to be processed is then blurred to obtain a blurred image, and the blurred image is fused with the focal plane area of the image to be processed to obtain the image after blurring. This alleviates the problem that existing image blurring approaches cannot highlight the object where the focus is located, enhances the display effect of the image, and thereby improves the user experience.
Embodiment III:
On the basis of the second embodiment, this embodiment describes the image blurring method provided by the embodiment of the present invention in combination with a specific application scenario; fig. 4 shows a flowchart of the image blurring method provided by this embodiment. For example, a user may enable a real-time image blurring function while photographing with an electronic device, and the electronic device may process the image frames in the preview video stream captured by the image capture apparatus using the image blurring method described below. The method includes the following steps:
step S402, inputting the image to be processed into a pre-trained contour detection model to obtain a contour set output by the contour detection model, wherein the contour set comprises at least one sub-contour.
The contour detection model is used for detecting all sub-contours contained in the image to be processed, and may employ a convolutional neural network (e.g., a CNN such as VGG). The training process of the contour detection model may include: acquiring a plurality of calibration images in which the contours are accurately annotated, and dividing these calibration images into a training set, a validation set and a test set; training the contour detection model on the training set, adjusting the network parameters of the model, and validating the model during training with the validation set. Training is complete when both the training accuracy and the validation accuracy of the model reach the set thresholds. The test set is then used to test the model and measure its performance.
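The patent does not specify a framework or loss function; the following PyTorch skeleton is a loose sketch of the described train/validate/test split and threshold-based stopping, with all names and hyperparameters being assumptions:

```python
import torch
from torch import nn
from torch.utils.data import DataLoader, random_split

def train_contour_model(model, dataset, epochs=50, acc_threshold=0.95, lr=1e-3):
    """Train a contour detection model on (image, contour-mask) pairs."""
    n = len(dataset)
    n_train, n_val = int(0.7 * n), int(0.2 * n)
    train_set, val_set, test_set = random_split(
        dataset, [n_train, n_val, n - n_train - n_val])
    train_loader = DataLoader(train_set, batch_size=8, shuffle=True)
    val_loader = DataLoader(val_set, batch_size=8)
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.BCEWithLogitsLoss()  # per-pixel contour / non-contour

    def pixel_accuracy(loader):
        model.eval()
        correct = total = 0
        with torch.no_grad():
            for images, masks in loader:
                preds = (model(images) > 0).float()
                correct += (preds == masks).sum().item()
                total += masks.numel()
        return correct / max(total, 1)

    for _ in range(epochs):
        model.train()
        for images, masks in train_loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), masks)
            loss.backward()
            optimizer.step()
        # Training is complete when both accuracies reach the set threshold.
        if (pixel_accuracy(train_loader) >= acc_threshold
                and pixel_accuracy(val_loader) >= acc_threshold):
            break
    # Measure performance on the held-out test set.
    return model, pixel_accuracy(DataLoader(test_set, batch_size=8))
```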
Step S404, determining a focus of the image to be processed.
Step S406, selecting a sub-contour comprising a vertical intersection point of the focus and a horizontal intersection point of the focus from at least one sub-contour to form a candidate sub-contour set.
Wherein the vertical intersection point and the focus have the same abscissa; the horizontal intersection point has the same ordinate as the focal point.
For each sub-contour in the image to be processed, every point on the contour line is checked one by one to determine whether it has the same abscissa as the focus (a vertical intersection point) or the same ordinate as the focus (a horizontal intersection point). If a sub-contour includes both a vertical intersection point of the focus and a horizontal intersection point of the focus, the sub-contour is added to the candidate sub-contour set.
Step S408, each sub-contour in the candidate sub-contour set is used as a current sub-contour one by one, whether at least two vertical intersection points on the current sub-contour are positioned at two sides of the focus or not is judged, and whether at least two horizontal intersection points on the current sub-contour are positioned at two sides of the focus or not is judged; if yes, step S410 is performed, and if no, step S408 is performed with the next sub-contour as the current sub-contour.
In step S410, the current sub-contour is taken as a focus sub-contour including a focus, and an image area corresponding to the focus sub-contour in the image to be processed is taken as a focus plane area.
Illustratively, if the current sub-contour includes only one vertical intersection point or only one horizontal intersection point, the current sub-contour is not the focus sub-contour, and the next sub-contour is checked against the above conditions.
If the current sub-contour includes two vertical intersection points and two horizontal intersection points, it may first be determined whether the two vertical intersection points are located on two sides of the focus, using the following method: take the difference between the ordinate of the focus and the ordinate of the first vertical intersection point as a first difference, take the difference between the ordinate of the focus and the ordinate of the second vertical intersection point as a second difference, and judge whether the product of the first difference and the second difference is negative; if so, the two vertical intersection points are located on two sides of the focus. It is then determined whether the two horizontal intersection points are located on two sides of the focus: take the difference between the abscissa of the focus and the abscissa of the first horizontal intersection point as a third difference, take the difference between the abscissa of the focus and the abscissa of the second horizontal intersection point as a fourth difference, and judge whether the product of the third difference and the fourth difference is negative; if so, the two horizontal intersection points are located on two sides of the focus. If both conditions are met, that is, the two vertical intersection points are on two sides of the focus and the two horizontal intersection points are on two sides of the focus, the current sub-contour may be taken as the focus sub-contour. If either condition is not met, the current sub-contour is not the focus sub-contour, and the next sub-contour is checked against the above conditions.
In some embodiments, it may first be determined whether the two vertical intersection points on the current sub-contour are located on two sides of the focus, and then whether the two horizontal intersection points are. In other embodiments, the order may be reversed, or the two conditions may be evaluated simultaneously.
If the current sub-contour includes more than two vertical intersection points, the condition that at least two vertical intersection points are located on two sides of the focus is satisfied as long as one vertical intersection point is not on the same side of the focus as the others. The difference between the ordinate of the focus and the ordinate of each vertical intersection point can be computed, followed by the product of every pair of differences. If all the products are positive, all vertical intersection points on the current sub-contour lie on the same side of the focus; as soon as one product is negative, at least two vertical intersection points lie on two sides of the focus. Similarly, if the current sub-contour includes more than two horizontal intersection points, the difference between the abscissa of the focus and the abscissa of each horizontal intersection point can be computed, followed by the product of every pair of differences; a negative product means that at least two horizontal intersection points lie on two sides of the focus.
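A compact sketch of this product-of-differences sign test (illustrative; the helper below is hypothetical and works for any number of intersection points):

```python
from itertools import combinations

def straddles_focus(focus_coord, intersection_coords):
    """True if at least two intersection points lie on opposite sides of the focus.

    Pass ordinates (y values) for vertical intersection points, or abscissas
    (x values) for horizontal intersection points, together with the matching
    focus coordinate.
    """
    diffs = [focus_coord - c for c in intersection_coords]
    # A negative product means that pair of intersections straddles the focus.
    return any(d1 * d2 < 0 for d1, d2 in combinations(diffs, 2))
```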
In step S412, the image to be processed is blurred to obtain a blurred image.
In step S414, a focal plane image corresponding to the focal plane region of the image to be processed and a focal plane outer region image excluding the focal plane region in the blurred image are acquired.
After the focus sub-contour is determined, a mask image may be constructed from it. The mask image is a binary image with the same size as the image to be processed: the region inside the focus sub-contour has a first pixel value, and the region outside it has a second pixel value. Illustratively, the first pixel value may be 255 and the second pixel value may be 0. Fig. 5 shows a schematic diagram of such a mask image, in which the white area is the region inside the focus sub-contour and the black area is the region outside it.
The focal plane image in the image to be processed can be acquired through the mask image. Because the mask image is binary and the region outside the focus sub-contour has the second pixel value 0, multiplying the pixel value of each pixel point in the mask image by the pixel value of the corresponding pixel point in the image to be processed yields the focal plane image while removing the other regions of the image to be processed. This can be understood as multiplying the pixel values of the white area in fig. 5 by the corresponding pixel values of the image to be processed to obtain the focal plane image. Denoting the image to be processed by S and the mask image by Mask, the focal plane image can be represented as S × Mask.
The out-of-focal-plane area image in the blurred image can also be acquired through the mask image. Each pixel value of the mask image is subtracted from a preset value, which may be the first pixel value, e.g., 255. This step can be understood as inverting the mask image: after inverting the mask image shown in fig. 5, the region inside the focus sub-contour becomes black and the region outside it becomes white. Multiplying the pixel value of each pixel point in the inverted mask image by the pixel value of the corresponding pixel point in the blurred image yields the out-of-focal-plane area image while removing the other regions of the blurred image. Denoting the blurred image by A, the out-of-focal-plane area image can be represented as (255 − Mask) × A.
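A sketch of the mask construction and the two multiplications S × Mask and (255 − Mask) × A (assuming OpenCV/NumPy; dividing by 255 so that pixel values stay in range is an implementation assumption):

```python
import cv2
import numpy as np

def split_by_mask(img_s, blurred_a, focus_contour):
    """Build the binary mask, then extract the focal plane and
    out-of-focal-plane area images."""
    mask = np.zeros(img_s.shape[:2], dtype=np.uint8)
    # The region inside the focus sub-contour gets the first pixel value (255).
    cv2.drawContours(mask, [focus_contour], -1, color=255, thickness=cv2.FILLED)
    m = cv2.merge([mask] * 3).astype(np.float32) / 255.0
    focal_plane = img_s.astype(np.float32) * m              # S x Mask
    out_of_focus = blurred_a.astype(np.float32) * (1 - m)   # (255 - Mask) x A
    return mask, focal_plane, out_of_focus
```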
Step S416, obtaining the image after blurring according to the focal plane image and the out-of-focal-plane area image.
Alternatively, the fusion coefficient corresponding to each pixel point in the focal plane image may be determined according to the mask image, and the pixel value of each pixel point in the image after blurring may then be determined according to the product of the pixel value of each pixel point in the focal plane image and the corresponding fusion coefficient, together with the pixel value of each pixel point in the out-of-focal-plane area image.
The fusion coefficients corresponding to each pixel point in the focal plane image can be determined by the following method:
step 1, calculating the centroid of the mask image according to the pixel values of all pixel points in the mask image, and calculating the value x of the centroid abscissa of the mask image according to the following formula (1) 0 The value y of the ordinate of the centroid of the mask image can be calculated by the following expression (2) 0
Wherein f (x, y) is the pixel value of each pixel point in the mask image. Value x of centroid abscissa of mask image 0 The value of the abscissa of each pixel point of the mask image is multiplied by the ratio of the sum of f (x, y) to the sum of f (x, y). Value y of centroid ordinate of mask image 0 The value of the ordinate of each pixel point of the mask image is multiplied by the ratio of the sum of f (x, y) to the sum of f (x, y).
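A minimal NumPy sketch of formulas (1) and (2) (the same centroid could also be obtained from cv2.moments):

```python
import numpy as np

def mask_centroid(mask):
    """Centroid (x0, y0) of the mask image, weighted by pixel values f(x, y)."""
    f = mask.astype(np.float64)
    ys, xs = np.indices(f.shape)  # row (ordinate) and column (abscissa) grids
    total = f.sum()
    x0 = (xs * f).sum() / total  # formula (1)
    y0 = (ys * f).sum() / total  # formula (2)
    return x0, y0
```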
Step 2: calculate the fusion coefficient corresponding to each pixel point in the focal plane image according to the centroid of the mask image and the pixel values of the pixel points in the region inside the focus sub-contour of the mask image.
The fusion coefficient corresponding to each pixel point in the focal plane image is calculated by formula (3), where K is the fusion coefficient, (x, y) is the coordinate of each pixel point in the region inside the focus sub-contour of the mask image, x0 is the abscissa of the centroid of the mask image, and y0 is the ordinate of the centroid.
According to the product of the pixel value of each pixel point in the focal plane image and the corresponding fusion coefficient, plus the pixel value of each pixel point in the out-of-focal-plane area image, the pixel value of each pixel point in the image after blurring can be determined. From the above description, the output image can be expressed as: Output = S × Mask × K + (255 − Mask) × A.
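Putting the pieces together, a sketch of Output = S × Mask × K + (255 − Mask) × A. Formula (3) for K is not reproduced in this text, so the distance-based falloff below is a purely hypothetical placeholder, not the patent's formula:

```python
import numpy as np

def fuse(img_s, blurred_a, mask):
    """Fuse the focal plane and out-of-focal-plane images into the image after blurring."""
    m = (mask.astype(np.float32) / 255.0)[..., None]  # Mask, broadcast per channel
    x0, y0 = mask_centroid(mask)                      # from the sketch above
    ys, xs = np.indices(mask.shape)
    dist = np.sqrt((xs - x0) ** 2 + (ys - y0) ** 2)
    # Placeholder fusion coefficient K: 1 at the centroid, decaying with
    # distance from it. The actual formula (3) in the patent may differ.
    k = (1.0 / (1.0 + dist / (dist.max() + 1e-6)))[..., None]
    out = img_s.astype(np.float32) * m * k + blurred_a.astype(np.float32) * (1 - m)
    return np.clip(out, 0, 255).astype(np.uint8)
```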
Embodiment four:
corresponding to the foregoing method embodiment, the present embodiment provides an image blurring apparatus, referring to a schematic structural diagram of an image blurring apparatus shown in fig. 6, which includes:
the contour detection module 61 is configured to perform contour detection on an image to be processed, so as to obtain at least one sub-contour included in the image to be processed;
a focus acquisition module 62, configured to determine a focus of the image to be processed;
a focus area determining module 63, configured to select a focus sub-contour including the focus from the at least one sub-contour, and use an image area corresponding to the focus sub-contour in the image to be processed as a focal plane area;
the blur processing module 64 is configured to perform blurring processing on the image to be processed to obtain a blurred image;
and the blurring processing module 65 is configured to perform fusion processing on the blurred image and the focal plane area of the image to be processed to obtain an image after blurring.
The focus acquisition module 62 may also be configured to: receiving a focus selected by a user; or taking the center point of the image to be processed as a focus.
In an alternative embodiment, the focal region determination module 63 may also be configured to: take each sub-contour in the at least one sub-contour as the current sub-contour one by one; and if the current sub-contour includes at least two vertical intersection points with the same abscissa as the focus and at least two horizontal intersection points with the same ordinate as the focus, where the at least two vertical intersection points are located on two sides of the focus and the at least two horizontal intersection points are also located on two sides of the focus, take the current sub-contour as the focus sub-contour containing the focus.
In another alternative embodiment, the focal region determination module 63 may also be configured to: selecting a sub-contour comprising a vertical intersection point of the focus and a horizontal intersection point of the focus from the at least one sub-contour to form a candidate sub-contour set; the vertical intersection point and the focus have the same abscissa; the horizontal intersection point and the focus have the same ordinate; taking each sub-contour in the candidate sub-contour set as a current sub-contour one by one, judging whether at least two vertical intersection points on the current sub-contour are positioned on two sides of the focus, and judging whether at least two horizontal intersection points on the current sub-contour are positioned on two sides of the focus; and if so, taking the current sub-contour as a focus sub-contour containing the focus.
If the current sub-contour includes two vertical intersection points and two horizontal intersection points, the focus area determination module 63 may also be configured to: take the difference between the ordinate of the focus and the ordinate of the first vertical intersection point as a first difference; take the difference between the ordinate of the focus and the ordinate of the second vertical intersection point as a second difference; judge whether the product of the first difference and the second difference is negative; take the difference between the abscissa of the focus and the abscissa of the first horizontal intersection point as a third difference; take the difference between the abscissa of the focus and the abscissa of the second horizontal intersection point as a fourth difference; and judge whether the product of the third difference and the fourth difference is negative.
The blurring processing module 65 may also be configured to: acquire a focal plane image corresponding to the focal plane area of the image to be processed, and an out-of-focal-plane area image excluding the focal plane area in the blurred image; and obtain the image after blurring according to the focal plane image and the out-of-focal-plane area image.
The blurring processing module 65 may also be configured to: construct a mask image according to the focus sub-contour, the mask image being a binary image with the same size as the image to be processed, in which the region inside the focus sub-contour has a first pixel value and the region outside the focus sub-contour has a second pixel value; acquire the focal plane image in the image to be processed through the mask image; and acquire the out-of-focal-plane area image in the blurred image through the mask image.
The blurring processing module 65 may also be configured to: determine the fusion coefficient corresponding to each pixel point in the focal plane image according to the mask image; and determine the pixel value of each pixel point in the image after blurring according to the product of the pixel value of each pixel point in the focal plane image and the corresponding fusion coefficient, and the pixel value of each pixel point in the out-of-focal-plane area image. The blurring processing module 65 may also be configured to: calculate the centroid of the mask image according to the pixel value of each pixel point in the mask image; and calculate the fusion coefficient corresponding to each pixel point in the focal plane image according to the centroid of the mask image and the pixel values of the pixel points in the region inside the focus sub-contour of the mask image.
According to the image blurring device provided by the embodiment of the invention, contour detection is performed on the image to be processed to obtain at least one sub-contour contained in it, the focus of the image to be processed is determined, a focus sub-contour containing the focus is selected from the at least one sub-contour, the image area corresponding to the focus sub-contour in the image to be processed is taken as the focal plane area, the image to be processed is blurred to obtain a blurred image, and the blurred image is fused with the focal plane area of the image to be processed to obtain the image after blurring. This alleviates the problem that existing image blurring approaches cannot highlight the object where the focus is located, and improves the user experience.
The image blurring apparatus provided in this embodiment has the same implementation principle and technical effects as those of the foregoing embodiment, and for brevity, reference may be made to the corresponding contents of the foregoing method embodiment for the description of the apparatus embodiment.
The embodiment of the invention also provides an electronic device, which includes an image acquisition apparatus, a processor and a storage device. The image acquisition apparatus is used for capturing images. The storage device stores a computer program which, when executed by the processor, performs the steps of the following image blurring method:
performing contour detection on an image to be processed to obtain at least one sub-contour contained in the image to be processed;
determining a focus of the image to be processed;
selecting a focus sub-contour containing the focus from the at least one sub-contour, and taking an image area corresponding to the focus sub-contour in the image to be processed as a focal plane area;
performing blurring processing on the image to be processed to obtain a blurred image;
and carrying out fusion processing on the blurred image and the focal plane area of the image to be processed to obtain an image after blurring.
Further, the present embodiment also provides a computer readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the method provided by the foregoing method embodiments.
The embodiments of the present invention provide a computer program product for the image blurring method, apparatus and electronic device described above, including a computer-readable storage medium storing program code. The program code includes instructions for executing the methods described in the foregoing method embodiments; for specific implementation, refer to the method embodiments, and details are not repeated here.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, including several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform all or part of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above examples are only specific embodiments of the present invention, intended to illustrate rather than limit its technical solutions, and the protection scope of the present invention is not limited to them. Although the present invention has been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that the technical solutions described in the foregoing embodiments may still be modified, or some of their technical features replaced by equivalents, within the technical scope of the present disclosure; such modifications, changes or substitutions do not depart from the spirit and scope of the technical solutions of the embodiments of the present invention, and are intended to be covered by the scope of the present invention.

Claims (10)

1. A method of image blurring, comprising:
performing contour detection on an image to be processed to obtain at least one sub-contour contained in the image to be processed;
determining a focus of the image to be processed;
selecting a focus sub-contour containing the focus from the at least one sub-contour, and taking an image area corresponding to the focus sub-contour in the image to be processed as a focal plane area;
performing blurring processing on the image to be processed to obtain a blurred image;
carrying out fusion processing on the blurred image and the focal plane area of the image to be processed to obtain an image after blurring;
the step of carrying out fusion processing on the blurred image and the focal plane area of the image to be processed to obtain the image after blurring comprises: acquiring a focal plane image corresponding to the focal plane area of the image to be processed, and an out-of-focal-plane area image excluding the focal plane area in the blurred image; and obtaining the image after blurring according to the focal plane image and the out-of-focal-plane area image;
obtaining the image after blurring according to the focal plane image and the out-of-focal-plane area image comprises: determining the pixel value of each pixel point in the image after blurring according to the product of the pixel value of each pixel point in the focal plane image and the fusion coefficient corresponding to each pixel point, and the pixel value of each pixel point in the out-of-focal-plane area image; the fusion coefficient is determined according to a mask image corresponding to the image to be processed, and the mask image is a binary image with the same size as the image to be processed.
2. The method according to claim 1, wherein the step of determining the focus of the image to be processed comprises:
receiving a focus selected by a user; or taking the center point of the image to be processed as a focus.
3. The method of claim 1, wherein selecting a focus sub-contour from the at least one sub-contour that includes the focus comprises:
taking each sub-contour in the at least one sub-contour as a current sub-contour one by one; and if the current sub-contour includes at least two vertical intersection points with the same abscissa as the focus and at least two horizontal intersection points with the same ordinate as the focus, where the at least two vertical intersection points are located on two sides of the focus and the at least two horizontal intersection points are also located on two sides of the focus, taking the current sub-contour as the focus sub-contour containing the focus.
4. The method of claim 1, wherein selecting a focus sub-contour from the at least one sub-contour that includes the focus comprises:
selecting a sub-contour comprising a vertical intersection point of the focus and a horizontal intersection point of the focus from the at least one sub-contour to form a candidate sub-contour set; the vertical intersection point and the focus have the same abscissa; the horizontal intersection point and the focus have the same ordinate;
taking each sub-contour in the candidate sub-contour set as a current sub-contour one by one, judging whether at least two vertical intersection points on the current sub-contour are positioned on two sides of the focus, and judging whether at least two horizontal intersection points on the current sub-contour are positioned on two sides of the focus;
and if so, taking the current sub-contour as a focus sub-contour containing the focus.
5. A method according to claim 3 or 4, characterized in that the method comprises:
calculating a difference value between the ordinate of the focus and the ordinate of each vertical intersection point;
calculating the product of any two differences;
judging whether a negative value exists in the product of the difference values of the ordinate;
if yes, determining that the at least two vertical intersection points are positioned on two sides of the focus;
calculating the difference value between the abscissa of the focus and the abscissa of each horizontal intersection point;
calculating the product of any two differences;
judging whether a negative value exists in the product of the difference values of the abscissa;
if so, determining that the at least two horizontal intersection points are positioned on two sides of the focus.
6. The method according to claim 1, wherein the step of acquiring a focal plane image corresponding to a focal plane region of the image to be processed and an out-of-focus plane region image other than the focal plane region in the blurred image includes:
constructing a mask image according to the focus sub-contour; the area inside the focus sub-contour in the mask image is provided with a first pixel value, and the area outside the focus sub-contour in the mask image is provided with a second pixel value;
acquiring a focal plane image in the image to be processed through the mask image;
and acquiring an out-of-focus area image in the blurred image through the mask image.
7. The method of claim 1, wherein determining the fusion coefficients for each pixel in the focal plane image from the mask image comprises:
calculating the centroid of the mask image according to the pixel value of each pixel point in the mask image;
and calculating the fusion coefficient corresponding to each pixel point in the focal plane image according to the centroid of the mask image and the pixel values of the pixel points in the region inside the focus sub-contour of the mask image.
8. An image blurring apparatus, comprising:
the contour detection module is used for carrying out contour detection on the image to be processed to obtain at least one sub-contour contained in the image to be processed;
a focus acquisition module, configured to determine a focus of the image to be processed;
The focus area determining module is used for selecting a focus sub-outline containing the focus from the at least one sub-outline, and taking an image area corresponding to the focus sub-outline in the image to be processed as a focus plane area;
the blur processing module is used for performing blurring processing on the image to be processed to obtain a blurred image;
the blurring processing module is used for carrying out fusion processing on the blurred image and the focal plane area of the image to be processed to obtain an image after blurring;
the blurring processing module is used for acquiring a focal plane image corresponding to the focal plane area of the image to be processed and an out-of-focal-plane area image excluding the focal plane area in the blurred image, and obtaining the image after blurring according to the focal plane image and the out-of-focal-plane area image;
the blurring processing module is used for determining the pixel value of each pixel point in the image after blurring according to the product of the pixel value of each pixel point in the focal plane image and the fusion coefficient corresponding to each pixel point, and the pixel value of each pixel point in the out-of-focal-plane area image; the fusion coefficient is determined according to a mask image corresponding to the image to be processed, and the mask image is a binary image with the same size as the image to be processed.
9. An electronic device comprising a processor and a memory device; the storage means has stored thereon a computer program which, when executed by the processor, performs the method of any of claims 1 to 7.
10. A computer-readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, performs the steps of the method of any of the preceding claims 1 to 7.
CN201910026279.3A 2019-01-10 2019-01-10 Image blurring method and device and electronic equipment Active CN109727193B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910026279.3A CN109727193B (en) 2019-01-10 2019-01-10 Image blurring method and device and electronic equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910026279.3A CN109727193B (en) 2019-01-10 2019-01-10 Image blurring method and device and electronic equipment

Publications (2)

Publication Number Publication Date
CN109727193A CN109727193A (en) 2019-05-07
CN109727193B (en) 2023-07-21

Family

ID=66298996

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910026279.3A Active CN109727193B (en) 2019-01-10 2019-01-10 Image blurring method and device and electronic equipment

Country Status (1)

Country Link
CN (1) CN109727193B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112184610B (en) * 2020-10-13 2023-11-28 深圳市锐尔觅移动通信有限公司 Image processing method and device, storage medium and electronic equipment

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103561205B (en) * 2013-11-15 2015-07-08 努比亚技术有限公司 Shooting method and shooting device
CN105979165B (en) * 2016-06-02 2019-02-05 Oppo广东移动通信有限公司 Blur photograph generation method, device and mobile terminal
CN108230333B (en) * 2017-11-28 2021-01-26 深圳市商汤科技有限公司 Image processing method, image processing apparatus, computer program, storage medium, and electronic device
CN109086761B (en) * 2018-06-28 2020-12-01 Oppo广东移动通信有限公司 Image processing method and device, storage medium and electronic equipment
CN108961279A (en) * 2018-06-28 2018-12-07 Oppo(重庆)智能科技有限公司 Image processing method, device and mobile terminal

Also Published As

Publication number Publication date
CN109727193A (en) 2019-05-07

Similar Documents

Publication Publication Date Title
CN108898567B (en) Image noise reduction method, device and system
US10997696B2 (en) Image processing method, apparatus and device
CN111028189B (en) Image processing method, device, storage medium and electronic equipment
CN110121882B (en) Image processing method and device
CN108961303B (en) Image processing method and device, electronic equipment and computer readable medium
KR101662846B1 (en) Apparatus and method for generating bokeh in out-of-focus shooting
CN108234858B (en) Image blurring processing method and device, storage medium and electronic equipment
CN110839129A (en) Image processing method and device and mobile terminal
CN113313661B (en) Image fusion method, device, electronic equipment and computer readable storage medium
WO2018136373A1 (en) Image fusion and hdr imaging
WO2017076040A1 (en) Image processing method and device for use during continuous shooting operation
CN108230333B (en) Image processing method, image processing apparatus, computer program, storage medium, and electronic device
CN110796041B (en) Principal identification method and apparatus, electronic device, and computer-readable storage medium
CN108848367B (en) Image processing method and device and mobile terminal
CN110349080B (en) Image processing method and device
CN109785264B (en) Image enhancement method and device and electronic equipment
JP4515208B2 (en) Image processing method, apparatus, and program
CN111968052B (en) Image processing method, image processing apparatus, and storage medium
CN113674303B (en) Image processing method, device, electronic equipment and storage medium
WO2020087729A1 (en) Image processing method and apparatus, electronic device and storage medium
CN113313626A (en) Image processing method, image processing device, electronic equipment and storage medium
CN113793257B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN110689496A (en) Method and device for determining noise reduction model, electronic equipment and computer storage medium
CN109727193B (en) Image blurring method and device and electronic equipment
CN108734712B (en) Background segmentation method and device and computer storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant