CN112561777A - Method and device for adding light spots to image - Google Patents

Method and device for adding light spots to image

Info

Publication number
CN112561777A
CN112561777A (application CN201910910315.2A)
Authority
CN
China
Prior art keywords
image
light spot
processed
area
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910910315.2A
Other languages
Chinese (zh)
Inventor
李绪琴
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Megvii Technology Co Ltd
Original Assignee
Beijing Megvii Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Megvii Technology Co Ltd
Priority: CN201910910315.2A
Publication: CN112561777A
Legal status: Pending

Classifications

    • G06T3/04
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/20: Special algorithmic details
    • G06T2207/20212: Image combination
    • G06T2207/20221: Image fusion; Image merging

Abstract

The invention relates to the technical field of image processing and provides a method for adding light spots to an image, comprising the following steps: determining a light spot superposition position according to the pixel values of the image to be processed; obtaining a mask of the image to be processed according to the superposition position and the light spot attributes, the mask comprising a light spot region and a non-light-spot region; performing a first blurring process on a first region of the image to be processed to obtain a first image, the first region being the region of the image to be processed that corresponds to the light spot region of the mask; performing a second blurring process on all original pixels of the image to be processed to obtain a second image; and fusing the first image and the second image according to the mask to obtain the image with light spots added. By determining where the spots are superimposed and blurring the spot image before fusing it with the image to be processed, the method improves the quality of the spots added to the image.

Description

Method and device for adding light spots to image
Technical Field
The present invention relates generally to the field of image processing technology, and more particularly, to a method and an apparatus for adding light spots to an image.
Background
With the development of electronic technology, smart terminals are in widespread use and their functions continue to improve. In photography, a light spot (bokeh) effect may be artificially added to the highlight regions of a captured image to achieve an attractive, dreamlike result. To obtain spot shapes other than circles, a single-lens reflex (SLR) camera requires a shaped template mounted in front of the lens.
A spot effect comparable to that of an SLR is difficult to capture directly with a mobile phone. In the prior art, images are retouched afterwards in Photoshop, but the result is often poor and unnatural and fails to meet users' shooting needs.
Disclosure of Invention
To solve the above problems in the prior art, the present invention provides a method and an apparatus for adding light spots to an image.
In a first aspect, an embodiment of the present invention provides a method for adding light spots to an image, including: determining a light spot superposition position according to the pixel values of the image to be processed; obtaining a mask of the image to be processed according to the superposition position and the light spot attributes, the mask comprising a light spot region and a non-light-spot region; performing a first blurring process on a first region of the image to be processed to obtain a first image, the first region being the region of the image to be processed that corresponds to the light spot region of the mask; performing a second blurring process on all original pixels of the image to be processed to obtain a second image; and fusing the first image and the second image according to the mask to obtain the image with light spots added.
In one embodiment, performing the first blurring process on the first region of the image to be processed includes: applying a first transformation to the pixels of the first region so that the pixel values of a first subset of pixels are increased, the first subset being those pixels whose values exceed a preset pixel-value threshold; blurring the transformed first region; and applying a second transformation to the blurred first region so that the pixel values of the first subset are reduced.
In one embodiment, the first transformation of the pixels of the first region is an exponential transformation of their pixel values.
In one embodiment, the second transformation of the blurred pixels of the first region is a logarithmic transformation of their pixel values.
In one embodiment, blurring the transformed first region includes blurring it according to the depth information of the pixels at the light spot superposition position.
In one embodiment, determining the spot superposition position according to the pixel values of the image to be processed includes: selecting a pixel of the image to be processed as a candidate center, obtaining all pixels on a circle centered on that pixel with a preset length as radius, and, if the candidate's value is greater than the values of all pixels on any such circle, determining the candidate's position to be a spot superposition position.
In one embodiment, the spot attributes include the spot shape, and obtaining the mask of the image to be processed according to the superposition position and the spot attributes includes determining the spot shape from a user-drawn template or a user selection.
In one embodiment, the spot attributes further include spot weight information, and obtaining the mask further includes determining spot weight information representing spot brightness according to the pixel values at the spot superposition position.
In a second aspect, an embodiment of the present invention provides an apparatus for adding light spots to an image, including: a spot-adding module, configured to determine a spot superposition position according to the initial pixel values of the image to be processed and to superimpose spots at that position; an obtaining module, configured to obtain a mask of the image to be processed according to the superposition position and the spot attributes, the mask comprising a spot region and a non-spot region; a first blurring module, configured to perform a first blurring process on a first region of the image to be processed to obtain a first image, the first region being the region of the image to be processed that corresponds to the spot region of the mask; a second blurring module, configured to perform a second blurring process on all original pixels of the image to be processed to obtain a second image; and a fusion module, configured to fuse the first image and the second image according to the mask to obtain the image with spots added.
In a third aspect, an embodiment of the present invention provides an electronic device, including: a memory configured to store instructions; and a processor configured to invoke the instructions stored in the memory to execute the above method for adding light spots to an image.
In a fourth aspect, an embodiment of the present invention provides a computer-readable storage medium storing computer-executable instructions that, when executed by a processor, perform the above method for adding light spots to an image.
According to the method and apparatus for adding light spots to an image, the spot superposition position is determined, the mask of the image to be processed is obtained from that position and the spot attributes, and the region corresponding to the superposition position and all pixels of the image are processed separately and then fused. Spots with different attributes can thus be generated without manual Photoshop editing, satisfying a variety of user needs.
Drawings
The above and other objects, features and advantages of embodiments of the present invention will become readily apparent from the following detailed description read in conjunction with the accompanying drawings. Several embodiments of the invention are illustrated by way of example, and not by way of limitation, in the figures of the accompanying drawings and in which:
FIG. 1 is a schematic diagram illustrating a method for adding light spots to an image according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating an exponential transformation of pixel values according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating an apparatus for adding light spots to an image according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of an electronic device provided by an embodiment of the invention;
in the drawings, the same or corresponding reference numerals indicate the same or corresponding parts.
Detailed Description
The principles and spirit of the present invention will be described with reference to a number of exemplary embodiments. It is understood that these embodiments are given solely for the purpose of enabling those skilled in the art to better understand and to practice the invention, and are not intended to limit the scope of the invention in any way.
It should be noted that although the expressions "first", "second", etc. are used herein to describe different modules, steps, data, etc. of the embodiments of the present invention, the expressions "first", "second", etc. are merely used to distinguish between different modules, steps, data, etc. and do not indicate a particular order or degree of importance. Indeed, the terms "first," "second," and the like are fully interchangeable.
Fig. 1 shows a flowchart of a method for adding spots to an image according to an embodiment of the present invention. As shown in fig. 1, the method includes:
in step S110, a spot superimposition position is determined based on the pixel value of the pixel of the image to be processed.
A light spot is a decorative bright spot displayed on the image. Spots are generated in the highlight regions of the image to give it an attractive, dreamlike effect. Pixels with large values, that is, the bright points of the image, are selected as the positions where spots need to be added; superimposing the spots at these points achieves a better image-processing result.
In step S120, a mask of the image to be processed is obtained according to the light spot superposition position and the light spot attribute, where the mask includes a light spot region and a non-light spot region.
The mask is similar in concept to a photolithography mask: a selected image, graphic, or object blocks all or part of the image so as to control which area is processed. The mask of the image to be processed is obtained from the superposition position determined in step S110 and the spot attributes, and comprises a spot region and a non-spot region. The spot attributes may include the spot shape, spot material, spot weight information, and so on. The mask matrix is a binary matrix of 0s and 1s: when the mask is applied, the spot region (the 1-valued area) is processed, while the non-spot region (the 0-valued area) is shielded from processing.
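As a concrete illustration, such a binary mask could be built as follows. This is a minimal numpy sketch under stated assumptions: the function name, the circular spot shape, and the parameters are illustrative, not taken from the patent, which also allows non-circular shapes and per-pixel weights.

```python
import numpy as np

def make_spot_mask(shape, centers, radius):
    """Binary mask: 1 inside each circular spot region, 0 elsewhere.
    A circle is used here only for illustration; any spot shape works."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    mask = np.zeros((h, w), dtype=np.uint8)
    for cy, cx in centers:
        # Mark pixels within `radius` of the spot center as spot region.
        mask |= ((yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2).astype(np.uint8)
    return mask
```

Applying the mask then amounts to processing only the pixels where the matrix is 1.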
In step S130, a first blurring process is performed on a first region of the image to be processed to obtain a first image, where the first region is the region of the image to be processed that corresponds to the spot region of the mask.
The first region is thus the region of the image to be processed corresponding to the non-zero (spot) region of the mask, that is, the region where the spot image will appear. Applying the first blurring process to it makes the added spots look more natural and the brightness transitions more uniform.
In step S140, a second blurring process is performed on all original pixels of the image to be processed to obtain a second image.
The blurring process can be understood as convolving the pixels of the image to be processed with a blur kernel; the larger the kernel radius, the more blurred the image. Any method such as a mean, median, or Gaussian blur algorithm may be used to blur the original pixels of the image to be processed.
Performing the second blurring process on all original pixels of the image to be processed, for example with a Gaussian blur, produces a depth-of-field effect: the subject of the image is highlighted while the background is blurred, giving a result closer to real optical imaging.
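The second blurring step can be sketched with a simple mean blur, a stand-in for the Gaussian blur mentioned above (in practice an optimized routine such as OpenCV's GaussianBlur would be used; the function name and parameters here are illustrative assumptions):

```python
import numpy as np

def mean_blur(img, k=3):
    """Naive k x k mean blur over a 2-D image, padding edges by replication.
    A larger k means a larger blur-kernel radius and a more blurred image."""
    pad = k // 2
    padded = np.pad(img.astype(np.float64), pad, mode="edge")
    h, w = img.shape
    out = np.zeros((h, w), dtype=np.float64)
    # Sum the k*k shifted copies of the image, then average.
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (k * k)
```

A single bright pixel spreads into its neighborhood, which is exactly how the blur softens the background.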
In step S150, the first image and the second image are subjected to image fusion according to the mask of the image to be processed, so as to obtain an image with the light spots added.
The first image, produced by the first blurring process, and the second image, produced by the second blurring process, are fused according to the mask of the image to be processed to obtain the image with spots added. For example, the two images may be alpha-fused according to the mask.
Alpha fusion, also known as alpha blending, uses the formula:
Color_target = alpha * Color1 + (1 - alpha) * Color2, where alpha is a weight between 0 and 1, Color_target is the fused pixel value, and Color1 and Color2 are the pixel values at corresponding positions of the first and second images.
Since the mask is obtained from the superposition position and the spot attributes, and the spot attributes include weight information representing spot brightness, the spot weight of each mask pixel is used as the weight alpha; fusing the first and second images pixel by pixel with the formula above yields the image with spots added.
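In code, the per-pixel alpha fusion of the two blurred images with the mask weights might look like this (a minimal sketch, assuming the images and the weight mask are arrays of the same shape):

```python
import numpy as np

def alpha_fuse(first_img, second_img, alpha):
    """Per-pixel alpha blend: alpha * first + (1 - alpha) * second.
    `alpha` holds the spot-weight value of each mask pixel, in [0, 1]:
    1 keeps the spot image, 0 keeps the background-blurred image."""
    a = np.asarray(alpha, dtype=np.float64)
    return a * first_img + (1.0 - a) * second_img
```

Intermediate weights give a proportional mix, which is what makes the spot edges blend smoothly into the background.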
In some embodiments, the original color space of the image to be processed may be RGB or YUV. For an RGB image, steps S110 through S140 may process the R, G, and B channels separately, and step S150 may then fuse the separately processed channels to obtain the image with spots added. The three channels may also be processed together; the embodiments of the present disclosure do not limit this.
According to the method for adding light spots to an image, the spot superposition position is determined, the mask of the image to be processed is obtained from that position and the spot attributes, and the region corresponding to the superposition position and all pixels of the image are processed separately and then fused. Spots with different attributes can thus be generated without manual Photoshop editing, satisfying a variety of user needs.
In one embodiment, performing the first blurring process on the first region of the image to be processed includes: applying a first transformation to the pixels of the first region so that the pixel values of a first subset of pixels are increased, the first subset being those pixels whose values exceed a preset pixel-value threshold; blurring the transformed first region; and applying a second transformation to the blurred first region so that the pixel values of the first subset are reduced.
The blurring process can cause the brightest pixels of the first image to lose luminance; increasing the values of these high-luminance pixels, that is, those with relatively large pixel values, before blurring reduces this loss. For example, a pixel-value threshold may be preset, and a first transformation applied to the pixels whose values exceed it so that their values are increased further.
Pixel values range from 0 to 255. For example, with a preset threshold k, say k = 220, all pixels whose values exceed k form the first subset, and their values are increased. The increase may be linear or non-linear; the embodiments of the present disclosure do not limit it.
The first transformation raises the pixel values of the highlight area where the spot image will be added, so that the highlights are not dimmed, their brightness diffuses better into the neighborhood, and the resulting spots are brighter in color. It will be understood that the larger the threshold k, the greater the intensity of the added spots, and vice versa.
Blurring the transformed first region means performing a convolution over it, where the shape of the convolution kernel corresponds to the shape of the spot and its size to the size of the spot. Blurring smooths the edges of the region of the image to be processed corresponding to the spot region and makes transitions between parts of different brightness more natural.
A second transformation is then applied to the blurred first region so that the pixel values of the first subset are reduced. It will be appreciated that, because those values were previously increased, they must be reduced after blurring in order to restore the pixels of the first region to the original color space. The reduction may be linear or non-linear; the embodiments of the present disclosure do not limit it.
In one embodiment, the first transformation is an exponential transformation of the pixel values of the first region. An exponential transformation increases image contrast and effectively boosts the pixels with relatively high values even further; after the transformation, the contrast is higher and the high values are spread over a wider range.
Specifically, given the threshold k, the transformed pixel values of the first region can be computed from the exponential-transformation formula. Fig. 2 is a schematic diagram of such a transformation according to an embodiment of the present invention.
In Fig. 2, the abscissa is the initial pixel value and the ordinate the transformed value. Line 1 is the identity (no transformation); curve 2 is the exponential transformation; their intersection is the threshold k. As the figure shows, pixels whose values exceed k have their values increased by the transformation.
In one embodiment, the second transformation of the blurred first region is a logarithmic transformation of its pixel values. It will be appreciated that the second transformation may be the logarithm corresponding to the exponential used in the first transformation: given the threshold k, the logarithmically transformed values are computed from the logarithmic-transformation formula, restoring the image to the original color space.
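One concrete choice of mutually inverse transforms around a threshold k is sketched below. The exact formulas and the `GAIN` constant are assumptions for illustration; the patent only specifies an exponential boost and a matching logarithmic restore.

```python
import numpy as np

K = 220.0    # pixel-value threshold k (hypothetical example value)
GAIN = 0.02  # steepness of the exponential boost (hypothetical)

def boost_highlights(x):
    """First transformation: values above K grow exponentially, so highlight
    energy survives the subsequent blur instead of being averaged away."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(x > K, K * np.exp(GAIN * (x - K)), x)

def restore_highlights(x):
    """Second transformation: the logarithmic inverse of the boost,
    mapping transformed values back to the original range."""
    x = np.asarray(x, dtype=np.float64)
    return np.where(x > K, K + np.log(np.maximum(x, K) / K) / GAIN, x)
```

Values at or below K pass through unchanged, and for any value above K the round trip restore(boost(x)) returns x, which is the property the two transformations need.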
In one embodiment, the transformed first region is blurred according to the depth information of the pixels at the spot superposition position.
Depth information reflects the distance from the scene point behind each pixel of the image to be processed to the camera. For example, it may be computed from a main image and a secondary image captured by two cameras, from which the distance of each pixel to the camera is derived.
As another example, the depth information of the image to be processed may be obtained from the main and secondary images of the original capture, or extracted by any scene-depth-estimation method available in the related art; the present disclosure does not limit this.
Since different pixels of the image have different depths, obtaining the depth at the spot superposition position determines the spot's depth; different depths represent how near or far the corresponding scene points are.
The transformed first region is then blurred according to this depth information: the greater a pixel's depth, the larger its convolution kernel, the larger the corresponding circle of confusion, the stronger the blur, and the larger the resulting spot. Spots of different sizes are thereby added.
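The depth-to-kernel-size mapping could be as simple as the following sketch. The linear mapping and its constants are illustrative assumptions; the patent only requires that larger depths give larger kernels.

```python
def kernel_size_for_depth(depth, base=3, scale=2, max_size=15):
    """Map a spot pixel's depth to a blur-kernel size: farther points get
    a larger circle of confusion, hence a larger kernel and a larger spot.
    Sizes are forced odd so the kernel has a well-defined center pixel."""
    size = min(base + scale * int(depth), max_size)
    return size if size % 2 == 1 else size + 1
```

The returned size would then select or scale the spot-shaped kernel used in the first blurring process.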
In one embodiment, the spot superposition position may be determined as follows. A pixel of the image to be processed is selected as a candidate center; alternatively, a highlight region is first extracted, for example the set of pixels whose values exceed a certain threshold, and the candidate center is selected from that region.
All pixels on a circle centered on the candidate, with a preset length as radius, are then obtained; if the candidate's value is greater than that of every pixel on any such circle, its position is determined to be a spot superposition position.
Concretely, to judge whether a given pixel is a spot superposition position, a circle is constructed with that pixel as center and a preset length as radius. Several preset lengths may be used, giving several circles.
The pixel's value is compared with the values of all pixels on each circle; if it is greater than the values of all pixels on any one of the circles, its position is determined to be a spot superposition position.
For example, for the current pixel, circles are constructed with three preset radii r1, r2, and r3, where r1 < r2 < r3.
First it is judged whether the current pixel has the largest value among the pixels on the circle of radius r1; if so, its position is a spot superposition position. If not, the same judgment is made on the circle of radius r2, and then, if necessary, on the circle of radius r3, using the same criterion.
It will be understood that if, after these three judgments, the current pixel is not the maximum on any of the three circles, its position is not a spot superposition position.
For example, r1, r2, and r3 may be 1, 3, and 5; the present disclosure does not limit the preset lengths or the number of judgments. The more radii that are tried, the more spot positions are found in the image to be processed, and correspondingly the more spots are added.
The number of added spots can therefore be adjusted through the preset radii and the number of judgments.
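The decision procedure above can be sketched as follows (a numpy sketch; sampling 8·r points per circle is an implementation assumption, as is the strict-inequality comparison):

```python
import numpy as np

def is_spot_position(img, y, x, radii=(1, 3, 5)):
    """True if pixel (y, x) is strictly brighter than every sampled pixel
    on at least one of the circles of radius r1 < r2 < r3 around it,
    checked in increasing order as described above."""
    h, w = img.shape
    for r in radii:
        is_max_on_circle = True
        for theta in np.linspace(0.0, 2.0 * np.pi, 8 * r, endpoint=False):
            py = int(round(y + r * np.sin(theta)))
            px = int(round(x + r * np.cos(theta)))
            if 0 <= py < h and 0 <= px < w and img[py, px] >= img[y, x]:
                is_max_on_circle = False
                break
        if is_max_on_circle:
            return True
    return False
```

Running this test over the highlight pixels of the image yields the set of spot superposition positions; more radii in `radii` means more positions pass, and hence more spots.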
In one embodiment, the spot attributes include the spot shape, and obtaining the mask of the image to be processed according to the superposition position and the spot attributes includes determining the spot shape from a user-drawn template or a user selection.
The shape of the convolution kernel corresponds to the shape of the spot generated on the image to be processed: the kernel used in the convolution is derived from the spot shape, so that the first blurring process produces a blurred spot of the corresponding shape.
It will be appreciated that the spot kernel may be a regular or an irregular geometric figure, such as a circle, a star, or another shape; the embodiments of the present invention do not limit the specific shape. The kernel may be pre-stored by the system or drawn according to the user's preference; the embodiments of the present disclosure are not limited in this respect.
Spot kernels of different sizes may also be preset and stored in a kernel library, so that when one is needed, an existing kernel of the corresponding size is selected directly from the library.
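As an example of the simplest case, a circular (disk) kernel could be built like this (an illustrative sketch; other shapes such as stars would change only which entries of the kernel are non-zero):

```python
import numpy as np

def disk_kernel(size):
    """Disk-shaped convolution kernel of odd `size`, normalized to sum to 1.
    Convolving the boosted highlight region with it yields round spots."""
    r = size // 2
    yy, xx = np.mgrid[-r:r + 1, -r:r + 1]
    k = (yy ** 2 + xx ** 2 <= r ** 2).astype(np.float64)
    return k / k.sum()
```

Normalizing the kernel to sum to 1 keeps the overall image brightness unchanged by the convolution.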
In one embodiment, the spot attributes further include spot weight information, and obtaining the mask of the image to be processed according to the superposition position and the spot attributes further includes determining spot weight information representing spot brightness according to the pixel values at the spot superposition position.
If spots of uniform brightness were added to areas of differing brightness in the image, the resulting effect would look rigid and unattractive. Instead, the spot weight information representing spot brightness is determined from the pixel values at the superposition position.
The weight is thus tied to the brightness of the superposition position: bright positions receive high spot weights, so that the corresponding spots are brighter when the first and second images are fused. Spots matching the local brightness are generated at different positions, further improving the quality of the added spots.
Fig. 3 is a block diagram of an apparatus for adding light spots to an image according to an exemplary embodiment. Referring to Fig. 3, the apparatus 200 includes: a spot-adding module 210, configured to determine a spot superposition position according to the initial pixel values of the image to be processed and to superimpose spots at that position.
The obtaining module 220 is configured to obtain a mask of the image to be processed according to the light spot superposition position and the light spot attribute, where the mask includes a light spot region and a non-light spot region.
The first blurring module 230 is configured to perform first blurring processing on a first region of the image to be processed to obtain a first image, where the first region is the region of the image to be processed that corresponds to the light spot region of the mask.
The second blurring module 240 is configured to perform second blurring processing on all original pixels of the image to be processed to obtain a second image.
The fusion processing module 250 is configured to perform image fusion on the first image and the second image according to the mask of the image to be processed, so as to obtain the image with the light spots added.
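By way of illustration only, the mask-guided fusion of the two images could be sketched as a per-pixel blend (the function name `fuse_with_mask` and the linear blend are assumptions; the patent only specifies that the mask selects between the spot region and the non-spot region):

```python
import numpy as np

def fuse_with_mask(first_image, second_image, mask):
    """Blend two images: inside the light spot region of the mask the first
    (spot-processed) image is used, outside it the second (blurred) image."""
    m = mask.astype(np.float64)
    if m.max() > 1.0:  # accept 0-255 masks as well as 0-1 masks
        m /= 255.0
    fused = m * first_image.astype(np.float64) + (1.0 - m) * second_image.astype(np.float64)
    return np.clip(fused, 0, 255).astype(np.uint8)
```

A soft (grayscale) mask would also produce smooth transitions at spot edges with the same formula.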
In an embodiment, the first blurring module 230 performs the first blurring processing on the first region of the image to be processed as follows: perform a first transformation on the pixels of the first region so that the pixel values of a first part of the pixels, namely those greater than a preset pixel value threshold, are increased; perform blurring processing on the first region after the first transformation; and perform a second transformation on the pixels of the first region after the blurring processing so that the pixel values of the first part of the pixels are reduced.
In one embodiment, the first blurring module 230 performs the first transformation on the pixels of the first region as follows: the pixel values of the first-region pixels are exponentially transformed.
In one embodiment, the first blurring module 230 performs the second transformation on the pixels of the first region after the blurring processing as follows: the pixel values of the pixels of the first region after the blurring processing are logarithmically transformed.
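The exponential-blur-logarithmic pipeline can be sketched as follows; the gain constant `k`, the box blur, and the function names are illustrative assumptions (any blur kernel would work the same way). The point is that blurring in the exponential domain lets bright pixels dominate their neighborhood, which is what makes the light spot stand out:

```python
import numpy as np

def box_blur(a, size):
    """Simple box blur with edge padding, standing in for any blur kernel."""
    pad = size // 2
    padded = np.pad(a, pad, mode="edge")
    out = np.zeros(a.shape, dtype=np.float64)
    for dy in range(size):
        for dx in range(size):
            out += padded[dy:dy + a.shape[0], dx:dx + a.shape[1]]
    return out / (size * size)

def highlight_preserving_blur(region, k=6.0, size=3):
    """First transformation (exponential) boosts bright pixels, the blur runs
    in the boosted domain, and the second transformation (logarithmic)
    restores the value range."""
    x = region.astype(np.float64) / 255.0
    boosted = np.exp(k * x)          # bright pixels grow much faster than dark ones
    blurred = box_blur(boosted, size)
    restored = np.log(blurred) / k   # inverse transform brings values back down
    return np.clip(restored * 255.0, 0.0, 255.0).astype(np.uint8)
```

Compared with a plain blur, a single bright pixel keeps far more of its brightness after this round trip, so highlights survive the blur as visible spots.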
In an embodiment, the first blurring module 230 performs blurring processing on the first transformed first region according to depth information of a pixel point at the light spot overlapping position.
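One simple way to use depth information here is to scale the blur kernel size with depth; this mapping (linear, with the names `blur_size_from_depth`, `min_size`, `max_size`) is a hypothetical sketch, as the patent does not specify the depth-to-blur relationship:

```python
def blur_size_from_depth(depth, min_size=3, max_size=21):
    """Map a normalized depth value in [0, 1] to an odd blur kernel size:
    points farther from the focal plane get a larger blur."""
    size = int(round(min_size + depth * (max_size - min_size)))
    return size if size % 2 == 1 else size + 1  # kernel sizes must be odd
```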
In an embodiment, the light spot adding module 210 determines the light spot superposition position according to the pixel value of the pixel of the image to be processed in the following manner: select a pixel point in the image to be processed as a circle center pixel point, acquire all pixel points on the circumference whose center is the circle center pixel point and whose radius is a preset length, and, if the pixel value of the circle center pixel point is greater than the pixel value of every pixel point on the circumference, determine the position of the circle center pixel point as a light spot superposition position.
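This local-maximum test could look like the following sketch, assuming a grayscale image and a fixed number of sample points on the circumference (the 16-sample discretization and the function name `is_spot_position` are assumptions):

```python
import numpy as np

def is_spot_position(image, row, col, radius, samples=16):
    """Return True when the center pixel is brighter than every sampled pixel
    on the circle of the given radius around it (a local brightness maximum)."""
    h, w = image.shape
    for a in np.linspace(0.0, 2.0 * np.pi, samples, endpoint=False):
        r = int(round(row + radius * np.sin(a)))
        c = int(round(col + radius * np.cos(a)))
        # any in-bounds circle pixel at least as bright disqualifies the center
        if 0 <= r < h and 0 <= c < w and image[r, c] >= image[row, col]:
            return False
    return True
```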
In an embodiment, the light spot attribute includes a light spot shape, and the light spot adding module 210 obtains the mask of the image to be processed according to the light spot superposition position and the light spot attribute in the following manner: the light spot shape is determined based on a user-provided image or a user selection.
In an embodiment, the light spot attribute further includes light spot weight information, and the light spot adding module 210 obtains the mask of the image to be processed according to the light spot superposition position and the light spot attribute in the following manner: the light spot weight information representing the light spot brightness is determined according to the pixel value of the pixel point at the light spot superposition position.
The functions implemented by the modules of the apparatus correspond to the steps of the method described above; for the specific implementation and technical effects, refer to the description of the method steps above, which is not repeated here.
As shown in Fig. 4, an embodiment of the present invention provides an electronic device 30. The electronic device 30 includes a memory 310, a processor 320, and an input/output (I/O) interface 330. The memory 310 is used for storing instructions, and the processor 320 is used for calling the instructions stored in the memory 310 to execute the method for adding light spots to an image according to the embodiment of the present invention. The processor 320 is connected to the memory 310 and the I/O interface 330, for example via a bus system and/or another connection mechanism (not shown). The memory 310 may store programs and data, including the program for adding light spots to an image involved in the embodiments of the present invention, and the processor 320 executes the functional applications and data processing of the electronic device 30 by running the programs stored in the memory 310.
In an embodiment of the present invention, the processor 320 may be implemented in at least one hardware form of a digital signal processor (DSP), a field-programmable gate array (FPGA), or a programmable logic array (PLA), and the processor 320 may be one or a combination of several central processing units (CPUs) or other processing units with data processing capability and/or instruction execution capability.
The memory 310 in embodiments of the present invention may comprise one or more computer program products, which may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, random access memory (RAM) and/or cache memory. The non-volatile memory may include, for example, read-only memory (ROM), flash memory, a hard disk drive (HDD), or a solid-state drive (SSD).
In the embodiment of the present invention, the I/O interface 330 may be used to receive input instructions (e.g., numeric or character information, or key signal inputs related to user settings and function control of the electronic device 30) and to output various information (e.g., images or sounds) externally. The I/O interface 330 may comprise one or more of a physical keyboard, function keys (such as volume control keys and a power switch), a mouse, a joystick, a trackball, a microphone, a speaker, and a touch panel.
In some embodiments, the invention provides a computer-readable storage medium having stored thereon computer-executable instructions that, when executed by a processor, perform any of the methods described above.
Although operations are depicted in the drawings in a particular order, this should not be understood as requiring that such operations be performed in the particular order shown or in serial order, or that all illustrated operations be performed, to achieve desirable results. In certain environments, multitasking and parallel processing may be advantageous.
The methods and apparatus of the present invention can be accomplished with standard programming techniques, with rule-based logic or other logic, to accomplish the various method steps. It should also be noted that the words "means" and "module," as used herein and in the claims, are intended to encompass implementations using one or more lines of software code, and/or hardware implementations, and/or equipment for receiving inputs.
Any of the steps, operations, or procedures described herein may be performed or implemented using one or more hardware or software modules, alone or in combination with other devices. In one embodiment, the software modules are implemented using a computer program product comprising a computer readable medium containing computer program code, which is executable by a computer processor for performing any or all of the described steps, operations, or procedures.
The foregoing description of the implementation of the invention has been presented for purposes of illustration and description. It is not intended to be exhaustive or to limit the invention to the precise form disclosed, and modifications and variations are possible in light of the above teachings or may be acquired from practice of the invention. The embodiments were chosen and described in order to explain the principles of the invention and its practical application to enable one skilled in the art to utilize the invention in various embodiments and with various modifications as are suited to the particular use contemplated.

Claims (11)

1. A method of adding light spots to an image, the method comprising:
determining a light spot superposition position according to the pixel value of the pixel of the image to be processed;
obtaining a mask of the image to be processed according to the light spot superposition position and the light spot attribute, wherein the mask comprises a light spot area and a non-light spot area;
performing first blurring processing on a first area of the image to be processed to obtain a first image, wherein the first area is the area of the image to be processed corresponding to the light spot area of the mask;
performing second blurring processing on all original pixels of the image to be processed to obtain a second image;
and carrying out image fusion on the first image and the second image according to the mask of the image to be processed to obtain an image added with the light spots.
2. The method of adding light spots to an image according to claim 1, wherein the first blurring processing of the first area of the image to be processed comprises:
performing first transformation on the pixels of the first area so as to increase the pixel values of a first part of the pixels of the first area, wherein the pixel values of the first part of the pixels are greater than a preset pixel value threshold;
performing blurring processing on the first area after the first transformation;
and performing second transformation on the pixels of the first area after the blurring processing so that the pixel values of the first part of the pixels of the first area are reduced.
3. The method of adding light spots to an image according to claim 2, wherein the first transformation of the pixels of the first area comprises: performing exponential transformation on the pixel values of the pixels of the first area.
4. The method of adding light spots to an image according to claim 3, wherein the second transformation of the pixels of the first area after the blurring processing comprises: performing logarithmic transformation on the pixel values of the pixels of the first area after the blurring processing.
5. The method of adding light spots to an image according to any one of claims 2 to 4, wherein the blurring processing of the first area after the first transformation comprises:
performing blurring processing on the first area after the first transformation according to the depth information of the pixel point at the light spot superposition position.
6. The method of adding light spots to an image according to claim 1, wherein determining the light spot superposition position according to the pixel value of the pixel of the image to be processed comprises:
selecting a pixel point in the image to be processed as a circle center pixel point, acquiring all pixel points on the circumference whose center is the circle center pixel point and whose radius is a preset length, and, if the pixel value of the circle center pixel point is greater than the pixel value of every pixel point on the circumference, determining the position of the circle center pixel point as the light spot superposition position.
7. The method of adding light spots to an image according to claim 1, wherein the light spot attribute comprises a light spot shape;
obtaining the mask of the image to be processed according to the light spot superposition position and the light spot attribute comprises: determining the light spot shape based on a user-provided image or a user selection.
8. The method of adding light spots to an image according to claim 1, wherein the light spot attribute comprises light spot weight information;
obtaining the mask of the image to be processed according to the light spot superposition position and the light spot attribute further comprises: determining the light spot weight information representing the light spot brightness according to the pixel value of the pixel point at the light spot superposition position.
9. An apparatus for adding light spots to an image, the apparatus comprising:
the light spot adding module is used for determining a light spot superposition position according to an initial pixel value of a pixel of an image to be processed;
the acquisition module is used for obtaining a mask of the image to be processed according to the light spot superposition position and the light spot attribute, and the mask comprises a light spot area and a non-light spot area;
the first blurring processing module is used for performing first blurring processing on a first area of the image to be processed to obtain a first image, wherein the first area is a corresponding area of the light spot area of the mask corresponding to the image to be processed;
the second blurring processing module is used for performing second blurring processing on all original pixels of the image to be processed to obtain a second image;
and the fusion processing module is used for carrying out image fusion on the first image and the second image according to the mask of the image to be processed to obtain the image added with the light spots.
10. An electronic device, wherein the electronic device comprises:
a memory to store instructions; and
a processor for invoking the instructions stored in the memory to perform the method of adding light spots to an image according to any one of claims 1-8.
11. A computer-readable storage medium having stored thereon computer-executable instructions which, when executed by a processor, perform the method of adding light spots to an image according to any one of claims 1-8.
CN201910910315.2A 2019-09-25 2019-09-25 Method and device for adding light spots to image Pending CN112561777A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910910315.2A CN112561777A (en) 2019-09-25 2019-09-25 Method and device for adding light spots to image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910910315.2A CN112561777A (en) 2019-09-25 2019-09-25 Method and device for adding light spots to image

Publications (1)

Publication Number Publication Date
CN112561777A true CN112561777A (en) 2021-03-26

Family

ID=75029100

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910910315.2A Pending CN112561777A (en) 2019-09-25 2019-09-25 Method and device for adding light spots to image

Country Status (1)

Country Link
CN (1) CN112561777A (en)

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113421211A (en) * 2021-06-18 2021-09-21 Oppo广东移动通信有限公司 Method for blurring light spots, terminal device and storage medium
CN113421211B (en) * 2021-06-18 2024-03-12 Oppo广东移动通信有限公司 Method for blurring light spots, terminal equipment and storage medium
WO2023240452A1 (en) * 2022-06-14 2023-12-21 北京小米移动软件有限公司 Image processing method and apparatus, electronic device, and storage medium
WO2023245364A1 (en) * 2022-06-20 2023-12-28 北京小米移动软件有限公司 Image processing method and apparatus, electronic device, and storage medium
WO2023245363A1 (en) * 2022-06-20 2023-12-28 北京小米移动软件有限公司 Image processing method and apparatus, and electronic device and storage medium
CN117241131A (en) * 2023-11-16 2023-12-15 荣耀终端有限公司 Image processing method and device
CN117241131B (en) * 2023-11-16 2024-04-19 荣耀终端有限公司 Image processing method and device

Similar Documents

Publication Publication Date Title
CN112561777A (en) Method and device for adding light spots to image
CN110675310B (en) Video processing method and device, electronic equipment and storage medium
US9639945B2 (en) Depth-based application of image effects
US11663733B2 (en) Depth determination for images captured with a moving camera and representing moving features
US9591237B2 (en) Automated generation of panning shots
US10410327B2 (en) Shallow depth of field rendering
CN114245905A (en) Depth aware photo editing
US20190089910A1 (en) Dynamic generation of image of a scene based on removal of undesired object present in the scene
CN108234858B (en) Image blurring processing method and device, storage medium and electronic equipment
JP7333467B2 (en) Learning-Based Lens Flare Removal
US10992845B1 (en) Highlight recovery techniques for shallow depth of field rendering
US9734551B1 (en) Providing depth-of-field renderings
CN112927144A (en) Image enhancement method, image enhancement device, medium, and electronic apparatus
CN110751593A (en) Image blurring processing method and device
US11783454B2 (en) Saliency map generation method and image processing system using the same
Huang et al. AdvBokeh: Learning to adversarially defocus Blur
CN110580696A (en) Multi-exposure image fast fusion method for detail preservation
CN112184609B (en) Image fusion method and device, storage medium and terminal
Luo et al. Defocus to focus: Photo-realistic bokeh rendering by fusing defocus and radiance priors
CN114418897B (en) Eye spot image restoration method and device, terminal equipment and storage medium
US20220398704A1 (en) Intelligent Portrait Photography Enhancement System
US20230368340A1 (en) Gating of Contextual Attention and Convolutional Features
KR20230022153A (en) Single-image 3D photo with soft layering and depth-aware restoration
CN114387443A (en) Image processing method, storage medium and terminal equipment
CN115150606A (en) Image blurring method and device, storage medium and terminal equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination