CN112419218B - Image processing method and device and electronic equipment
- Publication number
- CN112419218B CN112419218B CN202011303615.3A CN202011303615A CN112419218B CN 112419218 B CN112419218 B CN 112419218B CN 202011303615 A CN202011303615 A CN 202011303615A CN 112419218 B CN112419218 B CN 112419218B
- Authority
- CN
- China
- Prior art keywords
- layer
- image
- processing
- layers
- value
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration using two or more images, e.g. averaging or subtraction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T1/00—General purpose image data processing
- G06T1/20—Processor architectures; Processor configuration, e.g. pipelining
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20212—Image combination
- G06T2207/20221—Image fusion; Image merging
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Image Processing (AREA)
Abstract
The application discloses an image processing method, an image processing apparatus, and an electronic device, belonging to the field of communication technology, which can solve the problem that an electronic device cannot capture a photo with a soft focus effect. The method includes the following steps: acquiring N layers corresponding to a first image, the N layers including a first layer with a blurring effect, a second layer with an edge dispersion effect, and a third layer with a solid color effect; and sequentially performing fusion processing on each layer based on a target object according to the hierarchical order of the N layers to obtain a second image with a soft focus effect. When the first of the N layers is processed, the target object is the first image; when any of the other N-1 layers is processed, the target object is the layer obtained from the previous fusion processing; N is a positive integer greater than 2. The embodiments of the application are applied to image processing scenarios.
Description
Technical Field
Embodiments of the application relate to the field of communication technology, and in particular to an image processing method, an image processing apparatus, and an electronic device.
Background
With the development of the imaging device industry, mobile phones are becoming products with professional-grade imaging capabilities. Public demand for mobile phone photography is no longer limited to basic daily records, but pursues more advanced artistic creation. The soft focus effect is one of the important techniques in traditional photographic art creation: through a special soft focus lens or a special soft focus filter lens, the captured photo shows a soft rendering in which object edges appear dispersed, which not only produces a distinctive artistic picture effect but also conceals facial imperfections to a certain extent, so it is favored by portrait subjects.
However, due to assembly and space constraints, electronic devices other than professional cameras cannot capture images with a soft focus effect by mounting a professional soft focus lens.
Disclosure of Invention
Embodiments of the application aim to provide an image processing method, an image processing apparatus, and an electronic device, which can solve the problem that an electronic device cannot capture a photo with a soft focus effect.
To solve the above technical problem, the application is implemented as follows:
In a first aspect, an embodiment of the present application provides an image processing method, including: acquiring N layers corresponding to a first image, the N layers including a first layer with a blurring effect, a second layer with an edge dispersion effect, and a third layer with a solid color effect; and sequentially performing fusion processing on each layer based on a target object according to the hierarchical order of the N layers to obtain a second image with a soft focus effect; wherein, when the first of the N layers is processed, the target object is the first image; when any of the other N-1 layers is processed, the target object is the layer obtained from the previous fusion processing; and N is a positive integer greater than 2.
In a second aspect, an embodiment of the present application further provides an image processing apparatus, including an acquisition module and a processing module. The acquisition module is configured to acquire N layers corresponding to a first image, the N layers including a first layer with a blurring effect, a second layer with an edge dispersion effect, and a third layer with a solid color effect. The processing module is configured to sequentially perform fusion processing on each layer based on a target object according to the hierarchical order of the N layers acquired by the acquisition module, so as to obtain a second image with a soft focus effect; wherein, when the first of the N layers is processed, the target object is the first image; when any of the other N-1 layers is processed, the target object is the layer obtained from the previous fusion processing; and N is a positive integer greater than 2.
In a third aspect, an embodiment of the present application provides an electronic device, including a processor, a memory, and a program or instructions stored on the memory and executable on the processor, the program or instructions implementing the steps of the image processing method according to the first aspect when executed by the processor.
In a fourth aspect, embodiments of the present application provide a readable storage medium having stored thereon a program or instructions which when executed by a processor perform the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and where the processor is configured to execute a program or instructions to implement a method according to the first aspect.
In the embodiments of the application, N layers corresponding to a first image are obtained, namely a first layer with a blurring effect, a second layer with an edge dispersion effect, and a third layer with a solid color effect, and fusion processing is sequentially performed on each layer based on a target object according to the hierarchical order of the N layers to obtain a second image with a soft focus effect; the target object is the first image when the first of the N layers is processed, and is the layer obtained from the previous fusion processing when any of the other N-1 layers is processed. Thus, after the user takes a picture with the electronic device, the electronic device can generate a picture with a soft focus effect by this method, realizing the same function as a camera fitted with a soft focus lens.
Drawings
Fig. 1 is a schematic flow chart of an image processing method according to an embodiment of the present application;
Fig. 2 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present application;
Fig. 3 is a schematic structural diagram of an electronic device according to an embodiment of the present application;
Fig. 4 is a schematic diagram of the hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
The following description of the embodiments of the present application will be made clearly and fully with reference to the accompanying drawings, in which it is evident that the embodiments described are some, but not all embodiments of the application. All other embodiments, which can be made by those skilled in the art based on the embodiments of the application without making any inventive effort, are intended to be within the scope of the application.
The terms "first," "second," and the like in the description and in the claims are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the terms so used are interchangeable where appropriate, so that embodiments of the present application can be implemented in orders other than those illustrated or described herein. Objects distinguished by "first," "second," etc. are generally of one type, and the number of such objects is not limited; for example, the first object may be one object or multiple objects. Furthermore, in the description and claims, "and/or" denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the associated objects.
The image processing method provided by the embodiment of the application can be applied to an image processing scene.
For example, in the related art, a user needs to shoot with a special soft focus lens or a professional soft focus filter lens to obtain an image with a soft focus effect. For the scenario in which a user wants to add a soft focus effect to an existing image, the related art requires the user to edit the photo on a computer with professional image editing software in order to turn an ordinary image into an image with a soft focus effect. However, cameras with special soft focus lenses or professional soft focus filter lenses are expensive and unaffordable for ordinary users, and professional image editing software requires image editing skills that an ordinary user may not have.
To address this problem, in the technical solution provided in the embodiments of the present application, when a user takes a photograph, or after an image is uploaded, the electronic device may extract the Red Green Blue (RGB) channel information of image 1 from the captured or uploaded image 1, and generate, according to the RGB channel information, a layer with a blurring effect, a layer with an edge dispersion effect, and a layer with a solid color effect. These layers are then fused with image 1 to generate an image 2 with a soft focus effect. An ordinary user can thus obtain an image with a soft focus effect (by shooting, or by modifying an ordinary image) through the electronic device without any additional hardware cost.
The image processing method provided by the embodiment of the application is described in detail below through specific embodiments and application scenes thereof with reference to the accompanying drawings.
As shown in fig. 1, an image processing method provided in an embodiment of the present application may include the following steps 201 and 202:
step 201, an image processing device acquires N layers corresponding to a first image.
Wherein the N layers include: a first layer with a blurring effect, a second layer with an edge dispersion effect, and a third layer with a solid color effect.
Optionally, in the embodiment of the present application, the solid color value corresponding to the third layer may be the average color value of the first image. Fusing a third layer filled with the average color value simulates the hue shift of soft focus and makes the hue of the whole second image tend toward uniformity.
Illustratively, the color values in the embodiments of the present application may be represented in RGB, or may be represented in other modes, such as the four-color printing mode (Cyan, Magenta, Yellow, Black; CMYK) or the Hue-Saturation-Value (HSV) mode.
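The following is a minimal sketch of how such a solid-color third layer could be built in Python with NumPy, assuming an 8-bit H×W×3 RGB array; the function name is illustrative and not taken from the patent.

```python
import numpy as np

def make_solid_color_layer(first_image: np.ndarray) -> np.ndarray:
    """Build a solid-color layer filled with the average color of the first image.

    Assumes an 8-bit H x W x 3 RGB array; the per-channel mean plays the role of
    the 'average color value' described above.
    """
    avg_color = first_image.reshape(-1, 3).mean(axis=0)       # mean R, G, B
    solid = np.empty_like(first_image)
    solid[:] = np.round(avg_color).astype(first_image.dtype)  # fill every pixel with the mean color
    return solid
```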
The first image may be an image captured by the camera of the electronic device when the user shoots with the electronic device. After the electronic device obtains the first image captured by the camera, it processes the first image with the image processing method provided by the embodiment of the application to generate an image with a soft focus effect, so the electronic device can capture an image with a soft focus effect without a soft focus lens or a professional soft focus filter lens being installed.
The first image may also be an image uploaded by the user to an image processing application installed on the electronic device. When the user wants to add a soft focus effect to a favorite image, the image can be processed by an application on the electronic device that implements the image processing method provided by the embodiment of the present application, generating an image with a soft focus effect.
Illustratively, the N layers may be generated based on the channel information of the RGB channels of the first image. That is, the electronic device extracts the channel information of the RGB channels of the first image and then generates the N layers based on that channel information. An RGB image may be represented by a three-dimensional matrix containing three layers, one per channel (the R channel, the G channel, and the B channel); the channel information includes the gray value of each channel, i.e., the gray value of the R, G, or B monochromatic light, which indicates brightness, ranges from 0 to 255, and is brighter the greater it is. Each layer contains the pixels within the rectangular image area. For example, a pure-color RGB image with a resolution of 400×300 pixels contains three single-channel layers with gray values R (400×300, 37), G (400×300, 60), and B (400×300, 70); that is, the image contains 400×300 pixels, and every pixel has the RGB value (37, 60, 70).
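As a small NumPy illustration of this layout (a height-by-width-by-channel array is assumed; the patent does not prescribe a storage order):

```python
import numpy as np

# The 400x300-pixel pure-color RGB image described above: every pixel is (37, 60, 70).
image = np.zeros((300, 400, 3), dtype=np.uint8)   # height x width x channels, 8-bit gray values
image[..., 0] = 37   # R channel layer
image[..., 1] = 60   # G channel layer
image[..., 2] = 70   # B channel layer

# The channel information is the gray value of each channel, in [0, 255].
r_layer, g_layer, b_layer = image[..., 0], image[..., 1], image[..., 2]
```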
The N layers may each be generated separately after the electronic device extracts the channel information of the RGB channels of the first image and processes that channel information with a specific algorithm; alternatively, following the hierarchical order (i.e., the generation order), a later-generated layer may be obtained by editing a previously generated layer.
And 202, the image processing device sequentially performs fusion processing on each layer based on the target object according to the hierarchical sequence of the N layers to obtain a second image with a soft focus effect.
Wherein, in the case of processing a first layer of the N layers, the target object is a first image; under the condition of processing other N-1 layers except the first layer in the N layers, the target object is the layer obtained after the previous fusion processing; n is a positive integer greater than 2.
After the electronic device acquires the N layers, fusion processing is sequentially performed on each layer based on the target object according to the generation sequence (i.e., the hierarchical sequence) of the N layers, so as to obtain a second image with a soft focus effect.
For example, according to the acquired channel information of the RGB channels of the first image, the electronic device obtains, after image processing, the first layer with the blurring effect, the second layer with the edge dispersion effect, and the third layer with the solid color effect, generated in the order: first layer, second layer, third layer. Sequentially fusing each layer based on the target object then proceeds as follows: the electronic device first fuses the first layer with the first image to obtain intermediate image 1, then fuses the second layer with intermediate image 1 to obtain intermediate image 2, and finally fuses the third layer with intermediate image 2 to obtain the second image.
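A minimal sketch of this sequential fusion is shown below. The patent does not fix the concrete blend operator used for each layer, so `fuse` is a placeholder and the weighted-average default is purely illustrative.

```python
import numpy as np

def average_blend(target: np.ndarray, layer: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """Illustrative default blend (an assumption; the patent leaves the operator open)."""
    mixed = (1.0 - alpha) * target.astype(np.float32) + alpha * layer.astype(np.float32)
    return np.clip(mixed, 0, 255).astype(np.uint8)

def apply_soft_focus(first_image: np.ndarray, layers: list, fuse=average_blend) -> np.ndarray:
    """Fuse the N layers in hierarchical (generation) order.

    The first layer is fused with the first image; every later layer is fused with
    the result of the previous fusion, and the final result is the second image.
    """
    target = first_image
    for layer in layers:
        target = fuse(target, layer)   # intermediate image 1, 2, ..., then the second image
    return target
```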
It should be noted that, besides the above fusion in sequence, the electronic device may fuse the N layers and the first image at the same time, which all belong to the protection scope of the embodiment of the present application.
In this way, N layers corresponding to a first image are obtained, namely a first layer with a blurring effect, a second layer with an edge dispersion effect, and a third layer with a solid color effect, and fusion processing is sequentially performed on each layer based on a target object according to the hierarchical order of the N layers to obtain a second image with a soft focus effect; the target object is the first image when the first of the N layers is processed, and is the layer obtained from the previous fusion processing when any of the other N-1 layers is processed. Thus, after the user takes a picture with the electronic device, the electronic device can generate a picture with a soft focus effect by this method, realizing the same function as a camera fitted with a soft focus lens.
Optionally, in the embodiment of the present application, the electronic device may acquire the first layer with the blurring effect through the following method.
Illustratively, in the step 201, the step of obtaining the first layer corresponding to the first image may include the following step 201a:
In step 201a, the image processing apparatus extracts the target pixels whose gray values are greater than a preset brightness value in the first image, and sequentially replaces the color value of each first pixel among the target pixels with the average color value of the first area corresponding to that first pixel, so as to obtain the first layer.
The first region includes pixels within a preset range corresponding to the first pixels.
For example, after obtaining the bright-portion information of the RGB channels, the electronic device generates intermediate layer 1 from it, where the bright-portion information is the pixels in the first image whose gray values are greater than the preset brightness value. The electronic device then performs Gaussian blur processing on intermediate layer 1 to obtain the first layer.
Gaussian blur processing is an image processing method in which each pixel in an image is replaced with the normally distributed (Gaussian-weighted) average of the surrounding pixels. The Gaussian blur radius is calculated as (L/100 + W/100)/2, where L is the number of pixels of the first image in the length direction and W is the number of pixels in the width direction.
For example, to make the first layer brighter, after Gaussian-blurring intermediate layer 1 to obtain intermediate layer 2, the electronic device may further combine the gray level of each pixel color in the processed intermediate layer 2 with the gray level of the corresponding pixel color in the first image by multiplication, taking the image whose pixel colors have the higher gray levels as the first layer. It will be appreciated that the effect on the processed image is simply that high-gray pixels are displayed while low-gray pixels are not, i.e., light colors are shown and dark colors are suppressed. The gray level (also called gradation or gray scale) denotes a brightness level: the higher the gray level, the richer and more vivid the displayed colors; the lower it is, the flatter and simpler the displayed colors.
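A hedged sketch of this first-layer construction, using OpenCV and NumPy, is given below. The brightness threshold and the final brightening step are assumptions (the text only names a preset brightness value and a brightening blend); a per-pixel maximum (lighten) stands in for that last step.

```python
import cv2
import numpy as np

def make_blur_layer(first_image: np.ndarray, brightness_threshold: int = 128) -> np.ndarray:
    """Sketch of the first layer: keep bright pixels, Gaussian-blur them, then brighten.

    `brightness_threshold` is illustrative; the patent only speaks of a preset
    brightness value. The blur radius follows the stated formula (L/100 + W/100) / 2.
    """
    h, w = first_image.shape[:2]
    gray = cv2.cvtColor(first_image, cv2.COLOR_RGB2GRAY)
    intermediate1 = first_image.copy()
    intermediate1[gray <= brightness_threshold] = 0          # keep only the bright portion

    radius = max(1, int(round((w / 100 + h / 100) / 2)))
    ksize = 2 * radius + 1                                   # OpenCV expects an odd kernel size
    intermediate2 = cv2.GaussianBlur(intermediate1, (ksize, ksize), 0)

    # Brightening step: keep the lighter of the blurred highlights and the original
    # (an assumption standing in for the blend described in the text).
    return np.maximum(intermediate2, first_image)
```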
In this way, the electronic device can obtain a first layer that is bright overall and has a hazy feel.
Optionally, in the embodiment of the present application, the second layer may include two sub-layers that both have an edge dispersion effect but differ in brightness, and the electronic device may obtain the darker sub-layer with the edge dispersion effect as follows.
Illustratively, the second layer includes a first sub-layer, which is a darker layer with an edge dispersion effect.
Illustratively, before the second image layer corresponding to the first image is obtained in the step 201, the image processing method provided in the embodiment of the present application may further include the following step 201b:
In step 201b, the image processing apparatus acquires the channel information of the RGB-B channel of the first image.
Illustratively, after the step 201b, in the step 201, the obtaining the second image layer corresponding to the first image may include the following steps 201c1 and 201c2:
In step 201c1, the image processing apparatus generates a first intermediate layer according to the channel information of the RGB-B channel.
In step 201c2, the image processing apparatus performs blurring processing and gray-level reduction processing on the first intermediate layer to generate a first sub-layer.
After obtaining the channel information of the RGB-B channel, the electronic device generates the first intermediate layer from the bright-portion information in that channel information. Block blurring (box blur) is then performed on the first intermediate layer to obtain the first sub-layer with an edge dispersion effect.
Illustratively, block blur processing is an image processing method that performs an overall blurring operation on the pixels contained in a rectangle of a certain size, taking that rectangle as a unit. The block blur radius is calculated as (L/90 + W/90)/2.
For example, to obtain a darker first sub-layer with an edge dispersion effect, after block-blurring the first intermediate layer, the electronic device may further use the first layer as a base color layer and compare each pixel in the first intermediate layer with the gray value of the corresponding pixel in the base color layer in each RGB channel: pixels whose gray values are smaller than those in the base color layer are kept, and pixels whose gray values are larger are replaced with the pixels of the base color layer, yielding the first sub-layer with a smaller average gray value (a darker effect).
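A hedged sketch along these lines (OpenCV box blur plus a per-channel darken against the base color layer) is shown below; the threshold used to pick the bright portion of the B channel is an assumption, since the patent only refers to "bright portion information".

```python
import cv2
import numpy as np

def make_dark_dispersion_layer(first_image: np.ndarray, base_layer: np.ndarray,
                               brightness_threshold: int = 128) -> np.ndarray:
    """Sketch of the first sub-layer: bright parts of the B channel, box-blurred,
    then darkened against the base color layer (the first layer).

    `brightness_threshold` is illustrative; the box-blur radius follows (L/90 + W/90) / 2.
    """
    h, w = first_image.shape[:2]
    b_channel = first_image[..., 2]                          # B channel of an RGB image
    intermediate = first_image.copy()
    intermediate[b_channel <= brightness_threshold] = 0      # keep only bright B-channel pixels

    radius = max(1, int(round((w / 90 + h / 90) / 2)))
    ksize = 2 * radius + 1
    blurred = cv2.blur(intermediate, (ksize, ksize))         # block (box) blur

    # Darken: per channel, keep whichever gray value is smaller.
    return np.minimum(blurred, base_layer)
```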
In this way, the electronic device can obtain a first sub-layer that is darker overall and has an edge dispersion effect.
Further, optionally, in the embodiment of the present application, the electronic device may obtain a second sub-layer that is brighter than the first sub-layer and also has an edge dispersion effect.
Illustratively, the second layer includes a second sub-layer; the brightness of the first sub-layer is smaller than that of the second sub-layer.
Illustratively, the brightness in embodiments of the present application may be represented in gray scale values.
Illustratively, in the step 201, the obtaining the second image layer corresponding to the first image may include the following steps 201d1 and 201d2:
In step 201d1, the image processing apparatus generates a second intermediate layer according to the channel information of the pixels in a second region of the first image.
Illustratively, the second layer includes the pixels in the first image whose gray values are greater than a preset value.
In step 201d2, the image processing apparatus performs a target operation on the second intermediate layer, and generates a second sub-layer.
The second region is the region of the first image in which the gray values exceed a preset gray value; specifically, the preset gray value may be 230. The target operation includes blurring processing and a first operation.
For example, the blurring processing may be block blur processing, in which the block blur radius is calculated as (L/100 + W/100)/2; the other specific processing details have already been explained in the embodiments of the present application and are not repeated here.
Illustratively, the first operation may include the following step 201e1 or step 201e2:
In step 201e1, in the case where the gray value of the second pixel in the second intermediate layer is greater than the intermediate gray value, the first operation includes a gray value increasing process.
In step 201e2, in the case where the gray value of the second pixel in the second intermediate layer is smaller than the intermediate gray value, the first operation includes a gray value reduction process.
The second pixel is a pixel in the second intermediate layer, and the intermediate gray value is the average gray value of the pixels of the second intermediate layer.
For example, to obtain a second sub-layer that is brighter than the first sub-layer and has softer colors, after obtaining the second intermediate layer, the electronic device may increase the gray values of the pixels in the second intermediate layer whose gray values are greater than the intermediate gray value and decrease the gray values of the pixels whose gray values are less than the intermediate gray value. In other words, the image contrast is increased: bright areas become brighter and dark areas become darker, similar to the effect of a soft light shining on the image.
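A hedged sketch of this step is given below; the contrast `gain` is an illustrative parameter, since the patent only states that values above the intermediate gray value are raised and values below it are lowered.

```python
import cv2
import numpy as np

def make_bright_dispersion_layer(first_image: np.ndarray, gray_threshold: int = 230,
                                 gain: float = 1.3) -> np.ndarray:
    """Sketch of the second sub-layer: very bright pixels, box-blurred, then
    contrast-stretched around the mean gray value of the blurred layer.
    """
    h, w = first_image.shape[:2]
    gray = cv2.cvtColor(first_image, cv2.COLOR_RGB2GRAY)
    intermediate = first_image.copy()
    intermediate[gray <= gray_threshold] = 0                 # second region: gray value > preset (230)

    radius = max(1, int(round((w / 100 + h / 100) / 2)))
    blurred = cv2.blur(intermediate, (2 * radius + 1, 2 * radius + 1)).astype(np.float32)

    mean_gray = blurred.mean()                               # intermediate gray value
    stretched = mean_gray + gain * (blurred - mean_gray)     # above the mean gets brighter, below gets darker
    return np.clip(stretched, 0, 255).astype(np.uint8)
```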
In this way, the electronic device can obtain a second sub-layer that is brighter overall, has softer colors, and has an edge dispersion effect.
According to the image processing method provided by the embodiment of the application, when the user takes a photo, the electronic device can extract the RGB channel information of the first image from the captured first image and, from that channel information, generate a first layer with a blurring effect, a second layer with an edge dispersion effect, and a third layer with a solid color effect. The layers are then fused in sequence with the first image according to their hierarchical order to generate a second image with a soft focus effect. An ordinary user can thus capture an image with a soft focus effect with an ordinary electronic device, without any additional hardware cost.
It should be noted that the image processing method provided in the embodiments of the present application may be executed by an image processing apparatus, or by a control module in the image processing apparatus for executing the image processing method. In the embodiments of the present application, the image processing apparatus is described by taking the case in which the image processing apparatus performs the image processing method as an example.
Fig. 2 is a schematic diagram of a possible structure of an image processing apparatus according to an embodiment of the present application, and as shown in fig. 2, an image processing apparatus 600 includes: an acquisition module 601 and a processing module 602, wherein: an acquiring module 601, configured to acquire N layers corresponding to the first image; the N layers include: a first layer with a blurring effect, a second layer with an edge dispersion effect and a third layer with a solid color effect; the processing module 602 is configured to sequentially perform fusion processing on each layer based on the target object according to the hierarchical sequence of the N layers acquired by the acquiring module 601, so as to obtain a second image with a soft focus effect; wherein, under the condition of processing a first layer of the N layers, the target object is a first image; under the condition of processing other N-1 layers except the first layer in the N layers, the target object is the layer obtained by the previous fusion processing; n is a positive integer greater than 2.
Optionally, the solid color value corresponding to the third layer is an average color value of the first image.
Optionally, the processing module 602 is specifically configured to extract the target pixels in the first image whose gray values are greater than the preset brightness value, and to sequentially replace the color value of each first pixel among the target pixels with the average color value of the first area corresponding to that first pixel, so as to obtain the first layer; the first area includes the pixels within a preset range corresponding to the first pixel.
Optionally, the second layer includes a first sub-layer; the acquiring module 601 is further configured to acquire channel information of an RGB-B channel of the first image; the obtaining module 601 is specifically configured to generate a first intermediate layer according to channel information of an RGB-B channel; and carrying out blurring processing and gray level reduction processing on the first intermediate layer to generate a first sub-layer.
Optionally, the second layer includes a second sub-layer, and the brightness of the first sub-layer is less than the brightness of the second sub-layer. The obtaining module 601 is specifically configured to generate a second intermediate layer according to the channel information of the pixels in a second region of the first image, and to perform a target operation on the second intermediate layer to generate the second sub-layer. The second region is the region of the first image in which the gray values exceed a preset gray value. The target operation includes blurring processing and a first operation: when the gray value of a second pixel in the second intermediate layer is greater than the intermediate gray value, the first operation includes a gray value increasing process; when the gray value of a second pixel in the second intermediate layer is smaller than the intermediate gray value, the first operation includes a gray value reduction process. The second layer includes the pixels in the first image whose gray values are greater than a preset value, and the intermediate gray value is the average gray value of the pixels of the second intermediate layer.
The image processing apparatus in the embodiments of the present application may be a device, or may be a component, an integrated circuit, or a chip in a terminal. The device may be a mobile electronic device or a non-mobile electronic device. By way of example, the mobile electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a wearable device, an ultra-mobile personal computer (UMPC), a netbook, or a personal digital assistant (PDA), and the non-mobile electronic device may be a server, a network attached storage (NAS), a personal computer (PC), a television (TV), an automated teller machine, a self-service machine, or the like; the embodiments of the present application are not specifically limited in this respect.
The image processing apparatus in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system, and is not specifically limited in the embodiments of the present application.
The image processing device provided by the embodiment of the present application can implement each process implemented by the above method embodiment, and in order to avoid repetition, details are not repeated here.
According to the image processing apparatus provided by the embodiment of the application, when a user takes a photo, the electronic device can extract the RGB channel information of the first image from the captured first image and, from that channel information, generate a first layer with a blurring effect, a second layer with an edge dispersion effect, and a third layer with a solid color effect. The layers are then fused in sequence with the first image according to their hierarchical order to generate a second image with a soft focus effect. An ordinary user can thus capture an image with a soft focus effect with an ordinary electronic device, without any additional hardware cost.
Optionally, as shown in fig. 3, an embodiment of the present application further provides an electronic device M00, including a processor M01, a memory M02, and a program or instructions stored in the memory M02 and executable on the processor M01. When executed by the processor M01, the program or instructions implement each process of the image processing method embodiment above and can achieve the same technical effects; to avoid repetition, the details are not repeated here.
It should be noted that, the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 4 is a schematic diagram of the hardware structure of an electronic device implementing various embodiments of the present application.
The electronic device 100 includes, but is not limited to: radio frequency unit 101, network module 102, audio output unit 103, input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, and processor 110.
Those skilled in the art will appreciate that the electronic device 100 may further include a power source (e.g., a battery) for powering the various components; the power source may be logically coupled to the processor 110 via a power management system, so that functions such as managing charging, discharging, and power consumption are performed through the power management system. The electronic device structure shown in fig. 4 does not constitute a limitation of the electronic device; the electronic device may include more or fewer components than shown, combine certain components, or use a different arrangement of components, which is not described in detail here.
The processor 110 is configured to obtain N layers corresponding to the first image; the N layers include: a first layer with a blurring effect, a second layer with an edge dispersion effect and a third layer with a solid color effect; the processor 110 is configured to sequentially perform fusion processing on each layer based on the target object according to the obtained hierarchical order of the N layers, so as to obtain a second image with a soft focus effect; wherein, under the condition of processing a first layer of the N layers, the target object is a first image; under the condition of processing other N-1 layers except the first layer in the N layers, the target object is the layer obtained by the previous fusion processing; n is a positive integer greater than 2.
In this way, N layers corresponding to a first image are obtained, namely a first layer with a blurring effect, a second layer with an edge dispersion effect, and a third layer with a solid color effect, and fusion processing is sequentially performed on each layer based on a target object according to the hierarchical order of the N layers to obtain a second image with a soft focus effect; the target object is the first image when the first of the N layers is processed, and is the layer obtained from the previous fusion processing when any of the other N-1 layers is processed. Thus, after the user takes a picture with the electronic device, the electronic device can generate a picture with a soft focus effect by this method, realizing the same function as a camera fitted with a soft focus lens.
Optionally, the processor 110 is specifically configured to extract the target pixels in the first image whose gray values are greater than the preset brightness value, and to sequentially replace the color value of each first pixel among the target pixels with the average color value of the first area corresponding to that first pixel, so as to obtain the first layer; the first area includes the pixels within a preset range corresponding to the first pixel.
In this way, the electronic device can obtain a first layer that is bright overall and has a hazy feel.
Optionally, the second layer includes a first sub-layer; the processor 110 is further configured to obtain channel information of an RGB-B channel of the first image; the processor 110 is specifically configured to generate a first intermediate layer according to channel information of the RGB-B channel; and carrying out blurring processing and gray level reduction processing on the first intermediate layer to generate a first sub-layer.
In this way, the electronic device can obtain a first sub-layer that is darker overall and has an edge dispersion effect.
Optionally, the second layer includes a second sub-layer, and the brightness of the first sub-layer is less than the brightness of the second sub-layer. The processor 110 is specifically configured to generate a second intermediate layer according to the channel information of the pixels in a second region of the first image, and to perform a target operation on the second intermediate layer to generate the second sub-layer. The second region is the region of the first image in which the gray values exceed a preset gray value. The target operation includes blurring processing and a first operation: when the gray value of a second pixel in the second intermediate layer is greater than the intermediate gray value, the first operation includes a gray value increasing process; when the gray value of a second pixel in the second intermediate layer is smaller than the intermediate gray value, the first operation includes a gray value reduction process. The second layer includes the pixels in the first image whose gray values are greater than a preset value, and the intermediate gray value is the average gray value of the pixels of the second intermediate layer.
In this way, the electronic device can obtain a second sub-layer that is brighter overall, has softer colors, and has an edge dispersion effect.
According to the electronic device provided by the embodiment of the application, when a user takes a photo, the electronic device can extract the RGB channel information of the first image from the captured first image and, from that channel information, generate a first layer with a blurring effect, a second layer with an edge dispersion effect, and a third layer with a solid color effect. The layers are then fused in sequence with the first image according to their hierarchical order to generate a second image with a soft focus effect. An ordinary user can thus capture an image with a soft focus effect with an ordinary electronic device, without any additional hardware cost.
It should be appreciated that in embodiments of the present application, the input unit 104 may include a graphics processor (Graphics Processing Unit, GPU) 1041 and a microphone 1042, the graphics processor 1041 processing image data of still pictures or video obtained by an image capturing device (e.g. a camera) in a video capturing mode or an image capturing mode. The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 107 includes a touch panel 1071 and other input devices 1072. The touch panel 1071 is also referred to as a touch screen. The touch panel 1071 may include two parts of a touch detection device and a touch controller. Other input devices 1072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and so forth, which are not described in detail herein. Memory 109 may be used to store software programs as well as various data including, but not limited to, application programs and an operating system. The processor 110 may integrate an application processor that primarily handles operating systems, user interfaces, applications, etc., with a modem processor that primarily handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The embodiment of the application also provides a readable storage medium, on which a program or an instruction is stored, which when executed by a processor, implements each process of the above image processing method embodiment, and can achieve the same technical effects, and in order to avoid repetition, a detailed description is omitted here.
Wherein the processor is a processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium such as a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a magnetic disk or an optical disk, and the like.
The embodiment of the application further provides a chip, which comprises a processor and a communication interface, wherein the communication interface is coupled with the processor, and the processor is used for running programs or instructions to realize the processes of the embodiment of the image processing method, and can achieve the same technical effects, so that repetition is avoided, and the description is omitted here.
It should be understood that the chips referred to in the embodiments of the present application may also be referred to as system-on-chip chips, chip systems, or system-on-chip chips, etc.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such a process, method, article, or apparatus. Without further limitation, an element preceded by "comprising a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element. Furthermore, it should be noted that the scope of the methods and apparatus in the embodiments of the present application is not limited to performing the functions in the order shown or discussed; depending on the functions involved, the functions may also be performed in a substantially simultaneous manner or in the reverse order. For example, the described methods may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) comprising several instructions for causing an electronic device (which may be a mobile phone, a computer, a server, an air conditioner, or a network device, etc.) to perform the method according to the embodiments of the present application.
The embodiments of the present application have been described above with reference to the accompanying drawings, but the present application is not limited to the above-described embodiments, which are merely illustrative and not restrictive, and many forms may be made by those having ordinary skill in the art without departing from the spirit of the present application and the scope of the claims, which are to be protected by the present application.
Claims (7)
1. An image processing method, the method comprising:
acquiring N layers corresponding to a first image; the N layers include: a first layer with a blurring effect, a second layer with an edge dispersion effect and a third layer with a solid color effect;
According to the hierarchical sequence of the N layers, sequentially carrying out fusion processing on each layer based on the target object to obtain a second image with a soft focus effect;
wherein, in the case of processing a first layer of the N layers, the target object is the first image; under the condition of processing other N-1 layers except the first layer in the N layers, the target object is the layer obtained after the previous fusion processing; n is a positive integer greater than 2;
The second layer comprises a first sub-layer; before the second image layer corresponding to the first image is acquired, the method further comprises the following steps:
acquiring channel information of RGB-B channels of the first image;
Obtaining a second image layer corresponding to the first image comprises the following steps:
generating a first intermediate layer according to the channel information of the RGB-B channel;
performing blurring processing and gray level reduction processing on the first intermediate layer to generate the first sub-layer;
the second layer comprises a second sub-layer; the brightness of the first sub-layer is smaller than the brightness of the second sub-layer;
Obtaining a second image layer corresponding to the first image comprises the following steps:
generating a second intermediate layer according to channel information of pixels in a second region in the first image;
performing a target operation on the second intermediate layer to generate the second sub-layer;
Wherein the second region is: in the first image, a region with a gray value exceeding a preset gray value; the target operation includes: blurring processing and a first operation;
in the case where the gradation value of the second pixel in the second intermediate layer is greater than the intermediate gradation value, the first operation includes a gradation value increasing process;
in the case where the gradation value of the second pixel in the second intermediate layer is smaller than the intermediate gradation value, the first operation includes a gradation value reduction process;
The second layer includes: pixels with gray values larger than a preset value in the first image; the intermediate gray value is an average gray value of pixels of the second intermediate layer.
2. The method of claim 1, wherein the solid color value corresponding to the third layer is an average color value of the first image.
3. The method of claim 1, wherein obtaining a first layer corresponding to the first image comprises:
extracting target pixels with gray values larger than a preset brightness value from the first image, and sequentially replacing the color value of each first pixel in the target pixels with an average color value corresponding to a first area corresponding to each first pixel to obtain the first image layer;
the first area comprises pixels in a preset range corresponding to the first pixels.
4. An image processing apparatus, characterized in that the apparatus comprises: the device comprises an acquisition module and a processing module;
the acquisition module is used for acquiring N layers corresponding to the first image; the N layers include: a first layer with a blurring effect, a second layer with an edge dispersion effect and a third layer with a solid color effect;
The processing module is used for sequentially carrying out fusion processing on each layer based on the target object according to the hierarchical sequence of the N layers acquired by the acquisition module to acquire a second image with a soft focus effect;
wherein, in the case of processing a first layer of the N layers, the target object is the first image; under the condition of processing other N-1 layers except the first layer in the N layers, the target object is the layer obtained after the previous fusion processing; n is a positive integer greater than 2;
the second layer comprises a first sub-layer;
The acquisition module is further used for acquiring channel information of RGB-B channels of the first image;
The acquisition module is specifically configured to generate a first intermediate layer according to channel information of the RGB-B channel;
performing blurring processing and gray level reduction processing on the first intermediate layer to generate the first sub-layer;
the second layer comprises a second sub-layer; the brightness of the first sub-layer is smaller than the brightness of the second sub-layer;
The acquisition module is specifically configured to generate a second intermediate layer according to channel information of pixels in a second area in the first image;
The acquiring module is specifically configured to perform a target operation on the second intermediate layer to generate the second sub-layer;
Wherein the second region is: in the first image, a region with a gray value exceeding a preset gray value; the target operation includes: blurring processing and a first operation;
in the case where the gradation value of the second pixel in the second intermediate layer is greater than the intermediate gradation value, the first operation includes a gradation value increasing process;
in the case where the gradation value of the second pixel in the second intermediate layer is smaller than the intermediate gradation value, the first operation includes a gradation value reduction process;
The second layer includes: pixels with gray values larger than a preset value in the first image; the intermediate gray value is an average gray value of pixels of the second intermediate layer.
5. The apparatus of claim 4, wherein the solid color value corresponding to the third layer is an average color value of the first image.
6. The apparatus of claim 4, wherein,
The processing module is specifically configured to extract a target pixel in the first image, where the gray value of the target pixel is greater than a preset brightness value, and replace the color value of each first pixel in the target pixel with an average color value corresponding to a first area corresponding to each first pixel in sequence, so as to obtain the first layer;
the first area comprises pixels in a preset range corresponding to the first pixels.
7. An electronic device comprising a processor, a memory and a program or instruction stored on the memory and executable on the processor, which when executed by the processor, implements the steps of the image processing method of any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011303615.3A CN112419218B (en) | 2020-11-19 | 2020-11-19 | Image processing method and device and electronic equipment |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011303615.3A CN112419218B (en) | 2020-11-19 | 2020-11-19 | Image processing method and device and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112419218A CN112419218A (en) | 2021-02-26 |
CN112419218B true CN112419218B (en) | 2024-06-28 |
Family
ID=74774350
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011303615.3A Active CN112419218B (en) | 2020-11-19 | 2020-11-19 | Image processing method and device and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112419218B (en) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113438412A (en) * | 2021-05-26 | 2021-09-24 | 维沃移动通信有限公司 | Image processing method and electronic device |
CN113792653B (en) * | 2021-09-13 | 2023-10-20 | 山东交通学院 | Method, system, equipment and storage medium for cloud detection of remote sensing image |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109068056B (en) * | 2018-08-17 | 2021-03-30 | Oppo广东移动通信有限公司 | Electronic equipment, filter processing method of image shot by electronic equipment and storage medium |
-
2020
- 2020-11-19 CN CN202011303615.3A patent/CN112419218B/en active Active
Non-Patent Citations (1)
Title |
---|
Color-grading case: creating a hazy soft-focus effect; IT部落窝; http://www.ittribalwo.com/article/2164.html; pages 1-3 *
Also Published As
Publication number | Publication date |
---|---|
CN112419218A (en) | 2021-02-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111654635A (en) | Shooting parameter adjusting method and device and electronic equipment | |
CN113808120B (en) | Image processing method, device, electronic equipment and storage medium | |
CN112419218B (en) | Image processing method and device and electronic equipment | |
CN105432068A (en) | Imaging device, imaging method, and image processing device | |
US10204403B2 (en) | Method, device and medium for enhancing saturation | |
CN112509068B (en) | Image dominant color recognition method and device, electronic equipment and storage medium | |
CN112508820B (en) | Image processing method and device and electronic equipment | |
CN112437237B (en) | Shooting method and device | |
CN113989387A (en) | Camera shooting parameter adjusting method and device and electronic equipment | |
CN111968605A (en) | Exposure adjusting method and device | |
WO2022262848A1 (en) | Image processing method and apparatus, and electronic device | |
CN113393391B (en) | Image enhancement method, image enhancement device, electronic apparatus, and storage medium | |
CN112468794B (en) | Image processing method and device, electronic equipment and readable storage medium | |
CN113703881B (en) | Display method, device and storage medium | |
CN114816619A (en) | Information processing method and electronic equipment | |
CN111866401A (en) | Shooting method and device and electronic equipment | |
CN112669229B (en) | Image processing method and device and electronic equipment | |
CN114143447B (en) | Image processing method and device and electronic equipment | |
CN114071016B (en) | Image processing method, device, electronic equipment and storage medium | |
CN117408927B (en) | Image processing method, device and storage medium | |
CN114782261B (en) | Image processing method and device, electronic equipment and readable storage medium | |
CN117745620B (en) | Image processing method and electronic equipment | |
CN113114930B (en) | Information display method, device, equipment and medium | |
CN117956296A (en) | Video shooting method and device | |
CN113949816A (en) | Shooting method and device and electronic equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant |