CN107509031B - Image processing method, image processing device, mobile terminal and computer readable storage medium - Google Patents

Info

Publication number
CN107509031B
CN107509031B
Authority
CN
China
Prior art keywords
depth
field
area
normal distribution
determining
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710775174.9A
Other languages
Chinese (zh)
Other versions
CN107509031A (en)
Inventor
丁佳铭
Current Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201710775174.9A
Publication of CN107509031A
Application granted
Publication of CN107509031B
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/61: Control of cameras or camera modules based on recognised objects
    • H04N23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16: Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161: Detection; Localisation; Normalisation
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules
    • H04N23/62: Control of parameters via user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)
  • Facsimile Image Signal Circuits (AREA)

Abstract

The embodiments of the application relate to an image processing method, an image processing device, a mobile terminal and a computer-readable storage medium. The method comprises the following steps: performing face recognition on a preview image to obtain a face area; determining a portrait area in the preview image according to the face area; and blurring the areas other than the portrait area while reducing their brightness. The image processing method, image processing device, mobile terminal and computer-readable storage medium can make the subject of the preview image prominent, improve the blurring effect, and give the blurred preview image a better visual display effect.

Description

Image processing method, image processing device, mobile terminal and computer readable storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to an image processing method and apparatus, a mobile terminal, and a computer-readable storage medium.
Background
Blurring is a digital photography technique that highlights the photographed subject by blurring the background while keeping the subject clear. When a user blurs a captured image with a mobile terminal, the user can choose to blur the preview image and view the blurring effect of the shot. Conventional preview-image blurring is limited by processing speed and power consumption, so parts of the background often escape blurring, the subject is not prominent, the blurring effect is poor, and the visual display effect of the picture suffers.
Disclosure of Invention
The embodiments of the application provide an image processing method, an image processing device, a mobile terminal and a computer-readable storage medium, which can make the subject of a preview image prominent, improve the blurring effect and give the blurred preview image a better visual display effect.
An image processing method comprising:
performing face recognition on the preview image to obtain a face area;
determining a portrait area in the preview image according to the face area;
blurring the other areas except the portrait area, and reducing the brightness of the other areas.
An image processing apparatus comprising:
the face recognition module is used for carrying out face recognition on the preview image to obtain a face area;
the region determining module is used for determining a portrait region in the preview image according to the face region;
and the blurring module is used for blurring other areas except the portrait area and reducing the brightness of the other areas.
A mobile terminal comprising a memory and a processor, the memory having stored therein a computer program which, when executed by the processor, causes the processor to carry out the method as described above.
A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the method as set forth above.
According to the image processing method, the image processing device, the mobile terminal and the computer-readable storage medium, face recognition is performed on the preview image to obtain the face area, the portrait area in the preview image is determined according to the face area, the areas other than the portrait area are blurred, and their brightness is reduced. The subject of the preview image can thus be made prominent, the blurring effect is improved, and the blurred preview image has a better visual display effect.
Drawings
FIG. 1 is a block diagram of a mobile terminal in one embodiment;
FIG. 2 is a flow diagram illustrating a method for image processing according to one embodiment;
FIG. 3 is a flow diagram illustrating the determination of portrait areas in one embodiment;
FIG. 4 is a diagram illustrating the calculation of depth information according to one embodiment;
FIG. 5 is a flowchart illustrating blurring of regions other than the portrait region in the preview image according to an embodiment;
FIG. 6 is a flow diagram illustrating an embodiment of determining a first depth of field range corresponding to a portrait area;
FIG. 7(a) is a depth histogram generated from depth information of a preview image in one embodiment;
FIG. 7(b) is a diagram illustrating a normal distribution curve fitted to a peak according to its peak value in one embodiment;
FIG. 8 is a flow diagram illustrating an embodiment of selecting a normal distribution range corresponding to a second average depth of field for a portrait area;
FIG. 9(a) is a diagram illustrating a normal distribution curve of a second average depth of field for a region of a human figure in an embodiment;
FIG. 9(b) is a diagram illustrating an embodiment of determining a normal distribution range corresponding to a second average depth of field for a portrait area;
FIG. 10 is a graph of sharpness changes produced in one embodiment;
FIG. 11 is a block diagram of an image processing apparatus in one embodiment;
FIG. 12 is a block diagram of a region determination module in one embodiment;
FIG. 13 is a schematic diagram of an image processing circuit in one embodiment.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first client may be referred to as a second client, and similarly, a second client may be referred to as a first client, without departing from the scope of the present application. Both the first client and the second client are clients, but they are not the same client.
Fig. 1 is a block diagram of a mobile terminal in one embodiment. As shown in fig. 1, the mobile terminal includes a processor, a non-volatile storage medium, an internal memory, a network interface, a display screen, and an input device, which are connected through a system bus. The non-volatile storage medium of the mobile terminal stores an operating system and a computer program, and the computer program is executed by the processor to implement the image processing method provided in the embodiments of the present application. The processor provides computing and control capabilities to support the operation of the entire mobile terminal. The internal memory provides an environment for the execution of the computer-readable instructions in the non-volatile storage medium. The network interface is used for network communication with a server. The display screen of the mobile terminal can be a liquid crystal display or an electronic ink display, and the input device can be a touch layer covering the display screen, a key, a trackball or a touch pad arranged on the housing of the mobile terminal, or an external keyboard, touch pad or mouse. The mobile terminal can be a mobile phone, a tablet computer, a personal digital assistant or a wearable device. Those skilled in the art will appreciate that the architecture shown in fig. 1 is only a block diagram of a portion of the architecture relevant to the present application and does not limit the mobile terminals to which the present application applies; a particular mobile terminal may include more or fewer components than shown, combine certain components, or have a different arrangement of components.
As shown in fig. 2, in one embodiment, there is provided an image processing method including the steps of:
and step 210, performing face recognition on the preview image to obtain a face area.
Specifically, the mobile terminal can capture, through a camera, a preview image that can be previewed on the display screen, and perform face recognition on the preview image to obtain the face area in it. The mobile terminal can extract image features of the preview image, analyze them through a preset face recognition model, judge whether the preview image contains a face, and, if so, determine the corresponding face area. The image features may include shape features, spatial features, edge features, and the like, where shape features refer to local shapes in the preview image, spatial features refer to the mutual spatial positions or relative directional relationships between regions segmented from the preview image, and edge features refer to the boundary pixels between two regions in the preview image.
In one embodiment, the face recognition model may be a decision model constructed in advance through machine learning. When the face recognition model is constructed, a large number of sample images may be obtained, including both face images and images without people. The sample images may be labeled according to whether each contains a face, and the labeled sample images are then used as the input for machine-learning training to obtain the face recognition model.
And step 220, determining a human image area in the preview image according to the human face area.
Specifically, after the mobile terminal determines the face region of the preview image, the portrait region in the preview image can be determined according to the face region, wherein the portrait region can include body regions such as limbs and trunk of a person besides the face region.
In one embodiment, the mobile terminal may obtain depth of field information, color information, and the like of the face area, and determine the portrait area in the preview image accordingly. Depth of field refers to the range of distances in front of and behind the focused subject within which a camera lens or other imaging device produces an acceptably sharp image. The mobile terminal may extract the pixels in the preview image whose depth of field information and color information are both close to those of the face area. Pixels with close depth of field information are those whose difference from the depth of field information of the face area is smaller than a first value; pixels with close color information are those whose RGB (red, green, blue color space) values fall within the same RGB range as the RGB values of the face area. The mobile terminal may select the corresponding RGB range according to the RGB values of the face area, and pixels falling within that range are judged to have color information close to that of the face area. The mobile terminal may then extract the pixels whose depth of field difference from the face area is smaller than the first value and whose RGB values fall within the selected RGB range, and determine the portrait contour from the extracted pixels.
From the extracted pixels, the mobile terminal may select the pixels whose depth of field difference from adjacent pixels is greater than a preset second value to form the portrait contour; a depth of field difference greater than the preset second value between two adjacent pixels indicates an abrupt change in depth of field, which can be used to distinguish the portrait area from the background area and the like.
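The pixel-selection rule described above (depth close to the face area's average, RGB within the selected range) can be sketched as follows. This is an illustrative NumPy sketch, not the patent's implementation; the function name and the `depth_thresh` / `rgb_range` thresholds (standing in for the patent's unspecified "first value" and RGB range) are assumptions.

```python
import numpy as np

def portrait_mask(depth, rgb, face_mask, depth_thresh=0.3, rgb_range=40):
    """Rough portrait mask: keep pixels whose depth is close to the face
    area's average depth AND whose RGB values fall in a band around the
    face area's average colour. Thresholds are illustrative.

    depth:     (H, W) float array of per-pixel depth of field
    rgb:       (H, W, 3) uint8 image
    face_mask: (H, W) bool array marking the detected face area
    """
    face_depth = depth[face_mask].mean()
    face_rgb = rgb[face_mask].reshape(-1, 3).mean(axis=0)
    # "first value" test: depth difference below the threshold
    depth_close = np.abs(depth - face_depth) < depth_thresh
    # RGB-range test: every channel within a band around the face colour
    rgb_close = np.all(np.abs(rgb.astype(int) - face_rgb) < rgb_range, axis=2)
    return depth_close & rgb_close
```

A contour pass (the "second value" test on adjacent-pixel depth jumps) would then trace the boundary of this mask.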
In step 230, blurring is performed on the other regions except the portrait region, and the brightness of the other regions is reduced.
Specifically, the mobile terminal may blur the areas other than the portrait area with a smoothing filter. In one embodiment, a Gaussian filter may be selected for this. The Gaussian filter is a linear smoothing filter that performs a weighted average over the image: the value of each pixel is obtained as a weighted average of itself and the other pixel values in its neighborhood. For the areas other than the portrait area, the size of the Gaussian filtering window may be selected according to the desired blurring degree; the larger the selected window, the greater the blurring degree. The weight of each pixel in the window is assigned according to a normal distribution, and the weighted average of each pixel is then recalculated.
After blurring the areas other than the portrait area, the mobile terminal may reduce their brightness. An empirical brightness value may be set in advance, and the brightness of the areas other than the portrait area reduced to that value. In an embodiment, a corresponding brightness adjustment ratio may also be selected according to the brightness of the preview image: if the preview image is brighter, a larger adjustment ratio, such as 30%, may be selected, and the brightness of the pixels outside the portrait area reduced by 30%; if the preview image is darker, a smaller adjustment ratio, such as 5%, may be selected, and the brightness of those pixels reduced by 5%; but the method is not limited thereto. The brightness adjustment may also be set manually by the user, with the brightness of the areas other than the portrait area reduced according to the user's chosen ratio, and so on. Reducing the brightness of the areas other than the portrait area can reduce missed blurring in the preview image and make the subject of the preview image more prominent.
It is to be understood that the mobile terminal may also decrease the brightness of the other areas except the portrait area first, and then perform blurring processing on the other areas except the portrait area, and is not limited to the above-described execution sequence.
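A minimal sketch of step 230, blurring and darkening everything outside the portrait area. It is illustrative only: a simple box filter stands in for the patent's Gaussian filter, and the window size and the 30%-style dimming factor are example values, not values from the patent.

```python
import numpy as np

def blur_and_dim_background(img, mask, ksize=5, dim=0.7):
    """Box-blur and darken everything outside the portrait mask.

    img:   (H, W, 3) float array with values in [0, 1]
    mask:  (H, W) bool, True inside the portrait area
    ksize: blur window size (a larger window means a stronger blur,
           matching the window-size rule in the text)
    dim:   brightness factor for the background (0.7 = reduce by 30%)

    A box filter stands in for the patent's Gaussian filter; a real
    implementation would use a normally-weighted window.
    """
    pad = ksize // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    blurred = np.zeros_like(img)
    for dy in range(ksize):           # accumulate the ksize x ksize window
        for dx in range(ksize):
            blurred += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    blurred /= ksize * ksize
    # keep the portrait untouched; blur then dim everything else
    return np.where(mask[..., None], img, blurred * dim)
```

As the text notes, the two background operations commute: dimming first and blurring second gives an equivalent pipeline.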
According to the image processing method, face recognition is performed on the preview image to obtain the face area, the portrait area in the preview image is determined according to the face area, the areas other than the portrait area are blurred, and their brightness is reduced. The subject of the preview image can thus be made prominent, the blurring effect is improved, and the blurred preview image has a better visual display effect.
As shown in fig. 3, in one embodiment, the step 220 of determining a portrait area in the preview image according to the face area includes the following steps:
step 302, obtaining depth information of the preview image.
Specifically, the mobile terminal may obtain the depth of field information of each pixel in the preview image. In one embodiment, the mobile terminal may be provided with two rear cameras, a first camera and a second camera, which may be arranged on the same horizontal line (side by side) or on the same vertical line (one above the other). In this embodiment, the first camera and the second camera may have different resolutions: the first camera may be a higher-resolution camera used mainly for imaging, and the second camera may be a lower-resolution auxiliary camera used for acquiring the depth of field information of the captured image.
Furthermore, the mobile terminal can acquire a first image of a scene through the first camera, acquire a second image of the same scene through the second camera, correct and calibrate the first image and the second image, and synthesize the corrected and calibrated first image and the calibrated second image to obtain a preview image. The mobile terminal can generate a parallax image according to the corrected and calibrated first image and the second image, and then generate a depth map of the preview image according to the parallax image, wherein the depth map can contain depth information of each pixel point in the preview image, in the depth map, areas of similar depth information can be filled with the same color, and the color change can reflect the change of the depth. In one embodiment, the mobile terminal may calculate a correction parameter according to the optical center distance of the first camera and the second camera, the height difference of the optical centers on the horizontal line, the height difference of the lenses of the two cameras, and the like, and correct and calibrate the first image and the second image according to the correction parameter.
The mobile terminal calculates the parallax of the same object in the first image and the second image, and obtains the depth of field information of the object in the preview image from the parallax, where parallax refers to the difference in direction when the same object is observed from two points. Fig. 4 is a diagram illustrating the calculation of depth information in an embodiment. As shown in fig. 4, the first camera and the second camera are arranged on the same horizontal line with their main optical axes parallel; OL and OR are the optical centers of the first camera and the second camera, respectively, and the shortest distance from an optical center to its image plane is the focal length f. If P is a point in the world coordinate system, its imaging points on the left and right image planes are PL and PR, the distances from PL and PR to the left edges of their respective image planes are XL and XR, and the parallax of P is XL - XR or XR - XL. The distance between the optical center OL of the first camera and the optical center OR of the second camera is b, and the depth of field Z of the point P can be calculated from b, the focal length f, and the parallax XL - XR or XR - XL, as shown in formula (1):
Z = (b × f) / (XL - XR)  or  Z = (b × f) / (XR - XL)    (1)
The mobile terminal can perform feature point matching between the first image and the second image: it extracts feature points from the first image and finds the best matching points on the corresponding lines of the second image. Since a feature point of the first image and its best matching point in the second image are imaging points of the same world point, their parallax can be calculated to generate a parallax image, and the depth of field information of each pixel in the preview image is then calculated according to formula (1).
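Formula (1) can be expressed as a small helper, under the assumptions above (parallel optical axes, rectified images); the function name and the unit choices are illustrative.

```python
def depth_from_disparity(b, f, xl, xr):
    """Depth Z of a point from formula (1): Z = b * f / |XL - XR|.

    b:  baseline, the distance between the two optical centres OL and OR
    f:  focal length
    xl, xr: horizontal image positions of the point's two imaging points
    Units must be consistent; e.g. b in metres with f and the disparity
    in the same pixel units gives Z in metres.
    """
    disparity = abs(xl - xr)
    if disparity == 0:
        return float("inf")  # zero parallax: point effectively at infinity
    return b * f / disparity
```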
In other embodiments, the depth information of the preview image may be obtained in other manners, for example, the depth information of the preview image is calculated by using a structured light (structured light) or Time of flight (TOF), and the like, which is not limited to the above-mentioned manners.
And step 304, calculating a first average depth of field of the face area according to the depth of field information.
Specifically, the mobile terminal can acquire depth of field information of each pixel point in a face region of the preview image, and calculate a first average depth of field of the face region.
Step 306, acquiring color information of the face region.
Specifically, the color information of the face area may include the RGB values of each pixel in the face area. The skin color of the face area is detected according to these RGB values, a corresponding RGB range may be selected according to the skin color, and pixels whose color information is close to that of the face area are selected according to the RGB range.
And 308, determining a portrait area in the preview image according to the first average depth of field and the color information.
Specifically, the mobile terminal can extract a pixel point of which the difference between the depth of field information and the first average depth of field of the face region is smaller than a first numerical value and the RGB value belongs to the selected RGB range, and determine a portrait contour from the extracted pixel point, so as to determine the portrait region in the preview image.
In this embodiment, the portrait area in the preview image is determined according to the first average depth of field and the color information of the face area, so that the determined portrait area is more accurate, and the visual display effect of the preview image after blurring processing is better.
As shown in fig. 5, in an embodiment, the blurring processing on the areas other than the portrait area includes the following steps:
step 502, selecting a first depth of field range corresponding to the portrait area according to the depth of field information.
Specifically, the mobile terminal may select a first depth of field range corresponding to the portrait area according to depth of field information of each pixel point included in the portrait area, where the first depth of field range may be a depth of field range in which blurring is not performed, and all pixel points belonging to the first depth of field range in the preview image are not subjected to blurring.
In step 504, a second depth of field range of the region to be blurred is determined according to the first depth of field range.
Specifically, the mobile terminal may determine a second depth-of-field range to be blurred according to the selected first depth-of-field range not to be blurred, and pixel points belonging to the second depth-of-field range constitute a region to be blurred in the preview image, where the region to be blurred generally belongs to other regions except for the portrait region. In one embodiment, the corresponding blurring degree may be adjusted according to the depth information of the pixel point, and when the depth of field is within the second depth of field range and is farther from the first depth of field range, the blurring degree may be higher, but is not limited thereto.
And step 506, performing blurring processing on the to-be-blurred region according to the second depth-of-field range.
In this embodiment, the depth of field range to be blurred in the preview image can be accurately selected according to the depth of field of the portrait area, so that the blurring effect can be improved, and the visual display effect of the blurred image is better.
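The idea noted above, that the blurring degree may grow as a pixel's depth of field moves farther from the first depth of field range, can be sketched as a mapping from depth to blur-window size. The linear growth rate and the cap below are illustrative assumptions, not values from the patent.

```python
def blur_window_size(depth, keep_range, max_ksize=15):
    """Map a pixel's depth of field to a blur-window size: no blur inside
    the first (kept) depth range, and a window that grows with the
    distance from that range, capped at max_ksize. The growth rate is an
    illustrative choice."""
    lo, hi = keep_range
    if lo <= depth <= hi:
        return 1                  # inside the first depth range: keep sharp
    dist = lo - depth if depth < lo else depth - hi
    k = 3 + 2 * int(dist)         # odd window sizes, growing with distance
    return min(k, max_ksize)
```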
As shown in fig. 6, in an embodiment, the step 502 of selecting the first depth of field range corresponding to the portrait area according to the depth of field information includes the following steps:
step 602, a depth histogram is generated according to the depth information.
Specifically, the depth-of-field histogram may be used to represent the number of pixels having a certain depth of field in the image, and the depth-of-field histogram describes the distribution of the pixels in the image at each depth of field. The mobile terminal obtains the depth of field information of each pixel point in the preview image, can count the number of the pixel points corresponding to each depth of field value, and generates a depth of field histogram of the preview image. Fig. 7(a) is a depth histogram generated according to depth information of a preview image in one embodiment. As shown in fig. 7(a), the horizontal axis of the depth histogram represents depth, and the vertical axis represents the number of pixels, and the depth histogram describes the distribution of pixels in the preview image at each depth.
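Generating the depth of field histogram of step 602 is a direct count of pixels per depth value, as in Fig. 7(a); a minimal NumPy sketch, where the bin count is an illustrative choice:

```python
import numpy as np

def depth_histogram(depth_map, bins=64):
    """Count pixels per depth of field value: the horizontal axis is
    depth, the vertical axis the number of pixels."""
    counts, edges = np.histogram(depth_map.ravel(), bins=bins)
    centers = (edges[:-1] + edges[1:]) / 2   # one depth value per bin
    return centers, counts
```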
Step 604, obtaining each peak of the depth histogram and its corresponding peak value.
Specifically, the mobile terminal may determine each peak of the depth histogram and the peak value corresponding to each peak, where a peak refers to a local maximum of the amplitude within a segment of the wave formed by the depth histogram, which may be found by computing the first-order difference at each point of the histogram, and a peak value refers to the maximum value on a peak.
And step 606, drawing a normal distribution curve fitting the corresponding peak according to the peak value.
Specifically, the mobile terminal can draw a normal distribution curve fitting each peak according to its peak value. A normal distribution is determined by two values: the mathematical expectation μ and the parameter σ (the standard deviation). The expectation μ is the location parameter of the normal distribution and describes where the distribution is centered; the distribution is symmetric about the axis x = μ, and its expectation, mean, median and mode are all equal to μ. σ describes the dispersion of the data: the larger σ, the more dispersed the data and the flatter the curve; the smaller σ, the more concentrated the data and the narrower the curve, so σ is also called the shape parameter of the normal distribution. After obtaining each peak in the depth of field histogram and its peak value, the mobile terminal can determine the range of depth values each peak spans on the horizontal axis and calculate the expectation and σ of the fitted normal distribution curve, thereby drawing a normal distribution curve fitting the corresponding peak.
FIG. 7(b) is a diagram illustrating an embodiment of a normal distribution curve corresponding to a peak according to the peak. As shown in fig. 7(b), each peak and the corresponding peak of the depth histogram are obtained, and a normal distribution curve conforming to the corresponding peak is drawn according to the peak of each peak, so that a curve 720 is finally obtained, where the curve 720 is formed by combining the normal distribution curves of a plurality of fitted peaks in the depth histogram.
Step 608, determining a normal distribution range corresponding to the portrait area according to the normal distribution curve, and taking the normal distribution range as a first depth-of-field range corresponding to the portrait area.
As shown in fig. 8, in one embodiment, the step of determining the normal distribution range corresponding to the portrait area according to the normal distribution curve includes the steps of:
at step 802, a second average depth of field for the portrait area is calculated.
Specifically, after the mobile terminal determines the portrait area of the preview image, the depth of field information of each pixel point in the portrait area can be acquired from the depth of field image, and the second average depth of field of the portrait area is calculated.
And step 804, searching a normal distribution curve of the second average depth of field in the depth of field histogram.
Specifically, after the mobile terminal calculates the second average depth of field of the portrait area of the preview image, it can find the position of that value in the depth of field histogram and determine the peak corresponding to it, thereby determining the normal distribution curve, fitted to that peak, on which the second average depth of field lies. FIG. 9(a) is a diagram illustrating the normal distribution curve of the second average depth of field of the portrait area in an embodiment. As shown in fig. 9(a), if the second average depth of field calculated for the portrait area is 85 meters, its position in the depth of field histogram is the position pointed to by the arrow, and it can be determined that the second average depth of field lies on the normal distribution curve corresponding to the second peak of the depth of field histogram.
At step 806, the variance of the normal distribution curve is obtained.
And 808, determining a normal distribution range corresponding to the second average depth of field according to the variance.
Specifically, the mobile terminal may obtain the standard deviation σ and the mathematical expectation μ of the normal distribution curve on which the second average depth of field of the portrait area lies in the depth of field histogram, and determine the normal distribution range corresponding to the second average depth of field of the portrait area according to the 3σ principle of the normal distribution. In a normal distribution, the probability P of any point falling within μ ± σ (μ − σ < X < μ + σ) is 68.26%, the probability of falling within μ ± 2σ (μ − 2σ < X < μ + 2σ) is 95.45%, and the probability of falling within μ ± 3σ (μ − 3σ < X < μ + 3σ) is 99.73%; it can thus be seen that in a normal distribution the data fall substantially within the range μ ± 3σ. After the mobile terminal obtains the standard deviation σ and the mathematical expectation μ of the normal distribution curve on which the second average depth of field lies in the depth of field histogram, the range of depths of field within μ ± 3σ on that curve can be selected as the normal distribution range, and the normal distribution range is used as the first depth of field range corresponding to the portrait area, namely the depth of field range not subjected to blurring processing.
FIG. 9(b) is a diagram illustrating, in one embodiment, determining the normal distribution range corresponding to the second average depth of field of the portrait area. As shown in fig. 9(b), if the second average depth of field calculated by the mobile terminal for the portrait area is 85 meters, the position of the second average depth of field in the depth of field histogram can be found as the position pointed to by the arrow, and it can be determined that the second average depth of field lies on the normal distribution curve corresponding to the second peak of the depth of field histogram. The standard deviation and mathematical expectation of this normal distribution curve can be obtained, and the range of depths of field within μ ± 3σ on the curve is selected as the normal distribution range 902; the normal distribution range 902 is the first depth of field range corresponding to the portrait area, that is, the depth of field range not subjected to blurring processing.
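In code, the μ ± 3σ selection reduces to a pair of bounds (a sketch; the function name and example values are illustrative):

```python
def first_depth_range(mu, sigma):
    """Depth-of-field range kept sharp: the interval mu ± 3*sigma
    covers ~99.73% of a normally distributed portrait depth cluster."""
    return (mu - 3 * sigma, mu + 3 * sigma)

print(first_depth_range(85.0, 2.0))  # (79.0, 91.0)
```

Everything outside these bounds falls into the second depth of field range, i.e. the region to be blurred.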
In this embodiment, a depth of field histogram is generated according to depth of field information of a preview image, a closest normal distribution curve is fitted according to a peak value of each peak of the depth of field histogram, and then the normal distribution curve and a corresponding normal distribution range are searched according to an average depth of field of a portrait area, so that it can be ensured that areas close to the depth of field information of the portrait area are not subjected to blurring processing, the depth of field range to be blurred can be accurately determined, the blurring effect can be improved, and the visual display effect of the blurred image is better.
In one embodiment, the step 506 of blurring the region to be blurred according to the second depth of field includes: and generating a definition change diagram according to the second depth of field range, and blurring the region to be blurred according to the definition change diagram.
Specifically, after determining the first depth of field range in which blurring is not performed and the second depth of field range of the region to be blurred, the mobile terminal may generate the definition change map, where the second depth of field range may include a first portion smaller than the first depth of field range and a second portion larger than the first depth of field range. In the definition change map, when the depth of field is smaller than the first depth of field range, definition and depth of field are positively correlated, and the definition increases as the depth of field increases; when the depth of field is larger than the first depth of field range, definition and depth of field are negatively correlated, and the definition decreases as the depth of field increases. Thus the definition in the first portion of the second depth of field range increases with increasing depth of field, the definition in the second portion decreases with increasing depth of field, and the definition reaches its highest value within the first depth of field range. The definition corresponding to each depth of field can be determined from the definition change map, so that the corresponding degree of blurring can be adjusted according to the depth of field information of the pixel points in the preview image: the lower the definition, the higher the degree of blurring.
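The piecewise relationship described above might be sketched as follows (the linear ramps and the clamp to [0, 1] are assumptions for illustration; the patent does not prescribe a specific curve shape):

```python
def definition(depth, near, far):
    """Piecewise-linear definition (sharpness) map: rises toward the
    in-focus first depth-of-field range [near, far], equals 1.0 inside
    it, and falls off beyond it."""
    if depth < near:   # first portion: positive correlation with depth
        return max(0.0, 1.0 - (near - depth) / near)
    if depth > far:    # second portion: negative correlation with depth
        return max(0.0, 1.0 - (depth - far) / far)
    return 1.0         # inside the first depth-of-field range

print(definition(40.0, 80.0, 90.0))   # 0.5  (first portion)
print(definition(85.0, 80.0, 90.0))   # 1.0  (in focus)
print(definition(135.0, 80.0, 90.0))  # 0.5  (second portion)
```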
In one embodiment, the size of the window for Gaussian filtering can be selected according to the definition change map: for the portion of the area to be blurred with higher definition, a smaller window can be selected for Gaussian filtering, and for the portion with lower definition, a larger window can be selected for Gaussian filtering.
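One plausible mapping from definition to Gaussian window size (an assumption for illustration — the patent gives no formula): lower definition selects a larger, odd-sized window, and full definition selects a 1×1 window, i.e. no blur.

```python
def gaussian_window(defn, max_size=21):
    """Map a definition value in [0, 1] to an odd kernel size:
    defn=1.0 -> 1 (no blur), defn=0.0 -> max_size (strongest blur)."""
    size = int((1.0 - defn) * max_size)
    return max(1, size | 1)  # force odd, at least 1

print(gaussian_window(1.0), gaussian_window(0.0))  # 1 21
```

The resulting size could then be passed as the kernel size of a Gaussian blur (OpenCV's `cv2.GaussianBlur`, for instance, requires odd kernel dimensions).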
FIG. 10 is a definition change map generated in one embodiment. As shown in fig. 10, the mobile terminal selects a first depth of field range 1006 corresponding to the portrait area and determines a second depth of field range of the area to be blurred, where the second depth of field range may include a first portion 1002 smaller than the first depth of field range 1006 and a second portion 1004 larger than the first depth of field range 1006. In the definition change map, in the first portion 1002 of the second depth of field range, definition and depth of field are positively correlated and the definition increases with increasing depth of field; the definition reaches its highest value within the first depth of field range 1006; and in the second portion 1004 of the second depth of field range, definition and depth of field are negatively correlated and the definition decreases with increasing depth of field. In one embodiment, the definition change rates of the first portion 1002 and the second portion 1004 may also be selected according to the first depth of field range 1006 of the portrait area: when the first depth of field range 1006 is smaller, the definition change rate of the first portion 1002 may be larger and that of the second portion 1004 smaller; when the first depth of field range 1006 is larger, the definition change rate of the first portion 1002 may be smaller and that of the second portion 1004 larger; and when the first depth of field range 1006 lies in the middle of the depth of field histogram, the definition change rates of the first portion 1002 and the second portion 1004 may be similar, but the embodiment is not limited thereto.
In this embodiment, a definition change map may be generated, and the area to be blurred of the preview image is blurred according to the definition change map, and the definition changes along with the change of the depth of field, so that the depth of field range to be blurred and the corresponding blurring degree may be accurately determined, the blurring effect may be improved, and the visual display effect of the blurred image is better.
In one embodiment, there is provided an image processing method including the steps of:
and (1) carrying out face recognition on the preview image to obtain a face area.
And (2) acquiring the depth of field information of the preview image, calculating a first average depth of field of the face region according to the depth of field information, acquiring color information of the face region, and determining the portrait region in the preview image according to the first average depth of field and the color information.
And (3) generating a depth of field histogram according to the depth of field information, acquiring each peak of the depth of field histogram and a corresponding peak, and drawing a normal distribution curve which accords with the corresponding peak according to the peak.
And (4) calculating a second average depth of field of the portrait area, searching a normal distribution curve of the second average depth of field in the depth of field histogram, acquiring the variance of the normal distribution curve, determining a normal distribution range corresponding to the second average depth of field according to the variance, and taking the normal distribution range as a first depth of field range corresponding to the portrait area.
And (5) determining a second depth of field range of the area to be blurred according to the first depth of field range.
And (6) generating a definition change map according to the second depth of field range, and blurring the region to be blurred according to the definition change map.
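Steps (2)–(5) above can be illustrated as one toy end-to-end pass (σ is assumed known here; in the method it comes from the normal distribution curve fitted to the histogram, and all names are illustrative):

```python
import numpy as np

def blur_decision(depth_map, portrait_mask, sigma=3.0):
    """Average portrait depth -> mu ± 3*sigma first depth-of-field
    range -> boolean map of pixels to blur (True = blur)."""
    mu = depth_map[portrait_mask].mean()
    lo, hi = mu - 3 * sigma, mu + 3 * sigma
    return (depth_map < lo) | (depth_map > hi)

depth_map = np.array([[84.0, 86.0],
                      [10.0, 300.0]])
portrait_mask = np.array([[True, True],
                          [False, False]])
print(blur_decision(depth_map, portrait_mask))
# pixels at depths 10 and 300 fall outside the first range and are blurred
```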
In this embodiment, the brightness of the regions other than the portrait region can be reduced and missed blurring in the preview image can be reduced, so that the main subject of the preview image is more prominent; the depth of field range to be blurred in the preview image is accurately selected according to the depth of field of the portrait area, the blurring effect can be improved, and the visual display effect of the blurred image is better.
As shown in fig. 11, in one embodiment, an image processing apparatus 1100 is provided that includes a face recognition module 1110, a region determination module 1120, and a blurring module 1130.
The face recognition module 1110 is configured to perform face recognition on the preview image to obtain a face region.
The region determining module 1120 is configured to determine a portrait region in the preview image according to the face region.
The blurring module 1130 is configured to perform blurring on other areas except the portrait area and reduce the brightness of the other areas.
The image processing device performs face recognition on the preview image to obtain a face area, determines the portrait area in the preview image according to the face area, performs blurring processing on other areas except the portrait area, and reduces the brightness of the other areas, so that the main body of the preview image is prominent, the blurring effect is improved, and the visual display effect of the blurred preview image is better.
As shown in fig. 12, in one embodiment, the region determining module 1120 includes a depth of field acquiring unit 1122, a first calculating unit 1124, a color acquiring unit 1126, and a region determining unit 1128.
The depth-of-field acquisition unit 1122 is configured to acquire depth-of-field information of the preview image.
The first calculating unit 1124 is configured to calculate a first average depth of field of the face region according to the depth of field information.
A color obtaining unit 1126, configured to obtain color information of the face region.
An area determining unit 1128, configured to determine a portrait area in the preview image according to the first average depth and the color information.
In this embodiment, the portrait area in the preview image is determined according to the first average depth of field and the color information of the face area, so that the determined portrait area is more accurate, and the visual display effect of the preview image after blurring processing is better.
In one embodiment, the blurring module 1130 includes a selecting unit, a depth determining unit, and a blurring unit.
And the selection unit is used for selecting a first depth of field range corresponding to the portrait area according to the depth of field information.
And the depth of field determining unit is used for determining a second depth of field range of the area to be blurred according to the first depth of field range.
And the blurring unit is used for blurring the area to be blurred according to the second depth-of-field range.
In this embodiment, the depth of field range to be blurred in the preview image can be accurately selected according to the depth of field of the portrait area, so that the blurring effect can be improved, and the visual display effect of the blurred image is better.
In one embodiment, the selecting unit includes a generating subunit, a peak obtaining subunit, a drawing subunit, and a determining subunit.
And the generating subunit is used for generating a depth of field histogram according to the depth of field information.
And the peak acquisition subunit is used for acquiring each peak of the depth histogram and the corresponding peak.
And the drawing subunit is used for drawing a normal distribution curve which accords with the corresponding peak according to the peak value.
And the determining subunit is used for determining a normal distribution range corresponding to the portrait area according to the normal distribution curve, and taking the normal distribution range as a first depth-of-field range corresponding to the portrait area.
In an embodiment, the determining subunit is further configured to calculate a second average depth of field of the portrait area, find a normal distribution curve of the second average depth of field in the depth-of-field histogram, obtain a variance of the normal distribution curve, and determine a normal distribution range corresponding to the second average depth of field according to the variance.
In this embodiment, a depth of field histogram is generated according to depth of field information of a preview image, a closest normal distribution curve is fitted according to a peak value of each peak of the depth of field histogram, and then the normal distribution curve and a corresponding normal distribution range are searched according to an average depth of field of a portrait area, so that it can be ensured that areas close to the depth of field information of the portrait area are not subjected to blurring processing, the depth of field range to be blurred can be accurately determined, the blurring effect can be improved, and the visual display effect of the blurred image is better.
In an embodiment, the blurring unit is further configured to generate a sharpness variation map according to the second depth-of-field range, and perform blurring processing on the region to be blurred according to the sharpness variation map.
In this embodiment, a definition change map may be generated, and the area to be blurred of the preview image is blurred according to the definition change map, and the definition changes along with the change of the depth of field, so that the depth of field range to be blurred and the corresponding blurring degree may be accurately determined, the blurring effect may be improved, and the visual display effect of the blurred image is better.
The embodiment of the application also provides the mobile terminal. The mobile terminal includes an Image Processing circuit, which may be implemented using hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 13 is a schematic diagram of an image processing circuit in one embodiment. As shown in fig. 13, for convenience of explanation, only aspects of the image processing technique related to the embodiment of the present application are shown.
As shown in fig. 13, the image processing circuit includes an ISP processor 1340 and a control logic 1350. The image data captured by the imaging device 1310 is first processed by the ISP processor 1340, and the ISP processor 1340 analyzes the image data to capture image statistics that may be used to determine and/or control one or more parameters of the imaging device 1310. The imaging device 1310 may include a camera with one or more lenses 1312 and an image sensor 1314. The image sensor 1314 may include an array of color filters (e.g., Bayer filters), and the image sensor 1314 may acquire light intensity and wavelength information captured with each imaging pixel of the image sensor 1314 and provide a set of raw image data that may be processed by the ISP processor 1340. The sensor 1320 (e.g., a gyroscope) may provide parameters of the acquired image processing (e.g., anti-shake parameters) to the ISP processor 1340 based on the type of interface of the sensor 1320. The sensor 1320 interface may utilize a SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination of the above.
In addition, the image sensor 1314 may also send raw image data to the sensor 1320, the sensor 1320 may provide the raw image data to the ISP processor 1340 based on the sensor 1320 interface type, or the sensor 1320 may store the raw image data in the image memory 1330.
ISP processor 1340 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and ISP processor 1340 may perform one or more image processing operations on the raw image data, collecting statistics about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
ISP processor 1340 may also receive image data from image memory 1330. For example, the sensor 1320 interface sends raw image data to the image memory 1330, and the raw image data in the image memory 1330 is then provided to the ISP processor 1340 for processing. The image Memory 1330 may be a portion of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the image sensor 1314 or from the sensor 1320 interface or from the image memory 1330, the ISP processor 1340 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to an image memory 1330 for additional processing before being displayed. ISP processor 1340 may also receive processed data from image memory 1330 for image data processing in the raw domain and in the RGB and YCbCr color spaces. The processed image data may be output to a display 1380 for viewing by a user and/or further Processing by a Graphics Processing Unit (GPU). Further, the output of the ISP processor 1340 may also be sent to an image memory 1330, and a display 1380 may read the image data from the image memory 1330. In one embodiment, image memory 1330 may be configured to implement one or more frame buffers. In addition, the output of the ISP processor 1340 may be transmitted to an encoder/decoder 1370 for encoding/decoding image data. The encoded image data may be saved and decompressed before being displayed on the display 1380 device.
The steps of the ISP processor 1340 processing the image data include: the image data is subjected to VFE (Video Front End) Processing and CPP (Camera Post Processing). The VFE processing of the image data may include modifying the contrast or brightness of the image data, modifying digitally recorded lighting status data, performing compensation processing (e.g., white balance, automatic gain control, gamma correction, etc.) on the image data, performing filter processing on the image data, etc. CPP processing of image data may include scaling an image, providing a preview frame and a record frame to each path. Among other things, the CPP may use different codecs to process the preview and record frames.
The image data processed by ISP processor 1340 may be sent to a blurring module 1360 to blur the image before it is displayed. The blurring module 1360 may perform blurring processing on the regions other than the portrait region in the preview image, and reduce the brightness of those regions. The blurring module 1360 may be a Central Processing Unit (CPU), a GPU, a coprocessor, or the like. After the blurring module 1360 performs blurring processing on the image data, the blurred image data may be transmitted to the encoder/decoder 1370 to be encoded/decoded. The encoded image data may be saved and decompressed before being displayed on the display 1380 device. The blurring module 1360 may also be located between the encoder/decoder 1370 and the display 1380, that is, the blurring module may perform blurring processing on the already-formed image. The encoder/decoder may be a CPU, a GPU, a coprocessor, or the like in the mobile terminal.
The statistics determined by ISP processor 1340 may be transmitted to control logic 1350 unit. For example, the statistical data may include image sensor 1314 statistics such as auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 1312 shading correction, and the like. Control logic 1350 may include a processor and/or microcontroller executing one or more routines, such as firmware, that determine control parameters of imaging device 1310 and control parameters of ISP processor 1340 based on the received statistical data. For example, the control parameters of imaging device 1310 may include sensor 1320 control parameters (e.g., gain, integration time for exposure control), camera flash control parameters, lens 1312 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 1312 shading correction parameters.
In the present embodiment, the image processing method described above can be realized by using the image processing technique in fig. 13.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored, which, when being executed by a processor, carries out the above-mentioned image processing method.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. The storage medium may be a magnetic disk, an optical disk, a Read-Only Memory (ROM), or the like.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
The above-mentioned embodiments only express several embodiments of the present application, and the description thereof is more specific and detailed, but not construed as limiting the scope of the invention. It should be noted that, for a person skilled in the art, several variations and modifications can be made without departing from the concept of the present application, which falls within the scope of protection of the present application. Therefore, the protection scope of the present patent shall be subject to the appended claims.

Claims (10)

1. An image processing method, comprising:
performing face recognition on the preview image to obtain a face area;
determining a portrait area in the preview image according to the face area;
blurring other regions except the portrait region, and reducing the brightness of the other regions;
the determining the human image area in the preview image according to the human face area comprises: acquiring depth of field information of the preview image;
the blurring processing of the other areas except the portrait area includes:
selecting a first depth-of-field range corresponding to the portrait area according to the depth-of-field information;
determining a second depth of field range of the area to be blurred according to the first depth of field range;
performing virtualization processing on the region to be virtualized according to the second depth of field range;
the selecting a first depth of field range corresponding to the portrait area according to the depth of field information includes:
generating a depth of field histogram according to the depth of field information;
acquiring each peak of the depth histogram and a corresponding peak;
drawing a normal distribution curve which accords with the corresponding peak according to the peak;
and determining a normal distribution range corresponding to the portrait area according to the normal distribution curve, and taking the normal distribution range as a first depth-of-field range corresponding to the portrait area.
2. The method of claim 1, wherein determining the portrait region in the preview image according to the face region further comprises:
calculating a first average depth of field of the face region according to the depth of field information;
acquiring color information of the face area;
and determining a portrait area in the preview image according to the first average depth of field and the color information.
3. The method of claim 1, wherein determining the normal distribution range corresponding to the portrait region according to the normal distribution curve comprises:
calculating a second average depth of field of the portrait area;
searching a normal distribution curve of the second average depth of field in the depth of field histogram;
acquiring the variance of the normal distribution curve;
and determining a normal distribution range corresponding to the second average depth of field according to the variance.
4. The method according to any one of claims 1 to 3, wherein the blurring the region to be blurred according to the second depth of field range comprises:
generating a definition change map according to the second depth of field range;
and performing blurring treatment on the area to be blurred according to the definition change diagram.
5. An image processing apparatus characterized by comprising:
the face recognition module is used for carrying out face recognition on the preview image to obtain a face area;
the region determining module is used for determining a portrait region in the preview image according to the face region;
the blurring module is used for blurring other areas except the portrait area and reducing the brightness of the other areas;
the region determination module includes: a depth of field acquisition unit configured to acquire depth of field information of the preview image;
the blurring module comprises:
the selection unit is used for selecting a first depth of field range corresponding to the portrait area according to the depth of field information;
the depth of field determining unit is used for determining a second depth of field range of the area to be blurred according to the first depth of field range;
the blurring unit is used for blurring the area to be blurred according to the second depth-of-field range;
the selecting unit comprises:
the generating subunit is used for generating a depth of field histogram according to the depth of field information;
the peak acquisition subunit is used for acquiring each peak of the depth of field histogram and the corresponding peak;
the drawing subunit is used for drawing a normal distribution curve which accords with the corresponding peak according to the peak value;
and the determining subunit is used for determining a normal distribution range corresponding to the portrait area according to the normal distribution curve, and taking the normal distribution range as a first depth-of-field range corresponding to the portrait area.
6. The apparatus of claim 5, wherein the region determining module further comprises:
the first calculating unit is used for calculating a first average depth of field of the face area according to the depth of field information;
the color acquisition unit is used for acquiring color information of the face area;
and the area determining unit is used for determining the portrait area in the preview image according to the first average depth of field and the color information.
7. The apparatus of claim 5, wherein the determining subunit is further configured to calculate a second average depth of field for the portrait area; searching a normal distribution curve of the second average depth of field in the depth of field histogram; acquiring the variance of the normal distribution curve; and determining a normal distribution range corresponding to the second average depth of field according to the variance.
8. The apparatus according to any of the claims 5 to 7, wherein the blurring unit is further configured to generate a sharpness variation map according to the second depth-of-field range; and performing blurring treatment on the area to be blurred according to the definition change diagram.
9. A mobile terminal comprising a memory and a processor, the memory having stored thereon a computer program that, when executed by the processor, causes the processor to carry out the method of any one of claims 1 to 4.
10. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 4.
CN201710775174.9A 2017-08-31 2017-08-31 Image processing method, image processing device, mobile terminal and computer readable storage medium Active CN107509031B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710775174.9A CN107509031B (en) 2017-08-31 2017-08-31 Image processing method, image processing device, mobile terminal and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710775174.9A CN107509031B (en) 2017-08-31 2017-08-31 Image processing method, image processing device, mobile terminal and computer readable storage medium

Publications (2)

Publication Number Publication Date
CN107509031A CN107509031A (en) 2017-12-22
CN107509031B true CN107509031B (en) 2019-12-27

Family

ID=60694622

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710775174.9A Active CN107509031B (en) 2017-08-31 2017-08-31 Image processing method, image processing device, mobile terminal and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN107509031B (en)

Families Citing this family (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108009539B (en) * 2017-12-26 2021-11-02 中山大学 Novel text recognition method based on counting focusing model
CN108830892B (en) * 2018-06-13 2020-03-06 北京微播视界科技有限公司 Face image processing method and device, electronic equipment and computer readable storage medium
CN108900790B (en) * 2018-06-26 2021-01-01 努比亚技术有限公司 Video image processing method, mobile terminal and computer readable storage medium
CN108848367B (en) * 2018-07-26 2020-08-07 宁波视睿迪光电有限公司 Image processing method and device and mobile terminal
CN109068063B (en) * 2018-09-20 2021-01-15 维沃移动通信有限公司 Three-dimensional image data processing and displaying method and device and mobile terminal
CN109379531B (en) * 2018-09-29 2021-07-16 维沃移动通信有限公司 Shooting method and mobile terminal
CN109561257B (en) * 2019-01-18 2020-09-18 深圳看到科技有限公司 Picture focusing method, device, terminal and corresponding storage medium
TWI693576B (en) * 2019-02-26 2020-05-11 緯創資通股份有限公司 Method and system for image blurring processing
CN110991298B (en) * 2019-11-26 2023-07-14 腾讯科技(深圳)有限公司 Image processing method and device, storage medium and electronic device
CN111161136B (en) * 2019-12-30 2023-11-03 深圳市商汤科技有限公司 Image blurring method, image blurring device, equipment and storage device
CN111586348B (en) * 2020-04-15 2022-04-12 福建星网视易信息系统有限公司 Video background image acquisition method, storage medium, video matting method and storage device
CN113313646B (en) * 2021-05-27 2024-04-16 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
CN114125286A (en) * 2021-11-18 2022-03-01 维沃移动通信有限公司 Shooting method and device

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103973977A (en) * 2014-04-15 2014-08-06 联想(北京)有限公司 Blurring processing method and device for preview interface and electronic equipment
CN104092955A (en) * 2014-07-31 2014-10-08 北京智谷睿拓技术服务有限公司 Flash control method and device, as well as image acquisition method and equipment
CN105979165A (en) * 2016-06-02 2016-09-28 广东欧珀移动通信有限公司 Blurred photos generation method, blurred photos generation device and mobile terminal
CN106937054A (en) * 2017-03-30 2017-07-07 维沃移动通信有限公司 Photographing blurring method for a mobile terminal, and mobile terminal
CN106991379A (en) * 2017-03-09 2017-07-28 广东欧珀移动通信有限公司 Human skin recognition method and device combining depth information, and electronic device
CN107016639A (en) * 2017-03-30 2017-08-04 努比亚技术有限公司 Image processing method and device
CN107111749A (en) * 2014-12-22 2017-08-29 诺瓦赛特有限公司 System and method for improved display

Also Published As

Publication number Publication date
CN107509031A (en) 2017-12-22

Similar Documents

Publication Publication Date Title
CN107509031B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107680128B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN107493432B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN107948519B (en) Image processing method, device and equipment
CN107730445B (en) Image processing method, image processing apparatus, storage medium, and electronic device
CN107451969B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN108055452B (en) Image processing method, device and equipment
CN108537155B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN108111749B (en) Image processing method and device
CN109068058B (en) Shooting control method and device in super night scene mode and electronic equipment
CN107730446B (en) Image processing method, image processing device, computer equipment and computer readable storage medium
JP6903816B2 (en) Image processing method and equipment
CN108537749B (en) Image processing method, image processing device, mobile terminal and computer readable storage medium
CN108419028B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN108154514B (en) Image processing method, device and equipment
CN107481186B (en) Image processing method, image processing device, computer-readable storage medium and computer equipment
CN107993209B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN107862658B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
CN107704798B (en) Image blurring method and device, computer readable storage medium and computer device
CN109242794B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN107872631B (en) Image shooting method and device based on double cameras and mobile terminal
CN108053438B (en) Depth of field acquisition method, device and equipment
CN111932587A (en) Image processing method and device, electronic equipment and computer readable storage medium
CN107945106B (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN107563329B (en) Image processing method, image processing device, computer-readable storage medium and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong, China

Applicant after: Guangdong OPPO Mobile Telecommunications Corp., Ltd.

Address before: No. 18 Wusha Beach Road, Chang'an Town, Dongguan 523860, Guangdong, China

Applicant before: Guangdong OPPO Mobile Communications Co., Ltd.

GR01 Patent grant