CN113610884A - Image processing method, image processing device, electronic equipment and computer readable storage medium


Info

Publication number: CN113610884A
Authority: CN (China)
Prior art keywords: image, foreground, area, blurring, region
Legal status: Pending
Application number: CN202110771363.5A
Other languages: Chinese (zh)
Inventor: 王顺飞
Current Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee: Guangdong Oppo Mobile Telecommunications Corp Ltd
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN202110771363.5A
Publication of CN113610884A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/90 Dynamic range modification of images or parts thereof

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Studio Devices (AREA)

Abstract

The embodiment of the application discloses an image processing method, an image processing device, electronic equipment and a computer readable storage medium. The method comprises the following steps: identifying a foreground area in the first image to obtain a first foreground identification result; blurring the first image based on the first foreground identification result to obtain a first blurred image; in response to a selection operation for the first blurred image, determining one or more image areas selected by the selection operation in the first blurred image; identifying a foreground area of each image area to obtain a second foreground identification result corresponding to each image area; and performing blurring processing on the first image or the first blurred image based on a second foreground identification result of each image area to obtain a second blurred image. The image processing method, the image processing device, the electronic equipment and the computer readable storage medium can improve the accuracy of image foreground identification and improve the blurring effect of the image.

Description

Image processing method, image processing device, electronic equipment and computer readable storage medium
Technical Field
The present application relates to the field of image technologies, and in particular, to an image processing method and apparatus, an electronic device, and a computer-readable storage medium.
Background
After an electronic device acquires an image through an imaging device (e.g., a camera), in order to highlight an object of interest in the image, the image is divided into a foreground region and a background region, and the background region is blurred, so that the object of interest in the foreground region is highlighted. For some images in which the foreground region and the background region are easily confused, part of the foreground region is often mistakenly blurred, or part of the background region is missed and left un-blurred, so the blurring effect of such images is poor.
Disclosure of Invention
The embodiment of the application discloses an image processing method, an image processing device, electronic equipment and a computer readable storage medium, which can improve the accuracy of image foreground identification and improve the blurring effect of an image.
The embodiment of the application discloses an image processing method, which comprises the following steps:
identifying a foreground area in the first image to obtain a first foreground identification result;
blurring the first image based on the first foreground identification result to obtain a first blurred image;
in response to a selection operation for the first blurred image, determining one or more image areas selected by the selection operation in the first blurred image;
identifying a foreground area of each image area to obtain a second foreground identification result corresponding to each image area;
and performing blurring processing on the first image or the first blurred image based on a second foreground identification result of each image area to obtain a second blurred image.
An embodiment of the application discloses an image processing apparatus, including:
the first identification module is used for identifying a foreground area in the first image to obtain a first foreground identification result;
a blurring module, configured to perform blurring processing on the first image based on the first foreground identification result to obtain a first blurred image;
a region selection module, configured to determine, in response to a selection operation for the first blurred image, one or more image regions selected by the selection operation in the first blurred image;
the second identification module is used for identifying the foreground area of each image area to obtain a second foreground identification result corresponding to each image area;
the blurring module is further configured to perform blurring processing on the first blurred image based on the second foreground identification result of each image region to obtain a second blurred image.
The embodiment of the application discloses an electronic device, which comprises a memory and a processor, wherein a computer program is stored in the memory, and when the computer program is executed by the processor, the processor is enabled to realize the method.
An embodiment of the application discloses a computer-readable storage medium, on which a computer program is stored, which, when executed by a processor, implements the method as described above.
According to the image processing method, the image processing device, the electronic device and the computer readable storage medium, after a first image is subjected to blurring processing based on a first foreground identification result to obtain a first blurred image, one or more image areas selected in the first blurred image by a selection operation are determined in response to the selection operation for the first blurred image, the foreground area of each image area is identified to obtain a second foreground identification result corresponding to each image area, and then the first image or the first blurred image is subjected to blurring processing based on the second foreground identification result of each image area to obtain a second blurred image. After the first image is subjected to an initial blurring processing, the user can select the image areas needing further optimization, and foreground recognition is performed again on the selected image areas, which improves the accuracy of the foreground recognition; a second blurring processing is then performed on the first image or the first blurred image based on the more accurate second foreground identification results, so that the situation in which part of the foreground area is mistakenly blurred or part of the background area is missed and left un-blurred can be reduced, and the blurring effect of the image is improved. In addition, the user can select the image areas needing further optimization from the first blurred image, which meets different requirements of users and improves interaction with the user.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is apparent that the drawings in the following description are only some embodiments of the present application; other drawings can be obtained by those skilled in the art from these drawings without creative effort.
FIG. 1 is a block diagram of image processing circuitry in one embodiment;
FIG. 2 is a flow diagram of a method of image processing in one embodiment;
FIG. 3A is a diagram illustrating a selection operation performed on a first blurred image according to an embodiment;
FIG. 3B is a diagram illustrating a display selection box in one embodiment;
FIG. 3C is a diagram illustrating a selection operation performed on a first blurred image according to another embodiment;
FIG. 3D is a diagram illustrating a display selection box according to another embodiment;
FIG. 3E is a diagram illustrating resizing of a selection box, in accordance with an embodiment;
FIG. 4 is a flowchart of an image processing method in another embodiment;
FIG. 5 is a diagram illustrating a modification of a first foreground recognition result using a second foreground recognition result of a selected image region in one embodiment;
FIG. 6 is a diagram illustrating a modification of a first depth map using a second foreground identification of a selected image region in one embodiment;
FIG. 7 is a flowchart of an image processing method in another embodiment;
FIG. 8 is a diagram illustrating obtaining local hair matting results corresponding to image regions in one embodiment;
FIG. 9 is a block diagram of an image processing apparatus in one embodiment;
FIG. 10 is a block diagram showing the structure of an electronic apparatus according to an embodiment.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
It is to be noted that the terms "comprises" and "comprising" and any variations thereof in the examples and figures of the present application are intended to cover non-exclusive inclusions. For example, a process, method, system, article, or apparatus that comprises a list of steps or elements is not limited to only those steps or elements listed, but may alternatively include other steps or elements not listed, or inherent to such process, method, article, or apparatus.
It will be understood that, as used herein, the terms "first," "second," and the like may be used herein to describe various elements, but these elements are not limited by these terms. These terms are only used to distinguish one element from another. For example, a first foreground recognition result may be referred to as a second foreground recognition result, and similarly, a second foreground recognition result may be referred to as a first foreground recognition result, without departing from the scope of the present application. Both the first foreground recognition result and the second foreground recognition result are foreground recognition results, but they are not the same foreground recognition result.
The embodiment of the application provides electronic equipment. The electronic device includes therein an Image Processing circuit, which may be implemented using hardware and/or software components, and may include various Processing units defining an ISP (Image Signal Processing) pipeline. FIG. 1 is a block diagram of an image processing circuit in one embodiment. For ease of illustration, FIG. 1 illustrates only aspects of image processing techniques related to embodiments of the present application.
As shown in fig. 1, the image processing circuit includes an ISP processor 140 and control logic 150. The image data captured by the imaging device 110 is first processed by the ISP processor 140, and the ISP processor 140 analyzes the image data to capture image statistics that may be used to determine one or more control parameters of the imaging device 110. The imaging device 110 may include one or more lenses 112 and an image sensor 114. Image sensor 114 may include an array of color filters (e.g., Bayer filters), and image sensor 114 may acquire light intensity and wavelength information captured by each imaging pixel and provide a set of raw image data that may be processed by ISP processor 140. The attitude sensor 120 (e.g., a three-axis gyroscope, hall sensor, accelerometer, etc.) may provide parameters of the acquired image processing (e.g., anti-shake parameters) to the ISP processor 140 based on the type of interface of the attitude sensor 120. The attitude sensor 120 interface may employ an SMIA (Standard Mobile Imaging Architecture) interface, other serial or parallel camera interfaces, or a combination thereof.
It should be noted that, although only one imaging device 110 is shown in fig. 1, in the embodiment of the present application at least two imaging devices 110 may be included; each imaging device 110 may correspond to one image sensor 114, or a plurality of imaging devices 110 may correspond to one image sensor 114, which is not limited herein. The operation of each imaging device 110 may refer to the description above.
In addition, the image sensor 114 may also transmit raw image data to the attitude sensor 120, the attitude sensor 120 may provide the raw image data to the ISP processor 140 based on the type of interface of the attitude sensor 120, or the attitude sensor 120 may store the raw image data in the image memory 130.
The ISP processor 140 processes the raw image data pixel by pixel in a variety of formats. For example, each image pixel may have a bit depth of 8, 10, 12, or 14 bits, and the ISP processor 140 may perform one or more image processing operations on the raw image data, gathering statistical information about the image data. Wherein the image processing operations may be performed with the same or different bit depth precision.
The ISP processor 140 may also receive image data from the image memory 130. For example, the attitude sensor 120 interface sends raw image data to the image memory 130, and the raw image data in the image memory 130 is then provided to the ISP processor 140 for processing. The image Memory 130 may be a portion of a Memory device, a storage device, or a separate dedicated Memory within an electronic device, and may include a DMA (Direct Memory Access) feature.
Upon receiving raw image data from the image sensor 114 interface or from the attitude sensor 120 interface or from the image memory 130, the ISP processor 140 may perform one or more image processing operations, such as temporal filtering. The processed image data may be sent to image memory 130 for additional processing before being displayed. ISP processor 140 receives the processed data from image memory 130 and performs image data processing on the processed data in the raw domain and in the RGB and YCbCr color spaces. The image data processed by ISP processor 140 may be output to display 160 for viewing by a user and/or further processed by a Graphics Processing Unit (GPU). Further, the output of the ISP processor 140 may also be sent to the image memory 130, and the display 160 may read image data from the image memory 130. In one embodiment, image memory 130 may be configured to implement one or more frame buffers.
The statistics determined by the ISP processor 140 may be sent to the control logic 150. For example, the statistical data may include image sensor 114 statistics such as gyroscope vibration frequency, auto-exposure, auto-white balance, auto-focus, flicker detection, black level compensation, lens 112 shading correction, and the like. The control logic 150 may include a processor and/or microcontroller that executes one or more routines (e.g., firmware) that may determine control parameters of the imaging device 110 and control parameters of the ISP processor 140 based on the received statistical data. For example, the control parameters of the imaging device 110 may include attitude sensor 120 control parameters (e.g., gain, integration time of exposure control, anti-shake parameters, etc.), camera flash control parameters, camera anti-shake displacement parameters, lens 112 control parameters (e.g., focal length for focusing or zooming), or a combination of these parameters. The ISP control parameters may include gain levels and color correction matrices for automatic white balance and color adjustment (e.g., during RGB processing), as well as lens 112 shading correction parameters.
The image processing method provided by the embodiment of the present application is exemplarily described with reference to the image processing circuit of fig. 1. The ISP processor 140 may obtain the first image from the imaging device 110 or the image memory 130, identify a foreground region in the first image to obtain a first foreground identification result, and perform blurring on the first image based on the first foreground identification result to obtain a first blurred image. The ISP processor 140 may output the first blurred image to the display 160 for display. The user may select an image region to be optimized according to the first blurred image displayed by the display 160, and the ISP processor 140 may determine, in response to a selection operation for the first blurred image, one or more image regions selected by the selection operation in the first blurred image, identify a foreground region of each image region to obtain a second foreground identification result corresponding to each image region, and perform blurring processing on the first blurred image based on the second foreground identification result of each image region to obtain a second blurred image. Optionally, the ISP processor 140 may output the second blurred image to the display 160 for display, or may store the second blurred image in the image memory 130.
As shown in fig. 2, in an embodiment, an image processing method is provided, which can be applied to the above-mentioned electronic device. The electronic device may include, but is not limited to, a mobile phone, a smart wearable device, a tablet computer, a PC (Personal Computer), a vehicle-mounted terminal, a digital camera, and the like; the embodiment of the present application is not limited thereto. The image processing method may include the following steps:
step 210, identifying a foreground region in the first image to obtain a first foreground identification result.
The first image may include a foreground region and a background region, where the foreground region may refer to the image region where a target object in the first image is located, and the background region may refer to the image region other than the target object in the first image. For example, the first image may be a portrait image and the target object a person in the portrait image; the first image may be an animal image and the target object an animal in the animal image; the first image may be a building image and the target object a building in the building image; and so on, but this is not limited thereto.
The first image may be a color image, and may be, for example, an image in RGB (Red Green Blue) format or an image in YUV (Y denotes brightness, and U and V denote chroma) format, or the like. The first image may be an image pre-stored in a memory of the electronic device, or an image acquired by the electronic device in real time through a camera.
The electronic device can perform foreground recognition on the first image to obtain a first foreground recognition result, and the first foreground recognition result can be used for labeling a foreground region in the first image. As an implementation manner, the electronic device may obtain depth information of each pixel point in the first image, where the depth information may be used to represent a distance between the object to be photographed and the camera, and the larger the depth information is, the farther the distance is. The depth information corresponding to the foreground region and the background region has a large difference, so that the foreground region and the background region in the first image can be divided by using the depth information corresponding to each pixel point in the first image, for example, the background region may be a region formed by pixel points whose depth information is greater than a first threshold, and the foreground region may be a region formed by pixel points whose depth information is less than a second threshold, and the like.
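For illustration only, the depth-threshold division described above can be sketched as follows, assuming the depth information is available as a NumPy depth map aligned with the first image; the function name and the threshold values are hypothetical.

```python
import numpy as np

def split_by_depth(depth_map, near_thresh=1.0, far_thresh=2.0):
    """Divide pixels into foreground/background by depth information.

    depth_map: HxW float array; larger values mean farther from the camera.
    The background region is formed from pixels above one threshold and the
    foreground region from pixels below another; pixels in between are left
    unassigned in this sketch.
    """
    foreground = depth_map < near_thresh   # close pixels form the foreground region
    background = depth_map > far_thresh    # distant pixels form the background region
    return foreground, background
```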
In some embodiments, the electronic device can extract image features of the first image and analyze the image features to determine foreground regions of the first image. Alternatively, the image features may include, but are not limited to, edge features, color features, location features, and the like.
In some embodiments, the electronic device may also determine the foreground region in the first image by using a neural network: the first image may be input into a pre-trained object segmentation model, and the target object included in the first image is identified by the object segmentation model to obtain the foreground region corresponding to the target object. The object segmentation model may be obtained by training according to a plurality of groups of sample training images, each group of sample training images may include sample images, and each sample image may be labeled with a foreground region. The object segmentation model may include, but is not limited to, a network based on a deep semantic segmentation algorithm, a U-Net network structure, an FCN (Fully Convolutional Network), and the like, which is not limited herein.
It should be noted that the electronic device may also identify the foreground region in the first image in other manners, and the manner of identifying the foreground region is not limited in this embodiment of the application.
Step 220, blurring the first image based on the first foreground identification result to obtain a first blurred image.
The electronic device may determine a foreground region and a background region of the first image according to the first foreground identification result, and perform blurring on the background region in the first image to obtain a first blurred image, where the blurring may be implemented by using a gaussian filter, a mean blurring process, a median blurring process, and the like, and is not limited herein.
In some embodiments, the electronic device may also perform blurring on the first image, and then fuse the blurred first image and the first image before blurring based on the first foreground recognition result to obtain a first blurred image. The fusion mode may include, but is not limited to, taking an average value for fusion, assigning different weight coefficients for fusion, Alpha fusion process, etc. Taking Alpha fusion processing as an example, the Alpha fusion processing may assign an Alpha value to each pixel point in the first image before the blurring and the first image after the blurring, respectively, so that the first image before the blurring and the first image after the blurring have different transparencies. The first foreground recognition result may be used as an Alpha value of the first image after the blurring, and the first image after the blurring and the first image before the blurring are fused.
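The blur-then-fuse approach with Alpha fusion can be sketched minimally as follows, assuming OpenCV and a first foreground identification result given as a soft mask in [0, 1]; the function name, kernel size, and mask format are assumptions rather than part of this application.

```python
import cv2
import numpy as np

def blur_and_fuse(first_image, foreground_mask, ksize=31):
    """Blur the whole first image, then Alpha-blend it with the original.

    first_image: HxWx3 uint8 image; foreground_mask: HxW float in [0, 1],
    where 1 means foreground. Using the mask as the Alpha value keeps the
    foreground sharp and takes the background from the blurred image.
    """
    blurred = cv2.GaussianBlur(first_image, (ksize, ksize), 0)
    alpha = foreground_mask.astype(np.float32)[..., None]             # HxWx1
    fused = alpha * first_image.astype(np.float32) + (1.0 - alpha) * blurred
    return fused.astype(np.uint8)
```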
In some embodiments, the electronic device may perform depth estimation on the first image to obtain a depth estimation result of the first image, where the depth estimation result may include depth information of each pixel point in the first image. The first image can be subjected to region division according to the depth estimation result, and pixel points with the same or similar depth information are divided into the same image region. The blurring parameters corresponding to the image areas can be determined according to the depth information of the pixel points of the divided image areas, and then blurring processing is carried out on the image areas according to the blurring parameters corresponding to the image areas. The blurring parameter may be used to describe a blurring degree, and for example, the blurring parameter may include parameters such as a blurring strength and a blurring coefficient, where an image region with larger depth information may correspond to a larger blurring degree, and an image region with smaller depth information may correspond to a smaller blurring degree, so that different image regions may be blurred to different degrees.
In step 230, in response to the selection operation for the first blurred image, one or more image areas selected by the selection operation in the first blurred image are determined.
After the electronic device performs blurring processing on the first image, the obtained first blurred image can be displayed through the display device, and the user can view the first blurred image and select an image area needing to be optimized. Optionally, the image area to be optimized may be an area where blurring errors or blurring omissions easily occur, such as a boundary portion between the foreground area and the background area, for example, the boundary area between the hair and the background in a portrait image, or a gap area within the hair.
The electronic device may determine, in response to the selection operation for the first blurred image, one or more image areas selected by the selection operation in the first blurred image. The selection operation may include, but is not limited to, a touch operation, a voice operation, a line-of-sight interaction operation, a gesture operation, and other interaction operation manners.
As an embodiment, the selection operation may be a touch operation performed by the user on a touch screen. The electronic device may acquire one or more touch positions of the selection operation on the screen, form, for each touch position, a selection frame corresponding to that touch position according to the area size, and determine the image area corresponding to each selection frame in the first blurred image. A touch position of the selection operation on the screen may include touch coordinates, and the user can touch the screen multiple times to select multiple image areas needing optimization at the same time. The electronic device can acquire a plurality of touch positions simultaneously to determine a plurality of selected image areas, and then perform optimized blurring processing on the plurality of image areas selected by the user (i.e., a multi-touch, single-pass optimization mode). The electronic device may also, each time it detects that the user performs a selection operation, acquire the currently detected touch position to obtain the corresponding image area and perform optimized blurring processing on that image area (i.e., a multi-touch, multi-pass optimization mode).
The area size may be a fixed size preset by the user according to actual requirements, or a fixed size uniformly set by a developer before the electronic device leaves the factory; the area size may also be dynamically adjusted according to the image resolution, image size, and the like of the first blurred image. For example, the larger the image size, the larger the corresponding area size may be, but this is not limited thereto. After determining each touch position, the electronic device may form a selection frame corresponding to each touch position based on the area size, and the selection frame may be of any shape, such as a rectangle, a square, a polygon, a circle, and the like, which is not limited herein. The touch position may be at a specific position of the selection box, for example, at the center of the selection box, or at a corner of the selection box (e.g., the upper left corner, the upper right corner, etc.).
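For illustration, a minimal sketch of turning one touch position into a selection frame of a preset area size, clamped to the image bounds; the names and the centering convention are assumptions (the touch position could equally be a corner of the frame).

```python
def selection_box(touch_x, touch_y, box_w, box_h, image_w, image_h):
    """Build a rectangular selection frame around a touch position.

    The frame has the preset area size (box_w x box_h), is centered on the
    touch position, and is clamped so that it stays inside the first
    blurred image.
    """
    left = min(max(touch_x - box_w // 2, 0), image_w - box_w)
    top = min(max(touch_y - box_h // 2, 0), image_h - box_h)
    return left, top, box_w, box_h   # (x, y, width, height) of the image area
```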
The electronic device can display the selection frame on the screen in a preset display mode (such as a preset color, a preset line style, and the like); the image content within the selection frame in the first blurred image is the selected image area, and the user can intuitively see the selected image area through the displayed selection frame. In some embodiments, the user may adjust the size of a selection box according to actual requirements: if the electronic device detects an adjustment operation triggered for a target selection box, the size of the target selection box may be adjusted according to the adjustment operation, where the target selection box refers to the selection box whose size the user needs to adjust. The operation manner of the adjustment operation may be distinguished from that of the selection operation; for example, the selection operation may be a single-click operation and the adjustment operation a sliding operation, or the selection operation may be a single-click operation and the adjustment operation a double-click operation, and the like, but this is not limited thereto.
For example, referring to fig. 3A and 3B, fig. 3A is a schematic diagram illustrating a selection operation performed on a first blurred image in an embodiment, and fig. 3B is a schematic diagram illustrating a selection frame in an embodiment. As shown in fig. 3A and 3B, the electronic device 10 may display a first blurring image 310 on the screen, and the user may select an area that needs to be optimized and blurred according to actual needs, and may select the area through touch control, and the electronic device 10 may display a selection frame 320 on the screen according to a touch position touched by the user, where image content in the selection frame 320 is the selected image area.
Referring to fig. 3C and 3D, fig. 3C is a schematic diagram illustrating a selection operation performed on the first blurred image in another embodiment, and fig. 3D is a schematic diagram illustrating selection frames in another embodiment. As shown in fig. 3C and fig. 3D, the electronic device 10 may display the first blurred image 330 on the screen, the user may perform multiple touch operations on the first blurred image 330 according to actual requirements, the electronic device 10 may display the selection frames 340 corresponding to each touch position on the screen according to the multiple touch positions touched by the user, and the image content in each selection frame 340 is a selected image area. FIG. 3E is a diagram illustrating resizing of a selection box, in accordance with an embodiment. As shown in fig. 3E, the user can adjust the size of a selection frame 340 according to actual requirements, thereby adjusting the selected image area.
And 240, identifying the foreground area of each image area to obtain a second foreground identification result corresponding to each image area.
After determining one or more image areas selected by the user, the electronic device can respectively re-identify the foreground areas of the image areas to obtain second foreground identification results corresponding to the image areas. Optionally, since the first blurred image is an image subjected to blurring processing, in order to ensure accuracy of foreground recognition, after determining each image region, the electronic device may cut out a region image at the same image position from the first image according to an image position of each image region in the first blurred image, and perform foreground recognition on the region image cut out from the first image, so as to obtain a second foreground recognition result corresponding to each image region. Because the image area is only the local area in the first blurred image, the cut area image is also the local image in the first image, and the second foreground identification result which is more precise and accurate can be obtained by performing foreground identification on the local image again, so that the first foreground identification result is corrected, and the accuracy of foreground identification is improved.
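A minimal sketch of this step, assuming the selected image area is given as a rectangle and segment_foreground stands in for whichever foreground recognition method is used; both names are hypothetical.

```python
def refine_region(first_image, box, segment_foreground):
    """Re-run foreground recognition on one selected image area.

    box: (x, y, width, height) of the image area in the first blurred image;
    the region image is cut out of the un-blurred first image at the same
    position, and segment_foreground (any foreground recognition method)
    returns the local second foreground recognition result for that patch.
    """
    x, y, w, h = box
    region_image = first_image[y:y + h, x:x + w]
    return segment_foreground(region_image)
```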
And step 250, performing blurring processing on the first image or the first blurred image based on the second foreground identification result of each image area to obtain a second blurred image.
In some embodiments, after obtaining the second foreground recognition result corresponding to each image region, the electronic device may correct the first foreground recognition result according to the second foreground recognition result corresponding to each image region to obtain a corrected target foreground recognition result, and perform blurring processing on the first image based on the target foreground recognition result to obtain a second blurred image. Because the recognition accuracy of the foreground and the background of the target foreground recognition result is higher, a second blurred image with a better blurring effect can be obtained.
In some embodiments, after obtaining the second foreground recognition result corresponding to each image region, the electronic device may also perform blurring processing on each image region in the first blurred image directly according to the second foreground recognition result corresponding to each image region, so as to obtain the second blurred image. Since only the image areas selected from the first blurred image are subjected to a second blurring processing, the blurring effect is improved while the amount of calculation is reduced and the processing efficiency is improved.
In the embodiment of the application, after the first image is subjected to an initial blurring processing, the user can select the image areas which need to be further optimized, and foreground recognition is performed again on the selected image areas, so that the accuracy of foreground recognition is improved; the second blurring of the first image or the first blurred image is then performed based on the more accurate second foreground recognition results, so that the situation in which part of the foreground area is mistakenly blurred or part of the background area is left un-blurred can be reduced, and the blurring effect of the image is improved. In addition, the user can select the image areas needing further optimization from the first blurred image, which meets different requirements of users and improves interaction with the user.
In another embodiment, as shown in fig. 4, an image processing method is provided, which is applicable to the electronic device described above, and which may include the steps of:
step 402, identifying a foreground region in the first image to obtain a first foreground identification result.
The description of step 402 can refer to the description of step 210 in the above embodiments, and is not repeated herein.
And step 404, performing depth estimation on the first image to obtain a depth estimation result.
The electronic equipment can carry out depth estimation on the first image, determine depth information of each pixel point in the first image and obtain a depth estimation result. The electronic device may perform depth estimation on the first image in a software depth estimation manner, or in a manner of calculating depth information in combination with a hardware device. The depth estimation manner of the software may include, but is not limited to, a manner of performing depth estimation using a neural network such as a depth estimation model, where the depth estimation model may be obtained by training a depth training set, and the depth training set may include a plurality of sample images and a depth map corresponding to each sample image. The depth estimation method combined with the hardware device may include, but is not limited to, depth estimation using multiple cameras (e.g., dual cameras), depth estimation using structured light, depth estimation using Time of flight (TOF), and the like. The depth estimation method is not limited in the embodiments of the present application.
It should be noted that the execution sequence between the steps 402 and 404 is not limited herein, and the step 404 may be executed first and then the step 402 is executed, or the step 402 and the step 404 may be executed simultaneously.
And 406, blurring the first image according to the first foreground identification result and the depth estimation result to obtain a first blurred image.
The depth estimation result of the first image may include depth information of each pixel point in the first image, and the first image may be divided into a plurality of image blocks according to the depth information of each pixel point, so that a blurring parameter corresponding to each image block may be determined. For example, the depth information of the pixels divided into the same image block may belong to the same depth value interval, or the difference between the depth information of the pixels divided into the same image block is smaller than a depth threshold, and the like.
The electronic device can divide the first image into a foreground area and a background area according to the depth information of each pixel point, and the foreground area and the background area divided in the depth estimation result are divided based on the depth information, so that the edge of the foreground area is not accurate enough, and the foreground area identified by the first foreground identification result is more accurate. Optionally, edge information of the foreground region divided in the depth estimation result may be adjusted based on the first foreground recognition result to obtain a first depth map, and the first depth map is then used to perform blurring processing on the first image to obtain a first blurred image. The edge information may include pixel point coordinates labeled as edge pixel points.
Optionally, the electronic device may compare the first foreground recognition result with the foreground region divided in the depth estimation result, determine whether edge information of the foreground region in the first foreground recognition result is consistent with edge information of the foreground region in the depth estimation result, if not, may directly modify the edge information of the foreground region in the depth estimation result into the edge information of the foreground region in the first foreground recognition result, and may also fuse the edge information of the foreground region in the depth estimation result with the edge information of the foreground region in the first foreground recognition result. Optionally, the fusion mode may include, but is not limited to, mean fusion of pixel points, fusion according to different weight coefficients, and the like, and since the recognition accuracy of the first foreground recognition result on the foreground region is greater than the depth estimation result, the weight coefficient corresponding to the edge information of the foreground region in the first foreground recognition result may be greater than the weight coefficient of the edge information of the foreground region in the depth estimation result. It should be noted that the depth estimation result may be corrected and adjusted by other methods, which are not limited herein.
After the first depth map is obtained, the electronic device may divide the background region according to the foreground region and the background region divided in the first depth map and the depth information of each pixel point in the background region, so as to obtain a plurality of background sub-regions. The electronic device may determine the blurring parameters corresponding to the background sub-regions based on the depth information of the pixel points in the background sub-regions, so as to perform blurring processing with different blurring strengths on the background sub-regions according to the blurring parameters corresponding to the background sub-regions.
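As a rough sketch of blurring the background sub-regions with depth-dependent strength, assuming OpenCV, a boolean background mask, and illustrative depth bin edges and kernel sizes (none of which are specified by this application):

```python
import cv2
import numpy as np

def blur_background_by_depth(image, depth_map, background_mask,
                             bin_edges=(1.0, 2.0, 4.0),
                             kernel_sizes=(5, 15, 31, 51)):
    """Blur background sub-regions with strength increasing with depth.

    background_mask: HxW boolean mask of the background region from the
    first depth map; bin_edges split the background into sub-regions by
    depth information, and each sub-region gets a larger Gaussian kernel
    (stronger blurring) the farther it is.
    """
    result = image.copy()
    bin_index = np.digitize(depth_map, bin_edges)        # 0 .. len(bin_edges)
    for i, k in enumerate(kernel_sizes):
        sub_region = background_mask & (bin_index == i)
        if not sub_region.any():
            continue
        blurred = cv2.GaussianBlur(image, (k, k), 0)
        result[sub_region] = blurred[sub_region]
    return result
```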
In response to the selection operation for the first blurred image, one or more image areas selected by the selection operation in the first blurred image are determined, step 408.
And step 410, identifying the foreground area of each image area to obtain a second foreground identification result corresponding to each image area.
The descriptions of steps 408-410 can refer to the descriptions of steps 230-240 in the above embodiments, and are not repeated herein.
And step 412, fusing the second foreground identification result corresponding to each image area with the first foreground identification result to obtain a target foreground identification result.
In some embodiments, the electronic device may correct the first foreground recognition result according to the selected second foreground recognition result corresponding to each image region, and fuse the second foreground recognition result corresponding to each image region with the first foreground recognition result to obtain a more accurate target foreground recognition result.
As a specific implementation manner, the electronic device may replace the foreground recognition results corresponding to each image region in the first foreground recognition result with the second foreground recognition results corresponding to each image region, respectively, to obtain the target foreground recognition result. The first foreground identification result may include a foreground mask of the first image, which may be used to label a position of a foreground region of the first image. After determining each image area selected by the user, the electronic device may determine, according to the image position of each image area in the first blurred image, a mask area in the foreground mask, which has the same image position as each image area, that is, a mask area corresponding to each image area. The second foreground recognition result of each image region may include a local foreground mask corresponding to each image region, and the mask region corresponding to each image region in the foreground mask of the first image may be replaced with the corresponding local foreground mask to obtain a more accurate target foreground mask, which may be used as a target foreground recognition result.
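A minimal sketch of this mask replacement, assuming the foreground mask of the first image and the local foreground masks are NumPy arrays and each selected image area is given as an (x, y, width, height) rectangle; all names are illustrative.

```python
def correct_foreground_mask(global_mask, local_masks, boxes):
    """Paste each local foreground mask back into the global foreground mask.

    global_mask: HxW foreground mask of the first image; local_masks[i] is
    the second foreground recognition result of the i-th selected image
    area; boxes[i] is that area's (x, y, width, height) in the image.
    """
    target_mask = global_mask.copy()
    for local_mask, (x, y, w, h) in zip(local_masks, boxes):
        target_mask[y:y + h, x:x + w] = local_mask   # replace the mask area
    return target_mask   # target foreground recognition result
```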
Fig. 5 is a schematic diagram illustrating that the first foreground recognition result is corrected by using the second foreground recognition result of the selected image area in one embodiment. As shown in fig. 5, the electronic device performs foreground recognition on the first image 510 to obtain a first foreground recognition result 520, and performs blurring processing on the first image 510 based on the first foreground recognition result 520 and the depth estimation result of the first image 510 to obtain a first blurred image 530. A user may select an image region 532 to be optimized from the first blurred image 530 displayed on the screen of the electronic device, and local foreground recognition may be performed on the image region 532 to obtain a second foreground recognition result 540 corresponding to the image region 532. The foreground recognition result 522 corresponding to the image area 532 in the first foreground recognition result 520 may be replaced by the second foreground recognition result 540, resulting in the target foreground recognition result 550. If there are a plurality of selected image areas, the foreground recognition result corresponding to each image area in the first foreground recognition result may be sequentially replaced with the corresponding second foreground recognition result.
It should be noted that other methods may be used to fuse the second foreground recognition result corresponding to each image region with the first foreground recognition result, for example, methods such as performing weighted average fusion on the foreground recognition result corresponding to each image region in the first foreground recognition result and the second foreground recognition result corresponding to each image region may be used, and the method is not limited herein.
And 414, correcting the depth estimation result according to the target foreground identification result to obtain a target depth map, and blurring the first image based on the target depth map to obtain a second blurred image.
The depth estimation result can be corrected according to the more accurate target foreground identification result, a more accurate target depth map of the foreground region is obtained, and the electronic equipment can perform blurring processing on the first image again according to the target depth map so as to obtain a second blurring image with a better blurring effect. It can be understood that a manner of correcting the depth estimation result according to the target foreground recognition result may be similar to the manner of correcting the depth estimation result according to the first foreground recognition result described in the foregoing embodiment, and a manner of blurring the first image according to the target depth map may be similar to the manner of blurring the first image according to the first depth map described in the foregoing embodiment, and details are not repeated here.
In the embodiment of the application, after the first image is subjected to primary blurring, a user can select an image region which needs to be further optimized, the electronic device can perform foreground recognition on each selected image region, and fuse the obtained second foreground recognition result corresponding to each image region with the first foreground recognition result to correct the first foreground recognition result to obtain a more accurate target foreground recognition result, so that the blurring of the first image is performed based on the target foreground recognition result to obtain a second blurred image with a better blurring effect, and the accuracy of the foreground recognition and the image blurring effect are improved.
In some embodiments, in addition to performing blurring processing on the whole image again on the first image by using the second foreground recognition result of each selected image region in the above embodiments, local blurring processing may be performed on the first blurred image by directly using the second foreground recognition result of each selected image region, so as to reduce the amount of calculation and improve the image processing efficiency. After obtaining the second foreground recognition results corresponding to the image regions, the electronic device may correct the first depth map based on the second foreground recognition results of the image regions, and may adjust edge information corresponding to the image regions in the first depth map based on the second foreground recognition results of the image regions to obtain a second depth map.
As a specific embodiment, a depth map region in the first depth map having the same image position as each selected image region may be determined according to the image position of each selected image region in the first blurred image, and edge information of a foreground region included in the corresponding depth map region may be modified according to the second foreground recognition result of each selected image region, so as to adjust the edge information of the foreground region in the first depth map.
Taking the first image area of the selected image areas as an example, the first image area may be any selected image area. The edge information of the foreground region in the second foreground identification result of the first image region can be obtained, the edge information of the foreground region in the second foreground identification result of the first image region can be compared with the edge information of the foreground region contained in the first depth map region corresponding to the first image region in the first depth map, whether the two are consistent or not is judged, and if the two are not consistent, the edge information of the foreground region contained in the first depth map region can be modified into the edge information of the foreground region in the second foreground identification result of the first image region. Optionally, if the two are not consistent, the edge information of the foreground region included in the first depth map region may also be fused with the edge information of the foreground region in the second foreground identification result of the first image region, and the fusion manner may include, but is not limited to, mean fusion of pixel points, fusion according to different weight coefficients, and the like, which is not limited herein.
Because the second foreground identification result of each image area is more accurate, the second foreground identification result of each image area is used for correcting the edge information of the foreground area of the corresponding depth map area in the first depth map, and the second depth map of the foreground area and the background area can be more accurately divided.
As an embodiment, the electronic device may perform blurring on the first image by using the second depth map to obtain a more accurate second blurred image. As another embodiment, the electronic device may also perform blurring processing on each image region in the first blurred image according to the depth information of each image region in the second depth map, so as to obtain the second blurred image.
Further, since the depth map region corresponding to each image region in the second depth map accurately divides the foreground region and the background region, the blurring parameter corresponding to each image region may be re-determined according to the depth information of the background region included in the depth map region corresponding to each image region in the second depth map, and the blurring processing may be performed on the background region included in each image region according to the blurring parameter corresponding to each image region, so as to obtain the second blurred image.
Fig. 6 is a schematic diagram illustrating a modification of the first depth map by using the second foreground recognition result of the selected image region in one embodiment. As shown in fig. 6, the electronic device may perform depth estimation on the first image 610 to obtain a depth estimation result, and correct the depth estimation result by using the first foreground identification result of the first image to obtain a first depth map 620. The first image 610 may be blurred according to the first depth map 620, resulting in a first blurred image 630. The user may select an image area that needs to be optimized in the first blurred image 630, and the electronic device may determine the image area 632 according to the selection operation of the user, and perform foreground recognition on the image area 632 to obtain a second foreground recognition result 640 corresponding to the image area 632. The edge information of the foreground region in the depth map region 622 of the first depth map 620 having the same image position as the image region 632 may be adjusted according to the second foreground recognition result 640 corresponding to the image region 632, so as to obtain a second depth map 650.
Optionally, after obtaining the second depth map 650, the electronic device may perform blurring processing on the image area 632 in the first blurred image 630 according to the depth information of the depth map area in the second depth map 650, which has the same image position as the image area 632, to obtain a second blurred image.
In the embodiment of the application, the first depth map can be corrected according to the second foreground recognition result of each image area selected by the user, the image area in the first blurred image is blurred based on the corrected second depth map, a second blurred image with a better blurring effect is obtained, the second foreground recognition result of each selected image area is directly used for performing local blurring on the first blurred image, the accuracy of foreground recognition and the image blurring effect are improved, the calculation amount is reduced, and the image processing efficiency is improved.
As shown in fig. 7, in another embodiment, an image processing method is provided, which can be applied to the electronic device described above, and the method can include the following steps:
step 702, identifying a portrait area and a hair area of the first image to obtain a portrait segmentation result meeting the precision condition.
In this embodiment of the application, the first image may be a portrait image, where a portrait image refers to an image including a person: the portrait area in the portrait image is the foreground area, and the area other than the portrait area is the background area. When the foreground portrait area of a portrait image is identified, the hair area of the person is particularly prone to being identified incorrectly; for example, part of the hair area is incorrectly identified as the background area, or part of the background area is incorrectly identified as the hair area, so that the obtained foreground portrait area is inaccurate. Therefore, in the embodiment of the present application, the portrait area and the hair area of the first image can be respectively identified, so as to obtain a portrait segmentation result satisfying a precision condition, and the portrait segmentation result can be used to label the position of the portrait area in the first image. A portrait segmentation result meeting the precision condition can accurately locate the hair area in the first image, and the portrait segmentation result is more accurate. Optionally, the precision condition may be set using one or more precision indexes; for example, the precision indexes may include the Sum of Absolute Differences (SAD), the Mean Squared Error (MSE), the gradient error, and the like between the obtained portrait segmentation result and the real portrait segmentation result, and the precision condition may include one or more of: the error between the obtained portrait segmentation result and the real portrait segmentation result is less than a SAD threshold, less than an MSE threshold, less than a gradient error threshold, etc.
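For reference only, a minimal sketch of checking a precision condition built from SAD and MSE between an obtained segmentation and a reference segmentation; the threshold values are purely illustrative and not specified by this application.

```python
import numpy as np

def meets_precision_condition(pred_mask, ref_mask,
                              sad_thresh=1.0e4, mse_thresh=1.0e-3):
    """Check a precision condition built from SAD and MSE.

    pred_mask: obtained portrait segmentation result; ref_mask: reference
    (real) portrait segmentation result, both HxW arrays on the same scale.
    """
    diff = pred_mask.astype(np.float32) - ref_mask.astype(np.float32)
    sad = np.abs(diff).sum()
    mse = np.square(diff).mean()
    return sad < sad_thresh and mse < mse_thresh
```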
In some embodiments, the electronic device may first identify the portrait area in the first image to obtain a portrait segmentation map corresponding to the first image, then identify the hair area in the first image based on the portrait segmentation map to obtain a hair matting result of the first image, and may correct the portrait segmentation map according to the hair matting result to obtain a portrait segmentation result meeting the precision condition.
Specifically, the manner in which the electronic device identifies the portrait area in the first image may include, but is not limited to, a portrait segmentation method based on graph theory, a portrait segmentation method based on clustering, a portrait segmentation method based on semantics, a portrait segmentation method based on instances, a portrait segmentation method based on a DeepLab series network model, a segmentation method based on a U-Network (U-Net), or a portrait segmentation method based on a Fully Convolutional Network (FCN).
Taking the example that the electronic device identifies the portrait area of the first image through a portrait segmentation model to obtain the portrait segmentation map, the portrait segmentation model may be a model with a U-Net structure; the portrait segmentation model may include an encoder and a decoder, the encoder may include a plurality of down-sampling layers, and the decoder may include a plurality of up-sampling layers. The portrait segmentation model may first perform multiple down-sampling convolution operations on the first image through the down-sampling layers of the encoder, and then perform multiple up-sampling operations through the up-sampling layers of the decoder to obtain the portrait segmentation map. In the portrait segmentation model, skip connections can be used between the down-sampling layer and the up-sampling layer at the same resolution, fusing the features of the down-sampling layer and the up-sampling layer at that resolution, so that the up-sampling process is more accurate.
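The encoder/decoder structure with skip connections can be illustrated with the following minimal PyTorch sketch; the number of levels, channel widths, and layer types are assumptions chosen for illustration and are not the model actually used by this application.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyPortraitUNet(nn.Module):
    """Two-level encoder/decoder with one skip connection per resolution.

    Assumes the input height and width are even so that the up-sampled
    feature map matches the full-resolution skip features.
    """
    def __init__(self, in_channels=3, out_channels=1):
        super().__init__()
        self.enc1 = nn.Conv2d(in_channels, 16, 3, padding=1)      # full resolution
        self.enc2 = nn.Conv2d(16, 32, 3, stride=2, padding=1)     # down-sampling layer
        self.dec1 = nn.Conv2d(32, 16, 3, padding=1)
        self.dec2 = nn.Conv2d(32, out_channels, 3, padding=1)     # 16 skip + 16 up-sampled

    def forward(self, x):
        f1 = F.relu(self.enc1(x))                        # features at full resolution
        f2 = F.relu(self.enc2(f1))                       # features at half resolution
        up = F.interpolate(F.relu(self.dec1(f2)), scale_factor=2,
                           mode="bilinear", align_corners=False)
        fused = torch.cat([up, f1], dim=1)               # skip connection at same resolution
        return torch.sigmoid(self.dec2(fused))           # portrait probability map
```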
Optionally, the portrait segmentation model may be obtained by training on a portrait sample set; the portrait sample set may include a plurality of portrait sample images carrying portrait labels, and the portrait labels may be used to label the portrait areas in the portrait sample images. For example, a portrait label may include a portrait mask, where pixels belonging to the portrait area in the portrait mask correspond to a first pixel value and pixels belonging to the background area correspond to a second pixel value, so that the portrait area of a sample image can be accurately labeled through the binarized portrait mask.
In some embodiments, before the first image is input into the portrait segmentation model, the first image may be scaled and/or rotated according to the input size of the portrait segmentation model to obtain a first image satisfying the input size, and then the first image is input into the portrait segmentation model for portrait recognition. For example, if the input size of the portrait segmentation model is a vertical size (the side of the image parallel to the horizontal line is shorter than the side perpendicular to it) and the first image has a horizontal size (the side parallel to the horizontal line is longer than the side perpendicular to it), the first image may first be rotated by 90 degrees clockwise or counterclockwise; or, if the input size of the portrait segmentation model is smaller than the image size of the first image, the first image may be reduced to obtain a first image matching the input size. This ensures that the input first image is adapted to the portrait segmentation model and that the output portrait segmentation map is accurate.
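For illustration, a small OpenCV sketch of adapting the first image to the model input size by rotating when the orientation differs and then resizing; the helper name and the choice of clockwise rotation are assumptions.

```python
import cv2

def adapt_to_input_size(img, in_w, in_h):
    """Rotate a landscape image to portrait orientation (or vice versa) when the
    model expects the other orientation, then resize to the expected input size."""
    h, w = img.shape[:2]
    if (w > h) != (in_w > in_h):                     # orientation mismatch
        img = cv2.rotate(img, cv2.ROTATE_90_CLOCKWISE)
    return cv2.resize(img, (in_w, in_h), interpolation=cv2.INTER_AREA)
```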
After the portrait segmentation image is obtained, the electronic equipment can perform channel splicing on the portrait segmentation image and the first image to obtain a spliced image, and identify a hair region in the first image according to the spliced image to obtain a portrait segmentation result meeting the precision condition.
In some embodiments, the portrait segmentation map obtained by the electronic device may be a single-channel image, and further, the portrait segmentation map may be a single-channel three-valued image, in the portrait segmentation map, a pixel identified as belonging to the portrait area may correspond to a first pixel value, a pixel identified as belonging to the background area may correspond to a second pixel value, and a pixel identified as belonging to the portrait-background junction area may correspond to a third pixel value. For example, taking the portrait segmentation map as a gray image, the gray scale value corresponding to the pixel identified as belonging to the portrait area may be 0, the gray scale value corresponding to the pixel identified as belonging to the background area may be 255, and the gray scale value corresponding to the pixel identified as belonging to the portrait-background intersection area may be 127.5, but the invention is not limited thereto.
The single-channel portrait segmentation map can be channel-spliced with the first image. The first image is a three-channel image (such as an RGB image or an HSV image), and the portrait segmentation map can be spliced as a fourth channel of the first image, so as to obtain a spliced image with four channels. Optionally, before the channel splicing of the portrait segmentation map and the first image, normalization processing may be performed on the first image and the portrait segmentation map respectively, and the channel splicing is then performed on the normalized first image and portrait segmentation map. The normalization processing may include, but is not limited to, subtracting the mean value from the pixel value of each pixel in the image and then dividing the difference by the mean value to obtain a normalized pixel value, or directly dividing the pixel value of each pixel by the maximum gray-scale value (e.g., 255) to obtain a normalized pixel value. Normalizing the first image and the portrait segmentation map before channel splicing can improve the accuracy and efficiency of the subsequent hair region identification.
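For illustration, a minimal NumPy sketch of the normalization and channel splicing described above, using division by the maximum gray value (255) as the normalization; the function name is an assumption.

```python
import numpy as np

def splice_channels(first_image, portrait_map):
    """Normalize a 3-channel first image and a single-channel portrait
    segmentation map to [0, 1] by dividing by the maximum gray value,
    then stack the map as a 4th channel to form the spliced image."""
    rgb = first_image.astype(np.float32) / 255.0
    seg = portrait_map.astype(np.float32) / 255.0
    return np.concatenate([rgb, seg[..., None]], axis=-1)    # H x W x 4
```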
The electronic device can perform hair matting on the spliced image to identify the hair region, obtain a hair matting result of the first image, and obtain a portrait segmentation result meeting the precision condition based on the hair matting result and the portrait segmentation map. The hair matting method may include, but is not limited to, traditional matting methods that do not use deep learning, such as Poisson matting, Bayesian matting based on Bayesian theory, data-driven machine learning matting, or closed-form matting, or deep-learning-based matting methods that use artificial neural networks such as Convolutional Neural Networks (CNN).
As a specific implementation, the electronic device may input the spliced image into a first hair matting model, extract features of the spliced image through the first hair matting model, and determine the hair region in the first image according to the features, so as to obtain a portrait segmentation result meeting the precision condition. The first hair matting model can be obtained by training based on a first training set; the first training set includes a plurality of portrait sample images marked with hair regions, and the portrait sample images may carry hair labels used to mark the hair regions in the portrait sample images. Optionally, in order to ensure that a portrait segmentation result meeting the precision condition is obtained, the first hair matting model may be trained according to the precision condition, so that the predicted hair region output by the first hair matting model meets the precision condition; for example, the error between the predicted hair region output by the first hair matting model and the real hair region of the portrait sample image may be smaller than a set SAD threshold, smaller than an MSE threshold, smaller than a gradient error threshold, and the like, but is not limited thereto.
The first hair matting model can also adopt a network architecture such as U-Net and can include an encoder and a decoder. The first hair matting model outputs a hair matting result of the first image based on the input spliced image; the hair matting result may include a hair mask corresponding to the first image, the hair mask may include position information of the hair region in the first image, and the hair mask can be used to mark the hair region in the first image.
The portrait segmentation map can be corrected according to the hair matting result output by the first hair matting model, so as to obtain a portrait segmentation result meeting the precision condition. Furthermore, the pixel values of the pixels belonging to the portrait-background junction area in the portrait segmentation map can be adjusted according to the hair matting result, that is, it is determined whether each pixel belonging to the portrait-background junction area belongs to the hair area or to the background area. For each pixel identified in the portrait segmentation map as belonging to the portrait-background junction area, the corresponding position in the hair matting result can be looked up to determine whether it falls in the hair area or in the background area, so that the hair matting result accurately classifies each pixel that was identified as belonging to the portrait-background junction area in the portrait segmentation map.
As another embodiment, the hair matting result may be a single-channel hair mask, and the region of the portrait segmentation map that corresponds to the hair matting result may be directly replaced with the hair matting result, so as to obtain the portrait segmentation result meeting the precision condition. By identifying the portrait region and the hair region of the first image with the portrait segmentation model and the first hair matting model respectively, the stability and accuracy of the obtained portrait segmentation result meeting the precision condition can be improved.
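For illustration, the following sketch resolves junction-area pixels of a ternary portrait segmentation map using a hair matte with values in [0, 1]; the pixel values 0/255/127 follow the grayscale example given above, and the 0.5 decision threshold and function name are assumptions.

```python
import numpy as np

def refine_with_hair_matte(portrait_map, hair_matte,
                           fg_val=0, bg_val=255, unknown_val=127):
    """Resolve pixels marked as the portrait-background junction area by looking
    them up in the hair matte; all other pixels of the map are kept unchanged."""
    refined = portrait_map.copy()
    unknown = portrait_map == unknown_val
    refined[unknown & (hair_matte > 0.5)] = fg_val     # hair -> portrait area
    refined[unknown & (hair_matte <= 0.5)] = bg_val    # otherwise -> background
    return refined
```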
In another embodiment, the electronic device may directly input the first image into an image processing model, and recognize the portrait area and the hair area of the first image through the image processing model to obtain the portrait segmentation result satisfying the precision condition. Optionally, the image processing model may be a neural network with a dual encoder-decoder structure, and may be obtained by training on portrait sample images that carry both a portrait label and a hair label. With this approach, the amount of computation required to obtain a portrait segmentation result meeting the precision condition can be reduced, and the image processing efficiency can be improved.
Step 704, performing blurring processing on the first image based on the portrait segmentation result meeting the precision condition to obtain a first blurred image.
In some embodiments, the electronic device may further perform depth estimation on the first image to obtain a depth estimation result of the first image, correct the depth estimation result according to the portrait segmentation result satisfying the accuracy condition to obtain a first depth map of the first image, and perform blurring processing on the first image by using the first depth map to obtain a first blurred image.
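For illustration, the following OpenCV/NumPy sketch shows depth-guided blurring after the depth estimation result has been corrected by the portrait segmentation result; the fixed kernel sizes, the three discrete blur levels, and the thresholds are illustrative assumptions rather than values from the disclosure.

```python
import cv2
import numpy as np

def blur_with_depth(first_image, depth, portrait_mask, max_kernel=31):
    """Treat the segmented portrait as in focus and blur background pixels more
    strongly as their estimated depth grows. Kernel sizes must be odd."""
    depth = depth.astype(np.float32).copy()
    depth[portrait_mask > 0] = 0.0                      # foreground kept in focus
    norm = depth / max(float(depth.max()), 1e-6)
    out = first_image.copy()
    for k in (7, 15, max_kernel):                       # increasing blur strength
        level = cv2.GaussianBlur(first_image, (k, k), 0)
        sel = norm > k / (2.0 * max_kernel)             # deeper pixels get larger kernels
        out[sel] = level[sel]
    out[portrait_mask > 0] = first_image[portrait_mask > 0]   # restore sharp portrait
    return out
```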
Step 706, in response to a selection operation for the first blurred image, determining one or more image areas selected by the selection operation in the first blurred image.
The description of step 706 may refer to the related descriptions in the above embodiments, and is not repeated herein.
Step 708, identifying the hair region of each image region, and obtaining a local hair matting result corresponding to each image region.
For a portrait image, an area with a poor blurring effect is likely to be the boundary area between the hair and the background. In order to improve the image blurring effect, in the embodiment of the present application, after the one or more image areas selected by the user are determined, the hair area in each image area may be identified again to obtain a local hair matting result corresponding to each image area, so as to refine the local hair identification in the first image.
The manner of identifying the hair region in each image region may include, but is not limited to, traditional matting methods that do not use deep learning, such as Poisson matting, Bayesian matting based on Bayesian theory, data-driven machine learning matting, or closed-form matting, or deep-learning-based matting methods that use artificial neural networks such as convolutional neural networks.
As a specific implementation, the electronic device may identify the hair region in each image region through a second hair matting model to obtain the local hair matting result corresponding to each image region. The network architecture of the second hair matting model can be the same as or similar to that of the first hair matting model; the second hair matting model can be obtained by training based on a second training set, and the second training set includes a plurality of sample images obtained by randomly cropping the portrait sample images of the first training set.
Optionally, the sample images in the second training set may be randomly cropped from the portrait sample images in the first training set according to the area size. Since the portrait sample images in the first training set carry hair labels, an image area cropped from a portrait sample image of the first training set with the same size as the selection frame can be used directly as a sample image for the second hair matting model, which improves the local hair identification capability of the second hair matting model and improves training efficiency.
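A minimal sketch of building such selection-frame-sized training patches by random cropping, assuming each sample image is at least as large as the crop and that the hair label is stored as a mask of the same size; names are illustrative.

```python
import random

def random_crops(image, hair_mask, crop_w, crop_h, n_crops=4):
    """Cut n_crops selection-frame-sized patches from a labelled portrait sample;
    the same window is applied to the image and to its hair label."""
    h, w = image.shape[:2]
    samples = []
    for _ in range(n_crops):
        x = random.randint(0, w - crop_w)
        y = random.randint(0, h - crop_h)
        samples.append((image[y:y + crop_h, x:x + crop_w],
                        hair_mask[y:y + crop_h, x:x + crop_w]))
    return samples
```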
In some embodiments, before the electronic device identifies the hair regions in the respective image regions through the second hair matting model, first area images corresponding to the respective image areas may be cropped from the first image, and second area images corresponding to the respective image areas may be cropped from the portrait segmentation map. The first area image corresponding to an image area refers to the image formed by the image content at the same image position as that image area in the first image, and the second area image corresponding to an image area refers to the image formed by the image content at the same image position as that image area in the portrait segmentation map.
The first area image and the second area image corresponding to each image area can be channel-spliced to obtain the input image corresponding to each image area, and the input image corresponding to each image area is input into the second hair matting model. The hair region contained in the input image corresponding to each image area can be identified through the second hair matting model, so as to obtain the local hair matting result corresponding to each image area.
Exemplarily, the local hair matting result corresponding to each image region obtained in the above embodiment is now described with reference to fig. 8. As shown in fig. 8, the screen of the electronic device may display a first blurred image 810, the user may select an image area 812 that needs to be optimized, and the electronic device may crop a first area image 822 having the same image position as the image area 812 from a first image 820 and crop a second area image 832 having the same image position as the image area 812 from a portrait segmentation map 830. The first area image 822 and the second area image 832 may be channel-spliced, and the second area image 832 may be spliced as the fourth channel of the first area image 822 to obtain a four-channel input image 840. The input image 840 may be input into the second hair matting model, and the hair region of the input image 840 is identified by the second hair matting model, resulting in a local hair matting result 850 corresponding to the image region 812.
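For illustration, a minimal sketch of the flow in fig. 8, assuming a placeholder callable `second_matting_model`, a box given as (x, y, width, height), and normalization by the maximum gray value as in the earlier embodiment.

```python
import numpy as np

def local_hair_matting(first_image, portrait_map, box, second_matting_model):
    """Crop the selected region from the first image and from the portrait
    segmentation map, stack them into a 4-channel input, and run the second
    hair matting model on it. `second_matting_model` is a placeholder callable."""
    x, y, w, h = box
    region_img = first_image[y:y + h, x:x + w].astype(np.float32) / 255.0
    region_seg = portrait_map[y:y + h, x:x + w].astype(np.float32) / 255.0
    model_input = np.concatenate([region_img, region_seg[..., None]], axis=-1)
    return second_matting_model(model_input)          # local hair matte for the box
```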
In some embodiments, before the input image of each image region is input into the second hair matting model, the input image may be preprocessed so that the preprocessed input image fits the second hair matting model. The input image can be scaled and/or rotated according to the size requirement of the second hair matting model for its input, so as to obtain an input image meeting that size requirement. For example, if the cropped first and second area images are both the size of the selection frame and smaller than the size required by the second hair matting model, the input image may first be enlarged; or, if the spliced input image is a vertical image (the side of the image parallel to the horizontal line is shorter than the side perpendicular to it) while the second hair matting model requires a horizontal size (the side parallel to the horizontal line is longer than the side perpendicular to it), the input image can be rotated by 90 degrees clockwise or counterclockwise. By scaling and/or rotating the input image so that it meets the size requirement and then inputting it into the second hair matting model for hair region identification, the accuracy of the identification result can be ensured.
Further, after the scaling and/or rotation of the input image, normalization processing may be performed in a manner similar to the normalization of the first image and the portrait segmentation map described in the above embodiments, and the normalized input image is then input into the second hair matting model for hair region identification. Optionally, after the first area image and the second area image are cropped, normalization processing may be performed on the first area image and the second area image respectively, followed by channel splicing of the normalized first area image and second area image.
After the local hair matting result output by the second hair matting model is obtained, the local hair matting result can be scaled and/or rotated according to the original size of the input image, so as to obtain a local hair matting result with the same size as the image area.
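For illustration, a sketch that scales (and rotates, if the orientation differs) the spliced region image to the second model's required size, runs inference through a placeholder `infer_fn`, and then maps the local hair matte back to the original region size; the helper names and the assumption that the model returns a single-channel float matte are introduced here.

```python
import cv2

def run_on_region(model_input, infer_fn, req_w, req_h):
    """Preprocess the spliced region image to the required size, run the second
    hair matting model (placeholder infer_fn), and restore the matte to the
    original region size and orientation."""
    h, w = model_input.shape[:2]
    rotated = (w > h) != (req_w > req_h)                # orientation mismatch
    x = cv2.rotate(model_input, cv2.ROTATE_90_CLOCKWISE) if rotated else model_input
    x = cv2.resize(x, (req_w, req_h), interpolation=cv2.INTER_LINEAR)
    matte = infer_fn(x)                                 # single-channel float matte
    if rotated:
        matte = cv2.rotate(matte, cv2.ROTATE_90_COUNTERCLOCKWISE)
    return cv2.resize(matte, (w, h), interpolation=cv2.INTER_LINEAR)
```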
Step 710, performing blurring processing on the first image or the first blurred image based on the local hair matting result of each image area to obtain a second blurred image.
As an embodiment, the electronic device may fuse the local hair matting result of each image region with the portrait segmentation result satisfying the precision condition to obtain a target portrait segmentation result. The portrait segmentation result meeting the precision condition can be corrected according to the local hair matting result of each image region so as to obtain a more accurate target portrait segmentation result. Specifically, the image content in the portrait segmentation result satisfying the precision condition that has the same position as each image region may be replaced with the local hair matting result corresponding to that image region. After the target portrait segmentation result is obtained, the depth estimation result of the first image can be corrected according to the target portrait segmentation result to obtain a target depth map, and the first image is blurred based on the target depth map to obtain the second blurred image.
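A minimal sketch of the replacement-based fusion described above, assuming each local hair matting result is stored together with its selection-box coordinates; the function name is illustrative.

```python
def fuse_local_results(global_seg, local_results):
    """Replace, in the global portrait segmentation result, the content of each
    selected box with its local hair matting result."""
    fused = global_seg.copy()
    for (x, y, w, h), local_matte in local_results:
        fused[y:y + h, x:x + w] = local_matte
    return fused
```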
As another embodiment, the electronic device may apply the local hair matting result of each image region to the first depth map, that is, adjust the hair edge corresponding to each image region in the first depth map based on the local hair matting result of that image region to obtain a second depth map. Each image area in the first blurred image can then be blurred separately by using the depth information of that image area in the second depth map to obtain the second blurred image. Optionally, the first image may also be blurred using the second depth map, so as to obtain a more accurate second blurred image.
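For illustration, a simplified sketch that adjusts the depth map inside each selected box using the local hair matte and re-blends only that box of the first blurred image; a single Gaussian level and a 0.5 threshold are assumptions made for brevity, and a real pipeline would vary the blur strength with depth.

```python
import cv2
import numpy as np

def reblur_regions(first_blurred, first_image, first_depth, local_results, k=9):
    """For each selected box, treat hair pixels from the local matte as foreground
    in the depth map, keep them sharp, and blur the rest of the box."""
    out = first_blurred.copy()
    for (x, y, w, h), matte in local_results:                 # matte: float in [0, 1]
        first_depth[y:y + h, x:x + w][matte > 0.5] = 0.0      # hair kept in focus
        sharp = first_image[y:y + h, x:x + w].astype(np.float32)
        blurred = cv2.GaussianBlur(sharp, (k, k), 0)
        alpha = matte[..., None].astype(np.float32)           # hair matte as blend weight
        out[y:y + h, x:x + w] = (alpha * sharp + (1.0 - alpha) * blurred).astype(out.dtype)
    return out
```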
It should be noted that the scheme of the present application may be applied to other image processing scenarios besides the blurring optimization scenario, for example, after local hair matting is performed on each selected image area to obtain a more accurate portrait area, processing such as color adjustment or spot blurring may be performed on a boundary area between the hair and the background based on the more accurate portrait area, which is not limited herein.
In the embodiment of the application, by adopting an interactive way of optimizing the hair matting and rendering, the user can select the image area that needs to be optimized according to actual requirements, and local hair matting is performed on the selected image area. This improves the accuracy of the hair matting and therefore the accuracy of portrait identification, improves the hair edge effect of the blurring result, avoids the situation that the background near the hair area is left un-blurred or the hair area is mistakenly blurred, and improves the interactivity with the user.
As shown in fig. 9, in an embodiment, an image processing apparatus 900 is provided, which can be applied to the electronic device described above, and the image processing apparatus 900 can include a first identification module 910, a blurring module 920, a region selection module 930, and a second identification module 940.
The first identifying module 910 is configured to identify a foreground region in the first image, and obtain a first foreground identification result.
The blurring module 920 is configured to perform blurring processing on the first image based on the first foreground identification result to obtain a first blurred image.
A region selection module 930 configured to determine one or more image regions selected by the selection operation in the first blurred image in response to the selection operation for the first blurred image.
In one embodiment, the area selection module 930 is further configured to obtain one or more touch positions of the selection operation on the screen; for each touch position, form a selection frame corresponding to the touch position according to the area size; and determine the image areas in the first blurred image corresponding to the selection frames.
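As a rough illustration of how touch positions might be mapped to fixed-size selection frames, the following Python sketch builds one box per touch point and clamps it to the image bounds; the function name and the centring/clamping behaviour are assumptions introduced here.

```python
def boxes_from_touches(touches, box_w, box_h, img_w, img_h):
    """Build a fixed-size selection frame centred on each touch position,
    clamped so that the frame stays inside the image."""
    boxes = []
    for tx, ty in touches:
        x = min(max(tx - box_w // 2, 0), img_w - box_w)
        y = min(max(ty - box_h // 2, 0), img_h - box_h)
        boxes.append((x, y, box_w, box_h))
    return boxes
```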
In one embodiment, the area selection module 930 is further configured to, if the adjustment operation triggered for the target selection box is detected, adjust the size of the target selection box according to the adjustment operation.
The second identifying module 940 is configured to identify foreground regions of the image regions, and obtain second foreground identification results corresponding to the image regions.
The blurring module 920 is further configured to perform blurring processing on the first blurred image based on the second foreground identification result of each image region, so as to obtain a second blurred image.
In the embodiment of the application, after the first image is subjected to primary blurring, a user can select an image area which needs to be further optimized, and perform foreground recognition on the selected image area again, so that the accuracy of foreground recognition is improved, and the second blurring of the first image or the first blurring image is performed based on a more accurate second foreground recognition result, so that the condition that part of the foreground area is mistakenly blurred or part of the background area is not blurred can be improved, and the blurring effect of the image is improved. In addition, the user can select an image area needing further optimization from the first blurring image, different requirements of the user are met, and interaction with the user is improved.
In one embodiment, the blurring module 920 includes a fusion unit and a blurring unit.
And the fusion unit is used for fusing the second foreground identification result corresponding to each image area with the first foreground identification result to obtain a target foreground identification result.
In an embodiment, the fusion unit is further configured to replace the foreground identification results corresponding to the image regions in the first foreground identification result with the second foreground identification results corresponding to the image regions, respectively, so as to obtain the target foreground identification result.
And the blurring unit is used for blurring the first image based on the target foreground identification result to obtain a second blurred image.
In one embodiment, the image processing apparatus 900 further includes a depth estimation module in addition to the first identification module 910, the blurring module 920, the region selection module 930, and the second identification module 940.
And the depth estimation module is used for carrying out depth estimation on the first image to obtain a depth estimation result, and the depth estimation result comprises depth information of each pixel point in the first image.
And the blurring unit is further used for correcting the depth estimation result according to the target foreground identification result to obtain a target depth map, and blurring the first image based on the target depth map to obtain a second blurred image.
In the embodiment of the application, after the first image is subjected to primary blurring, a user can select an image region which needs to be further optimized, the electronic device can perform foreground recognition on each selected image region, and fuse the obtained second foreground recognition result corresponding to each image region with the first foreground recognition result to correct the first foreground recognition result to obtain a more accurate target foreground recognition result, so that the blurring of the first image is performed based on the target foreground recognition result to obtain a second blurred image with a better blurring effect, and the accuracy of the foreground recognition and the image blurring effect are improved.
In an embodiment, the blurring module 920 is further configured to correct the depth estimation result according to the first foreground identification result to obtain a first depth map of the first image, and perform blurring processing on the first image according to the first depth map to obtain a first blurred image.
In an embodiment, the blurring module 920 is further configured to adjust edge information corresponding to each image region in the first depth map based on the second foreground identification result of each image region, so as to obtain a second depth map; and performing blurring processing on each image area in the first blurring image respectively according to the depth information of each image area in the second depth map to obtain a second blurring image.
In the embodiment of the application, the first depth map can be corrected according to the second foreground recognition result of each image area selected by the user, and the image areas in the first blurred image are blurred based on the corrected second depth map to obtain a second blurred image with a better blurring effect. Since the second foreground recognition result of each selected image area is used directly to perform local blurring on the first blurred image, the accuracy of foreground recognition and the image blurring effect are improved, while the amount of computation is reduced and the image processing efficiency is improved.
In one embodiment, the first image comprises a portrait image. The first identifying module 910 is further configured to identify a portrait area and a hair area of the first image, so as to obtain a portrait segmentation result meeting the precision condition.
In one embodiment, the first identification module 910 includes a portrait segmentation unit, a stitching unit, and a hair matting unit.
And the portrait segmentation unit is used for identifying the portrait area of the first image to obtain a portrait segmentation image.
And the splicing unit is used for carrying out channel splicing on the portrait segmentation image and the first image to obtain a spliced image.
And the hair matting unit is used for identifying the hair region in the first image according to the spliced image so as to obtain a portrait segmentation result meeting the precision condition.
In an embodiment, the hair matting unit is further configured to input the spliced image into a first hair matting model, extract features of the spliced image through the first hair matting model, and determine the hair region in the first image according to the features to obtain a portrait segmentation result meeting the precision condition, where the first hair matting model is obtained by training based on a first training set, and the first training set includes a plurality of portrait sample images marked with hair regions.
In an embodiment, the second identifying module 940 is further configured to identify the hair region in each image region through a second hair matting model to obtain the local hair matting result corresponding to each image region, where the second hair matting model is trained based on a second training set, and the second training set includes a plurality of sample images randomly cropped from the portrait sample images of the first training set.
In an embodiment, the image processing apparatus 900 further includes a cropping module and a stitching module.
The cutting module is used for cutting first area images corresponding to all the image areas from the first image; and cutting out second area images corresponding to the image areas from the portrait segmentation image.
And the splicing module is used for carrying out channel splicing on the first area image and the second area image corresponding to each image area to obtain the input image corresponding to each image area.
The second identifying module 940 is further configured to input the input image corresponding to each image region into the second hair matting model, and identify the hair region in the input image corresponding to each image region through the second hair matting model, so as to obtain the local hair matting result corresponding to each image region.
In an embodiment, the image processing apparatus 900 further includes a preprocessing module.
And the preprocessing module is used for, after the splicing module obtains the input image corresponding to each image area, performing scaling processing and/or rotation processing on the input image according to the size requirement corresponding to the second hair matting model, so as to obtain an input image meeting the size requirement.
In the embodiment of the application, by adopting an interactive way of optimizing the hair matting and rendering, the user can select the image area that needs to be optimized according to actual requirements, and local hair matting is performed on the selected image area. This improves the accuracy of the hair matting and therefore the accuracy of portrait identification, improves the hair edge effect of the blurring result, avoids the situation that the background near the hair area is left un-blurred or the hair area is mistakenly blurred, and improves the interactivity with the user.
FIG. 10 is a block diagram showing the structure of an electronic apparatus according to an embodiment. As shown in fig. 10, electronic device 1000 may include one or more of the following components: a processor 1010, a memory 1020 coupled to the processor 1010, wherein the memory 1020 may store one or more computer programs that may be configured to be executed by the one or more processors 1010 to implement the methods as described in the various embodiments above.
Processor 1010 may include one or more processing cores. The processor 1010 connects to various components throughout the electronic device 1000 using various interfaces and circuitry, and performs the various functions of the electronic device 1000 and processes data by running or executing the instructions, programs, code sets, or instruction sets stored in the memory 1020 and by invoking data stored in the memory 1020. Alternatively, the processor 1010 may be implemented in hardware using at least one of Digital Signal Processing (DSP), Field-Programmable Gate Array (FPGA), and Programmable Logic Array (PLA). The processor 1010 may integrate one or more of a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a modem, and the like. The CPU mainly handles the operating system, the user interface, application programs, and the like; the GPU is used for rendering and drawing display content; and the modem is used to handle wireless communications. It is understood that the modem may not be integrated into the processor 1010, but may instead be implemented by a communication chip.
The Memory 1020 may include a Random Access Memory (RAM) or a Read-Only Memory (ROM). The memory 1020 may be used to store instructions, programs, code, sets of codes, or sets of instructions. The memory 1020 may include a stored program area and a stored data area, wherein the stored program area may store instructions for implementing an operating system, instructions for implementing at least one function (such as a touch function, a sound playing function, an image playing function, etc.), instructions for implementing the various method embodiments described above, and the like. The stored data area may also store data created during use by the electronic device 1000, and the like.
It is understood that the electronic device 1000 may include more or less structural elements than those shown in the above structural block diagrams, for example, a power module, a physical button, a WiFi (Wireless Fidelity) module, a speaker, a bluetooth module, a sensor, etc., and is not limited herein.
The embodiment of the application discloses a computer readable storage medium, which stores a computer program, wherein the computer program realizes the method described in the above embodiment when being executed by a processor.
Embodiments of the present application disclose a computer program product comprising a non-transitory computer readable storage medium storing a computer program, and the computer program, when executed by a processor, implements the method as described in the embodiments above.
It will be understood by those skilled in the art that all or part of the processes of the methods of the embodiments described above can be implemented by a computer program, which can be stored in a non-volatile computer-readable storage medium, and can include the processes of the embodiments of the methods described above when the program is executed. The storage medium may be a magnetic disk, an optical disk, a ROM, etc.
Any reference to memory, storage, database, or other medium as used herein may include non-volatile and/or volatile memory. Suitable non-volatile memory can include ROM, Programmable ROM (PROM), Erasable PROM (EPROM), Electrically Erasable PROM (EEPROM), or flash memory. Volatile memory can include Random Access Memory (RAM), which acts as external cache memory. By way of illustration and not limitation, RAM can take many forms, such as Static RAM (SRAM), Dynamic RAM (DRAM), Synchronous DRAM (SDRAM), Double Data Rate SDRAM (DDR SDRAM), Enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus Direct RAM (RDRAM), and Direct Rambus DRAM (DRDRAM).
It should be appreciated that reference throughout this specification to "one embodiment" or "an embodiment" means that a particular feature, structure or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. Those skilled in the art should also appreciate that the embodiments described in this specification are all alternative embodiments and that the acts and modules involved are not necessarily required for this application.
In various embodiments of the present application, it should be understood that the size of the serial number of each process described above does not mean that the execution sequence is necessarily sequential, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation on the implementation process of the embodiments of the present application.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The foregoing detailed description has provided a detailed description of an image processing method, an image processing apparatus, an electronic device, and a computer-readable storage medium, which are disclosed in the embodiments of the present application, and the detailed description has been provided to explain the principles and implementations of the present application, and the description of the embodiments is only provided to help understanding the method and the core idea of the present application. Meanwhile, for a person skilled in the art, according to the idea of the present application, there may be variations in the specific embodiments and the application scope, and in summary, the content of the present specification should not be construed as a limitation to the present application.

Claims (17)

1. An image processing method, comprising:
identifying a foreground area in the first image to obtain a first foreground identification result;
blurring the first image based on the first foreground identification result to obtain a first blurred image;
in response to a selection operation for the first blurred image, determining one or more image areas selected by the selection operation in the first blurred image;
identifying a foreground area of each image area to obtain a second foreground identification result corresponding to each image area;
and performing blurring processing on the first image or the first blurred image based on a second foreground identification result of each image area to obtain a second blurred image.
2. The method according to claim 1, wherein the blurring the first image or the first blurred image based on the second foreground identification result of each of the image regions to obtain a second blurred image comprises:
fusing a second foreground identification result corresponding to each image area with the first foreground identification result to obtain a target foreground identification result;
and performing blurring processing on the first image based on the target foreground identification result to obtain a second blurring image.
3. The method according to claim 2, wherein the fusing the second foreground recognition result corresponding to each of the image regions with the first foreground recognition result to obtain a target foreground recognition result includes:
and replacing the foreground identification results corresponding to the image areas in the first foreground identification result with second foreground identification results corresponding to the image areas respectively to obtain target foreground identification results.
4. The method of claim 2, wherein prior to the blurring the first image based on the first foreground identification, the method further comprises:
performing depth estimation on the first image to obtain a depth estimation result, wherein the depth estimation result comprises depth information of each pixel point in the first image;
performing blurring processing on the first image based on the target foreground identification result to obtain a second blurred image, including:
correcting the depth estimation result according to the target foreground identification result to obtain a target depth map;
and performing blurring processing on the first image based on the target depth map to obtain a second blurring image.
5. The method of claim 1, wherein prior to the blurring the first image based on the first foreground identification, the method further comprises:
performing depth estimation on the first image to obtain a depth estimation result, wherein the depth estimation result comprises depth information of each pixel point in the first image;
performing blurring processing on the first image based on the first foreground identification result to obtain a first blurred image, including:
correcting the depth estimation result according to the first foreground identification result to obtain a first depth map of the first image;
and performing blurring processing on the first image according to the first depth map to obtain a first blurring image.
6. The method of claim 5, wherein the blurring the first image or the first blurred image based on the second foreground identification result of each of the image regions to obtain a second blurred image comprises:
adjusting edge information corresponding to each image area in the first depth map based on a second foreground identification result of each image area to obtain a second depth map;
and performing blurring processing on each image area in the first blurring image respectively according to the depth information of each image area in the second depth map to obtain a second blurring image.
7. The method of any of claims 1 to 6, wherein the first image comprises a portrait image; the identifying a foreground region in the first image to obtain a first foreground identification result includes:
and identifying the portrait area and the hair area of the first image to obtain a portrait segmentation result meeting the precision condition.
8. The method of claim 7, wherein the identifying the portrait region and the hair region of the first image to obtain a portrait segmentation result satisfying a precision condition comprises:
identifying a portrait area of the first image to obtain a portrait segmentation map;
performing channel splicing on the portrait segmentation image and the first image to obtain a spliced image;
and identifying a hair region in the first image according to the spliced image so as to obtain a portrait segmentation result meeting the precision condition.
9. The method according to claim 8, wherein the identifying a hair region in the first image according to the stitched image to obtain a human image segmentation result satisfying an accuracy condition comprises:
the method comprises the steps of inputting a first hair matting model by a spliced image, extracting features of the spliced image through the first hair matting model, determining hair regions in the first image according to the features to obtain a portrait segmentation result meeting a precision condition, wherein the first hair matting model is obtained based on training of a first training set, and the first training set comprises a plurality of portrait sample images marked with hair regions.
10. The method according to claim 9, wherein the identifying the foreground region of each of the image regions to obtain a second foreground identification result corresponding to each of the image regions comprises:
identifying a hair region in each of the image regions through a second hair matting model to obtain a local hair matting result corresponding to each of the image regions, wherein the second hair matting model is obtained by training based on a second training set, and the second training set comprises a plurality of sample images obtained by randomly cropping the portrait sample images of the first training set.
11. The method according to claim 10, wherein before the identifying a hair region in each of the image regions through a second hair matting model, the method further comprises:
cutting out a first area image corresponding to each image area from the first image;
cutting out a second area image corresponding to each image area from the portrait segmentation image;
performing channel splicing on a first area image and a second area image corresponding to each image area to obtain an input image corresponding to each image area;
and inputting the input image corresponding to each image area into a second hair matting model.
12. The method according to claim 11, wherein after obtaining the input image corresponding to each of the image regions, the method further comprises:
and according to the size requirement corresponding to the second hair matting model, carrying out scaling processing and/or rotation processing on the input image to obtain the input image meeting the size requirement.
13. The method of any of claims 1 to 6 and 8 to 12, wherein the determining one or more image regions selected by the selection operation in the first blurred image comprises:
acquiring one or more touch positions of the selection operation on a screen;
forming a selection frame corresponding to each touch position according to the area size aiming at each touch position;
and determining image areas corresponding to the selection frames in the first blurring image.
14. The method of claim 13, wherein after the forming a selection box corresponding to each of the touch positions according to the area size, the method further comprises:
and if the adjustment operation triggered by the target selection frame is detected, adjusting the size of the target selection frame according to the adjustment operation.
15. An image processing apparatus characterized by comprising:
the first identification module is used for identifying a foreground area in the first image to obtain a first foreground identification result;
a blurring module, configured to perform blurring processing on the first image based on the first foreground identification result to obtain a first blurred image;
a region selection module, configured to determine, in response to a selection operation for the first blurred image, one or more image regions selected by the selection operation in the first blurred image;
the second identification module is used for identifying the foreground area of each image area to obtain a second foreground identification result corresponding to each image area;
the blurring module is further configured to perform blurring processing on the first blurred image based on the second foreground identification result of each image region to obtain a second blurred image.
16. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program that, when executed by the processor, causes the processor to carry out the method of any one of claims 1 to 14.
17. A computer-readable storage medium, on which a computer program is stored, which, when being executed by a processor, carries out the method according to any one of claims 1 to 14.
CN202110771363.5A 2021-07-08 2021-07-08 Image processing method, image processing device, electronic equipment and computer readable storage medium Pending CN113610884A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110771363.5A CN113610884A (en) 2021-07-08 2021-07-08 Image processing method, image processing device, electronic equipment and computer readable storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110771363.5A CN113610884A (en) 2021-07-08 2021-07-08 Image processing method, image processing device, electronic equipment and computer readable storage medium

Publications (1)

Publication Number Publication Date
CN113610884A true CN113610884A (en) 2021-11-05

Family

ID=78304190

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110771363.5A Pending CN113610884A (en) 2021-07-08 2021-07-08 Image processing method, image processing device, electronic equipment and computer readable storage medium

Country Status (1)

Country Link
CN (1) CN113610884A (en)


Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104219445A (en) * 2014-08-26 2014-12-17 小米科技有限责任公司 Method and device for adjusting shooting modes
CN109146767A (en) * 2017-09-04 2019-01-04 成都通甲优博科技有限责任公司 Image weakening method and device based on depth map
CN108076286A (en) * 2017-11-30 2018-05-25 广东欧珀移动通信有限公司 Image weakening method, device, mobile terminal and storage medium
CN110009556A (en) * 2018-01-05 2019-07-12 广东欧珀移动通信有限公司 Image background weakening method, device, storage medium and electronic equipment
CN110009555A (en) * 2018-01-05 2019-07-12 广东欧珀移动通信有限公司 Image weakening method, device, storage medium and electronic equipment
CN108848367A (en) * 2018-07-26 2018-11-20 宁波视睿迪光电有限公司 A kind of method, device and mobile terminal of image procossing
CN111311482A (en) * 2018-12-12 2020-06-19 Tcl集团股份有限公司 Background blurring method and device, terminal equipment and storage medium
CN111741283A (en) * 2019-03-25 2020-10-02 华为技术有限公司 Image processing apparatus and method
CN112614057A (en) * 2019-09-18 2021-04-06 华为技术有限公司 Image blurring processing method and electronic equipment
CN111754528A (en) * 2020-06-24 2020-10-09 Oppo广东移动通信有限公司 Portrait segmentation method, portrait segmentation device, electronic equipment and computer-readable storage medium
CN112487974A (en) * 2020-11-30 2021-03-12 叠境数字科技(上海)有限公司 Video stream multi-person segmentation method, system, chip and medium
CN112950641A (en) * 2021-02-24 2021-06-11 Oppo广东移动通信有限公司 Image processing method and device, computer readable storage medium and electronic device

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114359307A (en) * 2022-01-04 2022-04-15 浙江大学 Full-automatic high-resolution image matting method
CN114758391A (en) * 2022-04-08 2022-07-15 北京百度网讯科技有限公司 Hairstyle image determining method and device, electronic equipment, storage medium and product
CN114758391B (en) * 2022-04-08 2023-09-12 北京百度网讯科技有限公司 Hair style image determining method, device, electronic equipment, storage medium and product


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination