CN112258380A - Image processing method, device, equipment and storage medium - Google Patents

Image processing method, device, equipment and storage medium

Info

Publication number
CN112258380A
CN112258380A (application number CN201910591533.4A)
Authority
CN
China
Prior art keywords
image
sky
area
confidence
region
Prior art date
Legal status
Pending
Application number
CN201910591533.4A
Other languages
Chinese (zh)
Inventor
杜俊增
宋萍
刘霄翔
申远南
Current Assignee
Beijing Xiaomi Mobile Software Co Ltd
Original Assignee
Beijing Xiaomi Mobile Software Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Xiaomi Mobile Software Co Ltd filed Critical Beijing Xiaomi Mobile Software Co Ltd
Priority to CN201910591533.4A
Publication of CN112258380A
Legal status: Pending

Classifications

    • G06T3/04
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/12 Edge-based segmentation
    • G06T7/143 Segmentation; edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20076 Probabilistic image processing
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/30168 Image quality inspection

Abstract

The application discloses an image processing method, apparatus, device and storage medium, and relates to the field of image processing. The method comprises the following steps: processing an image through an image segmentation model to obtain a probability distribution map of the sky region in the image, i.e., the confidence of the pixel points of the sky region; determining, within the sky region, a high-confidence region whose confidence is above a first confidence threshold; and, when the pixel points in the high-confidence region do not meet a preset condition, replacing the sky region with a target sky material to obtain a target image. This solves the problem in the related art that replacing a sky region manually takes at least seven steps, achieves automatic replacement of the original sky region in the image with the target sky material, removes the need for post-shot retouching by the user, reduces the user's manual operation steps, and improves human-computer interaction efficiency. Even a user with no retouching skill can obtain a sky image comparable to one shot with a professional single-lens reflex (SLR) camera.

Description

Image processing method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method, an image processing apparatus, an image processing device, and a storage medium.
Background
Generally, a user photographs an object of interest with a terminal. When the photographed image contains a sky scene, the quality of the image is affected by the ambient light. For example, the user may encounter poor weather or poor light when shooting, so the shot of the scenery does not achieve the ideal effect the user has in mind.
In the related art, a user uses retouching software to replace the sky area in a shot image with an ideal sky material; the steps involved are as follows:
1. after shooting an image with the terminal, the user uploads the image to a computer;
2. provided the computer has retouching software installed (if not, it must be installed first), the user opens the retouching software;
3. the user opens the image and the ideal sky material in the retouching software;
4. the sky material is set as the background and the image as layer 1;
5. the non-sky area in the image is selected with a matting tool, copied, and set as layer 2;
6. layer 1 is deleted, and a feathering operation is performed on layer 2;
7. the modified image is saved.
Obtaining an image with the sky area replaced thus takes the user at least seven steps; the operation is cumbersome, the steps are complex, and the human-computer interaction efficiency is low.
Disclosure of Invention
The embodiments of the present application provide an image processing method, apparatus, device and storage medium, which can solve the problems that replacing the sky area of a shot image with a sky material requires many manual operations, involves complex steps, and has low human-computer interaction efficiency. The technical solution is as follows:
according to a first aspect of the present application, there is provided an image processing method, the method comprising:
processing the image through an image segmentation model to obtain a probability distribution map of a sky region in the image; the probability distribution map is the confidence of the pixel points in the sky area;
determining a high confidence region from the sky region where the confidence is above a first confidence threshold;
and when the pixel points in the high-confidence-degree area do not accord with preset conditions, replacing the sky area with a target sky material to obtain a target image.
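The three steps of the first aspect can be sketched end to end as follows. This is a minimal NumPy sketch, not the patented implementation: the probability map is assumed to come from some segmentation model, and the 0.5 sky cut-off, the 0.9 first confidence threshold, and the blue-value threshold of 80 are illustrative stand-ins for values the patent does not disclose.

```python
import numpy as np

def process_image(image, prob_map, sky_material, first_conf_threshold=0.9):
    """Sketch of the claimed method: threshold the probability map,
    test the dark-scene preset condition, and composite the sky material.

    image, sky_material: HxWx3 uint8 arrays (RGB order);
    prob_map: HxW floats in [0, 1], the per-pixel sky confidence."""
    sky_mask = prob_map > 0.5                    # the sky region
    high_conf = prob_map > first_conf_threshold  # the high-confidence region

    # Preset condition (one of the patent's alternatives): the high-confidence
    # pixels look like a dark scene -- here, mean blue value below an assumed 80.
    dark_scene = image[..., 2][high_conf].mean() < 80 if high_conf.any() else True

    if dark_scene:
        return image                             # night shot: do not replace
    target = image.copy()
    target[sky_mask] = sky_material[sky_mask]    # replace the sky region
    return target
```

A bright sky is replaced; a dark frame is returned unchanged, matching the branch in the claim.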
In some embodiments, the replacing the sky area with a target sky material to obtain a target image includes:
performing segmentation processing on the edge of the sky area;
and replacing the segmented sky area with the target sky material to obtain the target image.
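The edge segmentation and replacement above can be read as mask feathering followed by alpha compositing, much like the manual feathering step in the Background. A sketch under that reading; the box-blur feather is one of several possible edge treatments, not necessarily the patent's:

```python
import numpy as np

def feather_mask(mask, radius=2):
    """Soften a binary sky mask by averaging over a (2r+1)^2 window,
    giving fractional alpha values near the sky/non-sky boundary."""
    padded = np.pad(mask.astype(float), radius, mode="edge")
    out = np.zeros_like(mask, dtype=float)
    h, w = mask.shape
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            out += padded[dy:dy + h, dx:dx + w]
    return out / (2 * radius + 1) ** 2

def composite(image, sky_material, alpha):
    """Alpha-blend the sky material over the original image."""
    a = alpha[..., None]
    return (a * sky_material + (1 - a) * image).round().astype(np.uint8)
```

Pixels deep inside the sky take the material entirely, pixels far outside keep the original, and boundary pixels mix the two.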
In some embodiments, the replacing the segmented sky region with the target sky material to obtain the target image includes:
acquiring attribute parameters of the image;
screening the target sky material from candidate sky materials according to the attribute parameters;
wherein the attribute parameter includes at least one of a photographing time and a photographing place of the image.
In some embodiments, the preset conditions include:
the average primary color value of the blue channel of the pixel point in the high-confidence-degree area is smaller than a first primary color value threshold value;
or the average gray value of the pixel points in the high-confidence-degree area is smaller than a gray value threshold value;
or the average primary color value of the blue channel of the pixel point in the high-confidence-degree region is smaller than the first primary color value threshold, and the average gray value of the pixel point in the high-confidence-degree region is smaller than the gray value threshold.
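The three alternative forms of the preset condition reduce to two scalar tests combined with "or". A sketch with assumed numeric thresholds (the patent does not disclose the values) and a standard luma formula as one possible gray value:

```python
import numpy as np

def is_dark_scene(image, high_conf_mask, blue_threshold=80, gray_threshold=100):
    """Return True when the high-confidence sky pixels look like a night
    scene. image is HxWx3 in RGB order; both thresholds are assumed."""
    pixels = image[high_conf_mask]          # N x 3 high-confidence pixels
    mean_blue = pixels[:, 2].mean()         # average blue primary value
    # Rec. 601 luma weights as the gray value (one common choice).
    gray = pixels @ np.array([0.299, 0.587, 0.114])
    return mean_blue < blue_threshold or gray.mean() < gray_threshold
```

A bright blue sky fails both tests (so replacement proceeds); a dim sky trips the blue-channel test.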
In some embodiments, before the processing the image through the image segmentation model to obtain the probability distribution map of the sky region in the image, the method includes:
acquiring an average primary color value of a dark channel in the image;
and when the average primary color value of the dark channel is smaller than a second primary color value threshold value, executing the step of processing the image through the image segmentation model to obtain a probability distribution map of the sky area in the image.
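The "dark channel" here matches the dark channel prior from the dehazing literature: each pixel's minimum over its color channels. A pre-check sketch, with the second primary color value threshold of 60 assumed for illustration:

```python
import numpy as np

def dark_channel_mean(image):
    """Average of the dark channel: each pixel's minimum over R, G, B.
    (A patch-wise minimum, as in the dehazing literature, could be
    added; the per-pixel form keeps the sketch minimal.)"""
    return image.min(axis=2).mean()

def should_segment(image, second_threshold=60):
    """Run the segmentation model only when the dark-channel average is
    below the (assumed) second primary color value threshold."""
    return dark_channel_mean(image) < second_threshold
```

A saturated, haze-free sky has a low dark channel and passes the gate; a washed-out gray frame does not.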
In some embodiments, before the replacing the sky area with a target sky material to obtain a target image, the method includes:
determining a first proportion of pixel points of the sky area in total pixel points of the image;
and when the first ratio is larger than a first ratio threshold value, executing the step of replacing the sky area with a target sky material to obtain a target image.
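The first-proportion gate simply counts sky pixels against the image size; a one-function sketch (the 30% threshold is an assumed value):

```python
import numpy as np

def sky_large_enough(sky_mask, first_ratio_threshold=0.3):
    """Replace the sky only when sky pixels make up more than the first
    ratio threshold of all pixels in the image (threshold assumed)."""
    first_ratio = sky_mask.sum() / sky_mask.size
    return first_ratio > first_ratio_threshold
```

This avoids replacing a sliver of sky where the swap would be barely visible yet risky at the boundary.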
In some embodiments, before the replacing the sky area with a target sky material to obtain a target image, the method further includes:
determining the area with the confidence coefficient of the pixel points in the non-sky area higher than a second confidence coefficient threshold value and lower than a third confidence coefficient threshold value as a first fuzzy area; determining a region with the confidence coefficient of the pixel points in the sky region higher than the third confidence coefficient threshold value and lower than a fourth confidence coefficient threshold value as a second fuzzy region;
determining a second proportion of pixel points of the first fuzzy region and the second fuzzy region in the image;
when the second proportion is smaller than a second proportion threshold value, executing the step of replacing the sky area with a target sky material to obtain a target image;
wherein the non-sky region refers to a region of the image other than the sky region.
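The two fuzzy regions are bands of intermediate confidence on either side of the sky/non-sky decision boundary; if together they cover too much of the image, the segmentation is considered too uncertain for replacement. A sketch with illustrative threshold values (none are disclosed by the patent):

```python
import numpy as np

def boundary_clear_enough(prob_map, second_t=0.2, third_t=0.5,
                          fourth_t=0.8, second_ratio_threshold=0.15):
    """prob_map holds per-pixel sky confidence in [0, 1]; pixels below
    third_t count as non-sky, at or above as sky (assumed convention)."""
    # First fuzzy region: non-sky pixels with confidence in (second_t, third_t).
    fuzzy1 = (prob_map > second_t) & (prob_map < third_t)
    # Second fuzzy region: sky pixels with confidence in (third_t, fourth_t).
    fuzzy2 = (prob_map > third_t) & (prob_map < fourth_t)
    second_ratio = (fuzzy1 | fuzzy2).sum() / prob_map.size
    return second_ratio < second_ratio_threshold
```

A confidently segmented image (values near 0 or 1) passes; a map full of middling values is rejected.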
In some embodiments, the method further comprises:
when the pixel points in the high-confidence-degree area accord with the preset conditions, processing the image by adopting a filter; the filter is used for enhancing the display effect of the image.
According to a second aspect of the present application, there is provided an image processing apparatus comprising:
the processing module is configured to process the image through an image segmentation model to obtain a probability distribution map of a sky region in the image; the probability distribution map is the confidence of the pixel points in the sky area;
a determination module configured to determine a high confidence region from the sky region where the confidence is above a first confidence threshold;
and the replacing module is configured to replace the sky area with a target sky material to obtain a target image when the pixel points in the high-confidence-level area do not accord with preset conditions.
In some embodiments, the replacement module comprises:
a segmentation submodule configured to segment edges of the sky region;
a replacing sub-module configured to replace the segmented sky area with the target sky material to obtain the target image.
In some embodiments, the replacement sub-module is configured to obtain attribute parameters of the image; screening the target sky material from candidate sky materials according to the attribute parameters; wherein the attribute parameter includes at least one of a photographing time and a photographing place of the image.
In some embodiments, the preset conditions include:
the average primary color value of the blue channel of the pixel point in the high-confidence-degree area is smaller than a first primary color value threshold value;
or the average gray value of the pixel points in the high-confidence-degree area is smaller than a gray value threshold value;
or the average primary color value of the blue channel of the pixel point in the high-confidence-degree region is smaller than the first primary color value threshold, and the average gray value of the pixel point in the high-confidence-degree region is smaller than the gray value threshold.
In some embodiments, the apparatus comprises:
the acquisition module is configured to acquire an average primary color value of a dark channel in the image;
the determining module is configured to execute the step of processing the image through the image segmentation model to obtain a probability distribution map of a sky region in the image when the average primary color value of the dark channel is smaller than a second primary color value threshold.
In some embodiments, the determining module is configured to determine a first proportion of pixels of the sky region among total pixels of the image; and when the first ratio is larger than a first ratio threshold value, executing the step of replacing the sky area with a target sky material to obtain a target image.
In some embodiments, the determining module is configured to determine an area in which the confidence of the pixel points in the non-sky area is higher than the second confidence threshold and lower than a third confidence threshold as the first fuzzy area; determining a region with the confidence coefficient of the pixel points in the sky region higher than the third confidence coefficient threshold value and lower than a fourth confidence coefficient threshold value as a second fuzzy region;
the determining module is configured to determine a second proportion of pixel points of the first fuzzy region and the second fuzzy region in the image; when the second proportion is smaller than a second proportion threshold value, executing the step of replacing the sky area with a target sky material to obtain a target image; wherein the non-sky region refers to a region of the image other than the sky region.
In some embodiments, the processing module is further configured to process the image by using a filter when the pixel points in the high-confidence region meet the preset condition; the filter is used for enhancing the display effect of the image.
According to a third aspect of the present application, there is provided a terminal comprising a processor and a memory, the memory having stored therein at least one instruction, at least one program, set of codes, or set of instructions, which is loaded and executed by the processor to implement the image processing method according to any of the first aspects above.
According to a fourth aspect of the present application, there is provided a computer-readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the image processing method of any of the first aspects above.
The beneficial effects brought by the technical scheme provided by the embodiment of the application at least comprise:
The image is processed through an image segmentation model to obtain a probability distribution map of the sky region in the image, the probability distribution map being the confidence of the pixel points in the sky region; a high-confidence region whose confidence is above a first confidence threshold is determined from the sky region; and when the pixel points in the high-confidence region do not meet the preset condition, the sky region is replaced with a target sky material to obtain a target image. This solves the problem in the related art that obtaining an image whose sky region has been replaced with a target sky material takes at least seven steps, achieves automatic replacement of the original sky region in the image with the target sky material, removes the need for post-shot retouching by the user, reduces the user's manual operation steps, and improves human-computer interaction efficiency. Even a user with no retouching skill can obtain a sky image comparable to one shot with a professional single-lens reflex camera.
Drawings
To illustrate the technical solutions in the embodiments of the present application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present application, and those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a block diagram of a terminal according to an exemplary embodiment of the present application;
fig. 2 is a block diagram of a terminal according to another exemplary embodiment of the present application;
FIG. 3 is a flow chart of an image processing method provided by an exemplary embodiment of the present application;
FIG. 4 is an interface schematic diagram of an image processing method provided by an exemplary embodiment of the present application;
FIG. 5 is an interface schematic diagram of an image processing method provided by another exemplary embodiment of the present application;
FIG. 6 is a flow chart of an image processing method provided by another exemplary embodiment of the present application;
FIG. 7 is a flow chart of an image processing method provided by another exemplary embodiment of the present application;
FIG. 8 is a flow chart of an image processing method provided by another exemplary embodiment of the present application;
FIG. 9 is a flow chart of an image processing method provided by another exemplary embodiment of the present application;
FIG. 10 is a block diagram of an image processing apparatus provided in an exemplary embodiment of the present application;
fig. 11 is a block diagram of an image processing apparatus according to another exemplary embodiment of the present application.
Detailed Description
To make the objects, technical solutions and advantages of the present application more clear, embodiments of the present application will be described in further detail below with reference to the accompanying drawings.
First, several terms related to the embodiments of the present application are explained:
A neural network model: an artificial neural network formed by connecting n neurons, where n is a positive integer. In this application, the neural network model is an artificial network model that can identify the sky region in an image. The model can be divided into an input layer, hidden layers and an output layer: the terminal feeds the image into the input layer, the hidden layers down-sample the input image (i.e., perform convolution over the pixels of the image), and the output layer finally outputs the recognition result. Neural network models include the CNN (Convolutional Neural Network) model, the FCN (Fully Convolutional Network) model, the DNN (Deep Neural Network) model, the RNN (Recurrent Neural Network) model, the embedding model, the GBDT (Gradient Boosting Decision Tree) model, the LR (Logistic Regression) model, and the like.
CNN model: a deep feed-forward artificial neural network. A CNN includes, but is not limited to, the following three parts: an input layer, a combination of n convolutional layers and pooling layers, and a fully-connected multilayer perceptron, where n is a positive integer. The CNN includes a feature extractor consisting of the convolutional and pooling layers. The feature extractor extracts features from the samples fed into the input layer to obtain model parameters, and the final model training is completed in the perceptron according to these parameters. In recent years, CNN models have been widely applied to speech recognition, general object recognition, face recognition, image recognition, motion analysis, natural language processing, brain wave analysis, and so on. This application takes the use of a CNN model for image recognition as an example: specifically, a CNN model is used to recognize the sky region in an image.
FCN model: a deep feed-forward artificial neural network. It differs from the CNN model above in that its output layer is a convolutional layer, whereas the output layer of the CNN model is a fully-connected layer. The FCN model goes through convolution, pooling and deconvolution, and finally outputs the identified image.
DNN model: a deep learning framework. The DNN model includes an input layer, at least one hidden layer (or intermediate layer), and an output layer. Optionally, the input layer, the at least one hidden layer (or intermediate layer), and the output layer each include at least one neuron, and the neurons are configured to process the received data. The number of neurons in different layers may be the same or different.
RNN model: a neural network with a feedback structure. In the RNN model, the output of a neuron can be fed directly back to itself at the next time step; i.e., the input of an i-th layer neuron at time m includes, in addition to the output of the (i-1)-th layer neurons at that time, its own output at time (m-1).
Embedding model: based on distributed vector representations of entities and relations, the relation in each triple instance is treated as a translation from the head entity to the tail entity. A triple instance comprises a subject, a relation and an object, and can be expressed as (subject, relation, object); the subject is the head entity and the object is the tail entity. For example, "Xiao Ming's father is Da Ming" is represented by the triple instance (Xiao Ming, father, Da Ming).
GBDT model: is an iterative decision tree algorithm, which consists of a plurality of decision trees, and the results of all the trees are accumulated to be the final result. Each node of the decision tree obtains a predicted value, and taking age as an example, the predicted value is an average value of ages of all people belonging to the node corresponding to the age.
LR model: the method is a model established by applying a logic function on the basis of linear regression.
An image segmentation model: the model is established according to at least one of a CNN model, an FCN model, a DNN model, an RNN model, an embedding model, a GBDT model and an LR model, and is used for identifying a sky region in an image and obtaining a probability distribution map of the sky region in the image, wherein the probability distribution map is confidence of pixel points of the sky region.
Illustratively, the image segmentation model adopts a deep convolutional neural network model, and trainable guide filtering is added to participate in training, so as to improve the precision of the image segmentation model and increase detail information.
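The trainable guided filtering mentioned above is a learnable variant of the classical guided filter, which smooths a raw probability map while following the edges of a guide image. The non-trainable form can be sketched in NumPy as follows (the radius and eps defaults are illustrative):

```python
import numpy as np

def box_filter(x, r):
    """Mean over a (2r+1)x(2r+1) window via an integral image,
    with edge-replication padding."""
    xp = np.pad(x, r, mode="edge")
    ii = np.pad(xp, ((1, 0), (1, 0))).cumsum(0).cumsum(1)
    k = 2 * r + 1
    h, w = x.shape
    return (ii[k:k + h, k:k + w] - ii[:h, k:k + w]
            - ii[k:k + h, :w] + ii[:h, :w]) / k ** 2

def guided_filter(guide, src, r=4, eps=1e-3):
    """Classical guided filter: locally fit src as an affine function of
    guide, so src (e.g. a raw sky probability map) is smoothed while
    preserving the edges of guide (e.g. the grayscale image)."""
    mean_I, mean_p = box_filter(guide, r), box_filter(src, r)
    cov_Ip = box_filter(guide * src, r) - mean_I * mean_p
    var_I = box_filter(guide * guide, r) - mean_I ** 2
    a = cov_Ip / (var_I + eps)
    b = mean_p - a * mean_I
    return box_filter(a, r) * guide + box_filter(b, r)
```

In the trainable setting, such a filter is appended to the segmentation network so its smoothing participates in back-propagation, which is what lets it sharpen mask detail at sky boundaries.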
Confidence: the confidence interval of a sample is an interval estimate of a parameter of that sample, and the probability that the parameter falls within the confidence interval is the confidence. Taking the confidence of a pixel point in the sky region as an example: the sky region corresponds to a confidence interval, and the probability that a pixel point in the sky region falls within that interval is the confidence corresponding to the pixel point.
Through image: the image formed from data collected by the photosensitive device and displayed in the shooting preview interface. If a shutter signal triggered by the user is received, the through image can be processed and stored as a shot image.
Shot image: the image obtained by storing the through image in response to the shutter signal.
Attribute parameters: after an image is shot, a format file of the image is obtained; the format file includes header information such as the aperture, shutter, white balance, sensitivity, focal length, date, time and location at the time of capture. This header information serves as the attribute parameters.
Candidate sky materials: a plurality of sky materials pre-stored in the memory, covering different times, and/or different geographical areas, and/or different weather types. The candidate sky materials may be high-quality sky materials selected manually, or sky materials shot with a high-performance single-lens reflex camera.
Target sky material: the sky material is determined from a plurality of candidate sky materials and corresponds to the attribute parameters of the image.
Target image: the image obtained after the sky region in the original image is replaced with the target sky material.
Referring to fig. 1, a block diagram of a terminal according to an exemplary embodiment of the present application is shown. The terminal includes: a photosensitive device 101, an ISP (Image Signal Processing) module 102, a processor 103, and a memory 104.
The photosensitive device 101 is configured to sense the shooting environment to obtain a through image. The photosensitive device 101 may be a CCD (Charge Coupled Device) image sensor or a CMOS (Complementary Metal Oxide Semiconductor) image sensor.
The ISP module 102 is electrically connected to the light sensing device 101. Optionally, the ISP module 102 and the photosensitive device 101 are connected by a bus, or the ISP module 102 and the photosensitive device 101 are integrated into the same electrical package or chip. The photosensitive device 101 transmits the acquired image data to the ISP module 102 for processing.
The ISP module 102 is configured to acquire the through image captured by the photosensitive device 101 and, when receiving a shutter signal, to obtain a shot image from the through image. In some embodiments, the ISP module 102 is also configured to perform automatic exposure, automatic focus, automatic white balance adjustment, and the like.
The processor 103 is electrically connected to the ISP module 102. Optionally, the processor 103 and the ISP module 102 are connected by a bus, or the processor 103 and the ISP module 102 are integrated into the same electrical package or chip. The processor 103 may include one or more processing cores for transmitting a shutter signal to the ISP module 102 and for acquiring and storing a photographed image photographed by the ISP module 102. Optionally, the processor 103 includes an image segmentation model; the processor 103 loads and executes an executable command to implement the image processing method provided by the present application, and illustratively, the processor 103 identifies a sky region in an image through an image segmentation model, and obtains a probability distribution map of the sky region, where the probability distribution map is a confidence level of a pixel point of the sky region; secondly, determining a high confidence coefficient area with the confidence coefficient higher than a first confidence coefficient threshold value from the sky area; and thirdly, when the pixel points in the high-confidence-degree area do not accord with the preset conditions, acquiring a target sky material from the sky material stored in the memory 104, and replacing the sky area in the image with the target sky material to obtain a target image.
The memory 104 is electrically connected to the processor 103. Optionally, the memory 104 is connected to the processor 103 via a bus. The memory 104 may include a RAM (Random Access Memory) and a ROM (Read-Only Memory). The memory 104 is used for storing preset sky materials and for storing images processed by the processor 103. The memory 104 is also used for storing programs, which are loaded and executed by the processor 103 to implement the image processing method provided by the present application.
In some embodiments, the terminal further comprises: an AI (Artificial Intelligence) chip 105. Referring to fig. 2, the AI chip 105 is electrically connected to the ISP module 102. Optionally, the AI chip 105 is connected to the ISP module 102 via a bus. The AI chip comprises an image segmentation model; in some embodiments, the AI chip 105 identifies a sky region in the image through an image segmentation model, and obtains a probability distribution map of the sky region.
The AI chip 105 is also electrically connected to the memory 104. Optionally, the AI chip 105 is connected to the memory 104 via a bus. The AI chip is also used for determining, from the sky region, a high-confidence region whose confidence is above the first confidence threshold; then, when the pixel points in the high-confidence region do not meet the preset condition, acquiring a target sky material from the sky materials stored in the memory 104 and replacing the sky region in the image with the target sky material to obtain a target image; and finally, storing the target image in the memory 104.
Referring to fig. 3, a flowchart of an image processing method provided in an exemplary embodiment of the present application is shown, where the present embodiment is illustrated by applying the method to the terminal shown in fig. 1 or fig. 2, and the method includes:
step 201, the terminal processes the image through the image segmentation model to obtain a probability distribution map of the sky region in the image.
And the probability distribution graph of the sky area is the confidence of the pixel points of the sky area.
Optionally, after the terminal captures the image, the image is opened in the album, and the sky area in the image and the probability distribution map of the sky area are obtained through image segmentation model identification.
Optionally, the image segmentation model includes, but is not limited to, at least one of an FCN model, a CNN model, a DNN model, an RNN model, an embedding model, a GBDT model, and an LR model.
Illustratively, the image segmentation model adopts a deep convolutional neural network model, and trainable guide filtering is added to participate in training, so as to improve the precision of the image segmentation model and increase detail information.
Optionally, the sky in the sky region may have been shot at different times, and/or at different locations, and/or under different weather types; for example, in the early morning, at noon, at sunrise, at sunset, in the city, on the prairie, at sea, on sunny days, on cloudy days, and so on.
In step 202, the terminal determines a high confidence level region with a confidence level higher than a first confidence level threshold value from the sky region.
The terminal selects, from the sky region, the area in which the confidence of the pixel points is higher than the first confidence threshold and determines it as the high-confidence region. The first confidence threshold characterizes how likely a pixel point in the sky region is to be a true sky pixel; when the confidence of a pixel point is greater than the first confidence threshold, the pixel point can be determined to be a pixel point corresponding to the sky.
In step 203, the terminal determines whether the pixel points in the high-confidence region meet a preset condition.
The preset condition is that the pixel points in the high-confidence-degree area belong to dark scenes; when the pixel point in the high-confidence region belongs to a dark scene, the terminal executes step 205; when the pixel point in the high-confidence region belongs to the bright scene, the terminal executes step 204. The bright scene represents that the image is shot in the daytime and is suitable for replacing the sky area; the dark scene indicates that the image is shot at night and is not suitable for replacing the sky area.
Optionally, the preset conditions include:
the average primary color value of the blue channel of the pixel point in the high-confidence-degree area is smaller than a first primary color value threshold value;
or the average gray value of the pixel points in the high-confidence-degree area is smaller than the gray value threshold;
or the average primary color value of the blue channel of the pixel point in the high-confidence-degree area is smaller than the first primary color value threshold, and the average gray value of the pixel point in the high-confidence-degree area is smaller than the gray value threshold.
The first primary color value threshold is used for determining whether pixel points in a high-confidence region of the sky region belong to a bright scene or a dark scene; the gray value threshold is also used for determining whether pixel points in the high-confidence region of the sky region belong to a bright scene or a dark scene.
Schematically, when the average primary color value of a blue channel of a pixel point in a high-confidence-degree area is smaller than a first primary color value threshold, the terminal determines that an image is shot at night;
or when the average gray value of the pixel points in the high-confidence-degree area is smaller than the gray value threshold, the terminal determines that the image is shot at night;
or when the average primary color value of the blue channel of the pixel points in the high-confidence-degree area is smaller than the first primary color value threshold and the average gray value of the pixel points in the high-confidence-degree area is smaller than the gray value threshold, the terminal determines that the image is shot at night.
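The third variant of the preset condition (both averages below their thresholds) can be sketched as below. The two threshold values are hypothetical, since the patent does not specify them, and BT.601 luma is used as one common choice of gray value:

```python
import numpy as np

# Hypothetical thresholds on the 0-255 scale.
BLUE_THRESHOLD = 80   # first primary color value threshold
GRAY_THRESHOLD = 60   # gray value threshold

def is_night_scene(rgb, high_conf_mask):
    """Step 203: decide dark vs. bright scene from the high-confidence
    sky pixels. Returns True when the image was likely shot at night."""
    pixels = rgb[high_conf_mask].astype(np.float64)        # Nx3 sky pixels
    avg_blue = pixels[:, 2].mean()
    # ITU-R BT.601 luma as the gray value of each pixel
    gray = pixels @ np.array([0.299, 0.587, 0.114])
    return avg_blue < BLUE_THRESHOLD and gray.mean() < GRAY_THRESHOLD
```

When `is_night_scene` returns True the terminal proceeds to step 205 (save without replacement); otherwise to step 204.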
And step 204, replacing the sky area with a target sky material by the terminal to obtain a target image.
And when the pixel points in the high-confidence-degree area do not accord with the preset conditions, replacing the sky area with a target sky material to obtain a target image. The pixel points in the high-confidence-degree area do not accord with preset conditions, namely, the pixel points in the high-confidence-degree area belong to a bright scene, and the image is shot in the daytime and is suitable for replacing the sky area.
Alternatively, the replacement of the sky area includes the following two exemplary steps:
1) and the terminal performs segmentation processing on the edge of the sky area.
The edge is a connection portion between the sky region and a non-sky region, and the non-sky region is a region other than the sky region in the image.
Optionally, the terminal performs segmentation processing on the edge of the sky region through an edge optimization algorithm, and the exemplary steps are as follows:
a. and the terminal counts to obtain a histogram of the brightness values of the blue channels of the sky area pixel points, reads the minimum value of the brightness values of the blue channels from the histogram and records the minimum value as min _ blue.
b. The terminal evenly divides the histogram into at least two regions along the axis of brightness values (the abscissa), and determines the target region with the largest number of pixel points from the at least two regions; for example, the histogram is divided into 4 regions: region 1, region 2, region 3 and region 4; region 4 contains the most pixel points and is therefore determined as the target region.
c. Determining the minimum value of the brightness value in the target area, and recording the minimum value as b _ margin _ min; and the maximum value of the luminance value, denoted as b _ margin _ max.
d. And performing segmentation processing on the sky area according to the min _ blue, the b _ margin _ min, the b _ margin _ max and the image.
Optionally, step d may include at least one of the following processing modes:
(1) determining a first region in which the brightness value of the blue channel in the image is smaller than min_blue, and adjusting the confidence in the segmented image corresponding to the first region to 1/2 of its original value;
(2) and determining a second region with the confidence coefficient higher than a fifth confidence coefficient threshold value in the segmented image and the brightness value of the blue channel in the corresponding image larger than b _ margin _ min and smaller than b _ margin _ max, and adjusting the confidence coefficient in the segmented image corresponding to the second region to be 1. Wherein the fifth confidence threshold is used to determine the edge region.
These operations sharpen the edge and reduce the probability that fine sky details are erased because their probability values are small.
Schematically, the edge area comprises the edge of the leaf, and the edge of the leaf is segmented through an edge optimization algorithm to obtain a finely segmented edge; the edge optimization algorithm is used for edge segmentation between a sky region and a non-sky region, improves the segmentation precision of edges in an image, and enables the segmented edge region to achieve a finer segmentation degree.
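The edge-optimization steps a–d above can be sketched as follows. The number of histogram regions (4) follows the example in step b, and the fifth confidence threshold of 0.8 is a hypothetical value:

```python
import numpy as np

def refine_sky_edges(blue, conf, sky_mask, fifth_conf_threshold=0.8):
    """Steps a-d: refine the confidence map at the sky/non-sky edge.
    blue: HxW blue-channel brightness; conf: HxW segmentation confidence;
    sky_mask: HxW boolean mask of sky pixels."""
    sky_blue = blue[sky_mask]
    min_blue = sky_blue.min()                               # step a
    # step b: split the blue-value range into 4 equal regions, pick the densest
    edges = np.linspace(sky_blue.min(), sky_blue.max(), 5)
    counts, _ = np.histogram(sky_blue, bins=edges)
    t = int(counts.argmax())
    b_margin_min, b_margin_max = edges[t], edges[t + 1]     # step c
    out = conf.copy()                                       # step d
    out[blue < min_blue] *= 0.5                             # rule (1)
    strong = ((out > fifth_conf_threshold)
              & (blue > b_margin_min) & (blue < b_margin_max))
    out[strong] = 1.0                                       # rule (2)
    return out
```

Rule (1) suppresses pixels darker than any observed sky pixel; rule (2) snaps confidently-sky pixels inside the dominant brightness band to certainty, which is what sharpens the edge.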
2) And the terminal replaces the processed sky area with a target sky material to obtain a target image.
Optionally, the terminal acquires attribute parameters of the image; and screening out a target sky material from the candidate sky materials according to the attribute parameters. Wherein the attribute parameter includes at least one of a photographing time and a photographing place of the image.
Optionally, the terminal reads the EXIF information of the image as the attribute parameters. In most shooting scenarios, an EXIF-format file of the image is produced when the image is captured and saved. The header of the EXIF-format file is the EXIF information; the terminal reads the EXIF information of the EXIF-format file corresponding to the image and uses it as the attribute parameters. The EXIF information includes at least one of the aperture, shutter, white balance, sensitivity, focal length, date, time and place at the time of image capture; therefore, the attribute parameters include at least one of the shooting time and the shooting place of the image.
In some optional embodiments, the attribute parameters include: shooting time;
and the terminal screens out a target sky material corresponding to the time period from the candidate sky materials according to the time period to which the shooting time belongs.
In some optional embodiments, the attribute parameters include: a shooting location;
and the terminal screens out a target sky material corresponding to the geographical area from the candidate sky materials according to the geographical area to which the shooting place belongs.
In some optional embodiments, the attribute parameters include: a shooting time and a shooting location;
the terminal determines a corresponding weather type according to the shooting time and the shooting place; and screening out a target sky material corresponding to the weather type from the candidate sky materials.
Optionally, the candidate sky materials may be sky images shot at different times, and/or different places, and/or under different weather types; for example, images shot in the early morning, at noon, at sunrise, at sunset, in the city, on the prairie, at sea, on sunny days, on cloudy days, etc.
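Screening a target material by the time period of the shooting time (the first variant above) can be sketched as below. The catalogue of materials and the hour bands are illustrative assumptions only:

```python
from datetime import datetime

# Hypothetical material catalogue; file names and time bands are illustrative.
CANDIDATE_MATERIALS = {
    "sunrise": "sky_sunrise.png",
    "noon":    "sky_noon.png",
    "sunset":  "sky_sunset.png",
    "night":   "sky_night.png",
}

def pick_sky_material(shot_time: datetime) -> str:
    """Screen the target sky material by the time period the
    shooting time (read from EXIF) falls into."""
    h = shot_time.hour
    if 5 <= h < 9:
        period = "sunrise"
    elif 9 <= h < 16:
        period = "noon"
    elif 16 <= h < 20:
        period = "sunset"
    else:
        period = "night"
    return CANDIDATE_MATERIALS[period]
```

The shooting-place and weather-type variants would follow the same pattern, keyed on geographical region or on a (time, place) → weather lookup instead of the hour.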
In step 205, the terminal saves the image.
And when the pixel points in the high-confidence-degree area accord with the preset conditions, the image is not processed and is directly stored.
Optionally, before saving the image, the method may further include: and the terminal processes the image by adopting a filter. The filter is used for enhancing the display effect of the image; for example, adjusting the color tone of the image, improving the texture of the image, and the like.
The image has many shooting parameters in the shooting process, and the shooting parameters are stored as attribute parameters. Optionally, the attribute parameter may also be used as reference data for acquiring the filter by the terminal. And the terminal determines a target filter from the candidate filters stored in the memory according to the attribute parameters and processes the image by adopting the target filter.
The attribute parameters further include at least one of an aperture, a shutter, white balance, sensitivity, and a focal length at the time of image capturing.
Optionally, the filter includes at least one of an inner threshold filter, an inner filter, and an outer filter.
Schematically, referring to fig. 4, in an album in the terminal, the user selects and opens an image 31 including a sky area, i.e., the portion of the image 31 circled with a dotted line; the user triggers the image processing function in the album for replacing the sky area in the image, illustratively, by clicking the control button 32; as shown in the lower diagram of fig. 4, the terminal processes the image 31 through the image segmentation model to obtain the sky region in the image and the probability distribution map of the sky region; the probability distribution map is the confidence of the pixel points in the sky area; the terminal determines a high-confidence region with confidence higher than the first confidence threshold from the sky region; when the pixel points in the high-confidence region belong to a bright scene, the sky area in the image 31 is automatically replaced with a target sky material to obtain a target image 33; comparing the sky area in the image 31 with that in the image 33 shows an obvious difference. Optionally, the terminal creates an image file and automatically saves the image 33.
In some embodiments, the image processing function for replacing the sky area in an image in the album can be triggered by at least one of a long-press operation, a pressure touch operation, a two-finger press operation, a knuckle double-tap operation and a multi-click operation.
In other embodiments, a button control is provided on the display interface of the image for triggering the replacement of the sky area in the image, such as the control button 32 in fig. 4.
Optionally, the user may customize the target sky material. Schematically, referring to fig. 5, the terminal displays an image 35 in an album, and further displays an image 36 corresponding to the candidate sky material 1 and an image 37 corresponding to the candidate sky material 2 below the image 35; a user selects a candidate sky material 2 in an image 37 as a target sky material, and a terminal replaces a sky area of the image 35 with the candidate sky material 2 to obtain an image 38, as shown in fig. 5, the sky area in the image 35 is obviously different from the sky area in the image 38; the user triggers the save control, the terminal creates an image file and saves the image 38.
In summary, in the image processing method provided in the embodiment of the present application, an image is processed through an image segmentation model to obtain a probability distribution map of the sky region in the image; the probability distribution map is the confidence of the pixel points in the sky region; a high-confidence region with confidence higher than a first confidence threshold is determined from the sky region; and when the pixel points in the high-confidence region do not meet the preset condition, the sky region is replaced with a target sky material to obtain a target image. The method solves the problem in the related art that at least 7 steps are needed to obtain a target image whose sky region is replaced with a target sky material, achieves automatic replacement of the original sky region in the image with the target sky material, requires no post-shot retouching by the user, reduces the user's manual operation steps, and improves human-computer interaction efficiency. Even without any retouching skill, the user can obtain a sky photograph comparable to one shot with a professional single-lens reflex camera.
In addition, according to the image processing method provided by the embodiment of the application, the user can also screen the image according to the preset conditions, so that the image which is not suitable for replacing the sky area is screened out, the terminal can replace the sky area for the suitable image, the success rate of replacing the sky area in the image is improved, and the user experience is improved.
It should be noted that, before the terminal processes the image, the terminal screens the image to screen out a picture that is not suitable for replacing the sky area, and schematically, the screening approaches of the image include the following three ways:
firstly, screening out a whitened image through a dark channel algorithm;
secondly, screening out images with small proportion of sky areas in the images;
and thirdly, screening out the image with fuzzy division of sky areas and non-sky areas in the image.
In the first case, based on fig. 3, step 301 to step 302 are added before step 201, as shown in fig. 6, and the exemplary steps are as follows:
step 301, the terminal obtains an average primary color value of a dark channel in an image.
Each pixel point in the image comprises three primary color channels, namely a Red channel (Red), a Blue channel (Blue) and a Green channel (Green); the smallest primary color value of the three primary color channels is the primary color value of the dark channel. For example, if the primary color value of the red channel corresponding to a pixel is 0.5, the primary color value of the blue channel is 0.2, and the primary color value of the green channel is 0.6, the primary color value of the dark channel corresponding to the pixel is 0.2.
The terminal firstly obtains the primary color value of a dark channel corresponding to each pixel point in an image; and secondly, determining the average primary color value of the dark channel of the pixel point in the image. That is, the image includes n pixel points, and the primary color values of the dark channels of the n pixel points are added to obtain a sum; dividing the sum of the primary color values by n to obtain an average primary color value of a dark channel of a pixel point in the image; n is a positive integer.
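The computation in steps 301–302 can be sketched as below; it reuses the worked example from the text (primary values 0.5, 0.2 and 0.6 give a dark-channel value of 0.2):

```python
import numpy as np

def dark_channel_average(rgb):
    """Step 301: average over all pixel points of the per-pixel dark
    channel, i.e. the minimum of the R, G and B primary values."""
    dark = rgb.min(axis=-1)     # dark-channel primary value of each pixel
    return dark.mean()          # average over the n pixel points

def is_clear_image(rgb, second_primary_threshold=0.7):
    """Step 302: the image is clear (not whitened/overexposed) when the
    average dark-channel value falls below the second primary color value
    threshold. The 0.7 threshold is a hypothetical value."""
    return dark_channel_average(rgb) < second_primary_threshold
```

A whitened or overexposed image has bright values in all three channels, so even its per-pixel minima stay large, pushing the average above the threshold.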
In step 302, the terminal determines whether the average primary color value of the dark channel is less than the second primary color value threshold.
The second primary color value threshold is used for determining a clear image; when the average primary color value of the dark channel is equal to or larger than the second primary color value threshold, the terminal determines that the image pixel is not clear, such as overexposure of the image or whitening of the image; when the average primary color value of the dark channel is less than the second primary color value threshold, the terminal determines that the pixel of the image is sharp.
When the average primary color value of the dark channel is smaller than the second primary color value threshold, executing step 201, and replacing the sky area with a target sky material to obtain a target image; when the average primary color value of the dark channel is equal to or greater than the second primary color value threshold, step 205 is executed to save the image.
In summary, the image processing method provided by the application screens the image through the primary color value of the dark channel, screens out the image which is not suitable for replacing the sky area, for example, a snowscene image or an image with over exposure and the like which are whitish, so that the terminal replaces the sky area for the suitable image, the success rate of replacing the sky area in the image is improved, and the user experience is improved.
In the second case, based on fig. 3, steps 401 to 402 are added before step 204, as shown in fig. 7; the exemplary steps are as follows:
in step 401, the terminal determines a first proportion of pixel points in the sky region in the total pixel points of the image.
Optionally, the terminal counts the number of pixel points in the sky area and determines the number as a first numerical value; counting the number of pixel points in the image, and determining the number as a second numerical value; and comparing the first numerical value with the second numerical value to obtain a first proportion of pixel points of the sky area in the total pixel points of the image.
Optionally, the terminal determines the area of the sky area as a first area; determining the area of the image as a second area; and determining the ratio of the first area to the second area as a first proportion of the pixel points in the sky area in the total pixel points of the image.
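Both counting variants above reduce to the same ratio, which can be sketched together with the step 402 check. The 10% threshold matches the figure used in step 604 later in the text:

```python
import numpy as np

def sky_proportion(sky_mask):
    """Step 401: first proportion of sky pixel points among all pixel
    points (pixel-count and area-ratio variants give the same value)."""
    return sky_mask.sum() / sky_mask.size

def large_enough_to_replace(sky_mask, first_ratio_threshold=0.10):
    """Step 402: replace the sky only when the first proportion exceeds
    the first proportion threshold (10% per step 604)."""
    return sky_proportion(sky_mask) > first_ratio_threshold
```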
In step 402, the terminal determines whether the first ratio is greater than a first ratio threshold.
Optionally, the first scale threshold is used for screening out an image including a sky area; or, the first proportional threshold is used for screening out the images of the sky area which meet the proportion condition. When the first ratio is larger than a first ratio threshold, the terminal determines that the image includes the sky region, and the ratio of the sky region in the image is larger than the first ratio threshold, that is, the size of the sky region in the image is suitable for replacing the sky region. When the first ratio is smaller than or equal to a first ratio threshold value, the terminal determines that the sky area is not included in the image; alternatively, the terminal determines that the sky area is included in the image, but the sky area in the image is too small to be suitable for replacing the sky area.
When the first ratio is larger than a first ratio threshold, executing step 204, and replacing the sky area with a target sky material to obtain a target image; when the first ratio is smaller than or equal to the first ratio threshold, step 205 is executed to save the image.
In summary, the image processing method provided by the application screens the images according to the proportion of the sky area in the images, and screens out the images with an excessively small proportion; the proportion of the sky region in the image is too small, the integral influence on the image is small, and the sky region does not need to be replaced, so that the terminal can replace the sky region in the image, the success rate of replacing the sky region in the image is improved, and the user experience is improved.
In the third case, step 501 to step 503 are added before step 204 based on fig. 3, as shown in fig. 8, the exemplary steps are as follows:
step 501, the terminal determines a region, in which the confidence of a pixel point in a non-sky region is higher than a second confidence threshold and lower than a third confidence threshold, as a first fuzzy region; and determining the area, in which the confidence coefficient of the pixel points in the sky area is higher than the third confidence coefficient threshold and lower than the fourth confidence coefficient threshold, as a second fuzzy area.
The terminal also stores a second confidence threshold, a third confidence threshold and a fourth confidence threshold, wherein the second confidence threshold is less than the third confidence threshold and less than the fourth confidence threshold.
And the third confidence threshold is used for determining pixel points of sky areas and pixel points of non-sky areas in the image. When the confidence of the pixel point is greater than a third confidence threshold, the terminal determines the pixel point as the pixel point of the sky area; and when the confidence of the pixel point is less than or equal to the third confidence threshold, the terminal determines the pixel point as the pixel point of the non-sky area.
The higher the confidence of the first pixel point is above the third confidence threshold, the larger the probability that the first pixel point is a pixel point of the sky region; conversely, the closer the confidence of the first pixel point is to the third confidence threshold, the smaller the probability that the first pixel point is a pixel point of the sky region, and the first pixel point is a fuzzy pixel point of the sky region.
The fourth confidence threshold is used for determining fuzzy pixel points in the sky region; when the confidence of the first pixel point is smaller than the fourth confidence threshold, the first pixel point is a fuzzy pixel point; otherwise, the first pixel point is a clear pixel point.
The lower the confidence of the second pixel point is below the third confidence threshold, the larger the probability that the second pixel point is a pixel point of the non-sky region; conversely, the closer the confidence of the second pixel point is to the third confidence threshold, the smaller the probability that the second pixel point is a pixel point of the non-sky region, and the second pixel point is a fuzzy pixel point of the non-sky region.
The second confidence threshold is used for determining fuzzy pixel points in the non-sky region; when the confidence of the second pixel point is larger than the second confidence threshold, the second pixel point is a fuzzy pixel point; otherwise, the second pixel point is a clear pixel point.
The first pixel point is a pixel point in a sky area of the image; the second pixel point is a pixel point in a non-sky area of the image.
Optionally, the terminal determines an area where the confidence of the second pixel point is higher than the second confidence threshold and lower than the third confidence threshold as a first fuzzy area; the first fuzzy area is an area where fuzzy pixel points in a non-sky area are located.
The terminal determines the area, in which the confidence coefficient of the first pixel point is higher than the third confidence coefficient threshold and lower than the fourth confidence coefficient threshold, as a second fuzzy area; the second fuzzy area is an area where fuzzy pixel points in the sky area are located.
Step 502, the terminal determines a second proportion of the pixel points of the first fuzzy region and the second fuzzy region in the image.
Optionally, the terminal counts the sum of the number of the pixel points in the first fuzzy region and the second fuzzy region, and determines the sum as a third numerical value; counting the number of pixel points of the image, and determining the number as a second numerical value; and comparing the third numerical value with the second numerical value to obtain a second proportion, wherein the second proportion is the proportion of the pixel points of the first fuzzy region and the second fuzzy region in the image.
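Steps 501–502 can be sketched directly from the confidence map. The three threshold values are hypothetical, chosen only to satisfy the ordering second < third < fourth given in the text:

```python
import numpy as np

# Hypothetical thresholds with second < third < fourth.
SECOND_T, THIRD_T, FOURTH_T = 0.3, 0.5, 0.7

def blurred_proportion(conf):
    """Second proportion: share of pixel points in the first blur region
    (non-sky side, second < conf < third) and the second blur region
    (sky side, third < conf < fourth)."""
    first_blur = (conf > SECOND_T) & (conf < THIRD_T)    # fuzzy non-sky pixels
    second_blur = (conf > THIRD_T) & (conf < FOURTH_T)   # fuzzy sky pixels
    return (first_blur.sum() + second_blur.sum()) / conf.size
```

In step 503 the terminal then compares this value against the second proportion threshold: below it, the sky/non-sky division is clear enough to replace the sky.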
In step 503, the terminal determines whether the second ratio is smaller than a second ratio threshold.
The second proportion threshold is used for determining an image with clear sky area and non-sky area division; when the second proportion is smaller than a second proportion threshold value, the terminal determines that the sky area and the non-sky area in the image are clearly divided, and the sky area can be clearly identified; when the second ratio is equal to or greater than a second ratio threshold, the terminal determines that the division of the sky region from the non-sky region in the image is blurred, and the sky region cannot be clearly identified.
When the second ratio is smaller than a second ratio threshold, executing step 204, and replacing the sky area with a target sky material to obtain a target image; when the second ratio is equal to or greater than the second ratio threshold, step 205 is executed to save the image.
In summary, the image processing method provided by the present application determines the blurred regions and judges their proportion in the image, screening out images in which this proportion is large. In an image with a large blurred region, the sky region and the non-sky region contain too many fuzzy pixel points, which directly affects how clearly the sky region can be separated from the non-sky region and is not conducive to replacing the sky region. By screening out images in which the blurred regions occupy a large proportion, the terminal replaces the sky region only in suitable images, which improves the success rate of replacing the sky region and improves user experience.
In some embodiments, the terminal may combine at least two of the three situations to filter the image, and schematically, the image processing method provided by the present application is exemplified by combining the three situations with the terminal, as shown in fig. 9, and schematically includes the following steps:
step 601, the terminal judges whether the image is whitened or not through a dark channel algorithm.
Optionally, the terminal obtains a primary color value of a dark channel of each pixel point in the image, and divides the sum of the primary color values of the dark channels of the pixel points in the image by the sum of the pixel points in the image to obtain an average primary color value of the dark channel in the image.
The terminal judges whether the average primary color value of the dark channel is smaller than a second primary color value threshold value; when the average primary color value of the dark channel is smaller than the second primary color value threshold value, the image is not whitened, step 602 is executed, and the sky area is replaced by a target sky material to obtain a target image; when the average primary color value of the dark channel is equal to or greater than the second primary color value threshold, step 608 is performed without replacing the sky area in the image.
Step 602, the terminal performs image segmentation through the image segmentation model to obtain a probability distribution map of a sky region in the image.
Optionally, the image segmentation model adopts a deep convolutional neural network model, and trainable guided filtering is added to participate in training, so that the precision of the image segmentation model is improved, and the detail information is increased.
The probability distribution map is the confidence of the pixel points in the sky area.
Step 603, the terminal takes a region with high sky confidence from the sky region, and judges whether the shooting scene of the image is night or not according to the region with high sky confidence.
And the terminal determines the region corresponding to the pixel point with the confidence coefficient higher than the first confidence coefficient threshold value in the sky region as the region with high sky confidence coefficient, namely the region with high confidence coefficient.
Optionally, for a pixel point in a high-confidence-degree area, the terminal acquires an average primary color value of a blue channel of the pixel point and an average gray value of the pixel point; when the average primary color value of the blue channel of the pixel point in the high-confidence-degree area is smaller than the first primary color value threshold value and the average gray value of the pixel point in the high-confidence-degree area is smaller than the gray value threshold value, the terminal determines the shooting scene of the image as night, namely the image is shot at night.
Or the terminal acquires the average primary color value of the blue channel of the pixel point; when the average primary color value of the blue channel of the pixel point in the high-confidence-degree area is smaller than the first primary color value threshold value, the terminal determines the shooting scene of the image as night, namely the image is shot at night.
Or the terminal acquires the average gray value of the pixel points; and when the average gray value of the pixel points in the high-confidence-degree area is smaller than the gray value threshold, the terminal determines the shooting scene of the image as night, namely the image is shot at night.
When the shooting scene of the image is night, executing step 608; when the photographing scene of the image is not at night, step 604 is performed.
In step 604, the terminal determines whether the proportion of the sky area in the image is less than 10%.
Optionally, the terminal determines the proportion of the sky area in the image according to the ratio of the number of the pixel points of the sky area to the number of the pixel points in the image.
Or the terminal determines the proportion of the sky area in the image according to the ratio of the area of the sky area to the area of the image.
When the proportion of the sky area in the image is less than 10%, executing step 608; when the proportion of the sky area in the image is equal to or greater than 10%, step 605 is performed.
Step 605, the terminal determines whether the proportion of the blurred region in the segmented image is higher than a threshold.
The segmented image comprises a sky region and a non-sky region, and the blurred region comprises a first blur region corresponding to the non-sky region and a second blur region corresponding to the sky region.
The terminal determines the area, in which the confidence coefficient of the pixel points in the non-sky area is higher than a second confidence coefficient threshold and lower than a third confidence coefficient threshold, as a first fuzzy area; and determining the area, in which the confidence coefficient of the pixel points in the sky area is higher than the third confidence coefficient threshold and lower than the fourth confidence coefficient threshold, as a second fuzzy area.
Optionally, the terminal determines the proportion of the blurred region in the image according to the ratio of the number of the pixel points of the blurred region to the number of the pixel points in the image.
Or the terminal determines the proportion of the fuzzy area in the image according to the ratio of the area of the fuzzy area to the area of the image.
When the proportion of the blurred region in the image is higher than the threshold, executing step 608; when the proportion of the blurred region in the image is lower than or equal to the threshold, step 606 is executed.
In step 606, the terminal performs segmentation on the connection portion between the sky region and the non-sky region.
And the terminal carries out segmentation processing on the connecting part of the sky area and the non-sky area through an edge optimization algorithm.
And step 607, replacing the sky area with a target sky material to obtain a target image.
Alternatively, the replacement of the sky area may include the following exemplary steps:
1) acquiring attribute parameters of an image;
2) and screening out a target sky material from the candidate sky materials according to the attribute parameters.
Wherein the attribute parameter includes at least one of a photographing time and a photographing place of the image.
In step 608, the sky area in the image is not replaced.
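The whole decision chain of steps 601–608 can be sketched as one function. Every helper passed in, and both ratio thresholds, are assumptions for illustration (SKY_RATIO_MIN matches the 10% figure in step 604); returning None stands for step 608, keeping the image unchanged:

```python
SKY_RATIO_MIN = 0.10     # step 604 threshold (10% per the text)
BLUR_RATIO_MAX = 0.05    # hypothetical threshold for step 605

def process_image(rgb, segment, is_whitened, is_night, blur_ratio,
                  refine_edges, replace_sky):
    """Return the replaced image, or None when the sky should be kept."""
    if is_whitened(rgb):                       # step 601: dark-channel check
        return None                            # step 608: keep the image
    conf = segment(rgb)                        # step 602: probability map
    sky_mask = conf > 0.5                      # sky vs. non-sky pixel points
    if is_night(rgb, sky_mask):                # step 603: night-scene check
        return None
    if sky_mask.mean() < SKY_RATIO_MIN:        # step 604: sky area too small
        return None
    if blur_ratio(conf) > BLUR_RATIO_MAX:      # step 605: edges too blurred
        return None
    conf = refine_edges(rgb, conf)             # step 606: edge optimization
    return replace_sky(rgb, conf)              # step 607: material replacement
```

Wiring the checks this way keeps each screening condition independent: any single failing check sends the image to step 608 without touching it.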
In summary, in the image processing method provided in the embodiment of the present application, an image is processed through an image segmentation model to obtain a probability distribution map of the sky region in the image; the probability distribution map is the confidence of the pixel points in the sky region; a high-confidence region with confidence higher than a first confidence threshold is determined from the sky region; and when the pixel points in the high-confidence region do not meet the preset condition, the sky region is replaced with a target sky material to obtain a target image. The method solves the problem in the related art that at least 7 steps are needed to obtain a target image whose sky region is replaced with a target sky material, achieves automatic replacement of the original sky region in the image with the target sky material, requires no post-shot retouching by the user, reduces the user's manual operation steps, and improves human-computer interaction efficiency. Even without any retouching skill, the user can obtain a sky photograph comparable to one shot with a professional single-lens reflex camera.
In addition, the image processing method provided by the embodiments of the present application also screens images by factors such as the average primary color value of the dark channel, the proportion of the sky region in the image, and the proportion of blurred regions in the image, filtering out images that are unsuitable for sky replacement. The terminal therefore replaces the sky region only in suitable images, which raises the success rate of sky replacement and improves the user experience.
Referring to fig. 10, a block diagram of an image processing apparatus provided by an exemplary embodiment of the present application is shown. The apparatus may be implemented as part or all of a terminal in software, hardware, or a combination of the two, and includes:
the processing module 701 is configured to process the image through the image segmentation model to obtain a probability distribution map of the sky region in the image, where the probability distribution map gives the confidence that each pixel belongs to the sky region;
a determining module 702 configured to determine a high confidence region from the sky region, the confidence of which is higher than a first confidence threshold;
the replacing module 703 is configured to replace the sky area with a target sky material to obtain a target image when the pixel point in the high-confidence-level area does not meet the preset condition.
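The first two modules above amount to a per-pixel thresholding step. Assuming the segmentation model outputs a confidence map with values in [0, 1], the first confidence threshold selects the high-confidence region; the 0.9 value below is illustrative, not taken from the patent.

```python
import numpy as np

def high_confidence_region(prob_map, first_threshold=0.9):
    """Select pixels whose sky confidence exceeds the first threshold.

    `prob_map` holds the segmentation model's per-pixel sky
    confidence in [0, 1]; the 0.9 threshold is an illustrative
    value, not one disclosed by the patent.
    """
    return prob_map > first_threshold

# Toy 2x2 confidence map: two confident sky pixels, two others.
prob = np.array([[0.95, 0.80],
                 [0.99, 0.10]])
high_conf = high_confidence_region(prob)
```

The resulting boolean mask is what the replacing module then tests against the preset condition before swapping in the target sky material.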
In some embodiments, the replacement module 703 includes:
a segmentation submodule 7031 configured to perform segmentation processing on the edge of the sky region;
and a replacing sub-module 7032 configured to replace the segmented sky area with a target sky material, so as to obtain a target image.
In some embodiments, replacement sub-module 7032 is configured to obtain attribute parameters of the image; screening a target sky material from the candidate sky materials according to the attribute parameters; wherein the attribute parameter includes at least one of a photographing time and a photographing place of the image.
In some embodiments, the preset conditions include:
the average primary color value of the blue channel of the pixel point in the high-confidence-degree area is smaller than a first primary color value threshold value;
or the average gray value of the pixel points in the high-confidence-degree area is smaller than the gray value threshold;
or the average primary color value of the blue channel of the pixel point in the high-confidence-degree area is smaller than the first primary color value threshold, and the average gray value of the pixel point in the high-confidence-degree area is smaller than the gray value threshold.
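A minimal sketch of this preset-condition check, assuming an RGB image array and illustrative threshold values (the patent does not disclose concrete thresholds):

```python
import numpy as np

def meets_preset_condition(image, high_conf_mask,
                           blue_threshold=80.0, gray_threshold=60.0):
    """True when the high-confidence region is too dark or too
    colorless to warrant replacement, per the preset condition above.

    `image` is an H x W x 3 RGB array; both thresholds are
    illustrative values, not taken from the patent.
    """
    region = image[high_conf_mask]        # N x 3 selected pixels
    avg_blue = region[:, 2].mean()        # blue-channel average
    avg_gray = region.mean()              # average gray value
    # The claim's third branch (both below threshold) is already
    # covered by the disjunction of the first two.
    return bool(avg_blue < blue_threshold or avg_gray < gray_threshold)
```

When this returns True the apparatus skips replacement and applies an enhancement filter instead, as described for the processing module below.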
In some embodiments, the apparatus includes:
an obtaining module 704 configured to obtain an average primary color value of a dark channel in an image;
a determining module 702 configured to execute the step of processing the image through the image segmentation model to obtain the probability distribution map of the sky region in the image when the average primary color value of the dark channel is smaller than the second primary color value threshold.
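A sketch of this dark-channel screen, assuming the simplest form of the dark channel (the per-pixel minimum over R, G and B) and an illustrative second primary color value threshold:

```python
import numpy as np

def dark_channel_mean(image):
    """Average primary color value of the image's dark channel.

    The dark channel is the per-pixel minimum over the R, G and B
    channels; dehazing work also applies a local minimum filter,
    which is omitted here for brevity.
    """
    return float(image.min(axis=2).mean())

def should_run_segmentation(image, second_primary_threshold=128.0):
    """Proceed to sky segmentation only when the dark-channel average
    is below the second primary color value threshold.  The 128
    threshold is an illustrative value, not taken from the patent."""
    return dark_channel_mean(image) < second_primary_threshold
```

A hazy or washed-out frame has a bright dark channel everywhere, so this screen cheaply rejects images where segmentation and replacement would likely look poor.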
In some embodiments, the determining module 702 is configured to determine a first proportion of pixels of the sky region among total pixels of the image; and when the first ratio is larger than a first ratio threshold value, replacing the sky area with a target sky material to obtain a target image.
In some embodiments, the determining module 702 is configured to determine an area in which the confidence of the pixel points in the non-sky area is higher than the second confidence threshold and lower than the third confidence threshold as the first blurred area; determining the area, in which the confidence coefficient of the pixel points in the sky area is higher than a third confidence coefficient threshold and lower than a fourth confidence coefficient threshold, as a second fuzzy area;
a determining module 702 configured to determine a second proportion of the pixel points of the first blurred region and the second blurred region in the image; when the second proportion is smaller than a second proportion threshold value, replacing the sky area with a target sky material to obtain a target image; the non-sky region refers to a region other than the sky region in the image.
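The two proportion screens handled by the determining module can be sketched together; all five threshold values below are illustrative, not values disclosed by the patent.

```python
import numpy as np

def ratios_allow_replacement(prob_map, sky_mask,
                             first_ratio_threshold=0.2,
                             second_ratio_threshold=0.1,
                             second_conf=0.3, third_conf=0.5,
                             fourth_conf=0.7):
    """Apply the two proportion screens described above.

    The sky region must cover enough of the image (first ratio),
    and the ambiguous "blurred" confidences must cover little of
    it (second ratio).  All thresholds here are illustrative.
    """
    total = prob_map.size
    first_ratio = sky_mask.sum() / total
    non_sky = ~sky_mask
    # First blurred area: non-sky pixels between the 2nd and 3rd thresholds.
    blur1 = non_sky & (prob_map > second_conf) & (prob_map < third_conf)
    # Second blurred area: sky pixels between the 3rd and 4th thresholds.
    blur2 = sky_mask & (prob_map > third_conf) & (prob_map < fourth_conf)
    second_ratio = (blur1 | blur2).sum() / total
    return bool(first_ratio > first_ratio_threshold
                and second_ratio < second_ratio_threshold)
```

Intuitively, a tiny sky or a model that is unsure along large parts of the frame both predict an ugly replacement, so either condition vetoes it.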
In some embodiments, the processing module 701 is further configured to process the image by using a filter when the pixel point in the high-confidence region meets a preset condition; the filter is used for enhancing the display effect of the image.
In summary, the image processing apparatus provided in the embodiments of the present application processes an image through an image segmentation model to obtain a probability distribution map of the sky region in the image, where the probability distribution map gives the confidence that each pixel belongs to the sky region; determines, from the sky region, a high-confidence region whose confidence is higher than a first confidence threshold; and, when the pixels in the high-confidence region do not meet the preset condition, replaces the sky region with a target sky material to obtain a target image. This solves the problem in the related art that at least seven manual steps are needed to obtain a target image whose sky region has been replaced with a target sky material: the original sky region in the image is replaced automatically, no post-shot retouching is required, the user's manual operations are reduced, and human-computer interaction efficiency is improved. Even a user with no retouching skills can obtain a sky image comparable to one shot with a professional SLR lens.
In addition, the image processing apparatus provided by the embodiments of the present application also screens images by the preset conditions, filtering out images that are unsuitable for sky replacement, so that the terminal replaces the sky region only in suitable images; this raises the success rate of sky replacement and improves the user experience.
Fig. 11 is a block diagram of an image processing apparatus 800 according to an exemplary embodiment of the present application. For example, the apparatus 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, an exercise device, a personal digital assistant, and the like.
Referring to fig. 11, the apparatus 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls the overall operation of the device 800, such as operations associated with display, telephone calls, data communications, camera operation, and recording. The processing component 802 may include one or more processors 818 to execute instructions so as to perform all or part of the steps of the above-described method embodiments. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the apparatus 800. Examples of such data include instructions for any application or method operating on device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
Power components 806 provide power to the various components of device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the device 800.
The multimedia component 808 includes a screen that provides an output interface between the device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the device 800 is in an operating mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the apparatus 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the device 800. For example, the sensor assembly 814 may detect the open/closed status of the device 800 and the relative positioning of components such as the display and keypad of the device 800; it may also detect a change in the position of the device 800 or of one of its components, the presence or absence of user contact with the device 800, the orientation or acceleration/deceleration of the device 800, and a change in the temperature of the device 800. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. It may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate communications between the apparatus 800 and other devices in a wired or wireless manner. The device 800 may access a wireless network based on a communication standard, such as WiFi, 2G or 3G, or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast associated information from an external broadcast management system via a broadcast channel.
In an exemplary embodiment, the apparatus 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the image processing methods in the above method embodiments.
In an exemplary embodiment, a non-transitory computer-readable storage medium comprising instructions, such as the memory 804 comprising instructions, executable by the processor 818 of the apparatus 800 to perform the image processing method of the above-described method embodiment is also provided. For example, the non-transitory computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
In an exemplary embodiment, a computer-readable storage medium is also provided, and the computer-readable storage medium is a non-volatile computer-readable storage medium, and a computer program is stored in the computer-readable storage medium, and when being executed by a processing component, the stored computer program can implement the image processing method provided by the above-mentioned embodiment of the present disclosure.
The disclosed embodiments also provide a computer program product having instructions stored therein, which when run on a computer, enable the computer to perform the image processing method provided by the disclosed embodiments.
The embodiment of the present disclosure also provides a chip, which includes a programmable logic circuit and/or a program instruction, and when the chip runs, the chip can execute the image processing method provided by the embodiment of the present disclosure.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (18)

1. An image processing method, characterized in that the method comprises:
processing the image through an image segmentation model to obtain a probability distribution map of a sky region in the image; the probability distribution map is the confidence of the pixel points in the sky area;
determining a high confidence region from the sky region where the confidence is above a first confidence threshold;
and when the pixel points in the high-confidence-degree area do not meet preset conditions, replacing the sky area with a target sky material to obtain a target image.
2. The method of claim 1, wherein the replacing the sky region with a target sky material to obtain a target image comprises:
performing segmentation processing on the edge of the sky area;
and replacing the segmented sky area with the target sky material to obtain the target image.
3. The method of claim 2, wherein the replacing the segmented sky region with the target sky material to obtain the target image comprises:
acquiring attribute parameters of the image;
screening the target sky material from candidate sky materials according to the attribute parameters;
wherein the attribute parameter includes at least one of a photographing time and a photographing place of the image.
4. The method according to any one of claims 1 to 3, wherein the preset conditions include:
the average primary color value of the blue channel of the pixel point in the high-confidence-degree area is smaller than a first primary color value threshold value;
or the average gray value of the pixel points in the high-confidence-degree area is smaller than a gray value threshold value;
or the average primary color value of the blue channel of the pixel point in the high-confidence-degree region is smaller than the first primary color value threshold, and the average gray value of the pixel point in the high-confidence-degree region is smaller than the gray value threshold.
5. The method according to any one of claims 1 to 3, wherein before processing the image by the image segmentation model to obtain the probability distribution map of the sky region in the image, the method comprises:
acquiring an average primary color value of a dark channel in the image;
and when the average primary color value of the dark channel is smaller than a second primary color value threshold value, executing the step of processing the image through the image segmentation model to obtain a probability distribution map of the sky area in the image.
6. The method of any of claims 1-3, wherein replacing the sky region with a target sky material to obtain a target image comprises:
determining a first proportion of pixel points of the sky area in total pixel points of the image;
and when the first ratio is larger than a first ratio threshold value, executing the step of replacing the sky area with a target sky material to obtain a target image.
7. The method of any of claims 1-3, wherein replacing the sky region with a target sky material to obtain a target image further comprises:
determining the area with the confidence coefficient of the pixel points in the non-sky area higher than a second confidence coefficient threshold value and lower than a third confidence coefficient threshold value as a first fuzzy area; determining a region with the confidence coefficient of the pixel points in the sky region higher than the third confidence coefficient threshold value and lower than a fourth confidence coefficient threshold value as a second fuzzy region;
determining a second proportion of pixel points of the first fuzzy region and the second fuzzy region in the image;
when the second proportion is smaller than a second proportion threshold value, executing the step of replacing the sky area with a target sky material to obtain a target image;
wherein the non-sky region refers to a region of the image other than the sky region.
8. The method of any of claims 1 to 3, further comprising:
when the pixel points in the high-confidence-degree area accord with the preset conditions, processing the image by adopting a filter; the filter is used for enhancing the display effect of the image.
9. An image processing apparatus, characterized in that the apparatus comprises:
the processing module is configured to process the image through an image segmentation model to obtain a probability distribution map of a sky region in the image; the probability distribution map is the confidence of the pixel points in the sky area;
a determination module configured to determine a high confidence region from the sky region where the confidence is above a first confidence threshold;
and the replacing module is configured to replace the sky area with a target sky material to obtain a target image when the pixel points in the high-confidence-degree area do not meet preset conditions.
10. The apparatus of claim 9, wherein the replacement module comprises:
a segmentation submodule configured to segment edges of the sky region;
a replacing sub-module configured to replace the segmented sky area with the target sky material to obtain the target image.
11. The apparatus of claim 10,
the replacing sub-module is configured to acquire attribute parameters of the image; screening the target sky material from candidate sky materials according to the attribute parameters; wherein the attribute parameter includes at least one of a photographing time and a photographing place of the image.
12. The apparatus according to any one of claims 9 to 11, wherein the preset conditions include:
the average primary color value of the blue channel of the pixel point in the high-confidence-degree area is smaller than a first primary color value threshold value;
or the average gray value of the pixel points in the high-confidence-degree area is smaller than a gray value threshold value;
or the average primary color value of the blue channel of the pixel point in the high-confidence-degree region is smaller than the first primary color value threshold, and the average gray value of the pixel point in the high-confidence-degree region is smaller than the gray value threshold.
13. The apparatus according to any one of claims 9 to 11, wherein the apparatus comprises:
the acquisition module is configured to acquire an average primary color value of a dark channel in the image;
the determining module is configured to execute the step of processing the image through the image segmentation model to obtain a probability distribution map of a sky region in the image when the average primary color value of the dark channel is smaller than a second primary color value threshold.
14. The apparatus according to any one of claims 9 to 11,
the determination module is configured to determine a first proportion of pixels of the sky region among total pixels of the image; and when the first ratio is larger than a first ratio threshold value, executing the step of replacing the sky area with a target sky material to obtain a target image.
15. The apparatus according to any one of claims 9 to 11,
the determining module is configured to determine an area, in which the confidence of the pixel points in the non-sky area is higher than a second confidence threshold and lower than a third confidence threshold, as a first fuzzy area; determining a region with the confidence coefficient of the pixel points in the sky region higher than the third confidence coefficient threshold value and lower than a fourth confidence coefficient threshold value as a second fuzzy region;
the determining module is configured to determine a second proportion of pixel points of the first fuzzy region and the second fuzzy region in the image; when the second proportion is smaller than a second proportion threshold value, executing the step of replacing the sky area with a target sky material to obtain a target image; wherein the non-sky region refers to a region of the image other than the sky region.
16. The apparatus according to any one of claims 9 to 11,
the processing module is further configured to process the image by using a filter when the pixel points in the high-confidence-degree region meet the preset condition; the filter is used for enhancing the display effect of the image.
17. A terminal, characterized in that it comprises a processor and a memory in which at least one instruction, at least one program, set of codes or set of instructions is stored, which is loaded and executed by the processor to implement the image processing method according to any one of claims 1 to 8.
18. A computer readable storage medium having stored therein at least one instruction, at least one program, a set of codes, or a set of instructions, which is loaded and executed by a processor to implement the image processing method according to any one of claims 1 to 8.
CN201910591533.4A 2019-07-02 2019-07-02 Image processing method, device, equipment and storage medium Pending CN112258380A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910591533.4A CN112258380A (en) 2019-07-02 2019-07-02 Image processing method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910591533.4A CN112258380A (en) 2019-07-02 2019-07-02 Image processing method, device, equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112258380A (en) 2021-01-22

Family

ID=74223803

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910591533.4A Pending CN112258380A (en) 2019-07-02 2019-07-02 Image processing method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112258380A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112767392A (en) * 2021-03-02 2021-05-07 百果园技术(新加坡)有限公司 Image definition determining method, device, equipment and storage medium
CN112767392B (en) * 2021-03-02 2024-04-09 百果园技术(新加坡)有限公司 Image definition determining method, device, equipment and storage medium
WO2022194079A1 (en) * 2021-03-19 2022-09-22 影石创新科技股份有限公司 Sky region segmentation method and apparatus, computer device, and storage medium
CN114998159A (en) * 2022-08-04 2022-09-02 邹城市天晖软件科技有限公司 Design image self-adaptive enhancement method
CN115908413A (en) * 2023-01-06 2023-04-04 华慧健(天津)科技有限公司 Contrast image segmentation method, electronic device, processing system, and storage medium

Similar Documents

Publication Publication Date Title
CN111418201B (en) Shooting method and equipment
CN112585940B (en) System and method for providing feedback for artificial intelligence based image capture devices
WO2019237992A1 (en) Photographing method and device, terminal and computer readable storage medium
CN108322646B (en) Image processing method, image processing device, storage medium and electronic equipment
CN112258380A (en) Image processing method, device, equipment and storage medium
CN109379572B (en) Image conversion method, image conversion device, electronic equipment and storage medium
US11070717B2 (en) Context-aware image filtering
US20220383508A1 (en) Image processing method and device, electronic device, and storage medium
CN110971841B (en) Image processing method, image processing device, storage medium and electronic equipment
CN113411498B (en) Image shooting method, mobile terminal and storage medium
Yang et al. Personalized exposure control using adaptive metering and reinforcement learning
CN111654643B (en) Exposure parameter determination method and device, unmanned aerial vehicle and computer readable storage medium
WO2023098743A1 (en) Automatic exposure method, apparatus and device, and storage medium
CN112887610A (en) Shooting method, shooting device, electronic equipment and storage medium
CN110956063A (en) Image processing method, device, equipment and storage medium
CN112804464A (en) HDR image generation method and device, electronic equipment and readable storage medium
CN113691724A (en) HDR scene detection method and device, terminal and readable storage medium
CN110392211B (en) Image processing method and device, electronic equipment and computer readable storage medium
CN111405185A (en) Zoom control method and device for camera, electronic equipment and storage medium
WO2023071933A1 (en) Camera photographing parameter adjustment method and apparatus and electronic device
CN110956576B (en) Image processing method, device, equipment and storage medium
CN115552415A (en) Image processing method, image processing device, electronic equipment and storage medium
CN112785537A (en) Image processing method, device and storage medium
CN113808066A (en) Image selection method and device, storage medium and electronic equipment
CN111026893A (en) Intelligent terminal, image processing method and computer-readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination