CN110706162A - Image processing method and device and computer storage medium - Google Patents

Image processing method and device and computer storage medium

Info

Publication number
CN110706162A
Authority
CN
China
Prior art keywords
image
brightness
background
layer
target
Prior art date
Legal status
Pending
Application number
CN201910824501.4A
Other languages
Chinese (zh)
Inventor
周晨航
Current Assignee
Shenzhen Transsion Holdings Co Ltd
Original Assignee
Shenzhen Transsion Holdings Co Ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Transsion Holdings Co Ltd filed Critical Shenzhen Transsion Holdings Co Ltd
Priority to CN201910824501.4A priority Critical patent/CN110706162A/en
Publication of CN110706162A publication Critical patent/CN110706162A/en
Pending legal-status Critical Current

Classifications

    • G06T 5/92
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/13 Edge detection
    • G06T 7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30196 Human being; Person
    • G06T 2207/30201 Face

Abstract

The invention discloses an image processing method, an image processing device and a computer storage medium. The image processing method includes: acquiring a brightness parameter of an image to be processed; and dynamically adjusting the brightness of the image to be processed based on the brightness parameter to obtain a target image with globally and dynamically enhanced brightness. By dynamically adjusting the brightness of a single-frame image, the method, device and storage medium improve the overall dynamic range of the image brightness and thereby the image quality, while keeping the operation convenient and fast and improving the user experience.

Description

Image processing method and device and computer storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to an image processing method, an image processing apparatus, and a computer storage medium.
Background
The dynamic range of an image is one of the key metrics of an imaging system; raising the dynamic range of the captured image allows more image detail to be recorded. In the related art, during image capture the system may detect whether a human face is present, capture multiple images at different exposures according to the scene, and synthesize them to raise the overall dynamic range of the image. However, this approach requires multiple exposure images, which lengthens the imaging delay and still fails to raise the overall picture brightness sufficiently, degrading the user experience.
Disclosure of Invention
The invention aims to provide an image processing method, an image processing device and a computer storage medium that can improve the overall dynamic range and quality of an image with convenient, fast operation, thereby improving the user experience.
To this end, the technical solution of the invention is realized as follows:
in a first aspect, an embodiment of the present invention provides an image processing method, including:
acquiring a brightness parameter of an image to be processed;
and dynamically adjusting the brightness of the image to be processed based on the brightness parameter, so as to obtain a target image with globally and dynamically enhanced brightness.
As one embodiment, the acquiring a brightness parameter of an image to be processed includes:
extracting brightness component data of an image to be processed and component data other than the brightness component data;
the dynamically adjusting the brightness of the image to be processed based on the brightness parameter to obtain a target image with globally and dynamically enhanced brightness includes:
performing brightness stretching processing on the brightness component data to obtain the brightness component data after the brightness stretching processing;
and fusing the brightness component data after the brightness stretching processing with the component data other than the brightness component data to obtain a target image with globally and dynamically enhanced brightness.
As one embodiment, the extracting of the brightness component data of the image to be processed and the component data other than the brightness component data includes:
if the image to be processed is in an RGB color mode, converting the image to be processed into a YUV color space, and extracting brightness component data and component data except the brightness component data of the image to be processed in the YUV color mode.
As one embodiment, the performing a luminance stretching process on the brightness component data to obtain the brightness component data after the luminance stretching process includes:
filtering the brightness component data to obtain a background layer and a detail layer of the brightness component data;
performing contrast-limited adaptive histogram equalization processing on the background layer to obtain a target background layer;
and fusing the detail layer and the target background layer to obtain the brightness component data after brightness stretching processing.
As an embodiment, the performing contrast-limited adaptive histogram equalization processing on the background image layer to obtain a target background image layer includes:
respectively and uniformly dividing the background image layer into blocks according to at least two different set image division modes to obtain at least two divided background image layers;
respectively carrying out histogram equalization processing on the at least two background image layers which are divided into blocks to obtain at least two background image layers which are divided into blocks and subjected to histogram equalization processing;
and performing weighted fusion on the at least two background image layers which are divided into blocks and subjected to histogram equalization processing to obtain a target background image layer.
As one embodiment, the image segmentation modes include a first image segmentation mode for segmenting the background layer into N1 × N1 blocks and a second image segmentation mode for segmenting the background layer into N2 × N2 blocks, where N1 and N2 are positive integers and N1 is less than N2; the at least two background image layers which are divided into blocks and subjected to histogram equalization processing comprise a first background image layer and a second background image layer; the weighting and fusing the at least two background image layers which are divided into blocks and subjected to histogram equalization processing to obtain a target background image layer comprises the following steps:
acquiring a first weight matrix of the first background layer and a second weight matrix of the second background layer, wherein each element in the first weight matrix represents the weight of a corresponding pixel point in the first background layer, and each element in the second weight matrix represents the weight of a corresponding pixel point in the second background layer;
and performing weighted fusion on the first background layer and the second background layer according to the first weight matrix and the second weight matrix to obtain a target background layer.
As an implementation manner, the obtaining a first weight matrix of the first background layer and a second weight matrix of the second background layer includes:
displaying a weight setting interface comprising the first background layer and/or the second background layer to a user, wherein the weight setting interface comprises a block selection option and a weight setting option;
acquiring the selection operation of the user aiming at the block selection option and the setting operation of the weight setting option in the weight setting interface;
and correspondingly generating a first weight matrix of the first background layer and/or a second weight matrix of the second background layer according to the selection operation and the setting operation of the user.
As an implementation manner, after the dynamically adjusting the brightness of the image to be processed based on the brightness parameter to obtain the target image with globally and dynamically enhanced brightness, the method further includes:
extracting a region image corresponding to a target object from the target image;
carrying out light effect enhancement processing on the region image corresponding to the target object to obtain the region image corresponding to the target object after the light effect enhancement processing;
and weighting and superposing the regional image corresponding to the target object after the light effect enhancement processing into the target image to obtain the target image after regional enhancement.
As an embodiment, before extracting the region image corresponding to the target object from the target image, the method further includes:
detecting whether a target object exists in the target image;
and when the target object exists in the target image, extracting a region image corresponding to the target object from the target image.
As one of the embodiments, the target object includes a human face; the extracting of the region image corresponding to the target object from the target image includes:
acquiring coordinates of a face region in the target image and key point data of the face;
determining the outline of the face region by adopting a region growing method based on the key point data of the face;
and extracting a region image corresponding to the face from the target image according to the face region coordinates and the face region outline.
As an embodiment, the performing a light effect enhancement process on the region image corresponding to the target object to obtain the region image corresponding to the target object after the light effect enhancement process includes:
performing light effect enhancement, from strong at the center to weak at the edge, on the region image corresponding to the face region to obtain the region image corresponding to the face after the light effect enhancement processing; or,
and carrying out light effect enhancement from weak to strong on the region image corresponding to the face region from the edge to the center to obtain the region image corresponding to the face after the light effect enhancement processing.
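The center-to-edge light effect described above can be sketched as a radial gain that falls off with distance from the region center. This is an illustrative NumPy sketch, not the patented implementation; the maximum gain value and the linear falloff are assumptions chosen for clarity.

```python
import numpy as np

def radial_light_enhance(region, max_gain=1.5):
    """Strong-to-weak light effect from the center of a face region to
    its edge: the brightness gain falls linearly with normalized distance
    from the region center (max_gain at the center, 1.0 at the far edge).

    max_gain is an illustrative parameter, not taken from the patent.
    """
    h, w = region.shape[:2]
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dist = np.sqrt((yy - cy) ** 2 + (xx - cx) ** 2)
    dist = dist / max(dist.max(), 1e-9)        # 0 at center, 1 at corners
    gain = max_gain - (max_gain - 1.0) * dist  # max_gain at center -> 1.0 at edge
    return np.clip(region.astype(np.float64) * gain, 0, 255).astype(np.uint8)

# Usage: a flat 5x5 face region gets brightest at its center pixel.
face_region = np.full((5, 5), 100, dtype=np.uint8)
enhanced = radial_light_enhance(face_region)
```

The weak-to-strong (edge-to-center) variant in the next step is the same gain map traversed in the opposite direction.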
As one of the embodiments, the target object includes a moon; the extracting of the region image corresponding to the target object from the target image includes:
detecting the position of the moon in the target image;
and based on the position of the moon, adopting a segmentation algorithm to segment a region image corresponding to the moon from the target image.
As an embodiment, the performing a light effect enhancement process on the region image corresponding to the target object to obtain the region image corresponding to the target object after the light effect enhancement process includes:
and carrying out light effect enhancement on the area image corresponding to the moon by adopting a deblurring algorithm and a super-resolution algorithm to obtain the area image corresponding to the moon after the light effect enhancement processing.
In a second aspect, an embodiment of the present invention provides an image processing apparatus, including a processor and a storage device for storing a program; the program, when executed by the processor, causes the processor to implement the image processing method according to the first aspect.
In a third aspect, an embodiment of the present invention provides a computer storage medium storing a computer program, which when executed by a processor, implements the image processing method according to the first aspect.
The embodiments of the invention provide an image processing method, an image processing device and a computer storage medium. The image processing method includes: acquiring a brightness parameter of an image to be processed; and dynamically adjusting the brightness of the image to be processed based on the brightness parameter to obtain a target image with globally and dynamically enhanced brightness. By dynamically adjusting the brightness of a single-frame image, the overall dynamic range of the image brightness is improved, by suppressing the brightness of high-brightness regions and/or raising the brightness of low-brightness regions. The image quality is improved without relying on multi-frame exposure images for dynamic range adjustment, so the operation is convenient and fast and the user experience is improved. In addition, images captured or obtained under a variety of complex lighting environments can be processed.
Drawings
Fig. 1 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
fig. 2 is a schematic flowchart of an image processing method according to an embodiment of the present invention;
FIG. 3 is a diagram illustrating a background layer after being divided into blocks according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a setting interface of a background layer according to an embodiment of the present disclosure;
fig. 5 is a schematic structural diagram of an image processing apparatus according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating comparison between before and after image processing according to an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is further elaborated by combining the drawings and the specific embodiments in the specification. Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
Referring to fig. 1, an image processing method provided in an embodiment of the present invention is applicable to situations where the dynamic range of an image needs to be increased. The method may be executed by the image processing apparatus provided in an embodiment of the present invention, which may be implemented in software and/or hardware; in a specific application, the apparatus may be a mobile terminal such as a smartphone, a personal digital assistant, or a tablet computer. The image processing method includes the following steps:
step S101: acquiring a brightness parameter of an image to be processed;
step S102: and dynamically adjusting the brightness of the image to be processed based on the brightness parameter so as to obtain a target image with the integrally and dynamically improved brightness.
Here, the image to be processed is a single-frame image. It may be a frame extracted from a video that is being shot or has been shot by a capture device such as a camera, a single frame that is being shot or has been shot by such a device, or an image acquired by other means. The brightness parameter represents the luminance information of the image and may be, for example, the brightness component data of the image. Dynamically adjusting the brightness of the image to be processed based on the brightness parameter, for example by suppressing the brightness of high-brightness regions and/or raising the brightness of low-brightness regions, enhances the overall dynamic range of the image brightness and thereby improves the image quality.
Optionally, taking the obtaining of the luminance parameter of the image to be processed, including extracting brightness component data of the image to be processed and component data other than the brightness component data as an example, as shown in fig. 2, a specific flowchart of the image processing method provided in the embodiment of the present invention includes:
step S201: extracting brightness component data of an image to be processed and component data other than the brightness component data;
here, in order to increase the overall dynamic range of the image to be processed, the dynamic range stretching process may be performed on the brightness component data of the image to be processed, and thus the brightness component data of the image to be processed needs to be extracted. In this embodiment, it is preferable to adopt a YUV color mode for the to-be-processed image, and if the space of the to-be-processed image itself is an RGB (Red, Green, Blue, Red, Green, Blue) color space or an HSV (Hue, Saturation, brightness) color space, the space of the image may be converted into a YUV (Luma, Chroma, brightness) color space through space conversion, that is, the color mode of the to-be-processed image is converted into a YUV color mode, where Y component data is brightness component data, and U component data and V component data are chrominance component data. It should be noted that, in practical applications, the Y component data may also be referred to as Y channel data, the U component data may be referred to as U channel data, and the V component data may be referred to as V channel data. Here, the brightness component data of the image to be processed is actually each initial pixel value of the brightness Y of the image to be processed. In one embodiment, the extracting the brightness component data of the image to be processed and the component data other than the brightness component data includes: if the image to be processed is in an RGB color mode, converting the image to be processed into a YUV color space, and extracting brightness component data and component data except the brightness component data of the image to be processed in the YUV color mode. Since image distortion is caused by directly performing image processing in the RGB color space, it is necessary to first perform conversion from the RGB color space to the YUV color space on the image to be processed. 
In this way, converting the image to be processed from the RGB color mode to the YUV color mode makes it possible to control changes in the image information conveniently and accurately, ensuring that the dynamic range of the image is reliably improved.
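The RGB-to-YUV split can be sketched as follows. The BT.601 full-range coefficients below are an illustrative choice; the patent only requires that the brightness component be separable from the remaining components.

```python
import numpy as np

def rgb_to_yuv(rgb):
    """Extract the brightness component (Y) and the chroma components (U, V)
    from an RGB image, using BT.601 full-range coefficients."""
    rgb = rgb.astype(np.float64)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b  # brightness component data
    u = 0.492 * (b - y)                    # component data other than brightness
    v = 0.877 * (r - y)
    return y, u, v

def yuv_to_rgb(y, u, v):
    """Inverse transform, used later when fusing the stretched Y plane
    back with the untouched U and V planes."""
    b = y + u / 0.492
    r = y + v / 0.877
    g = (y - 0.299 * r - 0.114 * b) / 0.587
    return np.stack([r, g, b], axis=-1)

# Usage: split a single pixel and reconstruct it.
pixel = np.array([[[120, 64, 200]]], dtype=np.uint8)
y, u, v = rgb_to_yuv(pixel)
restored = yuv_to_rgb(y, u, v)
```

The round trip is lossless up to floating-point precision, which matches the goal of controlling image information changes accurately.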
Step S202: performing brightness stretching processing on the brightness component data to obtain the brightness component data after the brightness stretching processing;
optionally, the performing luminance stretching processing on the brightness component data to obtain the brightness component data after the luminance stretching processing includes:
filtering the brightness component data to obtain a background layer and a detail layer of the brightness component data;
performing contrast-limited adaptive histogram equalization processing on the background layer to obtain a target background layer;
and fusing the detail layer and the target background layer to obtain the brightness component data after brightness stretching processing.
It should be noted that, since the brightness stretching processing is applied only to the background layer of the brightness component data, the brightness component data is first filtered to obtain its background layer and detail layer; that is, the Y-channel data of the image to be processed is divided into a background layer and a detail layer. The detail layer contains the specific detail information of the image to be processed, while the background layer contains its background information, such as the light brightness. Moreover, filtering the brightness component data retains as much image information as possible.
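The background/detail decomposition can be sketched with a simple filter. The patent does not name a specific filter, so a mean (box) filter stands in here; edge-preserving filters (guided, bilateral) are common alternatives in practice. Because the detail layer is defined as the residual, the two layers sum back to the original Y plane, so no image information is lost.

```python
import numpy as np

def split_background_detail(y_plane, ksize=3):
    """Filter the Y plane into a background layer (low frequencies) and a
    detail layer (high-frequency residual): background + detail == y_plane.

    Illustrative box filter; ksize is an assumed parameter.
    """
    pad = ksize // 2
    padded = np.pad(y_plane.astype(np.float64), pad, mode="edge")
    h, w = y_plane.shape
    background = np.zeros((h, w), dtype=np.float64)
    for i in range(h):
        for j in range(w):
            background[i, j] = padded[i:i + ksize, j:j + ksize].mean()
    detail = y_plane.astype(np.float64) - background
    return background, detail

# Usage: decompose a small gradient plane.
y_plane = np.arange(36, dtype=np.float64).reshape(6, 6)
background, detail = split_background_detail(y_plane)
```

Only the background layer is then histogram-equalized; the detail layer is added back unchanged afterward.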
Here, the performing contrast-limited adaptive histogram equalization processing on the background image layer to obtain a target background image layer may include:
respectively and uniformly dividing the background image layer into blocks according to at least two different set image division modes to obtain at least two divided background image layers;
respectively carrying out histogram equalization processing on the at least two background image layers which are divided into blocks to obtain at least two background image layers which are divided into blocks and subjected to histogram equalization processing;
and performing weighted fusion on the at least two background image layers which are divided into blocks and subjected to histogram equalization processing to obtain a target background image layer.
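The per-block equalization step above can be sketched as follows. This is a simplified illustration: production CLAHE additionally clips each block's histogram at a contrast limit and bilinearly interpolates between tile mappings to avoid seams, which is omitted here for brevity.

```python
import numpy as np

def equalize_block(block):
    """Plain histogram equalization of one uint8 block via CDF remapping."""
    hist = np.bincount(block.ravel(), minlength=256)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255.0
    return cdf[block].astype(np.uint8)

def blockwise_equalize(layer, n):
    """Divide a background layer uniformly into n x n non-overlapping
    blocks and equalize each block independently (one image division mode)."""
    h, w = layer.shape
    out = layer.copy()
    hs, ws = h // n, w // n
    for bi in range(n):
        for bj in range(n):
            r0, c0 = bi * hs, bj * ws
            r1 = (bi + 1) * hs if bi < n - 1 else h
            c1 = (bj + 1) * ws if bj < n - 1 else w
            out[r0:r1, c0:c1] = equalize_block(layer[r0:r1, c0:c1])
    return out

# Usage: equalize a narrow-range gradient layer with the N1 = 2 division mode.
layer = np.tile(np.arange(64, 192, 2, dtype=np.uint8), (64, 1))
equalized = blockwise_equalize(layer, 2)
```

Running the same layer through two division modes (e.g. n = 2 and n = 4) yields the two block-divided, equalized background layers that are then weighted-fused.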
It can be understood that dividing the background layer into blocks according to at least two different set image division modes means dividing the background layer into uniform, non-overlapping blocks, with each image division mode dividing the background layer independently; this yields at least two block-divided background layers, each divided by exactly one image division mode. Take as an example a first image division mode that divides the background layer into N1 × N1 blocks and a second image division mode that divides it into N2 × N2 blocks, where N1 and N2 are positive integers and N1 is less than N2. Assuming N1 is 2 and N2 is 4, fig. 3 shows schematic diagrams of the background layer after block division: fig. 3(a) shows the background layer divided into N1 × N1 blocks, sequentially labeled 1, 2, 3 and 4, and fig. 3(b) shows the background layer divided into N2 × N2 blocks, sequentially labeled 1, 2, 3, 4, … 15, 16. The values of N1 and N2 may be set according to actual requirements, for example N1 less than or equal to 5 and N2 greater than or equal to 4.
Here, when the background layer is divided into blocks according to at least two different image division modes to obtain at least two block-divided background layers, and histogram equalization is performed on each of them, a corresponding number of contrast-limited adaptive histogram equalization (CLAHE) modules may be created; each CLAHE module independently divides one background layer into a set number of blocks and then performs histogram equalization on the divided layer. Performing weighted fusion on the at least two block-divided, histogram-equalized background layers to obtain the target background layer means weighted-fusing the values of corresponding pixel points across those layers. In one embodiment, the image division modes include a first image division mode that divides the background layer into N1 × N1 blocks and a second image division mode that divides it into N2 × N2 blocks, where N1 and N2 are positive integers and N1 is less than N2; the at least two block-divided, histogram-equalized background layers include a first background layer and a second background layer; and the weighted fusion of those layers to obtain the target background layer includes the following steps:
acquiring a first weight matrix of the first background layer and a second weight matrix of the second background layer, wherein each element in the first weight matrix represents the weight of a corresponding pixel point in the first background layer, and each element in the second weight matrix represents the weight of a corresponding pixel point in the second background layer;
and performing weighted fusion on the first background layer and the second background layer according to the first weight matrix and the second weight matrix to obtain a target background layer.
Referring again to fig. 3, take the background layer shown in fig. 3(a) as the first background layer and the background layer shown in fig. 3(b) as the second background layer. If the number of pixels in each of the first and second background layers is X × Y, then the first weight matrix and the second weight matrix are also of size X × Y. Let e be a pixel point present in both background layers, with e1 its value in the first background layer, e2 its value in the second background layer, f1 its weight in the first weight matrix and f2 its weight in the second weight matrix; then, when the first and second background layers are weighted-fused according to the two weight matrices, the fused value of the pixel point is e1 × f1 + e2 × f2. In practical applications, the first and second weight matrices may be predefined, i.e. default; for example, the weights of all pixel points in both matrices may default to 0.5, meaning the weights corresponding to the first and second background layers are both 0.5 by default. The matrices may also be user-defined, for example setting the weights of all pixel points in the first weight matrix to 0.4 and those in the second weight matrix to 0.6.
In addition, a user may also use the first background layer as a bottom plate, and specify the weights of different blocks in the second background layer for performing weighted fusion, for example, the user may select the 1 st block and the 2 nd block in the second background layer and set the weights of both the blocks to be 0.6, at this time, the weights of the block in the first background layer and the remaining blocks in the second background layer may be default values, and in order to improve the local brightness of the image, the weights of the blocks in the first background layer may be set to be 1.
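The pixel-wise fusion rule e1 × f1 + e2 × f2 described above can be sketched directly. This is an illustrative sketch; the layer values and the 0.5/0.5 default weighting follow the text.

```python
import numpy as np

def fuse_layers(layer1, layer2, w1, w2):
    """Pixel-wise weighted fusion of two equalized background layers.

    Each element of the weight matrices w1, w2 weights the corresponding
    pixel point, implementing e1 * f1 + e2 * f2. Weights summing to 1 per
    pixel keep the fused result in the original value range.
    """
    assert layer1.shape == layer2.shape == w1.shape == w2.shape
    fused = layer1.astype(np.float64) * w1 + layer2.astype(np.float64) * w2
    return np.clip(fused, 0, 255).astype(np.uint8)

# Usage: the default weighting from the text, every pixel weighted 0.5 / 0.5.
h, w = 4, 4
l1 = np.full((h, w), 100, dtype=np.uint8)
l2 = np.full((h, w), 200, dtype=np.uint8)
w1 = np.full((h, w), 0.5)
w2 = np.full((h, w), 0.5)
target_layer = fuse_layers(l1, l2, w1, w2)  # every pixel 150
```

Block-wise user weighting, as in the interface of fig. 4, corresponds to filling rectangular regions of w1 and w2 with the chosen values instead of constants.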
In an embodiment, the obtaining a first weight matrix of the first background layer and a second weight matrix of the second background layer includes:
displaying a setting interface comprising the first background layer and/or the second background layer to a user, wherein the setting interface comprises a block selection option and a weight setting option;
acquiring the selection operation of the user aiming at the block selection option and the setting operation of the weight setting option in the setting interface;
and correspondingly generating a first weight matrix of the first background layer and/or a second weight matrix of the second background layer according to the selection operation and the setting operation of the user.
Take as an example the case where the second background layer is divided into 16 blocks with a default weight of 0.5. Referring to fig. 4, which shows schematic diagrams of the setting interface of the background layer, fig. 4(a) shows the initial setting interface and fig. 4(b) shows the setting interface after configuration. The weight of each block defaults to 0.5 and is denoted W1, W2, …, W15, W16 respectively; the small circle in each block indicates whether the block is selected. The setting interface displays the block-divided second background layer and the default weight of each block. To select a block, the user clicks the block or the circle inside it; once any block in the background layer is selected, the small circle inside it is filled black so the user can identify it. To set the weight of a block, the user enters the desired weight in the weight input box corresponding to that block, so weights can be set freely. In this example, the user selects the 1st and 2nd blocks and sets their weights to 0.6, so the small circles in the 1st and 2nd blocks are filled black and W1 = 0.6, W2 = 0.6. Thus, by setting block weights in the background layer, that is, by setting the first weight matrix of the first background layer and/or the second weight matrix of the second background layer, the image processing effect can be flexibly improved through different weighting schemes, and the operation is convenient and flexible.
In another embodiment, if the user sets the weight value of the first background layer and the weight value of the second background layer in advance, the obtaining of the first weight matrix of the first background layer and the second weight matrix of the second background layer may be to generate a first weight matrix according to the weight value of the first background layer and the number of pixels of the first background layer, and generate a second weight matrix according to the weight value of the second background layer and the number of pixels of the second background layer, where the number of pixels of the first background layer is the same as the number of pixels of the second background layer.
Step S203: fusing the brightness component data subjected to the brightness stretching processing with the component data other than the brightness component data to obtain a target image with overall dynamically improved brightness.
Specifically, the brightness component data after the brightness stretching processing obtained in step S202 is fused with the component data other than the brightness component data obtained in step S201 to obtain the target image, thereby improving the overall dynamic range of the image.
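A minimal NumPy sketch of this fusion step, assuming the Y, U and V planes are held as separate arrays (names hypothetical):

```python
import numpy as np

def fuse_yuv(y_stretched, u, v):
    # Recombine the stretched luminance (Y) plane with the untouched
    # chrominance (U, V) planes into a single YUV image.
    return np.stack([y_stretched, u, v], axis=-1)

y = np.linspace(0, 255, 64, dtype=np.uint8).reshape(8, 8)  # stretched Y
u = np.full((8, 8), 128, dtype=np.uint8)                   # original U
v = np.full((8, 8), 128, dtype=np.uint8)                   # original V
target = fuse_yuv(y, u, v)
```

Only the Y plane carries the dynamic-range change; the chroma planes pass through untouched, which is why colors are preserved.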
In summary, the image processing method provided in the above embodiment dynamically adjusts the brightness of a single-frame image, improving the overall dynamic range of the image brightness. This includes suppressing brightness in high-brightness regions of the image and/or raising brightness in low-brightness regions, which improves image quality. Because no multi-frame exposure images are needed to adjust the dynamic range, the operation is convenient and fast, and the user experience is improved. In addition, images shot or obtained in various complex light environments can be processed.
In an embodiment, after the dynamically adjusting the brightness of the image to be processed based on the brightness parameter to obtain the target image with the overall dynamically improved brightness, the method further includes:
extracting a region image corresponding to a target object from the target image;
carrying out light effect enhancement processing on the region image corresponding to the target object to obtain the region image corresponding to the target object after the light effect enhancement processing;
and weighting and superposing the regional image corresponding to the target object after the light effect enhancement processing into the target image to obtain the target image after regional enhancement.
It can be understood that an image whose dynamic range has been stretched by the brightness stretching processing is significantly improved in overall dynamic range, but region-specific enhancement has not yet been applied to the regions where certain target objects are located, such as a face region; this may fail to meet user requirements and thus affect the user experience. Moreover, the rendering of the target object is often what the user cares about most, and a poorly rendered target object directly affects the user experience. Therefore, light effect enhancement processing can be performed on the region image corresponding to the target object in the target image, so that the method adapts to various complex light environments, further improves the image quality, and better captures the target object. Optionally, before extracting the region image corresponding to the target object region from the target image, the method further includes: detecting whether a target object exists in the target image; and when the target object exists in the target image, extracting the region image corresponding to the target object region from the target image. Specifically, whether a target object exists in the target image is detected; when one exists, the step of extracting the region image corresponding to the target object region from the target image is executed, and otherwise the processing stops. Performing the extraction step only when a target object is detected in the target image enables accurate, targeted enhancement of the target object.
In one embodiment, the target object comprises a human face; the extracting of the region image corresponding to the target object from the target image includes:
acquiring coordinates of a face region in the target image and key point data of the face;
determining the outline of the face region by adopting a region growing method based on the key point data of the face;
and extracting a region image corresponding to the face from the target image according to the face region coordinates and the face region outline.
Here, the target image may be detected by an existing face recognition method, so that when a face exists in the target image, the coordinates of the face region in the target image and the key point data of the face are obtained. The face region coordinates are the coordinate data of a rectangular frame surrounding the face region, so they contain data beyond the face itself. To accurately determine the region image corresponding to the face, the face region contour is determined by a region growing method based on the key point data of the face, and the region image corresponding to the face is then extracted from the target image according to the face region coordinates and the face region contour. Accurately acquiring the region image corresponding to the face in this way ensures the accuracy of the subsequent light effect enhancement processing and avoids interference.
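A toy sketch of region growing from a seed point (for instance a face key point), using simple intensity similarity; the disclosure specifies only "region growing", so the 4-connectivity and intensity-tolerance criterion below are assumptions:

```python
import numpy as np
from collections import deque

def region_grow(gray, seed, tol=12):
    # Grow a region from a seed pixel, adding 4-connected neighbours
    # whose intensity lies within `tol` of the seed intensity.
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=bool)
    seed_val = int(gray[seed])
    mask[seed] = True
    q = deque([seed])
    while q:
        r, c = q.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and not mask[nr, nc]
                    and abs(int(gray[nr, nc]) - seed_val) <= tol):
                mask[nr, nc] = True
                q.append((nr, nc))
    return mask

img = np.zeros((16, 16), dtype=np.uint8)
img[4:12, 4:12] = 200              # bright "face" patch on dark background
contour_mask = region_grow(img, (8, 8))
```

The boolean mask marks the grown region; its boundary pixels give the region contour used for extraction.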
In an embodiment, the performing a light effect enhancement process on the region image corresponding to the target object to obtain the region image corresponding to the target object after the light effect enhancement process includes:
performing light effect enhancement on the region image corresponding to the face, from strong at the center to weak at the edge, to obtain the region image corresponding to the face after the light effect enhancement processing; or,
and carrying out light effect enhancement from weak to strong on the region image corresponding to the face from the edge to the center to obtain the region image corresponding to the face after the light effect enhancement processing.
Here, the center and the edge of the region image corresponding to the face can be determined from the region image itself, so that light effect enhancement can be applied from strong at the center to weak at the edge, or from weak at the edge to strong at the center. The face is thus brightened with the brightness decreasing gradually from the center to the edge, yielding the region image corresponding to the face after the light effect enhancement processing. Performing light effect enhancement on the region image corresponding to the face in this way adapts to various complex light environments, further improves image quality, and better captures the face.
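One plausible way to realize a strong-to-weak, center-to-edge light effect is a radial gain mask. This is only an illustrative sketch — the disclosure does not specify the falloff function, so the linear falloff and the gain value below are assumptions:

```python
import numpy as np

def radial_light_boost(region, max_gain=1.4):
    # Brightness gain falls off linearly from max_gain at the centre
    # to 1.0 (no change) at the farthest point from the centre.
    h, w = region.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    dist = np.hypot(yy - cy, xx - cx)
    gain = max_gain - (max_gain - 1.0) * dist / dist.max()
    return np.clip(region.astype(np.float32) * gain, 0, 255).astype(np.uint8)

face = np.full((33, 33), 100, dtype=np.uint8)  # flat synthetic face patch
boosted = radial_light_boost(face)
```

The center pixel receives the full gain while the corners are left unchanged, producing the gradual center-to-edge brightness falloff described above.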
In one embodiment, the target object comprises the moon; the extracting of the region image corresponding to the target object from the target image includes:
detecting the position of the moon in the target image;
and based on the position of the moon, adopting a segmentation algorithm to segment a region image corresponding to the moon from the target image.
Specifically, the position of the moon in the target image is detected, and based on that detected position a segmentation algorithm completely segments the moon from the target image, yielding the region image corresponding to the moon. Here, the segmentation algorithm may be, for example, an instance segmentation algorithm. Accurately acquiring the region image corresponding to the moon ensures the accuracy of the subsequent light effect enhancement processing and avoids interference.
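The disclosure leaves the segmentation algorithm open (e.g. instance segmentation). Purely as a deliberately lightweight stand-in, a bright-blob threshold can already isolate a moon-like region in a synthetic night image (helper names hypothetical, threshold an assumption):

```python
import numpy as np

def segment_moon(night, thresh=180):
    # The moon is usually the brightest compact blob in a night shot:
    # threshold, then crop the bounding box of the bright pixels.
    mask = night >= thresh
    ys, xs = np.nonzero(mask)
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    return night[y0:y1, x0:x1], (y0, x0)

sky = np.full((32, 32), 20, dtype=np.uint8)
sky[10:18, 12:20] = 220   # synthetic moon disc (a square for simplicity)
moon, origin = segment_moon(sky)
```

The returned origin records where the crop came from, so the enhanced region can later be pasted back at the same location.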
In an embodiment, the performing a light effect enhancement process on the area image corresponding to the moon to obtain the area image corresponding to the moon after the light effect enhancement process includes:
and carrying out light effect enhancement on the area image corresponding to the moon by adopting a deblurring algorithm and a super-resolution algorithm to obtain the area image corresponding to the moon after the light effect enhancement processing.
In this way, enhancing the light effect of the region image corresponding to the moon improves its definition and effectively reduces light diffusion.
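Real deblurring and super-resolution would typically use dedicated or learned models; purely as a hedged illustration of sharpening a blurry moon edge, a 3x3 box blur plus unsharp masking can serve as a crude stand-in (not the algorithms named in the disclosure):

```python
import numpy as np

def unsharp_mask(region, amount=1.0):
    # Box-blur the region, then add back the residual (region - blur)
    # scaled by `amount` to accentuate edges.
    h, w = region.shape
    pad = np.pad(region.astype(np.float32), 1, mode='edge')
    blur = sum(pad[dy:dy + h, dx:dx + w]
               for dy in range(3) for dx in range(3)) / 9.0
    sharp = region.astype(np.float32) + amount * (region - blur)
    return np.clip(sharp, 0, 255).astype(np.uint8)

patch = np.zeros((8, 8), dtype=np.uint8)
patch[:, 4:] = 200   # soft edge of the moon disc against the sky
sharpened = unsharp_mask(patch)
```

Pixels just inside the bright side overshoot toward white and pixels just outside dip toward black, which visually crispens the edge; flat areas are left unchanged.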
It should be noted that weighting and superimposing the region image corresponding to the target object after the light effect enhancement processing onto the target image to obtain the region-enhanced target image means weighting and fusing the values of the same pixel points in the two images. Equivalently, the region image corresponding to the target object and the target image are weighted and fused according to a third weight matrix of the target image and a fourth weight matrix of the region image corresponding to the target object to obtain the region-enhanced target image. The size of the third weight matrix equals the number of pixel points contained in the target image, and the size of the fourth weight matrix equals the number of pixel points contained in the region image corresponding to the target object. In practical application, the third and fourth weight matrices may be predefined, that is, default: for example, the weights of all pixel points in both matrices default to 0.5, so that the weight of the target image and the weight of the region image corresponding to the target object are both 0.5. They may also be user-defined: for example, the weights of all pixel points in the third weight matrix are set to 0.4 and those in the fourth weight matrix to 0.6. Weighting and superimposing the enhanced region image onto the target image in this way realizes region-specific enhancement of the target object, improves image quality, and further improves the user experience.
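A sketch of the weighted superposition described above, blending the enhanced region back into the target image at its original location with scalar weights (per-pixel weight matrices would work the same way; helper name hypothetical):

```python
import numpy as np

def paste_weighted(target, region, top_left, w_target=0.5, w_region=0.5):
    # Blend the enhanced region into the target at `top_left` using
    # the target weight (third matrix) and region weight (fourth matrix).
    out = target.astype(np.float32).copy()
    y, x = top_left
    h, w = region.shape
    out[y:y + h, x:x + w] = (w_target * out[y:y + h, x:x + w]
                             + w_region * region.astype(np.float32))
    return np.clip(out, 0, 255).astype(np.uint8)

img = np.full((10, 10), 100, dtype=np.uint8)       # target image
enhanced = np.full((4, 4), 200, dtype=np.uint8)    # enhanced region
result = paste_weighted(img, enhanced, (3, 3), 0.4, 0.6)
```

With weights 0.4/0.6 the blended region pixels become 0.4·100 + 0.6·200 = 160, while pixels outside the region keep their original values.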
An embodiment of the present invention provides an image processing apparatus, as shown in fig. 5, including: a processor 110 and a memory 111 for storing a computer program capable of running on the processor 110. The single processor 110 illustrated in fig. 5 does not indicate that the number of processors is one; it merely indicates the position of the processor 110 relative to the other devices, and in practical applications the number of processors 110 may be one or more. The same applies to the memory 111 illustrated in fig. 5: it only indicates the position of the memory 111 relative to the other devices, and in practical applications the number of memories 111 may be one or more. The processor 110 is configured to implement the image processing method when running the computer program.
The image processing apparatus may further include: at least one network interface 112. The various components in the image processing apparatus are coupled together by a bus system 113. It will be appreciated that the bus system 113 is used to enable communications among the components. The bus system 113 includes a power bus, a control bus, and a status signal bus in addition to the data bus. For clarity of illustration, however, the various buses are labeled as bus system 113 in FIG. 5.
The memory 111 may be a volatile memory or a nonvolatile memory, or may include both volatile and nonvolatile memories. The nonvolatile memory may be a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a ferroelectric random access memory (FRAM), a Flash Memory, a magnetic surface memory, an optical disk, or a Compact Disc Read-Only Memory (CD-ROM); the magnetic surface memory may be disk storage or tape storage. The volatile memory may be a Random Access Memory (RAM), which acts as an external cache. By way of illustration and not limitation, many forms of RAM are available, such as Static Random Access Memory (SRAM), Synchronous Static Random Access Memory (SSRAM), Dynamic Random Access Memory (DRAM), Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate Synchronous Dynamic Random Access Memory (DDR SDRAM), Enhanced Synchronous Dynamic Random Access Memory (ESDRAM), SyncLink Dynamic Random Access Memory (SLDRAM), and Direct Rambus Random Access Memory (DRRAM). The memory 111 described in the embodiments of the invention is intended to comprise, without being limited to, these and any other suitable types of memory.
The memory 111 in the embodiment of the present invention is used to store various types of data to support the operation of the image processing apparatus. Examples of such data include: any computer program for operating on the image processing apparatus, such as an operating system and an application program; contact data; telephone book data; a message; a picture; video, etc. The operating system includes various system programs, such as a framework layer, a core library layer, a driver layer, and the like, and is used for implementing various basic services and processing hardware-based tasks. The application programs may include various application programs such as a Media Player (Media Player), a Browser (Browser), etc. for implementing various application services. Here, the program that implements the method of the embodiment of the present invention may be included in an application program.
Based on the same inventive concept as the foregoing embodiments, this embodiment further provides a computer storage medium in which a computer program is stored. The computer storage medium may be a memory such as a ferroelectric random access memory (FRAM), a Read Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), a Flash Memory, a magnetic surface memory, an optical disc, or a Compact Disc Read-Only Memory (CD-ROM); or it may be a device including one or any combination of the above memories, such as a mobile phone, computer, tablet device, or personal digital assistant. The computer program stored in the computer storage medium, when executed by a processor, implements the image processing method applied to the above image processing apparatus. Please refer to the description of the embodiments shown in fig. 1 or fig. 2 for the specific steps implemented when the computer program is executed by the processor, which are not repeated here.
Based on the same inventive concept of the foregoing embodiments, the present embodiment describes technical solutions of the foregoing embodiments in detail through specific examples. The image processing method provided by the embodiment of the invention has the following specific flow:
first, the image is converted to YUV color space, and Y luminance channel data is extracted for dynamic range stretching processing.
Secondly, the Y channel is divided into two layers, namely a background layer and a detail layer, by using a filter.
Here, the detail layer needs to be retained, and the background layer needs to be subjected to dynamic range stretching.
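A simple way to split the Y channel into a background layer and a detail layer is a low-pass filter plus residual; the disclosure does not name the filter, so the 5x5 box filter below is an assumption:

```python
import numpy as np

def split_layers(y_plane, k=5):
    # Low-pass (box) filter gives the background layer; the residual
    # is the detail layer, kept aside and added back after stretching.
    pad = k // 2
    p = np.pad(y_plane.astype(np.float32), pad, mode='edge')
    h, w = y_plane.shape
    background = np.zeros((h, w), dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            background += p[dy:dy + h, dx:dx + w]
    background /= k * k
    detail = y_plane.astype(np.float32) - background
    return background, detail

y = np.random.default_rng(0).integers(0, 256, (32, 32)).astype(np.uint8)
bg, det = split_layers(y)
recon = bg + det
```

Because the detail layer is defined as the exact residual, background + detail reconstructs the original Y plane bit-for-bit before any stretching is applied.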
Then, two threads are created to stretch the dynamic range of the background layer; in this embodiment, two parallel contrast-limited adaptive histogram equalization (CLAHE) modules perform the corresponding processing, where:
the first CLAHE module is CLAHE2X2: it divides the image into 2x2 small blocks and performs histogram equalization to adjust the overall dynamic range of the image;
the second CLAHE module is CLAHE16X16: it divides the image into 16x16 small blocks and performs histogram equalization to improve the local brightness of the image, giving an obvious enhancement effect in local areas.
Then, according to a weighted fusion method, the CLAHE2X2 result and the CLAHE16X16 result are weighted and combined to obtain the processed background layer; the weighted fusion supports the following modes:
predefined mode: the CLAHE2X2 result and the CLAHE16X16 result each contribute half of the weight in the weighted fusion;
user-defined mode: the user adjusts the weighted fusion proportion of the CLAHE2X2 result and the CLAHE16X16 result as required;
smart mode: based on a block-wise weighted fusion scheme, the user can specify which blocks should be weighted and fused to enhance the image, using the CLAHE16X16 blocks with different weights as a base to intelligently improve the image effect.
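The three fusion modes above all reduce to a per-pixel weighted sum of the two CLAHE results; a minimal sketch with scalar weights (weight matrices from the smart mode would substitute directly for the scalars):

```python
import numpy as np

def fuse_clahe(res2x2, res16x16, w1=0.5, w2=0.5):
    # Per-pixel weighted fusion of the two CLAHE outputs; w1/w2 may be
    # scalars (predefined / user-defined modes) or full weight matrices
    # of the same shape as the layers (block-wise smart mode).
    fused = w1 * res2x2.astype(np.float32) + w2 * res16x16.astype(np.float32)
    return np.clip(fused, 0, 255).astype(np.uint8)

a = np.full((8, 8), 80, dtype=np.uint8)    # stand-in CLAHE2X2 result
b = np.full((8, 8), 160, dtype=np.uint8)   # stand-in CLAHE16X16 result
background = fuse_clahe(a, b)              # predefined mode: half each
custom = fuse_clahe(a, b, 0.25, 0.75)      # user-defined proportions
```

With the default weights every fused pixel is the midpoint of the two results; changing the proportions shifts the balance between overall-range and local-contrast enhancement.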
Then, an image with a stretched dynamic range is obtained from the processed background layer and the retained detail layer.
Then, targeted image region enhancement based on target object detection is performed. The image whose dynamic range has been stretched is significantly improved in overall dynamic range, but certain specific regions receive no dedicated enhancement, such as a face region, or the moon suffering light diffusion when shot in a night scene.
Taking a human face as an example: first, whether a face exists in the image is detected, and if so, the face region is specially enhanced. The coordinates of the face region and multiple key point data of the face are obtained; based on the key point data, an accurate face region contour is detected within the face frame using a region growing method, and light effect enhancement of the face is then performed according to the face region coordinates.
Taking shooting the moon at night as an example: light diffusion easily occurs during shooting, leaving the edge of the moon very blurry and the details unclear. First, the position of the moon in the night scene is detected, the complete moon is segmented out by a segmentation algorithm, and the moon is then enhanced using deblurring and super-resolution algorithms, thereby improving the definition of the moon region and reducing light diffusion.
Finally, the enhanced region image of the target object is weighted and superimposed back onto the original image, completing the dynamic range stretching of the single-frame image with target-object region enhancement. Referring to fig. 6, which shows a comparison before and after image processing: fig. 6(a) is a schematic diagram before processing according to the image processing method, and fig. 6(b) is a schematic diagram after processing.
Here, the weighted fusion used to superimpose the enhanced region image of the target object back onto the original image can take two forms: by default each image contributes half of the weight, or the user manually sets the weight ratio.
In summary, the image processing method provided in the above embodiment improves the overall brightness of the image and the local brightness of the image through weighted fusion, completing the dynamic range stretching of a single-frame image. In addition, special regions in the image are detected and judged, and different light effects are applied to them, so that personalized or special processing can be realized.
The technical features of the embodiments described above may be arbitrarily combined, and for the sake of brevity, all possible combinations of the technical features in the embodiments described above are not described, but should be considered as being within the scope of the present specification as long as there is no contradiction between the combinations of the technical features.
As used herein, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, including not only those elements listed, but also other elements not expressly listed.
The above description is only for the specific embodiments of the present invention, but the scope of the present invention is not limited thereto, and any person skilled in the art can easily conceive of the changes or substitutions within the technical scope of the present invention, and all the changes or substitutions should be covered within the scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the appended claims.

Claims (10)

1. An image processing method, comprising:
acquiring a brightness parameter of an image to be processed;
and dynamically adjusting the brightness of the image to be processed based on the brightness parameter, so as to obtain a target image with overall dynamically improved brightness.
2. The image processing method according to claim 1, wherein the obtaining of the brightness parameter of the image to be processed comprises:
extracting brightness component data of an image to be processed and component data other than the brightness component data;
the dynamically adjusting the brightness of the image to be processed based on the brightness parameter to obtain the target image with overall dynamically improved brightness comprises:
performing brightness stretching processing on the brightness component data to obtain the brightness component data after the brightness stretching processing;
and fusing the brightness component data subjected to the brightness stretching processing with the component data other than the brightness component data to obtain a target image with overall dynamically improved brightness.
3. The image processing method according to claim 2, wherein the performing luminance stretching processing on the luminance component data to obtain the luminance component data after the luminance stretching processing includes:
filtering the brightness component data to obtain a background layer and a detail layer of the brightness component data;
carrying out contrast-limiting adaptive histogram equalization processing on the background image layer to obtain a target background image layer;
and fusing the detail layer and the target background layer to obtain the brightness component data after brightness stretching processing.
4. The image processing method according to claim 3, wherein said performing contrast-limited adaptive histogram equalization on the background image layer to obtain a target background image layer comprises:
respectively and uniformly dividing the background image layer into blocks according to at least two different set image division modes to obtain at least two divided background image layers;
respectively carrying out histogram equalization processing on the at least two background image layers which are divided into blocks to obtain at least two background image layers which are divided into blocks and subjected to histogram equalization processing;
and performing weighted fusion on the at least two background image layers which are divided into blocks and subjected to histogram equalization processing to obtain a target background image layer.
5. The image processing method according to claim 4, wherein the image segmentation includes a first image segmentation that segments the background layer into N1 × N1 blocks and a second image segmentation that segments the background layer into N2 × N2 blocks, and N1, N2 are positive integers and N1 is smaller than N2; the at least two background image layers which are divided into blocks and subjected to histogram equalization processing comprise a first background image layer and a second background image layer; the weighting and fusing the at least two background image layers which are divided into blocks and subjected to histogram equalization processing to obtain a target background image layer comprises the following steps:
acquiring a first weight matrix of the first background layer and a second weight matrix of the second background layer, wherein each element in the first weight matrix represents the weight of a corresponding pixel point in the first background layer, and each element in the second weight matrix represents the weight of a corresponding pixel point in the second background layer;
and performing weighted fusion on the first background layer and the second background layer according to the first weight matrix and the second weight matrix to obtain a target background layer.
6. The image processing method according to claim 5, wherein the obtaining a first weight matrix of the first background layer and a second weight matrix of the second background layer includes:
displaying a setting interface comprising the first background layer and the second background layer to a user, wherein the setting interface comprises a block selection option and a weight setting option;
acquiring the selection operation of the user aiming at the block selection option and the setting operation of the weight setting option in the setting interface;
and generating a first weight matrix of the first background layer and a second weight matrix of the second background layer according to the selection operation and the setting operation of the user.
7. The image processing method according to any one of claims 1 to 6, wherein after dynamically adjusting the brightness of the image to be processed based on the brightness parameter to obtain a target image with an overall dynamically improved brightness, the method further comprises:
extracting a region image corresponding to a target object from the target image;
carrying out light effect enhancement processing on the region image corresponding to the target object to obtain the region image corresponding to the target object after the light effect enhancement processing;
and weighting and superposing the regional image corresponding to the target object after the light effect enhancement processing into the target image to obtain the target image after regional enhancement.
8. The image processing method according to claim 7, wherein the target object includes a human face; the performing light effect enhancement processing on the region image corresponding to the target object to obtain the region image corresponding to the target object after the light effect enhancement processing includes:
performing light effect enhancement on the region image corresponding to the face, from strong at the center to weak at the edge, to obtain the region image corresponding to the face after the light effect enhancement processing; or,
and carrying out light effect enhancement from weak to strong on the region image corresponding to the face from the edge to the center to obtain the region image corresponding to the face after the light effect enhancement processing.
9. An image processing apparatus characterized by comprising: a processor and a memory for storing a computer program capable of running on the processor,
wherein the processor, when running the computer program, implements the image processing method of any one of claims 1 to 8.
10. A computer storage medium, characterized in that a computer program is stored which, when executed by a processor, implements the image processing method according to any one of claims 1 to 8.
CN201910824501.4A 2019-09-02 2019-09-02 Image processing method and device and computer storage medium Pending CN110706162A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910824501.4A CN110706162A (en) 2019-09-02 2019-09-02 Image processing method and device and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910824501.4A CN110706162A (en) 2019-09-02 2019-09-02 Image processing method and device and computer storage medium

Publications (1)

Publication Number Publication Date
CN110706162A true CN110706162A (en) 2020-01-17

Family

ID=69194123

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910824501.4A Pending CN110706162A (en) 2019-09-02 2019-09-02 Image processing method and device and computer storage medium

Country Status (1)

Country Link
CN (1) CN110706162A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112257729A (en) * 2020-11-13 2021-01-22 腾讯科技(深圳)有限公司 Image recognition method, device, equipment and storage medium
CN112257729B (en) * 2020-11-13 2023-10-03 腾讯科技(深圳)有限公司 Image recognition method, device, equipment and storage medium
CN113487513A (en) * 2021-07-20 2021-10-08 浙江大华技术股份有限公司 Picture brightness adjusting method and adjusting device and storage medium thereof

Similar Documents

Publication Publication Date Title
US11375128B2 (en) Method for obtaining exposure compensation values of high dynamic range image, terminal device and non-transitory computer-readable storage medium
CN111418201B (en) Shooting method and equipment
US8311355B2 (en) Skin tone aware color boost for cameras
US10021313B1 (en) Image adjustment techniques for multiple-frame images
CN110602467B (en) Image noise reduction method and device, storage medium and electronic equipment
US9489726B2 (en) Method for processing a video sequence, corresponding device, computer program and non-transitory computer-readable-medium
WO2018176925A1 (en) Hdr image generation method and apparatus
US20080316354A1 (en) Method and Device for Creating High Dynamic Range Pictures from Multiple Exposures
US8284271B2 (en) Chroma noise reduction for cameras
WO2015012040A1 (en) Image processing device, image capture device, image processing method, and program
CN109416831B (en) Low cost color expansion module for expanding colors of an image
EP3345155A1 (en) Method and apparatus for inverse tone mapping
WO2012098768A1 (en) Image processing device, image processing method, image processing program, and photography device
EP3836532A1 (en) Control method and apparatus, electronic device, and computer readable storage medium
JP4375325B2 (en) Image processing apparatus, image processing method, and program
CN110706162A (en) Image processing method and device and computer storage medium
CN115082350A (en) Stroboscopic image processing method and device, electronic device and readable storage medium
KR20230074136A (en) Salience-based capture or image processing
JP2015211233A (en) Image processing apparatus and control method for image processing apparatus
CN113472997A (en) Image processing method and device, mobile terminal and storage medium
CN114862734A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN112118394B (en) Dim light video optimization method and device based on image fusion technology
CN114862735A (en) Image processing method, image processing device, electronic equipment and computer readable storage medium
CN111583104B (en) Light spot blurring method and device, storage medium and computer equipment
US11935285B1 (en) Real-time synthetic out of focus highlight rendering

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination