CN114612337A - Image processing method, image processing device, electronic equipment and storage medium


Info

Publication number
CN114612337A
Authority
CN
China
Prior art keywords
region
sub
target
area
image
Prior art date
Legal status
Pending
Application number
CN202210277863.8A
Other languages
Chinese (zh)
Inventor
磯部駿
陶鑫
戴宇荣
Current Assignee
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202210277863.8A
Publication of CN114612337A
Status: Pending

Classifications

    • G06T 5/73 — Image enhancement or restoration: Deblurring; Sharpening
    • G06T 5/70 — Image enhancement or restoration: Denoising; Smoothing
    • G06T 7/11 — Image analysis; Segmentation: Region-based segmentation
    • G06T 7/12 — Image analysis; Segmentation: Edge-based segmentation
    • G06T 7/13 — Image analysis; Segmentation: Edge detection
    • G06T 2207/10016 — Image acquisition modality: Video; Image sequence
    • G06T 2207/20221 — Image combination: Image fusion; Image merging


Abstract

The disclosure relates to an image processing method, an image processing apparatus, an electronic device and a storage medium, and relates to the technical field of image processing. The image processing method comprises the following steps: acquiring, for a plurality of target regions in an original image, the region parameters of a first sub-region and of a second sub-region in each target region, and the frequency band information of each target region, the first sub-region and the second sub-region being two intersecting sub-regions obtained by dividing the target region according to a preset rule; for each target region, determining the filtering weight of the target region according to the region parameter of the first sub-region and the region parameter of the second sub-region; and obtaining a target image, namely the processed original image, according to the filtering weights and the frequency band information of the plurality of target regions. In this way, whether edge features exist in a target region can be determined from the region parameters of its sub-regions, the corresponding filtering weight can be adjusted adaptively, and the quality of image processing is improved.

Description

Image processing method, image processing device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
As user engagement in the short-video industry continues to grow, more and more users are transitioning from viewers to creators. However, these creators may not have professional shooting equipment, so the clarity of the videos they produce can be poor. To make such videos look more realistic and natural, the images in a video can be sharpened. Sharpening may consist of obtaining the low-frequency information and high-frequency information of an image through a filter, adjusting the high-frequency information with a preset filtering weight, and superimposing the adjusted high-frequency information on the low-frequency information to obtain the sharpened image.
Generally, when an image is sharpened, its high-frequency information is adjusted with a manually selected fixed filtering weight, and the adjusted high-frequency information is then superimposed on the low-frequency information to obtain the sharpened image. However, manually selecting fixed filtering weights requires a great deal of work, and if the same fixed weight is applied to all regions of the image, the sharpening effect becomes uniform across regions, so the sharpening result is poor and the quality of image processing is low.
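For concreteness, the fixed-weight scheme described above can be sketched as follows. This is a minimal illustration assuming a single-channel 8-bit image and a box filter; the function name and the default weight and radius are illustrative assumptions, not taken from the patent.

```python
import numpy as np
from scipy.ndimage import uniform_filter  # box (mean) filter

def fixed_weight_sharpen(image: np.ndarray, weight: float = 1.5, radius: int = 2) -> np.ndarray:
    """Sharpen a single-channel image with one fixed, manually chosen weight."""
    x = image.astype(np.float64)
    low = uniform_filter(x, size=2 * radius + 1)  # low-frequency information
    high = x - low                                # high-frequency information
    return np.clip(low + weight * high, 0, 255).astype(np.uint8)
```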
Disclosure of Invention
The present disclosure provides an image processing method, apparatus, electronic device and storage medium for improving the quality of image processing.
The technical scheme of the embodiment of the disclosure is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided an image processing method. The method can comprise the following steps:
acquiring, for a plurality of target regions in an original image, the region parameters of a first sub-region and of a second sub-region in each target region, and the frequency band information of each target region; the first sub-region and the second sub-region are two intersecting sub-regions obtained by dividing the target region according to a preset rule;
for each target area, determining the filtering weight of the target area according to the area parameter of the first sub-area and the area parameter of the second sub-area;
and obtaining a target image processed by the original image according to the filtering weights and the frequency band information of the plurality of target areas.
Optionally, the area parameters include: pixel mean, pixel variance, and pixel intensity; acquiring a region parameter of a first sub-region and a region parameter of a second sub-region in a plurality of target regions in an original image, wherein the acquiring comprises the following steps:
acquiring pixel values of all pixel points in a target sub-region, and determining the ratio of the sum of the pixel values of all the pixel points to the number of the pixel points in the target sub-region as the pixel average value of the target sub-region; the target sub-area is a first sub-area or a second sub-area;
determining the pixel variance of the target sub-region according to the pixel average value of the target sub-region;
the pixel average of the target sub-region is determined as the pixel intensity of the target sub-region.
Optionally, determining the filtering weight of the target region according to the region parameter of the first sub-region and the region parameter of the second sub-region includes:
when the region parameters of the first sub-region and the second sub-region meet preset conditions, determining the filtering weight of the target region as a first weight; the preset conditions include: the absolute value of the difference between the pixel variance of the first sub-area and the pixel variance of the second sub-area is greater than a first preset difference, or the absolute value of the difference between the pixel intensity of the first sub-area and the pixel intensity of the second sub-area is greater than zero and less than a second preset difference;
when the area parameters of the first sub-area and the second sub-area do not meet the preset conditions, determining the filtering weight of the target area as a second weight; the second weight is less than the first weight.
Optionally, the filtering weight of the target region is positively correlated with the target value; the target value is the absolute value of the difference between the pixel variance of the first sub-region and the pixel variance of the second sub-region, or the absolute value of the difference between the pixel intensity of the first sub-region and the pixel intensity of the second sub-region.
Optionally, the preset rule includes: the pixel average value of the target area is equal to the sum of the first numerical value and the second numerical value; the first value is half of the average value of the pixels of the first subregion; the second value is half the average value of the pixels of the second sub-area.
Optionally, the frequency band information includes high frequency information and low frequency information; obtaining a target image processed by an original image according to the filtering weights and the frequency band information of a plurality of target areas, wherein the method comprises the following steps:
for each target area, performing weighted fusion on the high-frequency information of the target area and the filtering weight of the target area, and fusing the fused result with the low-frequency information of the target area to obtain a processed area image of the target area;
and fusing the area images of the plurality of target areas to obtain a target image.
Optionally, the obtaining of the frequency band information of each target area includes:
for each target region, calling a preset filter to filter the target region, so as to obtain the low-frequency information of the target region;
and determining the high-frequency information of the target area according to the difference value between the image information of the target area and the low-frequency information of the target area.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus. The apparatus may include: an acquisition unit and a processing unit;
the device comprises an acquisition unit, a processing unit and a processing unit, wherein the acquisition unit is used for acquiring the region parameters of a first subregion and the region parameters of a second subregion in a plurality of target regions in an original image and the frequency band information of each target region; the first sub-area and the second sub-area are two intersected sub-areas which divide the target area according to a preset rule;
the processing unit is used for determining the filtering weight of the target area according to the area parameter of the first sub-area and the area parameter of the second sub-area aiming at each target area;
and the processing unit is also used for obtaining a target image processed by the original image according to the filtering weights and the frequency band information of the plurality of target areas.
Optionally, the area parameters include: pixel mean, pixel variance, and pixel intensity; a processing unit, specifically configured to:
acquiring pixel values of all pixel points in a target sub-region, and determining the ratio of the sum of the pixel values of all the pixel points to the number of the pixel points in the target sub-region as the pixel average value of the target sub-region; the target sub-area is a first sub-area or a second sub-area;
determining the pixel variance of the target sub-region according to the pixel average value of the target sub-region;
the pixel average of the target sub-region is determined as the pixel intensity of the target sub-region.
Optionally, the processing unit is specifically configured to:
when the region parameters of the first sub-region and the second sub-region meet preset conditions, determining the filtering weight of the target region as a first weight; the preset conditions include: the absolute value of the difference between the pixel variance of the first sub-area and the pixel variance of the second sub-area is greater than a first preset difference, or the absolute value of the difference between the pixel intensity of the first sub-area and the pixel intensity of the second sub-area is greater than zero and less than a second preset difference;
when the area parameters of the first sub-area and the second sub-area do not meet the preset conditions, determining the filtering weight of the target area as a second weight; the second weight is less than the first weight.
Optionally, the filtering weight of the target region is positively correlated with the target value; the target value is the absolute value of the difference between the pixel variance of the first sub-area and the pixel variance of the second sub-area, or the absolute value of the difference between the pixel intensity of the first sub-area and the pixel intensity of the second sub-area.
Optionally, the preset rule includes: the pixel average value of the target area is equal to the sum of the first numerical value and the second numerical value; the first value is half of the average value of the pixels of the first subregion; the second value is half the average value of the pixels of the second sub-area.
Optionally, the frequency band information includes high frequency information and low frequency information; a processing unit, specifically configured to:
for each target area, performing weighted fusion on the high-frequency information of the target area and the filtering weight of the target area, and fusing the fused result with the low-frequency information of the target area to obtain a processed area image of the target area;
and fusing the area images of the plurality of target areas to obtain a target image.
Optionally, the obtaining unit is specifically configured to:
for each target region, calling a preset filter to filter the target region, so as to obtain the low-frequency information of the target region;
and determining the high-frequency information of the target area according to the difference value between the image information of the target area and the low-frequency information of the target area.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, which may include: a processor and a memory for storing processor-executable instructions; wherein the processor is configured to execute the instructions to implement any one of the optional image processing methods of the first aspect.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium having instructions stored thereon, which, when executed by an electronic device, enable the electronic device to perform any one of the above-mentioned optional image processing methods of the first aspect.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product comprising computer instructions which, when run on an electronic device, cause the electronic device to perform the optional image processing method as any one of the first aspects.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
based on any one of the above aspects, in the present disclosure, after obtaining the region parameters of the first sub-region and the second sub-region in each target region, and the frequency band information of each target region, of the plurality of target regions in the original image, the filtering weight corresponding to each target region may be determined according to the region parameters of the first sub-region and the region parameters of the second sub-region, and the target image after the original image processing is obtained according to the filtering weights and the frequency band information of the plurality of target regions. The filtering weights are respectively in one-to-one correspondence with the regional parameters of the subregions of the multiple target regions of the original image, so that whether the edge features exist in the target regions can be determined according to the regional parameters of the subregions of the target regions of the original image, and the corresponding filtering weights can be adjusted in a self-adaptive mode.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 illustrates a schematic diagram of an original image provided by an embodiment of the present disclosure;
FIG. 2 is a schematic diagram illustrating yet another original image provided by an embodiment of the present disclosure;
fig. 3 is a schematic flowchart illustrating an image processing method provided in an embodiment of the present disclosure;
FIG. 4 is a schematic diagram illustrating an image processing structure provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram illustrating a filter kernel processing an image, provided by an embodiment of the present disclosure;
FIG. 6 is a flow chart illustrating a further image processing method provided by an embodiment of the present disclosure;
fig. 7 is a schematic structural diagram of an image processing apparatus provided in an embodiment of the present disclosure;
fig. 8 shows a schematic structural diagram of another image processing apparatus provided in the embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below do not represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
It will be further understood that the terms "comprises" and/or "comprising," when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, and/or components.
It should be noted that the user information (including but not limited to user device information, user personal information, user behavior information, etc.) and data (including but not limited to image data) referred to in the present disclosure may be data authorized by the user or sufficiently authorized by various parties.
Before the embodiments of the present application are described, terms related to the embodiments of the present application will be explained.
Image sharpening: highlighting the edges, contours, or other features of target elements in an image. For example, to compensate for an image's contours, the edges and gray-level transition portions of the image can be enhanced to make the image sharper; sharpening can be applied to the image for this purpose.
Enhancing the edges and gray-level transition portions of an image may mean enhancing its high-frequency information (also referred to as high-frequency components). Sharpening an image may mean superimposing the enhanced high-frequency information on the image's low-frequency information (also referred to as low-frequency components).
In a possible implementation manner, the low-frequency information of the original image may be extracted through low-frequency filtering, and then the original image is subtracted from the low-frequency information to obtain the high-frequency information. Then, the high-frequency information is superposed on the original image to realize image enhancement.
Low-frequency filtering: filtering out the high-frequency information in an image through a low-frequency filter, so that the low-frequency information of the image can be extracted. For example, the low-frequency filter may include an averaging filter (also referred to as a mean convolver), a Gaussian convolution filter (also referred to as a Gaussian filter), and the like.
A filter: may be used to filter an image. A filter may include a filter kernel (also referred to as a convolution kernel) and may be configured with the filter kernel and smoothing parameters. The filter kernel sizes of different filters may differ, as may their smoothing parameters.
Wherein the size of the filter kernel may refer to its length × width. For example, the size of the filter kernel may be 3 × 3, 5 × 5, 7 × 7, 11 × 11, 12 × 12, and so on. The size of the filter kernel is determined by the filter radius: taking the pixel to be processed as the center, the kernel extends by the filter radius in each of the four directions up, down, left, and right. For example, if the filter radius is r and the length and width of the filter kernel are both k, then k = 2r + 1. Thus, with a filter radius of 1, the kernel size is 3 × 3; with a filter radius of 2, it is 5 × 5.
It should be noted that, in the embodiment of the present application, the filter kernel may include a numerical matrix composed of a plurality of filter weight values. Each value in the value matrix corresponds to a pixel point of the original image. For example, when the size of the filter kernel is 12 × 12, the number of pixels corresponding to the filter kernel may be 144.
Wherein the smoothing parameters of the filter may be used to characterize the filter's ability to filter low frequency information. The smaller the smoothing parameter of the filter, the weaker the filtering capability, and the closer the filtered image is to the original image.
Specifically, the filter may sharpen the image based on an image addition algorithm.
Image enhancement algorithm: may be used to adjust the brightness, contrast, saturation, hue, and the like of an image, to increase its clarity, to reduce noise, and so on. For example, the image enhancement algorithm may include an edge sharpening algorithm.
The edge sharpening algorithm can be called as a sharpening algorithm for short, and the sharpening of the image is realized by decomposing the image into high-frequency information and low-frequency information and then manually selecting a set of high-frequency filtering weights to superimpose the high-frequency information on the low-frequency information. The performance of the edge sharpening algorithm depends mainly on the design of the filter. Reasonable filter design can bring rich edge information, and meanwhile, excessive sharpening noise such as edge white edges, background noise and the like can not occur.
In the related art, generally, a mode of image decomposition in different scales can be adopted, an image is decomposed through a plurality of filters in different scales, and then the obtained high-frequency information is superimposed on an original image, so that a processed image result is obtained.
Specifically, the process of decomposing the image through a plurality of filters of different scales may be: Bn = Kn(X), where Kn is the filter kernel of the nth filter (for example, a guided filter, a Gaussian filter, or a bilateral filter), Bn is the low-frequency information of the image, and X is the input original image, an RGB three-channel image. The corresponding high-frequency information Dn may be: Dn = X − Bn.
Then, when the obtained high-frequency information is superimposed on the original image to obtain the processed image, the filtering weight of the high-frequency information may be set manually. For example, suppose the edge sharpening algorithm obtains low-frequency information through filters of 3 different sizes, with low-frequency information B1, B2, B3 and corresponding high-frequency information D1, D2, D3. The high-frequency filtering weight of each piece of high-frequency information is adjusted manually to w1, w2, w3, respectively. Finally, each piece of high-frequency information is multiplied by its weight and superimposed on the low-frequency information: Y = w1·D1 + w2·D2 + w3·D3 + B1, where Y is the image processing result.
This approach typically uses a Gaussian kernel as Kn. Gaussian filtering is a linear smoothing filter that applies the same statistical properties to different regions of the image. The distribution of the Gaussian kernel is usually related to its radius, mean, and variance. With the mean and variance fixed, increasing the radius enlarges the Gaussian kernel's sampling range, so a wider local range of the image is sampled. Although the Gaussian kernel is distributed by statistical properties, those properties are independent of the image itself and depend only on the specified variance and mean. The variance of the Gaussian kernel determines the degree of smoothing: the larger the variance, the wider the kernel's frequency band and the smoother the sampled result. However, excessive smoothing produces very flat low-frequency information, which means the high-frequency information is saturated with noise, so multi-scale superposition can severely amplify image noise.
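As a sketch only, the related-art pipeline above (Bn = Kn(X), Dn = X − Bn, Y = w1·D1 + w2·D2 + w3·D3 + B1) might be written as follows, assuming Gaussian kernels and hand-picked sigmas and weights:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_sharpen(x: np.ndarray, sigmas=(1.0, 2.0, 4.0), weights=(0.5, 0.3, 0.2)) -> np.ndarray:
    """Related-art multi-scale sharpening with fixed, hand-picked weights."""
    x = x.astype(np.float64)
    lows = [gaussian_filter(x, sigma=s) for s in sigmas]      # B1, B2, B3
    highs = [x - b for b in lows]                             # D1, D2, D3 (Dn = X - Bn)
    y = lows[0] + sum(w * d for w, d in zip(weights, highs))  # Y = w1*D1 + w2*D2 + w3*D3 + B1
    return np.clip(y, 0, 255).astype(np.uint8)
```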
In one example, different regions of the original image have different sharpening requirements. For an image with little texture, such as region 1 in fig. 1, which contains the sky, the image is smooth and should not be over-sharpened, so a smaller high-frequency filtering weight should be selected for it. For an image with rich texture, such as the brick wall in region 2 of fig. 1, the image should be sharpened appropriately to bring out its texture, so a larger high-frequency filtering weight should be selected for it.
In the related art, if each region of the image is processed by manually designating a fixed high-frequency filtering weight, the entire sharpening effect of the image tends to be similar, that is, the sharpening degree of each region of the image is the same. But the image includes a plurality of different scenes, and the sharpening requirements for each scene are different. If the image is sharpened by manually designating the fixed high-frequency filtering weight, the optimal sharpening effect is obviously not achieved.
As shown in fig. 2, although the image of the brick wall of region 2 is sharpened, a distinct white edge appears at the edge. This is because the gaussian kernel weights all regions equally, so that the edges are over-emphasized.
In view of this, an embodiment of the present disclosure provides an image processing method, where after obtaining a region parameter of a first sub-region and a region parameter of a second sub-region in each target region, and frequency band information of each target region in a plurality of target regions in an original image, a filtering weight corresponding to each target region may be determined according to the region parameter of the first sub-region and the region parameter of the second sub-region, and a target image after processing the original image is obtained according to the filtering weights and the frequency band information of the plurality of target regions. The filtering weights are respectively in one-to-one correspondence with the regional parameters of the subregions of the multiple target regions of the original image, so that whether the edge features exist in the target regions can be determined according to the regional parameters of the subregions of the target regions of the original image, and the corresponding filtering weights can be adjusted in a self-adaptive mode.
The image processing method and device, the electronic device and the storage medium provided by the embodiment of the disclosure are applied to a scene in which an image is sharpened. When the electronic device responds to the image processing request, the image can be sharpened according to the method provided by the embodiment of the disclosure. The image processing request may be used to indicate a sharpening process to be performed on the image.
The following describes an image processing method provided by the embodiments of the present disclosure with reference to the accompanying drawings:
exemplary electronic devices for performing the image processing method provided by the embodiments of the present disclosure may be a mobile phone, a tablet computer, a desktop computer, a laptop computer, a handheld computer, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a cellular phone, a Personal Digital Assistant (PDA), an Augmented Reality (AR) \ Virtual Reality (VR) device, and the like, which may install and use a content community application (e.g., a multimedia application, and the like).
By way of further example, the electronic device executing the image processing method provided by the embodiments of the present disclosure may also be a server or another device, which can likewise perform sharpening processing on images.
The server may be a single server, or may be a server cluster including a plurality of servers. In some embodiments, the server cluster may also be a distributed cluster.
In this embodiment, the server may include a plurality of application service platforms, each application service platform uniquely corresponds to one application, and the application is installed on the terminal device. The server is mainly used for storing relevant service data of the application program.
The present disclosure does not particularly limit the specific form of the electronic device. The electronic device may interact with the user through one or more of a keyboard, a touch pad, a touch screen, a remote control, voice interaction, handwriting devices, or the like.
As shown in fig. 3, the image processing method provided by the embodiment of the present disclosure may include S301 to S303.
S301, the electronic device acquires the region parameters of the first sub-region and the second sub-region in each target region and the frequency band information of each target region in a plurality of target regions in the original image.
The first sub-region and the second sub-region are two intersecting sub-regions obtained by dividing the target region according to a preset rule.
Optionally, the preset rule may include: the pixel average value of the target region is equal to the sum of a first value and a second value, where the first value is half the pixel average value of the first sub-region and the second value is half the pixel average value of the second sub-region. That is, the pixel average value E[X] of the target region, the pixel average value E[Xa] of the first sub-region, and the pixel average value E[Xb] of the second sub-region may satisfy the following formula:
E[X] = E[Xa]/2 + E[Xb]/2.
therefore, a specific implementation mode for dividing the first sub-region and the second sub-region can be provided through the pixel average value of the target region, the pixel average value of the first sub-region and the pixel average value of the second sub-region, and further the subsequent accurate acquisition of the region parameters of the two sub-regions can be ensured.
Illustratively, as shown in fig. 4, one cell is 1 pixel. The electronic device may divide the original image 401 (an image of 11 × 17 pixels, height × width) into 2 target regions: target region 402 (11 × 8 pixels) and target region 403 (11 × 9 pixels). The target region 402 includes a first sub-region 404 (7 × 8 pixels) and a second sub-region 405 (8 × 8 pixels). The first sub-region 404 and the second sub-region 405 share an intersection region 406.
Optionally, the preset rule may further include: randomly dividing the target region into two intersecting sub-regions, that is, the first sub-region and the second sub-region have an intersecting part. One possible division is sketched below.
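As an illustration only, one possible division into two intersecting sub-regions, in the spirit of fig. 4, might look like this; the patent leaves the exact division rule open, and whether the averaging identity above holds exactly depends on the sizes chosen.

```python
import numpy as np

def split_region(region: np.ndarray, overlap: int = 2):
    """Divide a target region into two vertically intersecting sub-regions (cf. fig. 4)."""
    mid = region.shape[0] // 2
    first = region[: mid + overlap, :]   # first sub-region
    second = region[mid - overlap:, :]   # second sub-region; the two share 2*overlap rows
    return first, second
```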
In one implementation, when the electronic device is a server, the server may perform an image processing operation on an original image after receiving an image processing request (including the original image) from the mobile device. For example, the execution of the above S301 may be started.
Wherein the image processing request can be used for indicating the original image to be subjected to sharpening processing. The image processing request may include image data of the original image.
In yet another possible implementation, when the electronic device is a mobile device, the mobile device may perform a sharpening operation on the original image in response to an image processing operation performed by a user. For example, the execution of the above S301 may be started.
Optionally, when the electronic device obtains, in a plurality of target regions in the original image, the region parameter of the first sub-region and the region parameter of the second sub-region in each target region, and the frequency band information of each target region, the electronic device may perform filtering processing on the sub-regions in each target region by using a preset filter to obtain the region parameter of the first sub-region and the region parameter of the second sub-region in each target region, and the frequency band information of each target region.
The preset filter may be set as needed, and may be, for example, a box filter, a guided filter, a bilateral filter, or the like, without limitation.
Optionally, the area parameters include: pixel mean, pixel variance, and pixel intensity. The method for acquiring, by the electronic device, the region parameter of the first sub-region and the region parameter of the second sub-region in each target region of the plurality of target regions in the original image specifically includes:
and obtaining the pixel values of all pixel points in the target subregion, and determining the ratio of the sum of the pixel values of all the pixel points to the number of the pixel points in the target subregion as the pixel average value of the target subregion.
Wherein the target sub-region is the first sub-region or the second sub-region.
Optionally, when the electronic device obtains the pixel values of all the pixel points in the target sub-region, the electronic device may slide on the original image according to a preset step length by calling a preset filter, so as to obtain the pixel values of all the pixel points in each target sub-region in the original image.
For example, as shown in fig. 5, the preset filter may be provided with a filtering kernel, and the filtering kernel may be slid on the original image by a preset step size. In fig. 5, arrows indicate the sliding direction of the preset filter, and 11 indicates the size of the filter kernel. The filter kernel may include a plurality of sub-filter kernels. When the filter kernel moves to a certain target area of the original image, the pixel values of all pixel points of the target area can be obtained.
The size of the filter kernel may be set according to needs, for example, may be 12 × 12, and may also be other sizes, without limitation. The preset step length can be set according to needs, for example, can be one or more pixel points, and is not limited.
The electronic device determines a pixel variance of the target sub-region according to the pixel average of the target sub-region.
Wherein the pixel average value E[M] of the target sub-region and the pixel variance Var(M) of the target sub-region satisfy the following formula:
Var(M) = E[M^2] − (E[M])^2.
In this way, after acquiring the pixel average value and the pixel variance of each target sub-region, the electronic device may determine the correspondence between the pixel variance and pixel average value of the target region and the pixel variances and pixel average values of the first sub-region and the second sub-region.
Wherein the pixel variance Var(X) and the pixel mean E[X] of the target region, the pixel variance Var(Xa) and the pixel mean E[Xa] of the first sub-region, and the pixel variance Var(Xb) and the pixel mean E[Xb] of the second sub-region satisfy the following formula:
Var(X) = E[X^2] − (E[X])^2
= E[Xa^2]/2 + E[Xb^2]/2 − (E[Xa]/2 + E[Xb]/2)^2
= Var(Xa)/2 + Var(Xb)/2 + (E[Xa] − E[Xb])^2/4.
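A quick numeric check of this identity, assuming equal-sized sub-regions so that the target region can be treated as an equal-weight mixture of the two:

```python
import numpy as np

rng = np.random.default_rng(0)
xa = rng.integers(0, 256, size=(8, 8)).astype(np.float64)  # first sub-region
xb = rng.integers(0, 256, size=(8, 8)).astype(np.float64)  # second sub-region
x = np.concatenate([xa.ravel(), xb.ravel()])               # equal-weight mixture of both

lhs = (x ** 2).mean() - x.mean() ** 2                      # Var(X) = E[X^2] - (E[X])^2
rhs = xa.var() / 2 + xb.var() / 2 + (xa.mean() - xb.mean()) ** 2 / 4
assert np.isclose(lhs, rhs)
```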
the electronics determine an average of the pixels of the target sub-region as a pixel intensity of the target sub-region.
Therefore, the electronic equipment can accurately acquire the region parameters including the pixel average value, the pixel variance and the pixel intensity, a specific implementation mode for acquiring the region parameters is provided, and the filtering weight of each target region can be determined in a self-adaptive mode according to the region parameters.
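A minimal sketch of computing these region parameters for one sub-region, following the formulas above (the names are illustrative, not the patent's):

```python
import numpy as np

def region_parameters(sub: np.ndarray) -> dict:
    """Region parameters of a target sub-region (first or second sub-region)."""
    m = sub.astype(np.float64)
    mean = m.sum() / m.size            # sum of pixel values / number of pixel points
    var = (m ** 2).mean() - mean ** 2  # Var(M) = E[M^2] - (E[M])^2
    return {"mean": mean, "var": var, "intensity": mean}  # pixel intensity = pixel average
```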
Optionally, when the original image is an RGB image, the electronic device may, when determining the pixel intensity of the target sub-region, map the original image from the RGB color space to the Lab color space, and determine the average value of the lightness (L) channel of the target sub-region as its pixel intensity.
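One possible sketch of this Lab-based intensity, using OpenCV merely as a common way to perform the conversion (the patent does not prescribe a library):

```python
import cv2  # assumed available; any RGB-to-Lab conversion would do
import numpy as np

def pixel_intensity_lab(sub_rgb: np.ndarray) -> float:
    """Pixel intensity as the mean of the L (lightness) channel; expects uint8 RGB."""
    lab = cv2.cvtColor(sub_rgb, cv2.COLOR_RGB2LAB)
    return float(lab[..., 0].mean())
```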
S302, the electronic equipment determines the filtering weight of the target area according to the area parameter of the first sub-area and the area parameter of the second sub-area aiming at each target area.
Specifically, after obtaining the region parameter of the first sub-region and the region parameter of the second sub-region in each target region, the electronic device may determine whether there is an obvious edge feature in the corresponding target region according to the region parameter of the first sub-region and the region parameter of the second sub-region in each target region. If there are significant edge features in the target region, the filter weight of the target region may be set to a larger value. Accordingly, if there are no significant edge features in the target region, the filtering weight of the target region may be set to a smaller value.
Optionally, the filtering weight of the target region is positively correlated with the target value.
The target value is the absolute value of the difference between the pixel variance of the first sub-area and the pixel variance of the second sub-area, or the absolute value of the difference between the pixel intensity of the first sub-area and the pixel intensity of the second sub-area. Therefore, the electronic device can adaptively adjust the corresponding filtering weight, and compared with a mode of manually specifying a fixed filtering weight, the image processing quality can be improved.
Optionally, when the region parameter of the first sub-region and the region parameter of the second sub-region satisfy a preset condition, the electronic device determines that the filtering weight of the target region is the first weight.
Wherein the preset conditions include: the absolute value of the difference between the pixel variance of the first sub-area and the pixel variance of the second sub-area is greater than a first preset difference, or the absolute value of the difference between the pixel intensity of the first sub-area and the pixel intensity of the second sub-area is greater than zero and less than a second preset difference.
Specifically, from the correspondence given above between the pixel variance and pixel average of the target region and those of its two sub-regions, it follows that when the absolute value of the difference between the pixel variance of the first sub-region and that of the second sub-region is greater than the first preset difference, the pixel variance of the target region is large. That is, the target region may include obvious edge features. In this case, the electronic device may determine the filtering weight of the target region as the first weight, so that the sharpening effect of the target region is enhanced as much as possible.
When the absolute value of the difference between the pixel intensity of the first sub-region and that of the second sub-region is greater than zero and smaller than the second preset difference, the pixels of the two sub-regions are close but not identical. In this case, the target region may also include sharp edge features. Therefore, the electronic device may determine the filtering weight of the target region as the first weight, so that the sharpening effect of the target region is enhanced as much as possible.
It should be noted that, when the electronic device determines that the filter weight of the target region is the first weight, it needs to perform corresponding setting according to the filter weights of all target regions in the original image, so as to avoid the situation that the target region is over-sharpened when the filter weight of the target region is determined to be the first weight.
Optionally, the preset condition may further include other conditions for indicating that the target area includes an edge feature, which is not limited in this application.
And when the area parameter of the first sub-area and the area parameter of the second sub-area do not meet the preset condition, determining the filtering weight of the target area as a second weight.
Wherein the second weight is less than the first weight.
Specifically, when the region parameter of the first sub-region and the region parameter of the second sub-region do not satisfy the preset condition, it is indicated that the target region may not include an obvious edge feature, that is, the target region may include a smooth feature. In this case, the electronic device may determine that the filtering weight of the target region is a second weight smaller than the first weight, so as to ensure that the sharpening effect of the target region is reduced as much as possible.
Similarly, when the electronic device determines that the filtering weight of the target region is the second weight, it also needs to set this weight with reference to the filtering weights of all target regions in the original image, so as to avoid the target region not being sharpened at all when its filtering weight is determined to be the second weight.
Therefore, the filtering weight of the target region can be accurately determined according to the difference of the variance of the region or the difference of the pixel intensity, and the quality of image processing is further improved.
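The weight-selection rule of S302 can be sketched as follows; the threshold and weight values are assumptions for illustration, since the patent does not fix them:

```python
def filtering_weight(params_a: dict, params_b: dict,
                     first_diff: float = 50.0,    # first preset difference (assumed value)
                     second_diff: float = 5.0,    # second preset difference (assumed value)
                     first_weight: float = 2.0,   # larger weight for edge-like regions (assumed)
                     second_weight: float = 0.5) -> float:  # smaller weight for smooth regions
    """Adaptively choose a target region's filtering weight from its sub-regions' parameters."""
    var_gap = abs(params_a["var"] - params_b["var"])
    intensity_gap = abs(params_a["intensity"] - params_b["intensity"])
    if var_gap > first_diff or 0 < intensity_gap < second_diff:
        return first_weight   # preset condition met: the region likely contains edges
    return second_weight      # otherwise: the region is likely smooth
```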
And S303, the electronic equipment obtains the target image processed by the original image according to the filtering weight and the frequency band information of the plurality of target areas.
Optionally, the frequency band information includes high frequency information and low frequency information.
The method for obtaining the target image processed by the original image by the electronic device according to the filtering weights and the frequency band information of the plurality of target areas specifically comprises the following steps:
and for each target area, performing weighted fusion on the high-frequency information of the target area and the filtering weight of the target area, and fusing the fused result with the low-frequency information of the target area to obtain a processed area image of the target area.
And fusing the area images of the plurality of target areas to obtain a target image.
In this manner, the electronic device determines the filtering weights region by region. Because different target regions may correspond to different filtering weights, each target region of the same original image obtains a filtering weight matched to its own region parameters, and the original image can therefore be sharpened region by region. This improves the quality of image processing and avoids a uniform sharpening effect across the whole original image.
For example, the electronic device may determine the target image according to the following formula.
I' = Fuse(RB_1 + w_1·RD_1, RB_2 + w_2·RD_2, …, RB_N + w_N·RD_N)
wherein I' represents the target image, RB_k represents the low-frequency information of the kth target region, w_k represents the filtering weight of the kth target region, and RD_k represents the high-frequency information of the kth target region. N denotes the number of target regions included in the original image; N is a positive integer.
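A sketch of assembling the target image from its processed region images, assuming the target regions tile the image without overlap (the exact fusion operator is not legible in the published formula):

```python
import numpy as np

def fuse_regions(shape, regions):
    """Assemble the target image from processed region images RB_k + w_k * RD_k.

    `regions` is an iterable of ((r0, r1, c0, c1), rb, rd, w) tuples, one per
    target region; the regions are assumed to tile the image without overlap.
    """
    out = np.zeros(shape, dtype=np.float64)
    for (r0, r1, c0, c1), rb, rd, w in regions:
        out[r0:r1, c0:c1] = rb + w * rd  # weighted fusion, then fuse with low frequency
    return np.clip(out, 0, 255).astype(np.uint8)
```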
The technical scheme provided by the embodiment can at least bring the following beneficial effects: as can be seen from S301 to S303, after obtaining the region parameters of the first sub-region and the second sub-region in each target region, and the frequency band information of each target region, of the plurality of target regions in the original image, the filtering weight corresponding to each target region may be determined according to the region parameters of the first sub-region and the region parameters of the second sub-region, and the target image processed from the original image may be obtained according to the filtering weights and the frequency band information of the plurality of target regions. The filtering weights are respectively in one-to-one correspondence with the regional parameters of the subregions of the multiple target regions of the original image, so that whether the edge features exist in the target regions can be determined according to the regional parameters of the subregions of the target regions of the original image, and the corresponding filtering weights can be adjusted in a self-adaptive mode.
With reference to fig. 3, as shown in fig. 6, in the above S301, the method for acquiring the frequency band information of each target area by the electronic device specifically includes:
s601, the electronic equipment calls a preset filter to filter each target area to obtain low-frequency information of the target area.
The preset filter may include a filtering kernel, and the filtering kernel may be configured to filter the image to obtain low-frequency information of the image. For example, the filter kernel is a low frequency filter kernel. The smoothing parameter of the filter kernel can be set according to the requirement, for example, can be 0.0001, and is not limited.
In one example, when the electronic device slides to a target area on the original image using a filter kernel of a preset filter, the electronic device may filter the target area using the filter kernel, resulting in low frequency information of the target area.
S602, the electronic equipment determines high-frequency information of the target area according to the difference value between the image information of the target area and the low-frequency information of the target area.
The image information of the target area may be used to represent the image content of the target area, such as brightness information or gray scale value information of the image in the target area. For example, the image information of the target area may include low frequency information and high frequency information of the target area. After determining the low frequency information, the electronic device may subtract the low frequency information of the target area from the image information of the target area to obtain the high frequency information of the target area.
Thus, through steps S601-S602, the electronic device can obtain low frequency information and high frequency information of a target area.
It should be noted that the electronic device may control the preset filter to slide on the original image, so as to traverse each target region of the original image, and repeatedly perform the above S601 and S602 to obtain the low frequency information and the high frequency information corresponding to each target region respectively.
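Taken together, S601 and S602 can be sketched as follows, assuming a box filter as the preset filter:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def band_information(region: np.ndarray, radius: int = 5):
    """S601/S602: low-frequency by a preset (box) filter, high-frequency by subtraction."""
    x = region.astype(np.float64)
    low = uniform_filter(x, size=2 * radius + 1)  # filter kernel of size k = 2r + 1
    high = x - low                                # image information minus low frequency
    return low, high
```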
The technical scheme provided by the embodiment can at least bring the following beneficial effects: as can be seen from S601 and S602, the electronic device may obtain the low frequency information of the target region using the filter kernel, and determine the high frequency information of the target region according to a difference between the image information of the target region and the low frequency information of the target region. Therefore, the electronic equipment can rapidly and accurately determine the low-frequency information and the high-frequency information of the target area, so that the target image can be accurately obtained subsequently according to the filtering weight corresponding to each target area and the obtained frequency band information, and the quality of image processing is improved.
It is understood that, in practical implementation, the electronic device according to the embodiment of the present disclosure may include one or more hardware structures and/or software modules for implementing the corresponding image processing method, and these hardware structures and/or software modules may constitute an electronic device. Those of skill in the art will readily appreciate that the present disclosure can be implemented in hardware or a combination of hardware and computer software for implementing the exemplary algorithm steps described in connection with the embodiments disclosed herein. Whether a function is performed in hardware or computer software drives hardware depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present disclosure.
Based on such understanding, the embodiment of the present disclosure also provides an image processing apparatus, and fig. 7 illustrates a schematic structural diagram of the image processing apparatus provided by the embodiment of the present disclosure. As shown in fig. 7, the image processing apparatus may include: an acquisition unit 701 and a processing unit 702;
an obtaining unit 701, configured to obtain, in a plurality of target regions in an original image, the region parameters of the first sub-region and of the second sub-region in each target region, and the frequency band information of each target region; the first sub-region and the second sub-region are two intersecting sub-regions obtained by dividing the target region according to a preset rule;
a processing unit 702, configured to determine, for each target region, a filtering weight of the target region according to a region parameter of the first sub-region and a region parameter of the second sub-region;
the processing unit 702 is further configured to obtain a target image after the original image is processed according to the filtering weights and the frequency band information of the multiple target regions.
Optionally, the area parameters include: pixel mean, pixel variance, and pixel intensity; the processing unit 702 is specifically configured to:
acquiring pixel values of all pixel points in a target sub-region, and determining the ratio of the sum of the pixel values of all the pixel points to the number of the pixel points in the target sub-region as the pixel average value of the target sub-region; the target sub-area is a first sub-area or a second sub-area;
determining the pixel variance of the target sub-region according to the pixel average value of the target sub-region;
the pixel average of the target sub-region is determined as the pixel intensity of the target sub-region.
Optionally, the processing unit 702 is specifically configured to:
when the region parameters of the first sub-region and the second sub-region meet preset conditions, determining the filtering weight of the target region as a first weight; the preset conditions include: the absolute value of the difference between the pixel variance of the first sub-area and the pixel variance of the second sub-area is greater than a first preset difference, or the absolute value of the difference between the pixel intensity of the first sub-area and the pixel intensity of the second sub-area is greater than zero and less than a second preset difference;
when the area parameters of the first sub-area and the second sub-area do not meet the preset conditions, determining the filtering weight of the target area as a second weight; the second weight is less than the first weight.
Optionally, the filtering weight of the target region is positively correlated with the target value; the target value is the absolute value of the difference between the pixel variance of the first sub-area and the pixel variance of the second sub-area, or the absolute value of the difference between the pixel intensity of the first sub-area and the pixel intensity of the second sub-area.
Optionally, the preset rule includes: the pixel average value of the target area is equal to the sum of the first numerical value and the second numerical value; the first value is half of the average value of the pixels of the first subregion; the second value is half the average value of the pixels of the second sub-area.
Optionally, the frequency band information includes high-frequency information and low-frequency information; the processing unit 702 is specifically configured to:
for each target region, perform weighted fusion on the high-frequency information of the target region according to the filtering weight of the target region, and fuse the result with the low-frequency information of the target region to obtain a processed region image of the target region;
and fuse the region images of the plurality of target regions to obtain the target image.
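A minimal sketch of this fusion step follows, assuming the per-region band information and filtering weights have already been obtained; the function names and the non-overlapping tiling of target regions are simplifying assumptions.

import numpy as np

def fuse_region(low, high, weight):
    """Weight the high-frequency information and fuse it with the
    low-frequency information to form the processed region image."""
    return low + weight * high

def fuse_image(region_images, region_windows, image_shape):
    """Stitch the processed region images back into one target image.

    region_windows: one (row_slice, col_slice) pair per target region; the
    regions are assumed here to tile the image without overlap.
    """
    target = np.zeros(image_shape, dtype=np.float32)
    for region_image, (rows, cols) in zip(region_images, region_windows):
        target[rows, cols] = region_image
    return target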
Optionally, the acquisition unit 701 is specifically configured to:
for each target region, invoke a preset filter to filter the target region so as to obtain the low-frequency information of the target region;
and determine the high-frequency information of the target region according to the difference between the image information of the target region and the low-frequency information of the target region.
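This decomposition can be illustrated with a Gaussian low-pass filter standing in for the preset filter; the filter choice and the sigma value are assumptions, as the embodiment leaves the preset filter unspecified.

import numpy as np
from scipy.ndimage import gaussian_filter

def band_split(region, sigma=2.0):
    """Split a target region into its low- and high-frequency information."""
    region = region.astype(np.float32)
    # Low-frequency information: the output of the (here Gaussian) preset filter.
    low = gaussian_filter(region, sigma=sigma)
    # High-frequency information: the difference between the region's image
    # information and its low-frequency information.
    high = region - low
    return low, high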
As described above, the functional modules of the image processing apparatus in the embodiments of the present disclosure may be divided according to the method examples above. An integrated module may be implemented in the form of hardware or in the form of a software functional module. It should further be noted that the division of modules in the embodiments of the present disclosure is schematic and is merely a logical functional division; other divisions are possible in actual implementation. For example, each function may be assigned its own functional module, or two or more functions may be integrated into one processing module.
The specific manner in which each unit of the image processing apparatus performs its operations, and the beneficial effects thereof, have been described in detail in the foregoing method embodiments and are not repeated here.
Fig. 8 is a schematic structural diagram of another image processing apparatus provided by the present disclosure. As shown in fig. 8, the image processing apparatus 12 may include at least one processor 121 and a memory 123 for storing processor-executable instructions, where the processor 121 is configured to execute the instructions in the memory 123 to implement the image processing method in the above-described embodiments.
In addition, the image processing apparatus 12 may also include a communication bus 122 and at least one communication interface 124.
The processor 121 may be a central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of programs in accordance with the disclosed aspects.
The communication bus 122 may include a path that conveys information between the aforementioned components.
The communication interface 124 may be any device, such as a transceiver, for communicating with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
The memory 123 may be, but is not limited to, a read-only memory (ROM) or other type of static storage device that can store static information and instructions, a random access memory (RAM) or other type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), optical disc storage (including a compact disc, a CD-ROM, a laser disc, a digital versatile disc, a Blu-ray disc, and the like), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer. The memory may be self-contained and connected to the processor via the bus, or the memory may be integrated with the processor.
The memory 123 is used for storing instructions for executing the disclosed solution, and is controlled by the processor 121. The processor 121 is configured to execute instructions stored in the memory 123 to implement the functions of the disclosed method.
In a particular implementation, as an example, the processor 121 may include one or more CPUs, such as CPU 0 and CPU 1 in fig. 8.
In a particular implementation, as an example, the image processing apparatus 12 may include multiple processors, such as the processor 121 and the processor 127 in fig. 8. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (e.g., computer program instructions).
In a particular implementation, the image processing apparatus 12 may further include an output device 125 and an input device 126. The output device 125 communicates with the processor 121 and may display information in a variety of ways. For example, the output device 125 may be a liquid crystal display (LCD), a light-emitting diode (LED) display device, a cathode-ray tube (CRT) display device, or a projector. The input device 126 communicates with the processor 121 and may accept user input in a variety of ways. For example, the input device 126 may be a mouse, a keyboard, a touch screen device, or a sensing device.
Those skilled in the art will appreciate that the configuration shown in fig. 8 does not constitute a limitation of the image processing apparatus 12, and may include more or fewer components than those shown, or combine certain components, or employ a different arrangement of components.
In addition, the present disclosure also provides a computer-readable storage medium including instructions that, when executed by an electronic device, cause the electronic device to perform the image processing method provided in the above embodiments.
In addition, the present disclosure also provides a computer program product comprising instructions which, when executed by an electronic device, cause the electronic device to perform the image processing method as provided in the above embodiments.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This disclosure is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.

Claims (10)

1. An image processing method, comprising:
acquiring, in a plurality of target regions in an original image, region parameters of a first sub-region and region parameters of a second sub-region in each target region, and frequency band information of each target region; wherein the first sub-region and the second sub-region are two intersecting sub-regions obtained by dividing the target region according to a preset rule;
for each target region, determining a filtering weight of the target region according to the region parameters of the first sub-region and the region parameters of the second sub-region;
and obtaining a target image processed from the original image according to the filtering weights and the frequency band information of the plurality of target regions.
2. The image processing method according to claim 1, wherein the region parameters include: a pixel average, a pixel variance, and a pixel intensity; and the acquiring of the region parameters of the first sub-region and the second sub-region in the plurality of target regions in the original image includes:
acquiring pixel values of all pixel points in a target sub-region, and determining the ratio of the sum of the pixel values of all the pixel points to the number of the pixel points in the target sub-region as the pixel average value of the target sub-region; the target sub-region is the first sub-region or the second sub-region;
determining the pixel variance of the target sub-region according to the pixel average value of the target sub-region;
determining the pixel intensity of the target sub-region as the pixel average of the target sub-region.
3. The image processing method according to claim 2, wherein the determining the filtering weight of the target region according to the region parameter of the first sub-region and the region parameter of the second sub-region comprises:
when the region parameters of the first sub-region and the region parameters of the second sub-region meet a preset condition, determining the filtering weight of the target region as a first weight; wherein the preset condition includes: the absolute value of the difference between the pixel variance of the first sub-region and the pixel variance of the second sub-region is greater than a first preset difference, or the absolute value of the difference between the pixel intensity of the first sub-region and the pixel intensity of the second sub-region is greater than zero and less than a second preset difference;
and when the region parameters of the first sub-region and the region parameters of the second sub-region do not meet the preset condition, determining the filtering weight of the target region as a second weight; the second weight is less than the first weight.
4. The image processing method according to claim 3, wherein the filtering weight of the target region is positively correlated with a target value; and the target value is the absolute value of the difference between the pixel variance of the first sub-region and the pixel variance of the second sub-region, or the absolute value of the difference between the pixel intensity of the first sub-region and the pixel intensity of the second sub-region.
5. The image processing method according to any one of claims 1 to 4, wherein the preset rule includes: the pixel average value of the target area is equal to the sum of the first numerical value and the second numerical value; the first value is half of the average value of the pixels of the first sub-region; the second value is half of the average value of the pixels of the second sub-region.
6. The image processing method according to any one of claims 1 to 4, wherein the frequency band information includes high-frequency information and low-frequency information; and the obtaining of the target image processed from the original image according to the filtering weights and the frequency band information of the plurality of target regions includes:
for each target region, performing weighted fusion on the high-frequency information of the target region according to the filtering weight of the target region, and fusing the fused result with the low-frequency information of the target region to obtain a processed region image of the target region;
and fusing the region images of the plurality of target regions to obtain the target image.
7. An image processing apparatus characterized by comprising: an acquisition unit and a processing unit;
the acquisition unit is configured to acquire, in a plurality of target regions in an original image, region parameters of a first sub-region and region parameters of a second sub-region in each target region, and frequency band information of each target region; wherein the first sub-region and the second sub-region are two intersecting sub-regions obtained by dividing the target region according to a preset rule;
the processing unit is configured to determine, for each target region, a filtering weight of the target region according to the region parameters of the first sub-region and the region parameters of the second sub-region;
and the processing unit is further configured to obtain a target image processed from the original image according to the filtering weights and the frequency band information of the plurality of target regions.
8. An electronic device, characterized in that the electronic device comprises:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the image processing method of any one of claims 1-6.
9. A computer-readable storage medium having instructions stored thereon, wherein the instructions in the computer-readable storage medium, when executed by an electronic device, enable the electronic device to perform the image processing method of any one of claims 1-6.
10. A computer program product comprising computer programs/instructions, characterized in that the computer programs/instructions, when executed by a processor, implement the image processing method according to any of claims 1-6.
CN202210277863.8A (priority and filing date: 2022-03-21) Image processing method, image processing device, electronic equipment and storage medium; legal status: pending; published as CN114612337A (en)

Priority Applications (1)

Application number: CN202210277863.8A; priority and filing date: 2022-03-21; title: Image processing method, image processing device, electronic equipment and storage medium

Publications (1)

Publication number: CN114612337A; publication date: 2022-06-10

Family ID: 81865159

Family Applications (1)

Application number: CN202210277863.8A; priority and filing date: 2022-03-21; title: Image processing method, image processing device, electronic equipment and storage medium

Country Status (1)

CN: CN114612337A (en)

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination