CN114494036A - Image saturation adjusting method and device, computer equipment and storage medium - Google Patents

Image saturation adjusting method and device, computer equipment and storage medium

Info

Publication number
CN114494036A
Authority
CN
China
Prior art keywords
matrix
image
saturation
partition
adjusted
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202111631217.9A
Other languages
Chinese (zh)
Inventor
郭章
陈岱玲
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Dongguan Jinruixian Digital Technology Co ltd
Original Assignee
Dongguan Jinruixian Digital Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Dongguan Jinruixian Digital Technology Co ltd filed Critical Dongguan Jinruixian Digital Technology Co ltd
Priority to CN202111631217.9A priority Critical patent/CN114494036A/en
Publication of CN114494036A publication Critical patent/CN114494036A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/70Denoising; Smoothing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application belongs to the technical field of image processing, and particularly relates to an image saturation adjusting method and device, computer equipment and a storage medium. The image saturation adjusting method comprises the following steps: acquiring the saturation of each pixel point of an image to be adjusted; dividing the image to be adjusted into a plurality of matrix partitions, and pooling each matrix partition to obtain a first matrix; performing moving convolution on the first matrix according to a preset first convolution kernel to obtain a second matrix; obtaining an addition coefficient corresponding to each matrix partition according to the first matrix, the second matrix and a preset saturation change slope; and performing mean filtering on each matrix partition based on the addition coefficient corresponding to that matrix partition, to obtain a saturation-adjusted target image corresponding to the image to be adjusted. By adjusting the saturation of adjacent regions in a balanced way through convolution and applying different gains to different saturations, the method yields a softer, more natural image.

Description

Image saturation adjusting method and device, computer equipment and storage medium
Technical Field
The present application relates to the field of image processing technologies, and in particular, to an image saturation adjusting method and apparatus, a computer device, and a storage medium.
Background
Saturation refers to the vividness, or purity, of a color. In color science, primary colors have the highest saturation; as saturation decreases, colors become duller until they reach achromatic colors, i.e., colors that have lost their hue. As a carrier of information, color does not merely serve the design form; it also acts as a main vehicle for conveying information. In artistic design, the application of color is an important part of the work, and in practice colors are divided by saturation value into three basic categories, namely low-saturation contrast, medium-saturation contrast and high-saturation contrast, plus a derived category of combined saturation contrast.
In image processing, it is often necessary to adjust the saturation of an image. A common saturation adjustment method is matrix control adjustment, which increases or decreases the saturation of all colors synchronously, so color spots easily appear in pictures with an uneven saturation distribution.
The saturation adjustment methods provided by the prior art therefore cannot avoid color spots after adjusting a picture with an uneven saturation distribution, and this problem needs to be solved.
Disclosure of Invention
In view of this, embodiments of the present application provide an image saturation adjusting method and apparatus, a computer device, and a storage medium, which address the problem that the saturation adjustment methods provided by the prior art tend to produce color spots after adjusting an image with an uneven saturation distribution.
A first aspect of an embodiment of the present application provides an image saturation adjusting method, including:
acquiring the saturation of each pixel point of an image to be adjusted;
dividing the image to be adjusted into a plurality of matrix partitions, and performing pooling processing on each matrix partition to obtain a first matrix;
performing moving convolution processing on the first matrix according to a preset first convolution kernel to obtain a second matrix;
obtaining an addition coefficient corresponding to each matrix partition according to the first matrix, the second matrix and a preset saturation change slope;
and respectively carrying out mean value filtering processing on each matrix partition based on the addition coefficient corresponding to the matrix partition to obtain a target image which corresponds to the image to be adjusted and is subjected to saturation adjustment.
In a possible implementation manner of the first aspect, the image to be adjusted is divided into a plurality of matrix partitions, and the matrix partitions obtained by the division are preferably n × n matrices.
For example, for an input image with 16 rows and 16 columns, if the matrix partition is determined to be 4 × 4, the matrix partition is divided to obtain 16 matrix partitions with 4 rows and 4 columns.
It should be understood that this is merely an example and is not intended to limit the size of the matrix partitions; the number of rows and columns of the matrix partition is not necessarily equal.
A second aspect of an embodiment of the present application provides an image saturation adjusting apparatus, including:
the saturation acquisition module is used for acquiring the saturation of each pixel point of the image to be adjusted;
the pooling processing module is used for dividing the image to be adjusted into a plurality of matrix partitions and pooling each matrix partition to obtain a first matrix;
the convolution processing module is used for performing moving convolution processing on the first matrix according to a preset first convolution kernel to obtain a second matrix;
the addition coefficient calculation module is used for obtaining addition coefficients corresponding to all matrix partitions according to the first matrix, the second matrix and a preset saturation change slope;
and the filtering module is used for respectively carrying out mean value filtering processing on each matrix partition based on the addition coefficient corresponding to the matrix partition to obtain a target image which corresponds to the image to be adjusted and is subjected to saturation adjustment.
A third aspect of embodiments of the present application provides a terminal device, where the terminal device includes a memory and a processor, where the memory stores a computer program that is executable on the processor, and the processor implements the steps of the image saturation adjusting method according to any one of the first aspect when executing the computer program.
A fourth aspect of an embodiment of the present application provides a computer-readable storage medium, including: a computer program is stored, which, when being executed by a processor, carries out the steps of the image saturation adjustment method according to any one of the first aspect.
Compared with the prior art, the embodiments of the application have the following advantages: the saturation of adjacent regions is adjusted in a balanced way by means of convolution, and different gains are applied to different saturations, so the high-saturation parts are protected while the saturation is raised; the resulting image is softer and more natural than with traditional adjusting methods, and the problem of blotches is avoided. In addition, the steps of the adjusting method are simple; performing mean filtering on the saturation twice makes edge transitions smoother; the picture is processed in partitions, with the convolution kernel changing with the picture region, so the saturation effect after gain is more natural; and the maximum gain multiple can be controlled by reshaping the addition coefficient, so an adjustable range can be planned.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application, and those of ordinary skill in the art can obtain other drawings from these drawings without inventive effort.
Fig. 1 is a schematic flow chart illustrating an implementation of an image saturation adjusting method according to an embodiment of the present application;
fig. 2 is a schematic view illustrating a process of acquiring saturation of each pixel point of an image to be adjusted by using an image saturation adjusting method according to an embodiment of the present application;
fig. 3 is a graph of the variation trend of the addition coefficient with the saturation change slope in the image saturation adjustment method provided in an embodiment of the present application;
fig. 4 is a schematic flow chart illustrating an implementation process of performing mean filtering processing on each matrix partition based on an addition coefficient corresponding to the matrix partition in the image saturation adjustment method according to the embodiment of the present application;
fig. 5 is a schematic structural diagram of an image saturation adjusting apparatus according to an embodiment of the present application;
fig. 6 is a schematic diagram of a terminal device provided in an embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
In order to explain the technical solution described in the present application, the following description will be given by way of specific examples.
Fig. 1 shows a flowchart of an implementation of the image saturation adjusting method provided in an embodiment of the present application, which is detailed as follows. The image saturation adjusting method includes steps S102 to S110:
step S102, acquiring the saturation of each pixel point of the image to be adjusted.
In the application, the saturation of the pixel point can be directly read or obtained through calculation.
Step S104, dividing the image to be adjusted into a plurality of matrix partitions, and performing pooling processing on each matrix partition to obtain a first matrix.
In the present application, the image to be adjusted is divided into a plurality of matrix partitions, which are preferably n × n matrices. For an input image with 16 rows and 16 columns (whose saturation values are shown in Table 1), if the matrix partition size is determined to be 4 × 4, the division yields 16 matrix partitions of 4 rows and 4 columns each, as shown in Table 2. It is to be understood that this is merely an exemplary illustration and is not intended to limit the size of the matrix partitions; the number of rows and the number of columns of a matrix partition need not be equal.
Table 1: saturation of each pixel
[Table 1 (the 16 × 16 saturation values of the example input image) is reproduced in the original publication as an image and is not shown here.]
Table 2: input image matrix partition results
[Table 2 (the result of dividing the input image into 4 × 4 matrix partitions) is reproduced in the original publication as an image and is not shown here.]
In the present application, the pooling process sets the saturation values of all pixel points in the same matrix partition to one common value, which may be determined in various ways. Pooling reduces the amount of data to be processed and improves the processing speed.
Step S106, performing moving convolution processing on the first matrix according to a preset first convolution kernel to obtain a second matrix.
In the present application, moving convolution belongs to the prior art, and its process is not described in detail here.
Step S108, obtaining an addition coefficient corresponding to each matrix partition according to the first matrix, the second matrix and a preset saturation change slope.
In the method, the first matrix and the second matrix relate to the saturation of all pixel points of the whole image, and they are calculated only once for all matrix partitions, which reduces the amount of calculation for image adjustment and increases the running speed of the method provided by the application.
Step S110, performing mean filtering processing on each matrix partition based on the addition coefficient corresponding to the matrix partition, to obtain a saturation-adjusted target image corresponding to the image to be adjusted.
According to the method, the saturation of adjacent regions is adjusted in a balanced way by means of convolution, and different gains are applied to different saturations, so the high-saturation parts are protected while the saturation is raised; the resulting image is softer and more natural than with traditional adjusting methods, and the problem of blotches is avoided. In addition, the steps of the adjusting method are simple; performing mean filtering on the saturation twice makes edge transitions smoother; the picture is processed in partitions, with the convolution kernel changing with the picture region, so the saturation effect after gain is more natural; and the maximum gain multiple can be controlled by reshaping the addition coefficient, so an adjustable range can be planned.
As shown in fig. 2, in an embodiment of the present application, the acquiring the saturation of each pixel point of the image to be adjusted includes steps S202 to S206:
step S202, obtaining R pixel values, G pixel values and B pixel values of all pixel points in the image to be adjusted.
In the application, the R pixel value, the G pixel value, and the B pixel value of each pixel point in the image to be adjusted may be obtained from the image information.
And step S204, calculating the difference value between the pixel value with the maximum value and the pixel value with the minimum value in the R pixel value, the G pixel value and the B pixel value of each pixel point.
Step S206, multiplying the ratio of each pixel point's difference value to its maximum pixel value by a preset coefficient to obtain the saturation of each pixel point of the image to be adjusted.
In the application, for an image in RGB mode, the R, G and B pixel values of each pixel point are obtained, and the saturation of the pixel point is calculated as S = (max(RGB) - min(RGB)) × 100 / max(RGB): the difference between the largest and the smallest of the R, G and B values is divided by the largest value, multiplied by the preset coefficient 100, and the result is rounded to obtain the saturation S of the pixel point. Table 1 shows an input image with 256 pixel points; the values in the table are the saturations of the pixel points obtained after this processing.
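For illustration only, the following NumPy sketch computes this per-pixel saturation. The function name is chosen here for readability, and treating pixels with max(RGB) = 0 as saturation 0 is an assumption not stated in the text.

```python
import numpy as np

def saturation_map(rgb: np.ndarray) -> np.ndarray:
    """Per-pixel saturation S = round((max(RGB) - min(RGB)) * 100 / max(RGB)).

    rgb: H x W x 3 array of R, G, B pixel values.
    Pixels with max(RGB) == 0 are treated as saturation 0 (an assumption;
    the text does not cover this case).
    """
    rgb = rgb.astype(np.float64)
    cmax = rgb.max(axis=2)
    cmin = rgb.min(axis=2)
    s = np.zeros_like(cmax)
    nonzero = cmax > 0
    s[nonzero] = (cmax[nonzero] - cmin[nonzero]) * 100.0 / cmax[nonzero]
    return np.rint(s).astype(np.int32)
```

Applied to a 16 × 16 × 3 RGB image, this yields a 16 × 16 saturation map of the kind shown in Table 1.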
In an embodiment of the present application, the pooling of each matrix partition to obtain the first matrix includes:
performing maximum pooling on each matrix partition;
and summarizing the result of the partitioned pooling of each matrix to obtain the first matrix.
In the present application, pooling may be performed as non-overlapping 4 × 4 max pooling, that is, pooling is performed within each matrix partition according to the rule: the maximum saturation of all pixel points in the matrix partition is taken as the pooling result for all of those pixel points. The pooled results of all matrix partitions are then assembled into the first matrix, each element of which equals the pooling result of the corresponding matrix partition. For example, the value in the second row and second column of the first matrix is 54, corresponding to the pooling result of the matrix partition at the bolded position in Table 1 and Table 2.
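A minimal sketch of this non-overlapping max pooling, assuming the image height and width are multiples of the partition size; the function name and the block-size parameter are illustrative.

```python
import numpy as np

def max_pool_partitions(sat: np.ndarray, block: int = 4) -> np.ndarray:
    """Non-overlapping block-wise max pooling of the saturation map.

    sat: H x W saturation matrix (e.g. the 16 x 16 values of Table 1).
    block: side length of each square matrix partition (4 in the example).
    Returns the first matrix, one maximum per partition (4 x 4 for a
    16 x 16 input). Assumes H and W are multiples of block.
    """
    h, w = sat.shape
    return sat.reshape(h // block, block, w // block, block).max(axis=(1, 3))
```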
The first matrix and its coordinate representation are given in the original publication as images and are not reproduced here; in the coordinate representation, the element of the first matrix at position (x, y) is denoted g(x, y).
in an embodiment of the present application, the size of the first convolution kernel is 2 × 2, and the moving convolution processing on the first matrix according to the preset first convolution kernel to obtain the second matrix includes:
and performing moving convolution processing with the step distance of 1 on each first matrix according to the first convolution kernel to obtain the second matrix.
Optionally, the first convolution kernel is the 2 × 2 kernel given in the original publication (reproduced there as an image). In the present application, the first matrix is convolved with this first convolution kernel at a stride of 1; the corresponding convolution expression is likewise given as an image in the original publication.
A 3 × 3 second matrix is thereby obtained; the second matrix and its coordinate representation are given as images in the original publication and are not reproduced here.
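The following sketch illustrates a stride-1, "valid" moving convolution of the 4 × 4 first matrix with a 2 × 2 kernel, which yields a 3 × 3 second matrix. The kernel values appear only as an image in the original publication; because the worked example below describes the second-matrix entries as the means of 2 × 2 windows, an averaging kernel of all 0.25 is assumed here purely for illustration, and the function name is illustrative.

```python
import numpy as np

def sliding_conv2d(first: np.ndarray, kernel: np.ndarray) -> np.ndarray:
    """Stride-1 'valid' convolution of the first matrix with a small kernel.

    For a 4 x 4 first matrix and a 2 x 2 kernel this yields the 3 x 3
    second matrix of the embodiment above.
    """
    kh, kw = kernel.shape
    h, w = first.shape
    out = np.empty((h - kh + 1, w - kw + 1), dtype=np.float64)
    for y in range(out.shape[0]):
        for x in range(out.shape[1]):
            out[y, x] = np.sum(first[y:y + kh, x:x + kw] * kernel)
    return out

# Assumed kernel (all 0.25), consistent with the window means in the worked
# example; the actual kernel of the patent is shown only as an image.
k1 = np.full((2, 2), 0.25)
```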
in an embodiment of the present application, the obtaining, according to the first matrix, the second matrix, and a preset saturation change slope, an addition coefficient corresponding to each matrix partition includes:
calculating the average value of the difference values of the result after the pooling of each matrix partition and the corresponding element of the second matrix to obtain the average difference of each matrix partition;
and calculating to obtain an addition coefficient corresponding to each matrix partition according to the mean difference of each matrix partition and a preset saturation change slope.
In the present application, each element of the first matrix is covered by a plurality of convolution windows. For example, the value 54 at position g(x, y) in the first matrix is covered by the convolution windows corresponding to the values 59.5, 56.5, 56.5 and 53.25 in the second matrix, and the final mean difference is obtained simply by averaging again over these windows. The specific formula is given as an image in the original publication and is not reproduced here.
For the value 54 at the g(x, y) position (i.e., the matrix partition at the bolded position of Tables 1 and 2), θ is 2.813.
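A small sketch of this mean-difference computation; taking the mean of absolute differences is an inference from the quoted value (it reproduces θ ≈ 2.813 for the example) and is therefore an assumption, as is the function name.

```python
import numpy as np

def mean_difference(pooled_value: float, window_means: np.ndarray) -> float:
    """Mean difference theta between a partition's pooled value and the
    second-matrix entries of the convolution windows covering it.

    For pooled value 54 and window means 59.5, 56.5, 56.5, 53.25 this
    returns 2.8125, matching the 2.813 quoted above.
    """
    diffs = np.abs(np.asarray(window_means, dtype=np.float64) - pooled_value)
    return float(np.mean(diffs))

print(mean_difference(54, np.array([59.5, 56.5, 56.5, 53.25])))  # 2.8125
```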
The final addition coefficient Ψ is then computed; its formula is given as an image in the original publication and is not reproduced here.
In this formula, μ is the saturation change slope, and adjusting it adjusts the saturation intensity; θ is the saturation level difference under the current window: the larger its value, the larger the saturation difference between the current region and the surrounding regions, the larger the calculated addition coefficient, and the larger the gain applied to low saturation. Fig. 3 shows the trend of the addition coefficient with the saturation change slope.
As shown in fig. 4, in an embodiment of the present application, the performing mean filtering processing on each matrix partition based on the addition coefficient corresponding to the matrix partition to obtain a target image after saturation adjustment corresponding to the image to be adjusted includes steps S402 to S408:
and step S402, obtaining a second convolution kernel according to the addition coefficient.
In the present application, a 3 × 3 kernel K2 is constructed based on the obtained addition coefficient Ψ; the general form of K2 and its value when θ is 2.813 are given as images in the original publication and are not reproduced here.
step S404, for each matrix partition, respectively taking each pixel point in the matrix partition as a center to obtain a third matrix.
In the present application, taking the bolded matrix partition in Table 1 as an example (its input pixel values are given as an image in the original publication and are not reproduced here), 16 third matrices of size 3 × 3 can be obtained by taking each pixel point in the matrix partition as the center.
Step S406, calculating the dot product of each third matrix and the second convolution kernel to obtain the saturation value after the mean value filtering processing of the corresponding pixel points.
In the application, each third matrix is subjected to point multiplication with the kernel K2, so that a saturation value after mean filtering processing of each pixel in the matrix partition can be obtained.
And step S408, summarizing the mean filtering processing result of each matrix partition to obtain a target image which corresponds to the image to be adjusted and is subjected to saturation adjustment.
In the present application, the summarized results for this partition are, for example, as shown in the following 4 × 4 table:
52 47 44 46
53 47 45 47
54 48 47 47
54 50 49 45
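The sketch below shows how one matrix partition can be mean-filtered with a 3 × 3 kernel K2 as described in steps S404 to S406: each pixel's 3 × 3 neighbourhood (the third matrix) is dot-multiplied with K2. The construction of K2 itself is not reproduced here (its formula is an image in the original publication), and the replicate padding at image borders and the rounding to integers are assumptions made only so the sketch is self-contained; the function name is illustrative.

```python
import numpy as np

def filter_partition(sat: np.ndarray, k2: np.ndarray,
                     row0: int, col0: int, block: int = 4) -> np.ndarray:
    """Mean-filter one matrix partition with its 3 x 3 kernel k2.

    sat: full H x W saturation map; (row0, col0) is the partition's top-left
    corner and block its side length. Each pixel's 3 x 3 neighbourhood (the
    'third matrix') is dot-multiplied with k2. Pixels whose neighbourhood
    extends past the image are handled by replicate padding (an assumption).
    """
    padded = np.pad(sat.astype(np.float64), 1, mode="edge")
    out = np.empty((block, block))
    for i in range(block):
        for j in range(block):
            r, c = row0 + i, col0 + j          # pixel position in sat
            window = padded[r:r + 3, c:c + 3]  # 3 x 3 neighbourhood centred on (r, c)
            out[i, j] = np.sum(window * k2)
    return np.rint(out).astype(np.int32)       # rounded, as in the 4 x 4 result table
```

Repeating this for every matrix partition and assembling the results gives the saturation-adjusted target image of step S408.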
Fig. 5 shows a block diagram of the image saturation adjusting apparatus provided in the embodiment of the present application, which corresponds to the method of the above embodiment; for convenience of description, only the parts relevant to the embodiment of the present application are shown. The image saturation adjusting apparatus illustrated in fig. 5 may be the execution subject of the image saturation adjustment method provided in the foregoing embodiment.
Referring to fig. 5, the image saturation adjusting apparatus includes:
a saturation obtaining module 501, configured to obtain the saturation of each pixel point of the image to be adjusted;
a pooling processing module 502, configured to divide the image to be adjusted into a plurality of matrix partitions, and perform pooling processing on each matrix partition to obtain a first matrix;
a convolution processing module 503, configured to perform moving convolution on the first matrix according to a preset first convolution kernel, so as to obtain a second matrix;
an addition coefficient calculation module 504, configured to obtain an addition coefficient corresponding to each matrix partition according to the first matrix, the second matrix, and a preset saturation change slope;
and the filtering module 505 is configured to perform mean filtering processing on each matrix partition based on the addition coefficient corresponding to the matrix partition, so as to obtain a target image after saturation adjustment corresponding to the image to be adjusted.
The process of implementing each function by each module in the image saturation adjusting device provided in the embodiment of the present application may specifically refer to the description of the embodiment shown in fig. 1, and is not repeated here.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if [a described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance. It will also be understood that, although the terms first, second, etc. may be used herein to describe various elements in some embodiments of the application, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first table may be named a second table, and similarly, a second table may be named a first table, without departing from the scope of various described embodiments. The first table and the second table are both tables, but they are not the same table.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
The image saturation adjusting method provided by the embodiment of the application can be applied to terminal devices such as a mobile phone, a tablet personal computer, a wearable device, a vehicle-mounted device, an Augmented Reality (AR)/Virtual Reality (VR) device, a notebook computer, an ultra-mobile personal computer (UMPC), a netbook, a Personal Digital Assistant (PDA), and the like, and the embodiment of the application does not limit the specific type of the terminal device at all.
For example, the terminal device may be a station (ST) in a WLAN, a cellular phone, a cordless phone, a Session Initiation Protocol (SIP) phone, a wireless local loop (WLL) station, a personal digital assistant (PDA) device, a handheld device with wireless communication capability, a computing device or other processing device connected to a wireless modem, a vehicle-mounted device, a vehicle networking terminal, a computer, a laptop computer, a handheld communication device, a handheld computing device, a satellite wireless device, a wireless modem card, a television set-top box (STB), customer premises equipment (CPE), and/or another device for communicating over a wireless system, as well as a mobile terminal in a next-generation communication system, such as a mobile terminal in a 5G network or in a future evolved public land mobile network (PLMN), etc.
By way of example and not limitation, when the terminal device is a wearable device, the wearable device may be any item of daily wear to which wearable technology has been applied through intelligent design, such as glasses, gloves, watches, clothing and shoes. A wearable device is a portable device worn directly on the body or integrated into the user's clothing or accessories. A wearable device is not only a piece of hardware; it also realizes powerful functions through software support, data interaction and cloud interaction. In a broad sense, wearable intelligent devices include full-featured, larger-sized devices that can realize complete or partial functions without relying on a smartphone, such as smart watches or smart glasses, as well as devices that focus on a single type of application function and need to be used together with other devices such as a smartphone, for example various smart bracelets and smart jewelry for monitoring physical signs.
Fig. 6 is a schematic structural diagram of a terminal device according to an embodiment of the present application. As shown in fig. 6, the terminal device 6 of this embodiment includes: at least one processor 60 (only one is shown in fig. 6) and a memory 61, the memory 61 storing a computer program 62 executable on the processor 60. The processor 60, when executing the computer program 62, implements the steps in the above-described embodiments of the image saturation adjusting method, such as steps S102 to S110 shown in fig. 1. Alternatively, the processor 60, when executing the computer program 62, implements the functions of the modules/units in the above-mentioned apparatus embodiments, such as the functions of the modules 501 to 505 shown in fig. 5.
The terminal device 6 may be a desktop computer, a notebook computer, a palmtop computer, a cloud server, or other computing device. The terminal device may include, but is not limited to, the processor 60 and the memory 61. Those skilled in the art will appreciate that fig. 6 is merely an example of the terminal device 6 and does not constitute a limitation of the terminal device 6, which may include more or fewer components than those shown, or combine certain components, or have different components; for example, the terminal device may also include input and output devices, a network access device, a bus, and the like.
The processor 60 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor, or the like.
The memory 61 may in some embodiments be an internal storage unit of the terminal device 6, such as a hard disk or a memory of the terminal device 6. The memory 61 may also be an external storage device of the terminal device 6, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 6. Further, the memory 61 may also include both an internal storage unit and an external storage device of the terminal device 6. The memory 61 is used for storing an operating system, an application program, a BootLoader (BootLoader), data, and other programs, such as program codes of the computer program. The memory 61 may also be used to temporarily store data that has been transmitted or is to be transmitted.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The embodiment of the present application further provides a terminal device, where the terminal device includes at least one memory, at least one processor, and a computer program that is stored in the at least one memory and is executable on the at least one processor, and when the processor executes the computer program, the terminal device is enabled to implement the steps in any of the method embodiments.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application provide a computer program product, which when running on a terminal device, enables the terminal device to implement the steps in the above method embodiments when executed.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on such understanding, all or part of the flow in the method of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and can realize the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, recording medium, usb disk, removable hard disk, magnetic disk, optical disk, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), electrical carrier wave signals, telecommunications signals, software distribution medium, and the like.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application, and are intended to be included within the scope of the present application.

Claims (10)

1. An image saturation adjusting method, comprising:
acquiring the saturation of each pixel point of an image to be adjusted;
dividing the image to be adjusted into a plurality of matrix partitions, and performing pooling processing on each matrix partition to obtain a first matrix;
performing moving convolution processing on the first matrix according to a preset first convolution kernel to obtain a second matrix;
obtaining an addition coefficient corresponding to each matrix partition according to the first matrix, the second matrix and a preset saturation change slope;
and respectively carrying out mean value filtering processing on each matrix partition based on the addition coefficient corresponding to the matrix partition to obtain a target image which corresponds to the image to be adjusted and is subjected to saturation adjustment.
2. The image saturation adjusting method according to claim 1, wherein the obtaining the saturation of each pixel point of the image to be adjusted includes:
acquiring an R pixel value, a G pixel value and a B pixel value of each pixel point in the image to be adjusted;
calculating the difference value between the pixel value with the largest value and the pixel value with the smallest value in the R pixel value, the G pixel value and the B pixel value of each pixel point;
and multiplying the ratio of the difference value of each pixel point to the pixel value with the maximum value corresponding to the difference value by a preset coefficient to obtain the saturation of each pixel point of the image to be adjusted.
3. The method for adjusting image saturation according to claim 1, wherein said pooling each matrix partition to obtain the first matrix comprises:
performing maximum pooling on each matrix partition;
and summarizing the result of the partitioned pooling of each matrix to obtain the first matrix.
4. The method for adjusting image saturation according to claim 1, wherein the size of the first convolution kernel is 2 x 2, and the moving convolution processing on the first matrix according to the preset first convolution kernel to obtain the second matrix includes:
and performing moving convolution processing with the step distance of 1 on each first matrix according to the first convolution kernel to obtain the second matrix.
5. The image saturation adjustment method according to claim 4, wherein the first convolution kernel is:
[The first convolution kernel of claim 5 is given as an image in the original publication and is not reproduced here.]
6. the image saturation adjustment method according to claim 1 or 3, wherein the obtaining of the addition coefficient corresponding to each matrix partition according to the first matrix, the second matrix, and a preset saturation change slope includes:
calculating the average value of the difference values of the result after the pooling of each matrix partition and the corresponding element of the second matrix to obtain the average difference of each matrix partition;
and calculating to obtain an addition coefficient corresponding to each matrix partition according to the mean difference of each matrix partition and a preset saturation change slope.
7. The image saturation adjustment method according to claim 1, wherein the performing, based on the addition coefficient corresponding to the matrix partition, a mean filtering process on each matrix partition to obtain a saturation-adjusted target image corresponding to the image to be adjusted includes:
obtaining a second convolution kernel according to the addition coefficient;
for each matrix partition, respectively taking each pixel point in the matrix partition as a center to obtain a third matrix;
calculating the dot product of each third matrix and the second convolution kernel to obtain a saturation value after the mean value filtering processing of the corresponding pixel points;
and summarizing the mean value filtering processing result of each matrix partition to obtain a target image which corresponds to the image to be adjusted and is subjected to saturation adjustment.
8. An image saturation adjusting apparatus, comprising:
the saturation acquisition module is used for acquiring the saturation of each pixel point of the image to be adjusted;
the pooling processing module is used for dividing the image to be adjusted into a plurality of matrix partitions and pooling each matrix partition to obtain a first matrix;
the convolution processing module is used for performing moving convolution processing on the first matrix according to a preset first convolution kernel to obtain a second matrix;
the addition coefficient calculation module is used for obtaining addition coefficients corresponding to all matrix partitions according to the first matrix, the second matrix and a preset saturation change slope;
and the filtering module is used for respectively carrying out mean value filtering processing on each matrix partition based on the addition coefficient corresponding to the matrix partition to obtain a target image which corresponds to the image to be adjusted and is subjected to saturation adjustment.
9. A terminal device, characterized in that the terminal device comprises a memory, a processor, a computer program being stored on the memory and being executable on the processor, the processor implementing the steps of the image saturation adjusting method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the image saturation adjustment method according to any one of claims 1 to 7.
CN202111631217.9A 2021-12-28 2021-12-28 Image saturation adjusting method and device, computer equipment and storage medium Pending CN114494036A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111631217.9A CN114494036A (en) 2021-12-28 2021-12-28 Image saturation adjusting method and device, computer equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111631217.9A CN114494036A (en) 2021-12-28 2021-12-28 Image saturation adjusting method and device, computer equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114494036A true CN114494036A (en) 2022-05-13

Family

ID=81496284

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111631217.9A Pending CN114494036A (en) 2021-12-28 2021-12-28 Image saturation adjusting method and device, computer equipment and storage medium

Country Status (1)

Country Link
CN (1) CN114494036A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117119119A (en) * 2023-08-24 2023-11-24 深圳市丕微科技企业有限公司 Compression transmission method, device and system for image data
CN118397435A (en) * 2024-06-26 2024-07-26 之江实验室 Task execution method, device, medium and equipment based on image recognition model

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101621702A (en) * 2009-07-30 2010-01-06 北京海尔集成电路设计有限公司 Method and device for automatically adjusting chroma and saturation
CN101729913A (en) * 2008-10-14 2010-06-09 华映视讯(吴江)有限公司 Method and system for adjusting image saturation
US20170178554A1 (en) * 2015-12-16 2017-06-22 Everdisplay Optronics (Shanghai) Limited Display device, image data processing apparatus and method
CN109191406A (en) * 2018-09-19 2019-01-11 浙江宇视科技有限公司 Image processing method, device and equipment
CN111833360A (en) * 2020-07-14 2020-10-27 腾讯科技(深圳)有限公司 Image processing method, device, equipment and computer readable storage medium
CN113239934A (en) * 2021-03-27 2021-08-10 重庆邮电大学 Image processing method and related equipment
CN113674163A (en) * 2021-07-13 2021-11-19 浙江大华技术股份有限公司 Image saturation adjusting method, device and computer readable storage medium
CN113781338A (en) * 2021-08-31 2021-12-10 咪咕文化科技有限公司 Image enhancement method, device, equipment and medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101729913A (en) * 2008-10-14 2010-06-09 华映视讯(吴江)有限公司 Method and system for adjusting image saturation
CN101621702A (en) * 2009-07-30 2010-01-06 北京海尔集成电路设计有限公司 Method and device for automatically adjusting chroma and saturation
US20170178554A1 (en) * 2015-12-16 2017-06-22 Everdisplay Optronics (Shanghai) Limited Display device, image data processing apparatus and method
CN109191406A (en) * 2018-09-19 2019-01-11 浙江宇视科技有限公司 Image processing method, device and equipment
CN111833360A (en) * 2020-07-14 2020-10-27 腾讯科技(深圳)有限公司 Image processing method, device, equipment and computer readable storage medium
CN113239934A (en) * 2021-03-27 2021-08-10 重庆邮电大学 Image processing method and related equipment
CN113674163A (en) * 2021-07-13 2021-11-19 浙江大华技术股份有限公司 Image saturation adjusting method, device and computer readable storage medium
CN113781338A (en) * 2021-08-31 2021-12-10 咪咕文化科技有限公司 Image enhancement method, device, equipment and medium

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117119119A (en) * 2023-08-24 2023-11-24 深圳市丕微科技企业有限公司 Compression transmission method, device and system for image data
CN117119119B (en) * 2023-08-24 2024-06-11 深圳市丕微科技企业有限公司 Compression transmission method, device and system for image data
CN118397435A (en) * 2024-06-26 2024-07-26 之江实验室 Task execution method, device, medium and equipment based on image recognition model

Similar Documents

Publication Publication Date Title
CN114494036A (en) Image saturation adjusting method and device, computer equipment and storage medium
CN107204034B (en) A kind of image processing method and terminal
US11288783B2 (en) Method and system for image enhancement
CN104424626A (en) Method and associated apparatus for correcting color artifact of image
CN109309826B (en) Image color balancing method and device, terminal equipment and readable storage medium
WO2006055693A2 (en) System and method for a vector difference mean filter for noise suppression
US20040202377A1 (en) Image processing apparatus, mobile terminal device and image processing computer readable program
CN109618098A (en) A kind of portrait face method of adjustment, device, storage medium and terminal
US7512264B2 (en) Image processing
CN108447040A (en) histogram equalization method, device and terminal device
CN111968057A (en) Image noise reduction method and device, storage medium and electronic device
CN111860276A (en) Human body key point detection method, device, network equipment and storage medium
CN106683063A (en) Method and device of image denoising
US8988452B2 (en) Color enhancement via gamut expansion
CN111935746A (en) Method, device, terminal and storage medium for acquiring communication parameters
CN117496254A (en) Intelligent identification method, device and equipment for black tea fermentation state and storage medium
CN103871035B (en) Image denoising method and device
CN114862897A (en) Image background processing method and device and electronic equipment
CN113222844B (en) Image beautifying method and device, electronic equipment and medium
US20230360286A1 (en) Image processing method and apparatus, electronic device and storage medium
CN109308690B (en) Image brightness balancing method and terminal
CN109614064A (en) A kind of image display method, image display apparatus and terminal device
CN112215746B (en) Image blurring processing method, device, electronic equipment and storage medium
CN110580880B (en) RGB (red, green and blue) triangular sub-pixel layout-based sub-pixel rendering method and system and display device
CN104754313A (en) Image collecting method and electronic device

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination