CN115908191A - Filter parameter acquisition method and device

Filter parameter acquisition method and device

Info

Publication number
CN115908191A
Authority
CN
China
Prior art keywords
target
image
sub
image block
lut
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202211579442.7A
Other languages
Chinese (zh)
Inventor
高丽盼
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Vivo Mobile Communication Co Ltd
Original Assignee
Vivo Mobile Communication Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Vivo Mobile Communication Co Ltd filed Critical Vivo Mobile Communication Co Ltd
Priority to CN202211579442.7A priority Critical patent/CN115908191A/en
Publication of CN115908191A publication Critical patent/CN115908191A/en
Pending legal-status Critical Current

Landscapes

  • Image Processing (AREA)

Abstract

The application discloses a filter parameter acquisition method and apparatus, belonging to the technical field of image processing. The method comprises the following steps: acquiring a first calibration graph, where the first calibration graph comprises M first image blocks, each first image block comprises M × M first sub image blocks, each first sub image block comprises a plurality of pixel points with the same pixel value, and M is a positive integer less than or equal to 255; inputting the first calibration graph into a target filter to obtain a second calibration graph, where the second calibration graph comprises M second image blocks and each second image block comprises M × M second sub image blocks; and determining a first 3D LUT of the target filter according to each second sub image block.

Description

Filter parameter acquisition method and device
Technical Field
The application belongs to the technical field of image processing, and particularly relates to a filter parameter acquisition method and a filter parameter acquisition device.
Background
One-tap retouching and personalized retouching are increasingly popular with users. Common quick retouching tools include filters, cutouts, special effects, blurring, and the like; among them, filters can adjust the style and color tone of an image, can clearly convey the author's mood, such as melancholy or sunny, and are widely used. Commonly used filters include antique, film, nature, party, pan, sketch, oil painting, and other filters. Most filters are hue filters, implemented through three-dimensional (3D) Look-Up Table (LUT) parameters, for example antique, nature, warm sun, and fresh.
Hue filters on the market today are numerous. Because manufacturers' aesthetic tastes differ, the filter libraries they provide also differ; even identically named filters vary considerably from one manufacturer to another, and any single manufacturer can only provide a limited filter library. When the limited filter library provided by a manufacturer cannot satisfy a user's aesthetic sense, the user can only turn to other manufacturers' filter libraries, or manually adjust filters until satisfied.
That is to say, a filter provided by one manufacturer can only be used on that manufacturer's platform; filters cannot be extended across platforms according to users' needs, and filter addition lacks flexibility.
Disclosure of Invention
The embodiments of the present application aim to provide a filter parameter acquisition method and apparatus, which can solve the problem that filter addition lacks flexibility.
In a first aspect, an embodiment of the present application provides a method for obtaining filter parameters, where the method includes:
acquiring a first calibration graph, wherein the first calibration graph comprises M first image blocks, each first image block comprises M × M first sub image blocks, each first sub image block comprises a plurality of pixel points with the same pixel value, and M is a positive integer less than or equal to 255;
inputting the first calibration graph into a target filter to obtain a second calibration graph, wherein the second calibration graph comprises M second image blocks, and each second image block comprises M × M second sub image blocks;
and determining a first three-dimensional lookup table (3D LUT) of the target filter according to each second sub image block.
In a second aspect, an embodiment of the present application provides a filter parameter obtaining apparatus, where the apparatus includes:
the first obtaining module is used for obtaining a first calibration graph, wherein the first calibration graph comprises M first image blocks, each first image block comprises M × M first sub image blocks, each first sub image block comprises a plurality of pixel points with the same pixel value, and M is a positive integer less than or equal to 255;
the second obtaining module is used for inputting the first calibration graph into a target filter to obtain a second calibration graph, wherein the second calibration graph comprises M second image blocks, and each second image block comprises M × M second sub image blocks;
and the determining module is used for determining a first three-dimensional lookup table (3D LUT) of the target filter according to each second sub image block.
In a third aspect, embodiments of the present application provide an electronic device, which includes a processor and a memory, where the memory stores a program or instructions executable on the processor, and the program or instructions, when executed by the processor, implement the steps of the method according to the first aspect.
In a fourth aspect, embodiments of the present application provide a readable storage medium, on which a program or instructions are stored, which when executed by a processor, implement the steps of the method according to the first aspect.
In a fifth aspect, an embodiment of the present application provides a chip, where the chip includes a processor and a communication interface, where the communication interface is coupled to the processor, and the processor is configured to execute a program or instructions to implement the method according to the first aspect.
In a sixth aspect, embodiments of the present application provide a program product, stored in a storage medium, which is executed by at least one processor to implement the method according to the first aspect.
In an embodiment of the present application, a first calibration graph is obtained, where the first calibration graph includes M first image blocks, each first image block includes M × M first sub image blocks, each first sub image block includes multiple pixel points with the same pixel value, and M is a positive integer less than or equal to 255; the first calibration graph is input into a target filter to obtain a second calibration graph, where the second calibration graph comprises M second image blocks and each second image block comprises M × M second sub image blocks; and a first 3D LUT of the target filter is determined according to each second sub image block. In this way, the first 3D LUT of the target filter can be acquired and the target filter quickly restored; the target filter can be added to other platforms through the first 3D LUT, achieving cross-platform use of the target filter, and a user can add the first 3D LUT to a filter library as needed, achieving flexible addition of filters.
Drawings
Fig. 1 is a flowchart of a filter parameter acquisition method provided by an embodiment of the present application;
Fig. 2a is a schematic diagram of color space division nodes according to an embodiment of the present application;
Fig. 2b is a first calibration graph provided in an embodiment of the present application;
Fig. 2c is another flowchart of a filter parameter acquisition method according to an embodiment of the present application;
Fig. 3 is a structural diagram of a filter parameter acquisition apparatus provided in an embodiment of the present application;
Fig. 4 is a block diagram of an electronic device according to an embodiment of the present application;
Fig. 5 is a hardware structure diagram of an electronic device according to an embodiment of the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some, but not all, embodiments of the present application. All other embodiments that can be derived by one of ordinary skill in the art from the embodiments given herein are intended to be within the scope of the present disclosure.
The terms "first", "second", and the like in the description and claims of the present application are used to distinguish between similar objects, not necessarily to describe a particular order or sequence. It should be understood that terms so used are interchangeable under appropriate circumstances, so that the embodiments of the application can be implemented in orders other than those illustrated or described herein. Objects distinguished by "first", "second", and the like are generally of one type, and the number of objects is not limited; for example, there may be one or more first objects. In addition, "and/or" in the description and claims denotes at least one of the connected objects, and the character "/" generally indicates an "or" relationship between the objects before and after it.
The method for acquiring filter parameters provided in the embodiments of the present application is described in detail below with reference to the accompanying drawings by using specific embodiments and application scenarios thereof.
Fig. 1 is a flowchart of a filter parameter acquisition method according to an embodiment of the present application. The filter parameter acquisition method in this embodiment is applied to an electronic device and comprises the following steps:
Step 101: obtain a first calibration graph, where the first calibration graph includes M first image blocks, each first image block includes M × M first sub image blocks, each first sub image block includes multiple pixel points with the same pixel value, and M is a positive integer less than or equal to 255.
The method in the present application can be applied to hue-class filters, which are implemented through a 3D LUT; different filters can be obtained by setting the parameter values of the nodes in the 3D LUT. The 3D LUT can be understood as being composed of three 1D LUTs for R (red), G (green), and B (blue), and the color values of the three input channels R, G, and B are mapped according to the 3D LUT to obtain the converted color.
The 3D LUT divides the RGB color space, whose coordinate axes are the three dimensions R, G, and B, each spanning [0, 255]. For example, if each dimension is divided by three nodes into two segments, then counted in segments the color space is divided at a size of 2 × 2 × 2, while counted in nodes the size is 3 × 3 × 3; both statements are valid. In this embodiment, M is described taking node-based division as an example.
The size of a 3D LUT includes, but is not limited to, 17 × 17 × 17, 18 × 18 × 18, 33 × 33 × 33, 52 × 52 × 52, 65 × 65 × 65, and the like. The first calibration graph can be obtained from a standard M × M × M 3D LUT. For example, if M is 18, there are 18 nodes in each of the R, G, and B dimensions, and the 18 nodes divide the pixel values [0, 255] into 17 equally spaced portions with a step size of 15, as shown in fig. 2a for a standard 18 × 18 × 18 3D LUT. The first calibration graph comprises 18 large blocks Block0 (namely the first image blocks); the B component within each Block0 is a fixed value, the abscissa of each first image block is the R component, and the ordinate of each first image block is the G component. Each large block Block0 further includes 18 × 18 small blocks Block1 (i.e., the first sub image blocks). To reduce the loss that image compression may introduce, each small block Block1 includes a plurality of pixels with the same pixel value, i.e., each first sub image block includes N × N pixel points of one pixel value, where N is a positive integer, for example 20 or 30; the larger the value of N, the higher the accuracy.
It should be noted that the M first image blocks may be arranged in the first calibration graph in a preset manner, for example, in I rows and J columns, where I and J are positive integers. When the M first image blocks do not completely fill the I rows and J columns, the missing positions may be filled with a preset image, whose pixel values may be set according to the actual situation, for example, to (0, 0, 0). A sketch of generating such a calibration graph is given below.
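As an illustration only, the following is a minimal sketch of generating the first calibration graph, assuming M = 18, N = 20, and a 3 × 6 arrangement of first image blocks (these values and all variable names are assumptions, not parameters fixed by the application):

```python
import numpy as np

M = 18        # division nodes per dimension; [0, 255] split into 17 steps of 15
N = 20        # each first sub image block is N x N pixels of one value
I, J = 3, 6   # assumed arrangement: 3 rows x 6 columns of first image blocks

levels = np.linspace(0, 255, M).round().astype(np.uint8)   # 0, 15, ..., 255

calib = np.zeros((I * M * N, J * M * N, 3), dtype=np.uint8)  # pad value (0, 0, 0)
for b, B in enumerate(levels):              # one first image block per B level
    row, col = divmod(b, J)
    block = np.zeros((M * N, M * N, 3), dtype=np.uint8)
    for g, G in enumerate(levels):          # ordinate of the block: G component
        for r, R in enumerate(levels):      # abscissa of the block: R component
            block[g * N:(g + 1) * N, r * N:(r + 1) * N] = (R, G, B)
    calib[row * M * N:(row + 1) * M * N,
          col * M * N:(col + 1) * M * N] = block
```

Since 3 × 6 = 18, the grid is filled exactly here; with other arrangements the remaining positions would keep the (0, 0, 0) padding initialized above.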
Step 102: input the first calibration graph into a target filter to obtain a second calibration graph, where the second calibration graph comprises M second image blocks, and each second image block comprises M × M second sub image blocks.
The target filter adjusts pixel values of pixel points in the first calibration graph without changing the size of the first calibration graph, that is, the first image blocks and the second image blocks have the same size, and the M first image blocks and the M second image blocks have a one-to-one correspondence relationship. The first sub image block in the first image block and the second sub image block in the second image block also have a one-to-one correspondence.
Step 103: determine a first 3D LUT of the target filter according to each second sub image block.
The target filter may be a black box filter, i.e., a filter whose detailed 3D LUT parameters are not disclosed; the parameter value of one node in the first 3D LUT may be determined according to the pixel values of each second sub image block.
In this embodiment, a first calibration graph is obtained, where the first calibration graph includes M first image blocks, each first image block includes M × M first sub image blocks, each first sub image block includes a plurality of pixel points having the same pixel value, and M is a positive integer less than or equal to 255; the first calibration graph is input into a target filter to obtain a second calibration graph, where the second calibration graph comprises M second image blocks and each second image block comprises M × M second sub image blocks; and a first 3D LUT of the target filter is determined according to each second sub image block. In this way, the first 3D LUT of the target filter can be acquired, the target filter can be quickly restored, and the target filter can be added to other platforms through the first 3D LUT, achieving cross-platform use of the target filter; a user can also add the first 3D LUT to a filter library as needed, achieving flexible addition of filters.
In an embodiment of the present application, step 101 of obtaining a first calibration graph includes:
Step 1011: divide a preset color space to obtain a plurality of division nodes, where the plurality of division nodes comprise M first division nodes, each first division node comprises M × M division sub-nodes, and each division sub-node corresponds to one color value;
Step 1012: determine the M first image blocks according to the M first division nodes, where one first division node corresponds to one first image block; a target image block is any image block in the M first image blocks, the pixel value of a first sub image block in the target image block is determined according to the color value of a division sub-node included in the target division node, and the target division node is the first division node corresponding to the target image block.
Specifically, the preset color space may be the RGB color space, which includes the three dimensions R, G, and B, and each dimension includes M division nodes, as shown in fig. 2a. For example, if M is 18, there are 18 division nodes in each of the R, G, and B dimensions, and the 18 division nodes divide the pixel values [0, 255] into 17 equally spaced portions with a step size of 15. A first division node comprises, for example, the division sub-nodes whose B component is 0, whose R component is one of [0, 15, 30, …, 255], and whose G component is one of [0, 15, 30, …, 255]. The parameter value of each division sub-node is the color value at that node; for example, in fig. 2a, the parameter value of the node with reference number 11 is (0, 0, 0), i.e., R, G, and B are each 0, that of the node with reference number 12 is (0, 15, 0), and that of the node with reference number 13 is (15, 0, 0).
A first division node is mapped to a first image block, i.e., each first division node corresponds to one first image block, each division sub-node in each first division node corresponds to one first sub image block, and the pixel value of each first sub image block equals the color value of the division sub-node corresponding to it. For example, as shown in fig. 2b, reference numeral 14 denotes a first image block; the pixel value of the first sub image block 15 equals the parameter value (0, 0, 0) of the node denoted by reference numeral 11, the pixel value of the first sub image block 16 equals the parameter value (0, 15, 0) of the node denoted by reference numeral 12, and the pixel value of the first sub image block 17 equals the parameter value (15, 0, 0) of the node denoted by reference numeral 13. A first sub image block may include one pixel point or N × N pixel points with the same pixel value.
In this embodiment, the preset color space is divided to obtain a standard M × M × M 3D LUT, and the parameter values of the nodes in this 3D LUT are mapped to the pixel values of the first sub image blocks in the first calibration graph. The 3D LUT can thus be mapped to a two-dimensional image, which makes it convenient to subsequently process the first calibration graph with the target filter and thereby obtain the 3D LUT of the target filter.
In one embodiment of the present application, the first 3D LUT includes a plurality of nodes, and the plurality of nodes are in one-to-one correspondence with the plurality of division nodes. Step 103 of determining a first three-dimensional lookup table (3D LUT) of the target filter according to each second sub image block includes the following step:
and determining parameter values of a target node in the first 3D LUT according to pixel values of pixel points in a target sub-image block, wherein the target sub-image block is any second sub-image block, and the target sub-image block corresponds to the target node.
Specifically, since the plurality of nodes in the first 3D LUT correspond to the plurality of division nodes one to one, the plurality of division nodes correspond to the plurality of first sub image blocks one to one, and the plurality of first sub image blocks correspond to the plurality of second sub image blocks one to one, the plurality of nodes correspond to the plurality of second sub image blocks one to one.
The target sub-image block is any one of the second sub image blocks and corresponds to the target node; a parameter value of the target node may be determined according to the pixel values of the pixel points in the target sub-image block. For example, the average of the pixel values of all pixel points in the target sub-image block may be used as the parameter value of the target node. Thereby, the parameter value of each node in the first 3D LUT, which is the 3D LUT of the target filter, can be obtained, and a filter with pixel-level accuracy can be derived from the first 3D LUT.
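As an illustration, the following sketch shows this averaging step under the same assumed layout as the calibration-graph sketch above (the function name and its parameters are hypothetical):

```python
import numpy as np

def extract_lut(second_calib: np.ndarray, M: int = 18, N: int = 20,
                J: int = 6) -> np.ndarray:
    """Average each N x N second sub image block into one node of the first
    3D LUT (a sketch; the (b, g, r) index layout is an assumption)."""
    lut = np.zeros((M, M, M, 3), dtype=np.float32)
    for b in range(M):                      # one second image block per B node
        row, col = divmod(b, J)
        blk = second_calib[row * M * N:(row + 1) * M * N,
                           col * M * N:(col + 1) * M * N]
        for g in range(M):
            for r in range(M):
                sub = blk[g * N:(g + 1) * N, r * N:(r + 1) * N]
                # mean pixel value of the sub block -> node parameter value
                lut[b, g, r] = sub.reshape(-1, 3).mean(axis=0)
    return lut
```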
By the method, the 3D LUT of the target filter can be acquired, and a user can add the 3D LUT into the filter library, so that cross-platform utilization of the filter is realized, and the filter acquisition efficiency is improved.
In another embodiment of the present application, in order to improve the accuracy of the obtained filter, after determining a first three-dimensional look-up table 3D LUT of the target filter according to each of the second sub image blocks in step 103, the method further includes:
step 104, an initial color map is obtained.
The initial color map may be a random color map in which R, G, and B are uniformly distributed over the range [0, 255]; the random color map should contain as many of the R, G, B combinations in [0, 255] as possible to ensure high accuracy. In this embodiment, the size of the initial color map is not limited, and the larger the size, the higher the accuracy; for example, the initial color map may have a capacity of about 4 M and a size of about 2000 × 2000. A random color map with R, G, and B uniformly distributed over [0, 255] can be generated using a uniform() function.
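For instance, a minimal sketch of generating such an initial color map with NumPy (the size and the seed are assumptions):

```python
import numpy as np

# Random color map: R, G, B independently uniform over [0, 255]
rng = np.random.default_rng(0)
initial = rng.uniform(0, 256, size=(2000, 2000, 3)).astype(np.uint8)
```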
Step 105: input the initial color map into the target filter to obtain a first color map.
Step 106: perform filter processing on the initial color map based on the first 3D LUT to obtain a second color map.
Step 107: adjust the parameter values of the nodes of the first 3D LUT according to the first color map and the second color map to obtain a second 3D LUT of the target filter.
The first color map may serve as the reference map of the target filter, and the second color map, obtained through the first 3D LUT, may be understood as the result map of first-3D-LUT processing. The parameter values of the nodes of the first 3D LUT are adjusted based on the difference between the first color map and the second color map, so that the result map of first-3D-LUT processing comes closer to the result map of target-filter processing. The adjusted 3D LUT is called the second 3D LUT; compared with the first 3D LUT, the second 3D LUT has higher precision and differs less from the 3D LUT of the target filter. In this embodiment, adjusting the parameter values of the nodes of the first 3D LUT through the first color map and the second color map yields a second 3D LUT with higher accuracy.
In the above step 107, adjusting the parameter values of the nodes of the first 3D LUT according to the first color map and the second color map to obtain the second 3D LUT of the target filter includes:
inputting the first color map and the second color map into a machine learning model to obtain a second 3D LUT of the target filter, where the loss function of the machine learning model is determined according to the first color map, the second color map, the output result of the machine learning model, and the first 3D LUT, and the machine learning model adopts a gradient descent algorithm.
Specifically, the adjustment of the first 3D LUT may be handled by a machine learning model, whose inputs are the first color map and the second color map and whose output is the second 3D LUT. The machine learning model obtains the final second 3D LUT through multiple iterations. A fixed number of iterations may be set, e.g., 1000; the iteration stops after 1000 rounds and the second 3D LUT is output. The second 3D LUT is a sub-pixel-level filter.
Alternatively, the iteration may stop when the loss function falls below a preset threshold. The loss function may use the L1 loss (i.e., mean absolute error) of the first color map; in addition, to reduce the error introduced by the initial color map's distribution not being perfectly uniform, the L1 loss of the 3D LUT may also be considered. On this basis, the loss function is loss = avg(|B2 − B1|) + avg(|3DLut1 − 3DLut0|), where B1 denotes the first color map, B2 the second color map, 3DLut0 the first 3D LUT, and 3DLut1 the second 3D LUT; (B2 − B1) denotes the pixel-wise difference between the second and first color maps, avg(|B2 − B1|) the mean of the absolute pixel differences, (3DLut1 − 3DLut0) the node-wise difference of parameter values, and avg(|3DLut1 − 3DLut0|) the mean of the absolute node differences. The smaller the difference between B1 and B2, and the smaller the difference between 3DLut1 and 3DLut0, the smaller the loss value.
In this embodiment, the machine learning model may use a gradient descent algorithm, with a learning rate of, for example, 0.1 or 0.05. As shown in fig. 2c, the specific process by which the machine learning model obtains the second 3D LUT is as follows:
in step 201, 3DLut1 is assigned, and the initial value of 3DLut1 is 3DLut 0.
Step 202, the initial color map is processed by using 3DLut1 to obtain a second color map.
Step 203, calculating the gradient of the loss function loss, loss = avg (B2-B1) + avg (3 DLut1-3DLut 0);
and step 204, judging whether the gradient is minimum, if so, executing step 205, otherwise, executing step 206.
In step 205, 3DLut1 is output.
And step 206, adjusting the value of 3DLut1, and assigning the adjusted value to 3DLut1.
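The following is a minimal sketch of this loop, assuming PyTorch and assuming the 3D LUT is applied by differentiable trilinear interpolation via grid_sample; the tensor layout, the normalization of values to [0, 1], and all names are assumptions for illustration, not the patent's implementation:

```python
import torch
import torch.nn.functional as F

def refine_lut(lut0, img, target, iters=1000, lr=0.1):
    # lut0:   (M, M, M, 3) float tensor in [0, 1], the first 3D LUT,
    #         assumed indexed as lut0[b, g, r]
    # img:    (H, W, 3) float tensor in [0, 1], the initial color map
    # target: (H, W, 3) float tensor in [0, 1], the first color map B1
    lut1 = lut0.clone().requires_grad_(True)        # step 201: init with 3DLut0
    opt = torch.optim.SGD([lut1], lr=lr)            # plain gradient descent
    # grid_sample wants coordinates in [-1, 1], ordered (x, y, z) = (R, G, B)
    grid = (img * 2 - 1).reshape(1, 1, *img.shape[:2], 3)
    for _ in range(iters):
        vol = lut1.permute(3, 0, 1, 2).unsqueeze(0)       # (1, 3, M, M, M)
        out = F.grid_sample(vol, grid, mode='bilinear',   # trilinear for 5D input
                            align_corners=True)
        b2 = out.squeeze(0).squeeze(1).permute(1, 2, 0)   # step 202: B2, (H, W, 3)
        loss = (b2 - target).abs().mean() \
             + (lut1 - lut0).abs().mean()                 # step 203: L1 losses
        opt.zero_grad()
        loss.backward()
        opt.step()                                        # step 206: adjust 3DLut1
    return lut1.detach()                                  # step 205: output 3DLut1
```

SGD with learning rate 0.1 mirrors the gradient descent and learning rate mentioned above; the fixed iteration count stands in for the minimum-gradient test of step 204, which could equally be a threshold on the loss.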
The method provided by the present application can quickly restore a target filter for expanding a filter library, improving development efficiency. A dedicated calibration graph (namely the first calibration graph) is designed, and this calibration graph needs to pass through the target filter only once to yield a filter with pixel-level precision; a uniformly distributed initial color map, combined with gradient descent, then yields a high-precision sub-pixel-level filter.
The present application performs filter restoration on hue-class filters; restoring a target filter takes only a few milliseconds, so the response is fast, the precision is high, and the method can be used to quickly expand a filter library. Meanwhile, the method provided by the present application can be integrated into retouching software, so that a user can also add a filter with one tap: a manufacturer prepares a first calibration graph and a uniformly distributed initial color map in advance, the user processes the two maps with a favorite black box filter and sends them back through the manufacturer's integrated one-tap filter restoration, and the restored filter can then be added to the user's personal filter library. This way of adding filters is flexible and efficient.
In the filter parameter acquisition method provided by the embodiments of the present application, the execution subject may be a filter parameter acquisition apparatus. In the embodiments of the present application, a filter parameter acquisition apparatus executing the filter parameter acquisition method is taken as an example to describe the filter parameter acquisition apparatus provided herein.
As shown in fig. 3, the filter parameter acquiring apparatus 300 includes:
a first obtaining module 301, configured to obtain a first calibration graph, where the first calibration graph includes M first image blocks, each first image block includes M × M first sub image blocks, each first sub image block includes multiple pixel points with the same pixel value, and M is a positive integer less than or equal to 255;
a second obtaining module 302, configured to input the first calibration graph to a target filter to obtain a second calibration graph, where the second calibration graph includes M second image blocks, and each second image block includes M × M second sub image blocks;
a determining module 303, configured to determine a first three-dimensional lookup table 3D LUT of the target filter according to each second sub image block.
Optionally, the first obtaining module 301 includes:
the first obtaining submodule is used for dividing a preset color space to obtain a plurality of division nodes, where the plurality of division nodes comprise M first division nodes, each first division node comprises M × M division sub-nodes, and each division sub-node corresponds to one color value;
and the second obtaining sub-module is configured to determine the M first image blocks according to the M first division nodes, where one first division node corresponds to one first image block, where a target image block is any image block in the M first image blocks, a pixel value of a first sub-image block in the target image block is determined according to a color value of a division sub-node included in the target division node, and the target division node is the first division node corresponding to the target image block.
Optionally, the first 3D LUT includes a plurality of nodes, and the plurality of nodes are in one-to-one correspondence with the plurality of division nodes;
the determining module 303 is configured to determine a parameter value of a target node in the first 3D LUT according to a pixel value of a pixel point in a target sub image block, where the target sub image block is any second sub image block, and the target sub image block corresponds to the target node.
Optionally, the apparatus 300 further comprises:
the third acquisition module is used for acquiring an initial color image;
the fourth acquisition module is used for inputting the initial color image to the target filter to acquire a first color image;
a fifth obtaining module, configured to perform filter processing on the initial color map based on the first 3D LUT, to obtain a second color map;
and the adjusting module is used for adjusting the parameter values of the nodes of the first 3D LUT according to the first color image and the second color image to obtain a second 3D LUT of the target filter.
Optionally, the adjusting module is configured to input the first color image and the second color image into a machine learning model to obtain a second 3D LUT of the target filter, where a loss function of the machine learning model is determined according to the first color image, the second color image, an output result of the machine learning model, and the first 3D LUT, and the machine learning model adopts a gradient descent algorithm.
The filter parameter obtaining apparatus 300 according to the embodiment of the present application can implement each process implemented by the foregoing method embodiment, and is not described here again to avoid repetition.
The filter parameter acquisition apparatus 300 in the embodiments of the present application may be an electronic device, or may be a component in an electronic device, such as an integrated circuit or a chip. The electronic device may be a terminal, or a device other than a terminal. For example, the electronic device may be a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a vehicle-mounted electronic device, a Mobile Internet Device (MID), an Augmented Reality (AR)/Virtual Reality (VR) device, a robot, a wearable device, an Ultra-Mobile Personal Computer (UMPC), a netbook, or a Personal Digital Assistant (PDA); it may also be a server, a Network Attached Storage (NAS), a Personal Computer (PC), a Television (TV), a teller machine, a self-service machine, or the like. The embodiments of the present application are not specifically limited in this regard.
The filter parameter acquisition apparatus 300 in the embodiments of the present application may be an apparatus having an operating system. The operating system may be an Android operating system, an iOS operating system, or another possible operating system; the embodiments of the present application are not specifically limited.
Optionally, as shown in fig. 4, an electronic device 600 is further provided in an embodiment of the present application, and includes a processor 601 and a memory 602, where a program or an instruction that can be executed on the processor 601 is stored in the memory 602, and when the program or the instruction is executed by the processor 601, the steps of the embodiment of the filter parameter obtaining method are implemented, and the same technical effects can be achieved, and are not described again here to avoid repetition.
It should be noted that the electronic device in the embodiment of the present application includes the mobile electronic device and the non-mobile electronic device described above.
Fig. 5 is a hardware structure diagram of an electronic device implementing the embodiment of the present application.
The electronic device 700 includes, but is not limited to: a radio frequency unit 701, a network module 702, an audio output unit 703, an input unit 704, a sensor 705, a display unit 706, a user input unit 707, an interface unit 708, a memory 709, and a processor 710.
Those skilled in the art will appreciate that the electronic device 700 may also include a power supply (e.g., a battery) for powering the various components, and the power supply may be logically coupled to the processor 710 via a power management system, such that the functions of managing charging, discharging, and power consumption may be performed via the power management system. The electronic device structure shown in fig. 5 does not constitute a limitation of the electronic device, and the electronic device may include more or less components than those shown, or combine some components, or arrange different components, and thus, the description is omitted here.
The processor 710 is configured to obtain a first calibration graph, where the first calibration graph includes M first image blocks, each first image block includes M × M first sub image blocks, each first sub image block includes multiple pixel points with the same pixel value, and M is a positive integer less than or equal to 255; input the first calibration graph into a target filter to obtain a second calibration graph, where the second calibration graph comprises M second image blocks, and each second image block comprises M × M second sub image blocks; and determine a first three-dimensional lookup table (3D LUT) of the target filter according to each second sub image block.
Optionally, the processor 710 is further configured to divide a preset color space to obtain a plurality of division nodes, where the plurality of division nodes includes M first division nodes, each first division node includes M × M division sub-nodes, and each division sub-node corresponds to one color value; and determining the M first image blocks according to the M first division nodes, wherein one first division node corresponds to one first image block, a target image block is any image block in the M first image blocks, the pixel value of a first sub-image block in the target image block is determined according to the color value of a division sub-node included in the target division node, and the target division node is the first division node corresponding to the target image block.
Optionally, the first 3D LUT includes a plurality of nodes, and the plurality of nodes are in one-to-one correspondence with the plurality of division nodes; the processor 710 is further configured to determine, according to pixel values of pixels in a target sub image block, a parameter value of a target node in the first 3D LUT, where the target sub image block is any second sub image block, and the target sub image block corresponds to the target node.
Optionally, a processor 710, further configured to obtain an initial color map; inputting the initial color image to the target filter to obtain a first color image; performing filter processing on the initial color image based on the first 3D LUT to obtain a second color image; and adjusting the parameter value of the node of the first 3D LUT according to the first color image and the second color image to obtain a second 3D LUT of the target filter.
Optionally, the processor 710 is further configured to input the first color map and the second color map to a machine learning model to obtain a second 3D LUT of the target filter, where a loss function of the machine learning model is determined according to the first color map, the second color map, an output result of the machine learning model, and the first 3D LUT, and the machine learning model adopts a gradient descent algorithm.
The electronic device provided in the embodiment of the present application can implement each process implemented by the foregoing method embodiment, and is not described here again to avoid repetition.
It should be understood that, in the embodiment of the present application, the input Unit 704 may include a Graphics Processing Unit (GPU) 7041 and a microphone 7042, and the Graphics processor 7041 processes image data of a still picture or a video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The display unit 706 may include a display panel 7061, and the display panel 7061 may be configured in the form of a liquid crystal display, an organic light emitting diode, or the like. The user input unit 707 includes at least one of a touch panel 7071 and other input devices 7072. The touch panel 7071 is also referred to as a touch screen. The touch panel 7071 may include two parts of a touch detection device and a touch controller. Other input devices 7072 may include, but are not limited to, a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, and a joystick, which are not described in detail herein.
The memory 709 may be used to store software programs as well as various data. The memory 709 may mainly include a first storage area for storing programs or instructions and a second storage area for storing data, where the first storage area may store an operating system and an application program or instructions required by at least one function (such as a sound playing function, an image playing function, and the like). Further, the memory 709 may include volatile memory or nonvolatile memory, or both. The nonvolatile memory may be a Read-Only Memory (ROM), a Programmable ROM (PROM), an Erasable PROM (EPROM), an Electrically Erasable PROM (EEPROM), or a flash memory. The volatile memory may be a Random Access Memory (RAM), a Static RAM (SRAM), a Dynamic RAM (DRAM), a Synchronous DRAM (SDRAM), a Double Data Rate SDRAM (DDR SDRAM), an Enhanced SDRAM (ESDRAM), a Synch-Link DRAM (SLDRAM), or a Direct Rambus RAM (DRRAM). The memory 709 in the embodiments of the present application includes, but is not limited to, these and any other suitable types of memory.
Processor 710 may include one or more processing units; optionally, the processor 710 integrates an application processor, which primarily handles operations related to the operating system, user interface, and applications, and a modem processor, which primarily handles wireless communication signals, such as a baseband processor. It will be appreciated that the modem processor described above may not be integrated into processor 710.
The embodiment of the present application further provides a readable storage medium, where a program or an instruction is stored on the readable storage medium, and when the program or the instruction is executed by a processor, the program or the instruction implements each process of the above-mentioned filter parameter obtaining method embodiment, and can achieve the same technical effect, and in order to avoid repetition, details are not repeated here.
The processor is the processor in the electronic device described in the above embodiment. The readable storage medium includes a computer readable storage medium, such as a computer read only memory ROM, a random access memory RAM, a magnetic or optical disk, and the like.
The embodiment of the present application further provides a chip, where the chip includes a processor and a communication interface, the communication interface is coupled to the processor, and the processor is configured to run a program or an instruction to implement each process of the above-mentioned filter parameter obtaining method embodiment, and can achieve the same technical effect, and in order to avoid repetition, the details are not repeated here.
It should be understood that the chips mentioned in the embodiments of the present application may also be referred to as system-on-chip, system-on-chip or system-on-chip, etc.
Embodiments of the present application provide a computer program product, where the program product is stored in a storage medium, and the program product is executed by at least one processor to implement the processes of the foregoing filter parameter obtaining method embodiments, and achieve the same technical effects, and in order to avoid repetition, details are not repeated here.
It should be noted that, in this document, the terms "comprises", "comprising", or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of another identical element in the process, method, article, or apparatus that comprises the element. Further, it should be noted that the scope of the methods and apparatus of the embodiments of the present application is not limited to performing the functions in the order illustrated or discussed, but may include performing the functions in a substantially simultaneous manner or in a reverse order depending on the functions involved; for example, the methods described may be performed in an order different from that described, and various steps may be added, omitted, or combined. In addition, features described with reference to certain examples may be combined in other examples.
Through the description of the foregoing embodiments, it is clear to those skilled in the art that the method of the foregoing embodiments may be implemented by software plus a necessary general hardware platform, and certainly may also be implemented by hardware, but in many cases, the former is a better implementation. Based on such understanding, the technical solutions of the present application or portions thereof that contribute to the prior art may be embodied in the form of a computer software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (which may be a mobile phone, a computer, a server, or a network device, etc.) to execute the method according to the embodiments of the present application.
While the present embodiments have been described with reference to the accompanying drawings, it is to be understood that the invention is not limited to the precise embodiments described above, which are meant to be illustrative and not restrictive, and that various changes may be made therein by those skilled in the art without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (10)

1. A filter parameter acquisition method, comprising:
acquiring a first calibration graph, wherein the first calibration graph comprises M first image blocks, each first image block comprises M × M first sub image blocks, each first sub image block comprises a plurality of pixel points with the same pixel value, and M is a positive integer less than or equal to 255;
inputting the first calibration graph into a target filter to obtain a second calibration graph, wherein the second calibration graph comprises M second image blocks, and each second image block comprises M × M second sub image blocks;
and determining a first three-dimensional lookup table (3D LUT) of the target filter according to each second sub image block.
2. The method according to claim 1, wherein the obtaining a first calibration map comprises:
dividing a preset color space to obtain a plurality of division nodes, wherein the plurality of division nodes comprise M first division nodes, each first division node comprises M × M division sub-nodes, and each division sub-node corresponds to a color value;
and determining the M first image blocks according to the M first division nodes, wherein one first division node corresponds to one first image block, a target image block is any image block in the M first image blocks, the pixel value of a first sub-image block in the target image block is determined according to the color value of a division sub-node included in the target division node, and the target division node is the first division node corresponding to the target image block.
3. The method of claim 2, wherein the first 3D LUT comprises a plurality of nodes, the plurality of nodes in one-to-one correspondence with the plurality of dividing nodes;
determining a first three-dimensional lookup table (3D LUT) of the target filter according to each of the second sub image blocks, including:
and determining parameter values of a target node in the first 3D LUT according to pixel values of pixel points in a target sub-image block, wherein the target sub-image block is any second sub-image block, and the target sub-image block corresponds to the target node.
4. The method of claim 1, wherein after determining the first three-dimensional look-up table (3D LUT) for the target filter from each of the second sub-image blocks, the method further comprises:
acquiring an initial color image;
inputting the initial color image to the target filter to obtain a first color image;
performing filter processing on the initial color image based on a first 3D LUT to obtain a second color image;
and adjusting the parameter values of the nodes of the first 3D LUT according to the first color image and the second color image to obtain a second 3D LUT of the target filter.
5. The method of claim 4, wherein the adjusting parameter values for nodes of a first 3D LUT according to the first color map and the second color map to obtain a second 3D LUT for the target filter comprises:
inputting the first color image and the second color image into a machine learning model to obtain a second 3D LUT of the target filter, wherein a loss function of the machine learning model is determined according to the first color image, the second color image, an output result of the machine learning model and the first 3D LUT, and the machine learning model adopts a gradient descent algorithm.
6. A filter parameter acquisition apparatus, comprising:
the first obtaining module is used for obtaining a first calibration graph, wherein the first calibration graph comprises M first image blocks, each first image block comprises M × M first sub image blocks, each first sub image block comprises a plurality of pixel points with the same pixel value, and M is a positive integer less than or equal to 255;
the second obtaining module is used for inputting the first calibration graph into a target filter to obtain a second calibration graph, wherein the second calibration graph comprises M second image blocks, and each second image block comprises M × M second sub image blocks;
and the determining module is used for determining a first three-dimensional lookup table (3D LUT) of the target filter according to each second sub image block.
7. The apparatus of claim 6, wherein the first obtaining module comprises:
the first obtaining submodule is used for dividing a preset color space to obtain a plurality of division nodes, wherein the plurality of division nodes comprise M first division nodes, each first division node comprises M × M division sub-nodes, and each division sub-node corresponds to one color value;
and the second obtaining sub-module is configured to determine the M first image blocks according to the M first division nodes, where one first division node corresponds to one first image block, where a target image block is any image block in the M first image blocks, a pixel value of a first sub-image block in the target image block is determined according to a color value of a division sub-node included in the target division node, and the target division node is the first division node corresponding to the target image block.
8. The apparatus of claim 7, wherein the first 3D LUT comprises a plurality of nodes, the plurality of nodes in one-to-one correspondence with the plurality of dividing nodes;
the determining module is configured to determine a parameter value of a target node in the first 3D LUT according to a pixel value of a pixel point in a target sub-image block, where the target sub-image block is any second sub-image block, and the target sub-image block corresponds to the target node.
9. The apparatus of claim 6, further comprising:
the third acquisition module is used for acquiring an initial color image;
the fourth acquisition module is used for inputting the initial color image to the target filter to acquire a first color image;
the fifth acquisition module is used for carrying out filter processing on the initial color image based on the first 3D LUT to obtain a second color image;
and the adjusting module is used for adjusting the parameter values of the nodes of the first 3D LUT according to the first color image and the second color image to obtain a second 3D LUT of the target filter.
10. The apparatus of claim 9, wherein the adjustment module is configured to input the first color map and the second color map into a machine learning model to obtain a second 3D LUT of the target filter, wherein a loss function of the machine learning model is determined according to the first color map, the second color map, an output result of the machine learning model, and the first 3D LUT, and the machine learning model adopts a gradient descent algorithm.
CN202211579442.7A 2022-12-09 2022-12-09 Filter parameter acquisition method and device Pending CN115908191A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202211579442.7A CN115908191A (en) 2022-12-09 2022-12-09 Filter parameter acquisition method and device

Publications (1)

Publication Number Publication Date
CN115908191A true CN115908191A (en) 2023-04-04

Family

ID=86493692

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202211579442.7A Pending CN115908191A (en) 2022-12-09 2022-12-09 Filter parameter acquisition method and device

Country Status (1)

Country Link
CN (1) CN115908191A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116993619A (en) * 2023-08-29 2023-11-03 荣耀终端有限公司 Image processing method and related equipment
CN116993619B (en) * 2023-08-29 2024-03-12 荣耀终端有限公司 Image processing method and related equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination