CN112950453B - Image processing method and image processing apparatus - Google Patents
Image processing method and image processing apparatus
- Publication number
- CN112950453B (application CN202110099560.7A / CN202110099560A)
- Authority
- CN
- China
- Prior art keywords
- tone
- image
- hue
- edited
- pixels
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T3/04—Context-preserving transformations, e.g. by using an importance map (G06T3/00 Geometric image transformations in the plane of the image)
- G06T7/90—Determination of colour characteristics (G06T7/00 Image analysis)
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands (G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data)
- H04N9/643—Hue control means, e.g. flesh tone control (H04N9/64 Circuits for processing colour signals)
- G06T2207/10016—Video; image sequence (G06T2207/10 Image acquisition modality)
Abstract
The present disclosure relates to an image processing method and an image processing apparatus. The image processing method may include the steps of: obtaining an image to be edited; determining at least one hue of the image to be edited; acquiring a region corresponding to the at least one tone from the image to be edited; and performing tone conversion on the acquired region. The present disclosure may enable tone conversion of a particular region in an image.
Description
Technical Field
The present disclosure relates to the field of image processing, and more particularly, to an image processing method and an image processing apparatus for tone conversion.
Background
The user may change the style of an image or video by changing its hue. Currently, tone conversion of an image or video generally transforms the tone of the entire image or video. For example, all regions in a video may be color-graded over a predetermined period of time (such as 15 seconds or 60 seconds), cycling monotonically through different hues. Alternatively, when a person occupies the picture, the entire human-body region may be kept unchanged while only the rest of the scene in the image or video is tone-converted.
Disclosure of Invention
The present disclosure provides an image processing method and an image processing apparatus to address at least the problem that tone conversion in the related art takes only a single form. The technical scheme of the present disclosure is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided an image processing method, which may include the steps of: obtaining an image to be edited; determining at least one hue of the image to be edited; acquiring a region corresponding to the at least one tone from the image to be edited; and performing tone conversion on the acquired region.
Optionally, the at least one hue includes at least one of a dominant hue of the image to be edited, a designated hue, and all hues of the image to be edited.
Optionally, the step of determining at least one hue of the image to be edited may comprise: converting the image to be edited into an HSV image; classifying pixels in the HSV image by tone according to a predefined first tone space; and determining the at least one hue based on the number of pixels included in each tone.
Optionally, the step of acquiring a region corresponding to the at least one tone from the image to be edited may include: extracting a region corresponding to the at least one tone from the image to be edited according to a predefined second tone space.
Alternatively, the first tone space and the second tone space may be defined based on HSV color space, wherein the first tone space and the second tone space may include a range of values of a plurality of tones and a range of values of saturation and brightness corresponding to each tone, respectively.
Alternatively, the ranges of saturation and brightness values corresponding to each tone may be set based on hyperparameters, where the hyperparameters in the first tone space and the second tone space may be set differently.
Optionally, the step of determining the at least one hue based on the number of pixels included in each hue may comprise: sorting the tones in descending order of the number of pixels each tone includes, and determining the first at least one tone in the sorted order as the at least one tone; or determining the at least one tone according to the ratio of the number of pixels included in each tone to the total number of pixels of the image to be edited and the saturation value corresponding to each tone.
Optionally, the step of determining at least one hue of the image to be edited may comprise: clustering the RGB pixels of the image to be edited; sorting the categories in descending order of the number of pixels each category includes; converting the RGB values of the cluster center points of the first at least one category in the ranking into HSV values; and determining the at least one hue based on a predefined first hue space and the converted HSV values.
Optionally, the image processing method may further include: receiving user input; and performing tone conversion on the acquired region according to the user input.
Alternatively, the user input may comprise at least one of a first user input and a second user input, where the first user input may be used to set a target hue and the second user input may be used to set a tone conversion degree, the tone conversion degree indicating the percentage of the area of the image to be edited that is to be tone-converted.
Optionally, the step of hue transforming the acquired region according to said user input may comprise: determining a region to be subjected to tone conversion of the acquired region based on the tone conversion degree; and/or converting the determined hue of the region to be hue-converted into the target hue.
Alternatively, in the case where a human body is included in the acquired region, the step of performing tone conversion on the acquired region may include: extracting the exposed skin area of the human body in the region using a skin detection algorithm; and retaining the original tone for the exposed skin area of the human body.
Alternatively, the step of determining the region of the acquired region to be tone-converted based on the tone conversion degree may include: determining, according to the tone conversion degree, the number N of tone regions to be tone-converted among a plurality of tone regions included in the acquired region, where N is greater than or equal to 1; sorting the tone regions in descending order of the number of pixels included in each of the plurality of tone regions; and determining the first N tone regions in the sorted order as the regions to be tone-converted.
Alternatively, the step of determining the number N of tone regions to be tone-converted from the plurality of tone regions according to the tone conversion degree may include: determining the number N as 1 in the case where the tone conversion degree is less than or equal to a first value; determining the number N as the number of all tone regions in the case where the tone conversion degree is greater than or equal to a second value; and, in the case where the tone conversion degree is greater than the first value and less than the second value, determining the number N as follows: sequentially calculating, in tone-sorted order from the front, the ratio of the number of pixels included in each tone region to the number of effective pixels in the image to be edited, and taking N as the number of tone regions accumulated when the sum of the ratios first becomes greater than or equal to the tone conversion degree.
Optionally, in a case where the image to be edited does not include a human body, the effective pixels include all pixels in the image to be edited; in the case where the image to be edited includes a human body, the effective pixels include pixels other than pixels of an exposed skin area of the human body in the image to be edited.
According to a second aspect of embodiments of the present disclosure, there is provided an image processing apparatus, which may include: the acquisition module is configured to acquire an image to be edited; and a processing module configured to: determining at least one hue of the image to be edited; acquiring a region corresponding to the at least one tone from the image to be edited; and performing tone conversion on the acquired region.
Optionally, the processing module may be configured to convert the image to be edited into an HSV image; performing tone classification on pixels in the HSV image according to a pre-defined first tone space; the at least one hue is determined based on the number of pixels included in each hue.
Alternatively, the processing module may be configured to extract a region corresponding to the at least one hue from the image to be edited according to a predefined second hue space.
Alternatively, the first tone space and the second tone space may be defined based on HSV color space, wherein the first tone space and the second tone space may include a range of values of a plurality of tones and a range of values of saturation and brightness corresponding to each tone, respectively.
Alternatively, the ranges of saturation and brightness values corresponding to each tone may be set based on hyperparameters, where the hyperparameters in the first tone space and the second tone space may be set differently.
Alternatively, the processing module may be configured to sort the tones in descending order of the number of pixels included in each tone and determine the first at least one tone in the sorted order as the at least one tone; or to determine the at least one tone according to the ratio of the number of pixels included in each tone to the total number of pixels of the image to be edited and the saturation value corresponding to each tone.
Alternatively, the processing module may be configured to: cluster the RGB pixels of the image to be edited; sort the categories in descending order of the number of pixels each category includes; convert the RGB values of the cluster center points of the first at least one category in the ranking into HSV values; and determine the at least one hue based on a predefined first hue space and the converted HSV values.
Optionally, the image processing apparatus may further comprise a user input module configured to receive a user input, wherein the processing module is configured to tone-convert the acquired region according to the user input.
Optionally, the user input may include at least one of a first user input for setting a target hue and a second user input for setting a tone conversion degree, where the tone conversion degree represents the percentage of the area of the image to be edited that is to be tone-converted.
Alternatively, the processing module may be configured to: determining a region to be subjected to tone conversion of the acquired region based on the tone conversion degree; and/or converting the determined hue of the region to be hue-converted into the target hue.
Optionally, in the case where the acquired region includes a human body, the processing module may be configured to extract the exposed skin area of the human body in the region using a skin detection algorithm and to retain the original tone for the exposed skin area.
Alternatively, the processing module may be configured to: determine, according to the tone conversion degree, the number N of tone regions to be tone-converted among a plurality of tone regions included in the acquired region, where N is greater than or equal to 1; sort the tone regions in descending order of the number of pixels included in each of the plurality of tone regions; and determine the first N tone regions in the sorted order as the regions to be tone-converted.
Alternatively, the processing module may be configured to: determine the number N as 1 in the case where the tone conversion degree is less than or equal to a first value; determine the number N as the number of all tone regions in the case where the tone conversion degree is greater than or equal to a second value; and, in the case where the tone conversion degree is greater than the first value and less than the second value, determine the number N as follows: sequentially calculate, in tone-sorted order from the front, the ratio of the number of pixels included in each tone region to the number of effective pixels in the image to be edited, and take N as the number of tone regions accumulated when the sum of the ratios first becomes greater than or equal to the tone conversion degree.
Optionally, in a case where the image to be edited does not include a human body, the effective pixels include all pixels in the image to be edited; in the case where the image to be edited includes a human body, the effective pixels include pixels other than pixels of an exposed skin area of the human body in the image to be edited.
According to a third aspect of embodiments of the present disclosure, there is provided an electronic device, which may include: at least one processor; at least one memory storing computer-executable instructions, wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform the image processing method as described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform the image processing method as described above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product, instructions in which are executed by at least one processor in an electronic device to perform the image processing method as described above.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
the image processing scheme provided by the present disclosure can intelligently analyze dominant-hue regions, or regions designated interactively, in an image and then transform those regions to a designated hue; the whole image can also be normalized and converted to a designated tone. At the same time, when a person is present, the person's skin area can be protected while regions on the person, such as clothing or a backpack, are tone-converted. In addition, a user interaction function is provided so that a user can set a desired tone change, which greatly improves the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the disclosure and together with the description, serve to explain the principles of the disclosure and do not constitute an undue limitation on the disclosure.
Fig. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Fig. 3 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Fig. 4 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Fig. 5 is a flow chart illustrating an image processing method according to an exemplary embodiment.
Fig. 6 is a block diagram of an image processing apparatus according to an exemplary embodiment.
Fig. 7 is a schematic structural diagram of an image processing apparatus according to an exemplary embodiment.
Fig. 8 is a block diagram of an electronic device, according to an example embodiment.
Detailed Description
In order to enable those skilled in the art to better understand the technical solutions of the present disclosure, the technical solutions of the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of the embodiments of the disclosure defined by the claims and their equivalents. Various specific details are included to aid understanding, but are merely to be considered exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to written meanings, but are used only by the inventors to achieve a clear and consistent understanding of the disclosure. Accordingly, it should be apparent to those skilled in the art that the following descriptions of the various embodiments of the present disclosure are provided for illustration only and not for the purpose of limiting the disclosure as defined by the claims and their equivalents.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the foregoing figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged where appropriate such that the embodiments of the disclosure described herein may be capable of operation in sequences other than those illustrated or described herein. The implementations described in the following exemplary examples are not representative of all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with some aspects of the present disclosure as detailed in the accompanying claims.
Although the related art achieves a color-change effect on an image, it only shifts all color areas in the image (excluding human parts) to a preset tone; it can neither change the tone of only an area having a specific tone (such as the dominant-tone area of the image) nor let the user specify the tone to change to, and it likewise cannot transform all areas in the image to a specified hue. Further, although the related art retains the tone of all human-body regions in the image and changes the tone only of regions not containing a human body, this prevents changing the tone of regions on the body such as clothing and backpacks.
In view of the above, the present disclosure provides a scheme for tone conversion of an image or video that comprehensively utilizes image processing, statistical analysis, machine learning, and similar means. The scheme can perform dominant-tone extraction, dominant-tone conversion, and tone-normalization conversion in a single-frame image for multiple scenes (with or without people). Dominant-tone transformation refers to intelligently analyzing one or more significant hues in an image and then transforming only the regions with those hues to a specified hue, without obvious abrupt artifacts. Tone-normalization transformation refers to intelligently analyzing an image and transforming regions of multiple tones in the image into one specific tone, without obvious abrupt artifacts.
The method can analyze dominant-tone regions, or regions designated interactively, in an image and then convert their tone to a designated tone, or normalize the whole image to a designated tone. Meanwhile, human skin can be protected (that is, the skin portion of the image keeps its original appearance) while items on the body, such as clothing and backpacks, have their tone changed, providing a variety of tone-change special effects in a flexible way.
Hereinafter, according to various embodiments of the present disclosure, the method, apparatus, and device of the present disclosure will be described in detail with reference to the accompanying drawings.
Fig. 1 is a flowchart illustrating an image processing method according to an exemplary embodiment; the method may be used for tone conversion of an image or video. The method shown in fig. 1 may be performed by any electronic device having an image processing function. The electronic device may be at least one of, for example: a smartphone, a tablet personal computer (PC), a mobile phone, a video phone, an e-book reader, a desktop PC, a laptop PC, a netbook computer, a workstation, a server, a personal digital assistant (PDA), a portable multimedia player (PMP), a camera, or a wearable device.
Referring to fig. 1, in step S11, an image to be edited is obtained. Here, the image to be edited may be a photograph or a single frame image extracted from a video.
In step S12, at least one tone of the image to be edited is determined. The at least one hue may be a dominant hue in the image or may comprise a dominant hue as well as other kinds of prominent hues. Alternatively, the determined at least one hue may be a user-specified hue, or any combination including the above-mentioned hue categories. The above examples are merely exemplary, and the present disclosure is not limited thereto.
Dominant hue may refer to the hue that is noticeable in the image to be edited. There may be one or more distinct hues in an image. Further, the number of tones may also be set according to user input.
In determining at least one tone in the image to be edited, the image to be edited may be converted into an HSV image, the pixels in the HSV image may be classified by tone according to a predefined first tone space, and the at least one tone may then be determined based on the number of pixels included in each tone. The first tone space may be predefined for salient-tone analysis and defined based on the HSV color space. For example, the first tone space may include a plurality of hues, a range of values for each hue, and ranges of saturation and brightness values corresponding to each hue.
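To make the classification step concrete, the following is a minimal sketch of how such a first tone space might be applied with OpenCV. The hue ranges below are illustrative assumptions (the values of the patent's Table 1 are not reproduced in this text), and red's wrap-around at H = 0/180 is simplified.

```python
import cv2
import numpy as np

# Illustrative first tone space (assumed values, not the patent's Table 1):
# name -> (h_min, h_max, s_min, s_max, v_min, v_max) in OpenCV HSV units,
# where H is in [0, 180] and S, V are in [0, 255].
FIRST_TONE_SPACE = {
    "kRed":    (156, 180, 43, 255, 46, 255),  # red also spans [0, 10]; omitted here
    "kOrange": ( 11,  25, 43, 255, 46, 255),
    "kYellow": ( 26,  34, 43, 255, 46, 255),
    "kGreen":  ( 35,  77, 43, 255, 46, 255),
    "kCyan":   ( 78,  99, 43, 255, 46, 255),
    "kBlue":   (100, 124, 43, 255, 46, 255),
    "kPurple": (125, 155, 43, 255, 46, 255),
}

def classify_hues(bgr_image):
    """Convert to HSV and count the pixels falling into each hue bin."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    counts = {}
    for name, (h0, h1, s0, s1, v0, v1) in FIRST_TONE_SPACE.items():
        mask = cv2.inRange(hsv, (h0, s0, v0), (h1, s1, v1))
        counts[name] = int(np.count_nonzero(mask))
    return counts
```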
The saturation and brightness ranges corresponding to each tone can each be given hyperparameters, so that certain saturation and brightness regions can be ignored, which makes the transition in the image or video after the subsequent tone conversion more natural.
The pixels in the image to be edited can be classified according to the value range of each tone and the corresponding saturation and brightness ranges, the tones can be sorted in descending order of the number of pixels each tone includes, and the first at least one tone in the sorted order can be determined. For example, the first hue in the tone ordering may be determined as the at least one hue, or the first several hues in the tone ordering may be determined as the at least one hue.
As another example, after classifying the pixels in the image to be edited, the at least one tone may be determined from the ratio of the number of pixels included in each tone to the total number of pixels of the image to be edited, together with the saturation value corresponding to each tone. For example, the ratio of the number of pixels in each tone to the total number of pixels may be calculated, and the average saturation of each tone's region may be calculated; the larger the product (or, alternatively, the sum) of the two, the more salient the tone. The at least one tone can therefore be determined from these products or sums.
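The product-based saliency score described above could be sketched as follows, reusing FIRST_TONE_SPACE from the previous sketch. The scoring rule (pixel ratio multiplied by mean saturation) follows the text; everything else is an assumption.

```python
def rank_hues_by_saliency(bgr_image, top_k=1):
    """Rank hue bins by (pixel ratio x mean saturation); larger is more salient."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    total = hsv.shape[0] * hsv.shape[1]
    scores = {}
    for name, (h0, h1, s0, s1, v0, v1) in FIRST_TONE_SPACE.items():
        mask = cv2.inRange(hsv, (h0, s0, v0), (h1, s1, v1)) > 0
        n = int(mask.sum())
        mean_sat = float(hsv[..., 1][mask].mean()) if n else 0.0
        scores[name] = (n / total) * mean_sat
    return sorted(scores, key=scores.get, reverse=True)[:top_k]
```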
Alternatively, when determining at least one tone in the image to be edited, the RGB pixels of the image to be edited may be clustered, each class may be ordered in order of the number of pixels included in each class from large to small, the RGB values of the cluster center point of the first at least one class in the order may be converted into HSV values, and at least one tone may be determined based on the predefined first tone space and the converted HSV values.
For example, the cluster center points of the Q most significant categories, denoted [R_{s-i}, G_{s-i}, B_{s-i}] (i = 0, 1, …, Q-1), may be converted to the corresponding HSV values, denoted [H_{s-i}, S_{s-i}, V_{s-i}] (i = 0, 1, …, Q-1); then, from the first hue space, the hue to which each [H_{s-i}, S_{s-i}, V_{s-i}] belongs is determined. For example, if the cluster center RGB value of the j-th class is [100, 50, 150], its corresponding HSV value is [135, 170, 50], and the corresponding hue is determined from the first hue space based on that HSV value. However, the above examples are merely exemplary, and the present disclosure is not limited thereto.
In step S13, a region corresponding to at least one tone is acquired from the image to be edited. The region corresponding to the at least one hue may be extracted from the image to be edited according to a predefined second hue space. After determining the hues, the corresponding regions in the image to be edited may be extracted based on a range of values for each hue in a second hue space different from the first hue space.
In step S14, the acquired region is subjected to tone conversion. In performing the tone conversion, the tone to be converted into can be specified according to user input. By receiving a user input for setting a target tone, the tone of the acquired region can be converted into a target tone desired by the user.
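Continuing the sketches above, one plausible implementation of steps S13 and S14 is to build a mask from the second tone space and overwrite only the H channel inside it. Whether saturation and brightness are also adjusted is not specified by the text, so this sketch leaves them untouched; the range values are assumptions.

```python
def transform_region_hue(bgr_image, tone_range, target_hue):
    """Extract the region for one tone and set its hue to target_hue (0-180)."""
    h0, h1, s0, s1, v0, v1 = tone_range
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, (h0, s0, v0), (h1, s1, v1))
    hsv[..., 0] = np.where(mask > 0, target_hue, hsv[..., 0])
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

# Example: shift the assumed kBlue region toward violet (target hue 140).
# result = transform_region_hue(image, FIRST_TONE_SPACE["kBlue"], 140)
```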
In addition, in the case where a human body is included in the image to be edited, the tone of the exposed skin area of the human body in the image can be retained, and the tone conversion can be performed on other areas. For example, a skin detection algorithm may be used to extract an area of human bare skin in the image to be edited, the extracted area of human bare skin retaining the original tone, and the tone of an area on the human body such as clothing or a backpack being transformed.
The present disclosure may enable transforming a tonal saliency region in an image or video to a specified tone, or transforming an entire image to a specified tone, or transforming a specified region in an image to a specified tone.
Fig. 2 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Referring to fig. 2, in step S21, an image to be edited is obtained. When a video is acquired, each frame image of the video may be extracted.
In step S22, the image to be edited is converted into an HSV image. An HSV conversion algorithm may be utilized to convert the image to be edited into an HSV image. After undergoing HSV conversion, each pixel of the HSV image may be represented by a hue H, a saturation S, a brightness V.
In step S23, pixels in the HSV image are tone-classified according to a tone space defined in advance.
According to embodiments of the present disclosure, a tone space may be defined based on the HSV color space. The tone space may include the value ranges of a plurality of tones and the saturation and brightness ranges corresponding to each tone. Further, the saturation and brightness ranges corresponding to each hue may each be provided with hyperparameters. The setting of these hyperparameters can make the tone-converted image more natural to some extent.
As an example, 10 hues, such as black, gray, white, red, orange, yellow, green, cyan, blue, and violet hues, may be included in the hue space, each setting a corresponding range of values to distinguish. For example, the tone space can be represented by the following table 1.
TABLE 1 [table content not reproduced in the extracted text]
In Table 1, kBlack, kGray, kWhite, kRed, kOrange, kYellow, kGreen, kCyan, kBlue and kPurple denote, in order, the black, gray, white, red, orange, yellow, green, cyan, blue and violet hues. [Hmin, Hmax] is the value range of hue (Hmin >= 0; Hmin <= Hmax; Hmax <= 180), [Smin, Smax] is the value range of saturation (Smin >= 0; Smin <= Smax; Smax <= 255), and [Vmin, Vmax] is the value range of brightness (Vmin >= 0; Vmin <= Vmax; Vmax <= 255). Ks and Kv are hyperparameters that can be set according to the actual situation (-211 <= Ks <= 44, -208 <= Kv <= 47); they allow pixels in certain saturation and brightness ranges to be ignored. For example, Ks can be set to 33 and Kv to 1. However, the above examples are merely exemplary, and the present disclosure is not limited thereto.
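Since Table 1 itself is not reproduced here, the following sketch only illustrates one possible reading of the Ks/Kv hyperparameters: they shift the saturation and brightness lower bounds so that weakly saturated or dark pixels are ignored, with Ks = 44 and Kv = 47 imposing no S/V restriction at all. This interpretation is an assumption inferred from the stated bounds, not the patent's formula.

```python
def apply_hyperparams(tone_space, ks=33, kv=1):
    """Tighten the S/V lower bounds of each hue bin according to Ks and Kv.

    Assumed rule: s_min = 44 - Ks and v_min = 47 - Kv (clamped to [0, 255]),
    so larger Ks/Kv accept more pixels, and Ks=44, Kv=47 accept everything.
    """
    adjusted = {}
    for name, (h0, h1, _s0, s1, _v0, v1) in tone_space.items():
        s_min = min(max(44 - ks, 0), 255)
        v_min = min(max(47 - kv, 0), 255)
        adjusted[name] = (h0, h1, s_min, s1, v_min, v1)
    return adjusted
```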
The pixels of the HSV image may be classified according to the range of values for each hue in the hue space. For example, according to the tone space of table 1 described above, pixels of an image to be edited can be classified into 10 categories.
In step S24, the dominant hue of the image to be edited is determined. Here, a dominant hue means a hue that is relatively pronounced in the image. After the pixel classification, the number of pixels included in each tone may be calculated, the tones may be sorted in descending order of pixel count, and the dominant tone may be determined from the first tones in the sorted order.
As an example, after classifying pixels into the 10 hues using Table 1, the number of pixels occupied by each of the 10 hues may be calculated; the more pixels a hue has, the more significant its region is, and the pixels of the top G most significant classes (here G <= 10) may be selected as the dominant hues. Alternatively, when performing the saliency tone analysis, the three tones kBlack, kGray and kWhite may be excluded, i.e., the saliency ordering is performed among the remaining seven hues, with G <= 7.
As another example, the dominant hue may be determined from the ratio of the number of pixels included in each hue to the total number of pixels of the image to be edited, together with the saturation value corresponding to each hue. For example, a dominant hue is selected by combining the proportion of pixels of each hue with the average saturation of that hue, such as multiplying the pixel proportion by the average saturation, a larger value indicating a more pronounced hue; or the pixel proportion and the average saturation may be added, a larger value again indicating a more pronounced hue. However, the above examples are merely examples, and the present disclosure is not limited thereto.
In addition, the dominant hue may also be determined using a priori knowledge, algorithmic processing, statistical analysis, and the like.
In step S25, a region corresponding to the determined dominant hue is acquired from the image to be edited. A predefined second tone space may be used to extract the region corresponding to the determined dominant tone from the image to be edited.
After determining the dominant hue, a corresponding region in the image to be edited may be extracted based on a range of values for each hue in a second hue space different from the first hue space.
As an example, when performing the tone-region division, the corresponding regions in the image to be edited can be extracted using the value range of each tone in Table 2. Compared with Table 1, Table 2 omits the three tones kBlack, kGray and kWhite and divides the regions purely from the viewpoint of hue (in other words, Table 2 corresponds to Table 1 with Ks = 44 and Kv = 47), so that noticeably unnatural transitions at region boundaries can be avoided.
TABLE 2 [table content not reproduced in the extracted text]
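Following the note that Table 2 equals Table 1 with Ks = 44 and Kv = 47 and without the three achromatic tones, a second tone space could be derived from the first like this, reusing apply_hyperparams from the sketch above. This is an assumed construction, not the patent's definition.

```python
def second_tone_space(first_space):
    """Derive the region-division tone space: drop the achromatic tones and
    relax the S/V restrictions entirely (Ks = 44, Kv = 47)."""
    chromatic = {name: rng for name, rng in first_space.items()
                 if name not in ("kBlack", "kGray", "kWhite")}
    return apply_hyperparams(chromatic, ks=44, kv=47)
```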
In step S26, the acquired region is subjected to tone conversion. The corresponding region may be transformed into the target hue by the user entering the target hue value. Alternatively, the acquired region may be converted into a predetermined tone.
In step S27, post-processing may be performed on the tone-converted image. According to embodiments of the present disclosure, in order to smooth the transformed edge transitions in the image and make them look more natural, the tone-converted image may be subjected to a filtering process. In addition, to increase the diversity of tone conversion, the original pixel values may be preserved for some particular regions, such as human skin-tone portions.
As an example, in order to keep edges smooth, the tone-converted image may be subjected to a guided-filtering operation with the original image to be edited as the reference image. When a person is present in the image to be edited, the skin color of the human body can be protected; skin detection can be realized with a skin segmentation method based on an elliptical color space or a skin segmentation algorithm based on deep learning. In addition, a smoothing algorithm such as bilateral filtering can be used to filter the image.
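A sketch of this post-processing stage is shown below. The guided filter comes from OpenCV's contrib ximgproc module, and the elliptical skin model uses constants commonly seen in CrCb-plane skin segmentation; the radius, eps and ellipse parameters are illustrative, not the patent's.

```python
def skin_mask(bgr_image):
    """Classic elliptical skin model in the CrCb plane (common constants)."""
    ellipse = np.zeros((256, 256), dtype=np.uint8)
    cv2.ellipse(ellipse, (113, 155), (23, 15), 43.0, 0.0, 360.0, 255, -1)
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    return ellipse[cr, cb]  # 255 where (Cr, Cb) falls inside the skin ellipse

def post_process(original_bgr, transformed_bgr):
    """Restore skin pixels, then smooth edges with a guided filter."""
    skin = skin_mask(original_bgr) > 0
    out = transformed_bgr.copy()
    out[skin] = original_bgr[skin]          # protect exposed skin
    # Guided filtering with the original image as guide (needs opencv-contrib).
    return cv2.ximgproc.guidedFilter(original_bgr, out, radius=8, eps=1e3)
```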
Fig. 3 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Referring to fig. 3, in step S31, an image to be edited is obtained.
In step S32, the RGB pixels of the image to be edited are clustered.
As an example, the RGB pixel values may be clustered with M as the number of cluster centers. For example, a clustering algorithm such as K-means or K-means++ can be used to divide all pixels in the image to be edited into M classes.
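A minimal clustering sketch for steps S32 through S34, using scikit-learn's KMeans (which applies k-means++ initialization by default); the choice of M and Q is illustrative, and OpenCV supplies pixels in BGR order.

```python
import cv2
import numpy as np
from sklearn.cluster import KMeans

def dominant_cluster_centers_hsv(bgr_image, m=8, top_q=1):
    """Cluster pixels into M classes and return the HSV values of the
    cluster centers of the top-Q largest classes."""
    pixels = bgr_image.reshape(-1, 3).astype(np.float32)
    km = KMeans(n_clusters=m, n_init=10, random_state=0).fit(pixels)
    counts = np.bincount(km.labels_, minlength=m)
    order = np.argsort(counts)[::-1]               # largest classes first
    centers_bgr = km.cluster_centers_[order[:top_q]].astype(np.uint8)
    centers_hsv = cv2.cvtColor(centers_bgr.reshape(1, -1, 3), cv2.COLOR_BGR2HSV)
    return centers_hsv.reshape(-1, 3)              # look these up in the first tone space
```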
In step S33, each category is ordered in order of the number of pixels included in each category from large to small.
In step S34, RGB values of cluster center points of the first category in the ranking are converted into HSV values.
In step S35, a dominant hue is determined from the predefined first hue space and the converted HSV value. The more pixels in a class, the more significant the class may be, and the pixels of the first most significant class in the rank may be extracted.
In step S36, an area corresponding to the dominant hue is acquired from the image to be edited. The region corresponding to the dominant hue may be extracted from the image to be edited using a predefined second hue space, for example according to the ranges in Table 2 above. For example, if the kPurple tone is dominant, all pixels falling within the kPurple ranges of Table 2 can be extracted and labeled Region_j. However, the above examples are merely exemplary, and the present disclosure is not limited thereto.
In step S37, the acquired region is subjected to tone conversion. The corresponding region may be transformed into the target hue by the user entering the target hue value. Alternatively, the acquired region may be converted into a predetermined tone.
In step S38, post-processing may be performed on the tone-converted image. According to embodiments of the present disclosure, in order to smooth the transformed edge transition in the image, making it more natural to look at, the tone transformed image may be subjected to a filtering process. In addition, to increase the diversity of tone conversion, the original pixel values, such as human skin tone portions, may be preserved for some particular regions.
Fig. 4 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Referring to fig. 4, in step S41, an image to be edited is obtained.
In step S42, at least one tone of the image to be edited is determined. Here, the at least one tone may include a dominant tone as well as other tones than the dominant tone. Alternatively, the at least one hue may comprise all hues in the image to be edited.
At least one hue in the image to be edited may be determined using a predefined first hue space. For example, 10 hues may be determined using the first hue space described in table 1.
In step S43, a region corresponding to at least one tone is acquired from the image to be edited. A region corresponding to at least one tone determined in step S42 may be determined using a predefined second tone space. For example, the image to be edited may be area-divided using the second tone space shown in table 2.
In step S44, a user input is received. According to embodiments of the present disclosure, the user input may include at least one of a first user input and a second user input. Here, the first user input may be used to set a target hue and the second user input may be used to set a degree of hue conversion, wherein the degree of hue conversion represents a percentage of an area in the image to be edited that is to be hue converted.
As an example, the value range of the tone conversion degree may be [0%, 100%], denoted T%, meaning that at least T percent of the area of the image to be edited will be converted to the target hue.
In step S45, the acquired region is subjected to tone conversion based on the user input. In the case where the user input is the first user input, the hue of the acquired region may be converted into the target hue of the input. In the case where the user input is the second user input, the region to be subjected to tone conversion of the acquired region is determined based on the tone conversion degree, and then the tone of the determined region is converted into a specified tone. In the case where the user input includes the first user input and the second user input, an area to be subjected to tone conversion of the acquired area may be determined based on the tone conversion degree, and then the tone of the determined area may be converted into the input target tone. The above examples are merely exemplary, and the present disclosure is not limited thereto.
When the region to be tone-converted is determined based on the second user input, the number N (N >= 1) of tone regions to be converted among the plurality of tone regions included in the acquired region is first determined from the tone conversion degree; the tone regions are then sorted in descending order of the number of pixels each contains, and the first N tone regions in the sorted order are determined as the regions to be tone-converted.
According to embodiments of the present disclosure, the number N may be determined as follows. When the tone conversion degree is less than or equal to a first value, N is 1. When the tone conversion degree is greater than or equal to a second value, N is the number of all tone regions. When the tone conversion degree is between the first and second values, the ratios of the number of pixels in each tone region to the number of effective pixels in the image to be edited are accumulated in tone-sorted order from the front, and N is the number of tone regions accumulated when the sum of the ratios first reaches or exceeds the tone conversion degree.
In the case where the image to be edited does not include a human body, the effective pixels include all pixels in the image to be edited. In the case where the image to be edited includes a human body, the effective pixels include pixels other than pixels of an exposed skin area of the human body in the image to be edited.
For example, when the tone conversion degree T < t_low (t_low defaults to 40), N = 1 may be set, meaning that only the most significant tone region Region_0 is converted; its tone is Color_0, with hue range [H_{0-low}, H_{0-high}]. The transformation may set the hue values of all pixels in Region_0 to the specified or target hue value.

When T > t_high (t_high defaults to 90), N may be set to the number of all tone regions in the acquired region, meaning that the tones of all these regions are shifted to the specified or target hue value. The transformation may set the hue values of all pixels in [Region_i] (i = 0, 1, …, N-1) to the specified or target hue value.

When t_low < T < t_high, let X be the number of effective pixels of the image to be edited (the effective pixel count refers to the number of pixels remaining after removing the pixels in the skin segmentation result), let Len(Region_i) denote the number of pixels in Region_i, and let Ratio(Region_i) denote the proportion of the whole image occupied by Region_i, as expressed by equation (1):

Ratio(Region_i) = Len(Region_i) / X    (1)

The pixel proportions of the first n hues are calculated in sequence until equation (2) is satisfied:

Ratio(Region_0) + Ratio(Region_1) + … + Ratio(Region_{n-1}) >= T%    (2)

At this point N = n may be taken, meaning that all of these N tone regions are converted to the specified or target tone. The transformation may set the tone values of all pixels in [Region_i] (i = 0, 1, …, n-1) to the specified or target tone.

In addition, when t_low = 0 and t_high = 1, there is no need to distinguish between the dominant-tone conversion and tone-normalization conversion functions; the regional tone conversion of the image is controlled by the tone conversion degree T alone, so the value of T can be adjusted interactively by the user.
By setting a tone conversion degree in the tone conversion, the two parameters t_low and t_high can realize both the dominant-tone conversion function and the tone-normalization conversion function; the degree can also serve on its own as an interaction parameter adjusted by the user, realizing tone conversion of a controllable proportion of the image area.
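Putting the three cases together, the selection of N from the conversion degree T might look like the sketch below. The t_low and t_high defaults follow the text; the function and argument names are hypothetical.

```python
def num_regions_to_convert(region_pixel_counts, num_valid_pixels,
                           t_percent, t_low=40, t_high=90):
    """Pick N tone regions to convert; region_pixel_counts must already be
    sorted in descending order, and num_valid_pixels excludes skin pixels."""
    if t_percent <= t_low:
        return 1
    if t_percent >= t_high:
        return len(region_pixel_counts)
    cumulative = 0.0
    for n, count in enumerate(region_pixel_counts, start=1):
        cumulative += count / num_valid_pixels     # equation (1)
        if cumulative * 100 >= t_percent:          # equation (2)
            return n
    return len(region_pixel_counts)
```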
As an alternative embodiment, the tone-converted image may be post-processed. According to embodiments of the present disclosure, in order to smooth the transformed edge transition in the image, making it more natural to look at, the tone transformed image may be subjected to a filtering process. In addition, to increase the diversity of tone conversion, the original pixel values, such as human skin tone portions, may be preserved for some particular regions.
As an example, in order to keep edges smooth, the tone-converted image may be subjected to a guided-filtering operation with the original image to be edited as the reference image. When a person is present in the image to be edited, the skin color of the human body can be protected; this can be realized with a skin segmentation method based on an elliptical color space or a skin segmentation algorithm based on deep learning.
Fig. 5 is a flow chart illustrating an image processing method according to an exemplary embodiment.
Referring to fig. 5, an image or video to be edited is acquired. In the case of acquiring a video, a single frame image is extracted, and each frame image is processed frame by frame.
A tone saliency analysis is then performed on the acquired image or video, which may use the first tone space. For example, based on the HSV color space, the tone space is predefined with 10 hues (kBlack, kGray, kWhite, kRed, kOrange, kYellow, kGreen, kCyan, kBlue, kPurple), representing in order the black, gray, white, red, orange, yellow, green, cyan, blue and violet hues. The top G dominant (or significant) hues in the image or video can be obtained by means of prior knowledge, algorithmic processing, statistical analysis and the like, denoted [Color_i] (i = 0, 1, …, G-1). The at least one hue may also be obtained with a clustering algorithm based on the RGB color space. For example, with M (M >= G) as the number of cluster centers, the RGB pixel values are clustered; all pixels in the image can be divided into M classes using a clustering algorithm such as K-means or K-means++. The more pixels a class includes, the more significant it is marked, and the pixels of the top G most significant classes are taken as the at least one tone.
Alternatively, at least one hue may be determined using a hue space based on the HSV color space. For example, according to the value ranges of the 10 hues in Table 1, the pixels in the HSV image are divided into 10 categories, i.e., 10 tone regions. Then the number of pixels occupied by each of the 10 regions is calculated; the more pixels, the more significant the hue region. The pixels of the top G most significant categories are taken (in this case G <= 10), or the selection can combine the pixel-count proportion with the average saturation value of the region. Alternatively, when performing the saliency tone analysis, the three tones kBlack, kGray and kWhite in Table 1 can be removed, i.e., the saliency ordering is performed among the remaining 7 hues, so G <= 7.
Next, the image to be edited is divided into hue regions according to the determined at least one hue. The division may use a second tone space. For example, based on the top G significant hues from the analysis, the corresponding regions in the image may be sequentially extracted according to the ranges in Table 2, denoted [Region_i] (i = 0, 1, …, G-1). Protected pixels, such as skin-area pixels, may be excluded from these regions.
If an RGB-color-space clustering algorithm was used in the saliency tone analysis, the cluster center points of the G most significant categories, denoted [R_{s-i}, G_{s-i}, B_{s-i}] (i = 0, 1, …, G-1), are converted to the corresponding HSV values, denoted [H_{s-i}, S_{s-i}, V_{s-i}] (i = 0, 1, …, G-1); then, from the first tone space (such as Table 1), the hue to which each [H_{s-i}, S_{s-i}, V_{s-i}] belongs is determined, and the regions [Region_i] (i = 0, 1, …, G-1) are extracted from the image according to the ranges of those hues in Table 2. For example, if the cluster center RGB value of the j-th class is [100, 50, 150], its corresponding HSV value is [135, 170, 50]; with Ks = 33 and Kv = 1 (as in Table 1), this cluster center belongs to the kPurple tone, and all pixel values falling within the kPurple ranges of Table 2 are then extracted and recorded as Region_j.
If the salient hue analysis is performed using the hue space predefined in Table 1, the regions [Region_i] (i = 0, 1, …, G-1) of the corresponding hues can be extracted according to Table 2.
In the tone conversion, the divided regions may be tone-converted according to user input. In the case where the user input is the first user input, the hue of the acquired region may be converted into the target hue of the input. In the case where the user input is the second user input, the region to be subjected to tone conversion in the above-described divided region is determined based on the degree of tone conversion, and then the tone of the determined region is converted into a specified tone. In the case where the user input includes the first user input and the second user input, the region to be subjected to the tone conversion in the above-described divided region may be determined based on the tone conversion degree, and then the tone of the determined region is converted into the input target tone. The above examples are merely exemplary, and the present disclosure is not limited thereto.
By setting user input, the main tone conversion effect and tone normalization conversion effect can be realized, and the user experience is greatly improved.
After the tone conversion, post-processing is performed on the converted image. For example, to smooth the transformed edge transitions in the image, making them appear more natural, the tonally transformed image may be subjected to a filtering process. In addition, to increase the diversity of tone conversion, the original pixel values, such as human skin tone portions, may be preserved for some particular regions.
After post-processing, a final target image may be obtained.
Fig. 6 is a block diagram of an image processing apparatus according to an exemplary embodiment. Referring to fig. 6, an image processing apparatus 600 may include an acquisition module 601 and a processing module 602. In addition, the image processing apparatus 600 may further include a user input module 603. Each module in the image processing apparatus 600 may be implemented by one or more modules, and the names of the corresponding modules may vary according to the types of the modules. In various embodiments, some modules in the image processing apparatus 600 may be omitted, or additional modules may also be included. Furthermore, modules/elements according to various embodiments of the present disclosure may be combined to form a single entity, and thus functions of the respective modules/elements prior to combination may be equivalently performed.
The acquisition module 601 may obtain an image to be edited.
The processing module 602 may determine at least one hue of the image to be edited. Here, the at least one tone may be a dominant tone, a designated tone, or other tones.
The processing module 602 may obtain an area corresponding to the determined at least one hue from the image to be edited and perform a hue transformation on the obtained area.
The processing module 602 may convert the image to be edited into an HSV image, tone classify pixels in the HSV image according to a predefined first tone space, and determine at least one tone based on the number of pixels included in each tone.
The processing module 602 may extract a region corresponding to the at least one hue from the image to be edited according to a predefined second hue space.
The first tone space and the second tone space may be predefined based on the HSV color space, and may respectively include the value ranges of a plurality of tones and the saturation and brightness ranges corresponding to each tone. The saturation and brightness ranges corresponding to each tone may be set based on hyperparameters, where the hyperparameters in the first tone space and the second tone space may be set differently. For example, the first tone space may be as shown in Table 1 above, and the second tone space as shown in Table 2 above.
The processing module 602 may sort the tones in descending order of the number of pixels included in each tone and determine the first at least one tone in the sorted order as the at least one tone; or it may determine the at least one tone according to the ratio of the number of pixels included in each tone to the total number of pixels of the image to be edited and the saturation value corresponding to each tone.
The processing module 602 may perform clustering processing on the RGB pixels of the image to be edited, sort the categories in descending order of the number of pixels each includes, convert the RGB values of the cluster center points of the first at least one category in the ranking into HSV values, and determine the at least one hue based on a predefined first hue space and the converted HSV values.
The user input module 603 may receive user input.
The processing module 602 may tone transform the acquired region based on the received user input. According to an embodiment of the present disclosure, the user input may include at least one of a first user input for setting a target hue and a second user input for setting a hue conversion degree, wherein the hue conversion degree represents a percentage of an area in the image to be edited to be hue converted.
The processing module 602 may determine the region of the acquired region to be tone transformed based on the degree of tone transformation.
The processing module 602 may transform the determined hue of the region to be hue-transformed into the target hue, for example as sketched below.
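A minimal sketch of the hue transformation. The disclosure does not specify whether the hue is replaced or shifted; simple replacement of the H channel within the region mask is assumed here.

```python
import cv2
import numpy as np

def transform_region_hue(bgr, region_mask, target_hue):
    """Replace the hue of masked pixels with target_hue (OpenCV H in [0, 180)),
    keeping each pixel's original saturation and value."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    hsv[..., 0] = np.where(region_mask > 0, target_hue, hsv[..., 0])
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```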
In the case where the acquired region includes a human body, the processing module 602 may extract a region of human bare skin in the region using a skin detection algorithm and preserve the original hue for the region of human bare skin.
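For the skin-preservation step above, the disclosure leaves the skin detection algorithm open. The sketch below uses a crude HSV-range heuristic purely as a placeholder; a learned skin segmenter could be substituted.

```python
import cv2

def exclude_bare_skin(bgr, region_mask):
    """Drop bare-skin pixels from the region mask so their original hue is
    preserved. The HSV skin range is an illustrative heuristic only."""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    skin = cv2.inRange(hsv, (0, 30, 60), (25, 160, 255))
    return cv2.bitwise_and(region_mask, cv2.bitwise_not(skin))
```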
The processing module 602 may determine, according to the tone conversion degree, the number N of tone regions to be tone-converted among a plurality of tone regions included in the acquired region, where N is greater than or equal to 1; sort the tone regions in descending order of the number of pixels each contains; and determine the first N tone regions in the sorting as the regions to be tone-converted.
In the case where the tone conversion degree is less than or equal to a first value, the processing module 602 may determine the number N as 1.
In the case where the tone conversion degree is greater than or equal to a second value, the processing module 602 may determine the number N as the number of all tone regions.
In the case where the tone conversion degree is greater than the first value and less than the second value, the processing module 602 may determine the number N as follows: starting from the front of the tone ordering, the ratio of the number of pixels in each tone region to the number of effective pixels in the image to be edited is accumulated, and N is the number of tone regions counted when the accumulated sum first becomes greater than or equal to the tone conversion degree. Here, in the case where the image to be edited does not include a human body, the effective pixels include all pixels in the image to be edited; in the case where the image to be edited includes a human body, the effective pixels include the pixels other than those of the bare-skin area of the human body. For example, the number N may be calculated with reference to equation (2) above; a sketch following this rule appears below.
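An illustrative sketch of this rule. Equation (2) is not reproduced in this section, so the code follows the prose description instead; `region_pixel_counts` is assumed to be a list of per-region pixel counts, and `first_value`/`second_value` are placeholder thresholds.

```python
def num_regions_to_convert(region_pixel_counts, effective_pixels, degree,
                           first_value=0.3, second_value=0.9):
    """Determine N from the tone conversion degree (a fraction in [0, 1])."""
    if degree <= first_value:
        return 1
    if degree >= second_value:
        return len(region_pixel_counts)
    ordered = sorted(region_pixel_counts, reverse=True)  # largest regions first
    cumulative = 0.0
    for n, count in enumerate(ordered, start=1):
        cumulative += count / effective_pixels
        if cumulative >= degree:
            return n
    return len(ordered)  # degree exceeds total coverage: convert all regions
```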
Fig. 7 is a schematic structural diagram of an image processing apparatus according to an exemplary embodiment.
As shown in fig. 7, the image processing apparatus 700 may include a processing component 701, a communication bus 702, a network interface 703, an input/output interface 704, a memory 705, and a power supply component 706. The communication bus 702 enables communication among these components. The input/output interface 704 may include a video display (such as a liquid crystal display), a microphone and speaker, and a user interaction interface (such as a keyboard, mouse, or touch input device), and may optionally also include standard wired and wireless interfaces. The network interface 703 may optionally include a standard wired interface and a wireless interface (e.g., a wireless fidelity interface). The memory 705 may be a high-speed random access memory or a stable nonvolatile memory. The memory 705 may alternatively be a storage device separate from the processing component 701.
It will be appreciated by those skilled in the art that the structure shown in fig. 7 does not limit the image processing apparatus 700, which may include more or fewer components than shown, combine certain components, or arrange the components differently.
As shown in fig. 7, the memory 705, as a storage medium, may include an operating system, a data storage module, a network communication module, a user interface module, an image processing program, and a database.
In the image processing apparatus 700 shown in fig. 7, the network interface 703 is mainly used for data communication with an external apparatus/terminal, and the input/output interface 704 is mainly used for data interaction with a user. The image processing apparatus 700 executes the image processing method provided by the embodiments of the present disclosure by having the processing component 701 call the image processing program stored in the memory 705 and the various APIs provided by the operating system.
The processing component 701 may include at least one processor, with a set of computer-executable instructions stored in the memory 705 that, when executed by the at least one processor, perform an image processing method according to an embodiment of the present disclosure. Further, the processing component 701 may perform encoding operations, decoding operations, and the like. However, the above examples are merely exemplary, and the present disclosure is not limited thereto.
The processing component 701 may obtain an image to be edited, determine at least one tone of the image to be edited, obtain an area corresponding to the at least one tone from the image to be edited, and perform a tone transformation on the obtained area. Alternatively, the acquired region may be tone-converted according to user input, as in the combined sketch below.
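Tying the earlier sketches together, an end-to-end flow the processing component might execute could look like the following; all helper names come from the illustrative snippets above, not from the disclosure, and are assumed to be in scope.

```python
def edit_image(bgr, target_hue):
    """Illustrative end-to-end flow: find the dominant tone, extract its
    region, drop bare skin, and convert the region's hue to target_hue."""
    counts = count_pixels_per_tone(bgr)
    tone = top_k_tones(counts, k=1)[0]
    mask = extract_region(bgr, tone)
    mask = exclude_bare_skin(bgr, mask)  # skip if no human body is present
    return transform_region_hue(bgr, mask, target_hue)
```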
The image processing apparatus 700 may receive or output images and/or video via the input/output interface 704. For example, a user may output processed images or video via the input/output interface 704 to share them with other users.
By way of example, the image processing apparatus 700 may be a PC, a tablet device, a personal digital assistant, a smart phone, or another device capable of executing the above-described set of instructions. Here, the image processing apparatus 700 need not be a single electronic device, but may be any device or aggregate of circuits capable of executing the above-described instructions (or instruction sets), singly or in combination. The image processing apparatus 700 may also be part of an integrated control system or system manager, or may be configured as a portable electronic device that interfaces locally or remotely (e.g., via wireless transmission).
In image processing apparatus 700, processing component 701 may comprise a Central Processing Unit (CPU), a Graphics Processor (GPU), a programmable logic device, a special purpose processor system, a microcontroller, or a microprocessor. By way of example, and not limitation, processing component 701 may also include an analog processor, a digital processor, a microprocessor, a multi-core processor, a processor array, a network processor, and so forth.
The processing component 701 may execute instructions or code stored in a memory, wherein the memory 705 may also store data. Instructions and data may also be transmitted and received over a network via network interface 703, wherein network interface 703 may employ any known transmission protocol.
The memory 705 may be integrated with the processor, for example, as RAM or flash memory disposed within an integrated circuit microprocessor or the like. In addition, the memory 705 may include a stand-alone device, such as an external disk drive, a storage array, or any other storage device that may be used by a database system. The memory and the processor may be operatively coupled or may communicate with each other, for example, through an I/O port, a network connection, etc., such that the processor is able to read files stored in the memory.
According to embodiments of the present disclosure, an electronic device may be provided. Fig. 8 is a block diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 800 may include at least one memory 802 and at least one processor 801, the at least one memory 802 storing a set of computer-executable instructions that, when executed by the at least one processor 801, perform an image processing method according to various embodiments of the present disclosure.
Processor 801 may include a Central Processing Unit (CPU), a Graphics Processor (GPU), a programmable logic device, a special purpose processor system, a microcontroller, or a microprocessor. By way of example, and not limitation, processor 801 may also include an analog processor, a digital processor, a microprocessor, a multi-core processor, a processor array, a network processor, and the like.
The memory 802, which is one type of storage medium, may include an operating system, a data storage module, a network communication module, a user interface module, an image processing program, and a database.
The memory 802 may be integrated with the processor 801, for example, as RAM or flash memory disposed within an integrated circuit microprocessor or the like. In addition, the memory 802 may include a stand-alone device, such as an external disk drive, a storage array, or any other storage device that may be used by a database system. The memory 802 and the processor 801 may be operatively coupled or may communicate with each other, for example, through an I/O port, a network connection, etc., such that the processor 801 is able to read files stored in the memory 802.
In addition, the electronic device 800 may also include a video display (such as a liquid crystal display) and a user interaction interface (such as a keyboard, mouse, touch input device, etc.). All components of the electronic device 800 may be connected to each other via a bus and/or a network.
By way of example, the electronic device 800 may be a PC, a tablet device, a personal digital assistant, a smart phone, or another device capable of executing the above-described set of instructions. Here, the electronic device 800 need not be a single electronic device, but may be any apparatus or collection of circuits capable of executing the above-described instructions (or instruction set), individually or in combination. The electronic device 800 may also be part of an integrated control system or system manager, or may be configured as a portable electronic device that interfaces locally or remotely (e.g., via wireless transmission).
It will be appreciated by those skilled in the art that the structure shown in fig. 8 is not limiting and may include more or fewer components than shown, or certain components may be combined, or a different arrangement of components.
According to an embodiment of the present disclosure, there may also be provided a computer-readable storage medium storing instructions, wherein the instructions, when executed by at least one processor, cause the at least one processor to perform an image processing method according to the present disclosure. Examples of the computer-readable storage medium here include: read-only memory (ROM), programmable read-only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random-access memory (DRAM), static random-access memory (SRAM), flash memory, nonvolatile memory, CD-ROM, CD-R, CD+R, CD-RW, CD+RW, DVD-ROM, DVD-R, DVD+R, DVD-RW, DVD+RW, DVD-RAM, BD-ROM, BD-R, BD-R LTH, BD-RE, Blu-ray or optical disk storage, hard disk drives (HDD), solid-state drives (SSD), card memory (such as multimedia cards, Secure Digital (SD) cards, or eXtreme Digital (XD) cards), magnetic tape, floppy disks, magneto-optical data storage devices, and any other device configured to store a computer program and any associated data, data files, and data structures in a non-transitory manner and to provide the computer program and any associated data, data files, and data structures to a processor or computer so that the processor or computer can execute the program. The computer program in the computer-readable storage medium can run in an environment deployed in computer equipment such as a client, a host, a proxy device, or a server. Further, in one example, the computer program and any associated data, data files, and data structures may be distributed across networked computer systems so that they are stored, accessed, and executed in a distributed fashion by one or more processors or computers.
In accordance with embodiments of the present disclosure, there may also be provided a computer program product in which instructions are executable by a processor of a computer device to perform the above-described image processing method.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It is to be understood that the present disclosure is not limited to the precise arrangements and instrumentalities shown in the drawings, and that various modifications and changes may be effected without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (22)
1. An image processing method, comprising:
obtaining an image to be edited;
determining at least one tone of the image to be edited;
acquiring a region corresponding to the at least one tone from the image to be edited; and
performing tone conversion on the acquired region,
wherein the step of determining at least one tone of the image to be edited comprises: converting the image to be edited into an HSV image; performing tone classification on pixels in the HSV image according to a predefined first tone space; and determining the at least one tone based on the number of pixels included in each tone,
wherein the step of acquiring a region corresponding to the at least one tone from the image to be edited comprises: extracting a region corresponding to the at least one tone from the image to be edited according to a predefined second tone space,
wherein the first tone space and the second tone space are defined based on the HSV color space, and the first tone space and the second tone space respectively include a range of values of a plurality of tones and a range of values of saturation and brightness corresponding to each tone,
wherein the range of values of saturation and brightness corresponding to each tone is set based on hyperparameters, and the hyperparameters in the first tone space and the second tone space are set differently.
2. The image processing method according to claim 1, wherein the step of determining the at least one tone based on the number of pixels included in each tone includes:
performing tone sorting in descending order of the number of pixels included in each tone, and determining the first at least one tone in the tone sorting as the at least one tone; or
determining the at least one tone according to the ratio of the number of pixels included in each tone to the total number of pixels of the image to be edited and the saturation value corresponding to each tone.
3. The image processing method according to claim 1, wherein the step of determining at least one tone of the image to be edited comprises:
clustering the RGB pixels of the image to be edited;
sorting the categories in descending order of the number of pixels included in each category;
converting the RGB values of the cluster center points of the top at least one category in the sorting into HSV values; and
determining the at least one tone based on the predefined first tone space and the converted HSV values.
4. The image processing method according to claim 1, further comprising:
receiving user input;
and performing tone conversion on the acquired region according to the user input.
5. The image processing method of claim 4, wherein the user input comprises at least one of a first user input and a second user input,
wherein the first user input is used to set a target hue and the second user input is used to set a tone conversion degree, wherein the tone conversion degree represents the percentage of the area of the image to be edited that is to be tone-converted.
6. The image processing method according to claim 5, wherein the step of tone transforming the acquired region according to the user input includes:
determining a region to be subjected to tone conversion of the acquired region based on the tone conversion degree; and/or
converting the hue of the determined region to be tone-converted into the target hue.
7. The image processing method according to claim 1, wherein, in the case where a human body is included in the acquired region, the step of performing tone conversion on the acquired region includes:
extracting an exposed skin area of the human body in the region by using a skin detection algorithm; and
retaining the original tone for the exposed skin area of the human body.
8. The image processing method according to claim 6, wherein the step of determining the region to be subjected to tone conversion of the acquired region based on the degree of tone conversion comprises:
determining the number N of tone regions to be tone-converted among a plurality of tone regions included in the acquired region according to the tone conversion degree, wherein N is greater than or equal to 1;
performing tone sorting in descending order of the number of pixels included in each of the plurality of tone regions; and
determining the top N tone regions in the tone sorting as the regions to be tone-converted.
9. The image processing method according to claim 8, wherein the step of determining the number N of tone regions to be tone-converted among the plurality of tone regions according to the degree of tone conversion includes:
in the case where the tone conversion degree is less than or equal to a first value, determining the number N as 1;
in the case where the tone conversion degree is greater than or equal to a second value, determining the number N as the number of all tone regions; and
in the case where the tone conversion degree is greater than the first value and less than the second value, determining the number N as the number of tone regions counted, from the front of the tone sorting, when the cumulative sum of the ratios of the number of pixels included in each tone region to the number of effective pixels in the image to be edited first becomes greater than or equal to the tone conversion degree.
10. The image processing method according to claim 9, wherein in a case where the image to be edited does not include a human body, the effective pixels include all pixels in the image to be edited;
in the case where the image to be edited includes a human body, the effective pixels include the pixels other than the pixels of the exposed skin area of the human body in the image to be edited.
11. An image processing apparatus, comprising:
the acquisition module is configured to acquire an image to be edited;
a processing module configured to:
determining at least one hue of the image to be edited;
acquiring a region corresponding to the at least one tone from the image to be edited; and
performing tone conversion on the acquired region,
wherein the processing module is configured to: converting the image to be edited into an HSV image; performing tone classification on pixels in the HSV image according to a predefined first tone space; determining the at least one tone based on the number of pixels included in each tone; and extracting a region corresponding to the at least one tone from the image to be edited according to the predefined second tone space,
wherein the first tone space and the second tone space are defined based on the HSV color space, and the first tone space and the second tone space respectively include a range of values of a plurality of tones and a range of values of saturation and brightness corresponding to each tone,
wherein the range of values of saturation and brightness corresponding to each tone is set based on hyperparameters, and the hyperparameters in the first tone space and the second tone space are set differently.
12. The image processing apparatus of claim 11, wherein the processing module is configured to:
performing tone sorting in descending order of the number of pixels included in each tone, and determining the first at least one tone in the tone sorting as the at least one tone; or
determining the at least one tone according to the ratio of the number of pixels included in each tone to the total number of pixels of the image to be edited and the saturation value corresponding to each tone.
13. The image processing apparatus of claim 11, wherein the processing module is configured to:
clustering the RGB pixels of the image to be edited;
sorting the categories in descending order of the number of pixels included in each category;
converting the RGB values of the cluster center points of the top at least one category in the sorting into HSV values; and
determining the at least one tone based on the predefined first tone space and the converted HSV values.
14. The image processing apparatus of claim 11, further comprising a user input module configured to receive user input,
wherein the processing module is configured to tone transform the acquired region in accordance with the user input.
15. The image processing apparatus of claim 14, wherein the user input comprises at least one of a first user input and a second user input,
wherein the first user input is used to set a target hue and the second user input is used to set a tone conversion degree, wherein the tone conversion degree represents the percentage of the area of the image to be edited that is to be tone-converted.
16. The image processing apparatus of claim 15, wherein the processing module is configured to:
determining a region to be subjected to tone conversion of the acquired region based on the tone conversion degree; and/or
converting the hue of the determined region to be tone-converted into the target hue.
17. The image processing apparatus according to claim 11, wherein in the case where a human body is included in the acquired region, the processing module is configured to:
extracting an exposed skin area of the human body in the region by using a skin detection algorithm; and
retaining the original tone for the exposed skin area of the human body.
18. The image processing apparatus of claim 17, wherein the processing module is configured to:
determining the number N of tone regions to be tone-converted among a plurality of tone regions included in the acquired region according to the tone conversion degree, wherein N is greater than or equal to 1;
performing tone sorting in descending order of the number of pixels included in each of the plurality of tone regions; and
determining the top N tone regions in the tone sorting as the regions to be tone-converted.
19. The image processing apparatus of claim 18, wherein the processing module is configured to:
in the case where the tone conversion degree is less than or equal to a first value, determining the number N as 1;
in the case where the tone conversion degree is greater than or equal to a second value, determining the number N as the number of all tone regions; and
in the case where the tone conversion degree is greater than the first value and less than the second value, determining the number N as the number of tone regions counted, from the front of the tone sorting, when the cumulative sum of the ratios of the number of pixels included in each tone region to the number of effective pixels in the image to be edited first becomes greater than or equal to the tone conversion degree.
20. The image processing apparatus according to claim 19, wherein in a case where the image to be edited does not include a human body, the effective pixels include all pixels in the image to be edited;
in the case where the image to be edited includes a human body, the effective pixels include the pixels other than the pixels of the exposed skin area of the human body in the image to be edited.
21. An electronic device, comprising:
a processor;
a memory for storing instructions executable by the processor;
wherein the processor is configured to execute the instructions to implement the image processing method of any one of claims 1 to 10.
22. A computer-readable storage medium storing instructions which, when executed by a processor of an electronic device, cause the electronic device to perform the image processing method of any one of claims 1 to 10.
Priority Applications (2)

| Application Number | Priority Date | Filing Date | Title |
| --- | --- | --- | --- |
| CN202110099560.7A (CN112950453B) | 2021-01-25 | 2021-01-25 | Image processing method and image processing apparatus |
| PCT/CN2021/112857 (WO2022156196A1) | 2021-01-25 | 2021-08-16 | Image processing method and image processing apparatus |
Publications (2)

| Publication Number | Publication Date |
| --- | --- |
| CN112950453A | 2021-06-11 |
| CN112950453B | 2023-10-20 |
Also Published As

| Publication Number | Publication Date |
| --- | --- |
| WO2022156196A1 | 2022-07-28 |
| CN112950453A | 2021-06-11 |
Legal Events

| Date | Code | Title |
| --- | --- | --- |
| | PB01 | Publication |
| | SE01 | Entry into force of request for substantive examination |
| | GR01 | Patent grant |