CN112950453A - Image processing method and image processing apparatus

Info

Publication number
CN112950453A
CN112950453A
Authority
CN
China
Prior art keywords
tone
image
edited
hue
image processing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202110099560.7A
Other languages
Chinese (zh)
Other versions
CN112950453B (en)
Inventor
王伟农
戴宇荣
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Dajia Internet Information Technology Co Ltd
Original Assignee
Beijing Dajia Internet Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Dajia Internet Information Technology Co Ltd filed Critical Beijing Dajia Internet Information Technology Co Ltd
Priority to CN202110099560.7A priority Critical patent/CN112950453B/en
Publication of CN112950453A publication Critical patent/CN112950453A/en
Priority to PCT/CN2021/112857 priority patent/WO2022156196A1/en
Application granted granted Critical
Publication of CN112950453B publication Critical patent/CN112950453B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G06T3/04
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/90Determination of colour characteristics
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/643Hue control means, e.g. flesh tone control
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence

Abstract

The present disclosure relates to an image processing method and an image processing apparatus. The image processing method may include the steps of: obtaining an image to be edited; determining at least one tone of the image to be edited; acquiring a region corresponding to the at least one tone from the image to be edited; and performing tone conversion on the acquired region. The present disclosure may enable tone transformation of a specific region in an image.

Description

Image processing method and image processing apparatus
Technical Field
The present disclosure relates to the field of image processing, and in particular, to an image processing method and an image processing apparatus for tone conversion.
Background
The user may change the style of the current image or video by changing its tint. Currently, tone conversion of an image or video is generally performed on the entire image or video. For example, the color of all regions in a video may be faded over a predetermined period of time (such as 15 seconds or 60 seconds), cycling monotonously through different shades. In addition, when a person appears in the picture, tone conversion can be applied only to the rest of the scene while the entire human-body region is kept unchanged.
Disclosure of Invention
The present disclosure provides an image processing method and an image processing apparatus to solve at least the problem of the monotonous form of tone conversion in the related art. The technical scheme of the disclosure is as follows:
according to a first aspect of embodiments of the present disclosure, there is provided an image processing method, which may include: obtaining an image to be edited; determining at least one tone of the image to be edited; acquiring a region corresponding to the at least one tone from the image to be edited; and performing tone conversion on the acquired region.
Optionally, the at least one tone includes at least one of a dominant tone, a designated tone, and an overall tone of the image to be edited.
Optionally, the step of determining at least one color tone of the image to be edited may include: converting the image to be edited into an HSV image; carrying out tone classification on pixels in the HSV image according to a predefined first tone space; the at least one hue is determined based on the number of pixels included in each hue.
Optionally, the step of acquiring a region corresponding to the at least one color tone from the image to be edited may include: extracting a region corresponding to the at least one tone from the image to be edited according to a predefined second tone space.
Alternatively, the first and second hue spaces may be defined based on the HSV color space, wherein the first and second hue spaces may include value ranges of a plurality of hues and saturation and brightness corresponding to each hue, respectively.
Alternatively, the value ranges of the saturation and brightness corresponding to each hue may be set based on the super-parameter, wherein the super-parameter in the first and second hue spaces may be set differently.
Alternatively, the step of determining the at least one tone based on the number of pixels included in each tone may include: sorting the tones in descending order of the number of pixels included in each tone, and determining the first at least one tone in the sorting as the at least one tone; or determining the at least one tone according to the ratio of the number of pixels included in each tone to the total number of pixels of the image to be edited and the saturation value corresponding to each tone.
Optionally, the step of determining at least one color tone of the image to be edited may include: clustering the RGB pixels of the image to be edited; sorting each category according to the sequence of the number of pixels included in each category from large to small; converting the RGB value of the clustering center point of the first at least one category in the sequence into HSV value; the at least one hue is determined based on a predefined first hue space and the converted HSV value.
Optionally, the image processing method may further include: receiving a user input; and carrying out tone conversion on the acquired region according to the user input.
Optionally, the user input may comprise at least one of a first user input and a second user input, wherein the first user input may be used to set a target hue and the second user input may be used to set a degree of hue transformation, wherein the degree of hue transformation represents a percentage of an area in the image to be edited that is to be subjected to hue transformation.
Optionally, the step of performing tone transformation on the acquired region according to the user input may include: determining a region to be subjected to tone conversion in the acquired region based on the tone conversion degree; and/or transforming the tone of the determined region to be tone-transformed into the target tone.
Alternatively, in the case where the acquired region includes a human body, the step of performing the tone transformation on the acquired region may include: extracting a human body naked skin area in the area by using a skin detection algorithm; and reserving original color tone for the naked skin area of the human body.
Alternatively, the step of determining a region to be subjected to the tone conversion in the acquired region based on the tone conversion degree may include: determining the number N of tone regions to be tone-transformed among a plurality of tone regions included in the acquired region, according to the tone transformation degree, wherein N is greater than or equal to 1; carrying out tone sequencing according to the sequence of the number of pixels included in each tone area in the plurality of tone areas from large to small; and determining the first N tone regions in tone ordering as the regions to be subjected to tone conversion.
Alternatively, the step of determining the number N of tone regions to be tone-transformed among the plurality of tone regions according to the tone transformation degree may include: determining the number N to be 1 in a case where the tone transformation degree is less than or equal to a first value; determining the number N as the number of all tone regions in a case where the tone transformation degree is greater than or equal to a second value; and, in a case where the tone transformation degree is greater than the first value and less than the second value, determining the number N as follows: calculating, in tone-sorted order starting from the first tone region, the proportion of the number of pixels included in each tone region to the number of effective pixels in the image to be edited, until the sum of the proportions is greater than or equal to the tone transformation degree, and taking the number of tone regions accumulated at that point as N.
Optionally, in a case that the image to be edited does not include a human body, the effective pixels include all pixels in the image to be edited; in the case that the image to be edited includes a human body, the effective pixels include pixels in the image to be edited other than pixels of a naked skin area of the human body.
According to a second aspect of the embodiments of the present disclosure, there is provided an image processing apparatus, which may include: an acquisition module configured to obtain an image to be edited; and a processing module configured to: determining at least one tone of the image to be edited; acquiring a region corresponding to the at least one tone from the image to be edited; and performing tone conversion on the acquired region.
Optionally, the processing module may be configured to convert the image to be edited into an HSV image; carrying out tone classification on pixels in the HSV image according to a predefined first tone space; the at least one hue is determined based on the number of pixels included in each hue.
Alternatively, the processing module may be configured to extract a region corresponding to the at least one tone from the image to be edited according to a predefined second tone space.
Alternatively, the first and second hue spaces may be defined based on the HSV color space, wherein the first and second hue spaces may include value ranges of a plurality of hues and saturation and brightness corresponding to each hue, respectively.
Alternatively, the value ranges of the saturation and brightness corresponding to each hue may be set based on the super-parameter, wherein the super-parameter in the first and second hue spaces may be set differently.
Optionally, the processing module may be configured to sort the tones in descending order of the number of pixels included in each tone and determine the first at least one tone in the sorting as the at least one tone; or to determine the at least one tone according to the ratio of the number of pixels included in each tone to the total number of pixels of the image to be edited and the saturation value corresponding to each tone.
Optionally, the processing module may be configured to: clustering the RGB pixels of the image to be edited; sorting each category according to the sequence of the number of pixels included in each category from large to small; converting the RGB value of the clustering center point of the first at least one category in the sequence into HSV value; the at least one hue is determined based on a predefined first hue space and the converted HSV value.
Optionally, the image processing apparatus may further include a user input module configured to receive a user input, wherein the processing module is configured to perform a tone transformation on the acquired region according to the user input.
Optionally, the user input may comprise at least one of a first user input for setting a target hue and a second user input for setting a degree of hue transformation, wherein the degree of hue transformation represents a percentage of an area in the image to be edited, which is to be subjected to hue transformation.
Optionally, the processing module may be configured to: determining a region to be subjected to tone conversion in the acquired region based on the tone conversion degree; and/or transforming the tone of the determined region to be tone-transformed into the target tone.
Optionally, in the case that the acquired region includes a human body, the processing module may be configured to extract a human body bare skin region in the region using a skin detection algorithm; and reserving original color tone for the naked skin area of the human body.
Optionally, the processing module may be configured to: determining the number N of tone regions to be tone-transformed among a plurality of tone regions included in the acquired region, according to the tone transformation degree, wherein N is greater than or equal to 1; carrying out tone sequencing according to the sequence of the number of pixels included in each tone area in the plurality of tone areas from large to small; and determining the first N tone regions in tone ordering as the regions to be subjected to tone conversion.
Optionally, the processing module may be configured to: determine the number N to be 1 in a case where the tone transformation degree is less than or equal to a first value; determine the number N as the number of all tone regions in a case where the tone transformation degree is greater than or equal to a second value; and, in a case where the tone transformation degree is greater than the first value and less than the second value, determine the number N by calculating, in tone-sorted order starting from the first tone region, the proportion of the number of pixels included in each tone region to the number of effective pixels in the image to be edited, until the sum of the proportions is greater than or equal to the tone transformation degree, the number of tone regions accumulated at that point being N.
Optionally, in a case that the image to be edited does not include a human body, the effective pixels include all pixels in the image to be edited; in the case that the image to be edited includes a human body, the effective pixels include pixels in the image to be edited other than pixels of a naked skin area of the human body.
According to a third aspect of the embodiments of the present disclosure, there is provided an electronic apparatus, which may include: at least one processor; at least one memory storing computer-executable instructions, wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform the image processing method as described above.
According to a fourth aspect of embodiments of the present disclosure, there is provided a computer-readable storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform the image processing method as described above.
According to a fifth aspect of embodiments of the present disclosure, there is provided a computer program product, instructions of which are executed by at least one processor in an electronic device to perform the image processing method as described above.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
the image processing scheme provided by the present disclosure can intelligently analyze the dominant-tone regions or interactively specified regions in an image and then transform those regions to a specified tone; it can also transform the entire image to a specified tone in a normalized manner. Meanwhile, in a scene containing a person, the skin area of the human body can be protected while the tone of regions on the human body, such as clothes and backpacks, is changed. In addition, a user interaction function is provided: the user can set the desired target tone, which greatly improves the user experience.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
FIG. 1 is a flow diagram illustrating an image processing method according to an exemplary embodiment.
Fig. 2 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Fig. 3 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Fig. 4 is a flowchart illustrating an image processing method according to another exemplary embodiment.
FIG. 5 is a flowchart illustrating an image processing method according to an exemplary embodiment.
Fig. 6 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment.
Fig. 7 is a schematic configuration diagram illustrating an image processing apparatus according to an exemplary embodiment.
FIG. 8 is a block diagram illustrating an electronic device in accordance with an example embodiment.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
The following description with reference to the accompanying drawings is provided to assist in a comprehensive understanding of the embodiments of the disclosure as defined by the claims and their equivalents. Various specific details are included to aid understanding, but these are to be considered exemplary only. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. In addition, descriptions of well-known functions and constructions are omitted for clarity and conciseness.
The terms and words used in the following description and claims are not limited to the written meaning, but are used only by the inventors to achieve a clear and consistent understanding of the disclosure. Accordingly, it should be apparent to those skilled in the art that the following descriptions of the various embodiments of the present disclosure are provided for illustration only and not for the purpose of limiting the disclosure as defined by the appended claims and their equivalents.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The implementations described in the exemplary embodiments below are not intended to represent all implementations consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the present disclosure, as detailed in the appended claims.
Although the related art achieves a color-changing effect on an image, it only applies a preset tone change to all color regions (excluding human parts) in the image; it cannot apply a tone change only to regions having a specific tone (such as the dominant-tone region of the image), cannot let the user specify the desired target tone, and cannot transform all regions in the image to a specified tone. Further, although the related art preserves the tone of all human-body regions in the image and changes the tone only of regions not containing a human body, this prevents tone changes in regions on the human body such as clothes and backpacks.
Based on the above situation, the present disclosure has been made, and provides a scheme for tone conversion of an image or video by comprehensively using image processing, statistical analysis, machine learning, and the like. The scheme can complete the functions of dominant tone extraction, dominant tone transformation and tone normalization transformation in a single-frame image aiming at multiple scenes (such as people or no people). Dominant hue transformation refers to the intelligent analysis of one or more significant hues in an image, and then only transforming regions with such hues to a specified hue, without noticeable, abrupt artifacts. The tone normalization transformation refers to intelligent analysis of an image, and the transformation of areas with various tones in the image into a certain specified tone without obvious and abrupt artifacts.
The method can analyze the dominant-tone region or an interactively specified region in an image and transform the tone of such regions to a specified tone, and can also transform the image as a whole to a specified tone. Meanwhile, the skin of the human body can be protected (that is, the skin portion of the image keeps its original appearance) while the tones of regions on the human body, such as clothes and backpacks, are transformed, thereby providing various tone transformation effects in a flexible and versatile manner.
Hereinafter, a method, an apparatus, and a device of the present disclosure will be described in detail with reference to the accompanying drawings, according to various embodiments of the present disclosure.
FIG. 1 is a flow diagram illustrating an image processing method, as shown in FIG. 1, that may be used for tone transformation of an image or video, according to an example embodiment. The method shown in fig. 1 may be performed by any electronic device having an image processing function. The electronic device may be a device comprising at least one of: for example, smart phones, tablet Personal Computers (PCs), mobile phones, video phones, electronic book readers (e-book readers), desktop PCs, laptop PCs, netbook computers, workstations, servers, Personal Digital Assistants (PDAs), Portable Multimedia Players (PMPs), cameras, wearable devices, and the like.
Referring to fig. 1, in step S11, an image to be edited is obtained. Here, the image to be edited may be a photograph or a single frame image extracted from a video.
In step S12, at least one hue of the image to be edited is determined. The at least one hue may be a dominant hue in the image or may comprise a dominant hue as well as other kinds of prominent hues. Alternatively, the determined at least one hue may be a user-specified hue, or include any combination of the above-mentioned hue classes. The above examples are merely illustrative, and the present disclosure is not limited thereto.
A dominant hue may refer to a hue that is prominent in an image to be edited. There may be one prominent tone or a plurality of prominent tones in an image. Further, the number of tones may also be set according to user input.
When determining at least one tone in the image to be edited, the image to be edited may be converted into an HSV image, pixels in the HSV image may be tone-classified according to a predefined first tone space, and then at least one tone may be determined based on a number of pixels included in each tone. The first hue space may be predefined for use in a saliency hue analysis. The first hue space may be defined based on the HSV color space. For example, the first hue space may include a plurality of hues, a range of values for each hue, and a range of values for saturation and brightness corresponding to each hue.
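As a non-authoritative sketch of this step (the patent itself contains no code), the following Python/OpenCV fragment converts an image to HSV, classifies pixels by hue, and counts them per hue. The hue ranges are placeholder assumptions standing in for the first tone space of Table 1 below; the patent's space also constrains saturation and brightness, which are omitted here for brevity.

```python
import cv2
import numpy as np

# Placeholder first hue space: illustrative OpenCV hue ranges (H in [0, 180]);
# the patent's actual ranges are defined in Table 1 and may differ.
FIRST_HUE_SPACE = {
    "kRed":    (0, 10),
    "kOrange": (11, 25),
    "kYellow": (26, 34),
    "kGreen":  (35, 77),
    "kCyan":   (78, 99),
    "kBlue":   (100, 124),
    "kPurple": (125, 155),
}

def determine_hues(bgr_image, top_g=1):
    """Convert to HSV, classify pixels by hue, return the G most frequent hues."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    h = hsv[..., 0]
    counts = {name: int(np.count_nonzero((h >= lo) & (h <= hi)))
              for name, (lo, hi) in FIRST_HUE_SPACE.items()}
    return sorted(counts, key=counts.get, reverse=True)[:top_g]
```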
Hyper-parameters may be set for the value ranges of the saturation and brightness corresponding to each tone, so that some saturation and brightness regions can be ignored and the subsequently tone-converted image or video transitions naturally.
The pixels in the image to be edited can be classified according to the value range of each tone and the corresponding value ranges of saturation and brightness; the tones are then sorted in descending order of the number of pixels included in each tone, and the first at least one tone in the sorting is determined as the at least one tone. For example, the first tone in the sorting may be determined as the at least one tone, or the top several tones in the sorting may be determined as the at least one tone.
As another example, after classifying the pixels in the image to be edited, at least one tone may be determined according to the ratio of the number of pixels included in each tone to the total number of pixels of the image to be edited together with the saturation value corresponding to each tone. For example, the ratio of the number of pixels included in each tone to the total number of pixels may be calculated, along with the average saturation of the region corresponding to each tone; the ratio of each tone may then be multiplied by (or added to) the average saturation of that tone, and the larger the result, the more prominent the tone. At least one tone can therefore be determined from the multiplication or addition results.
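A minimal sketch of this scoring variant, reusing the hsv array and per-hue boolean masks from the previous sketch; the multiplicative combination shown is one of the two options the text mentions.

```python
def hue_saliency(hsv, hue_mask):
    """Score = (pixel ratio of the hue) * (mean saturation of its region)."""
    ratio = hue_mask.mean()  # boolean mask: mean() is the pixel fraction
    mean_sat = float(hsv[..., 1][hue_mask].mean()) if hue_mask.any() else 0.0
    return ratio * mean_sat
```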
Optionally, when determining at least one tone in the image to be edited, the RGB pixels of the image to be edited may be clustered, the categories sorted in descending order of the number of pixels included in each category, the RGB value of the cluster center point of the first at least one category in the sorting converted into an HSV value, and the at least one tone determined based on a predefined first tone space and the converted HSV value.
For example, the cluster center points of the Q most salient categories, denoted [R_{s-i}, G_{s-i}, B_{s-i}] (i = 0, 1, …, Q-1), may be converted into the corresponding HSV values, denoted [H_{s-i}, S_{s-i}, V_{s-i}] (i = 0, 1, …, Q-1); the tone to which each cluster center point belongs is then determined from the first tone space. For example, if the RGB value of the cluster center of class j is [100, 50, 150], the corresponding HSV value is [135, 170, 50], and the corresponding tone is determined from the first tone space according to this HSV value. However, the above examples are merely exemplary, and the present disclosure is not limited thereto.
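Sketching this center-to-tone lookup under the same placeholder hue space as above (the helper name and ranges are illustrative, not the patent's):

```python
def hue_of_rgb(rgb_triple):
    """Map one RGB triple (e.g., a cluster center) to a hue name, or None."""
    pixel = np.uint8([[rgb_triple]])                  # 1x1 "image"
    h, s, v = cv2.cvtColor(pixel, cv2.COLOR_RGB2HSV)[0, 0]
    for name, (lo, hi) in FIRST_HUE_SPACE.items():
        if lo <= h <= hi:
            return name
    return None
```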
In step S13, a region corresponding to at least one color tone is acquired from the image to be edited. A region corresponding to at least one tone may be extracted from the image to be edited according to a predefined second tone space. After the tones are determined, the corresponding region in the image to be edited may be extracted based on the value range of each tone in a second tone space different from the first tone space.
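The extraction in step S13 can be sketched as a mask over the hue channel; the second tone space ranges would again be placeholders in the spirit of Table 2 below.

```python
def extract_hue_region(hsv, hue_range):
    """Boolean mask of pixels whose hue falls in the given [lo, hi] range."""
    lo, hi = hue_range
    return (hsv[..., 0] >= lo) & (hsv[..., 0] <= hi)
```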
In step S14, the acquired region is subjected to tone conversion. In the tone conversion, a tone to be converted may be specified according to a user input. By receiving a user input for setting a target tone, the tone of the acquired region can be transformed into a target tone desired by the user.
In addition, in the case where a human body is included in the image to be edited, the tone of the region of the naked skin of the human body in the image may be retained, while the other regions are subjected to tone conversion. For example, a skin detection algorithm may be used to extract a human body bare skin area in an image to be edited, and the original hue is retained for the extracted human body bare skin area, while the hue of an area on the human body, such as a garment or a backpack, is transformed.
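One plausible way to realize this skin protection, assuming a boolean skin mask produced by any skin detection algorithm (the detector itself is out of scope here):

```python
def protect_skin(original_bgr, transformed_bgr, skin_mask):
    """Composite: keep the original pixels wherever bare skin was detected."""
    out = transformed_bgr.copy()
    out[skin_mask] = original_bgr[skin_mask]
    return out
```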
The present disclosure may enable transforming a tonal saliency area in an image or video to a specified tone, or transforming the entire image to a specified tone, or transforming a specified area in an image to a specified tone.
Fig. 2 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Referring to fig. 2, in step S21, an image to be edited is obtained. When a video is acquired, each frame image of the video may be extracted.
In step S22, the image to be edited is converted into an HSV image. The image to be edited may be converted into an HSV image using an HSV conversion algorithm. After the HSV conversion, each pixel of the HSV image may be represented by a hue H, a saturation S, and a brightness V.
In step S23, pixels in the HSV image are tone-classified according to a predefined tone space.
According to embodiments of the present disclosure, a hue space may be defined based on an HSV color space. The hue space may include a range of values for a plurality of hues and a range of values for saturation and brightness corresponding to each hue. In addition, the value ranges of the saturation and the brightness corresponding to each hue are respectively provided with a hyper-parameter. The setting of the hyper-parameters can make the image after tone conversion more natural to a certain extent.
As an example, the hue space may include 10 hues, such as black, gray, white, red, orange, yellow, green, cyan, blue, and violet, each with a corresponding value range to distinguish it from the others. For example, the hue space may be represented by the following Table 1.
TABLE 1
(Table 1 appears as an image in the original publication: for each of the ten hues it lists the hue range [Hmin, Hmax] and the corresponding saturation range [Smin, Smax] and brightness range [Vmin, Vmax].)
In Table 1, kBlack, kGray, kWhite, kRed, kOrange, kYellow, kGreen, kCyan, kBlue, and kPurple represent the black, gray, white, red, orange, yellow, green, cyan, blue, and violet hues, in that order. [Hmin, Hmax] represents the value range of hue (Hmin >= 0; Hmin <= Hmax; Hmax <= 180); [Smin, Smax] represents the value range of saturation (Smin >= 0; Smin <= Smax; Smax <= 255); [Vmin, Vmax] represents the value range of brightness (Vmin >= 0; Vmin <= Vmax; Vmax <= 255). Ks and Kv are hyper-parameters that may be set according to the actual situation (-211 <= Ks <= 44; -208 <= Kv <= 47); Ks and Kv allow some saturation and brightness pixels to be ignored, e.g., Ks may be set to 33 and Kv to 1. However, the above examples are merely exemplary, and the present disclosure is not limited thereto.
The pixels of the HSV image may be classified according to a span of each hue in the hue space. For example, according to the tone space of table 1 described above, the pixels of the image to be edited can be classified into 10 categories.
In step S24, the dominant hue of the image to be edited is determined. Here, the dominant hue may mean that the hue is more prominent in an image. After the pixel classification, the number of pixels included in each tone may be calculated, and then tone sorting may be performed in order of the number of pixels included in each classified tone from large to small, and the first tone in the tone sorting may be determined as a dominant tone.
As an example, after the 10 hues are obtained using Table 1, the number of pixels occupied by each of the 10 hues can be calculated; the larger the number of pixels, the more prominent the hue region, and the pixels of the top G most prominent categories (here G <= 10) can be selected as the dominant hues. Alternatively, when performing the saliency hue analysis, the three hues kBlack, kGray, and kWhite may be eliminated, i.e., the saliency hue ordering is performed over the remaining seven hues, in which case G <= 7.
As another example, the dominant hue may be determined according to the ratio of the number of pixels included in each hue to the total number of pixels of the image to be edited together with the saturation value corresponding to each hue. For example, the dominant hue may be selected by combining the pixel-count ratio of each hue with the average saturation corresponding to that hue, such as multiplying the two, with a larger value representing a more prominent hue; or the pixel-count ratio and the average saturation may be added, and the larger the sum, the more prominent the hue. However, the above examples are merely exemplary, and the present disclosure is not limited thereto.
In addition, the dominant hue can also be determined by means of a priori knowledge, algorithmic processing, statistical analysis, and the like.
In step S25, a region corresponding to the determined dominant hue is acquired from the image to be edited. The region corresponding to the determined dominant hue may be extracted from the image to be edited using a second hue space, which may be predefined.
After the dominant hue is determined, a corresponding region in the image to be edited may be extracted based on the value range of each hue in a second hue space different from the first hue space.
As an example, when performing the hue region division, the value range of each hue in Table 2 may be used to extract the corresponding region in the image to be edited. Compared with Table 1, Table 2 removes the three hues kBlack, kGray, and kWhite and divides regions by hue angle (in other words, Table 2 corresponds to Table 1 with Ks = 44 and Kv = 47), so that noticeably unnatural boundary transitions can be avoided.
TABLE 2
(Table 2 appears as an image in the original publication: it lists the hue, saturation, and brightness value ranges of the seven remaining hues used for region extraction, with kBlack, kGray, and kWhite removed.)
In step S26, the acquired region is subjected to tone conversion. The corresponding region may be transformed into a target tone by a user inputting a target tone value. Alternatively, the acquired region may be converted into a preset color tone.
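A sketch of the transformation itself in HSV space, reusing the mask helper above; overwriting the hue channel with the target value is one plausible reading of "transforming the region into the target tone", not necessarily the patent's exact operation.

```python
def transform_region_hue(hsv, region_mask, target_hue):
    """Overwrite the hue channel of the masked pixels with the target hue."""
    out = hsv.copy()
    out[..., 0][region_mask] = target_hue
    return out  # convert back with cv2.cvtColor(out, cv2.COLOR_HSV2BGR)
```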
In step S27, post-processing may be performed on the tone-converted image. According to an embodiment of the present disclosure, in order to smooth the transformed edge transition portion in the image to make it look more natural, the tone-transformed image may be subjected to a filtering process. In addition, in order to increase the diversity of tone transformation, the original pixel values, such as the skin color part of human body, can be reserved for some special areas.
As an example, in order to smooth the edges, the original image to be edited may be used as a reference image, and a guided filtering operation may be applied to the tone-transformed image to keep the edges smooth. Where a person is present in the image to be edited, the skin color of the human body can be protected; the skin detection may be implemented using a skin segmentation method based on an elliptical color space, a deep-learning-based skin segmentation algorithm, or the like. In addition, the image may be filtered using a smoothing algorithm such as bilateral filtering.
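A sketch of the guided-filtering post-process; it assumes the opencv-contrib build (cv2.ximgproc is not in the base OpenCV package), and the radius/eps values are illustrative only.

```python
import cv2

def smooth_transition(original_bgr, transformed_bgr, radius=8, eps=500.0):
    """Guided filter with the original image as guide, keeping edges aligned.

    Requires opencv-contrib-python; eps is on the squared 8-bit intensity
    scale here and was chosen only for illustration.
    """
    return cv2.ximgproc.guidedFilter(original_bgr, transformed_bgr, radius, eps)
```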
Fig. 3 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Referring to fig. 3, in step S31, an image to be edited is obtained.
In step S32, the RGB pixels of the image to be edited are subjected to clustering processing.
As an example, the RGB pixel values may be clustered with M as the number of cluster centers. For example, all pixels in the image to be edited can be divided into M classes using a clustering algorithm such as K-means or K-means++, as sketched below.
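A sketch of this clustering step using OpenCV's built-in K-means with K-means++ initialization (cv2.KMEANS_PP_CENTERS), matching the algorithms named above; the value of M and the termination criteria are illustrative.

```python
import cv2
import numpy as np

def cluster_pixels(rgb_image, m=8):
    """K-means over all RGB pixel values; returns per-pixel labels and centers."""
    data = rgb_image.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(data, m, None, criteria, 3,
                                    cv2.KMEANS_PP_CENTERS)
    return labels.reshape(rgb_image.shape[:2]), centers
```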
In step S33, each category is sorted in the order of the number of pixels included in each category from large to small.
In step S34, the RGB value of the cluster center point of the first category in the ranking is converted into HSV value.
In step S35, a dominant hue is determined according to the predefined first hue space and the converted HSV value. The more pixels a category contains, the more salient it is, and the cluster center of the first, most salient category in the sorting can be used.
In step S36, a region corresponding to the dominant hue is acquired from the image to be edited. The region corresponding to the dominant hue may be extracted from the image to be edited using a predefined second hue space, for example, according to Table 2 above. For example, if the kPurple hue is determined to be the dominant hue, all pixel values within the kPurple hue range in Table 2 can be extracted and denoted Region_j. However, the above examples are merely exemplary, and the present disclosure is not limited thereto.
In step S37, the acquired region is subjected to tone conversion. The corresponding region may be transformed into a target tone by a user inputting a target tone value. Alternatively, the acquired region may be converted into a preset color tone.
In step S38, post-processing may be performed on the tone-converted image. According to an embodiment of the present disclosure, in order to smooth the transformed edge transition portion in the image to make it look more natural, the tone-transformed image may be subjected to a filtering process. In addition, in order to increase the diversity of tone transformation, the original pixel values, such as the skin color part of human body, can be reserved for some special areas.
Fig. 4 is a flowchart illustrating an image processing method according to another exemplary embodiment.
Referring to fig. 4, in step S41, an image to be edited is obtained.
In step S42, at least one hue of the image to be edited is determined. Here, the at least one color tone may include a main color tone and other color tones except the main color tone. Alternatively, the at least one hue may comprise all hues in the image to be edited.
At least one hue in the image to be edited may be determined using a predefined first hue space. For example, 10 hues may be determined using the first hue space described in table 1.
In step S43, a region corresponding to at least one color tone is acquired from the image to be edited. The area corresponding to at least one of the hues determined in step S42 may be determined using a predefined second hue space. For example, the second tone space shown in table 2 can be used to perform region segmentation on the image to be edited.
In step S44, a user input is received. According to an embodiment of the present disclosure, the user input may comprise at least one of a first user input and a second user input. Here, the first user input may be used to set a target hue, and the second user input may be used to set a hue transformation degree, wherein the hue transformation degree represents a percentage of an area in the image to be edited, to which hue transformation is to be performed.
As an example, the value range of the tone transformation degree may be [0%, 100%], denoted T%, which means that at least a T-percent portion of the image to be edited will be transformed to the target hue.
In step S45, the acquired region is subjected to tone conversion based on the user input. In the case where the user input is the first user input, the acquired hue of the region may be converted into an input target hue. In the case where the user input is the second user input, a region to be subjected to tone conversion in the acquired region is determined based on the tone conversion degree, and then the tone of the determined region is converted into a specified tone. In the case where the user input includes the first user input and the second user input, a region to be subjected to the tone conversion in the acquired region may be determined based on the tone conversion degree, and then the tone of the determined region may be converted into the target tone of the input. The above examples are merely illustrative, and the present disclosure is not limited thereto.
In the case of determining the region to be subjected to the tone conversion according to the second user input, the number N of tone regions to be subjected to the tone conversion among a plurality of tone regions included in the acquired region is first determined according to the degree of the tone conversion (where N is greater than or equal to 1), tone sorting is performed in order of the number of pixels included in each of the plurality of tone regions from large to small, and the first N tone regions from the tone sorting are determined as the region to be subjected to the tone conversion.
According to an embodiment of the present disclosure, the number N may be determined in the following manner: the number N is determined to be 1 in a case where the tone transformation degree is less than or equal to a first value; the number N is determined as the number of all tone regions in a case where the tone transformation degree is greater than or equal to a second value; and, in a case where the tone transformation degree is greater than the first value and less than the second value, the proportion of the number of pixels included in each tone region to the number of effective pixels in the image to be edited is calculated in tone-sorted order starting from the first tone region, until the sum of the proportions is greater than or equal to the tone transformation degree, and the number of tone regions accumulated at that point is taken as N.
In the case where the image to be edited does not include a human body, the effective pixels include all pixels in the image to be edited. In the case that the image to be edited includes a human body, the effective pixels include pixels in the image to be edited other than pixels of a naked skin area of the human body.
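The effective-pixel count X can be sketched directly from this definition; skin_mask is a hypothetical boolean mask from a skin detector.

```python
def effective_pixel_count(height, width, skin_mask=None):
    """X = all pixels, minus bare-skin pixels when a human body is present."""
    total = height * width
    return total - int(skin_mask.sum()) if skin_mask is not None else total
```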
For example, when the tone transformation degree T < t_low (t_low has a default value of 40), N may be set to 1, indicating that the hue Hue_0 of the most prominent tone region Region_0 is transformed; the hue range of that tone is denoted [H_{0-low}, H_{0-high}]. The transformation may set the hue values of all pixels in Region_0 to the specified hue value or the target hue value.

When T > t_high (t_high has a default value of 90), N may be set to the number of all tone regions in the acquired region, indicating that the hues of all these tone regions are transformed. The transformation may set the hue values of all pixels in [Region_i] (i = 0, 1, …, N-1) to the specified hue value or the target hue value.

When t_low < T < t_high, let X be the number of effective pixels of the image to be edited (the number of effective pixels may refer to the number of pixels remaining after the pixels in the skin segmentation result are removed from the image), and let Len(Region_i) denote the number of pixels in Region_i. Then Ratio(Region_i), the proportion of Region_i in the entire image, can be expressed by the following equation (1):

Ratio(Region_i) = Len(Region_i) / X    (1)

The pixel ratios of the first n tones are accumulated in sequence until the following inequality (2) is satisfied:

Σ_{i=0}^{n-1} Ratio(Region_i) >= T%    (2)

In this case, N = n may be taken, meaning that these n tone regions are all transformed to the specified tone or the target tone. The transformation may set the hue values of all pixels in [Region_i] (i = 0, 1, …, n-1) to the specified tone or the target tone.

Furthermore, when t_low = 0 and t_high = 1, the dominant-tone transformation function and the tone normalization transformation function are no longer distinguished, and the regional tone transformation of the image is controlled entirely by the tone transformation degree T, so the value of T can be adjusted interactively by the user.

By setting the tone transformation degree in the tone transformation, the two parameters t_low and t_high realize both the dominant-tone transformation function and the tone normalization transformation function; they can also be used independently as interactive parameters for the user to adjust, thereby realizing tone transformation over a controllable proportion of the image.
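Putting equations (1) and (2) together, the choice of N can be sketched as follows; t_low and t_high use the default values 40 and 90 mentioned above, and all names are illustrative rather than the patent's.

```python
def number_of_regions(region_sizes, x_effective, t_percent,
                      t_low=40, t_high=90):
    """Choose N per the rule above.

    region_sizes: Len(Region_i), already in descending tone order.
    x_effective:  effective pixel count X.
    t_percent:    tone transformation degree T in [0, 100].
    """
    if t_percent <= t_low:
        return 1
    if t_percent >= t_high:
        return len(region_sizes)
    cumulative = 0.0
    for n, size in enumerate(region_sizes, start=1):
        cumulative += size / x_effective        # Ratio(Region_i), eq. (1)
        if cumulative >= t_percent / 100.0:     # eq. (2)
            return n
    return len(region_sizes)
```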
As an alternative embodiment, the tone-transformed image may be post-processed. According to an embodiment of the present disclosure, in order to smooth the transformed edge transition portion in the image to make it look more natural, the tone-transformed image may be subjected to a filtering process. In addition, in order to increase the diversity of tone transformation, the original pixel values, such as the skin color part of human body, can be reserved for some special areas.
As an example, therefore, in order to smooth the edge, the original image to be edited may be used as a reference image, and the image after tone conversion may be subjected to a guided filtering operation to keep the edge smooth. The method can protect the skin color of the human body under the condition that a person exists in the image to be edited, and the skin color detection method can be realized by adopting a skin segmentation method based on an elliptical color space or a skin segmentation algorithm based on deep learning and the like.
FIG. 5 is a flowchart illustrating an image processing method according to an exemplary embodiment.
Referring to fig. 5, an image or video to be edited is acquired. When a video is acquired, single-frame images are extracted and each frame is processed frame by frame.
Tone saliency analysis is then performed on the acquired image or video. The first hue space may be used for the hue saliency analysis. For example, the hue space may be predefined as 10 hues based on the HSV color space, namely kBlack, kGray, kWhite, kRed, kOrange, kYellow, kGreen, kCyan, kBlue, and kPurple, representing black, gray, white, red, orange, yellow, green, cyan, blue, and violet hues, in that order. The first G dominant (or salient) hues in the image or video, denoted [Color_i] (i = 0, 1, …, G-1), can be obtained through prior knowledge, algorithmic processing, statistical analysis, and the like. The at least one hue may also be obtained using a clustering algorithm based on the RGB color space. For example, with M (M > G) as the number of cluster centers, the RGB pixel values are clustered; a clustering algorithm such as K-means or K-means++ can divide all the pixels in the image into M classes, where the more pixels a class contains, the more salient it is, and the pixels of the first G most salient classes are taken as the at least one hue.
Alternatively, at least one hue may be determined using a hue space based on the HSV color space. For example, the pixels of the HSV image are divided into 10 categories, i.e., 10 regions, according to the value ranges of the 10 hues in Table 1. The number of pixels occupied by each of the 10 regions is then calculated; the larger the number of pixels, the more prominent the hue region, and the first G most prominent categories of pixels are taken (in this case, G <= 10). The selection can also combine the pixel-count ratio with the average saturation of each region. Alternatively, when performing the saliency hue analysis, the three hues kBlack, kGray, and kWhite in Table 1 may be removed, i.e., the saliency hue ordering is performed over the remaining 7 hues, in which case G <= 7.
Then, hue region segmentation is performed on the image to be edited according to the determined at least one hue. The segmentation may use the second hue space. For example, according to the first G salient hues obtained from the analysis, the corresponding regions in the image can be extracted in sequence according to the ranges in Table 2 and denoted [Region_i] (i = 0, 1, …, G-1). Protected pixel portions, such as skin-area pixels, may be excluded from these regions.
If a clustering algorithm based on the RGB color space is adopted in the saliency hue analysis, the cluster center points of the G most salient categories, denoted [R_{s-i}, G_{s-i}, B_{s-i}] (i = 0, 1, …, G-1), are converted into the corresponding HSV values, denoted [H_{s-i}, S_{s-i}, V_{s-i}] (i = 0, 1, …, G-1); the hue to which each cluster center belongs is then determined from the first hue space (such as Table 1), and the regions [Region_i] (i = 0, 1, …, G-1) are extracted from the image according to the hue ranges in Table 2. For example, if the cluster center RGB value of the j-th class is [100, 50, 150], the corresponding HSV value is [135, 170, 50]; with Ks = 33 and Kv = 1 (see Table 1), the cluster center belongs to the kPurple hue, and all pixel values within the kPurple hue range are extracted according to Table 2 and denoted Region_j.
If the saliency hue analysis is performed using the predefined hue space in Table 1, the regions [Region_i] (i = 0, 1, …, G-1) of the corresponding hues can be extracted according to Table 2.
When performing tone conversion, tone conversion may be performed on the divided regions in accordance with user input. In the case where the user input is the first user input, the acquired hue of the region may be converted into an input target hue. In the case where the user input is the second user input, a region to be subjected to tone conversion in the divided region is determined based on the tone conversion degree, and then the tone of the determined region is converted into a specified tone. In the case where the user input includes a first user input and a second user input, a region to be tone-converted in the divided region may be determined based on the tone conversion degree, and then the tone of the determined region may be converted into an input target tone. The above examples are merely illustrative, and the present disclosure is not limited thereto.
By setting user input, the dominant hue transformation effect and the hue normalization transformation effect can be realized, and the user experience is greatly improved.
After the tone conversion, post-processing is performed on the converted image. For example, in order to smooth the transformed edge transition portion in the image to make it look more natural, the tone-transformed image may be subjected to a filtering process. In addition, in order to increase the diversity of tone transformation, the original pixel values, such as the skin color part of human body, can be reserved for some special areas.
After post-processing, a final target image may be obtained.
Fig. 6 is a block diagram illustrating an image processing apparatus according to an exemplary embodiment. Referring to fig. 6, the image processing apparatus 600 may include an acquisition module 601 and a processing module 602. Furthermore, the image processing apparatus 600 may further include a user input module 603. Each module in the image processing apparatus 600 may be implemented by one or more modules, and the name of the corresponding module may vary according to the type of the module. In various embodiments, some modules in the image processing apparatus 600 may be omitted, or additional modules may also be included. Furthermore, modules/elements according to various embodiments of the present disclosure may be combined to form a single entity, and thus may equivalently perform the functions of the respective modules/elements prior to combination.
The obtaining module 601 may obtain an image to be edited.
The processing module 602 may determine at least one hue of the image to be edited. Here, the at least one color tone may be a main color tone, a designated color tone, or other color tones.
The processing module 602 may acquire a region corresponding to the determined at least one color tone from the image to be edited, and perform color tone transformation on the acquired region.
The processing module 602 may convert the image to be edited into an HSV image, perform color tone classification on pixels in the HSV image according to a predefined first color tone space, and determine at least one color tone based on a number of pixels included in each color tone.
The processing module 602 may extract an area corresponding to the at least one tone from the image to be edited according to a predefined second tone space.
The first and second hue spaces may be predefined based on the HSV color space, wherein the first and second hue spaces may include value ranges of a plurality of hues and value ranges of saturation and brightness corresponding to each hue, respectively. The value ranges of the saturation and brightness corresponding to each hue may be set based on the super-parameter, wherein the super-parameter in the first and second hue spaces may be set differently. For example, the first tone space may be as shown in table 1 above, and the second tone space may be as shown in table 2 above.
The processing module 602 may perform tone ordering according to an order from a large number of pixels included in each tone, and determine at least one previous tone in the tone ordering as the at least one tone; or determining the at least one tone according to the ratio of the number of pixels included in each tone to the total number of pixels of the image to be edited and the saturation value corresponding to each tone.
The processing module 602 may perform clustering processing on RGB pixels of an image to be edited; sequencing each category according to the sequence of the number of pixels included in each category from large to small, and converting the RGB value of the clustering center point of at least one category in the sequencing into an HSV value; the at least one hue is determined based on a predefined first hue space and the converted HSV value.
The user input module 603 may receive user input.
The processing module 602 may perform a tone transformation on the acquired region according to the received user input. According to an embodiment of the present disclosure, the user input may include at least one of a first user input for setting a target hue and a second user input for setting a hue transformation degree, wherein the hue transformation degree represents a percentage of an area to be hue transformed in the image to be edited.
The processing module 602 may determine a region to be tone-transformed in the acquired region based on the tone transformation degree.
The processing module 602 may transform the determined tone of the region to be tone-transformed into a target tone.
In the case that the acquired region includes a human body, the processing module 602 may extract a human body bare skin region in the region using a skin detection algorithm, and retain an original color tone for the human body bare skin region.
The processing module 602 may determine the number N of tone regions to be tone-converted among a plurality of tone regions included in the acquired region according to the tone conversion degree, where N is greater than or equal to 1, perform tone sorting in order of the number of pixels included in each of the plurality of tone regions from large to small, and determine the first N tone regions from the tone sorting as the regions to be tone-converted.
In the event that the degree of tonal transformation is less than or equal to the first value, the processing module 602 may determine the number N to be 1.
In the case where the degree of tone transformation is greater than or equal to the second value, the processing module 602 may determine the number N as the number of entire tone regions.
In the event that the degree of tone transformation is greater than the first value and less than the second value, the processing module 602 may determine the number N as follows: the proportion of the number of pixels included in each tone region to the number of effective pixels in the image to be edited is calculated in tone-sorted order starting from the first tone region, until the sum of the proportions is greater than or equal to the tone transformation degree; the number of tone regions accumulated at that point is N. Here, in the case where the image to be edited does not include a human body, the effective pixels include all pixels in the image to be edited; in the case where the image to be edited includes a human body, the effective pixels include the pixels in the image to be edited other than the pixels of the bare-skin area of the human body. The number N is calculated, for example, with reference to equation (2) above.
Fig. 7 is a schematic configuration diagram illustrating an image processing apparatus according to an exemplary embodiment.
As shown in fig. 7, the image processing apparatus 700 may include: a processing component 701, a communication bus 702, a network interface 703, an input-output interface 704, a memory 705, and a power component 706. The communication bus 702 enables communication among these components. The input-output interface 704 may include a video display (such as a liquid crystal display), a microphone and speakers, and a user interaction interface (such as a keyboard, mouse, or touch input device); optionally, the input-output interface 704 may also include a standard wired interface and a wireless interface. The network interface 703 may optionally include a standard wired interface and a wireless interface (e.g., a wireless fidelity interface). The memory 705 may be a high-speed random access memory or a stable non-volatile memory. The memory 705 may alternatively be a storage device separate from the processing component 701.
Those skilled in the art will appreciate that the configuration shown in fig. 7 does not constitute a limitation of the image processing apparatus 700, which may include more or fewer components than those shown, combine some components, or arrange the components differently.
As shown in fig. 7, the memory 705, which is a storage medium, may include therein an operating system, a data storage module, a network communication module, a user interface module, an image processing program, and a database.
In the image processing apparatus 700 shown in fig. 7, the network interface 703 is mainly used for data communication with an external apparatus/terminal, and the input-output interface 704 is mainly used for data interaction with a user. The image processing apparatus 700 executes the image processing method provided by the embodiments of the present disclosure by having the processing component 701 call the image processing program stored in the memory 705 together with various APIs provided by the operating system.
The processing component 701 may include at least one processor, and the memory 705 stores a set of computer-executable instructions that, when executed by the at least one processor, perform an image processing method according to an embodiment of the present disclosure. Further, the processing component 701 may also perform operations such as encoding and decoding. The above examples are, however, merely illustrative, and the present disclosure is not limited thereto.
The processing component 701 may obtain an image to be edited, determine at least one tone of the image to be edited, acquire a region corresponding to the at least one tone from the image to be edited, and perform tone transformation on the acquired region. Optionally, the tone transformation may be performed on the acquired region in accordance with user input.
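To make this flow concrete, a compact end-to-end sketch that reuses the illustrative helpers from earlier in this section (tones_by_pixel_count and transform_hue, both assumptions rather than functions defined by the disclosure); transformation-degree handling is omitted for brevity:

    import cv2
    import numpy as np

    def edit_image(image_bgr, hue_bins, target_hue, top_n=1):
        # Determine the dominant tone(s), mask the matching pixels, and
        # recolor them to target_hue.
        hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
        mask = np.zeros(hsv.shape[:2], dtype=np.uint8)
        for lo, hi in tones_by_pixel_count(image_bgr, hue_bins, top_n=top_n):
            mask |= ((hsv[..., 0] >= lo) & (hsv[..., 0] < hi)).astype(np.uint8)
        return transform_hue(image_bgr, mask, target_hue)

Here transform_hue could equally be applied per tone region after regions_to_transform has chosen N, which is closer to the degree-based behavior described above.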
The image processing apparatus 700 may receive or output images and/or videos via the input-output interface 704. For example, a user may output processed images or videos via the input-output interface 704 to share them with other users.
By way of example, the image processing apparatus 700 may be a PC, a tablet device, a personal digital assistant, a smartphone, or another device capable of executing the above set of instructions. The image processing apparatus 700 need not be a single electronic device; it can be any collection of devices or circuits that can execute the above instructions (or instruction sets), either individually or jointly. The image processing apparatus 700 may also be part of an integrated control system or system manager, or may be configured as a portable electronic device that interfaces locally or remotely (e.g., via wireless transmission).
In the image processing apparatus 700, the processing component 701 may include a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a programmable logic device, a dedicated processor system, a microcontroller, or a microprocessor. By way of example and not limitation, the processing component 701 may also include an analog processor, a digital processor, a microprocessor, a multi-core processor, a processor array, a network processor, and the like.
The processing component 701 may execute instructions or code stored in the memory 705, which may also store data. Instructions and data may also be sent and received over a network via the network interface 703, which may employ any known transmission protocol.
The memory 705 may be integrated with the processing component 701, for example, as RAM or flash memory arranged within an integrated circuit microprocessor or the like. Further, the memory 705 may comprise a stand-alone device, such as an external disk drive, a storage array, or any other storage device usable by a database system. The memory 705 and the processing component 701 may be operatively coupled, or may communicate with each other through, for example, an I/O port or a network connection, so that the processing component 701 can read files stored in the memory 705.
According to an embodiment of the present disclosure, an electronic device may be provided. Fig. 8 is a block diagram of an electronic device 800, which may include at least one memory 802 and at least one processor 801. The at least one memory 802 stores a set of computer-executable instructions that, when executed by the at least one processor 801, perform an image processing method according to embodiments of the present disclosure.
The processor 801 may include a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a programmable logic device, a special-purpose processor system, a microcontroller, or a microprocessor. By way of example, and not limitation, processor 801 may also include analog processors, digital processors, microprocessors, multi-core processors, processor arrays, network processors, and the like.
The memory 802, which is a storage medium, may include an operating system, a data storage module, a network communication module, a user interface module, an image processing program, and a database.
The memory 802 may be integrated with the processor 801, for example, as RAM or flash memory arranged within an integrated circuit microprocessor or the like. Further, the memory 802 may comprise a stand-alone device, such as an external disk drive, a storage array, or any other storage device usable by a database system. The memory 802 and the processor 801 may be operatively coupled, or may communicate with each other through, for example, I/O ports or network connections, so that the processor 801 can read files stored in the memory 802.
Further, the electronic device 800 may also include a video display (such as a liquid crystal display) and a user interaction interface (such as a keyboard, mouse, touch input device, etc.). All components of the electronic device 800 may be connected to each other via a bus and/or a network.
By way of example, the electronic device 800 may be a PC, a tablet device, a personal digital assistant, a smartphone, or another device capable of executing the above set of instructions. Here, the electronic device 800 need not be a single electronic device; it can be any collection of devices or circuits that can execute the above instructions (or instruction sets), either individually or jointly. The electronic device 800 may also be part of an integrated control system or system manager, or may be configured as a portable electronic device that interfaces locally or remotely (e.g., via wireless transmission).
Those skilled in the art will appreciate that the configuration shown in fig. 8 is not intended to be limiting; the electronic device may include more or fewer components than those shown, combine some components, or arrange the components differently.
According to an embodiment of the present disclosure, there may also be provided a computer-readable storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform an image processing method according to the present disclosure. Examples of the computer-readable storage medium here include: read-only memory (ROM), programmable read-only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random-access memory (DRAM), static random-access memory (SRAM), flash memory, non-volatile memory, CD-ROM, CD-R, CD+R, CD-RW, CD+RW, DVD-ROM, DVD-R, DVD+R, DVD-RW, DVD+RW, DVD-RAM, BD-ROM, BD-R, BD-R LTH, BD-RE, Blu-ray or optical disc storage, a hard disk drive (HDD), a solid-state drive (SSD), card-type memory (such as a multimedia card, a Secure Digital (SD) card, or an eXtreme Digital (XD) card), magnetic tape, a floppy disk, a magneto-optical data storage device, an optical data storage device, and any other device configured to store a computer program and any associated data, data files, and data structures in a non-transitory manner and to provide them to a processor or computer so that the processor or computer can execute the computer program. The computer program in the computer-readable storage medium described above can run in an environment deployed in computer equipment such as a client, a host, a proxy device, or a server; further, in one example, the computer program and any associated data, data files, and data structures are distributed across networked computer systems so that they are stored, accessed, and executed in a distributed fashion by one or more processors or computers.
According to an embodiment of the present disclosure, there may also be provided a computer program product including instructions executable by a processor of a computer device to perform the image processing method described above.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following the general principles thereof and including such departures from the present disclosure as come within known or customary practice in the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with the true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.

Claims (10)

1. An image processing method, comprising:
obtaining an image to be edited;
determining at least one tone of the image to be edited;
acquiring a region corresponding to the at least one tone from the image to be edited; and
performing tone transformation on the acquired region.
2. The image processing method according to claim 1, wherein the step of determining at least one tone of the image to be edited comprises:
converting the image to be edited into an HSV image;
performing tone classification on pixels in the HSV image according to a predefined first tone space; and
determining the at least one tone based on the number of pixels included in each tone.
3. The image processing method according to claim 1, wherein the step of acquiring the region corresponding to the at least one tone from the image to be edited comprises:
extracting a region corresponding to the at least one tone from the image to be edited according to a predefined second tone space.
4. The image processing method according to claim 1, wherein the step of determining at least one tone of the image to be edited comprises:
clustering the RGB pixels of the image to be edited;
sorting the categories in descending order of the number of pixels included in each category;
converting the RGB value of the cluster center point of the first at least one category in the order into an HSV value; and
determining the at least one tone based on a predefined first tone space and the converted HSV value.
5. The image processing method according to claim 1, further comprising:
receiving a user input; and
performing tone transformation on the acquired region according to the user input.
6. The image processing method according to claim 1, wherein, in a case where the acquired region includes a human body, the step of performing tone transformation on the acquired region comprises:
extracting a bare skin region of the human body in the region using a skin detection algorithm; and
retaining the original tone for the bare skin region of the human body.
7. An image processing apparatus characterized by comprising:
an acquisition module configured to obtain an image to be edited;
a processing module configured to:
determining at least one tone of the image to be edited;
acquiring a region corresponding to the at least one tone from the image to be edited; and
performing tone transformation on the acquired region.
8. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the image processing method of any one of claims 1 to 6.
9. A computer-readable storage medium in which instructions, when executed by a processor of an electronic device, enable the electronic device to perform the image processing method of any of claims 1 to 6.
10. A computer program product comprising computer instructions, characterized in that the computer instructions, when executed by a processor, implement the image processing method of any of claims 1 to 6.
CN202110099560.7A 2021-01-25 2021-01-25 Image processing method and image processing apparatus Active CN112950453B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202110099560.7A CN112950453B (en) 2021-01-25 2021-01-25 Image processing method and image processing apparatus
PCT/CN2021/112857 WO2022156196A1 (en) 2021-01-25 2021-08-16 Image processing method and image processing apparatus

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110099560.7A CN112950453B (en) 2021-01-25 2021-01-25 Image processing method and image processing apparatus

Publications (2)

Publication Number Publication Date
CN112950453A true CN112950453A (en) 2021-06-11
CN112950453B CN112950453B (en) 2023-10-20

Family

ID=76236543

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110099560.7A Active CN112950453B (en) 2021-01-25 2021-01-25 Image processing method and image processing apparatus

Country Status (2)

Country Link
CN (1) CN112950453B (en)
WO (1) WO2022156196A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022156196A1 (en) * 2021-01-25 2022-07-28 北京达佳互联信息技术有限公司 Image processing method and image processing apparatus

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20070070468A1 (en) * 2005-09-28 2007-03-29 Kaoru Ogawa Color adjusting apparatus, display apparatus, printing apparatus, image processing apparatus, color adjustment method, gui display method, and program
US20130050238A1 (en) * 2011-08-26 2013-02-28 Miklos J. Bergou Palette-Based Image Editing
US20160323481A1 (en) * 2014-02-13 2016-11-03 Ricoh Company, Ltd. Image processing apparatus, image processing system, image processing method, and recording medium
CN107424198A (en) * 2017-07-27 2017-12-01 广东欧珀移动通信有限公司 Image processing method, device, mobile terminal and computer-readable recording medium
CN107833620A (en) * 2017-11-28 2018-03-23 北京羽医甘蓝信息技术有限公司 Image processing method and image processing apparatus
CN111198956A (en) * 2019-12-24 2020-05-26 北京达佳互联信息技术有限公司 Multimedia resource interaction method and device, electronic equipment and storage medium

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20150092546A (en) * 2014-02-05 2015-08-13 한국전자통신연구원 Harmless frame filter and apparatus for harmful image block having the same, method for filtering harmless frame
JP2015179995A (en) * 2014-03-19 2015-10-08 富士ゼロックス株式会社 Image processing device and program
CN107845076A (en) * 2017-10-31 2018-03-27 广东欧珀移动通信有限公司 Image processing method, device, computer-readable recording medium and computer equipment
CN107909553B (en) * 2017-11-02 2021-10-26 海信视像科技股份有限公司 Image processing method and device
CN112950453B (en) * 2021-01-25 2023-10-20 北京达佳互联信息技术有限公司 Image processing method and image processing apparatus

Also Published As

Publication number Publication date
CN112950453B (en) 2023-10-20
WO2022156196A1 (en) 2022-07-28

Similar Documents

Publication Publication Date Title
US11537873B2 (en) Processing method and system for convolutional neural network, and storage medium
CN109919869B (en) Image enhancement method and device and storage medium
CN106898026B (en) A kind of the dominant hue extracting method and device of picture
JP4704224B2 (en) Album creating apparatus, album creating method, and program
US20050152613A1 (en) Image processing apparatus, image processing method and program product therefore
Choi et al. A comparative study of preprocessing mismatch effects in color image based face recognition
AU2015201623A1 (en) Choosing optimal images with preference distributions
CN111476849B (en) Object color recognition method, device, electronic equipment and storage medium
EP1745438A1 (en) Method for determining image quality
CN111062993A (en) Color-merged drawing image processing method, device, equipment and storage medium
CN107027069B (en) Processing method, device and system, storage medium and the processor of image data
CN112950453B (en) Image processing method and image processing apparatus
Lee et al. Property-specific aesthetic assessment with unsupervised aesthetic property discovery
US20080247647A1 (en) Systems and methods for segmenting an image based on perceptual information
Lindner et al. Joint statistical analysis of images and keywords with applications in semantic image enhancement
CN113222846A (en) Image processing method and image processing apparatus
JP5615344B2 (en) Method and apparatus for extracting color features
Liu Two decades of colorization and decolorization for images and videos
van den Broek et al. Modeling human color categorization
Ko et al. IceNet for interactive contrast enhancement
US10026201B2 (en) Image classifying method and image displaying method
CN112686800B (en) Image processing method, device, electronic equipment and storage medium
CN113781330A (en) Image processing method, device and electronic system
CN105631812B (en) Control method and control device for color enhancement of display image
CN113077405A (en) Color transfer and quality evaluation system for two-segment block

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant