WO2022156196A1 - Image processing method and image processing apparatus - Google Patents
Image processing method and image processing apparatus
- Publication number
- WO2022156196A1 · PCT/CN2021/112857 · CN2021112857W
- Authority
- WO
- WIPO (PCT)
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/643—Hue control means, e.g. flesh tone control
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
Definitions
- the present disclosure relates to the field of image processing, and in particular, to an image processing method and an image processing device for tone conversion.
- the user can change the style of the current image or video by changing the tone of the image or video.
- the tone transformation is generally performed on the entire image or video. For example, all areas in the video can be color-graded for a predetermined period of time (such as 15 seconds or 60 seconds), monotonically cycling through different tones.
- in the case where a person appears in the picture, the entire human body area can be kept unchanged, and only the other scenes in the image or video are tone-transformed.
- the present disclosure provides an image processing method and an image processing apparatus.
- an image processing method may include the following steps: obtaining an image to be edited; determining at least one hue of the image to be edited; acquiring, from the image to be edited, a region corresponding to the at least one hue; and performing hue transformation on the acquired region.
- the at least one hue includes at least one of a main hue, a specified hue, and all hues of the image to be edited.
- the step of determining at least one hue of the image to be edited may include: converting the image to be edited into an HSV image; performing hue classification on the pixels in the HSV image according to a predefined first hue space; and determining the at least one hue based on the number of pixels included in each hue.
- the step of obtaining a region corresponding to the at least one hue from the image to be edited may include: extracting, from the image to be edited, a region corresponding to the at least one hue according to a predefined second hue space.
- the first hue space and the second hue space may be defined based on the HSV color space, wherein the first hue space and the second hue space may each include value ranges of multiple hues and the value ranges of saturation and brightness corresponding to each hue.
- the value ranges of saturation and brightness corresponding to each hue may be set based on hyperparameters, wherein the hyperparameters in the first hue space and the second hue space may be set differently.
- the step of determining the at least one hue based on the number of pixels included in each hue may include: sorting the hues in descending order of the number of pixels included in each hue and determining the first at least one hue in the ordering as the at least one hue; or determining the at least one hue according to the ratio of the number of pixels included in each hue to the total number of pixels of the image to be edited and the saturation value corresponding to each hue.
- the step of determining at least one hue of the image to be edited may include: performing clustering processing on the RGB pixels of the image to be edited; sorting the categories in descending order of the number of pixels included in each category; converting the RGB values of the cluster center points of the first at least one category in the ordering into HSV values; and determining the at least one hue based on the predefined first hue space and the converted HSV values.
- the image processing method may further include: receiving user input; and performing tone transformation on the acquired region according to the user input.
- the user input may include at least one of a first user input and a second user input, wherein the first user input may be used to set the target hue and the second user input may be used to set the degree of hue transformation, the degree of hue transformation representing the percentage of the area in the image to be edited that will undergo hue transformation.
- the step of hue-transforming the acquired region according to the user input may include: determining, based on the degree of hue transformation, an area of the acquired region to be hue-transformed; and/or transforming the hue of the determined area to be hue-transformed into the target hue.
- in the case where the acquired region includes a human body, the step of performing hue transformation on the acquired region may include: extracting the bare-skin area of the human body in the region by using a skin detection algorithm; and preserving the original hue of the bare-skin area.
- the step of determining, based on the degree of hue transformation, an area of the acquired region to be hue-transformed may include: determining, according to the degree of hue transformation, the number N of hue regions to be hue-transformed among the plurality of hue regions included in the acquired region, wherein N is greater than or equal to 1; sorting the hue regions in descending order of the number of pixels included in each of the plurality of hue regions; and determining the first N hue regions in the ordering as the areas to be hue-transformed.
- the step of determining, according to the degree of hue transformation, the number N of hue regions to be hue-transformed among the plurality of hue regions may include: determining the number N as 1 if the degree of hue transformation is less than or equal to a first value; determining the number N as the number of all hue regions if the degree of hue transformation is greater than or equal to a second value; and, if the degree of hue transformation is greater than the first value and less than the second value, determining the number N as the number of hue regions obtained by calculating, in sequence according to the hue ordering, the ratio of the number of pixels included in each hue region to the number of effective pixels in the image to be edited, until the sum of the ratios is greater than or equal to the degree of hue transformation.
- in the case that the image to be edited does not include a human body, the effective pixels include all pixels in the image to be edited; in the case that the image to be edited includes a human body, the effective pixels include the pixels in the image to be edited other than the pixels of the bare-skin area of the human body.
- an image processing apparatus which may include: an obtaining module configured to obtain an image to be edited; and a processing module configured to: determine at least one hue of the image to be edited; acquire, from the image to be edited, a region corresponding to the at least one hue; and perform hue transformation on the acquired region.
- the processing module may be configured to convert the image to be edited into an HSV image; perform hue classification on the pixels in the HSV image according to a predefined first hue space; and determine the at least one hue based on the number of pixels included in each hue.
- the processing module may be configured to extract a region corresponding to the at least one hue from the image to be edited according to a predefined second hue space.
- the first hue space and the second hue space may be defined based on the HSV color space, wherein the first hue space and the second hue space may each include value ranges of multiple hues and the value ranges of saturation and brightness corresponding to each hue.
- the value ranges of saturation and brightness corresponding to each hue may be set based on hyperparameters, wherein the hyperparameters in the first hue space and the second hue space may be set differently.
- the processing module may be configured to sort the hues in descending order of the number of pixels included in each hue and determine the first at least one hue in the ordering as the at least one hue; or determine the at least one hue according to the ratio of the number of pixels included in each hue to the total number of pixels of the image to be edited and the saturation value corresponding to each hue.
- the processing module may be configured to: perform clustering processing on the RGB pixels of the image to be edited; sort the categories in descending order of the number of pixels included in each category; convert the RGB values of the cluster center points of the first at least one category in the ordering into HSV values; and determine the at least one hue based on a predefined first hue space and the converted HSV values.
- the image processing apparatus may further include a user input module configured to receive user input, wherein the processing module is configured to tone-transform the acquired region according to the user input.
- the user input may include at least one of a first user input for setting a target hue and a second user input for setting a degree of hue transformation, the degree of hue transformation representing the percentage of the area in the image to be edited that will undergo hue transformation.
- the processing module may be configured to: determine, based on the degree of hue transformation, an area of the acquired region to be hue-transformed; and/or transform the hue of the determined area to be hue-transformed into the target hue.
- in the case where the acquired region includes a human body, the processing module may be configured to extract the bare-skin area of the human body in the region using a skin detection algorithm, and preserve the original hue of the bare-skin area.
- the processing module may be configured to: determine, according to the degree of hue transformation, the number N of hue regions to be hue-transformed among the plurality of hue regions included in the acquired region, wherein N is greater than or equal to 1; sort the hue regions in descending order of the number of pixels included in each of the plurality of hue regions; and determine the first N hue regions in the ordering as the areas to be hue-transformed.
- the processing module may be configured to: determine the number N as 1 if the degree of hue transformation is less than or equal to a first value; determine the number N as the number of all hue regions if the degree of hue transformation is greater than or equal to a second value; and, if the degree of hue transformation is greater than the first value and less than the second value, determine the number N as the number of hue regions obtained by calculating, in sequence according to the hue ordering, the ratio of the number of pixels included in each hue region to the number of effective pixels in the image to be edited, until the sum of the ratios is greater than or equal to the degree of hue transformation.
- in the case that the image to be edited does not include a human body, the effective pixels include all pixels in the image to be edited; in the case that the image to be edited includes a human body, the effective pixels include the pixels in the image to be edited other than the pixels of the bare-skin area of the human body.
- an electronic device may include: at least one processor; at least one memory storing computer-executable instructions, wherein the computer-executable instructions are stored by the When the at least one processor runs, the at least one processor is caused to perform the image processing method as described above.
- a computer-readable storage medium storing instructions that, when executed by at least one processor, cause the at least one processor to perform the image processing method as described above.
- a computer program product wherein instructions in the computer program product are executed by at least one processor in an electronic device to execute the image processing method as described above.
- the image processing solution provided by the present disclosure can intelligently analyze the dominant-hue regions or interactively specified regions in an image and transform these regions into a specified hue; it can also normalize the whole image to a specified hue. Meanwhile, in a scene containing a person, the skin area of the human body can be protected while regions on the human body, such as clothes and backpacks, are hue-transformed. In addition, a user interaction function is provided, so that the user can set the desired hue for the transformation, which greatly improves the user experience.
- Fig. 1 is a flowchart of an image processing method according to an exemplary embodiment.
- Fig. 2 is a flowchart of an image processing method according to another exemplary embodiment.
- Fig. 3 is a flowchart of an image processing method according to another exemplary embodiment.
- Fig. 4 is a flowchart of an image processing method according to another exemplary embodiment.
- Fig. 5 is a schematic flowchart of an image processing method according to an exemplary embodiment.
- Fig. 6 is a block diagram of an image processing apparatus according to an exemplary embodiment.
- FIG. 7 is a schematic structural diagram of an image processing device according to an exemplary embodiment.
- Fig. 8 is a block diagram of an electronic device according to an exemplary embodiment.
- although the related art achieves a color-change effect on an image, it can only apply a preset hue change to all color areas in the image (excluding the human part); it cannot change the hue of an area with a specific hue (such as the dominant-hue area of the image) separately, cannot specify the desired hue, and cannot transform all areas in the image to a specified hue.
- although the related art preserves the hue of all human body regions in the image and only changes the hue of the regions not including the human body, it restricts hue changes in regions such as the person's clothes and backpack.
- dominant-hue transformation refers to intelligently analyzing one or more hues that are salient in an image and then transforming only the areas with those hues to a specified hue, without obvious obtrusive artifacts.
- Tone normalization transformation refers to intelligently analyzing the image and transforming the area of multiple tones in the image into a specified tone without obvious obtrusive artifacts.
- the present disclosure can not only analyze the dominant-hue regions or interactively specified regions in an image and transform the hue of these regions to a specified hue, but can also normalize the whole image to a specified hue; at the same time, the skin can be protected (that is, the skin part of the image keeps the original image effect) while the hues on the human body, such as clothes and backpacks, are transformed, so as to provide a variety of flexible and varied hue transformation effects.
- FIG. 1 is a flow chart of an image processing method according to an exemplary embodiment.
- the image processing method can be used to transform the tone of an image or a video.
- the method shown in FIG. 1 can be executed by any electronic device having an image processing function.
- the electronic device may include at least one of, for example, a smartphone, a tablet personal computer (PC), a mobile phone, a video phone, an e-book reader, a desktop PC, a laptop PC, a netbook computer, a workstation, a server, a personal digital assistant (PDA), a portable multimedia player (PMP), a camera, and a wearable device.
- step S11 an image to be edited is obtained.
- the image to be edited may be a photo or a single-frame image extracted from a video.
- step S12 at least one color tone of the image to be edited is determined.
- the at least one hue may be the dominant hue in the image, or may include dominant hues as well as other kinds of prominent hues.
- the determined at least one hue may be a user-specified hue, or include any combination of the above-mentioned hue categories.
- the dominant hue may refer to a hue that is prominent in the image to be edited. There may be one dominant hue or multiple dominant hues in an image. In addition, the number of hues may also be set according to user input.
- in order to determine at least one hue of the image to be edited, the image to be edited may be converted into an HSV image, the pixels in the HSV image are hue-classified according to a predefined first hue space, and then at least one hue is determined based on the number of pixels included in each hue.
- the first tone space may be predefined for saliency tone analysis.
- the first tone space may be defined based on the HSV color space.
- the first hue space may include a plurality of hues, a value range of each hue, and a value range of saturation and brightness corresponding to each hue.
- the value ranges of saturation and brightness corresponding to each hue can be set with hyperparameters, so that pixels in certain saturation and brightness ranges can be ignored, making the transition of the image or video after the subsequent hue transformation more natural.
- the pixels in the image to be edited can be classified according to the value range of each hue and the corresponding saturation and brightness value ranges, the hues can be sorted in descending order of the number of pixels included in each hue, and the first at least one hue in the hue ordering is determined as the at least one hue.
- for example, the first hue in the hue ordering may be determined as the at least one hue.
- alternatively, the first several hues in the hue ordering may be determined as the at least one hue.
- in other embodiments, at least one hue can be determined according to the ratio of the number of pixels included in each hue to the total number of pixels of the image to be edited and the saturation value corresponding to each hue. For example, the ratio of the number of pixels included in each hue to the total number of pixels in the image to be edited can be calculated, the average saturation value of the region corresponding to each hue can be calculated, and the ratio of each hue can be multiplied by its average saturation value; the larger the result, the more salient the hue.
- in some embodiments, the RGB pixels of the image to be edited may be clustered, the categories are sorted in descending order of the number of pixels included in each category, the RGB values of the cluster center points of the first at least one category in the ordering are converted into HSV values, and at least one hue is determined based on the predefined first hue space and the converted HSV values.
- for example, if the RGB value of the cluster center point of the j-th class is [100, 50, 150], its corresponding HSV value is [135, 170, 50], and the corresponding hue is determined from the first hue space according to this HSV value.
- step S13 an area corresponding to at least one hue is acquired from the image to be edited.
- a region corresponding to at least one hue may be extracted from the image to be edited according to a predefined second hue space.
- a corresponding area in the image to be edited may be extracted based on the value range of each hue in a second hue space different from the first hue space.
- step S14 tone conversion is performed on the acquired area.
- the hue to be converted into may be specified according to user input.
- the hue of the acquired region can be transformed into the target hue desired by the user.
- the hue of the bare skin area of the human body in the image can be retained, while the hue of other regions can be transformed.
- a skin detection algorithm can be used to extract the human body bare skin area in the image to be edited, the original color tone of the extracted human body bare skin area can be retained, and the color tone of areas such as clothes or backpacks on the human body can be transformed.
- the present disclosure can realize transforming a region of tonal salience in an image or video to a specified hue, or transforming the entire image to a specified hue, or transforming a specified region in an image to a specified hue.
- Fig. 2 is a flowchart of an image processing method according to another exemplary embodiment.
- step S21 an image to be edited is obtained.
- in the case of a video, each frame of the video can be extracted.
- step S22 the image to be edited is converted into an HSV image.
- Images to be edited can be converted to HSV images using HSV conversion algorithms. After HSV conversion, each pixel of the HSV image can be represented by hue H, saturation S, and brightness V.
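As a minimal sketch of this conversion step (assuming OpenCV and NumPy are available), the snippet below converts a BGR frame to HSV. OpenCV's 8-bit HSV representation keeps H in [0, 179] and S, V in [0, 255], which matches the value ranges quoted for the hue spaces described here.

```python
import cv2
import numpy as np

def to_hsv(image_bgr: np.ndarray) -> np.ndarray:
    """Convert a BGR image (as loaded by cv2.imread) to HSV.

    For 8-bit images, OpenCV stores H in [0, 179] and S, V in [0, 255].
    """
    return cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)

# Example usage (file path is illustrative):
# frame = cv2.imread("frame.png")
# hsv = to_hsv(frame)
# h, s, v = cv2.split(hsv)
```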
- step S23 the pixels in the HSV image are hue-classified according to a predefined hue space.
- the hue space may be defined based on the HSV color space.
- the hue space may include a range of values for a plurality of hues and a range of values for saturation and brightness corresponding to each hue.
- the value ranges of saturation and brightness corresponding to each hue are set with hyperparameters respectively. The settings of the hyperparameters can make the tone-transformed image more natural to a certain extent.
- 10 hues may be included in the hue space, such as black, gray, white, red, orange, yellow, green, cyan, blue, and purple, and a corresponding value range is set for each hue so that they can be distinguished.
- the hue space can be represented by Table 1 below.
- kBlack, kGrray, kWhite, kRed, kOrange, kYellow, kGreen, kCyan, kBlue, kPurple represent the black, gray, white, red, orange, yellow, green, cyan, blue and purple shades in order.
- the pixels of an HSV image can be classified according to the range of values for each hue in the hue space. For example, according to the tone space of Table 1 above, the pixels of the image to be edited can be classified into 10 categories.
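A hedged sketch of this classification step is given below. The actual [Hmin, Hmax], [Smin, Smax], [Vmin, Vmax] values of Table 1 are not reproduced in this text, so the ranges in the dictionary are illustrative placeholders only; a real implementation would substitute the first hue space defined by the disclosure (including the Ks and Kv hyperparameters).

```python
import numpy as np

# Illustrative stand-in for the first hue space (Table 1); the real
# [Hmin, Hmax], [Smin, Smax], [Vmin, Vmax] values come from the disclosure.
FIRST_HUE_SPACE = {
    "kBlack":  {"h": (0, 180),   "s": (0, 255),  "v": (0, 46)},
    "kGrray":  {"h": (0, 180),   "s": (0, 43),   "v": (46, 220)},
    "kWhite":  {"h": (0, 180),   "s": (0, 30),   "v": (221, 255)},
    "kRed":    {"h": (156, 180), "s": (43, 255), "v": (46, 255)},
    "kOrange": {"h": (11, 25),   "s": (43, 255), "v": (46, 255)},
    "kYellow": {"h": (26, 34),   "s": (43, 255), "v": (46, 255)},
    "kGreen":  {"h": (35, 77),   "s": (43, 255), "v": (46, 255)},
    "kCyan":   {"h": (78, 99),   "s": (43, 255), "v": (46, 255)},
    "kBlue":   {"h": (100, 124), "s": (43, 255), "v": (46, 255)},
    "kPurple": {"h": (125, 155), "s": (43, 255), "v": (46, 255)},
}

def classify_hues(hsv: np.ndarray) -> dict:
    """Return one boolean mask per hue name for an HSV image."""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    masks = {}
    for name, r in FIRST_HUE_SPACE.items():
        masks[name] = (
            (h >= r["h"][0]) & (h <= r["h"][1])
            & (s >= r["s"][0]) & (s <= r["s"][1])
            & (v >= r["v"][0]) & (v <= r["v"][1])
        )
    return masks
```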
- step S24 the main color tone of the image to be edited is determined.
- a dominant tone may mean that the tone is more prominent in the image.
- the number of pixels included in each hue can be calculated separately, the hues can then be sorted in descending order of the number of pixels included in each hue after classification, and the first hue or hues in the ordering are determined as the dominant hue.
- the number of pixels occupied by each of the 10 hues can be calculated respectively.
- the more pixels a hue has, the more prominent its area is, and the first G most significant categories (where G ≤ 10) can be selected as the dominant hues.
- the main hue may be determined according to the ratio of the number of pixels included in each hue to the total number of pixels of the image to be edited, and the saturation value corresponding to each hue.
- the dominant hue is selected by combining the proportion of the number of pixels of each hue with the average saturation value corresponding to that hue, for example by multiplying the pixel proportion of the hue by its average saturation; the larger the product, the more salient the hue. Alternatively, the pixel proportion of the hue is added to its average saturation; the larger the sum, the more salient the hue.
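The following sketch illustrates this salience scoring under the assumption that per-hue boolean masks (as in the classification sketch above) are available; the `use_product` flag switches between the multiplication and addition variants described here, and the function name is an assumption.

```python
import numpy as np

def rank_dominant_hues(masks: dict, hsv: np.ndarray, use_product: bool = True) -> list:
    """Score each hue by combining its pixel proportion with the mean
    saturation of its region, then return hue names sorted by salience.
    """
    total = hsv.shape[0] * hsv.shape[1]
    s = hsv[..., 1].astype(np.float64)
    scores = {}
    for name, mask in masks.items():
        count = int(mask.sum())
        if count == 0:
            scores[name] = 0.0
            continue
        proportion = count / total                # share of all pixels
        mean_sat = float(s[mask].mean()) / 255.0  # normalized mean saturation
        scores[name] = proportion * mean_sat if use_product else proportion + mean_sat
    return sorted(scores, key=scores.get, reverse=True)
```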
- step S25 an area corresponding to the determined main tone is acquired from the image to be edited.
- a predefined second tone space can be used to extract the area corresponding to the determined main tone from the image to be edited.
- the corresponding area in the image to be edited may be extracted based on the value range of each tone in the second tone space different from the first tone space.
- the value range of each tone in Table 2 can be used to extract the corresponding area in the image to be edited.
- step S26 tone conversion is performed on the acquired area.
- the corresponding area can be transformed into the target hue by the user inputting the target hue value.
- the acquired area can be transformed into a preset hue.
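A minimal sketch of the hue replacement itself is shown below: the H channel of the masked pixels is overwritten with an assumed target_hue value (on OpenCV's [0, 179] hue scale) while S and V are left untouched; blending and edge smoothing are handled separately in the post-processing of step S27.

```python
import cv2
import numpy as np

def shift_region_to_hue(image_bgr: np.ndarray, region_mask: np.ndarray,
                        target_hue: int) -> np.ndarray:
    """Set the hue of all masked pixels to target_hue while keeping their
    saturation and brightness unchanged.
    """
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    hsv[..., 0] = np.where(region_mask, target_hue, hsv[..., 0])
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```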
- step S27 post-processing may be performed on the tone-converted image.
- a filtering process may be performed on the tone-transformed image.
- the original pixel values can be retained for some special areas, such as the skin color of the human body.
- the original image to be edited can be used as a reference image, and a guided filtering operation can be performed on the tone-transformed image to keep the edges smooth.
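A hedged example of this guided-filtering step is sketched below; it assumes the opencv-contrib-python package (which provides cv2.ximgproc), and the radius and eps values are illustrative rather than taken from the disclosure.

```python
import cv2

def smooth_edges(original_bgr, transformed_bgr, radius=8, eps=(0.05 * 255) ** 2):
    """Guided filtering with the original image as the guide, so that the
    transformed result follows the original edges and transitions look natural.

    radius and eps are illustrative values for 8-bit images.
    """
    return cv2.ximgproc.guidedFilter(original_bgr, transformed_bgr, radius, eps)
```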
- the skin color of the human body can be protected.
- the skin color detection method can be implemented by a skin segmentation method based on an ellipse color space or a skin segmentation algorithm based on deep learning.
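The sketch below illustrates the ellipse-color-space variant of skin detection. The ellipse center, axes, and angle are commonly cited reference values for the Cr-Cb skin cluster, not values from this disclosure, and should be treated as assumptions.

```python
import cv2
import numpy as np

def skin_mask_ellipse(image_bgr: np.ndarray) -> np.ndarray:
    """Rough skin segmentation using an elliptical model in the Cr-Cb plane.

    The ellipse parameters are illustrative reference values, not values
    taken from this disclosure.
    """
    # Rasterize the skin ellipse into a 256x256 Cr-Cb lookup table.
    lookup = np.zeros((256, 256), dtype=np.uint8)
    cv2.ellipse(lookup, (113, 155), (23, 15), 43, 0, 360, 255, -1)
    ycrcb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2YCrCb)
    cr, cb = ycrcb[..., 1], ycrcb[..., 2]
    # A pixel is "skin" when its (Cr, Cb) pair falls inside the ellipse.
    return lookup[cr, cb] > 0
```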
- smoothing algorithms such as bilateral filtering can also be used to filter the image.
- Fig. 3 is a flowchart of an image processing method according to another exemplary embodiment.
- step S31 an image to be edited is obtained.
- step S32 clustering processing is performed on the RGB pixels of the image to be edited.
- RGB pixel values can be clustered with M as the number of cluster centers.
- clustering algorithms such as K-means and K-means++ can be used to divide all the pixels in the image to be edited into M categories.
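A hedged sketch of this clustering step follows; it uses OpenCV's k-means with k-means++ initialization (cv2.KMEANS_PP_CENTERS), and the default of M = 5 clusters is an illustrative choice.

```python
import cv2
import numpy as np

def cluster_dominant_rgb(image_bgr: np.ndarray, m: int = 5):
    """Cluster the image's pixels into m categories with k-means++ and return
    the cluster sizes (largest first) together with the cluster centers
    converted to HSV, ready to be looked up in the first hue space.
    """
    pixels = image_bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, m, None, criteria, 5,
                                    cv2.KMEANS_PP_CENTERS)
    labels = labels.ravel()
    counts = np.bincount(labels, minlength=m)
    order = np.argsort(counts)[::-1]             # categories, largest first
    sorted_centers = centers[order]              # BGR cluster centers
    centers_hsv = cv2.cvtColor(
        sorted_centers.astype(np.uint8).reshape(-1, 1, 3),
        cv2.COLOR_BGR2HSV).reshape(-1, 3)
    return counts[order], centers_hsv
```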
- step S33 each category is sorted in descending order of the number of pixels included in each category.
- step S34 the RGB values of the cluster center points of the first category in the sorting are converted into HSV values.
- step S35 the main tone is determined according to the predefined first tone space and the converted HSV value. The more pixels in a category, the more significant the category is, and the pixels of the first most significant category in the ranking can be extracted.
- step S36 an area corresponding to the main tone is acquired from the image to be edited.
- the region corresponding to the main tone can be extracted from the image to be edited using a predefined second tone space.
- the corresponding regions of the main tone can be extracted according to Table 2 above.
- all pixel values in the kPurple hue in Table 2 can be extracted and marked as Region_j.
- step S37 tone conversion is performed on the acquired area.
- the corresponding area can be transformed into the target tone by the user inputting the target tone value.
- the acquired area can be transformed into a preset hue.
- step S38 post-processing may be performed on the tone-converted image.
- a filtering process may be performed on the tone-transformed image.
- the original pixel values can be retained for some special areas, such as the skin color of the human body.
- Fig. 4 is a flowchart of an image processing method according to another exemplary embodiment.
- step S41 an image to be edited is obtained.
- step S42 at least one color tone of the image to be edited is determined.
- the at least one hue may include a main hue and other hues than the main hue.
- the at least one hue may include all hues in the image to be edited.
- At least one hue in the image to be edited may be determined using a predefined first hue space. For example, 10 hues can be determined using the first hue space described in Table 1.
- step S43 an area corresponding to at least one hue is acquired from the image to be edited.
- a region corresponding to the at least one hue determined in step S42 may be determined using a predefined second hue space.
- the image to be edited can be segmented using the second tone space shown in Table 2.
- step S44 user input is received.
- the user input may include at least one of a first user input and a second user input.
- the first user input may be used to set the target hue
- the second user input may be used to set the hue transformation degree, wherein the hue transformation degree represents the percentage of the area in the image to be edited that will undergo hue transformation.
- the value range of the hue transformation degree can be [0%, 100%], denoted as T%, which means that at least T% of the area of the image to be edited will be transformed to the target hue.
- tone conversion is performed on the acquired region based on user input.
- the hue of the acquired region may be transformed into the input target hue.
- an area to be tone-converted among the acquired areas is determined based on the degree of tone conversion, and then the tone of the determined area is converted into a specified tone.
- an area to be tone-converted in the acquired area may be determined based on the degree of tone conversion, and then the tone of the determined area is converted to the input target tone.
- the number N of hue regions to be hue-transformed among the plurality of hue regions included in the acquired region is determined according to the degree of hue transformation (where N is greater than or equal to 1), the hue regions are sorted in descending order of the number of pixels included in each of the plurality of hue regions, and the first N hue regions in the ordering are determined as the areas to be hue-transformed.
- the number N may be determined in the following manner: in the case where the degree of hue transformation is less than or equal to the first value, the number N is determined as 1; in the case where the degree of hue transformation is greater than or equal to the second value, the number N is determined as the number of all hue regions; in the case where the degree of hue transformation is greater than the first value and less than the second value, the number N is determined as the number of hue regions obtained by calculating, in sequence according to the hue ordering, the ratio of the number of pixels included in each hue region to the number of effective pixels in the image to be edited, until the sum of the ratios is greater than or equal to the degree of hue transformation.
- the effective pixels include all pixels in the image to be edited.
- the effective pixels include pixels in the image to be edited other than the pixels of the bare skin area of the human body.
- one transformation method sets the hue value of all pixels in Region_0 to the specified hue value or the target hue value.
- alternatively, the hue values of all N hue regions in the acquired region can be set, which means that the hues of these hue regions are transformed to the specified hue value or the target hue value.
- assuming the number of effective pixels in the image to be edited is X (the number of effective pixels can refer to the number of pixels remaining in the image after removing the pixels in the skin segmentation result), and letting Len(Region_i) denote the number of pixels in Region_i, then Ratio(Region_i) denotes the proportion of Region_i in the entire image, which can be expressed by the following equation (1):
Ratio(Region_i) = Len(Region_i) / X    (1)
- the two functions of dominant-hue transformation and hue normalization transformation can be realized by setting the two parameters t_low and t_high; alternatively, the degree can be used as a separate interactive parameter for the user to adjust, realizing hue transformation of a controllable proportion of the image area.
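The rule described above can be sketched as follows. The function name and the t_low/t_high default values are assumptions, and equation (2) referenced later is not reproduced in this text, so the sketch follows only the verbal description.

```python
def regions_to_transform(region_sizes, effective_pixels, degree,
                         t_low=0.0, t_high=1.0):
    """Pick how many hue regions (already sorted largest-first) to transform.

    region_sizes: pixel counts Len(Region_i), in descending order.
    effective_pixels: X, the number of effective pixels in the image.
    degree: hue transformation degree T, as a fraction in [0, 1].
    t_low / t_high: the two thresholds named in the text (values assumed).
    """
    if degree <= t_low:
        return 1
    if degree >= t_high:
        return len(region_sizes)
    cumulative = 0.0
    for n, size in enumerate(region_sizes, start=1):
        cumulative += size / effective_pixels  # Ratio(Region_i) = Len(Region_i) / X
        if cumulative >= degree:
            return n
    return len(region_sizes)
```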
- post-processing may be performed on the tone-converted image.
- a filtering process may be performed on the tone-transformed image.
- the original pixel values can be retained for some special areas, such as the skin color of the human body.
- the original image to be edited can be used as a reference image, and a guided filtering operation can be performed on the tone-transformed image to keep the edges smooth.
- the skin color of the human body can be protected.
- the skin color detection method can be implemented by a skin segmentation method based on an ellipse color space or a skin segmentation algorithm based on deep learning.
- Fig. 5 is a schematic flowchart of an image processing method according to an exemplary embodiment.
- an image or video to be edited is acquired.
- a single frame of image is extracted, and each frame of image is processed frame by frame.
- Tonal saliency analysis may be performed using the first tone space.
- the hue space is predefined with 10 hues, namely kBlack, kGrray, kWhite, kRed, kOrange, kYellow, kGreen, kCyan, kBlue, and kPurple, representing the black, gray, white, red, orange, yellow, green, cyan, blue, and purple hues.
- the to-be-edited image is segmented according to the determined at least one hue.
- the segmentation of tonal regions can be performed using the second tonal space.
- Protected pixel portions, such as skin area pixel portions, etc., may not be included in these regions.
- the RGB value of the cluster center point of the jth class is [100, 50, 150], and its corresponding HSV value is [135, 170, 50].
- the color tone conversion may be performed on the divided regions according to user input.
- the hue of the acquired region may be transformed into the input target hue.
- an area to be tone-converted among the above-mentioned divided areas is determined based on the degree of tone conversion, and then the tone of the determined area is converted into a designated tone.
- an area to be tone-converted among the above-mentioned divided areas may be determined based on the degree of tone conversion, and then the tone of the determined area is converted into the input target tone.
- the main tone transformation effect and the tone normalization transformation effect can be realized, which greatly improves the user experience.
- tone transformation post-processing is performed on the transformed image. For example, to smooth out transformed edge transitions in an image and make them appear more natural, a tone-transformed image can be filtered. In addition, in order to increase the diversity of tone transformation, the original pixel values can be retained for some special areas, such as the skin color of the human body.
- the final target image can be obtained.
- Fig. 6 is a block diagram of an image processing apparatus according to an exemplary embodiment.
- the image processing apparatus 600 may include an acquisition module 601 and a processing module 602 .
- the image processing apparatus 600 may further include a user input module 603 .
- Each module in the image processing apparatus 600 may be implemented by one or more modules, and the name of the corresponding module may vary according to the type of the module. In various embodiments, some modules in the image processing apparatus 600 may be omitted, or additional modules may also be included.
- modules/elements according to various embodiments of the present disclosure may be combined to form a single entity, and thus may equivalently perform the functions of the corresponding modules/elements prior to combination.
- the obtaining module 601 obtains the image to be edited.
- the processing module 602 can determine at least one hue of the image to be edited.
- the at least one hue may be a main hue, a designated hue, or other hues.
- the processing module 602 may acquire a region corresponding to the determined at least one hue from the image to be edited, and perform hue transformation on the acquired region.
- the processing module 602 can convert the image to be edited into an HSV image, perform tonal classification on the pixels in the HSV image according to a predefined first tone space, and determine at least one tone based on the number of pixels included in each tone.
- the processing module 602 may extract a region corresponding to the at least one hue from the image to be edited according to a predefined second hue space.
- the first hue space and the second hue space may be predefined based on the HSV color space, wherein the first hue space and the second hue space may each include value ranges of multiple hues and the value ranges of saturation and brightness corresponding to each hue.
- the range of values for saturation and brightness corresponding to each hue may be set based on hyperparameters, wherein the hyperparameters in the first hue space and the second hue space may be set differently.
- the first tone space may be as shown in Table 1 above
- the second tone space may be as shown in Table 2 above.
- the processing module 602 may sort the hues in descending order of the number of pixels included in each hue and determine the first at least one hue in the ordering as the at least one hue; or determine the at least one hue according to the ratio of the number of pixels included in each hue to the total number of pixels of the image to be edited and the saturation value corresponding to each hue.
- the processing module 602 can perform clustering processing on the RGB pixels of the image to be edited, sort the categories in descending order of the number of pixels included in each category, convert the RGB values of the cluster center points of the first at least one category in the ordering into HSV values, and determine the at least one hue based on a predefined first hue space and the converted HSV values.
- User input module 603 can receive user input.
- the processing module 602 may perform tone transformation on the acquired region according to the received user input.
- the user input may include at least one of a first user input and a second user input, wherein the first user input is used to set the target hue, and the second user input is used to set the hue transformation degree, wherein,
- the degree of tone transformation represents the percentage of the area in the image to be edited that will undergo tone transformation.
- the processing module 602 may determine, based on the degree of tone transformation, an area to be tone transformed among the acquired areas.
- the processing module 602 may transform the hue of the determined region to be hue transformed into a target hue.
- the processing module 602 may extract the human body bare skin region in the region by using a skin detection algorithm, and retain the original color tone for the human body bare skin region.
- the processing module 602 may determine, according to the degree of hue transformation, the number N of hue regions to be hue-transformed among the plurality of hue regions included in the acquired region, wherein N is greater than or equal to 1, sort the hue regions in descending order of the number of pixels included in each of the plurality of hue regions, and determine the first N hue regions in the ordering as the areas to be hue-transformed.
- the processing module 602 may determine the number N to be one in the case where the degree of hue shift is less than or equal to the first value.
- in the case where the degree of hue transformation is greater than or equal to the second value, the processing module 602 may determine the number N as the number of all hue regions.
- in the case where the degree of hue transformation is greater than the first value and less than the second value, the processing module 602 may determine the number N as the number of hue regions obtained by calculating, in sequence according to the hue ordering, the ratio of the number of pixels included in each hue region to the number of effective pixels in the image to be edited, until the sum of the ratios is greater than or equal to the degree of hue transformation.
- the effective pixels include all pixels in the image to be edited.
- the effective pixels include pixels in the image to be edited other than the pixels of the bare skin area of the human body.
- the number N is calculated with reference to the above equation (2).
- FIG. 7 is a schematic structural diagram of an image processing device according to an exemplary embodiment.
- the image processing apparatus 700 may include: a processing component 701 , a communication bus 702 , a network interface 703 , an input/output interface 704 , a memory 705 and a power supply component 706 .
- the communication bus 702 is used to realize the connection and communication between these components.
- the input/output interface 704 may include a video display (such as a liquid crystal display), a microphone and speakers, and a user interaction interface (such as a keyboard, mouse, or touch input device); in some embodiments, the input/output interface 704 may also include standard wired and wireless interfaces.
- the network interface 703 may include a standard wired interface, a wireless interface (eg, a Wi-Fi interface).
- the memory 705 may be a high-speed random access memory or a stable non-volatile memory. In some embodiments, the memory 705 may also be a separate storage device from the aforementioned processing component 701 .
- the structure shown in FIG. 7 does not constitute a limitation on the image processing apparatus 700, which may include more or fewer components than those shown, combine some components, or have a different arrangement of components.
- the memory 705 as a storage medium may include an operating system, a data storage module, a network communication module, a user interface module, an image processing program, and a database.
- the network interface 703 is mainly used for data communication with external devices/terminals; the input/output interface 704 is mainly used for data interaction with the user; the processing component 701 and the memory 705 can be provided in the image processing apparatus 700, and the image processing apparatus 700 invokes, through the processing component 701, the image processing program stored in the memory 705 and various APIs provided by the operating system to execute the image processing method provided by the embodiments of the present disclosure.
- the processing component 701 may include at least one processor, and a computer-executable instruction set is stored in the memory 705. When the computer-executable instruction set is executed by the at least one processor, the image processing method according to the embodiment of the present disclosure is executed. In addition, the processing component 701 may perform encoding operations, decoding operations, and the like.
- the processing component 701 can obtain the image to be edited, determine at least one hue of the image to be edited, obtain a region corresponding to the at least one hue from the image to be edited, and perform hue transformation on the obtained region.
- the acquired regions may be tone transformed according to user input.
- the image processing apparatus 700 may receive or output images and/or video via the input-output interface 704 .
- the user can output the processed image or video via the input-output interface 704 to share with other users.
- the image processing apparatus 700 may be a PC computer, a tablet device, a personal digital assistant, a smart phone, or any other device capable of executing the above set of instructions.
- the image processing device 700 is not necessarily a single electronic device, but can also be any set of devices or circuits capable of executing the above-mentioned instructions (or instruction sets) individually or jointly.
- Image processing device 700 may also be part of an integrated control system or system manager, or may be configured as a portable electronic device that interfaces locally or remotely (eg, via wireless transmission).
- the processing component 701 may include a central processing unit (CPU), a graphics processing unit (GPU), a programmable logic device, a special purpose processor system, a microcontroller or a microprocessor.
- processing components 701 may also include analog processors, digital processors, microprocessors, multi-core processors, processor arrays, network processors, and the like.
- Processing component 701 can execute instructions or code stored in memory, where memory 705 can also store data. Instructions and data may also be sent and received over a network via network interface 703, which may employ any known transport protocol.
- the memory 705 may be integrated with the processor, eg, RAM or flash memory arranged within an integrated circuit microprocessor or the like. Additionally, memory 705 may comprise a separate device, such as an external disk drive, a storage array, or any other storage device that may be used by a database system.
- the memory and the processor may be operatively coupled, or may communicate with each other, eg, through I/O ports, network connections, etc., to enable the processor to read files stored in the memory.
- an electronic device can be provided. FIG. 8 is a block diagram of an electronic device according to an embodiment of the present disclosure. The electronic device 800 may include at least one memory 802 and at least one processor 801; the at least one memory 802 stores a set of computer-executable instructions which, when executed by the at least one processor 801, perform the image processing method according to various embodiments of the present disclosure.
- Processor 801 may include a central processing unit (CPU), graphics processing unit (GPU), programmable logic device, special purpose processor system, microcontroller, or microprocessor. In some embodiments, the processor 801 may also include an analog processor, a digital processor, a microprocessor, a multi-core processor, a processor array, a network processor, and the like.
- the memory 802 as a storage medium may include an operating system, a data storage module, a network communication module, a user interface module, an image processing program, and a database.
- the memory 802 may be integrated with the processor 801, eg, RAM or flash memory may be arranged within an integrated circuit microprocessor or the like. Additionally, memory 802 may comprise a separate device, such as an external disk drive, storage array, or any other storage device that may be used by a database system. The memory 802 and the processor 801 may be operatively coupled, or may communicate with each other, eg, through I/O ports, network connections, etc., to enable the processor 801 to read files stored in the memory 802 .
- the electronic device 800 may also include a video display (such as a liquid crystal display) and a user interaction interface (such as a keyboard, mouse, touch input device, etc.). All components of electronic device 800 may be connected to each other via a bus and/or network.
- the electronic device 800 may be a PC computer, a tablet device, a personal digital assistant, a smartphone, or other device capable of executing the set of instructions described above.
- the electronic device 800 is not necessarily a single electronic device, but can also be a collection of any device or circuit capable of individually or jointly executing the above-mentioned instructions (or instruction sets).
- Electronic device 800 may also be part of an integrated control system or system manager, or may be configured as a portable electronic device that interfaces locally or remotely (eg, via wireless transmission).
- the components shown in FIG. 8 do not constitute a limitation; the electronic device may include more or fewer components than those shown, combine some components, or have a different arrangement of components.
- a computer-readable storage medium storing instructions, wherein the instructions, when executed by at least one processor, cause the at least one processor to perform the image processing method according to the present disclosure.
- examples of the computer-readable storage medium herein include: read-only memory (ROM), random-access programmable read-only memory (PROM), electrically erasable programmable read-only memory (EEPROM), random-access memory (RAM), dynamic random-access memory (DRAM), static random-access memory (SRAM), flash memory, non-volatile memory, CD-ROM, CD-R, CD+R, CD-RW, CD+RW, DVD-ROM, DVD-R, DVD+R, DVD-RW, DVD+RW, DVD-RAM, BD-ROM, BD-R, BD-R LTH, BD-RE, Blu-ray or optical disc storage, hard disk drive (HDD), solid-state drive (SSD), card memory (such as a multimedia card or Secure Digital (SD) card), and the like.
- the computer program in the above computer-readable storage medium can be executed in an environment deployed on computer equipment such as a client, a host, a proxy device, or a server; furthermore, the computer program and any associated data, data files, and data structures can be distributed over networked computer systems so that they are stored, accessed, and executed in a distributed fashion by one or more processors or computers.
- a computer program product can also be provided, wherein instructions in the computer program product can be executed by a processor of a computer device to complete the above image processing method.
Abstract
The present disclosure relates to an image processing method and an image processing apparatus, and relates to the field of image processing. The image processing method may include the following steps: obtaining an image to be edited; determining at least one hue of the image to be edited; acquiring, from the image to be edited, a region corresponding to the at least one hue; and performing hue transformation on the acquired region.
Description
Cross-Reference to Related Application
This application is based on and claims priority to Chinese Patent Application No. 202110099560.7, filed on January 25, 2021, the entire contents of which are incorporated herein by reference.
The present disclosure relates to the field of image processing, and in particular, to an image processing method and an image processing apparatus for hue transformation.
A user can change the style of a current image or video by changing the hue of the image or video. At present, hue transformation of an image or video is generally performed on the image or video as a whole. For example, the colors of all regions in a video can be gradually shifted over a preset period of time (such as 15 seconds or 60 seconds), monotonically cycling through different hues. In addition, when a person appears in the picture, the entire human body region can be kept unchanged and only the other scenes in the image or video are hue-transformed.
Summary
The present disclosure provides an image processing method and an image processing apparatus.
According to one aspect of the embodiments of the present disclosure, an image processing method is provided, which may include the following steps: obtaining an image to be edited; determining at least one hue of the image to be edited; acquiring, from the image to be edited, a region corresponding to the at least one hue; and performing hue transformation on the acquired region.
In some embodiments, the at least one hue includes at least one of a dominant hue, a specified hue, and all hues of the image to be edited.
In some embodiments, the step of determining at least one hue of the image to be edited may include: converting the image to be edited into an HSV image; performing hue classification on the pixels in the HSV image according to a predefined first hue space; and determining the at least one hue based on the number of pixels included in each hue.
In some embodiments, the step of acquiring, from the image to be edited, a region corresponding to the at least one hue may include: extracting, from the image to be edited, a region corresponding to the at least one hue according to a predefined second hue space.
In some embodiments, the first hue space and the second hue space may be defined based on the HSV color space, wherein the first hue space and the second hue space may each include value ranges of a plurality of hues and the value ranges of saturation and brightness corresponding to each hue.
In some embodiments, the value ranges of saturation and brightness corresponding to each hue may be set based on hyperparameters, wherein the hyperparameters in the first hue space and the second hue space may be set differently.
In some embodiments, the step of determining the at least one hue based on the number of pixels included in each hue may include: sorting the hues in descending order of the number of pixels included in each hue and determining the first at least one hue in the ordering as the at least one hue; or determining the at least one hue according to the ratio of the number of pixels included in each hue to the total number of pixels of the image to be edited and the saturation value corresponding to each hue.
In some embodiments, the step of determining at least one hue of the image to be edited may include: performing clustering processing on the RGB pixels of the image to be edited; sorting the categories in descending order of the number of pixels included in each category; converting the RGB values of the cluster center points of the first at least one category in the ordering into HSV values; and determining the at least one hue based on the predefined first hue space and the converted HSV values.
In some embodiments, the image processing method may further include: receiving a user input; and performing hue transformation on the acquired region according to the user input.
In some embodiments, the user input may include at least one of a first user input and a second user input, wherein the first user input may be used to set a target hue and the second user input may be used to set a degree of hue transformation, the degree of hue transformation representing the percentage of the area in the image to be edited that is to undergo hue transformation.
In some embodiments, the step of performing hue transformation on the acquired region according to the user input may include: determining, based on the degree of hue transformation, an area of the acquired region to be hue-transformed; and/or transforming the hue of the determined area to be hue-transformed into the target hue.
In some embodiments, in a case where the acquired region includes a human body, the step of performing hue transformation on the acquired region may include: extracting the bare-skin region of the human body in the region by using a skin detection algorithm; and preserving the original hue of the bare-skin region.
In some embodiments, the step of determining, based on the degree of hue transformation, an area of the acquired region to be hue-transformed may include: determining, according to the degree of hue transformation, the number N of hue regions to be hue-transformed among a plurality of hue regions included in the acquired region, wherein N is greater than or equal to 1; sorting the hue regions in descending order of the number of pixels included in each of the plurality of hue regions; and determining the first N hue regions in the hue ordering as the areas to be hue-transformed.
In some embodiments, the step of determining, according to the degree of hue transformation, the number N of hue regions to be hue-transformed among the plurality of hue regions may include: determining the number N as 1 in a case where the degree of hue transformation is less than or equal to a first value; determining the number N as the number of all hue regions in a case where the degree of hue transformation is greater than or equal to a second value; and, in a case where the degree of hue transformation is greater than the first value and less than the second value, determining the number N as the number of hue regions obtained by calculating, in sequence according to the hue ordering, the ratio of the number of pixels included in each hue region to the number of effective pixels in the image to be edited, until the sum of the ratios is greater than or equal to the degree of hue transformation.
In some embodiments, in a case where the image to be edited does not include a human body, the effective pixels include all pixels in the image to be edited; in a case where the image to be edited includes a human body, the effective pixels include the pixels in the image to be edited other than the pixels of the bare-skin region of the human body.
According to another aspect of the embodiments of the present disclosure, an image processing apparatus is provided, which may include: an acquisition module configured to obtain an image to be edited; and a processing module configured to: determine at least one hue of the image to be edited; acquire, from the image to be edited, a region corresponding to the at least one hue; and perform hue transformation on the acquired region.
In some embodiments, the processing module may be configured to convert the image to be edited into an HSV image; perform hue classification on the pixels in the HSV image according to a predefined first hue space; and determine the at least one hue based on the number of pixels included in each hue.
In some embodiments, the processing module may be configured to extract, from the image to be edited, a region corresponding to the at least one hue according to a predefined second hue space.
In some embodiments, the first hue space and the second hue space may be defined based on the HSV color space, wherein the first hue space and the second hue space may each include value ranges of a plurality of hues and the value ranges of saturation and brightness corresponding to each hue.
In some embodiments, the value ranges of saturation and brightness corresponding to each hue may be set based on hyperparameters, wherein the hyperparameters in the first hue space and the second hue space may be set differently.
In some embodiments, the processing module may be configured to sort the hues in descending order of the number of pixels included in each hue and determine the first at least one hue in the ordering as the at least one hue; or determine the at least one hue according to the ratio of the number of pixels included in each hue to the total number of pixels of the image to be edited and the saturation value corresponding to each hue.
In some embodiments, the processing module may be configured to: perform clustering processing on the RGB pixels of the image to be edited; sort the categories in descending order of the number of pixels included in each category; convert the RGB values of the cluster center points of the first at least one category in the ordering into HSV values; and determine the at least one hue based on the predefined first hue space and the converted HSV values.
In some embodiments, the image processing apparatus may further include a user input module configured to receive a user input, wherein the processing module is configured to perform hue transformation on the acquired region according to the user input.
In some embodiments, the user input may include at least one of a first user input and a second user input, wherein the first user input is used to set a target hue, the second user input is used to set a degree of hue transformation, and the degree of hue transformation represents the percentage of the area in the image to be edited that is to undergo hue transformation.
In some embodiments, the processing module may be configured to: determine, based on the degree of hue transformation, an area of the acquired region to be hue-transformed; and/or transform the hue of the determined area to be hue-transformed into the target hue.
In some embodiments, in a case where the acquired region includes a human body, the processing module may be configured to extract the bare-skin region of the human body in the region by using a skin detection algorithm, and preserve the original hue of the bare-skin region.
In some embodiments, the processing module may be configured to: determine, according to the degree of hue transformation, the number N of hue regions to be hue-transformed among a plurality of hue regions included in the acquired region, wherein N is greater than or equal to 1; sort the hue regions in descending order of the number of pixels included in each of the plurality of hue regions; and determine the first N hue regions in the hue ordering as the areas to be hue-transformed.
In some embodiments, the processing module may be configured to: determine the number N as 1 in a case where the degree of hue transformation is less than or equal to a first value; determine the number N as the number of all hue regions in a case where the degree of hue transformation is greater than or equal to a second value; and, in a case where the degree of hue transformation is greater than the first value and less than the second value, determine the number N as the number of hue regions obtained by calculating, in sequence according to the hue ordering, the ratio of the number of pixels included in each hue region to the number of effective pixels in the image to be edited, until the sum of the ratios is greater than or equal to the degree of hue transformation.
In some embodiments, in a case where the image to be edited does not include a human body, the effective pixels include all pixels in the image to be edited; in a case where the image to be edited includes a human body, the effective pixels include the pixels in the image to be edited other than the pixels of the bare-skin region of the human body.
According to yet another aspect of the embodiments of the present disclosure, an electronic device is provided, which may include: at least one processor; and at least one memory storing computer-executable instructions, wherein the computer-executable instructions, when executed by the at least one processor, cause the at least one processor to perform the image processing method described above.
According to still another aspect of the embodiments of the present disclosure, a computer-readable storage medium storing instructions is provided, wherein the instructions, when executed by at least one processor, cause the at least one processor to perform the image processing method described above.
According to another aspect of the embodiments of the present disclosure, a computer program product is provided, wherein instructions in the computer program product are executed by at least one processor in an electronic device to perform the image processing method described above.
The image processing solution provided by the present disclosure can intelligently analyze the dominant-hue regions or interactively specified regions in an image and transform these regions into a specified hue; it can also normalize the whole image to a specified hue. Meanwhile, in a scene containing a person, the skin region of the human body can be protected while regions on the human body, such as clothes and backpacks, are hue-transformed. In addition, a user interaction function is provided, so that the user can set the desired hue for the transformation, which greatly improves the user experience.
应当理解的是,以上的一般描述和后文的细节描述仅是示例性和解释性的,并不能限制本公开。
此处的附图被并入说明书中并构成本说明书的一部分,示出了符合本公开的实施例,并与说明书一起用于解释本公开的原理,并不构成对本公开的不当限定。
图1是根据一示例性实施例示出的一种图像处理方法的流程图。
图2是根据另一示例性实施例示出的一种图像处理方法的流程图。
图3是根据另一示例性实施例示出的一种图像处理方法的流程图。
图4是根据另一示例性实施例示出的一种图像处理方法的流程图。
图5是根据一示例性实施例示出的一种图像处理方法的流程示意图。
图6是根据一示例性实施例示出的一种图像处理装置的框图。
图7是根据一示例性实施例示出的一种图像处理设备的结构示意图。
图8是根据一示例性实施例示出的一种电子设备的框图。
为了使本领域普通技术人员更好地理解本公开的技术方案,下面将结合附图,对本公开实施例中的技术方案进行清楚、完整的描述。
需要说明的是,本公开的说明书和权利要求书及上述附图中的术语“第一”、“第二”等是用于区别类似的对象,而不必用于描述特定的顺序或先后次序。应该理解这样使用的数据在适当情况下可以互换,以便这里描述的本公开的实施例能够以除了在这里图示或描述的那些以外的顺序实施。以下示例性实施例中所描述的实施方式并不代表与本公开相一致的所有实施方式。相反,它们仅是与如所附权利要求书中所详述的、本公开的一些方面相一致的装置和方法的例子。
虽然相关技术实现了对图像的色彩变化效果,但是其只会对图像中所有的色彩区域(除去人的部分)进行预先设置的色调变化,而不能实现对图像中具有特定色调的区域(诸如图像中的主色调区域)单独进行色调的变化,也不能指定期望变化的色调;同时也不能将图像中所有的区域变换到指定的色调。此外,虽然相关技术对图像中的所有的人体区域的色调进行了保留,只对不包含人体的区域进行色调的变化,但是其限制了对诸如人体衣服、背包等区域的色调的变化等。
基于上述情况,本公开应运而生,通过综合运用图像处理、统计分析和机器学习等手段,提供了一种用于图像或视频的色调变换的方案。该方案可针对多场景(诸如有人或无人),完成单帧图像中的主色调提取、主色调变换和色调归一变换功能。主色调变换是指智能分析图像中显著的一种或多种色调,然后仅将具有这种色调的区域变换到某一指定的色调,无明显突兀的伪影。色调归一变换是指智能分析图像,将图像中多种色调的区域变换为某一指定色调,无明显突兀的伪影。
本公开既可以分析出图像中的主色调区域或通过交互指定的特定区域,然后将这些区域的色调变换到指定的色调,也可以将图像整体归一变换到指定的色调上;同时可对人体的皮肤进行保护(即将图像中的皮肤部分保留原图像效果),而对人体上的诸如衣服、背包等区域的色调进行变换,从而提供多种不同的色调变换特效,灵活多变。
在下文中,根据本公开的各种实施例,将参照附图对本公开的方法、装置以及设备进行详细描述。
图1是根据一示例性实施例示出的一种图像处理方法的流程图,如图1所示,图像处理方法可用于图像或视频的色调变换。图1所示的方法可由任意具有图像处理功能的电子设备执行。电子设备例如可包括以下中的至少一个:智能电话、平板个人计算机(PC)、移动电话、视频电话、电子书阅读器(e-book reader)、桌上型PC、膝上型PC、上网本计算机、工作站、服务器、个人数字助理(PDA)、便携式多媒体播放器(PMP)、相机和可穿戴装置等。
参照图1,在步骤S11中,获得待编辑图像。这里,待编辑图像可以是一张照片或者是从视频中提取的单帧图像。
在步骤S12中,确定待编辑图像的至少一种色调。至少一种色调可以是图像中的主色调,或者可包括主色调以及其他种类的显著色调。或者,确定的至少一种色调可以是用户指定的色调,或者包括上述提及的色调种类的任意组合。
主色调可指在待编辑图像中显著的色调。在一幅图像中可能存在一种显著的色调,或者存在多种显著的色调。此外,也可根据用户输入来设置色调的数量。
在确定待编辑图像中的至少一种色调的情况下,可将待编辑图像转换为HSV图像,根据预先定义的第一色调空间对HSV图像中的像素进行色调分类,然后基于每种色调所包括的像素数来确定至少一种色调。第一色调空间可被预先定义以用于显著性色调分析。第一色调空间可基于HSV色彩空间进行定义。例如,第一色调空间可包括多种色调、每种色调的取值范围以及与每种色调相应的饱和度和亮度的取值范围。
与每种色调相应的饱和度和亮度的取值范围可分别设置有超参数,这样可以忽略一些饱和度和亮度较低的区域,使得后续色调变换后的图像或视频过渡更加自然。
可根据每种色调的取值范围以及相应的饱和度和亮度的取值范围将待编辑图像中的像素进行分类,按照每种色调所包括的像素数从大到小的顺序进行色调排序,将色调排序中的前至少一种色调确定为所述至少一种色调。例如,可将色调排序中的第一种色调确定为所述至少一种色调,或者可将色调排序中的前若干种色调确定为所述至少一种色调。
在本公开的其他实施例中,在对待编辑图像中的像素进行分类后,可根据每种色调所包括的像素数分别与待编辑图像的全部像素数的比值以及与每种色调相应的饱和度值来确定至少一种色调。例如,可分别计算每种色调所包括的像素数与待编辑图像的全部像素数的比值,然后分别计算每种色调的相应区域的平均饱和度值,将每种色调的比值与该色调的平均饱和度值相乘,相乘结果越大,表示色调越显著,或者将每种色调的比值与该色调的平均饱和度值相加,相加结果越大,表示色调越显著。因此,可根据上述相乘结果或相加结果来确定至少一种色调。
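下面给出一个按"像素占比与平均饱和度相乘"来为各色调打分并排序的示意性Python片段。其中labels、saturation等输入变量名以及打分方式均为说明用的假设写法,并非对本公开实现方式的限定:

```python
import numpy as np

def rank_hues_by_saliency(labels, saturation, num_hues):
    """labels: 每个像素的色调类别(HxW 整数数组); saturation: HSV 图像的 S 通道(HxW)。
    返回按显著性得分(像素占比 x 平均饱和度)从大到小排列的色调索引。"""
    total = labels.size
    scores = np.zeros(num_hues, dtype=np.float64)
    for k in range(num_hues):
        mask = (labels == k)
        count = int(mask.sum())
        if count == 0:
            continue
        ratio = count / total                      # 该色调的像素占比
        mean_sat = float(saturation[mask].mean())  # 该色调区域的平均饱和度
        scores[k] = ratio * mean_sat               # 也可改为 ratio + mean_sat
    return np.argsort(scores)[::-1]                # 得分越大越显著
```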
在一些实施例中,在确定待编辑图像中的至少一种色调的情况下,可对待编辑图像的RGB像素进行聚类处理,按照每种类别所包括的像素数从大到小的顺序对每种类别进行排序,将该排序中的前至少一种类别的聚类中心点的RGB值转换为HSV值,基于预先定义的第一色调空间和转换的HSV值确定至少一种色调。
例如,可将Q个最显著类别的聚类中心点标记为[R_{s-i},G_{s-i},B_{s-i}](i=0,1,…,Q-1),转换为对应的HSV值,标记为[H_{s-i},S_{s-i},V_{s-i}](i=0,1,…,Q-1);然后根据第一色调空间,确定[H_{s-i},S_{s-i},V_{s-i}](i=0,1,…,Q-1)分别属于哪个色调。比如,第j类的聚类中心点RGB值为[100,50,150],其对应的HSV值为[135,170,50],则可根据该HSV值从第一色调空间中确定相应的色调。
在步骤S13中,从待编辑图像中获取与至少一种色调相应的区域。可根据预先定义的第二色调空间从待编辑图像中提取与至少一种色调相应的区域。在确定色调后,可基于不同于第一色调空间的第二色调空间中的每种色调的取值范围来提取待编辑图像中的对应区域。
在步骤S14中,对获取的区域进行色调变换。在进行色调变换的情况下,可根据用户输入来指定将被变换成的色调。通过接收用于设置目标色调的用户输入,可将获取的区域的色调变换为用户期望的目标色调。
此外,在待编辑图像中包括人体的情况下,可保留图像中人体裸露皮肤区域的色调,而对其他区域进行色调变换。例如,可利用皮肤检测算法提取待编辑图像中的人体裸露皮肤区域,对提取的人体裸露皮肤区域保留原始色调,而对人体上的诸如衣服或背包等区域的色调进行变换。
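下面是一个"对人体裸露皮肤区域保留原始色调"的示意性Python片段。其中的肤色检测用简化的YCrCb阈值法代替上文所述的皮肤检测算法,阈值为假设的经验值,仅用于说明将皮肤像素还原为原图像素这一处理方式:

```python
import cv2

def protect_skin(original_bgr, transformed_bgr):
    """用简化的 YCrCb 阈值肤色检测生成皮肤掩码,并在色调变换结果中还原皮肤像素。"""
    ycrcb = cv2.cvtColor(original_bgr, cv2.COLOR_BGR2YCrCb)
    # 常见的经验 Cr/Cb 范围,实际阈值需按场景调整(此处为假设值)
    skin_mask = cv2.inRange(ycrcb, (0, 133, 77), (255, 173, 127))
    skin_mask = cv2.medianBlur(skin_mask, 5)        # 去除孤立噪点
    result = transformed_bgr.copy()
    result[skin_mask > 0] = original_bgr[skin_mask > 0]   # 皮肤区域保留原始色调
    return result
```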
本公开可实现将图像或视频中的色调显著性区域变换到指定的色调,或将整个图像变换到指定的色调,或将图像中的指定区域变换到指定的色调。
图2是根据另一示例性实施例示出的一种图像处理方法的流程图。
参照图2,在步骤S21中,获得待编辑图像。在获取视频的情况下,可提取视频的每一帧图像。
在步骤S22中,将待编辑图像转换为HSV图像。可利用HSV转换算法将待编辑的图像转换为HSV图像。在经过HSV转换之后,HSV图像的每个像素可由色调H、饱和度S、亮度V表示。
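以下给出HSV转换的一个示意性写法(假设使用OpenCV的Python接口,"input.jpg"仅为占位文件名)。OpenCV中H通道取值为0~179,S、V通道取值为0~255,量纲上与下文色调空间的取值范围一致:

```python
import cv2

bgr = cv2.imread("input.jpg")                  # 待编辑图像(OpenCV 默认按 BGR 读取)
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)     # 转换为 HSV 图像
h, s, v = cv2.split(hsv)                       # H: 0~179, S/V: 0~255
```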
在步骤S23中,根据预先定义的色调空间对HSV图像中的像素进行色调分类。
根据本公开的实施例,可基于HSV色彩空间来定义色调空间。所述色调空间可包括多种色调的取值范围以及与每种色调相应的饱和度和亮度的取值范围。此外,与每种色调相应的饱和度和亮度的取值范围分别设置有超参数。对于超参数的设置可在一定程度上使色调变换后的图像更加自然。
例如,色调空间中可包括10种色调,诸如黑、灰、白、红、橙、黄、绿、青、蓝和紫色调,每种色调设置相应的取值范围以进行区别。例如,色调空间可由如下表1表示。
表1
在表1中,kBlack、kGray、kWhite、kRed、kOrange、kYellow、kGreen、kCyan、kBlue、kPurple依次代表黑、灰、白、红、橙、黄、绿、青、蓝和紫色调。[Hmin,Hmax]代表色调的取值范围(Hmin>=0;Hmin<=Hmax;Hmax<=180),[Smin,Smax]代表饱和度的取值范围(Smin>=0;Smin<=Smax;Smax<=255),[Vmin,Vmax]代表亮度的取值范围(Vmin>=0;Vmin<=Vmax;Vmax<=255);Ks和Kv是超参数,可根据实际情况来进行设置(-211<=Ks<=44,-208<=Kv<=47),利用Ks和Kv可忽略一些饱和度和亮度较低的像素,例如,可将Ks设置为33,Kv设置为1。
可根据色调空间中的每种色调的取值范围将HSV图像的像素进行分类。例如,按照上述表1的色调空间,可将待编辑图像的像素分为10种类别。
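下面用一个示意性的Python字典表示"第一色调空间"并按其对像素分类。其中的H/S/V数值范围均为假设值(为简化起见省略了黑、灰、白三种色调以及红色调在0附近的环绕区间),并非表1的实际内容:

```python
import numpy as np

# 示意性的"第一色调空间":各色调的 H/S/V 取值范围均为假设值,仅用于说明分类方式
HUE_SPACE = {
    "kRed":    {"H": (156, 180), "S": (43, 255), "V": (46, 255)},
    "kOrange": {"H": (11, 25),   "S": (43, 255), "V": (46, 255)},
    "kYellow": {"H": (26, 34),   "S": (43, 255), "V": (46, 255)},
    "kGreen":  {"H": (35, 77),   "S": (43, 255), "V": (46, 255)},
    "kCyan":   {"H": (78, 99),   "S": (43, 255), "V": (46, 255)},
    "kBlue":   {"H": (100, 124), "S": (43, 255), "V": (46, 255)},
    "kPurple": {"H": (125, 155), "S": (43, 255), "V": (46, 255)},
}

def classify_pixels(hsv):
    """按色调空间把 HSV 图像中的像素划入各色调类别,返回 {色调名: 布尔掩码}。"""
    h, s, v = hsv[..., 0], hsv[..., 1], hsv[..., 2]
    masks = {}
    for name, r in HUE_SPACE.items():
        masks[name] = ((h >= r["H"][0]) & (h <= r["H"][1]) &
                       (s >= r["S"][0]) & (s <= r["S"][1]) &
                       (v >= r["V"][0]) & (v <= r["V"][1]))
    return masks
```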
在步骤S24中,确定待编辑图像的主色调。这里,主色调可指在图像中较为显著的色调。在进行像素分类后,可分别计算每种色调所包括的像素数,然后可按照分类后的每种色调所包括的像素数从大到小的顺序进行色调排序,并且将色调排序中的第一种色调确定为主色调。
例如,在利用表1获得10种色调后,可分别计算10种色调各自占据的像素数,像素数越多,则该色调区域越显著,可选取前G个最显著类别的像素点(在这种情况下G<=10)作为主色调。在一些实施例中,在进行显著性色调分析的情况下,可剔除kBlack、kGray、kWhite三种色调,即在其余的七种色调中进行显著性色调排序,此时G<=7。
在本公开的其他实施例中,可根据每种色调所包括的像素数分别与待编辑图像的全部像素数的比值以及与每种色调相应的饱和度值来确定主色调。例如,结合每种色调的像素数的占比和与该色调相应的平均饱和度值来选取主色调,诸如将色调的像素数的占比和平均饱和度相乘,值越大,代表该色调越显著;或将色调的像素数的占比和平均饱和度相加,值越大,代表该色调越显著。
此外,也可利用先验知识、算法处理和统计分析等方式来确定主色调。
在步骤S25中,从待编辑图像中获取与确定的主色调相应的区域。可使用预先定义的第二色调空间从待编辑图像中提取与确定的主色调相应的区域。
在确定主色调后,可基于不同于第一色调空间的第二色调空间中的每种色调的取值范围来提取待编辑图像中的对应区域。
例如,在进行色调区域分割的情况下,可利用表2中的每种色调的取值范围来提取待编辑图像中的对应区域。表2相比较于表1,去掉了kBlack、kGray、kWhite三种色调,从色调的角度进行区域的划分(换句话说,表2相当于令Ks=44且Kv=47的表1),这样,可避免明显的边界过渡不自然现象。
表2
在步骤S26中,对获取的区域进行色调变换。可通过用户输入目标色调值将相应的区域变换为目标色调。或者,可将获取的区域变换为预先设定的色调。
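下面是"按某一色调的取值范围提取区域并将其色调替换为目标色调"的示意性Python片段,大致对应步骤S25~S26。其中hue_low、hue_high、target_hue均由调用方给定,为假设参数,且仅按H通道划分区域以简化说明:

```python
import cv2

def recolor_hue_region(bgr, hue_low, hue_high, target_hue):
    """提取 H 通道落在 [hue_low, hue_high] 内的区域,并把该区域的色调统一替换为
    target_hue(0~179),饱和度和亮度保持不变。"""
    hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)
    h = hsv[..., 0]
    mask = (h >= hue_low) & (h <= hue_high)   # 按 H 通道划分该色调的区域
    h[mask] = target_hue                      # 区域内所有像素的色调值设为目标色调
    return cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)
```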
在步骤S27,可对色调变换后的图像进行后处理。根据本公开的实施例,为了平滑图像中变换后的边缘过渡部分,使其看来更自然,可对色调变换后的图像进行滤波处理。此外,为了增加色调变换的多样性,可针对一些特殊的区域,保留原始像素值,如人体肤色部分等。
例如,为了平滑边缘,可将待编辑的原图作为参考图像,对色调变换后的图像进行导向滤波操作,以保持边缘平滑。在待编辑图像中有人的情况下,可保护人体肤色,这里,肤色检测方法可采用基于椭圆色彩空间的皮肤分割方法或者基于深度学习的皮肤分割算法等实现。此外,还可以利用双边滤波等平滑算法对图像进行滤波处理。
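导向滤波的后处理可参考如下示意性写法(假设使用opencv-contrib-python提供的cv2.ximgproc模块,radius与eps为需按实际效果调整的假设参数):

```python
import cv2

def smooth_transition(original_bgr, transformed_bgr, radius=8, eps=500.0):
    """以待编辑原图为引导图,对色调变换后的图像做导向滤波,使变换边缘过渡更平滑。"""
    return cv2.ximgproc.guidedFilter(original_bgr, transformed_bgr, radius, eps)
```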
图3是根据另一示例性实施例示出的一种图像处理方法的流程图。
参照图3,在步骤S31中,获得待编辑图像。
在步骤S32中,对待编辑图像的RGB像素进行聚类处理。
例如,可以以M为聚类中心个数,将RGB像素值进行聚类。例如,可使用K均值Kmeans、Kmeans++等聚类算法,将待编辑图像中的所有像素点划分为M类。
在步骤S33中,按照每种类别所包括的像素数从大到小的顺序对每种类别进行排序。
在步骤S34中,将排序中的第一种类别的聚类中心点的RGB值转换为HSV值。
在步骤S35中,根据预先定义的第一色调空间和转换的HSV值来确定主色调。类别中像素点个数越多,则可表示该类别越显著,可提取排序中第一个最显著类别的像素点。
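下面给出用OpenCV的K均值接口对RGB像素聚类、并将像素数最多的聚类中心转换为HSV值的示意性Python片段(聚类数m_clusters、迭代条件等均为示例取值,得到的HSV值再按第一色调空间判断所属色调):

```python
import cv2
import numpy as np

def dominant_center_hsv(bgr, m_clusters=8):
    """对图像像素做 K 均值聚类(M 类),返回像素数最多的聚类中心对应的 HSV 值。"""
    pixels = bgr.reshape(-1, 3).astype(np.float32)
    criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
    _, labels, centers = cv2.kmeans(pixels, m_clusters, None, criteria,
                                    3, cv2.KMEANS_PP_CENTERS)
    counts = np.bincount(labels.flatten(), minlength=m_clusters)
    top = int(np.argmax(counts))                     # 像素数最多的类别最显著
    center_bgr = centers[top].reshape(1, 1, 3).astype(np.uint8)
    return cv2.cvtColor(center_bgr, cv2.COLOR_BGR2HSV)[0, 0]
```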
在步骤S36中,从待编辑图像中获取与主色调相应的区域。可利用预先定义的第二色调空间从待编辑图像中提取与主色调相应的区域。例如,可按照上述表2来提取主色调的相应区域。例如,在kPurple色调为主色调的情况下,可将表2中所有属于kPurple色调的像素提取出来并标记为Region_j。
在步骤S37中,对获取的区域进行色调变换。可通过用户输入目标色调值将相应的区域变换为目标色调。或者,可将获取的区域变换为预先设定的色调。
在步骤S38中,可对色调变换后的图像进行后处理。根据本公开的实施例,为了平滑图像中变换后的边缘过渡部分,使其看来更自然,可对色调变换后的图像进行滤波处理。此外,为了增加色调变换的多样性,可针对一些特殊的区域,保留原始像素值,如人体肤色部分等。
图4是根据另一示例性实施例示出的一种图像处理方法的流程图。
参照图4,在步骤S41中,获得待编辑图像。
在步骤S42中,确定待编辑图像的至少一种色调。这里,所述至少一种色调可包括主色调以及除主色调之外的其他色调。或者,所述至少一种色调可以包括待编辑图像中的全部色调。
可利用预先定义的第一色调空间来确定待编辑图像中的至少一种色调。例如,可利用表1所述的第一色调空间确定出10种色调。
在步骤S43中,从待编辑图像中获取与至少一种色调相应的区域。可利用预先定义的第二色调空间确定与步骤S42中确定的至少一种色调相应的区域。例如,可利用表2所示的第二色调空间对待编辑图像进行区域分割。
在步骤S44中,接收用户输入。根据本公开的实施例,用户输入可包括第一用户输入和第二用户输入中的至少一个。这里,第一用户输入可用于设置目标色调,第二用户输入可用于设置色调变换程度,其中,所述色调变换程度表示所述待编辑图像中将进行色调变换的区域百分比。
例如,色调变换程度的取值范围可以是[0%,100%],记为T%,这表示待编辑图像中至少有百分之T的部分将被变换到目标色调。
在步骤S45中,基于用户输入对获取的区域进行色调变换。在用户输入为第一用户输入的情况下,可将获取的区域的色调变换为输入的目标色调。在用户输入为第二用户输入的情况下,基于色调变换程度确定获取的区域中将被进行色调变换的区域,然后将确定的区域的色调变换为指定的色调。在用户输入包括第一用户输入和第二用户输入的情况下,可基于色调变换程度确定获取的区域中将被进行色调变换的区域,然后将确定的区域的色调变换为输入的目标色调。
在根据第二用户输入确定将被进行色调变换的区域的情况下,首先根据色调变换程度来确定包括在获取的区域中的多个色调区域中将进行色调变换的色调区域的数量N(其中,N大于或等于1),按照多个色调区域中的每个色调区域所包括的像素数从大到小的顺序进行色调排序,将色调排序中的前N个色调区域确定为所述将被进行色调变换的区域。
根据本公开的实施例,可采用以下方式确定数量N。在色调变换程度小于或等于第一值的情况下,将数量N确定为1;在色调变换程度大于或等于第二值的情况下,将数量N确定为全部色调区域的数量;在色调变换程度大于第一值并且小于第二值的情况下,将数量N确定为以下值:按照色调排序从前开始依次计算每个色调区域所包括的像素数分别与待编辑图像中的有效像素数的比例,直至所述比例之和大于或等于色调变换程度的色调区域数量。
在待编辑图像不包括人体的情况下,所述有效像素包括待编辑图像中的全部像素。在所述待编辑图像包括人体的情况下,所述有效像素包括待编辑图像中的除人体裸露皮肤区域的像素之外的像素。
例如,在色调变换程度T<t_low(t_low默认值取40)的情况下,可设置N=1,表示对最显著色调的区域Region_0的色调Color_0进行变换,设该色调的Hue范围为[H_{0-low},H_{0-high}]。变换方式可将Region_0中所有像素的色调值设置为指定的色调值或目标色调值。
在T>t_high(t_high默认值取90)的情况下,可将N设置为获取的区域中的全部色调区域的数量,表示将这些色调区域的色调都变换到指定的色调值或目标色调值。变换方式可以是将[Region_i](i=0,1,…,N-1)中所有像素的色调值设置为指定的色调值或目标色调值。
在t_low<T<t_high的情况下,设待编辑图像的有效像素数为X(有效像素数可指图像中除去皮肤分割结果中的像素之后剩余的像素数),设定Len(Region_i)代表Region_i中的像素数,则Ratio(Region_i)代表Region_i占据整个图像的比例,可由下面的等式(1)表示:
Ratio(Region_i)=Len(Region_i)/X    (1)
依次计算前n个色调区域的像素比例,直至满足以下等式(2):
Ratio(Region_0)+Ratio(Region_1)+…+Ratio(Region_{n-1})>=T%    (2)
此时,可取N=n,代表将这n种色调区域都变换到指定色调或目标色调。变换方式可以是将[Region_i](i=0,1,…,n-1)中所有像素的色调值设置为指定色调或目标色调。
此外,在t_low=0、t_high=1的情况下,可不做主色调变换和色调归一变换的功能区分,而由色调变换程度T来控制图像的区域色调变换,所以T值可供用户交互调节。
通过在色调变换中设置色调变换程度,既可以通过设置t_low和t_high两个参数实现主色调变换和色调归一变换两种功能,也可以将其单独作为交互参数,供用户调节,实现图像可控区域比例的色调变换。
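上述由色调变换程度T确定数量N的逻辑可用如下示意性Python草图表示(t_low、t_high的默认值取自上文描述,其余写法仅为说明用的假设实现):

```python
def hue_regions_to_transform(region_sizes, valid_pixels, t_percent,
                             t_low=40, t_high=90):
    """region_sizes: 各色调区域的像素数,已按从大到小排序;
    valid_pixels: 有效像素数 X;t_percent: 色调变换程度 T(0~100)。
    返回将被进行色调变换的色调区域数量 N。"""
    if t_percent <= t_low:
        return 1                              # 只变换最显著色调的区域
    if t_percent >= t_high:
        return len(region_sizes)              # 变换全部色调区域
    cumulative = 0.0
    for n, size in enumerate(region_sizes, start=1):
        cumulative += size / valid_pixels     # 等式(1):Ratio(Region_i)
        if cumulative * 100 >= t_percent:     # 等式(2):比例之和 >= T%
            return n
    return len(region_sizes)
```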
作为一种实施方式,可对色调变换后的图像进行后处理。根据本公开的实施例,为了平滑图像中变换后的边缘过渡部分,使其看来更自然,可对色调变换后的图像进行滤波处理。此外,为了增加色调变换的多样性,可针对一些特殊的区域,保留原始像素值,如人体肤色部分等。
例如,为了平滑边缘,可将待编辑的原图作为参考图像,对色调变换后的图像进行导向滤波操作,以保持边缘平滑。在待编辑图像中有人的情况下,可保护人体肤色,这里,肤色检测方法可采用基于椭圆色彩空间的皮肤分割方法或者基于深度学习的皮肤分割算法等实现。
图5是根据一示例性实施例示出的一种图像处理方法的流程示意图。
参照图5,获取待编辑的图像或视频。在获取视频的情况下,提取单帧图像,对每一帧图像逐帧处理。
对获取的图像或视频进行色调显著性分析。可利用第一色调空间进行色调显著性分析。例如,基于HSV色彩空间将色调空间预定义为10种色调,分别是kBlack、kGray、kWhite、kRed、kOrange、kYellow、kGreen、kCyan、kBlue、kPurple,依次代表黑、灰、白、红、橙、黄、绿、青、蓝和紫色调。可通过先验知识、算法处理和统计分析等方式,获得图像或视频中的前G种主(或显著)色调,标记为[Color_i](i=0,1,…,G-1)。也可利用基于RGB色彩空间的聚类算法来获取至少一种色调。例如,以M(M>=G)为聚类中心个数,将RGB像素值进行聚类,可以使用Kmeans或Kmeans++等聚类算法,将图像中的所有像素点划分为M类,类中像素点个数越多,则标记为越显著,取前G个最显著类别的像素点作为所述至少一种色调。
在一些实施例中,可利用基于HSV色彩空间的色调空间来确定至少一种色调。例如,根据上述表1中10种色调的取值范围,将HSV图像中的像素划分为10种类别,即10块区域。然后分别计算10块区域各自占据的像素个数,像素个数越多,则该色调区域越显著,取前G个最显著类别的像素点(在这种情况下G<=10),或者可结合像素个数的占比和该区域的平均饱和度值来选取。或者,在进行显著性色调分析的情况下,可去除表1中的kBlack、kGray、kWhite三种色调,即在其余的7种色调中进行显著性色调空间区域排序,所以G<=7。
接下来,根据确定的至少一种色调对待编辑图像进行色调区域分割。可利用第二色调空间进行色调区域的分割。例如,可根据分析出的前G种显著性色调,按照表2中的范围依次提取图像中的对应区域,可标记为[Region_i](i=0,1,…,G-1)。在这些区域中可不包括被保护的像素部分,诸如皮肤区域像素部分等。
根据在显著性色调分析中采取的基于RGB色彩空间的聚类算法,可将G个最显著类别的聚类中心点标记为[R_{s-i},G_{s-i},B_{s-i}](i=0,1,…,G-1),转换为对应的HSV值,标记为[H_{s-i},S_{s-i},V_{s-i}](i=0,1,…,G-1);然后根据第一色调空间(诸如表1),确定出[H_{s-i},S_{s-i},V_{s-i}](i=0,1,…,G-1)分别属于哪个色调,再按照该色调在表2中的范围提取图像中的区域[Region_i](i=0,1,…,G-1)。比如,第j类的聚类中心点RGB值为[100,50,150],其对应的HSV值为[135,170,50],在Ks=33、Kv=1的情况下(如表1),该聚类中心点属于kPurple色调,则可将表2中所有属于kPurple色调的像素提取出来,记为Region_j。
在利用表1中预定义的色调空间进行显著性色调分析的情况下,同样可按照表2提取对应色调的区域[Region_i](i=0,1,…,G-1)。
在进行色调变换的情况下,可根据用户输入对上述分割的区域进行色调变换。在用户输入为第一用户输入的情况下,可将获取的区域的色调变换为输入的目标色调。在用户输入为第二用户输入的情况下,基于色调变换程度确定上述分割区域中将被进行色调变换的区域,然后将确定的区域的色调变换为指定的色调。在用户输入包括第一用户输入和第二用户输入的情况下,可基于色调变换程度确定上述分割区域中将被进行色调变换的区域,然后将确定的区域的色调变换为输入的目标色调。
通过设置用户输入,可实现主色调变换效果和色调归一变换效果,极大地改善了用户体验。
在色调变换后,对变换后的图像进行后处理。例如,为了平滑图像中变换后的边缘过渡部分,使其看来更自然,可对色调变换后的图像进行滤波处理。此外,为了增加色调变换的多样性,可针对一些特殊的区域,保留原始像素值,如人体肤色部分等。
在后处理后,可获得最终的目标图像。
图6是根据一示例性实施例示出的一种图像处理装置的框图。参照图6,图像处理装置600可包括获取模块601和处理模块602。此外,图像处理装置600还可包括用户输入模块603。图像处理装置600中的每个模块可由一个或多个模块来实现,并且对应模块的名称可根据模块的类型而变化。在各种实施例中,可省略图像处理装置600中的一些模块,或者还可包括另外的模块。此外,根据本公开的各种实施例的模块/元件可被组合以形成单个实体,并且因此可等效地执行相应模块/元件在组合之前的功能。
获取模块601可获得待编辑图像。
处理模块602可确定待编辑图像的至少一种色调。这里,所述至少一种色调可以是主色调、指定色调或其他色调。
处理模块602可从待编辑图像中获取与确定的至少一种色调相应的区域,并且对获取的区域进行色调变换。
处理模块602可将待编辑图像转换为HSV图像,根据预先定义的第一色调空间对HSV图像中的像素进行色调分类,基于每种色调所包括的像素数来确定至少一种色调。
处理模块602可根据预先定义的第二色调空间从待编辑图像中提取与所述至少一种色调相应的区域。
第一色调空间和第二色调空间可基于HSV色彩空间被预先定义,其中,第一色调空间和第二色调空间可分别包括多种色调的取值范围以及与每种色调相应的饱和度和亮度的取值范围。与每种色调相应的饱和度和亮度的取值范围可基于超参数被设置,其中,第一色调空间和第二色调空间中的超参数可被不同地设置。例如,第一色调空间可如上述表1所示,第二色调空间可如上述表2所示。
处理模块602可按照每种色调所包括的像素数从大到小的顺序进行色调排序,将色调排序中的前至少一种色调确定为所述至少一种色调;或者根据每种色调所包括的像素数分别与所述待编辑图像的全部像素数的比值以及与每种色调相应的饱和度值来确定所述至少一种色调。
处理模块602可对待编辑图像的RGB像素进行聚类处理;按照每种类别所包括的像素数从大到小的顺序对每种类别进行排序,将所述排序中的前至少一种类别的聚类中心点的RGB值转换为HSV值;基于预先定义的第一色调空间和转换的HSV值确定所述至少一种色调。
用户输入模块603可接收用户输入。
处理模块602可根据接收的用户输入对获取的区域进行色调变换。根据本公开的实施例,用户输入可包括第一用户输入和第二用户输入中的至少一个,其中,第一用户输入用于设置目标色调,第二用户输入用于设置色调变换程度,其中,所述色调变换程度表示所述待编辑图像中将进行色调变换的区域百分比。
处理模块602可基于色调变换程度确定获取的区域中将被进行色调变换的区域。
处理模块602可将确定的将被进行色调变换的区域的色调变换为目标色调。
在获取的区域中包括人体的情况下,处理模块602可利用皮肤检测算法提取该区域中的人体裸露皮肤区域,并且对人体裸露皮肤区域保留原始色调。
处理模块602可根据色调变换程度来确定包括在获取的区域中的多个色调区域中将进行色调变换的色调区域的数量N,其中,N大于或等于1,按照多个色调区域中的每个色调区域所包括的像素数从大到小的顺序进行色调排序,将色调排序中的前N个色调区域确定为将被进行色调变换的区域。
在色调变换程度小于或等于第一值的情况下,处理模块602可将数量N确定为1。
在色调变换程度大于或等于第二值的情况下,处理模块602可将数量N确定为全部色调区域的数量。
在色调变换程度大于第一值并且小于第二值的情况下,处理模块602可将数量N确定为以下值:按照色调排序从前开始依次计算每个色调区域所包括的像素数分别与待编辑图像中的有效像素数的比例,直至所述比例之和大于或等于色调变换程度的色调区域数量。这里,在待编辑图像不包括人体的情况下,有效像素包括待编辑图像中的全部像素。在待编辑图像包括人体的情况下,有效像素包括待编辑图像中的除人体裸露皮肤区域的像素之外的像素。例如,参照上述等式(2)来计算数量N。
图7是根据一示例性实施例示出的一种图像处理设备的结构示意图。
如图7所示,图像处理设备700可包括:处理组件701、通信总线702、网络接口703、输入输出接口704、存储器705以及电源组件706。其中,通信总线702用于实现这些组件之间的连接通信。输入输出接口704可以包括视频显示器(诸如,液晶显示器)、麦克风和扬声器以及用户交互接口(诸如,键盘、鼠标、触摸输入装置等),在一些实施例中,输入输出接口704还可包括标准的有线接口、无线接口。在一些实施例中,网络接口703可包括标准的有线接口、无线接口(如无线保真接口)。存储器705可以是高速的随机存取存储器,也可以是稳定的非易失性存储器。在一些实施例中,存储器705还可以是独立于前述处理组件701的存储装置。
本领域技术人员可以理解,图7中示出的结构并不构成对图像处理设备700的限定,可包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
如图7所示,作为一种存储介质的存储器705中可包括操作系统、数据存储模块、网络通信模块、用户接口模块、图像处理程序以及数据库。
在图7所示的图像处理设备700中,网络接口703主要用于与外部设备/终端进行数据通信;输入输出接口704主要用于与用户进行数据交互;处理组件701和存储器705可被设置在图像处理设备700中,图像处理设备700通过处理组件701调用存储器705中存储的图像处理程序以及由操作系统提供的各种API,执行本公开实施例提供的图像处理方法。
处理组件701可以包括至少一个处理器,存储器705中存储有计算机可执行指令集合,当计算机可执行指令集合被至少一个处理器执行时,执行根据本公开实施例的图像处理方法。此外,处理组件701可执行编码操作和解码操作等。
处理组件701可获得待编辑图像,确定待编辑图像的至少一种色调,从待编辑图像中获取与至少一种色调相应的区域,并且对获取的区域进行色调变换。在一些实施例中,可根据用户输入对获取的区域进行色调变换。
图像处理设备700可经由输入输出接口704接收或输出图像和/或视频。例如,用户可经由输入输出接口704输出处理后的图像或视频以分享给其他用户。
例如,图像处理设备700可以是PC计算机、平板装置、个人数字助理、智能手机或其他能够执行上述指令集合的装置。这里,图像处理设备700并非必须是单个的电子设备,还可以是任何能够单独或联合执行上述指令(或指令集)的装置或电路的集合体。图像处理设备700还可以是集成控制系统或系统管理器的一部分,或者可以被配置为与本地或远程(例如,经由无线传输)以接口互联的便携式电子设备。
在图像处理设备700中,处理组件701可包括中央处理器(CPU)、图形处理器(GPU)、可编程逻辑装置、专用处理器系统、微控制器或微处理器。在一些实施例中,处理组件701还可以包括模拟处理器、数字处理器、微处理器、多核处理器、处理器阵列、网络处理器等。
处理组件701可运行存储在存储器中的指令或代码,其中,存储器705还可以存储数据。指令和数据还可以经由网络接口703而通过网络被发送和接收,其中,网络接口703可以采用任何已知的传输协议。
存储器705可以与处理器集成为一体,例如,将RAM或闪存布置在集成电路微处理器等之内。此外,存储器705可包括独立的装置,诸如,外部盘驱动、存储阵列或任何数据库系统可以使用的其他存储装置。存储器和处理器可以在操作上进行耦合,或者可以例如通过I/O端口、网络连接等互相通信,使得处理器能够读取存储在存储器中的文件。
根据本公开的实施例,可提供一种电子设备。图8是根据本公开实施例的电子设备的框图,该电子设备800可包括至少一个存储器802和至少一个处理器801,所述至少一个存储器802存储有计算机可执行指令集合,当计算机可执行指令集合被至少一个处理器801执行时,执行根据本公开各种实施例的图像处理方法。
处理器801可包括中央处理器(CPU)、图形处理器(GPU)、可编程逻辑装置、专用处理器系统、微控制器或微处理器。在一些实施例中,处理器801还可包括模拟处理器、数字处理器、微处理器、多核处理器、处理器阵列、网络处理器等。
作为一种存储介质的存储器802可包括操作系统、数据存储模块、网络通信模块、用户接口模块、图像处理程序以及数据库。
存储器802可与处理器801集成为一体,例如,可将RAM或闪存布置在集成电路微处理器等之内。此外,存储器802可包括独立的装置,诸如,外部盘驱动、存储阵列或任何数据库系统可使用的其他存储装置。存储器802和处理器801可在操作上进行耦合,或者可例如通过I/O端口、网络连接等互相通信,使得处理器801能够读取存储在存储器802中的文件。
此外,电子设备800还可包括视频显示器(诸如,液晶显示器)和用户交互接口(诸如,键盘、鼠标、触摸输入装置等)。电子设备800的所有组件可经由总线和/或网络而彼此连接。
例如,电子设备800可以是PC计算机、平板装置、个人数字助理、智能手机或其他能够执行上述指令集合的装置。这里,电子设备800并非必须是单个的电子设备,还可以是任何能够单独或联合执行上述指令(或指令集)的装置或电路的集合体。电子设备800还可以是集成控制系统或系统管理器的一部分,或者可被配置为与本地或远程(例如,经由无线传输)以接口互联的便携式电子设备。
本领域技术人员可理解,图8中示出的结构并不构成对电子设备800的限定,可包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。
根据本公开的实施例,还可提供一种存储指令的计算机可读存储介质,其中,当指令被至少一个处理器运行时,促使至少一个处理器执行根据本公开的图像处理方法。这里的计算机可读存储介质的示例包括:只读存储器(ROM)、随机存取可编程只读存储器(PROM)、电可擦除可编程只读存储器(EEPROM)、随机存取存储器(RAM)、动态随机存取存储器(DRAM)、静态随机存取存储器(SRAM)、闪存、非易失性存储器、CD-ROM、CD-R、CD+R、CD-RW、CD+RW、DVD-ROM、DVD-R、DVD+R、DVD-RW、DVD+RW、DVD-RAM、BD-ROM、BD-R、BD-R LTH、BD-RE、蓝光或光盘存储器、硬盘驱动器(HDD)、固态硬盘(SSD)、卡式存储器(诸如,多媒体卡、安全数字(SD)卡或极速数字(XD)卡)、磁带、软盘、磁光数据存储装置、光学数据存储装置、硬盘、固态盘以及任何其他装置,所述任何其他装置被配置为以非暂时性方式存储计算机程序以及任何相关联的数据、数据文件和数据结构并将所述计算机程序以及任何相关联的数据、数据文件和数据结构提供给处理器或计算机使得处理器或计算机能执行所述计算机程序。上述计算机可读存储介质中的计算机程序可在诸如客户端、主机、代理装置、服务器等计算机设备中部署的环境中运行,此外,在本申请的实施例中,计算机程序以及任何相关联的数据、数据文件和数据结构分布在联网的计算机系统上,使得计算机程序以及任何相关联的数据、数据文件和数据结构通过一个或多个处理器或计算机以分布式方式存储、访问和执行。
根据本公开的实施例,还可提供一种计算机程序产品,该计算机程序产品中的指令可由计算机设备的处理器执行以完成上述图像处理方法。
本公开所有实施例均可以单独被执行,也可以与其他实施例相结合被执行,均视为本公开要求的保护范围。
Claims (31)
- 一种图像处理方法,包括:获得待编辑图像;确定所述待编辑图像的至少一种色调;从所述待编辑图像中获取与所述至少一种色调相应的区域;以及对获取的区域进行色调变换。
- 根据权利要求1所述的图像处理方法,其中,确定所述待编辑图像的至少一种色调的步骤包括:将所述待编辑图像转换为HSV图像;根据预先定义的第一色调空间对所述HSV图像中的像素进行色调分类;基于每种色调所包括的像素数来确定所述至少一种色调。
- 根据权利要求2所述的图像处理方法,其中,从所述待编辑图像中获取与所述至少一种色调相应的区域的步骤包括:根据预先定义的第二色调空间从所述待编辑图像中提取与所述至少一种色调相应的区域。
- 根据权利要求3所述的图像处理方法,其中,第一色调空间和第二色调空间基于HSV色彩空间被定义,其中,第一色调空间和第二色调空间分别包括多种色调的取值范围以及与每种色调相应的饱和度和亮度的取值范围。
- 根据权利要求4所述的图像处理方法,其中,与每种色调相应的饱和度和亮度的取值范围基于超参数被设置,其中,第一色调空间和第二色调空间中的超参数被不同地设置。
- 根据权利要求2所述的图像处理方法,其中,基于每种色调所包括的像素数来确定所述至少一种色调的步骤包括:按照每种色调所包括的像素数从大到小的顺序进行色调排序,将色调排序中的前至少一种色调确定为所述至少一种色调;或者根据每种色调所包括的像素数分别与所述待编辑图像的全部像素数的比值以及与每种色调相应的饱和度值来确定所述至少一种色调。
- 根据权利要求1所述的图像处理方法,其中,确定所述待编辑图像的至少一种色调的步骤包括:对所述待编辑图像的RGB像素进行聚类处理;按照每种类别所包括的像素数从大到小的顺序对每种类别进行排序;将所述排序中的前至少一种类别的聚类中心点的RGB值转换为HSV值;基于预先定义的第一色调空间和转换的HSV值确定所述至少一种色调。
- 根据权利要求1所述的图像处理方法,还包括:接收用户输入;根据所述用户输入对获取的区域进行色调变换。
- 根据权利要求8所述的图像处理方法,其中,所述用户输入包括第一用户输入和第二用户输入中的至少一个,其中,第一用户输入用于设置目标色调,第二用户输入用于设置色调变换程度,其中,所述色调变换程度表示所述待编辑图像中将进行色调变换的区域百分比。
- 根据权利要求9所述的图像处理方法,其中,根据所述用户输入对获取的区域进行色调变换的步骤包括:基于所述色调变换程度确定获取的区域中将被进行色调变换的区域;和/或将确定的将被进行色调变换的区域的色调变换为所述目标色调。
- 根据权利要求1所述的图像处理方法,其中,在获取的区域中包括人体的情况下,对获取的区域进行色调变换的步骤包括:利用皮肤检测算法提取该区域中的人体裸露皮肤区域;对所述人体裸露皮肤区域保留原始色调。
- 根据权利要求10所述的图像处理方法,其中,基于所述色调变换程度确定获取的区域中将被进行色调变换的区域的步骤包括:根据所述色调变换程度来确定包括在获取的区域中的多个色调区域中将进行色调变换的色调区域的数量N,其中,N大于或等于1;按照所述多个色调区域中的每个色调区域所包括的像素数从大到小的顺序进行色调排序;将色调排序中的前N个色调区域确定为所述将被进行色调变换的区域。
- 根据权利要求12所述的图像处理方法,其中,根据所述色调变换程度来确定多个色调区域中将进行色调变换的色调区域的数量N的步骤包括:在所述色调变换程度小于或等于第一值的情况下,将数量N确定为1;在所述色调变换程度大于或等于第二值的情况下,将数量N确定为全部色调区域的数量;在所述色调变换程度大于第一值并且小于第二值的情况下,将数量N确定为以下值: 按照色调排序从前开始依次计算每个色调区域所包括的像素数分别与所述待编辑图像中的有效像素数的比例,直至所述比例之和大于或等于所述色调变换程度的色调区域数量。
- 根据权利要求13所述的图像处理方法,其中,在所述待编辑图像不包括人体的情况下,所述有效像素包括所述待编辑图像中的全部像素;在所述待编辑图像包括人体的情况下,所述有效像素包括所述待编辑图像中的除人体裸露皮肤区域的像素之外的像素。
- 一种图像处理装置,其中,包括:获取模块,被配置为获得待编辑图像;处理模块,被配置为:确定所述待编辑图像的至少一种色调;从所述待编辑图像中获取与所述至少一种色调相应的区域;以及对获取的区域进行色调变换。
- 根据权利要求15所述的图像处理装置,其中,处理模块被配置为:将所述待编辑图像转换为HSV图像;根据预先定义的第一色调空间对所述HSV图像中的像素进行色调分类;基于每种色调所包括的像素数来确定所述至少一种色调。
- 根据权利要求16所述的图像处理装置,其中,处理模块被配置为:根据预先定义的第二色调空间从所述待编辑图像中提取与所述至少一种色调相应的区域。
- 根据权利要求17所述的图像处理装置,其中,第一色调空间和第二色调空间基于HSV色彩空间被定义,其中,第一色调空间和第二色调空间分别包括多种色调的取值范围以及与每种色调相应的饱和度和亮度的取值范围。
- 根据权利要求18所述的图像处理装置,其中,与每种色调相应的饱和度和亮度的取值范围基于超参数被设置,其中,第一色调空间和第二色调空间中的超参数被不同地设置。
- 根据权利要求16所述的图像处理装置,其中,处理模块被配置为:按照每种色调所包括的像素数从大到小的顺序进行色调排序,将色调排序中的前至少一种色调确定为所述至少一种色调;或者根据每种色调所包括的像素数分别与所述待编辑图像的全部像素数的比值以及与每种色调相应的饱和度值来确定所述至少一种色调。
- 根据权利要求15所述的图像处理装置,其中,处理模块被配置为:对所述待编辑图像的RGB像素进行聚类处理;按照每种类别所包括的像素数从大到小的顺序对每种类别进行排序;将所述排序中的前至少一种类别的聚类中心点的RGB值转换为HSV值;基于预先定义的第一色调空间和转换的HSV值确定所述至少一种色调。
- 根据权利要求15所述的图像处理装置,还包括用户输入模块,被配置为接收用户输入,其中,处理模块被配置为根据所述用户输入对获取的区域进行色调变换。
- 根据权利要求22所述的图像处理装置,其中,所述用户输入包括第一用户输入和第二用户输入中的至少一个,其中,第一用户输入用于设置目标色调,第二用户输入用于设置色调变换程度,其中,所述色调变换程度表示所述待编辑图像中将进行色调变换的区域百分比。
- 根据权利要求23所述的图像处理装置,其中,处理模块被配置为:基于所述色调变换程度确定获取的区域中将被进行色调变换的区域;和/或将确定的将被进行色调变换的区域的色调变换为所述目标色调。
- 根据权利要求15所述的图像处理装置,其中,在获取的区域中包括人体的情况下,处理模块被配置为:利用皮肤检测算法提取该区域中的人体裸露皮肤区域;对所述人体裸露皮肤区域保留原始色调。
- 根据权利要求24所述的图像处理装置,其中,处理模块被配置为:根据所述色调变换程度来确定包括在获取的区域中的多个色调区域中将进行色调变换的色调区域的数量N,其中,N大于或等于1;按照所述多个色调区域中的每个色调区域所包括的像素数从大到小的顺序进行色调排序;将色调排序中的前N个色调区域确定为所述将被进行色调变换的区域。
- 根据权利要求26所述的图像处理装置,其中,处理模块被配置为:在所述色调变换程度小于或等于第一值的情况下,将数量N确定为1;在所述色调变换程度大于或等于第二值的情况下,将数量N确定为全部色调区域的数量;在所述色调变换程度大于第一值并且小于第二值的情况下,将数量N确定为以下值:按照色调排序从前开始依次计算每个色调区域所包括的像素数分别与所述待编辑图像中的 有效像素数的比例,直至所述比例之和大于或等于所述色调变换程度的色调区域数量。
- 根据权利要求27所述的图像处理装置,其中,在所述待编辑图像不包括人体的情况下,所述有效像素包括所述待编辑图像中的全部像素;在所述待编辑图像包括人体的情况下,所述有效像素包括所述待编辑图像中的除人体裸露皮肤区域的像素之外的像素。
- 一种电子设备,其中,包括:处理器;用于存储所述处理器可执行指令的存储器;其中,所述处理器被配置为执行所述指令,实现以下步骤:获得待编辑图像;确定所述待编辑图像的至少一种色调;从所述待编辑图像中获取与所述至少一种色调相应的区域;以及对获取的区域进行色调变换。
- 一种计算机可读存储介质,当所述计算机可读存储介质中的指令由电子设备的处理器执行时,使得电子设备能够执行以下步骤:获得待编辑图像;确定所述待编辑图像的至少一种色调;从所述待编辑图像中获取与所述至少一种色调相应的区域;以及对获取的区域进行色调变换。
- 一种计算机程序产品,包括计算机指令,其中,所述计算机指令被处理器执行时实现以下步骤:获得待编辑图像;确定所述待编辑图像的至少一种色调;从所述待编辑图像中获取与所述至少一种色调相应的区域;以及对获取的区域进行色调变换。
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110099560.7A CN112950453B (zh) | 2021-01-25 | 2021-01-25 | 图像处理方法和图像处理装置 |
CN202110099560.7 | 2021-01-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022156196A1 (zh) | 2022-07-28 |
Family
ID=76236543
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/112857 WO2022156196A1 (zh) | 2021-01-25 | 2021-08-16 | 图像处理方法和图像处理装置 |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN112950453B (zh) |
WO (1) | WO2022156196A1 (zh) |
Families Citing this family (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112950453B (zh) * | 2021-01-25 | 2023-10-20 | 北京达佳互联信息技术有限公司 | 图像处理方法和图像处理装置 |
CN114676360B (zh) * | 2022-03-23 | 2024-09-17 | 腾讯科技(深圳)有限公司 | 图像处理方法、装置、电子设备及存储介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20150221097A1 (en) * | 2014-02-05 | 2015-08-06 | Electronics And Telecommunications Research Institute | Harmless frame filter, harmful image blocking apparatus having the same, and method for filtering harmless frames |
CN104935784A (zh) * | 2014-03-19 | 2015-09-23 | 富士施乐株式会社 | 图像处理设备和图像处理方法 |
CN107845076A (zh) * | 2017-10-31 | 2018-03-27 | 广东欧珀移动通信有限公司 | 图像处理方法、装置、计算机可读存储介质和计算机设备 |
CN107909553A (zh) * | 2017-11-02 | 2018-04-13 | 青岛海信电器股份有限公司 | 一种图像处理方法及设备 |
CN112950453A (zh) * | 2021-01-25 | 2021-06-11 | 北京达佳互联信息技术有限公司 | 图像处理方法和图像处理装置 |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4837965B2 (ja) * | 2005-09-28 | 2011-12-14 | ソニー株式会社 | 色調整装置、表示装置及び印刷装置 |
US9083918B2 (en) * | 2011-08-26 | 2015-07-14 | Adobe Systems Incorporated | Palette-based image editing |
JP2015154194A (ja) * | 2014-02-13 | 2015-08-24 | 株式会社リコー | 画像処理装置、画像処理システム、画像処理方法、プログラム及び記録媒体 |
CN107424198B (zh) * | 2017-07-27 | 2020-03-27 | Oppo广东移动通信有限公司 | 图像处理方法、装置、移动终端及计算机可读存储介质 |
CN107833620A (zh) * | 2017-11-28 | 2018-03-23 | 北京羽医甘蓝信息技术有限公司 | 图像处理方法和图像处理装置 |
CN111198956A (zh) * | 2019-12-24 | 2020-05-26 | 北京达佳互联信息技术有限公司 | 一种多媒体资源的互动方法、装置、电子设备及存储介质 |
- 2021-01-25 CN CN202110099560.7A patent/CN112950453B/zh active Active
- 2021-08-16 WO PCT/CN2021/112857 patent/WO2022156196A1/zh active Application Filing
Also Published As
Publication number | Publication date |
---|---|
CN112950453B (zh) | 2023-10-20 |
CN112950453A (zh) | 2021-06-11 |
Legal Events
Date | Code | Title | Description
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21920576; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established | Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.11.2023) |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 21920576; Country of ref document: EP; Kind code of ref document: A1 |