WO2023273111A1 - Image processing method and apparatus, computer device, and storage medium - Google Patents
Image processing method and apparatus, computer device, and storage medium
- Publication number
- WO2023273111A1 · PCT/CN2021/132511 · CN2021132511W
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- target
- image
- target pixel
- color value
- initial
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/90—Determination of colour characteristics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/001—Texturing; Colouring; Generation of texture or colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/11—Region-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/26—Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/56—Extraction of image or video features relating to colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/70—Labelling scene content, e.g. deriving syntactic or semantic representations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/46—Colour picture communication systems
- H04N1/56—Processing of colour picture signals
- H04N1/60—Colour correction or control
- H04N1/62—Retouching, i.e. modification of isolated colours only or in isolated picture areas only
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10024—Color image
Definitions
- the present disclosure relates to the technical field of image processing, and in particular to an image processing method, device, computer equipment and storage medium.
- Image processing is becoming increasingly diverse. In many scenarios there is a need to change the color of a specific area of an obtained user image that corresponds to a target object, such as the hair area, so as to beautify the user image.
- Embodiments of the present disclosure at least provide an image processing method, device, computer equipment, and storage medium, so as to enhance the color change effect of a specific region, so that the color change effect is more natural.
- An embodiment of the present disclosure provides an image processing method, including: determining a target area corresponding to a target object in an image to be processed; determining, based on initial color values of at least some target pixels corresponding to the target area, an initial brightness corresponding to the at least some target pixels; and adjusting the initial color values of the at least some target pixels based on the initial brightness corresponding to the at least some target pixels and acquired preset color conversion information of the target area, to obtain a target image.
- The determining of the initial brightness corresponding to at least some target pixels based on the initial color values of the at least some target pixels corresponding to the target area includes: for each of the at least some target pixels, determining a lightness corresponding to the target pixel based on the initial color value of the target pixel; and determining the initial brightness corresponding to the target pixel based on that lightness.
- In this way, lightness can reflect light intensity, brightness is determined by light intensity and illuminated area, and the color value of a pixel can accurately reflect the lightness corresponding to that color value. Therefore, based on the initial color value of a target pixel, the lightness corresponding to the target pixel is first determined, and the initial brightness corresponding to the target pixel is then determined from that lightness, which improves the accuracy of the determined initial brightness.
- The determining of the initial brightness corresponding to the target pixel based on the lightness corresponding to the target pixel includes: acquiring a preset lightness threshold; screening, based on the preset lightness threshold and the lightness corresponding to the target pixel, a preset brightness conversion rule matching the target pixel; and determining the initial brightness corresponding to the target pixel based on the screened brightness conversion rule and the lightness corresponding to the target pixel.
- The adjusting of the initial color values of at least some target pixels to obtain a target image includes: for each of the at least some target pixels, determining a target color value of the target pixel based on the initial brightness corresponding to the target pixel, the initial color value of the target pixel, and the preset color conversion information; and adjusting the initial color value of the target pixel based on the target color value of the target pixel, to obtain the target image.
- The preset color conversion information includes a preset color value and color conversion degree information; the determining of the target color value of the target pixel based on the initial brightness corresponding to the target pixel, the initial color value of the target pixel and the preset color conversion information includes: determining a fused color value of the target pixel based on the initial color value of the target pixel, the preset color value and the color conversion degree information; and determining the target color value of the target pixel based on the fused color value and the initial brightness.
- The determining of the target area corresponding to the target object in the image to be processed includes: performing semantic segmentation on the image to be processed to obtain a segmented image corresponding to the target object; and determining, based on the segmented image, the target area corresponding to the target object in the image to be processed. Before the initial brightness corresponding to at least some target pixels is determined based on the initial color values of the at least some target pixels corresponding to the target area, the method further includes a step of determining the target pixels: taking each pixel in the target area as a target pixel corresponding to the target object.
- The determining of the target area corresponding to the target object in the image to be processed includes: acquiring an image of a target person and using it as the image to be processed, and performing semantic segmentation on the image to be processed to determine the target area corresponding to the target object, where the target area includes at least one of a human hair area, a human skin area, and at least part of a clothing area in the target person image; or acquiring an image of a target object and using it as the image to be processed, and performing semantic segmentation on the image to be processed to determine the target area corresponding to the target object, where the target area is at least part of an object area in the target object image.
- an embodiment of the present disclosure further provides an image processing device, including:
- a first determination module configured to determine a target area corresponding to the target object in the image to be processed
- the second determination module is configured to determine the initial brightness corresponding to at least part of the target pixel points based on the initial color values of at least part of the target pixel points corresponding to the target area;
- the adjustment module is configured to adjust the initial color values of at least some of the target pixels based on the initial brightness corresponding to at least some of the target pixels and the acquired preset color conversion information of the target area to obtain a target image.
- An embodiment of the present disclosure further provides a computer device, including a processor and a memory, where the memory stores machine-readable instructions executable by the processor, and the processor is configured to execute the machine-readable instructions stored in the memory; when the machine-readable instructions are executed by the processor, the steps in the above first aspect, or in any possible implementation of the first aspect, are performed.
- Embodiments of the present disclosure further provide a computer-readable storage medium on which a computer program is stored; when the computer program is run, the steps in the above first aspect, or in any possible implementation of the first aspect, are performed.
- An embodiment of the present disclosure provides a computer program product, wherein the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to enable the computer to execute the image processing method according to the embodiment of the present disclosure.
- There is a relationship between the color value of a pixel and the brightness corresponding to that pixel. Therefore, based on the initial color values of the target pixels in the target area corresponding to the target object, the initial brightness corresponding to each target pixel can be accurately determined, and the brightness of a pixel can reflect the light intensity at that pixel. Using the determined initial brightness and the color conversion information to adjust the initial color values of the target pixels makes the adjusted color values match the light intensity, so the color change looks more natural and realistic and the color change effect on the target object is enhanced.
- FIG. 1 shows a flowchart of an image processing method provided by an embodiment of the present disclosure
- FIG. 2 shows a flow chart of a method for determining initial brightness provided by an embodiment of the present disclosure
- FIG. 3 shows a flow chart of a method for determining initial brightness based on the lightness corresponding to a target pixel according to an embodiment of the present disclosure
- FIG. 4 shows a flow chart of determining a target color value provided by an embodiment of the present disclosure
- FIG. 5 shows a flow chart of a method for determining a target area provided by an embodiment of the present disclosure
- FIG. 6 shows a schematic diagram of acquiring a mask image provided by an embodiment of the present disclosure
- FIG. 7 shows a schematic diagram of an image processing device provided by an embodiment of the present disclosure.
- FIG. 8 shows a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
- the present disclosure provides an image processing method, device, computer equipment, and storage medium.
- Based on the initial color value of each target pixel, the initial brightness corresponding to that pixel can be accurately determined.
- The brightness of a pixel can reflect the light intensity at that pixel.
- Adjusting the initial color value of a target pixel using its initial brightness and the color conversion information makes the adjusted color value match the light intensity, which makes the color change look more natural and realistic and enhances the color change effect on the target object.
- the execution subject of the image processing method provided in the embodiments of the present disclosure is generally a computer device with a certain computing power.
- the image processing method may be realized by calling a computer-readable instruction stored in a memory by a processor.
- As shown in FIG. 1, which is a flow chart of an image processing method provided by an embodiment of the present disclosure, the method may include the following steps:
- S101 Determine a target area corresponding to a target object in an image to be processed.
- the image to be processed may be an overall image including the target object captured by the camera, or may be an overall image corresponding to any frame of video including the target object captured by the camera.
- the target object corresponds to a part of the image to be processed, and may be an object in the image to be processed that requires color adjustment.
- the object may be the whole or a part of an object or a person, such as hair, clothing, and the like.
- the image to be processed may be an image containing hair, and the hair may be human hair or animal hair.
- The method provided by the embodiments of the present disclosure can be applied to a beautification application.
- A beautification request for performing beautification on the target object in the image to be processed can be submitted in the beautification application.
- In response to the beautification request, the beautification application can process the image to be processed: for example, perform semantic segmentation on the image to be processed, determine the target object in the image, determine the partial image corresponding to the target object in the image to be processed, and then take the area corresponding to that partial image as the target area corresponding to the target object.
- The color values of the target pixels corresponding to the target area can then be adjusted, thereby beautifying the image to be processed.
- the initial color value is the original color value of the target pixel corresponding to the target object, and the initial brightness corresponding to the target pixel can be determined by the light intensity corresponding to the target pixel.
- all pixels in the image to be processed corresponding to the target area may be used as target pixel points corresponding to the target object.
- At least some of the target pixels may be selected, and the initial color values of the selected target pixels determined. Then, based on the conversion relationship between color value and brightness and the initial color value of each target pixel, the initial brightness corresponding to each of the at least some target pixels can be determined, and therefore the light intensity corresponding to each target pixel.
- the preset color conversion information is information determined by the user for changing the color of the target object, and is aimed at changing the initial color values of all target pixel points corresponding to the target object.
- the preset color conversion information includes preset color value and color conversion degree information, wherein the preset color value is a color value after adjusting the color of the target pixel preset by the user.
- Since the preset color value cannot reflect the light intensity corresponding to a target pixel, the preset color value needs to be converted into a target color value before the target pixel is processed, so that a more natural target image is obtained.
- the color conversion degree information is used to characterize the degree of color change determined by the user to change the color of the target pixel, for example, the degree of change is 80%, 90% and so on.
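- For illustration, the preset color conversion information can be thought of as a small record holding the preset color value and the conversion degree. The sketch below is a minimal Python container; the field names are assumptions chosen for illustration, not terms defined by the disclosure.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class ColorConversionInfo:
    """Illustrative container for the preset color conversion information."""
    preset_color: Tuple[float, float, float]  # desired color after adjustment, RGB in [0, 1]
    degree: float                             # color conversion degree, e.g. 0.8 for an 80% change

# e.g. a dark-brown target color applied at 80% strength
info = ColorConversionInfo(preset_color=(0.35, 0.2, 0.1), degree=0.8)
```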
- When the server side determines that the user wants to adjust the color of the target object in the image to be processed, the preset color conversion information specified by the user for the target area corresponding to the target object can be acquired.
- the step of acquiring preset color conversion information may be performed before the step of determining the target area, after the step of determining the target area, or simultaneously with the step of determining the target area, which is not limited here.
- a target color value corresponding to each of the at least some of the target pixels can be determined.
- Since the brightness of a pixel is determined by the light intensity at that pixel, the brightness can reflect the light intensity, so a target color value determined based on the initial brightness matches the light intensity corresponding to that initial brightness.
- an initial color value of each target pixel in at least a part of the target pixels may be adjusted by using the determined target color value corresponding to each target pixel in at least a part of the target pixels.
- The initial color value of each of the at least some target pixels can be adjusted to its target color value; then, based on the at least some target pixels with adjusted color values, the color of the target object is updated, yielding the updated target object.
- the color of each target pixel corresponding to the target object in the target image is the color represented by the target color value corresponding to the target pixel.
- In this way, the initial brightness corresponding to each target pixel can be accurately determined, and the brightness of a pixel can reflect the light intensity at that pixel.
- Adjusting the initial color value of each target pixel using the determined initial brightness and the color conversion information makes the adjusted color value match the light intensity, so the color change looks more natural and realistic and the color change effect on the target object is enhanced.
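- The overall flow of S101 to S103 can be sketched as follows in Python with NumPy. This is a minimal sketch assuming an RGB image stored as an H×W×3 array and a binary segmentation mask; `segment_target` is an illustrative placeholder, and `rgb_to_lightness`, `lightness_to_brightness` and `blend_color` are sketched further below. None of these names are APIs defined by the disclosure.

```python
import numpy as np

def process_image(image, preset_color, degree,
                  segment_target, rgb_to_lightness, lightness_to_brightness, blend_color):
    """Illustrative three-step flow: S101 segment, S102 estimate brightness, S103 recolor."""
    # S101: binary mask of the target area (e.g. the hair region), same H x W as the image.
    mask = segment_target(image)                            # boolean array, True inside the target area
    init_colors = image[mask].astype(np.float32) / 255.0    # (N, 3) initial color values of target pixels

    # S102: per-pixel lightness from the RGB values, then the initial brightness.
    lightness = rgb_to_lightness(init_colors)               # shape (N,)
    brightness = lightness_to_brightness(lightness)         # shape (N,)

    # S103: fuse the initial colors with the preset color, modulated by the initial brightness.
    target_colors = blend_color(init_colors, preset_color, degree, brightness)

    result = image.copy()
    result[mask] = np.clip(target_colors * 255.0, 0.0, 255.0).astype(image.dtype)
    return result
```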
- In one embodiment, for S102, the initial brightness corresponding to a target pixel can be determined according to the method shown in FIG. 2, which is a flowchart of a method for determining initial brightness provided by an embodiment of the present disclosure and may include the following steps:
- S201: For each of the at least some target pixels, determine the lightness corresponding to the target pixel based on the initial color value of the target pixel.
- Here, lightness can reflect light intensity, brightness is determined by light intensity and illuminated area, and the color value of a pixel can accurately reflect the lightness corresponding to that color value.
- Lightness and brightness can be converted to each other and affect each other. As the lightness increases or decreases, the brightness will change accordingly.
- The image to be processed can be an RGB (Red, Green, Blue) image, and the initial color value of a target pixel can include a color value for the R channel, a color value for the G channel and a color value for the B channel.
- For each of the at least some target pixels, the color value of each of the R, G and B channels can be obtained from the initial color value of that target pixel.
- Then, according to the lightness conversion rule between a pixel's color value and its lightness, and using the per-channel color values of the initial color value, the lightness corresponding to the initial color value can be determined.
- The conversion rule between a pixel's color value and its lightness can be the following Formula 1: Y = R_lin·A1 + G_lin·A2 + B_lin·A3, where Y represents the lightness; R_lin, G_lin and B_lin represent the color values of the R, G and B channels of the initial color value, respectively; and A1, A2 and A3 represent the lightness conversion coefficients corresponding to the color values of the R, G and B channels, respectively.
- The value range of A1 may be (0.2101, 0.2205), the value range of A2 may be (0.7145, 0.7211), and the value range of A3 may be (0.0671, 0.0750). In each calculation, the sum of A1, A2 and A3 may be equal to 1.
- Using Formula 1 and the per-channel color values of each target pixel's initial color value, the lightness corresponding to each of the at least some target pixels can be determined; the lightness determined by Formula 1 satisfies Y ∈ [0, 1].
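- A minimal NumPy sketch of Formula 1 is shown below. The specific coefficient values are an assumption chosen inside the ranges quoted above (they happen to be the Rec. 709 luma weights); the disclosure only constrains the ranges and that the coefficients sum to 1.

```python
import numpy as np

A1, A2, A3 = 0.2126, 0.7152, 0.0722   # within (0.2101, 0.2205), (0.7145, 0.7211), (0.0671, 0.0750); sum to 1

def rgb_to_lightness(rgb):
    """Formula 1: Y = R_lin*A1 + G_lin*A2 + B_lin*A3, with RGB values normalized to [0, 1]."""
    rgb = np.asarray(rgb, dtype=np.float32)
    return rgb[..., 0] * A1 + rgb[..., 1] * A2 + rgb[..., 2] * A3   # Y in [0, 1]
```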
- For any target pixel whose lightness has been determined, the brightness of that target pixel can be determined based on its lightness and the conversion rule between lightness and brightness, and the determined brightness can be used as the initial brightness corresponding to that target pixel.
- In one embodiment, for S202, the initial brightness can be determined according to the method shown in FIG. 3, which is a flowchart of a method for determining initial brightness based on the lightness corresponding to a target pixel provided by an embodiment of the present disclosure and may include the following steps:
- S301: Acquire a preset lightness threshold.
- Here, to improve the accuracy of the brightness conversion, different lightness ranges may correspond to different brightness conversion rules; for the same lightness, converting it with different brightness conversion rules yields different brightness values.
- The preset lightness threshold is the boundary value between the different lightness ranges; therefore, in the process of determining the initial brightness based on the lightness corresponding to a target pixel, the preset lightness threshold needs to be acquired first.
- Different lightness ranges correspond to different brightness conversion rules, different brightness conversion rules may in turn correspond to different brightness conversion coefficients, and the same brightness conversion rule may correspond to at least one brightness conversion coefficient.
- When a brightness conversion rule is used to determine the brightness corresponding to a given lightness, the lightness range to which the lightness belongs can first be determined using the preset lightness threshold and the lightness, and the brightness conversion rule corresponding to that lightness range can then be determined.
- The lightness ranges may include a range greater than the preset lightness threshold and a range smaller than or equal to the preset lightness threshold.
- The brightness corresponding to the lightness can then be determined using the lightness and the brightness conversion coefficients of the determined brightness conversion rule.
- For example, the preset lightness threshold may take any value in the interval (0.008056, 0.09016).
- The brightness conversion rules corresponding to the different lightness ranges can be expressed as Formula 2, which defines two rules over the two lightness ranges: L = Y·B1 for Y ≤ B4, and, for Y > B4, a rule parameterized by B2, B3, M and N (the closed form of this second rule appears as an embedded formula image in the original publication and is not reproduced in the text).
- Here, L represents the brightness; B1 represents the brightness conversion coefficient of the rule Y·B1; B2 and B3 represent the first and second brightness conversion coefficients of the rule for Y > B4; M and N represent the third and fourth brightness conversion coefficients of that rule; B4 represents the preset lightness threshold; and Y ≤ B4 and Y > B4 represent the different lightness ranges.
- The value range of L can be (0, 1), the value range of B1 can be (800, 1000), the value range of B2 can be (100, 130), the value range of B3 can be (8, 23), the value range of B4 is (0.008056, 0.09016), and M and N can take any values with M greater than N.
- For each of the at least some target pixels, the magnitude relationship between the lightness of that target pixel and the preset lightness threshold can be determined, and the brightness conversion rule matching the target pixel is then screened according to that relationship.
- The screened brightness conversion rule is used to convert the lightness, the brightness corresponding to the lightness is obtained, and this brightness is taken as the initial brightness corresponding to the target pixel.
- In this way, the initial brightness corresponding to each of the at least some target pixels can be determined.
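- The sketch below illustrates one possible reading of Formula 2. The coefficient values are assumptions chosen inside the quoted ranges, and, because the closed form of the rule for Y > B4 is not reproduced in the text, a CIE-L*-style cube-root form (with the result rescaled to (0, 1)) is assumed purely for illustration.

```python
import numpy as np

B1 = 903.3        # in (800, 1000)
B2 = 116.0        # in (100, 130)
B3 = 16.0         # in (8, 23)
B4 = 0.008856     # preset lightness threshold, in (0.008056, 0.09016)
M, N = 3.0, 1.0   # exponents with M > N, so Y ** (N / M) is a cube root (assumed)

def lightness_to_brightness(y):
    """Formula 2 (sketch): piecewise conversion of lightness Y into brightness L in (0, 1)."""
    y = np.asarray(y, dtype=np.float32)
    low = y * B1                                  # rule for Y <= B4
    high = B2 * np.power(y, N / M) - B3           # assumed rule for Y > B4
    return np.where(y <= B4, low, high) / 100.0   # rescale so that L falls in (0, 1)
```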
- the target image can be obtained according to the following steps:
- Step 1 For each target pixel in at least some of the target pixels, determine the target color value of the target pixel based on the initial brightness corresponding to the target pixel, the initial color value of the target pixel, and preset color conversion information.
- Here, each of the at least some target pixels may correspond to its own target color value, where the target color value is the color value of the target pixel after its initial color value has been adjusted.
- For each of the at least some target pixels, the initial color value of the target pixel can be converted according to the initial color value, the initial brightness corresponding to the target pixel and the preset color conversion information, so as to obtain a target color value matching the initial brightness and the preset color conversion information.
- For any target pixel, the preset color value in the preset color conversion information can first be converted according to the initial brightness of the target pixel, so that the preset color value is converted into a first color value matching the initial brightness.
- The initial color value can then be converted according to the initial brightness of the target pixel, so that the initial color value is converted into a second color value matching the initial brightness.
- Afterwards, the color conversion degree information in the preset color conversion information can be used to convert the first color value into a first fused color value and to convert the second color value into a second fused color value.
- A target color value matching the initial brightness and the preset color conversion information can then be determined using the first fused color value, the second fused color value and the initial brightness.
- Step 2 Based on the target color value of the target pixel, the initial color value of the target pixel is adjusted to obtain the target image.
- In this step, for each target pixel, after its target color value has been determined, the color value of that pixel in the target area can be adjusted to the target color value, giving the adjusted color value corresponding to the target pixel; the color value of the target pixel is thus adjusted from the initial color value to the target color value.
- In this way, according to the determined target color value of each of the at least some pixels corresponding to the target area, the color value of each such pixel in the target area can be adjusted from its initial color value to its target color value, completing the adjustment of the color values of the at least some target pixels and yielding the adjusted target image.
- The color values of all pixels in the image to be processed reflect the texture of the image to be processed, and if the color value of a pixel changes, the corresponding texture also changes. Therefore, for the obtained target image, since its pixels include pixels with adjusted color values, the texture corresponding to the target image is likewise the adjusted texture.
- the preset color conversion information includes preset color values and color conversion degree information.
- The target color value can be determined according to the process shown in FIG. 4, which is a flow chart for determining a target color value provided by an embodiment of the present disclosure and may include the following steps:
- S401 Determine the fused color value of the target pixel based on the initial color value, preset color value and color conversion degree information of the target pixel.
- the fused color value is a color value that matches the color conversion degree information.
- the preset color value and the initial color value may be mixed to obtain a fusion color value matching the color conversion degree information.
- For example, a first fusion, matching the color conversion degree, can be performed on the preset color value and the initial color value to obtain a third fused color value.
- The initial color value can also be converted using the color conversion degree to obtain a third color value matching the color conversion degree, and the preset color value can be converted using the color conversion degree to obtain a fourth color value matching the color conversion degree.
- The color conversion degree can then be used to perform a second fusion on the third color value and the fourth color value to obtain a fourth fused color value matching the color conversion degree.
- The third fused color value and the fourth fused color value can then be fused to obtain the final fused color value corresponding to the target pixel.
- The fusion process for determining the fused color value of a target pixel is not limited to the above method; the fusion mechanism can be set according to developers' needs and is not limited here, and different fusion mechanisms may correspond to different fusion manners.
- S402 Determine the target color value of the target pixel based on the fusion color value and the initial brightness.
- the fused color value of the target pixel can be adjusted according to the initial brightness corresponding to the target pixel to obtain a target color value matching the initial brightness, that is, determine the target color value of the target pixel.
- For example, the color values of the R, G and B channels can be determined from the fused color value. Then, according to the fusion rule between brightness and color value, the fusion coefficient of each channel corresponding to the initial brightness can be determined from the initial brightness.
- Using the determined fusion coefficient of each channel, the per-channel color values of the fused color value and the conversion rule corresponding to each channel, the target color values corresponding to the R, G and B channel color values can be determined respectively.
- In this way, a target color value that corresponds to the fused color value and matches the initial brightness can be determined.
- the color conversion degree information is used to represent the degree of color change of the target pixel determined by the user.
- The initial color value corresponding to the target pixel and the preset color value are fused so that the obtained fused color value matches the change degree indicated by the color conversion degree information; the initial brightness and the fused color value are then used to determine the target color value of the target pixel. The obtained target color value therefore matches both the light intensity corresponding to the initial brightness and the change degree indicated by the color conversion degree information, which not only makes the color change look more natural and enhances the color change effect on the target object, but also meets the user's color change requirements.
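- The sketch below is one concrete reading of S401 and S402. The disclosure does not fix the exact fusion formulas, so a plain linear mix weighted by the conversion degree is assumed for S401, and a per-channel scaling by the initial brightness is assumed for S402; both are illustrative choices, not the defined method.

```python
import numpy as np

def blend_color(init_rgb, preset_rgb, degree, brightness):
    """Sketch of S401 (fused color value) and S402 (target color value)."""
    init_rgb = np.asarray(init_rgb, dtype=np.float32)        # (N, 3) initial color values in [0, 1]
    preset_rgb = np.asarray(preset_rgb, dtype=np.float32)    # (3,) preset color value in [0, 1]
    brightness = np.asarray(brightness, dtype=np.float32)    # (N,) initial brightness in (0, 1)

    # S401: fuse the initial color with the preset color according to the conversion degree.
    fused = (1.0 - degree) * init_rgb + degree * preset_rgb

    # S402: modulate each channel by the pixel's initial brightness so the result tracks the lighting.
    return np.clip(fused * brightness[:, None], 0.0, 1.0)    # (N, 3) target color values
```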
- the target area may be determined according to the following steps, as shown in FIG. 5 , which is a flow chart of a method for determining the target area provided by an embodiment of the present disclosure, which may include the following steps:
- S501 Perform semantic segmentation on the image to be processed to obtain a segmented image corresponding to the target object.
- Here, a trained semantic segmentation neural network can be used to perform semantic segmentation on the image to be processed and determine the semantics of each pixel; based on the determined semantics of each pixel, the segmented image corresponding to the target object can then be obtained.
- the obtained segmented image can be a mask image.
- As shown in FIG. 6, which is a schematic diagram of acquiring a mask image provided by an embodiment of the present disclosure, image A represents an image to be processed and image B represents the mask image corresponding to the target object (hair) in the image to be processed.
- S502 Based on the segmented image, determine a target area corresponding to the target object in the image to be processed.
- Here, the position coordinates of each pixel corresponding to the target object in the segmented image can be determined.
- Based on the image feature information of the segmented image and of the image to be processed, which may include image texture information, image size information, image key point information and the like, each pixel corresponding to the target object in the segmented image can be mapped to the position coordinates of a corresponding pixel in the image to be processed.
- The area of the image to be processed corresponding to the determined position coordinates can then be taken as the target area, so that the target area corresponding to the target object in the image to be processed is determined.
- Before S102, the method further includes a step of determining the target pixels: taking each pixel in the target area as a target pixel corresponding to the target object.
- the target area is composed of all pixels corresponding to the target object, therefore, each pixel in the target area can be directly used as a target pixel corresponding to the target object.
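- As an illustration, once the mask image is available, every pixel inside the mask can be taken as a target pixel. The sketch below is a minimal example, assuming a NumPy boolean mask, that collects their coordinates and initial color values.

```python
import numpy as np

def target_pixels_from_mask(image, mask):
    """Treat every pixel inside the segmentation mask as a target pixel of the target object."""
    mask = np.asarray(mask, dtype=bool)                   # H x W mask from semantic segmentation
    coords = np.argwhere(mask)                            # (N, 2) row/col coordinates of the target area
    init_colors = image[mask].astype(np.float32) / 255.0  # (N, 3) initial color values in [0, 1]
    return coords, init_colors
```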
- In this way, the segmented image obtained by semantic segmentation is used as the image corresponding to the target object, so that the corresponding area in the image to be processed, that is, the area corresponding to the target object, can be accurately determined.
- The determined target pixels therefore do not include pixels other than those corresponding to the target object, which both limits the area whose color needs to be changed and improves the accuracy of the area in which the color change is performed.
- In addition, when the image to be processed corresponds to a frame of a video, determining the target pixels from the obtained segmented image makes it possible to determine, frame by frame, the segmented image corresponding to the target object in each frame, which improves the accuracy of the target pixels determined in the image corresponding to each frame.
- The color of the target pixels in the image corresponding to each frame can then be adjusted, that is, the color of the target object in the video can be adjusted to obtain a target video.
- The target pixels in the image corresponding to each frame of the target video match the position of the target object, so the texture of the image corresponding to each frame of the target video remains stable.
- The acquired image to be processed may be an on-site image captured by an Augmented Reality (AR) device; when the AR device captures the on-site image, the on-site image may be used directly as the image to be processed, so that images to be processed are acquired in real time.
- the AR device may be a smart terminal with AR function held by the user, which may include but not limited to: electronic devices capable of presenting augmented reality effects such as mobile phones, tablet computers, and AR glasses.
- the obtained target image can also be displayed by an AR device, which can realize real-time color change of the target object in the AR scene.
- the target area may also be determined according to the following steps:
- Step 1 Obtain an image of a target person, and use the image of the target person as an image to be processed.
- the acquired image to be processed may be an acquired target person image.
- Step 2 Perform semantic segmentation on the image to be processed, and determine the target area corresponding to the target object in the image to be processed.
- the target area corresponding to the target object may include at least one of the human hair area, human skin area, human eye area, and at least part of the clothing area in the image of the target person.
- Correspondingly, the target object may include at least one of hair, skin, eyes and at least part of the clothing.
- Take the target object being hair, with the corresponding target area being the human hair area, as an example.
- The image to be processed can be semantically segmented to determine the semantics of each pixel in the image to be processed; based on the semantics of the pixels, the pixels whose semantics are hair can be determined, and the human hair area corresponding to the hair can then be determined.
- the acquired image to be processed may also be an acquired image of the target object.
- the target area corresponding to the target object may be at least part of the object area in the target object image.
- the target area corresponding to the target object may be a desktop area, a screen area, a floor area, a tree area, a trunk area of a tree, a leaf area, or an entire area of a tree.
- The image to be processed can be semantically segmented to determine the semantics of each pixel in the image to be processed; based on the semantics of the pixels, the pixels whose semantics are desktop can be determined, and the desktop area corresponding to the desktop can then be determined.
- the image of the target person or the target object is used as the image to be processed, so as to realize the color adjustment of the target area of the target object, such as the color adjustment of the human hair area, human skin area, clothing area or object area.
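- As a brief illustration, when the segmentation model outputs a per-pixel class label map, building the target area for any of these cases (hair, skin, clothing, desktop, and so on) reduces to selecting the pixels of the requested class. The class ids below are placeholders; the real ids depend on the segmentation model actually used.

```python
import numpy as np

HAIR_ID, SKIN_ID, CLOTHES_ID, DESKTOP_ID = 1, 2, 3, 4   # placeholder class ids

def mask_for_class(label_map, class_id):
    """Build the target-area mask by selecting pixels whose predicted semantics equal the class."""
    return np.asarray(label_map) == class_id             # boolean H x W mask

# e.g. hair_mask = mask_for_class(labels, HAIR_ID)
```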
- Based on the same inventive concept, an embodiment of the present disclosure also provides an image processing device corresponding to the image processing method. Since the problem-solving principle of the device is similar to that of the above image processing method, the implementation of the device can refer to the implementation of the method, and repeated descriptions are omitted.
- As shown in FIG. 7, which is a schematic diagram of an image processing device provided by an embodiment of the present disclosure, the device includes:
- the first determining module 701 is configured to determine the target area corresponding to the target object in the image to be processed
- the second determination module 702 is configured to determine the initial brightness corresponding to at least part of the target pixel points based on the initial color values of at least part of the target pixel points corresponding to the target area;
- the adjustment module 703 is configured to adjust the initial color values of at least part of the target pixel points based on the initial brightness corresponding to at least part of the target pixel points and the acquired preset color conversion information of the target area to obtain a target image .
- The second determination module 702 includes: a first determination submodule configured to, for each of the at least some target pixels, determine the lightness corresponding to the target pixel based on the initial color value of the target pixel;
- and a second determination submodule configured to determine the initial brightness corresponding to the target pixel based on the lightness corresponding to the target pixel.
- The second determination submodule is configured to: acquire a preset lightness threshold; screen, based on the preset lightness threshold and the lightness corresponding to the target pixel, a preset brightness conversion rule matching the target pixel; and determine the initial brightness corresponding to the target pixel based on the screened brightness conversion rule and the lightness corresponding to the target pixel.
- The adjustment module 703 includes: a third determination submodule configured to, for each of the at least some target pixels, determine the target color value of the target pixel based on the initial brightness corresponding to the target pixel, the initial color value of the target pixel and the preset color conversion information;
- the adjustment submodule is configured to adjust the initial color value of the target pixel based on the target color value of the target pixel to obtain a target image.
- the preset color conversion information includes preset color value and color conversion degree information
- The third determination submodule is configured to: determine the fused color value of the target pixel based on the initial color value of the target pixel, the preset color value and the color conversion degree information; and determine the target color value of the target pixel based on the fused color value and the initial brightness.
- the first determining module 701 is further configured to perform semantic segmentation on the image to be processed to obtain a segmented image corresponding to the target object;
- The second determination module 702 is further configured to determine the target pixels according to the following step, before the initial brightness corresponding to at least some target pixels is determined based on the initial color values of the at least some target pixels corresponding to the target area:
- Each pixel in the target area is used as a target pixel corresponding to the target object.
- the first determining module 701 is configured to acquire an image of a target person, and use the image of the target person as the image to be processed;
- where the target area corresponding to the target object includes at least one of the human hair area, the human skin area, and at least part of the clothing area in the target person image; or,
- the target area corresponding to the target object is at least a part of the object area in the target object image.
- FIG. 8 is a schematic structural diagram of a computer device provided by an embodiment of the present disclosure, including:
- The device includes a processor 81 and a memory 82; the memory 82 stores machine-readable instructions executable by the processor 81, and the processor 81 is configured to execute the machine-readable instructions stored in the memory 82.
- When the machine-readable instructions are executed by the processor 81, the processor 81 performs the following steps: S101: determine the target area corresponding to the target object in the image to be processed; S102: determine, based on the initial color values of at least some target pixels corresponding to the target area, the initial brightness corresponding to the at least some target pixels; and S103: adjust the initial color values of the at least some target pixels based on the initial brightness corresponding to the at least some target pixels and the acquired preset color conversion information of the target area, to obtain a target image.
- The above memory 82 includes an internal memory 821 and an external memory 822; the internal memory 821 is configured to temporarily store operation data in the processor 81 and data exchanged with the external memory 822, such as a hard disk, and the processor 81 exchanges data with the external memory 822 through the internal memory 821.
- If the integrated modules described in the embodiments of the present application are implemented in the form of software function modules and sold or used as independent products, they may be stored in a computer storage medium. Based on this understanding, those skilled in the art should understand that the embodiments of the present application may be provided as methods, systems or computer program products. Accordingly, the present application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the embodiments of the present application may take the form of a computer program product implemented on one or more computer storage media containing computer-executable instructions, where the storage media include a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), disk storage, optical storage and the like.
- An embodiment of the present disclosure also provides a computer-readable storage medium, on which a computer program is stored, and when the computer program is run by a processor, the steps of the image processing method described in the above-mentioned method embodiments are executed .
- the storage medium may be a volatile or non-volatile computer-readable storage medium.
- The computer program product of the image processing method provided by the embodiments of the present disclosure includes a computer-readable storage medium storing program code; the instructions included in the program code can be used to execute the steps of the image processing method described in the above method embodiments. Reference may be made to the foregoing method embodiments, and details are not repeated here.
- the computer program product can be realized by hardware, software or a combination thereof.
- In some embodiments of the present disclosure, the computer program product is embodied as a computer storage medium; in other embodiments, the computer program product is embodied as a software product, such as a software development kit (SDK) and the like.
- the working process of the device described above can refer to the corresponding process in the foregoing method embodiments, which will not be repeated here.
- the disclosed devices and methods may be implemented in other ways.
- the device embodiments described above are only illustrative.
- the division of the units is only a logical function division.
- multiple units or components can be combined.
- some features can be ignored, or not implemented.
- the mutual coupling or direct coupling or communication connection shown or discussed may be through some communication interfaces, and the indirect coupling or communication connection of devices or units may be in electrical, mechanical or other forms.
- the units described as separate components may or may not be physically separated, and the components displayed as units may or may not be physical units, that is, they may be located in one place, or may be distributed to multiple network units. Part or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.
- each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, each unit may exist separately physically, or two or more units may be integrated into one unit.
- When the functions are implemented in the form of software function units and sold or used as independent products, they can be stored in a non-volatile computer-readable storage medium executable by a processor.
- Based on this understanding, the technical solution of the present disclosure, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device or the like) to execute all or part of the steps of the methods described in the various embodiments of the present disclosure.
- The aforementioned storage media include: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, and other media that can store program code.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computational Linguistics (AREA)
- Image Processing (AREA)
Abstract
An image processing method and apparatus, a computer device and a storage medium. The method includes: determining a target area corresponding to a target object in an image to be processed (S101); determining, based on initial color values of at least some target pixels corresponding to the target area, an initial brightness corresponding to the at least some target pixels (S102); and adjusting the initial color values of the at least some target pixels based on the initial brightness corresponding to the at least some target pixels and acquired preset color conversion information of the target area, to obtain a target image (S103).
Description
CROSS-REFERENCE TO RELATED APPLICATION
This disclosure is based on, and claims priority to, the Chinese patent application with application number 202110727706.8 filed on June 29, 2021, the entire contents of which are incorporated herein by reference.
The present disclosure relates to the technical field of image processing, and in particular to an image processing method and apparatus, a computer device and a storage medium.
Image processing is becoming increasingly diverse. In many scenarios there is a need to change the color of a specific area of an obtained user image that corresponds to a target object, such as the hair area, so as to beautify the user image.
In the related art, the operation of changing the color of a specific area is mostly implemented by means of stickers or filters. However, both approaches directly replace the initial color of the specific area with the target color, so the color change looks unnatural and the color change effect is poor.
SUMMARY
Embodiments of the present disclosure at least provide an image processing method and apparatus, a computer device and a storage medium, so as to enhance the color change effect of a specific area and make the color change look more natural.
In a first aspect, an embodiment of the present disclosure provides an image processing method, including: determining a target area corresponding to a target object in an image to be processed; determining, based on initial color values of at least some target pixels corresponding to the target area, an initial brightness corresponding to the at least some target pixels; and adjusting the initial color values of the at least some target pixels based on the initial brightness corresponding to the at least some target pixels and acquired preset color conversion information of the target area, to obtain a target image.
In a possible implementation, the determining of the initial brightness corresponding to at least some target pixels based on the initial color values of the at least some target pixels corresponding to the target area includes: for each of the at least some target pixels, determining a lightness corresponding to the target pixel based on the initial color value of the target pixel; and determining the initial brightness corresponding to the target pixel based on the lightness corresponding to the target pixel.
In this way, lightness can reflect light intensity, brightness is determined by light intensity and illuminated area, and the color value of a pixel can accurately reflect the lightness corresponding to that color value. Therefore, based on the initial color value of a target pixel, the lightness corresponding to the target pixel can first be determined, and the initial brightness corresponding to the target pixel can then be determined from that lightness, which improves the accuracy of the determined initial brightness.
In a possible implementation, the determining of the initial brightness corresponding to the target pixel based on the lightness corresponding to the target pixel includes: acquiring a preset lightness threshold; screening, based on the preset lightness threshold and the lightness corresponding to the target pixel, a preset brightness conversion rule matching the target pixel; and determining the initial brightness corresponding to the target pixel based on the screened brightness conversion rule and the lightness corresponding to the target pixel.
In a possible implementation, the adjusting of the initial color values of the at least some target pixels based on the initial brightness corresponding to the at least some target pixels and the acquired preset color conversion information of the target area, to obtain a target image, includes: for each of the at least some target pixels, determining a target color value of the target pixel based on the initial brightness corresponding to the target pixel, the initial color value of the target pixel and the preset color conversion information; and adjusting the initial color value of the target pixel based on the target color value of the target pixel, to obtain the target image.
In a possible implementation, the preset color conversion information includes a preset color value and color conversion degree information; the determining of the target color value of the target pixel based on the initial brightness corresponding to the target pixel, the initial color value of the target pixel and the preset color conversion information includes: determining a fused color value of the target pixel based on the initial color value of the target pixel, the preset color value and the color conversion degree information; and determining the target color value of the target pixel based on the fused color value and the initial brightness.
In a possible implementation, the determining of the target area corresponding to the target object in the image to be processed includes: performing semantic segmentation on the image to be processed to obtain a segmented image corresponding to the target object; and determining, based on the segmented image, the target area corresponding to the target object in the image to be processed. Before the initial brightness corresponding to at least some target pixels is determined based on the initial color values of the at least some target pixels corresponding to the target area, the method further includes a step of determining the target pixels: taking each pixel in the target area as a target pixel corresponding to the target object.
In a possible implementation, the determining of the target area corresponding to the target object in the image to be processed includes: acquiring an image of a target person and using the target person image as the image to be processed, and performing semantic segmentation on the image to be processed to determine the target area corresponding to the target object in the image to be processed, where the target area corresponding to the target object includes at least one of a human hair area, a human skin area and at least part of a clothing area in the target person image; or acquiring an image of a target object and using the target object image as the image to be processed, and performing semantic segmentation on the image to be processed to determine the target area corresponding to the target object in the image to be processed, where the target area corresponding to the target object is at least part of an object area in the target object image.
In a second aspect, an embodiment of the present disclosure further provides an image processing apparatus, including:
a first determination module configured to determine a target area corresponding to a target object in an image to be processed;
a second determination module configured to determine, based on initial color values of at least some target pixels corresponding to the target area, an initial brightness corresponding to the at least some target pixels; and
an adjustment module configured to adjust the initial color values of the at least some target pixels based on the initial brightness corresponding to the at least some target pixels and acquired preset color conversion information of the target area, to obtain a target image.
In a third aspect, an embodiment of the present disclosure further provides a computer device, including a processor and a memory, where the memory stores machine-readable instructions executable by the processor, the processor is configured to execute the machine-readable instructions stored in the memory, and when the machine-readable instructions are executed by the processor, the steps in the above first aspect, or in any possible implementation of the first aspect, are performed.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is run, the steps in the above first aspect, or in any possible implementation of the first aspect, are performed.
An embodiment of the present disclosure provides a computer program product, where the computer program product includes a non-transitory computer-readable storage medium storing a computer program, and the computer program is operable to cause a computer to execute some or all of the steps of the image processing method according to the embodiments of the present disclosure.
In the embodiments of the present disclosure, there is a relationship between the color value of a pixel and the brightness corresponding to that pixel. Therefore, based on the initial color values of the target pixels in the target area corresponding to the target object, the initial brightness corresponding to each target pixel can be accurately determined, and the brightness of a pixel can reflect the light intensity at that pixel. Using the determined initial brightness and the color conversion information to adjust the initial color values of the target pixels makes the adjusted color values match the light intensity, so the color change looks more natural and realistic and the color change effect on the target object is enhanced.
For a description of the effects of the above image processing apparatus, computer device and computer-readable storage medium, reference is made to the description of the above image processing method, which is not repeated here.
To make the above objects, features and advantages of the present disclosure more comprehensible, preferred embodiments are described in detail below with reference to the accompanying drawings.
FIG. 1 shows a flowchart of an image processing method provided by an embodiment of the present disclosure;
FIG. 2 shows a flowchart of a method for determining initial brightness provided by an embodiment of the present disclosure;
FIG. 3 shows a flowchart of a method for determining initial brightness based on the lightness corresponding to a target pixel provided by an embodiment of the present disclosure;
FIG. 4 shows a flowchart of determining a target color value provided by an embodiment of the present disclosure;
FIG. 5 shows a flowchart of a method for determining a target area provided by an embodiment of the present disclosure;
FIG. 6 shows a schematic diagram of acquiring a mask image provided by an embodiment of the present disclosure;
FIG. 7 shows a schematic diagram of an image processing apparatus provided by an embodiment of the present disclosure;
FIG. 8 shows a schematic structural diagram of a computer device provided by an embodiment of the present disclosure.
To make the objects, technical solutions and advantages of the embodiments of the present disclosure clearer, the technical solutions in the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. The components of the embodiments of the present disclosure, as generally described and illustrated here, can be arranged and designed in a variety of different configurations. Therefore, the following detailed description of the embodiments of the present disclosure is not intended to limit the claimed scope of the present disclosure, but merely represents selected embodiments of the present disclosure. All other embodiments obtained by those skilled in the art based on the embodiments of the present disclosure without creative effort fall within the protection scope of the present disclosure.
In addition, the terms "first", "second" and the like in the specification, the claims and the above drawings of the embodiments of the present disclosure are used to distinguish similar objects and are not necessarily used to describe a particular order or sequence. It should be understood that data used in this way are interchangeable where appropriate, so that the embodiments described here can be implemented in an order other than that illustrated or described here.
The term "multiple or several" used herein means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate the three cases of A alone, both A and B, and B alone. The character "/" generally indicates an "or" relationship between the associated objects before and after it.
It has been found through research that image processing is becoming increasingly diverse. In many scenarios there is a need to change the color of a specific area of an obtained user image that corresponds to a target object, such as the hair area, so as to beautify the user image. In the prior art, the operation of changing the color of a specific area is mostly implemented by means of stickers or filters. However, both approaches directly replace the initial color of the specific area with the target color, so the color change looks unnatural and the color change effect is poor.
Based on the above research, the present disclosure provides an image processing method and apparatus, a computer device and a storage medium. There is a relationship between the color value of a pixel and the brightness corresponding to that pixel; therefore, based on the initial color values of the target pixels in the target area corresponding to the target object, the initial brightness corresponding to each target pixel can be accurately determined, and the brightness of a pixel can reflect the light intensity at that pixel. Using the determined initial brightness and the color conversion information to adjust the initial color values of the target pixels makes the adjusted color values match the light intensity, so the color change looks more natural and realistic and the color change effect on the target object is enhanced.
The defects of the above solutions are all results obtained by the inventor after practice and careful research; therefore, the discovery process of the above problems and the solutions proposed below in the present disclosure for the above problems should all be regarded as the inventor's contribution to the present disclosure in the course of the present disclosure.
It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined and explained in subsequent drawings.
To facilitate understanding of this embodiment, an image processing method disclosed in an embodiment of the present disclosure is first described in detail. The execution subject of the image processing method provided in the embodiments of the present disclosure is generally a computer device with certain computing power. In some possible implementations, the image processing method can be implemented by a processor calling computer-readable instructions stored in a memory.
The image processing method provided by the embodiments of the present disclosure is described below by taking a computer device as the execution subject.
As shown in FIG. 1, which is a flowchart of an image processing method provided by an embodiment of the present disclosure, the method may include the following steps:
S101: determine a target area corresponding to a target object in an image to be processed.
Here, the image to be processed may be an overall image that is captured by a camera and includes the target object, or may be an overall image corresponding to any frame of a video that is captured by a camera and includes the target object. The target object corresponds to part of the image to be processed and may be an object in the image to be processed whose color needs to be adjusted; the object may be the whole or a part of an object or a person, such as hair or clothing. For example, the image to be processed may be an image containing hair, and the hair may be human hair or animal fur.
The method provided by the embodiments of the present disclosure can be applied to a beautification application. When a user wants to use the beautification application to adjust the color of a target object in an image to be processed (for example, to adjust the hair color, that is, a hair-dyeing request), a beautification request for performing beautification on the target object in the image to be processed can be submitted in the beautification application.
The beautification application can then process the image to be processed in response to the beautification request: for example, perform semantic segmentation on the image to be processed, determine the target object in the image to be processed, determine the partial image corresponding to the target object in the image to be processed, and then take the area corresponding to that partial image as the target area corresponding to the target object.
Afterwards, the color values of the target pixels corresponding to the target area can be adjusted according to S102 and S103 described below, thereby beautifying the image to be processed.
S102: determine, based on initial color values of at least some target pixels corresponding to the target area, an initial brightness corresponding to the at least some target pixels.
Here, the initial color value is the original color value of a target pixel corresponding to the target object, and the initial brightness corresponding to a target pixel can be determined by the light intensity corresponding to that target pixel.
After the target area corresponding to the target object is determined, all the pixels of the image to be processed within the target area can be taken as target pixels corresponding to the target object.
From the target pixels corresponding to the target object, at least some target pixels can be selected and the initial color values of the selected target pixels determined. Then, based on the conversion relationship between color value and brightness and the initial color value of each target pixel, the initial brightness corresponding to each of the at least some target pixels can be determined, and therefore the light intensity corresponding to each target pixel.
S103: adjust the initial color values of the at least some target pixels based on the initial brightness corresponding to the at least some target pixels and acquired preset color conversion information of the target area, to obtain a target image.
Here, the preset color conversion information is information determined by the user for changing the color of the target object and is directed at changing the initial color values of all target pixels corresponding to the target object. The preset color conversion information includes a preset color value and color conversion degree information, where the preset color value is the user-preset color value of a target pixel after the color adjustment. Since the preset color value cannot reflect the light intensity corresponding to a target pixel, the preset color value needs to be converted into a target color value before the target pixel is processed, so that a more natural target image is obtained.
The color conversion degree information is used to characterize the change degree, determined by the user, to which the color of the target pixels is changed, for example a change degree of 80%, 90% and so on.
When the server side determines that the user wants to adjust the color of the target object in the image to be processed, the preset color conversion information specified by the user for the target area corresponding to the target object can be acquired. The step of acquiring the preset color conversion information may be performed before the step of determining the target area, after the step of determining the target area, or simultaneously with the step of determining the target area, which is not limited here.
Then, based on the preset color conversion information and the initial brightness corresponding to the at least some target pixels, the target color value corresponding to each of the at least some target pixels can be determined. Since the brightness of a pixel is determined by the light intensity at that pixel, the brightness can reflect the light intensity, so a target color value determined based on the initial brightness can match the light intensity corresponding to that initial brightness.
Then, the initial color value of each of the at least some target pixels can be adjusted using the determined target color value corresponding to that target pixel.
The initial color value of each of the at least some target pixels can be adjusted to its target color value; then, based on the at least some target pixels with adjusted color values, the color of the target object is updated, yielding the updated target object. The color of each target pixel corresponding to the target object in the target image is the color represented by the target color value corresponding to that target pixel.
In this way, based on the initial color values of the target pixels in the target area corresponding to the target object, the initial brightness corresponding to each target pixel can be accurately determined, and the brightness of a pixel can reflect the light intensity at that pixel. Using the determined initial brightness and the color conversion information to adjust the initial color values of the target pixels makes the adjusted color values match the light intensity, so the color change looks more natural and realistic and the color change effect on the target object is enhanced.
在一种实施例中,针对S102,可以按照如图2所示的方法确定目标像素点对应的初始亮度,如图2所示,为本公开实施例所提供的一种确定初始亮度的方法的流程图,可以包括以下步骤:
S201:针对至少部分目标像素点中的每个目标像素点,基于目标像素点的初始颜色值,确定目标像素点对应的明度。
这里,明度能够反映光照强度,亮度由光照强度和光照面积决定,像素点的颜色值能够准确反映该颜色值对应的明度。明度和亮度之间可以相互转换,相互影响。随着明度的增大或减小,亮度也会做出相应的改变。
待处理图像可以是RGB(Red Green Blue,红绿蓝)图像,目标像素点的初始颜色值可以包括R通道对应的颜色值、G通道对应的颜色值和B通道对应的颜色值。
针对至少部分目标像素点中的每个目标像素点,可以根据该目标像素点的初始颜色值,确定该初始颜色值对应于R通道对应的颜色值、G通道对应的颜色值和B通道对应的颜色值。
之后,可以根据像素点的颜色值和像素点的明度之间的明度转换规则以及该目标像素点的初始颜色值对应于每个通道的颜色值,利用明度转换规则,确定与该初始颜色值对应的明度。
像素点的颜色值和像素点的明度之间的转换规则可以为如下所示的公式一：
Y = R_lin × A_1 + G_lin × A_2 + B_lin × A_3 （公式一）
其中，Y表示明度，R_lin表示初始颜色值中的R通道对应的颜色值，G_lin表示初始颜色值中的G通道对应的颜色值，B_lin表示初始颜色值中的B通道对应的颜色值，A_1表示与R通道的颜色值相对应的明度转换系数，A_2表示与G通道的颜色值相对应的明度转换系数，A_3表示与B通道的颜色值相对应的明度转换系数。
并且，A_1对应的取值范围可以为(0.2101, 0.2205)，A_2对应的取值范围可以为(0.7145, 0.7211)，A_3对应的取值范围可以为(0.0671, 0.0750)。示例性的，在每次计算的情况下，A_1、A_2和A_3的取值之和可以等于1。
进而,利用公式一和每个目标像素点的初始颜色值对应于每个通道的颜色值,可以确定至少部分目标像素点中的每个目标像素点对应的明度。另外,利用公式一确定的每个目标像素点对应的明度Y∈[0,1]。
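作为便于理解的示意性实现草图（并非对本公开实施例的限定），下面的代码按公式一由初始颜色值计算明度Y；其中明度转换系数取A_1=0.2126、A_2=0.7152、A_3=0.0722（均落在上述取值范围内，且三者之和等于1），函数名与变量名均为示例性假设：

```python
import numpy as np

# 明度转换系数：取值落在公式一给出的取值范围内，且三者之和等于1
A1, A2, A3 = 0.2126, 0.7152, 0.0722

def compute_lightness(rgb: np.ndarray) -> np.ndarray:
    """按公式一由初始颜色值计算明度Y。

    rgb: 形状为 (N, 3) 的数组，每行为一个目标像素点归一化到[0, 1]的
         R、G、B通道颜色值。
    返回: 形状为 (N,) 的明度Y，取值范围为[0, 1]。
    """
    r_lin, g_lin, b_lin = rgb[:, 0], rgb[:, 1], rgb[:, 2]
    return r_lin * A1 + g_lin * A2 + b_lin * A3
```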
S202:基于该目标像素点对应的明度,确定该目标像素点对应的初始亮度。
这里,针对任一个确定了其所对应的明度的目标像素点,可以基于该目标像素点的明度以及明度与亮度之间的转换规则,确定该目标像素点的亮度,并将确定的该亮度作为该目标像素点对应的初始亮度。
在一种实施例中,针对S202,可以按照如图3所示的方法,确定初始亮度,如图3所示,为本公开实施例所提供的一种基于目标像素点对应的明度确定初始亮度的方法的流程图,可以包括以下步骤:
S301:获取预设明度阈值。
这里,为了提高亮度转换的精确度,不同的明度范围可以对应不同的亮度转换规则,针对同一明度,利用不同的亮度转换规则对其进行转换,得到的亮度不同。预设明度阈值为不同明度范围的分界值,因此,在基于目标像素点对应的明度确定初始亮度的过程中,需要先获取预设明度阈值。
S302:基于预设明度阈值和目标像素点对应的明度,筛选预设的与目标像素点匹配的亮度转换规则。
这里，不同的明度范围对应于不同的亮度转换规则，不同的亮度转换规则又可以对应于不同的亮度转换系数，同一个亮度转换规则可以对应于至少一个亮度转换系数。
在利用亮度转换规则确定某一明度对应的亮度的情况下,可以先利用预设明度阈值和明度,确定明度所属的明度范围,再根据明度范围,确定与该明度范围对应的亮度转换规则。其中,明度范围可以包括大于预设明度阈值的范围,和小于或等于预设明度阈值的范围。
然后可以利用确定的亮度转换规则对应的亮度转换系数以及明度,确定明度对应的亮度。
示例性的,预设明度阈值可以取阈值区间(0.008056,0.09016)中的任一数值。不同的明度范围对应的亮度转换规则可以如公式二所示:
公式二为分段的亮度转换规则：在Y≤B_4和Y>B_4这两个不同的明度范围内，分别采用不同的亮度转换规则；其中一条亮度转换规则为L=Y×B_1，另一条亮度转换规则由亮度转换系数B_2、B_3、M和N确定。
其中，L表示亮度；B_1表示与亮度转换规则L=Y×B_1对应的亮度转换系数；B_2、B_3、M和N分别表示与另一条亮度转换规则对应的第一个、第二个、第三个和第四个亮度转换系数；B_4表示预设明度阈值，Y≤B_4和Y>B_4表示不同的明度范围。
并且，L的取值范围可以为(0, 1)，B_1的取值范围可以为(800, 1000)，B_2的取值范围可以为(100, 130)，B_3的取值范围可以为(8, 23)，B_4的取值范围为(0.008056, 0.09016)，M和N可以为任一取值，且M大于N。
针对至少部分目标像素点中的每个目标像素点,基于确定的该目标像素点的明度和获取的预设明度阈值,可以确定该目标像素点的明度和预设明度阈值的大小关系,然后,根据确定的大小关系,筛选与该目标像素点匹配的亮度转换规则。
S303:基于筛选得到的亮度转换规则和目标像素点对应的明度,确定目标像素点对应的初始亮度。
本步骤中,针对筛选得到的亮度转换规则和目标像素点对应的明度,利用确定的亮度转换规则对该明度进行转换,得到该明度对应的亮度,并将该亮度作为该目标像素点对应的初始亮度。
进而,基于S301至S303,可以确定至少部分目标像素点中的每个目标像素点对应的初始亮度。
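下面给出上述分段亮度转换的一个示意性实现草图。其中假设：Y≤B_4的明度范围采用规则L=Y×B_1，Y>B_4的明度范围采用形如L=B_2×Y^(N/M)−B_3的规则，并将结果除以100以归一化到(0, 1)；这些分段对应关系、规则形式、归一化方式以及具体系数取值（均落在上文给出的取值范围内）均为示例性假设，并非本公开的限定实现：

```python
import numpy as np

# 以下系数取值均落在上文给出的取值范围内，分段形式与归一化方式为示例性假设
B1 = 903.3      # 与规则 L = Y * B1 对应的亮度转换系数
B2 = 116.0      # 另一条亮度转换规则的第一个亮度转换系数
B3 = 16.0       # 另一条亮度转换规则的第二个亮度转换系数
B4 = 0.008856   # 预设明度阈值
M, N = 3.0, 1.0 # 第三、第四个亮度转换系数，满足 M > N

def lightness_to_luminance(y: np.ndarray) -> np.ndarray:
    """由明度Y按分段的亮度转换规则计算初始亮度L，并归一化到(0, 1)。"""
    y = np.asarray(y, dtype=np.float64)
    low = y * B1                            # Y <= B4 的明度范围（假设采用该规则）
    high = B2 * np.power(y, N / M) - B3     # Y > B4 的明度范围（规则形式为示例性假设）
    luminance = np.where(y <= B4, low, high)
    return np.clip(luminance / 100.0, 0.0, 1.0)
```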
在一种实施例中,针对S103,可以按照如下所示的步骤得到目标图像:
步骤一、针对至少部分目标像素点中的每个目标像素点，基于目标像素点对应的初始亮度、目标像素点的初始颜色值和预设颜色转换信息，确定目标像素点的目标颜色值。
这里，至少部分目标像素点中的每个目标像素点都可以对应于一个单独的目标颜色值，其中，目标颜色值为对该目标像素点的初始颜色值进行调整之后，该目标像素点对应的颜色值。
针对至少部分目标像素点中的每个目标像素点，可以根据该目标像素点的初始颜色值、该目标像素点对应的初始亮度以及预设颜色转换信息，对该目标像素点的初始颜色值进行转换，得到与初始亮度以及预设颜色转换信息相匹配的目标颜色值。
针对任一目标像素点,可以根据该目标像素点的初始亮度,先对预设颜色转换信息中的预设颜色值进行转换,将预设颜色值转换为与初始亮度相匹配的第一颜色值。之后,可以根据该目标像素点的初始亮度,对初始颜色值进行转换,将初始颜色值转换为与初始亮度相匹配的第二颜色值,之后,可以利用预设颜色转换信息中的颜色转换程度信息,将第一颜色值转换为第一融合颜色值,并利用颜色转换程度信息,将第二颜色值转换为第二融合颜色值。
之后,可以利用第一融合颜色值和第二融合颜色值,以及初始亮度,确定与初始亮度以及预设颜色转换信息相匹配的目标颜色值。
步骤二、基于目标像素点的目标颜色值,调整目标像素点的初始颜色值,得到目标图像。
本步骤中,针对每个目标像素点,在确定出该目标像素点的目标颜色值之后,可以基于该目标颜色值,将目标区域中的该像素点的颜色值调整为目标颜色值,得到该目标像素点对应的调整后的颜色值。从而,可以实现将该目标像素点的颜色值由初始颜色值调整为目标颜色值。
进而,利用上述方式,可以根据确定出的目标区域对应的至少部分像素点中的每个像素点的目标颜色值,将目标区域中的该像素点的颜色值由初始颜色值,调整为目标颜色值,从而,可以完成对至少部分目标像素点中的每个目标像素点的颜色值的调整,从而,得到调整后的目标图像。
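在已经得到目标区域的掩码以及各目标像素点的目标颜色值的前提下，下面给出将初始颜色值调整为目标颜色值、从而得到目标图像的一个示意性实现草图（函数名与变量名均为示例性假设）：

```python
import numpy as np

def apply_target_colors(image: np.ndarray,
                        region_mask: np.ndarray,
                        target_colors: np.ndarray) -> np.ndarray:
    """将目标区域内目标像素点的颜色值由初始颜色值调整为目标颜色值，得到目标图像。

    image: 形状为 (H, W, 3) 的待处理图像，取值归一化到[0, 1]。
    region_mask: 形状为 (H, W) 的布尔数组，True 表示目标区域内的目标像素点。
    target_colors: 形状为 (K, 3) 的目标颜色值，K 为目标像素点的个数。
    """
    target_image = image.copy()               # 不直接修改原始的待处理图像
    target_image[region_mask] = target_colors # 仅调整目标区域内的像素点
    return target_image
```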
另外,待处理图像中所有的像素点的颜色值能够反映待处理图像的纹理,待处理图像中的像素点的颜色值发生改变,待处理图像对应的纹理也将发生改变。因此,针对得到目标图像,由于目标图像中的像素点包括调整过颜色值的像素点,所以目标图像对应的纹理也为调整后的纹理。
由上述实施例可知,预设颜色转换信息包括预设颜色值和颜色转换程度信息,在一种实施例中,针对步骤一、可以按照图4所示的流程确定目标颜色值,如图4所示,为本公开实施例所提供的一种确定目标颜色值的流程图,可以包括以下步骤:
S401:基于目标像素点的初始颜色值、预设颜色值和颜色转换程度信息,确定目标像素点的融合颜色值。
这里,融合颜色值为与颜色转换程度信息相匹配的颜色值。
本步骤中,根据颜色转换程度信息,可以对预设颜色值和初始颜色值进行混合,得到与颜色转换程度信息相匹配的融合颜色值。
可以先根据颜色转换程度信息对应的颜色转换程度，对预设颜色值和初始颜色值进行与颜色转换程度相匹配的第一次融合，得到第三融合颜色值；
进而,可以利用颜色转换程度对初始颜色值进行转换处理,得到与颜色转换程度相匹配的第三颜色值,并利用颜色转换程度对预设颜色值进行转换处理,得到与颜色转换程度相匹配的第四颜色值。
然后，可以利用颜色转换程度，对第三颜色值和第四颜色值进行与颜色转换程度相匹配的第二次融合，得到与颜色转换程度相匹配的第四融合颜色值。
最后,可以对第三融合颜色值和第四融合颜色值进行融合,得到目标像素点对应的最终的融合颜色值。
另外,关于基于目标像素点的初始颜色值、预设颜色值和颜色转换程度信息,确定目标像素点的融合颜色值的融合过程不仅限于上述方式,可以根据开发人员的需要对融合机制进行设置,这里不进行限定。其中,不同的融合机制可以对应于不同的融合方式。
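作为融合机制的一种最简示意（仅用于说明，实际可按开发需要设置其他融合方式），下面假设采用线性插值的方式，按颜色转换程度对初始颜色值与预设颜色值进行融合，函数名与变量名均为示例性假设：

```python
import numpy as np

def fuse_colors(initial: np.ndarray, preset: np.ndarray, degree: float) -> np.ndarray:
    """基于初始颜色值、预设颜色值和颜色转换程度，计算融合颜色值（线性插值示意）。

    initial, preset: 形状为 (..., 3) 的颜色值，取值归一化到[0, 1]。
    degree: 颜色转换程度，取值在[0, 1]之间，例如0.8表示80%的改变程度。
    """
    # degree越大，融合颜色值越接近预设颜色值；degree为0时保持初始颜色值不变
    return (1.0 - degree) * initial + degree * preset
```

例如，degree取0.8时，融合颜色值为20%的初始颜色值与80%的预设颜色值之和。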
S402:基于融合颜色值和初始亮度,确定目标像素点的目标颜色值。
这里，可以根据目标像素点对应的初始亮度，对目标像素点的融合颜色值进行调整，得到与初始亮度相匹配的目标颜色值，也即，确定目标像素点的目标颜色值。
示例性的,针对目标颜色值的确定过程,可以根据融合颜色值,确定其分别对应于R通道的颜色值,对应于G通道的颜色值以及对应于B通道的颜色值。然后,可以根据亮度和颜色值之间的融合规则,先基于初始亮度,确定初始亮度对应于每个通道的融合系数。
根据确定好的每个通道的融合系数、融合颜色值对应于每个通道的颜色值，以及每个通道对应的融合系数的转换规则，分别确定R通道的颜色值对应的目标颜色值、G通道的颜色值对应的目标颜色值和B通道的颜色值对应的目标颜色值。
之后,可以根据各个通道对应的目标颜色值,确定出融合颜色值对应的、与初始亮度相匹配的目标颜色值。
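具体的逐通道融合规则本公开未作限定。下面给出一种示意性做法（示例性假设）：以与公式一一致的明度转换系数计算融合颜色值当前的明度，再按初始亮度与该明度的比值对各通道进行缩放，使得到的目标颜色值与初始亮度对应的光照强度相匹配：

```python
import numpy as np

# 与公式一一致的明度转换系数（示例取值）
A_COEF = np.array([0.2126, 0.7152, 0.0722])

def fused_to_target(fused: np.ndarray, init_luminance: np.ndarray) -> np.ndarray:
    """基于融合颜色值和初始亮度确定目标颜色值（按亮度比例缩放的示意实现）。

    fused: 形状为 (N, 3) 的融合颜色值，取值归一化到[0, 1]。
    init_luminance: 形状为 (N,) 的初始亮度，取值在(0, 1)之间。
    """
    fused_luma = fused @ A_COEF                           # 融合颜色值当前的明度
    scale = init_luminance / np.maximum(fused_luma, 1e-6) # 每个通道对应的融合系数
    target = fused * scale[:, None]
    return np.clip(target, 0.0, 1.0)
```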
这样，颜色转换程度信息用于表征用户确定的对目标像素点进行颜色改变的改变程度，基于颜色转换程度信息，对目标像素点对应的初始颜色值和预设颜色值进行融合，能够使得融合得到的融合颜色值与颜色转换程度信息对应的改变程度相匹配，再利用初始亮度和融合颜色值，确定目标像素点的目标颜色值，能够使得得到的目标颜色值既能与初始亮度对应的光照强度相匹配，也能与颜色转换程度信息对应的改变程度相匹配，既可以使颜色改变的效果较自然，增强了目标对象的颜色改变效果，还可以满足用户的颜色改变需求。
在一种实施例中,针对S101,可以按照以下步骤确定目标区域,如图5所示,为本公开实施例所提供的一种确定目标区域的方法的流程图,可以包括以下步骤:
S501:对待处理图像进行语义分割,得到目标对象对应的分割图像。
这里，针对获取的待处理图像，可以利用训练好的语义分割神经网络，对待处理图像进行语义分割，确定待处理图像中的每个像素点的语义，然后，基于确定的每个像素点的语义，可以得到目标对象对应的分割图像。得到的分割图像可以为蒙版mask图像。
如图6所示,为本公开实施例所提供的一种获取mask图像的示意图,其中,图像A表示待处理图像,B表示待处理图像中的目标对象(头发)对应的mask图像。
S502:基于分割图像,确定待处理图像中目标对象对应的目标区域。
本步骤中,在得到分割图像之后,可以确定分割图像中的目标对象对应的每个像素点的位置坐标。
然后,可以根据分割图像的图像特征信息和待处理图像的图像特征信息,对分割图像和待处理图像进行匹配处理,确定分割图像与待处理图像之间的对应关系。其中,图像特征信息可以包括图像的纹理信息、图像的尺寸信息、图像的关键点信息等。
根据确定的分割图像中目标对象对应的各个像素点的位置坐标和待处理图像中的每个像素点的位置坐标,按照确定的对应关系,可以确定分割图像中目标对象对应的各个像素点对应于待处理图像中的每个像素点的位置坐标。进而,可以确定上述确定的每个位置坐标对应于待处理图像的区域作为目标区域。从而,可以确定目标对象在待处理图像中所对应的目标区域。
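下面给出基于mask图像确定待处理图像中目标区域的一个示意性实现草图。为简化说明，假设通过将mask图像缩放到与待处理图像相同的尺寸来完成两者的匹配（实际还可结合纹理、关键点等图像特征信息进行匹配），相关函数名与阈值均为示例性假设：

```python
import numpy as np
import cv2  # OpenCV，此处仅用于将mask图像缩放到与待处理图像相同的尺寸

def target_region_from_mask(image: np.ndarray, mask: np.ndarray,
                            threshold: int = 128) -> np.ndarray:
    """基于语义分割得到的mask图像，确定待处理图像中目标对象对应的目标区域。

    image: 形状为 (H, W, 3) 的待处理图像。
    mask:  单通道mask图像，目标对象对应的像素值较大（如255），背景较小（如0）。
    返回:  形状为 (H, W) 的布尔数组，True 表示目标区域内的目标像素点。
    """
    h, w = image.shape[:2]
    if mask.shape[:2] != (h, w):
        # 按尺寸信息将mask图像与待处理图像对齐（匹配处理的一种简化示意）
        mask = cv2.resize(mask, (w, h), interpolation=cv2.INTER_NEAREST)
    return mask > threshold
```

得到的布尔数组即对应目标区域，其中取值为True的每个像素点可以作为目标对象对应的目标像素点。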
在本公开的一些实施例中,在S102之前,所述方法还包括确定所述目标像素点的步骤:将目标区域中的每个像素点作为目标对象对应的目标像素点。这里,目标区域由目标对象对应的所有像素点组成,因而,可以直接将目标区域中的每个像素点作为目标对象对应的目标像素点。
这样，利用语义分割得到的分割图像作为目标对象对应的图像，利用分割图像，能够准确地确定待处理图像与分割图像对应的区域，也即，可以在待处理图像中准确地确定目标对象对应的目标区域，进而，能够使得确定出的目标像素点不包括除目标对象对应的像素点以外的像素点，从而，既能够限定需要进行颜色改变的区域，还能够提高进行颜色改变的区域的准确性。
另外,在存在对获取的视频中的每一帧视频中出现的目标对象都进行颜色调整的情况下,利用得到的分割图像,确定目标像素点的方式,可以实现依次确定每一帧视频对应的图像的分割图像。然后利用得到的分割图像,确定每一帧视频对应的图像中的目标像素点,能够提高确定的每一帧视频对应的图像中的目标像素点的准确性。
在本公开的一些实施例中,可以实现对确定的每一帧视频对应的图像中的目标像素点的颜色调整,也即,可以实现对视频中的目标对象的颜色调整,得到目标视频。这样,即使目标对象在视频中的位置发生变化,由于确定的目标像素点的准确性,所以可以使得目标视频中的每一帧视频对应的图像中的目标像素点与目标对象的位置相匹配,从而,可以实现目标视频中的每一帧视频对应的图像的纹理的稳定。
在一种实施例中，获取的待处理图像可以是增强现实（Augmented Reality，AR）设备拍摄的现场图像，在AR设备拍摄到现场图像的情况下，可以直接将该现场图像作为待处理图像，这样，可以实现实时获取待处理图像。其中，AR设备可以是用户持有的具有AR功能的智能终端，可以包括但不限于：手机、平板电脑、AR眼镜等能够呈现增强现实效果的电子设备。
在本公开的一些实施例中,在得到目标图像之后,还可以利用AR设备展示得到的目标图像,能够实现在AR场景下对目标对象进行实时的颜色更改。
在一种实施例中,针对S101,还可以按照以下步骤确定目标区域:
步骤一、获取目标人物图像,并将目标人物图像作为待处理图像。
这里,在对待处理图像进行处理之前,需要先获取待处理图像。获取的待处理图像可以是获取的目标人物图像。
步骤二、对待处理图像进行语义分割,确定待处理图像中目标对象对应的目标区域。
其中,目标对象对应的目标区域可以包括目标人物图像中的人体头发区域、人体皮肤区域、人体眼睛区域、至少部分服饰区域中的至少一种,相应的,目标对象可以包括头发、皮肤、眼睛、至少部分服饰中的至少一种。
以目标对象为头发,目标对象对应的目标区域为人体头发区域为例,在获取到待处理图像(目标人物图像)之后,可以对待处理图像进行语义分割,确定待处理图像中的每个像素点的语义,然后可以基于像素点的语义,确定语义为头发的像素点,进而,可以确定头发对应的人体头发区域。
或者，获取的待处理图像还可以是获取的目标物体图像。其中，目标对象对应的目标区域可以为目标物体图像中的至少部分物体区域。示例性的，目标对象对应的目标区域可以为桌面区域、屏幕区域、地板区域、树木区域、树木的树干区域、树叶区域或树木全部的区域。
以目标对象为桌面，目标对象对应的目标区域为桌面区域为例，在获取到待处理图像（目标物体图像）之后，可以对待处理图像进行语义分割，确定待处理图像中的每个像素点的语义，然后可以基于像素点的语义，确定语义为桌面的像素点，进而，可以确定桌面对应的桌面区域。
这样,将目标人物图像或目标物体图像作为待处理图像,从而实现对目标对象的目标区域进行颜色调整,如对人体头发区域、人体皮肤区域、服饰区域或物体区域进行颜色调整。
本领域技术人员可以理解,在具体实施方式的上述方法中,各步骤的撰写顺序并不意味着严格的执行顺序而对实施过程构成任何限定,各步骤的执行顺序应当以其功能和可能的内在逻辑确定。
基于同一发明构思,本公开实施例中还提供了与图像处理方法对应的图像处理装置,由于本公开实施例中的装置解决问题的原理与本公开实施例上述图像处理方法相似,因此装置的实施可以参见方法的实施,重复之处不再赘述。
如图7所示,为本公开实施例提供的一种图像处理装置的示意图,包括:
第一确定模块701,配置为确定待处理图像中目标对象对应的目标区域;
第二确定模块702,配置为基于所述目标区域对应的至少部分目标像素点的初始颜色值,确定至少部分目标像素点对应的初始亮度;
调整模块703,配置为基于至少部分所述目标像素点对应的初始亮度和获取的所述目标区域的预设颜色转换信息,对至少部分所述目标像素点的初始颜色值进行调整,得到目标图像。
在一种可能的实施方式中，所述第二确定模块702，包括：第一确定子模块，配置为针对至少部分目标像素点中的每个所述目标像素点，基于所述目标像素点的初始颜色值，确定所述目标像素点对应的明度；
第二确定子模块,配置为基于所述目标像素点对应的明度,确定所述目标像素点对应的初始亮度。
在一种可能的实施方式中,所述第二确定子模块,配置为获取预设明度阈值;
基于所述预设明度阈值和所述目标像素点对应的明度，筛选预设的与所述目标像素点匹配的亮度转换规则；
基于筛选得到的亮度转换规则和所述目标像素点对应的明度,确定所述目标像素点对应的初始亮度。
在一种可能的实施方式中,所述调整模块703,包括:第三确定子模块,配置为针对至少部分所述目标像素点中的每个所述目标像素点,基于所述目标像素点对应的初始亮度、所述目标像素点的初始颜色值和所述预设颜色转换信息,确定所述目标像素点的目标颜色值;
调整子模块,配置为基于所述目标像素点的目标颜色值,调整所述目标像素点的初始颜色值,得到目标图像。
在一种可能的实施方式中,所述预设颜色转换信息包括预设颜色值和颜色转换程度信息;
所述第三确定子模块,配置为基于所述目标像素点的初始颜色值、所述预设颜色值和所述颜色转换程度信息,确定所述目标像素点的融合颜色值;
基于所述融合颜色值和所述初始亮度,确定所述目标像素点的目标颜色值。
在一种可能的实施方式中,所述第一确定模块701,还配置为对所述待处理图像进行语义分割,得到所述目标对象对应的分割图像;
基于所述分割图像,确定所述待处理图像中所述目标对象对应的目标区域;
所述第二确定模块702,还配置为在所述基于所述目标区域对应的至少部分目标像素点的初始颜色值,确定至少部分目标像素点对应的初始亮度之前,按照以下步骤确定所述目标像素点:
将所述目标区域中的每个像素点作为所述目标对象对应的目标像素点。
在一种可能的实施方式中,所述第一确定模块701,配置为获取目标人物图像,并将所述目标人物图像作为所述待处理图像;
对所述待处理图像进行语义分割,确定所述待处理图像中目标对象对应的目标区域;其中,所述目标对象对应的目标区域包括所述目标人物图像中的人体头发区域、人体皮肤区域、至少部分服饰区域中的至少一种;或者,
获取目标物体图像,并将所述目标物体图像作为所述待处理图像;
对所述待处理图像进行语义分割,确定所述待处理图像中目标对象对应的目标区域;
其中,所述目标对象对应的目标区域为所述目标物体图像中的至少部分物体区域。
关于装置中的各模块的处理流程、以及各模块之间的交互流程的描述可以参照上述方法实施例中的相关说明，这里不再详述。
本公开实施例还提供了一种计算机设备,如图8所示,为本公开实施例提供的一种计算机设备结构示意图,包括:
处理器81和存储器82；所述存储器82存储有处理器81可执行的机器可读指令，处理器81配置为执行存储器82中存储的机器可读指令，所述机器可读指令被处理器81执行的情况下，处理器81执行下述步骤：S101：确定待处理图像中目标对象对应的目标区域；S102：基于目标区域对应的至少部分目标像素点的初始颜色值，确定至少部分目标像素点对应的初始亮度；以及S103：基于至少部分所述目标像素点对应的初始亮度和获取的目标区域的预设颜色转换信息，对至少部分目标像素点的初始颜色值进行调整，得到目标图像。
上述存储器82包括内存821和外部存储器822;这里的内存821也称内存储器,配置为暂时存放处理器81中的运算数据,以及与硬盘等外部存储器822交换的数据,处理器81通过内存821与外部存储器822进行数据交换。
上述指令的执行过程可以参考本公开实施例中所述的图像处理方法的步骤,此处不再赘述。
本申请实施例所述集成的模块在以软件功能模块的形式实现并作为独立的产品销售或使用的情况下,也可以存储在一个计算机存储介质中。基于这样的理解,本领域内的技术人员应明白,本申请的实施例可提供为方法、系统、或计算机程序产品。因此,本申请可采用完全硬件实施例、完全软件实施例、或结合软件和硬件方面的实施例的形式。而且,本申请实施例可采用在一个或多个其中包含有计算机可执行指令的计算机存储介质上实施的计算机程序产品的形式,所述存储介质包括U盘、移动硬盘、只读存储器(ROM,Read-Only Memory)、随机存取存储器(RAM,Random Access Memory)、磁盘存储器、光学存储器等。
本公开实施例还提供一种计算机可读存储介质,该计算机可读存储介质上存储有计算机程序,该计算机程序被处理器运行的情况下执行上述方法实施例中所述的图像处理方法的步骤。其中,该存储介质可以是易失性或非易失的计算机可读取存储介质。
本公开实施例所提供的图像处理方法的计算机程序产品,包括存储了程序代码的计算机可读存储介质,所述程序代码包括的指令可用于执行上述方法实施例中所述的图像处理方法的步骤,可参见上述方法实施例,在此不再赘述。
该计算机程序产品可以通过硬件、软件或其结合的方式实现。在本公开的一些实施例中，所述计算机程序产品体现为计算机存储介质，在本公开的另一些实施例中，计算机程序产品体现为软件产品，例如软件开发包（Software Development Kit，SDK）等等。
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的装置的工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。在本公开所提供的几个实施例中,应该理解到,所揭露的装置和方法,可以通过其它的方式实现。以上所描述的装置实施例仅仅是示意性的,例如,所述单元的划分,仅仅为一种逻辑功能划分,实际实现时可以有另外的划分方式,又例如,多个单元或组件可以结合,或一些特征可以忽略,或不执行。另一点,所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些通信接口,装置或单元的间接耦合或通信连接,可以是电性,机械或其它的形式。
所述作为分离部件说明的单元可以是或者也可以不是物理上分开的,作为单元显示的部件可以是或者也可以不是物理单元,即可以位于一个地方,或者也可以分布到多个网络单元上。可以根据实际的需要选择其中的部分或者全部单元来实现本实施例方案的目的。
另外,在本公开各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。
所述功能在以软件功能单元的形式实现并作为独立的产品销售或使用的情况下,可以存储在一个处理器可执行的非易失的计算机可读取存储介质中。基于这样的理解,本公开的技术方案本质上或者说对现有技术做出贡献的部分或者该技术方案的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质中,包括若干指令用以使得一台计算机设备(可以是个人计算机,服务器,或者网络设备等)执行本公开各个实施例所述方法的全部或部分步骤。而前述的存储介质包括:U盘、移动硬盘、只读存储器(Read-Only Memory,ROM)、随机存取存储器(Random Access Memory,RAM)、磁碟或者光盘等各种可以存储程序代码的介质。
最后应说明的是：以上所述实施例，仅为本公开的具体实施方式，用以说明本公开的技术方案，而非对其限制，本公开的保护范围并不局限于此，尽管参照前述实施例对本公开进行了详细的说明，本领域的普通技术人员应当理解：任何熟悉本技术领域的技术人员在本公开揭露的技术范围内，其依然可以对前述实施例所记载的技术方案进行修改或可轻易想到变化，或者对其中部分技术特征进行等同替换；而这些修改、变化或者替换，并不使相应技术方案的本质脱离本公开实施例技术方案的精神和范围，都应涵盖在本公开的保护范围之内。因此，本公开的保护范围应以权利要求的保护范围为准。
Claims (17)
- 一种图像处理方法,包括:确定待处理图像中目标对象对应的目标区域;基于所述目标区域对应的至少部分目标像素点的初始颜色值,确定至少部分目标像素点对应的初始亮度;基于至少部分所述目标像素点对应的初始亮度和获取的所述目标区域的预设颜色转换信息,对至少部分所述目标像素点的初始颜色值进行调整,得到目标图像。
- 根据权利要求1所述的方法,其中,所述基于所述目标区域对应的至少部分目标像素点的初始颜色值,确定至少部分目标像素点对应的初始亮度,包括:针对至少部分目标像素点中的每个所述目标像素点,基于所述目标像素点的初始颜色值,确定所述目标像素点对应的明度;基于所述目标像素点对应的明度,确定所述目标像素点对应的初始亮度。
- 根据权利要求2所述的方法,其中,所述基于所述目标像素点对应的明度,确定所述目标像素点对应的初始亮度,包括:获取预设明度阈值;基于所述预设明度阈值和所述目标像素点对应的明度,筛选预设的与所述目标像素点匹配的亮度转换规则;基于筛选得到的亮度转换规则和所述目标像素点对应的明度,确定所述目标像素点对应的初始亮度。
- 根据权利要求1至3任一项所述的方法,其中,所述基于至少部分所述目标像素点对应的初始亮度和获取的所述目标区域的预设颜色转换信息,对至少部分所述目标像素点的初始颜色值进行调整,得到目标图像,包括:针对至少部分所述目标像素点中的每个所述目标像素点,基于所述目标像素点对应的初始亮度、所述目标像素点的初始颜色值和所述预设颜色转换信息,确定所述目标像素点的目标颜色值;基于所述目标像素点的目标颜色值,调整所述目标像素点的初始颜色值,得到目标图像。
- 根据权利要求4所述的方法,其中,所述预设颜色转换信息包括预设颜色值和颜色转换程度信息;所述基于所述目标像素点对应的初始亮度、所述目标像素点的初始颜色值和所述预设颜色转换信息,确定所述目标像素点的目标颜色值,包括:基于所述目标像素点的初始颜色值、所述预设颜色值和所述颜色转换程度信息,确定所述目标像素点的融合颜色值;基于所述融合颜色值和所述初始亮度,确定所述目标像素点的目标颜色值。
- 根据权利要求1所述的方法,其中,所述确定待处理图像中目标对象对应的目标区域,包括:对所述待处理图像进行语义分割,得到所述目标对象对应的分割图像;基于所述分割图像,确定所述待处理图像中所述目标对象对应的目标区域;在所述基于所述目标区域对应的至少部分目标像素点的初始颜色值,确定至少部分目标像素点对应的初始亮度之前,所述方法还包括确定所述目标像素点的步骤:将所述目标区域中的每个像素点作为所述目标对象对应的目标像素点。
- 根据权利要求1至6任一项所述的方法,其中,所述确定待处理图像中目标对象对应的目标区域,包括:获取目标人物图像,并将所述目标人物图像作为所述待处理图像;对所述待处理图像进行语义分割,确定所述待处理图像中目标对象对应的目标区域;其中,所述目标对象对应的目标区域包括所述目标人物图像中的人体头发区域、人体皮肤区域、至少部分服饰区域中的至少一种;或者,获取目标物体图像,并将所述目标物体图像作为所述待处理图像;对所述待处理图像进行语义分割,确定所述待处理图像中目标对象对应的目标区域;其中,所述目标对象对应的目标区域为所述目标物体图像中的至少部分物体区域。
- 一种图像处理装置,包括:第一确定模块,配置为确定待处理图像中目标对象对应的目标区域;第二确定模块,配置为基于所述目标区域对应的至少部分目标像素点的初始颜色值,确定至少部分目标像素点对应的初始亮度;调整模块,配置为基于至少部分所述目标像素点对应的初始亮度和获取的所述目标区域的预设颜色转换信息,对至少部分所述目标像素点的初始颜色值进行调整,得到目标图像。
- 根据权利要求8所述的装置，其中，所述第二确定模块，包括：第一确定子模块，配置为针对至少部分目标像素点中的每个所述目标像素点，基于所述目标像素点的初始颜色值，确定所述目标像素点对应的明度；第二确定子模块，配置为基于所述目标像素点对应的明度，确定所述目标像素点对应的初始亮度。
- 根据权利要求9所述的装置,其中,所述第二确定子模块,配置为获取预设明度阈值;基于所述预设明度阈值和所述目标像素点对应的明度,筛选预设的与所述目标像素点匹配的亮度转换规则;基于筛选得到的亮度转换规则和所述目标像素点对应的明度,确定所述目标像素点对应的初始亮度。
- 根据权利要求8至10任一项所述的装置,其中,所述调整模块,包括:第三确定子模块,配置为针对至少部分所述目标像素点中的每个所述目标像素点,基于所述目标像素点对应的初始亮度、所述目标像素点的初始颜色值和所述预设颜色转换信息,确定所述目标像素点的目标颜色值;调整子模块,配置为基于所述目标像素点的目标颜色值,调整所述目标像素点的初始颜色值,得到目标图像。
- 根据权利要求11所述的装置,其中,所述预设颜色转换信息包括预设颜色值和颜色转换程度信息;所述第三确定子模块,配置为基于所述目标像素点的初始颜色值、所述预设颜色值和所述颜色转换程度信息,确定所述目标像素点的融合颜色值;基于所述融合颜色值和所述初始亮度,确定所述目标像素点的目标颜色值。
- 根据权利要求8所述的装置,其中,所述第一确定模块,配置为对所述待处理图像进行语义分割,得到所述目标对象对应的分割图像;基于所述分割图像,确定所述待处理图像中所述目标对象对应的目标区域;所述第二确定模块,还配置为在所述基于所述目标区域对应的至少部分目标像素点的初始颜色值,确定至少部分目标像素点对应的初始亮度之前,按照以下步骤确定所述目标像素点:将所述目标区域中的每个像素点作为所述目标对象对应的目标像素点。
- 根据权利要求8至13任一项所述的装置，其中，所述第一确定模块，配置为获取目标人物图像，并将所述目标人物图像作为所述待处理图像；对所述待处理图像进行语义分割，确定所述待处理图像中目标对象对应的目标区域；其中，所述目标对象对应的目标区域包括所述目标人物图像中的人体头发区域、人体皮肤区域、至少部分服饰区域中的至少一种；或者，获取目标物体图像，并将所述目标物体图像作为所述待处理图像；对所述待处理图像进行语义分割，确定所述待处理图像中目标对象对应的目标区域；其中，所述目标对象对应的目标区域为所述目标物体图像中的至少部分物体区域。
- 一种计算机设备,包括:处理器、存储器,所述存储器存储有所述处理器可执行的机器可读指令,所述处理器配置为执行所述存储器中存储的机器可读指令,所述机器可读指令被所述处理器执行的情况下,所述处理器执行如权利要求1至7任意一项所述的图像处理方法的步骤。
- 一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机程序,所述计算机程序被计算机设备运行的情况下,所述计算机设备执行如权利要求1至7任意一项所述的图像处理方法的步骤。
- 一种计算机程序产品,包括计算机可读代码,在所述计算机可读代码在电子设备中运行的情况下,所述电子设备中的处理器执行如权利要求1至7任意一项所述的方法。
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP21948037.3A EP4207075A4 (en) | 2021-06-29 | 2021-11-23 | IMAGE PROCESSING METHOD AND APPARATUS, AND COMPUTER DEVICE AND STORAGE MEDIUM |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110727706.8A CN113240760B (zh) | 2021-06-29 | 2021-06-29 | 一种图像处理方法、装置、计算机设备和存储介质 |
CN202110727706.8 | 2021-06-29 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2023273111A1 true WO2023273111A1 (zh) | 2023-01-05 |
Family
ID=77141177
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/132511 WO2023273111A1 (zh) | 2021-06-29 | 2021-11-23 | 一种图像处理方法、装置、计算机设备和存储介质 |
Country Status (3)
Country | Link |
---|---|
EP (1) | EP4207075A4 (zh) |
CN (1) | CN113240760B (zh) |
WO (1) | WO2023273111A1 (zh) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113240760B (zh) * | 2021-06-29 | 2023-11-24 | 北京市商汤科技开发有限公司 | 一种图像处理方法、装置、计算机设备和存储介质 |
CN114028808A (zh) * | 2021-11-05 | 2022-02-11 | 腾讯科技(深圳)有限公司 | 虚拟宠物的外观编辑方法、装置、终端及存储介质 |
CN114972009A (zh) * | 2022-03-28 | 2022-08-30 | 北京达佳互联信息技术有限公司 | 一种图像处理方法、装置、电子设备及存储介质 |
CN118678033A (zh) * | 2023-03-16 | 2024-09-20 | 蔚来移动科技有限公司 | 图像处理方法、装置、计算机设备和存储介质 |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4872982B2 (ja) * | 2008-07-31 | 2012-02-08 | ソニー株式会社 | 画像処理回路および画像表示装置 |
CN112087648B (zh) * | 2019-06-14 | 2022-02-25 | 腾讯科技(深圳)有限公司 | 图像处理方法、装置、电子设备及存储介质 |
WO2021035505A1 (zh) * | 2019-08-27 | 2021-03-04 | 深圳市大疆创新科技有限公司 | 图像处理方法及装置 |
CN111127591B (zh) * | 2019-12-24 | 2023-08-08 | 腾讯科技(深圳)有限公司 | 图像染发处理方法、装置、终端和存储介质 |
CN111784568A (zh) * | 2020-07-06 | 2020-10-16 | 北京字节跳动网络技术有限公司 | 人脸图像处理方法、装置、电子设备及计算机可读介质 |
CN112766234B (zh) * | 2021-02-23 | 2023-05-12 | 北京市商汤科技开发有限公司 | 图像处理方法及装置、电子设备和存储介质 |
CN112801916A (zh) * | 2021-02-23 | 2021-05-14 | 北京市商汤科技开发有限公司 | 图像处理方法及装置、电子设备和存储介质 |
- 2021-06-29: CN CN202110727706.8A / CN113240760B (active)
- 2021-11-23: WO PCT/CN2021/132511 / WO2023273111A1 (status unknown)
- 2021-11-23: EP EP21948037.3A / EP4207075A4 (pending)
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2005034355A (ja) * | 2003-07-14 | 2005-02-10 | Kao Corp | 画像処理装置および顔画像処理装置 |
WO2019100766A1 (zh) * | 2017-11-22 | 2019-05-31 | 格力电器(武汉)有限公司 | 一种图像处理方法、装置、电子设备及存储介质 |
CN108492348A (zh) * | 2018-03-30 | 2018-09-04 | 北京金山安全软件有限公司 | 图像处理方法、装置、电子设备及存储介质 |
CN112614060A (zh) * | 2020-12-09 | 2021-04-06 | 深圳数联天下智能科技有限公司 | 人脸图像头发渲染方法、装置、电子设备和介质 |
CN112767285A (zh) * | 2021-02-23 | 2021-05-07 | 北京市商汤科技开发有限公司 | 图像处理方法及装置、电子设备和存储介质 |
CN113240760A (zh) * | 2021-06-29 | 2021-08-10 | 北京市商汤科技开发有限公司 | 一种图像处理方法、装置、计算机设备和存储介质 |
Non-Patent Citations (1)
See also references of EP4207075A4
Also Published As
Publication number | Publication date |
---|---|
CN113240760A (zh) | 2021-08-10 |
CN113240760B (zh) | 2023-11-24 |
EP4207075A4 (en) | 2024-04-03 |
EP4207075A1 (en) | 2023-07-05 |
Legal Events
- 121 | EP: the EPO has been informed by WIPO that EP was designated in this application (Ref document number: 21948037; Country of ref document: EP; Kind code of ref document: A1)
- ENP | Entry into the national phase (Ref document number: 2021948037; Country of ref document: EP; Effective date: 20230331)
- NENP | Non-entry into the national phase (Ref country code: DE)