WO2022227547A1 - Image processing method and apparatus, electronic device and storage medium - Google Patents

Image processing method and apparatus, electronic device and storage medium

Info

Publication number
WO2022227547A1
Authority
WO
WIPO (PCT)
Prior art keywords
pixel point
target
pixels
value
pixel
Prior art date
Application number
PCT/CN2021/133400
Other languages
English (en)
Chinese (zh)
Inventor
吴尧
四建楼
Original Assignee
北京市商汤科技开发有限公司 (Beijing SenseTime Technology Development Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京市商汤科技开发有限公司 (Beijing SenseTime Technology Development Co., Ltd.)
Publication of WO2022227547A1 publication Critical patent/WO2022227547A1/fr

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/04: Context-preserving transformations, e.g. by using an importance map
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00: Pattern recognition
    • G06F18/20: Analysing
    • G06F18/24: Classification techniques
    • G06F18/241: Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2415: Classification techniques relating to the classification model based on parametric or probabilistic models, e.g. based on likelihood ratio or false acceptance rate versus a false rejection rate
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/04: Architecture, e.g. interconnection topology
    • G06N3/047: Probabilistic or stochastic networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00: Computing arrangements based on biological models
    • G06N3/02: Neural networks
    • G06N3/08: Learning methods

Definitions

  • the present disclosure relates to the technical field of image processing, and in particular, to a method, an apparatus, an electronic device, and a storage medium for image processing.
  • images are widely used in various scenarios such as work and life.
  • the picture in the image can be modified. For example, you can change the color value of an object in the picture by replacing the color value of the pixels in the image.
  • The colors in a picture are usually complex. If the color values of an object's pixels are directly replaced with a fixed value, the regions where the object borders other parts of the picture lose their transition colors, producing a large difference between the recolored object and the original image.
  • Embodiments of the present disclosure provide at least an image processing method, an apparatus, an electronic device, and a storage medium.
  • An embodiment of the present disclosure provides an image processing method, including: acquiring probability values that at least some pixels in an original image are target pixels, where the target pixels are pixels in the original image belonging to a target object; for any pixel among the at least some pixels, determining the fusion color value corresponding to the pixel based on the probability value that the pixel is the target pixel and a preselected target color value; and adjusting the color values of the at least some pixels in the original image using their corresponding fusion color values to obtain a target image.
  • the target object whose color needs to be replaced can be identified from the original image.
  • In the target image, there is a certain difference between pixels near the boundary of the target object and those inside it, forming a transition color, so that the recolored target object better matches the original image.
  • Obtaining the probability values that at least some pixels in the original image are target pixels includes: extracting image feature information from the original image; determining, based on the image feature information, initial probability values that at least some pixels in the original image are target pixels; determining a standard color value based on those initial probability values and the color values of the at least some pixels; and, for any pixel among the at least some pixels, calibrating the initial probability value based on the standard color value and the pixel's color value to obtain the probability value that the pixel is the target pixel.
  • A standard color value is determined from the initial probability values and color values of the at least some pixels, and the initial probability values are then corrected using the pixels' color values and the standard color value, so that the resulting probability values better match the real situation and the accuracy of identifying the target object is improved.
  • Determining the initial probability values based on the image feature information corresponding to the original image includes: determining, based on the image feature information, evaluation scores that at least some pixels in the original image are target pixels; and, for any pixel among the at least some pixels, determining the initial probability value that the pixel is the target pixel based on a preset evaluation score correction threshold and the pixel's evaluation score.
  • the evaluation scores of at least some of the pixels are modified by using the evaluation score correction threshold to obtain the initial probability value, so that the initial probability value is more accurate.
  • the evaluation score correction threshold includes a first correction threshold and a second correction threshold, and the first correction threshold is greater than the second correction threshold.
  • Determining the initial probability value based on the preset evaluation score correction thresholds and the pixel's evaluation score includes: when the evaluation score is greater than or equal to the first correction threshold, setting the initial probability value to a first preset probability value; when the evaluation score is less than or equal to the second correction threshold, setting the initial probability value to a second preset probability value; and when the evaluation score is less than the first correction threshold and greater than the second correction threshold, determining the initial probability value from the evaluation score, the first correction threshold, and the second correction threshold.
  • For pixels meeting either threshold condition, the initial probability value can be assigned directly, which reduces the amount of computation and improves efficiency.
  • Determining the initial probability value from the evaluation score, the first correction threshold, and the second correction threshold includes: determining a first difference between the first correction threshold and the second correction threshold; determining a second difference between the pixel's evaluation score and the second correction threshold; and determining the initial probability value that the pixel is the target pixel based on the ratio of the second difference to the first difference.
  • the initial probability value is determined by using the ratio between the second difference value and the first difference value, which can improve the accuracy of the initial probability value.
  • Determining the standard color value based on the initial probability values and color values of the at least some pixels includes: selecting, from the at least some pixels, the first pixels whose initial probability value is greater than or equal to a preset probability threshold, and taking the mean of the color values of all first pixels as the standard color value.
  • Calibrating the initial probability value based on the standard color value and the pixel's color value includes: determining the similarity between the pixel's color value and the standard color value, and calibrating the initial probability value based on that similarity to obtain the probability value that the pixel is the target pixel.
  • the initial probability is corrected by using the similarity between the color value of the pixel point and the standard color value, so as to further improve the accuracy of the probability value.
  • Determining the fusion color value based on the probability value that the pixel is the target pixel and the preselected target color value includes: determining a color fusion weight for the pixel from the probability value, and then determining the fusion color value from the pixel's color value, the target color value, and the color fusion weight.
  • The color fusion weight is determined from the probability value that the pixel is the target pixel, and the pixel's color value and the target color value are then fused according to this weight to obtain the fusion color value. The fusion color value thus reflects both the original color and the target color in proportion to the weight, so the boundary of the target object forms a transition color and the recolored object better matches the original image.
  • The method further includes: for any pixel among the at least some pixels, determining a target grayscale value based on the pixel's grayscale value in the original image and/or a preset grayscale value corresponding to the pixel, and adjusting the grayscale value of the corresponding pixel in the target image to that target grayscale value.
  • The target grayscale values of at least some pixels in the target image are determined from their grayscale values in the original image and/or their preset grayscale values; adjusting the brightness of the target object in this way makes the recolored object better match user expectations.
  • the extracting image feature information in the original image includes: extracting image feature information of the original image at multiple preset resolutions.
  • Determining the evaluation scores based on the image feature information includes: using a preset classifier to determine, at the lowest of the multiple preset resolutions, initial evaluation scores that at least some pixels in the original image are target pixels; then, in order of preset resolution from low to high, determining intermediate evaluation scores at the current preset resolution based on the evaluation scores at the previous preset resolution and the image feature information at the current preset resolution; and finally taking the intermediate evaluation scores at the highest of the multiple preset resolutions as the evaluation scores that at least some pixels in the original image are target pixels.
  • the accuracy of calculating the probability value can be improved.
  • The original image includes a live image captured by an augmented reality (AR) device, and the AR device displays the target image.
  • the original image is acquired by the AR device, and the target image is displayed by the AR device, so that the real-time color change of the target object in the AR scene can be realized.
  • The original image includes a target person image, in which the target object includes at least one of a human hair area, a human skin area, and at least part of a clothing area, or a target object image, in which the target object is at least part of an object area in the target object image.
  • the target person image or the target object image is used as the original image, so as to realize the color adjustment of the human hair area, the human skin area, the clothing area or the object area.
  • An embodiment of the present disclosure further provides an image processing apparatus, including: an acquisition module configured to acquire probability values that at least some pixels in the original image are target pixels, where the target pixels are pixels in the original image belonging to the target object; a determination module configured to determine, for any pixel among the at least some pixels, the fusion color value corresponding to the pixel based on the probability value that the pixel is the target pixel and the preselected target color value; and a generating module configured to adjust the color values of the at least some pixels in the original image using their corresponding fusion color values to obtain the target image.
  • Embodiments of the present disclosure further provide an electronic device, including a processor, a memory, and a bus, where the memory stores machine-readable instructions executable by the processor. When the electronic device runs, the processor and the memory communicate through the bus, and the machine-readable instructions, when executed by the processor, perform the steps of the first aspect or any possible implementation of the first aspect.
  • Embodiments of the present disclosure further provide a computer-readable storage medium storing a computer program that, when executed by a processor, performs the steps of the first aspect or any possible implementation of the first aspect.
  • FIG. 1 shows a flowchart of an image processing method provided by an embodiment of the present disclosure
  • FIG. 2 shows a schematic diagram of a confidence map provided by an embodiment of the present disclosure
  • FIG. 3 shows a schematic diagram of a neural network provided by an embodiment of the present disclosure
  • FIG. 4 shows a schematic diagram of an image processing apparatus provided by an embodiment of the present disclosure
  • FIG. 5 shows a schematic diagram of an electronic device provided by an embodiment of the present disclosure.
  • the present disclosure provides a method, an apparatus, an electronic device and a storage medium for image processing.
  • Based on the probability values that at least some pixels in the original image are target pixels, the target object whose color needs to be replaced can be identified in the original image. The fusion color value for each of these pixels is determined from its probability value and the preselected target color value, and the target image is then generated from the fusion color values. In the target image, pixels near the boundary of the target object differ from those inside it, forming a transition color so that the recolored object better matches the original image.
  • an embodiment of the present disclosure discloses an image processing method, which can be applied to an electronic device with computing capability, such as a server.
  • the image processing method may include the following steps:
  • the target pixel point is a pixel point belonging to the target object in the original image.
  • The above original image may be a live image captured by an augmented reality (AR) device.
  • the original image can be acquired in real time, and the probability value that at least some of the pixels in the original image are target pixels can be determined.
  • the AR device may be a smart terminal with AR function held by the user, and may include, but is not limited to, electronic devices such as mobile phones, tablet computers, and AR glasses that can present augmented reality effects.
  • the above-mentioned at least some of the pixels may be all pixels in the original image, or may be pixels in a preset area, and the preset area may be an area pre-selected by a user.
  • the target pixel can be a pixel belonging to the target object in the original image, and the target object can be the area where the object that needs to be replaced by color is located.
  • the original image may be an image containing hair
  • the hair may specifically be human hair, animal hair, or the like.
  • the area that needs to be replaced by color is, for example, the hair area of a human body, or the hair area of an animal.
  • the original image may be a target person image
  • the target object may include, but is not limited to, at least one of a human hair area, a human skin area, a human eye area, and at least part of a clothing area in the target person image.
  • the original image may also be an image of a target object, and the target object image may include one or more object objects.
  • the target object may be at least part of the object area in the target object image.
  • the object may be a tree area in the target object image, and the tree area may be a trunk area of a tree, a leaf area, or an entire area of the tree.
  • The probability values that at least some pixels in the original image are target pixels may be determined by the following steps: extracting image feature information from the original image; determining, based on the image feature information, initial probability values that at least some pixels are target pixels; determining a standard color value based on those initial probability values and the pixels' color values; calibrating the initial probability values based on the standard color value and the pixels' color values; and using the calibrated initial probability values as the probability values that at least some pixels in the original image are target pixels.
  • the above-mentioned determining that at least some of the pixels in the original image are the initial probability values of the target pixels may include: determining, based on the image feature information, that at least some of the pixels in the original image are the evaluation scores of the target pixels; For any pixel in the at least part of the pixels, based on the preset evaluation score correction threshold and the evaluation score of the pixel as the target pixel, determine the initial probability value of the pixel as the target pixel.
  • the above image feature information may be feature parameters of at least some pixels in the original image, such as color value, saturation, brightness, and the like.
  • the trained neural network can be used to process the image feature information to obtain the evaluation score that at least some of the pixels are the target pixel, and then use the evaluation score to correct the threshold and the evaluation score to determine that at least some of the pixels in the original image are the target pixel. initial probability value.
  • image feature information of the original image at various preset resolutions may be extracted, and then the trained neural network may be used to determine the evaluation scores of at least some of the pixels as target pixels.
  • the neural network may use a feature extractor to extract image feature information of an original image at multiple preset resolutions. Specifically, the image feature information of the original image at the highest preset resolution can be extracted first, and then the obtained image feature information can be down-sampled to obtain the image feature information of the original image at each preset resolution.
  • the neural network can use the preset classifier to determine the initial evaluation score that at least some of the pixels in the original image are target pixels at the lowest preset resolution . Then, in the order of the preset resolution from low to high, based on the initial evaluation score of at least some of the pixels in the original image at the previous preset resolution as target pixels, and the image features of the original image at the current preset resolution information and a classifier to determine that at least some of the pixels in the original image at the current preset resolution are the intermediate evaluation scores of the target pixels. Finally, at the highest preset resolution, at least some of the pixels in the original image are the intermediate evaluation scores of the target pixels, and output as the evaluation scores of at least some of the pixels in the original image as the target pixels.
  • The evaluation scores at the previous preset resolution may be up-sampled so that their resolution matches the current preset resolution; the evaluation scores at the two preset resolutions are then concatenated to obtain the spliced evaluation scores at the current preset resolution.
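  • The coarse-to-fine score fusion described above can be sketched as follows. This is a minimal illustration, not the patent's actual network: the nearest-neighbour up-sampling via `np.kron` and the toy `classify` interface (a callable that maps a feature map, optionally concatenated with up-sampled previous scores, to a per-pixel score map) are assumptions for demonstration.

```python
import numpy as np

def coarse_to_fine_scores(feature_maps, classify):
    """Fuse classifier scores from low to high resolution.

    feature_maps: list of (H_i, W_i, C) arrays ordered low -> high resolution
                  (each H_i, W_i an integer multiple of the previous).
    classify: callable mapping a feature map to a (H_i, W_i) score map.
    """
    # Initial evaluation scores at the lowest preset resolution.
    scores = classify(feature_maps[0])
    for feat in feature_maps[1:]:
        h, w = feat.shape[:2]
        # Up-sample previous scores to the current resolution
        # (nearest-neighbour, via Kronecker product with a block of ones).
        up = np.kron(scores, np.ones((h // scores.shape[0], w // scores.shape[1])))
        # Concatenate up-sampled scores with the current feature map,
        # then classify again to get the intermediate scores.
        fused = np.concatenate([feat, up[..., None]], axis=-1)
        scores = classify(fused)
    # Scores at the highest resolution are the final evaluation scores.
    return scores
```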
  • the above neural network can be trained by using a pre-prepared data set.
  • the data set can include a large number of images and labels corresponding to the images.
  • the images contain target objects, and the labels can be marked with pixels corresponding to the target objects.
  • The outputs at different preset resolutions can be supervised, and the small-resolution output can be fused with features of the low-level, large-resolution output, so that the small-resolution output learns better semantic categories while the large-resolution output learns finer details, thereby improving the accuracy of the neural network.
  • the loss can be calculated according to the output results of the neural network and the labeling results.
  • Compared with the background, the target object often has a color jump at its boundary and therefore a clear gradient. A refine loss function that preserves boundary gradients can thus be used, so that the boundary of the target object in the image output by the neural network has a gradient similar to that of the input image, yielding a more refined edge prediction result.
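  • One common way to realize such a gradient-preserving loss is to penalize the difference between the spatial gradients of the predicted map and a reference map. The sketch below is an assumption for illustration (finite-difference gradients, L1 penalty, ground-truth mask as reference); the patent does not specify the exact formulation of its refine loss.

```python
import numpy as np

def refine_loss(pred, target):
    """L1 penalty between spatial gradients of a predicted probability
    map and a reference mask, encouraging boundary gradients to match."""
    def grads(x):
        gx = x[:, 1:] - x[:, :-1]  # horizontal finite differences
        gy = x[1:, :] - x[:-1, :]  # vertical finite differences
        return gx, gy
    pgx, pgy = grads(pred)
    tgx, tgy = grads(target)
    return np.abs(pgx - tgx).mean() + np.abs(pgy - tgy).mean()
```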
  • the pixel points can be classified by using the evaluation score correction threshold, so that different initial probability value calculation methods can be adopted for different types of pixel points, so as to facilitate calculation optimization.
  • the evaluation score modification threshold includes a first modification threshold and a second modification threshold; the first modification threshold is greater than the second modification threshold.
  • Determining the initial probability value based on the preset evaluation score correction thresholds includes: when the pixel's evaluation score is greater than or equal to the first correction threshold, setting the initial probability value to the first preset probability value; when the evaluation score is less than or equal to the second correction threshold, setting it to the second preset probability value; and when the evaluation score is less than the first correction threshold and greater than the second correction threshold, determining the initial probability value from the evaluation score, the first correction threshold, and the second correction threshold.
  • the first correction threshold may be set to 0.7
  • the second correction threshold may be set to 0.4.
  • When the evaluation score is at or below the second correction threshold, the probability of the pixel being the target pixel is low enough to ignore, so its initial probability value can be set directly to 0, i.e., the second preset probability value. When the score is at or above the first correction threshold, the probability is high, so the initial probability value can be set to 1, i.e., the first preset probability value.
  • Determining the initial probability value based on the evaluation score, the first correction threshold, and the second correction threshold may include: determining a first difference between the first correction threshold and the second correction threshold; determining a second difference between the evaluation score and the second correction threshold; and determining the initial probability value based on the ratio of the second difference to the first difference.
  • the ratio between the second difference and the first difference may be used as the initial probability value.
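  • The thresholding-plus-ratio scheme above can be sketched as follows, using the example thresholds of 0.7 and 0.4 mentioned earlier (the function name and default values are illustrative):

```python
def initial_probability(score, t_high=0.7, t_low=0.4):
    """Map a classifier evaluation score to an initial probability.

    Scores at or above t_high clamp to 1 (first preset probability value),
    scores at or below t_low clamp to 0 (second preset probability value),
    and scores in between are linearly rescaled by the ratio of the
    second difference (score - t_low) to the first (t_high - t_low).
    """
    if score >= t_high:
        return 1.0
    if score <= t_low:
        return 0.0
    return (score - t_low) / (t_high - t_low)
```

Because the two clamped cases need no arithmetic, most pixels are assigned an initial probability directly, matching the efficiency argument made above.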
  • A pixel whose initial probability value is greater than or equal to the preset probability threshold can be regarded as a target pixel. The mean of the color values of all such pixels can be used as the standard color value, which is then used to correct the initial probability values and obtain the probability values that at least some pixels are target pixels.
  • the preset probability threshold may be the same as the first correction threshold, and the mean value of the color values of the pixels whose initial probability value is higher than or equal to the preset probability threshold may be determined, and the mean value may be used as the standard color value. In this way, since the standard color value is relatively close to the color value of the target pixel point, the standard color value can be used to correct the initial probability value of each pixel point being the target pixel point.
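  • Computing the standard color value as described can be sketched as follows (array shapes and the NumPy representation are illustrative assumptions):

```python
import numpy as np

def standard_color(image, init_prob, prob_threshold=0.7):
    """Mean color of pixels confidently classified as the target object.

    image: (H, W, 3) float array of color values.
    init_prob: (H, W) array of initial probability values.
    Returns the mean color over pixels whose initial probability is at
    or above the preset probability threshold.
    """
    mask = init_prob >= prob_threshold
    return image[mask].mean(axis=0)
```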
  • For any of the at least some pixels, the similarity between the pixel's color value and the standard color value may be determined, and this similarity is then used to calibrate the initial probability value that the pixel is the target pixel.
  • the initial probability value may be weighted by using the similarity between the color value of the pixel point and the standard color value, and the weighted probability value may be used as the probability value that the pixel point is the target pixel point.
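  • The similarity-weighted calibration can be sketched as below. The patent does not fix the similarity metric; the Gaussian kernel on Euclidean color distance and the `sigma` value here are assumptions for illustration.

```python
import numpy as np

def calibrate_probability(image, init_prob, std_color, sigma=30.0):
    """Weight initial probabilities by color similarity to the standard color.

    image: (H, W, 3) float array; init_prob: (H, W) initial probabilities;
    std_color: length-3 standard color value.
    """
    # Euclidean distance of each pixel's color from the standard color.
    dist = np.linalg.norm(image - np.asarray(std_color), axis=-1)
    # Gaussian similarity in [0, 1]: identical colors score 1.
    similarity = np.exp(-(dist ** 2) / (2 * sigma ** 2))
    # Calibrated probability: initial probability weighted by similarity.
    return init_prob * similarity
```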
  • FIG. 2 it is a schematic diagram of a confidence map provided by an embodiment of the present disclosure.
  • the above confidence map can represent the probability value that at least some of the pixels in the original image are target pixels.
  • the original image corresponding to the confidence map shown in FIG. 2 is a human image
  • the target object is the area where human hair is located.
  • The probability values of target pixels corresponding to hair in FIG. 2 are close to 1 and appear nearly white in the confidence map, while the probability values of pixels in other regions are close to 0 and appear nearly black.
  • the above-mentioned target color value may be a color value preset by a user. For example, if the user wishes to replace the hair color in Figure 2 with brown, the color value corresponding to brown can be set as the target color value.
  • Determining the fusion color value based on the probability value that the pixel is the target pixel and the preselected target color value may include: determining the color fusion weight for the pixel from its probability value, and then determining the fusion color value from the pixel's color value, the target color value, and the color fusion weight.
  • the original color value of the pixel point (that is, the color value of the pixel point in the original image, which may also be referred to as the color value of the pixel point) and the fusion weight of the target color value can be determined based on the above probability value, and According to the ratio of the fusion weight, the original color value of the pixel point and the target color value are fused to obtain the fusion color value corresponding to the pixel point.
  • the original color value of the pixel point and the target color value can be fused by means of alpha blending to obtain the fusion color value corresponding to the pixel point, so that the fusion color value reflects both the original color value of the pixel point and the target color value according to the color fusion weight. In this way, a transition color can be formed at the boundary of the target object, so that after color replacement the target object better matches the original image.
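A minimal sketch of the alpha-blending step described above is given below (NumPy; the [0, 1] value range, the use of the probability value directly as the blending weight, and all variable names are assumptions for illustration, not part of the claimed disclosure):

```python
import numpy as np

def alpha_fuse(original, target_color, prob):
    """Blend each pixel's original color with a preselected target color.

    original:     H x W x 3 float array, color values in [0, 1]
    target_color: length-3 array, the preselected target color value
    prob:         H x W array, probability that each pixel is a target
                  pixel; used here as the color fusion weight (alpha)
    """
    alpha = prob[..., np.newaxis]  # H x W x 1, broadcast over channels
    # fused = alpha * target + (1 - alpha) * original
    return alpha * np.asarray(target_color) + (1.0 - alpha) * original

# Example: a 1x2 image where the left pixel is certainly hair (p = 1)
# and the right pixel is certainly background (p = 0).
img = np.array([[[0.1, 0.1, 0.1], [0.5, 0.5, 0.5]]])
brown = np.array([0.55, 0.27, 0.07])
prob = np.array([[1.0, 0.0]])
fused = alpha_fuse(img, brown, prob)
```

With this weighting, boundary pixels whose probability lies between 0 and 1 receive an intermediate, transitional color, which is what produces the smooth edge described above.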
  • the color values of at least some of the pixels in the original image may be replaced according to the fusion color values corresponding to the above at least some of the pixels to obtain the target image.
  • after the target image is generated, based on the grayscale values of the at least part of the pixels in the original image and/or the preset grayscale values of the at least part of the pixels, the target grayscale values corresponding to the at least part of the pixels may be determined respectively, and then the grayscale values of the at least part of the pixels in the target image may be adjusted to the corresponding target grayscale values.
  • in this way, the brightness of the target object after the color change can be adjusted, so that the target object keeps its original texture and brightness after the color replacement is performed, better matching the user's expectation.
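One way the brightness restoration described above might be sketched is to rescale each recolored pixel so that its grayscale value matches the grayscale value of the original pixel. The BT.601 luma weights and the per-pixel rescaling strategy are assumptions of this sketch, not details given by the disclosure:

```python
import numpy as np

LUMA = np.array([0.299, 0.587, 0.114])  # ITU-R BT.601 luma weights

def restore_luma(fused, original, eps=1e-6):
    """Rescale each fused pixel so that its grayscale value matches the
    grayscale value of the corresponding pixel in the original image,
    preserving the original texture and brightness after recoloring."""
    luma_fused = fused @ LUMA    # H x W grayscale of the target image
    luma_orig = original @ LUMA  # H x W grayscale of the original image
    scale = luma_orig / (luma_fused + eps)
    return np.clip(fused * scale[..., np.newaxis], 0.0, 1.0)

# A recolored pixel that came out too dark is brightened back to the
# original pixel's grayscale level.
out = restore_luma(np.array([[[0.2, 0.2, 0.2]]]),
                   np.array([[[0.4, 0.4, 0.4]]]))
```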
  • an AR device can also be used to display the obtained target image, so as to realize the real-time color change of the target object in the AR scene.
  • the writing order of the steps does not imply a strict execution order and does not constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
  • the embodiment of the present disclosure also provides an image processing apparatus corresponding to the image processing method; for its implementation, reference can be made to the implementation of the method, and repeated details will not be described again.
  • the apparatus includes: an acquisition module 410 for acquiring a probability value that at least some of the pixels in the original image are target pixels.
  • the target pixel point is a pixel point belonging to the target object in the original image.
  • the determining module 420 is configured to, for any pixel point in the at least part of the pixel points, determine the fusion color value corresponding to the pixel point based on the probability value that the pixel point is the target pixel point and the preselected target color value.
  • the generating module 430 is configured to adjust the color values of the at least part of the pixel points in the original image by using the fusion color values corresponding to the at least part of the pixel points respectively, to obtain a target image.
  • the obtaining module 410 is specifically configured to: extract image feature information from the original image; determine, based on the image feature information, the initial probability values that at least some of the pixels in the original image are the target pixels; determine a standard color value based on the initial probability values that the at least part of the pixels are the target pixels and the color values of the at least part of the pixels; and calibrate, based on the standard color value and the color values of the at least part of the pixels, the initial probability values that the at least part of the pixels are the target pixels, and use the calibrated initial probability values as the probability values that the at least part of the pixels in the original image are the target pixels.
  • when determining, based on the image feature information, the initial probability values that at least some of the pixels in the original image are the target pixels, the obtaining module 410 is specifically configured to: determine, based on the image feature information, the evaluation scores that at least some of the pixels in the original image are the target pixels; and for any pixel in the at least part of the pixels, determine, based on a preset evaluation score correction threshold and the evaluation score that the pixel is the target pixel, the initial probability value that the pixel is the target pixel.
  • the evaluation score modification threshold includes a first modification threshold and a second modification threshold; the first modification threshold is greater than the second modification threshold.
  • when the acquisition module 410 determines, based on the preset evaluation score correction threshold and the evaluation score that the pixel is the target pixel, the initial probability value that the pixel is the target pixel, it is specifically configured to: in the case that the evaluation score of the pixel is greater than or equal to the first correction threshold, determine the initial probability value that the pixel is the target pixel to be a first preset probability value; in the case that the evaluation score of the pixel is less than or equal to the second correction threshold, determine the initial probability value that the pixel is the target pixel to be a second preset probability value; and in the case that the evaluation score of the pixel is less than the first correction threshold and greater than the second correction threshold, determine the initial probability value that the pixel is the target pixel based on the evaluation score that the pixel is the target pixel, the first correction threshold, and the second correction threshold.
  • when the obtaining module 410 determines, based on the evaluation score that the pixel is the target pixel, the first correction threshold, and the second correction threshold, the initial probability value that the pixel is the target pixel, it is specifically configured to: determine a first difference between the first correction threshold and the second correction threshold; determine a second difference between the evaluation score and the second correction threshold; and determine the initial probability value based on the ratio between the second difference and the first difference.
  • when the obtaining module 410 determines the standard color value based on the initial probability values that the at least part of the pixels are the target pixels and the color values of the at least part of the pixels, it is specifically configured to: determine the mean of the color values of a plurality of pixels whose initial probability values are greater than or equal to a preset probability threshold, and use the mean as the standard color value.
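A minimal sketch of this standard-color computation (the probability threshold of 0.9 and the array layout are assumptions for the example):

```python
import numpy as np

def standard_color(colors, init_prob, prob_threshold=0.9):
    """Average the color values of the pixels whose initial probability
    value is greater than or equal to the threshold; the mean is taken
    as the standard color value.

    colors:    N x 3 array of pixel color values
    init_prob: length-N array of initial probability values
    """
    mask = init_prob >= prob_threshold
    return colors[mask].mean(axis=0)

colors = np.array([[0.20, 0.10, 0.05],
                   [0.30, 0.20, 0.10],
                   [0.90, 0.90, 0.90]])
probs = np.array([0.95, 0.92, 0.10])
std = standard_color(colors, probs)  # mean of the first two rows only
```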
  • the obtaining module 410 is further specifically configured to: for any pixel in the at least part of the pixels, determine the similarity between the color value of the pixel and the standard color value; and calibrate, based on the similarity, the initial probability value that the pixel is the target pixel.
  • when the determining module 420 determines the corresponding fusion color value based on the probability value that the at least part of the pixels are the target pixels and the preselected target color value, it is specifically configured to: determine the color fusion weight corresponding to the pixel based on the probability value that the pixel is the target pixel; and determine the fusion color value corresponding to the pixel based on the color value of the pixel, the target color value, and the color fusion weight corresponding to the pixel.
  • the generating module 430 is further configured to: determine, based on the grayscale values of the at least part of the pixels in the original image and/or the preset grayscale values of the at least part of the pixels, the target grayscale values corresponding to the at least part of the pixels respectively; and adjust the grayscale values of the pixels corresponding to the at least part of the pixels in the target image to the target grayscale values corresponding to those pixels.
  • when extracting the image feature information in the original image, the obtaining module 410 is specifically configured to: extract image feature information of the original image at multiple preset resolutions.
  • the obtaining module is specifically configured to: determine, by using a preset classifier, the initial evaluation scores that at least some of the pixels in the original image at the lowest preset resolution are the target pixels; in order of preset resolution from low to high, determine, based on the evaluation scores that at least some of the pixels in the original image at the previous preset resolution are the target pixels and the image feature information of the original image at the current preset resolution, the intermediate evaluation scores that at least some of the pixels in the original image at the current preset resolution are the target pixels; and take the intermediate evaluation scores of at least some of the pixels in the original image at the highest preset resolution as the evaluation scores that at least some of the pixels in the original image are the target pixels.
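The coarse-to-fine scoring loop described above can be sketched as follows. The nearest-neighbor upsampling, the power-of-two resolution ladder, and the `initial_scorer`/`refine` callables standing in for the per-resolution scoring models are all placeholder assumptions of this sketch:

```python
import numpy as np

def upsample2x(scores):
    """Nearest-neighbor upsampling of a 2D score map by a factor of 2."""
    return scores.repeat(2, axis=0).repeat(2, axis=1)

def coarse_to_fine(features_by_resolution, initial_scorer, refine):
    """features_by_resolution: feature maps, lowest resolution first.
    initial_scorer(features) -> initial score map at lowest resolution.
    refine(prev_scores, features) -> intermediate score map at the
    current resolution; the previous scores are upsampled first."""
    scores = initial_scorer(features_by_resolution[0])
    for features in features_by_resolution[1:]:
        scores = refine(upsample2x(scores), features)
    return scores  # intermediate scores at the highest resolution
```

The returned map at the highest resolution plays the role of the final evaluation scores that are then mapped to probability values.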
  • the acquiring module 410 when acquiring the probability value that at least some of the pixels in the original image are target pixels, is specifically configured to: take the live image captured by the augmented reality AR device as the original image, and acquire The probability value that at least some of the pixels in the original image are the target pixels.
  • the generating module 430 is further configured to: display the target image by using the AR device.
  • when acquiring the probability value that at least some of the pixels in the original image are target pixels, the acquiring module 410 is specifically configured to: take a target person image as the original image, wherein the target object includes at least one of the human hair area, the human skin area, and at least part of the clothing area in the target person image; or take a target object image as the original image, wherein the target object is at least part of the object area in the target object image; and acquire the probability value that at least some of the pixels in the original image are the target pixels.
  • an embodiment of the present disclosure further provides an electronic device 500 .
  • as shown in the schematic structural diagram of the electronic device 500 provided by the embodiment of the present disclosure, the electronic device includes a processor 51, a memory 52, and a bus 53.
  • the memory 52 is used to store execution instructions, including a memory 521 and an external memory 522, where the memory 521 is also called an internal memory, and is used to temporarily store operation data in the processor 51 and data exchanged with an external memory 522 such as a hard disk.
  • the processor 51 exchanges data with the external memory 522 through the memory 521.
  • the processor 51 communicates with the memory 52 through the bus 53, so that the processor 51 can execute the following instructions:
  • acquiring the probability value that at least some of the pixels in the original image are target pixels, where the target pixels are pixels belonging to the target object in the original image; for any pixel in the at least part of the pixels, determining, based on the probability value that the pixel is the target pixel and the preselected target color value, the fusion color value corresponding to the pixel; and adjusting, by using the fusion color values corresponding to the at least part of the pixels, the color values of the at least part of the pixels in the original image to obtain the target image.
  • Embodiments of the present disclosure further provide a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is run by a processor, the steps of the image processing method described in the foregoing method embodiments are executed.
  • the storage medium may be a volatile or non-volatile computer-readable storage medium.
  • Embodiments of the present disclosure further provide a computer program product, including a computer-readable storage medium storing program codes, wherein the instructions included in the program codes can be used to execute the steps of the image processing methods in the above method embodiments; for details, refer to the above method embodiments, which are not repeated here.
  • the computer program product can be specifically implemented by means of hardware, software or a combination thereof.
  • in an optional embodiment, the computer program product is embodied as a computer storage medium, and in another optional embodiment, the computer program product is embodied as a software product, such as a software development kit (Software Development Kit, SDK), and the like.
  • the units described as separate components may or may not be physically separated, and components displayed as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • each functional unit in each embodiment of the present disclosure may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit.
  • the functions, if implemented in the form of software functional units and sold or used as stand-alone products, may be stored in a processor-executable non-volatile computer-readable storage medium.
  • the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, a server, or a network device, etc.) to execute all or part of the steps of the methods described in various embodiments of the present disclosure.
  • the aforementioned storage medium includes various media that can store program codes, such as a USB flash disk, a removable hard disk, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, or an optical disk.


Abstract

The present disclosure relates to an image processing method and apparatus, an electronic device, and a storage medium. According to one example, the method comprises: after acquiring an original image, determining a probability value that at least some of the pixels in the original image are target pixels, the target pixels being pixels belonging to a target object in the original image; then, for any pixel among the at least some pixels, determining, based on the probability value that the pixel is a target pixel and a preselected target color value, a fusion color value corresponding to the pixel; and finally, using the fusion color values corresponding to the at least some pixels to adjust the color values of the at least some pixels in the original image so as to obtain a target image.
PCT/CN2021/133400 2021-04-29 2021-11-26 Procédé et appareil de traitement d'images, dispositif électronique et support de stockage WO2022227547A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110473454.0 2021-04-29
CN202110473454.0A CN113191938B (zh) 2021-04-29 2021-04-29 图像处理方法、装置、电子设备及存储介质

Publications (1)

Publication Number Publication Date
WO2022227547A1 true WO2022227547A1 (fr) 2022-11-03

Family

ID=76980903

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/133400 WO2022227547A1 (fr) 2021-04-29 2021-11-26 Procédé et appareil de traitement d'images, dispositif électronique et support de stockage

Country Status (3)

Country Link
CN (1) CN113191938B (fr)
TW (1) TW202242804A (fr)
WO (1) WO2022227547A1 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113191938B (zh) * 2021-04-29 2022-11-15 北京市商汤科技开发有限公司 图像处理方法、装置、电子设备及存储介质
CN113763270B (zh) * 2021-08-30 2024-05-07 青岛信芯微电子科技股份有限公司 蚊式噪声去除方法及电子设备
CN114022395B (zh) * 2022-01-06 2022-04-12 广州卓腾科技有限公司 一种证件照头发颜色矫正方法、装置及介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170270679A1 (en) * 2016-03-21 2017-09-21 The Dial Corporation Determining a hair color treatment option
CN110930296A (zh) * 2019-11-20 2020-03-27 Oppo广东移动通信有限公司 图像处理方法、装置、设备及存储介质
CN111145086A (zh) * 2019-12-27 2020-05-12 北京奇艺世纪科技有限公司 一种图像处理方法、装置及电子设备
CN112614060A (zh) * 2020-12-09 2021-04-06 深圳数联天下智能科技有限公司 人脸图像头发渲染方法、装置、电子设备和介质
CN113191938A (zh) * 2021-04-29 2021-07-30 北京市商汤科技开发有限公司 图像处理方法、装置、电子设备及存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9262684B2 (en) * 2013-06-06 2016-02-16 Apple Inc. Methods of image fusion for image stabilization
CN108921810A (zh) * 2018-06-20 2018-11-30 厦门美图之家科技有限公司 一种颜色迁移方法及计算设备
CN112204608A (zh) * 2019-08-27 2021-01-08 深圳市大疆创新科技有限公司 图像处理方法及装置
CN110971839B (zh) * 2019-11-18 2022-10-04 咪咕动漫有限公司 视频融合方法、电子设备及存储介质
CN112233154A (zh) * 2020-11-02 2021-01-15 影石创新科技股份有限公司 拼接图像的色差消除方法、装置、设备和可读存储介质
CN112235520B (zh) * 2020-12-07 2021-05-04 腾讯科技(深圳)有限公司 一种图像处理方法、装置、电子设备及存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170270679A1 (en) * 2016-03-21 2017-09-21 The Dial Corporation Determining a hair color treatment option
CN110930296A (zh) * 2019-11-20 2020-03-27 Oppo广东移动通信有限公司 图像处理方法、装置、设备及存储介质
CN111145086A (zh) * 2019-12-27 2020-05-12 北京奇艺世纪科技有限公司 一种图像处理方法、装置及电子设备
CN112614060A (zh) * 2020-12-09 2021-04-06 深圳数联天下智能科技有限公司 人脸图像头发渲染方法、装置、电子设备和介质
CN113191938A (zh) * 2021-04-29 2021-07-30 北京市商汤科技开发有限公司 图像处理方法、装置、电子设备及存储介质

Also Published As

Publication number Publication date
CN113191938A (zh) 2021-07-30
CN113191938B (zh) 2022-11-15
TW202242804A (zh) 2022-11-01

Similar Documents

Publication Publication Date Title
WO2022227547A1 (fr) Procédé et appareil de traitement d'images, dispositif électronique et support de stockage
US11222222B2 (en) Methods and apparatuses for liveness detection, electronic devices, and computer readable storage media
Gijsenij et al. Computational color constancy: Survey and experiments
WO2017193906A1 (fr) Procédé et système de traitement d'image
US8638993B2 (en) Segmenting human hairs and faces
EP3542347B1 (fr) Constance de couleur de fourier rapide
WO2018036462A1 (fr) Procédé de segmentation d'image, appareil informatique et support de stockage informatique
Hwang et al. Context-based automatic local image enhancement
CN112328345B (zh) 用于确定主题色的方法、装置、电子设备及可读存储介质
CN111553838A (zh) 模型参数的更新方法、装置、设备及存储介质
US20190304152A1 (en) Method and device for processing image
CN113128373B (zh) 基于图像处理的色斑评分方法、色斑评分装置及终端设备
US20160140748A1 (en) Automated animation for presentation of images
Sharma et al. Single-image camera response function using prediction consistency and gradual refinement
US20120170861A1 (en) Image processing apparatus, image processing method and image processing program
CN109300170B (zh) 肖像照片光影传递方法
Smiatacz Normalization of face illumination using basic knowledge and information extracted from a single image
WO2023273111A1 (fr) Procédé et appareil de traitement d'image, et dispositif informatique et support de stockage
KR102334030B1 (ko) 컴퓨터 장치를 이용한 헤어 염색 방법
WO2023272495A1 (fr) Procédé et appareil de badgeage, procédé et système de mise à jour de modèle de détection de badge et support de stockage
US9251567B1 (en) Providing color corrections to photos
CN113781330A (zh) 图像处理方法、装置及电子系统
Yuan et al. Full convolutional color constancy with adding pooling
CN114565506B (zh) 图像颜色迁移方法、装置、设备及存储介质
CN111062862A (zh) 基于颜色的数据增强方法和系统及计算机设备和存储介质

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21939006

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 21939006

Country of ref document: EP

Kind code of ref document: A1