WO2023197780A1 - Image Processing Method, Apparatus, Electronic Device and Storage Medium - Google Patents

Image Processing Method, Apparatus, Electronic Device and Storage Medium

Info

Publication number
WO2023197780A1
Authority
WO
WIPO (PCT)
Prior art keywords
image, target object, target, color, processed
Application number
PCT/CN2023/079974
Other languages
English (en)
French (fr)
Inventor
刁俊玉
周栩彬
Original Assignee
北京字跳网络技术有限公司
Application filed by 北京字跳网络技术有限公司
Publication of WO2023197780A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/001 Texturing; Colouring; Generation of texture or colour
    • G06T 5/00 Image enhancement or restoration
    • G06T 5/70 Denoising; Smoothing
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/11 Region-based segmentation
    • G06T 7/13 Edge detection
    • G06T 7/136 Segmentation; Edge detection involving thresholding
    • G06T 7/90 Determination of colour characteristics

Definitions

  • Embodiments of the present disclosure relate to the field of image processing technology, for example, to an image processing method, apparatus, electronic device, and storage medium.
  • The display content in short videos or images can include multiple display objects such as people, objects, and backgrounds.
  • Each display object has its own corresponding color attribute and is displayed independently of the others.
  • However, the display methods of the related technologies are relatively simple and cannot meet the diverse needs of users.
  • In a first aspect, an embodiment of the present disclosure provides an image processing method, including: acquiring an image to be processed corresponding to a target image special effect in response to a special effect triggering operation for enabling the target image special effect; and, in response to detecting that the image to be processed includes a first target object and a second target object, displaying the color of the first target object on the second target object.
  • Embodiments of the present disclosure also provide an image processing apparatus, which includes:
  • a to-be-processed image acquisition module, configured to acquire an image to be processed corresponding to a target image special effect in response to a special effect triggering operation for enabling the target image special effect; and
  • a color display module, configured to display the color of the first target object on the second target object in response to detecting that the image to be processed includes a first target object and a second target object.
  • Embodiments of the present disclosure also provide an electronic device, which includes:
  • one or more processors; and
  • a storage device configured to store one or more programs,
  • wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the image processing method provided by any embodiment of the present disclosure.
  • Embodiments of the present disclosure also provide a computer-readable storage medium on which a computer program is stored; when executed by a processor, the program implements the image processing method provided by any embodiment of the present disclosure.
  • Figure 1 is a schematic flowchart of an image processing method provided by Embodiment 1 of the present disclosure.
  • Figure 2 is a schematic flowchart of an image processing method provided by Embodiment 2 of the present disclosure.
  • Figure 3 is a schematic flowchart of an image processing method provided by Embodiment 3 of the present disclosure.
  • Figure 4 is a schematic diagram comparing the effects before and after processing of an image to be processed provided by the present disclosure.
  • Figure 5 is a schematic structural diagram of an image processing apparatus provided in Embodiment 4 of the present disclosure.
  • Figure 6 is a schematic structural diagram of an electronic device provided by Embodiment 5 of the present disclosure.
  • The term "include" and its variations are open-ended, i.e., "including but not limited to."
  • The term "based on" means "based at least in part on."
  • The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; and the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
  • Figure 1 is a schematic flowchart of an image processing method provided by Embodiment 1 of the present disclosure.
  • This embodiment is applicable to diversifying the colors of target objects in an image to be processed according to the corresponding target image special effect before the image is displayed.
  • The method can be executed by an image processing apparatus, which can be implemented in software and/or hardware and can be configured in a terminal and/or server to implement the image processing method of the embodiments of the present disclosure.
  • the method in this embodiment may include:
  • The target image special effect can have the effect of diversifying the colors of the target objects in the image to be processed.
  • For example, the target image special effect can change the display color of a target object to a user-defined color, to a color randomly selected from a preset color library, or to the color of another object included in the image, thereby increasing the diversity and freedom of the colors of target objects in the image to be processed.
  • The special effect triggering operation is an operation for enabling the target image special effect.
  • Special effect triggering operations include two types: contact operations and non-contact operations.
  • A contact operation can be performed through special effect triggering controls, such as buttons, sliders, and other controls, displayed on the application interface.
  • The special effect triggering control corresponds to the special effect name of the target image special effect.
  • When the control is selected, the special effect triggering operation is completed and the target image special effect is started.
  • When the user terminal is a touch-screen terminal, the user can directly click or slide with a finger on the screen position corresponding to the special effect triggering control to select it.
  • When the user terminal is a non-touch-screen terminal, the user can send an operation instruction to the user terminal through external input devices such as a mouse or keyboard so that the special effect triggering control is selected, thereby realizing the special effect triggering operation.
  • For example, the displayed image special effects may include a special effect that replaces the color of a second example object with the color of a first example object.
  • If the color of the first example object is white and the color of the second example object is red, the second example object is displayed in white after the special effect is applied.
  • Each image special effect can also be presented through an effect animation on its special effect triggering control: the animation displays the change process of the color exchange between the first example object and the second example object, so that the user can intuitively and clearly understand the effect of the image special effect.
  • The basic operation controls displayed on the application display interface can also be associated with the target image special effect, so that operating them enables the target image special effect.
  • The basic operation controls may include at least one of an image shooting control, an image uploading control, and an image calling control.
  • When a basic operation control associated with the target image special effect is operated, it is equivalent to completing the special effect triggering operation and enabling the target image special effect.
  • For example, the application display interface displays a "shoot" floating button associated with the target image special effect; when an image is captured by clicking the "shoot" button, the captured image is determined to be the image to be processed, and the target image special effect is enabled so that the image is displayed after processing.
  • When the special effect triggering operation is a non-contact operation, it can be divided into a gesture recognition operation and/or a voice control operation.
  • For a gesture recognition operation, the user can perform actions according to the gestures indicated on the application display interface; when the indicated gesture is recognized, the special effect triggering operation is completed.
  • The user can also control the user terminal through voice to realize the special effect triggering operation; for example, when it is detected that the user issues the voice command "enable target image special effects", the special effect triggering operation is completed.
  • The image to be processed is the image on which the target image special effect needs to be applied.
  • The image to be processed can be obtained directly through an image collector.
  • The image collector includes devices such as a camera, a video camera, a mobile phone with a shooting function, or a computer. Images stored in the user terminal can also be uploaded as images to be processed, and a communication connection can be established with other devices so that images stored on those devices are shared as images to be processed.
  • the implementation of obtaining the image to be processed corresponding to the target image special effect may include:
  • The preset shooting method is the method of collecting the image to be processed.
  • The user can select different shooting methods through custom settings in advance, or a preset default shooting method can be used.
  • According to shooting timing, the preset shooting method can be divided into timed shooting and real-time shooting; according to the image collection mode, into at least one of panoramic shooting, portrait shooting, and night-scene shooting; and according to trigger type, into at least one of click shooting, voice-activated shooting, and motion-recognition shooting.
  • Images uploaded by the user or captured in the preset shooting method can be determined as images to be processed.
  • Alternatively, only images that meet preset conditions among those uploaded or captured can be obtained.
  • The preset condition can be that the image was obtained after the target image special effect was enabled and before the upload or shooting-end operation was performed.
  • The preset condition can also be that the image was obtained within a preset time period after the target image special effect was enabled.
  • The number of images to be processed may be at least one.
  • When a single image is uploaded or captured, that image can be determined as the image to be processed.
  • When multiple images are uploaded or captured, each image can be used as an image to be processed, or the similarity between the images can be determined: if the similarity is greater than a preset similarity threshold, the images can be understood as depicting the same scene, and the image with the highest definition and best color display is selected as the image to be processed.
  • Alternatively, a video uploaded by the user or recorded in real time for applying the target image special effect can be obtained, and each frame of the video is extracted as an image to be processed corresponding to the target image special effect.
  • Images of only some frames in the video may also be determined as images to be processed.
  • For example, specific frames pre-specified by the user can be determined as images to be processed, or one or more frames can be extracted from the video according to a preset extraction rule.
  • The preset extraction rule can be extraction at equal time intervals, such as extracting one frame every second; extraction at equal frame intervals, such as extracting one frame every 5 frames; or random extraction. Extracting only some of the frames reduces the amount of data to be processed.
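The extraction rules above can be sketched as a frame-index selection helper. This is an illustrative sketch, not the patent's implementation; the function name and parameters are assumptions made for the example.

```python
import random

def select_frame_indices(total_frames, fps, rule="every_n_frames",
                         interval_s=1.0, n=5, k=3, seed=None):
    """Choose which frames of a video become images to be processed.

    rule: "every_second"   - equal time intervals (one frame per interval_s),
          "every_n_frames" - equal frame intervals (every n-th frame),
          "random"         - k frames sampled without replacement.
    """
    if rule == "every_second":
        step = max(1, round(fps * interval_s))  # e.g. one frame per second
        return list(range(0, total_frames, step))
    if rule == "every_n_frames":
        return list(range(0, total_frames, n))  # e.g. every 5th frame
    if rule == "random":
        rng = random.Random(seed)
        return sorted(rng.sample(range(total_frames), min(k, total_frames)))
    raise ValueError(f"unknown rule: {rule}")
```

A 4-second clip at 25 fps, for instance, would yield indices 0, 25, 50, 75 under the equal-time rule.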
  • A method of determining some frames of a video as images to be processed also includes: obtaining a video uploaded by the user or recorded in real time, and using the frames that contain the first target object and/or the second target object to which the target image special effect is applied as the images to be processed corresponding to the target image special effect.
  • This method requires setting the target objects in advance; a target object is a component object in the image to be processed to which the target image special effect is applied.
  • The number of target objects can be one or more, and a target object can be at least one of the plants, animals, people, and objects in the image to be processed, or a local component of an animal, person, or object.
  • The target objects include a first target object and a second target object, which may be different components in the image to be processed.
  • When the image to be processed includes both, the color of the first target object can be displayed on the second target object.
  • Accordingly, the frames of the acquired video, uploaded by the user or recorded in real time, that contain the first target object and/or the second target object can be used as images to be processed.
  • The specific contents of the first target object and the second target object can be set in advance.
  • The method may include the user pre-setting the specific contents of the first target object and the second target object, and/or the user pre-setting selection conditions for them and determining the first target object and the second target object according to those conditions.
  • The user can directly input the specific contents and/or selection conditions into the user terminal through the application's settings function, or can pre-set them through the candidate object information and/or candidate selection conditions displayed on the application interface.
  • For example, candidate information such as the candidate names and candidate types of the first target object and the second target object is displayed on the application display interface; the candidate information selected by the user for the first target object determines the first target object, and the candidate information selected for the second target object determines the second target object.
  • The candidate type may include at least one of an animal type, plant type, person type, and object type.
  • The candidate name may include face, tree, sky, and so on, and the candidate information may also include the area, shape, size, color, and other attributes of the candidate object.
  • Candidate selection conditions for the first target object and the second target object may also be displayed on the application display interface.
  • For example, a candidate selection condition may take the plants in the image to be processed as the first target object; another may take a round object in the image to be processed as the first target object and a square object in the image as the second target object, and so on.
  • The first target object may be an image subject included in the image to be processed, or may be at least one preset candidate object corresponding to the target image special effect.
  • Accordingly, the user may select the first target object from the displayed image to be processed, or the first target object may be determined according to a preset condition.
  • When the preset candidate objects corresponding to the target image special effect are two or more objects with different colors, the first target object can be determined in a user-defined manner. For example, at least one candidate object control, showing the name, logo, image, color, or other attribute of each candidate object, may be displayed on the application display interface.
  • When the user triggers a candidate object control, the corresponding candidate object is determined as the first target object.
  • An input box can also be displayed directly on the application display interface, allowing the user to input the name, logo, or other identifying information of the first target object, or to input the RGB color value of the desired color in the input box.
  • In the latter case, the object corresponding to the RGB color value is determined as the first target object.
  • The first target object may include at least one of a handheld object, a salient object in the image to be processed, and an image subject belonging to a first preset subject type.
  • The handheld object is an object in contact with the user's hand.
  • When there are multiple candidate objects, one of them can be selected as the first target object, for example, the object with the largest area.
  • Salient objects may include walls, floors, lawns, desks, windows, beds, and other items whose area is larger than a preset area threshold.
  • The preset area threshold can be determined based on the area of the image to be processed: the larger the area of the image to be processed, the larger the preset area threshold can be.
  • An image subject belonging to the first preset subject type may also be determined as the first target object.
  • The first preset subject type can be classified from multiple aspects.
  • For example, based on the external shape of the target object, image subjects belonging to the first preset subject type can include image subjects containing shape elements such as squares, circles, stars, or rings; based on the color of the target object, they can include image subjects containing cool-tone and/or warm-tone color elements; or, based on the object attributes of the target object, they can include image subjects belonging to fruits, clothing, or buildings.
  • Any object that belongs to the first preset subject type and is an image subject in the image to be processed can be set as the first target object.
  • The second target object includes at least one of a body part of the target user in the image to be processed, a wearable item of the target user, the background area of the image to be processed, and an image subject belonging to a second preset subject type.
  • The target user's body parts can include the hair, face, facial features, neck, hands, and other parts;
  • the wearable items can include tops, pants, necklaces, backpacks, and other items;
  • the background area can be the portion of the image to be processed other than the target user, such as walls, buildings, floors, and scenery.
  • The second target object may also be an image subject belonging to a second preset subject type, which may be the same as or different from the first preset subject type.
  • Optionally, the image to be processed is detected through a preset recognition algorithm to determine whether it contains the set first target object and second target object.
  • A neural network model can also be trained on sample images in advance, so that the trained model can identify the first target object and the second target object in images to be processed.
  • In that case, the image to be processed is input into the pre-trained neural network model for image recognition, and it is determined, based on the model's output, whether the image to be processed contains the first target object and the second target object.
  • When both are detected, the first target object and the second target object can be segmented through a preset segmentation algorithm or a pre-trained segmentation model, and the color of the first target object can be extracted and displayed on the second target object.
  • When extracting the color of the first target object, it can first be determined whether the first target object contains only one color. If so, that unique color is determined as the color of the first target object. If the first target object includes two or more colors, then, in order to reduce the amount of calculation and improve the efficiency of color replacement, a single pixel point can be determined in the first target object, and the color corresponding to that pixel point used as the color of the first target object.
  • The pixel point may be determined randomly, or a pixel point meeting a preset condition may be determined in the first target object, and the color corresponding to it determined as the color of the first target object.
  • The preset condition can be the pixel whose color differs most from the color value of the second target object, or the pixel of the first target object that is farthest from the second target object; alternatively, the color corresponding to a user-specified pixel position in the first target object can be determined as the color of the first target object, extracted, and displayed on the second target object.
  • Alternatively, all of the colors included in the first target object may be displayed on the second target object.
  • For example, the color value at each pixel position can be read sequentially according to the pixel arrangement order of the first target object, and the color corresponding to that color value displayed at the corresponding pixel position of the second target object.
  • The color values may be RGB color mode values.
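The pixel-order transfer described above can be sketched as follows: the colors of the first object's pixels are read in scan order and written, in scan order, onto the second object's pixels. Representing the objects as boolean masks over a numpy image, and cycling the first object's colors when the second object has more pixels, are assumptions of this sketch; the patent does not fix those details.

```python
import numpy as np

def transfer_colors(image, mask_a, mask_b):
    """Display all colors of object A (mask_a) on object B (mask_b).

    image: (H, W, 3) uint8 array; mask_a / mask_b: (H, W) boolean masks.
    Colors are read from A in scan order and written to B in scan order,
    repeating A's colors if B has more pixels.
    """
    out = image.copy()
    colors_a = image[mask_a]              # (Na, 3) colors in scan order
    n_b = int(mask_b.sum())
    idx = np.arange(n_b) % len(colors_a)  # cycle A's colors over B's pixels
    out[mask_b] = colors_a[idx]
    return out
```

Pixels outside both masks keep their original colors, so only the second target object's display color changes.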
  • If no first target object or second target object is detected, prompt information can be displayed on the application display interface, or the failure of the target image special effect operation can be announced through voice playback.
  • The reason for the failure may also be displayed or voiced.
  • For example, the application display interface displays the words "No applicable object detected; it is recommended to change the image" as a prompt.
  • In the technical solution of this embodiment, the image to be processed corresponding to the target image special effect is obtained in response to the special effect triggering operation; when the image to be processed is detected to include the first target object and the second target object, the color of the first target object is displayed on the second target object, completing the conversion of the display color of the second target object. This avoids the monotonous, single image display method of the related technology, achieves the purpose of enriching the display colors of target objects based on the target image special effect, improves the interest of image display, and meets the diverse needs of users.
  • Figure 2 is a schematic flowchart of an image processing method provided in Embodiment 2 of the present disclosure.
  • Based on any of the embodiments of the present disclosure, this embodiment displays the color of the first target object on the second target object by: replacing the color of the second target object with the color of the first target object; or fusing the color of the first target object into the color of the second target object.
  • Explanations of terms that are the same as or corresponding to those in the above embodiments will not be repeated here.
  • the method in this embodiment may include:
  • If the image to be processed includes the first target object and the second target object, replace the color of the second target object with the color of the first target object, or fuse the color of the first target object into the color of the second target object.
  • Optionally, a single color can be determined in the first target object and displayed on the second target object.
  • For example, the central pixel point of the first target object can be determined, and the color of the central pixel point used as the color of the first target object.
  • The advantage of using the color of the central pixel point as the color of the first target object is that, while reducing the amount of calculation required to determine the color, it avoids color interference from objects close to the first target object, which could prevent the target image special effect from being achieved or cause the color of a nearby object to be displayed on the second target object.
  • The central pixel point can be a pixel point in the first target object that is equidistant from the boundary of the first target object; for example, when the first target object is a circular object, the central pixel point is the center of the circle.
  • It is also possible, according to actual application conditions, to use the color corresponding to a pixel point such as the center of gravity, the vertical center, or the incenter of the first target object as the color of the first target object, which is not limited in this embodiment.
  • In one implementation, a regular-shaped border of the first target object can be determined, such as an inscribed circle or inscribed rectangle of the first target object, and a pixel point such as the center of gravity, the vertical center, or the incenter of that regular-shaped border used as the central pixel point.
  • Alternatively, an equal-area dividing line of the first target object can be determined, dividing the first target object into two parts of equal area, and the midpoint of the dividing line determined as the central pixel point.
  • Optionally, determining the central pixel point of the first target object includes: segmenting the first target object from the image to be processed to obtain a target segmentation image, and determining the central pixel point of the first target object based on the target segmentation image.
  • The target segmentation image is the image obtained after the first target object is segmented out, and the central pixel point of the target segmentation image is determined as the central pixel point of the first target object.
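One plausible reading of "central pixel point" is the centroid of the segmentation mask; the following sketch picks the color there. The patent leaves the exact center definition open, so the centroid choice, function names, and mask representation are assumptions of this example.

```python
import numpy as np

def center_pixel_from_mask(mask):
    """Approximate the central pixel of the segmented first target object
    as the centroid (center of gravity) of its boolean segmentation mask."""
    ys, xs = np.nonzero(mask)
    return int(round(ys.mean())), int(round(xs.mean()))

def color_at_center(image, mask):
    """Return the RGB color at the mask's central pixel as the object color."""
    y, x = center_pixel_from_mask(mask)
    return tuple(int(c) for c in image[y, x])
```

Note that for non-convex shapes the centroid can fall outside the mask; a production version would snap it to the nearest mask pixel.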
  • An image segmentation algorithm or a pre-trained neural network model can be used to segment the image to be processed.
  • The image segmentation algorithm may include an image edge segmentation algorithm, a region-based segmentation algorithm, an image threshold segmentation algorithm, and the like.
  • For example, an image edge segmentation algorithm such as Canny edge detection or Sobel edge detection can be used to complete the segmentation of the first target object.
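As a minimal illustration of the Sobel step mentioned above, the sketch below computes gradient magnitudes and thresholds them into an edge mask. It is deliberately naive (explicit loops, no non-maximum suppression); a real pipeline would use an optimized library implementation, and the threshold value is an assumption.

```python
import numpy as np

def sobel_edges(gray, threshold=100.0):
    """Minimal Sobel edge detector on a (H, W) float grayscale array.

    Returns a boolean mask of pixels whose gradient magnitude exceeds
    the threshold; border pixels are left unmarked.
    """
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], float)  # horizontal gradient
    ky = kx.T                                                    # vertical gradient
    H, W = gray.shape
    gx = np.zeros((H, W))
    gy = np.zeros((H, W))
    for i in range(1, H - 1):
        for j in range(1, W - 1):
            patch = gray[i - 1:i + 2, j - 1:j + 2]
            gx[i, j] = (patch * kx).sum()
            gy[i, j] = (patch * ky).sum()
    return np.hypot(gx, gy) > threshold
```

A vertical black-to-white step, for instance, produces strong responses on the two columns adjacent to the step and none elsewhere.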
  • The method of determining the color of the first target object may further include: traversing and reading the color value of each pixel point in a target area of the first target object, calculating the average of the color values of all pixel points in the target area, and using the color corresponding to the average value as the color of the first target object.
  • The target area can be preset according to actual application requirements, and an area of any shape at any position within the first target object can be used as the target area, for example, a rectangular, trapezoidal, circular, or irregularly shaped area with a preset area value in the first target object.
  • The number of pixels contained in the target area is less than or equal to the number of pixels of the first target object.
  • For example, the target area may be the range composed of a preset number of pixels closest to the central pixel point of the first target object.
  • The target area may be set to the square area composed of the central pixel and the 8 pixels closest to it; it may also be the central pixel together with its two adjacent pixels in the horizontal direction and its two adjacent pixels in the vertical direction, a total of five pixels.
  • The average of the color values of all pixels in the target area can then be used as the color value of the first target object.
  • For example, the average color value over the square area composed of the central pixel and the eight pixels closest to it can be determined, and that average determined as the color of the first target object.
  • Two methods, color replacement and color fusion, can be used to display the color of the first target object on the second target object.
  • The steps of the two display methods are as follows:
  • Color replacement can be implemented as follows: clear the color of the second target object, and re-render the second target object according to the color of the first target object, thereby replacing the color of the second target object.
  • Color fusion can be implemented as follows: determine the first color value of the first target object; traverse each pixel point in the second target object and determine the second color value of each pixel point; and fuse the first color value with the second color value to obtain the fused color value of each pixel of the second target object.
  • The fusion of the first color value and the second color value may include a multiplication calculation, an addition calculation, a multiply-add mixing calculation, and the like, of the first color value and the second color value.
  • In this way, the color effects displayed on the second target object can be enriched.
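Of the fusion options named above, the multiplication calculation can be sketched as a standard multiply blend: each channel of the second object's pixel is scaled by the first object's color. This is one illustrative choice; addition or multiply-add mixes would follow the same pattern, and the 255-normalization is an assumption for 8-bit RGB.

```python
import numpy as np

def fuse_multiply(first_color, image, mask_b):
    """Fuse the first object's color into the second object's pixels with a
    multiply blend: out = c1 * c2 / 255 per channel, within mask_b only."""
    out = image.astype(np.float32)
    c1 = np.asarray(first_color, np.float32)
    out[mask_b] = out[mask_b] * c1 / 255.0   # multiply blend, kept in [0, 255]
    return np.clip(out, 0, 255).astype(np.uint8)
```

A pure white second object fused with mid-gray (128, 128, 128), for example, comes out mid-gray, while pixels outside the mask are untouched.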
  • In the technical solution of this embodiment, by replacing or fusing colors, the color effect displayed on the second target object is further enriched and the interest of the image display is improved; moreover, by segmenting the first target object, its color can be determined more accurately and efficiently.
  • Figure 3 is a schematic flowchart of an image processing method provided in Embodiment 3 of the present disclosure. In this embodiment, based on any of the embodiments of the present disclosure, after the color of the first target object is displayed on the second target object, the method further includes: obtaining a preset color adjustment grayscale image, and displaying the color adjustment grayscale image on the second target object.
  • Gaussian blur processing can also be performed on the second target object.
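A minimal separable Gaussian blur for one channel is sketched below; in the context above it would be applied only within the second object's mask. The sigma-to-radius rule and the reflect padding are common conventions assumed for this example, not details from the patent.

```python
import numpy as np

def gaussian_blur_channel(channel, sigma=1.0):
    """Separable Gaussian blur of a (H, W) channel: build a normalized 1-D
    kernel, then convolve rows and columns in turn (reflect padding)."""
    radius = max(1, int(3 * sigma))              # common 3-sigma cutoff
    x = np.arange(-radius, radius + 1, dtype=float)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()                       # weights sum to 1
    padded = np.pad(channel.astype(float), radius, mode="reflect")
    # horizontal pass, then vertical pass
    tmp = np.apply_along_axis(lambda r: np.convolve(r, kernel, "valid"), 1, padded)
    return np.apply_along_axis(lambda c: np.convolve(c, kernel, "valid"), 0, tmp)
```

Because the kernel is normalized, flat regions are unchanged and only edges within the blurred region are softened, which smooths the transition after recoloring.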
  • A target filter effect can also be added to the second target object.
  • the method in this embodiment may include:
  • After the color of the first target object is displayed on the second target object, the color adjustment grayscale image can be displayed on the second target object.
  • The color adjustment grayscale image may include two or more grayscale values, each in the range [0, 255].
  • The color adjustment grayscale image can be obtained from a pre-stored black image by adjusting the light transmittance of each part of the black image.
  • the color adjustment grayscale image can only include two parts with a grayscale value of 255 and a grayscale value of 0.
  • the color adjustment grayscale image can be divided into two parts: black and white.
  • the black part of the color adjustment grayscale image can be divided into two parts respectively.
  • the color-adjustment grayscale map may also include two or more grayscale values, with the map composed of these grayscale levels.
  • the display shape of the grayscale levels in the color-adjustment grayscale map can be adjusted to enrich the hierarchical distribution of the second target object when it is displayed.
  • for example, each grayscale level in the color-adjustment grayscale map may take a wave-like, bar-shaped, or ring-shaped display form, and the color-adjustment grayscale map is obtained by splicing grayscale regions of different shapes.
  • the grayscale distribution in the color-adjustment grayscale map can also be set to enrich the layered display of the second target object.
  • for example, the grayscale regions can be spliced in descending order of grayscale value, so that after the color-adjustment grayscale map is superimposed, the color of the second target object changes gradually and regularly; the grayscale regions can also be spliced in an alternating large-small order of grayscale values to form the color-adjustment grayscale map, so that after the map is superimposed, the contrast of the second target object is enhanced.
  • the grayscale distribution can also be determined based on the background scene of the image to be processed.
  • for example, the grayscale distribution in the color-adjustment grayscale map can be set according to the distribution pattern of scattered sunlight.
  • for example, if the background scene of the image to be processed is an indoor scene lit by a lamp tube,
  • the grayscale distribution in the color-adjustment grayscale map can be set according to the light-wave pattern formed when the lamp is illuminated, so that the second target object blends better into the shooting scene when displayed.
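  • As a concrete illustration of a color-adjustment grayscale map containing only the two grayscale values 255 and 0, arranged in a ring-shaped display form, the following sketch builds such a map and superimposes it on an image region. All names, and the choice of an 8-pixel ring width, are illustrative assumptions rather than details from the disclosure.

```python
import numpy as np

def make_two_level_map(height, width):
    """Build a color-adjustment grayscale map that contains only two
    grayscale values, 255 (white) and 0 (black), as concentric rings."""
    y, x = np.mgrid[0:height, 0:width]
    r = np.hypot(y - height / 2, x - width / 2)
    # alternate white/black rings every 8 pixels of radius (assumed width)
    return np.where((r // 8) % 2 == 0, 255, 0).astype(np.uint8)

def apply_gray_map(image, gray_map):
    """Superimpose the map on the second target object's pixels by using
    each grayscale value as a per-pixel brightness weight."""
    weight = gray_map.astype(np.float64)[..., None] / 255.0
    return (image.astype(np.float64) * weight).astype(np.uint8)
```

  • With two levels the result is a hard black/white layering; a map with more grayscale values would produce the richer, gradual layering described above.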
  • Gaussian blur processing can be performed on the second target object.
  • an image obtained by performing Gaussian blur processing on the second target object can be obtained by performing a convolution calculation on the second target object and a Gaussian distribution.
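  • A minimal sketch of this convolution follows: a naive direct implementation with zero padding, where the kernel size and sigma defaults are illustrative assumptions, not values from the disclosure.

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """Discrete 2-D Gaussian distribution, normalized to sum to 1."""
    ax = np.arange(size) - size // 2
    g = np.exp(-ax**2 / (2 * sigma**2))
    kernel = np.outer(g, g)
    return kernel / kernel.sum()

def gaussian_blur(channel, size=5, sigma=1.0):
    """Blur one image channel by convolving it with the Gaussian kernel
    (edges are zero-padded in this simple sketch)."""
    k = gaussian_kernel(size, sigma)
    pad = size // 2
    padded = np.pad(channel.astype(np.float64), pad)
    out = np.empty_like(channel, dtype=np.float64)
    h, w = channel.shape
    for i in range(h):
        for j in range(w):
            # weighted sum of the neighborhood = convolution with the kernel
            out[i, j] = np.sum(padded[i:i + size, j:j + size] * k)
    return out
```

  • Because the kernel sums to 1, flat regions are unchanged while isolated noise is spread over its neighborhood, which is why the blur smooths and softens the second target object's display.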
  • filters are used to achieve various special effects on images, and the target filter effect can be the filter effect specified by the user.
  • image modification information can be generated on the application display interface to provide various special effects that can be applied to the second target object for the user to choose.
  • the special effect corresponding to the target filter operation can be determined among each special effect as the target filter effect.
  • the target filter effect may include at least one of a noise filter, a distortion filter, an extraction filter, a rendering filter, a liquefaction filter, and a blur filter.
  • the noise filter is used to correct defects in the image processing process, such as removing spots, adding noise, etc.
  • the distortion filter is used to deform and distort selected areas in the image to create three-dimensional effects or other overall changes
  • the Extract filter is used to cut out parts of the image, for example, to extract fine details such as stray hairs from a complicated background
  • the Render filter is used to create texture fills to produce similar 3D lighting effects
  • the Liquify filter can be used to push, pull, rotate, reflect, fold, and expand any area of an image to enhance the artistic effect of the image
  • the Blur filter can be used to blur areas of the image that are too sharp or have too much contrast.
  • for example, when a user shoots a fun image through a video shooting application, the user may tap the " ⁇ "-shaped icon displayed in the application's display interface to start shooting, and the user's facial image and background, provided in the form of real-time shooting, are obtained as the image to be processed; image special-effect controls that can be used for image processing of the image to be processed may be displayed at the bottom of the application display interface, and the user can perform a special effect triggering operation on a favorite image special effect by tapping the screen, making a mid-air gesture, or using voice control, so as to enable the corresponding special effect.
  • the special effects may include special effect A for transforming the color of a target object, special effect B for deforming the target object's face, special effect C for gender-swapping the face in the image to be processed, and special effect D for predicting the age of the person appearing in the image to be processed.
  • name identification controls and/or image identification controls for special effect A, special effect B, special effect C, and special effect D may be displayed below the application display interface; when a name identification control and/or image identification control is detected to be triggered, the corresponding special effect is enabled.
  • the function of the target image special effect is to superimpose, display or replace the color of the first target object in the image to be processed onto the second target object in the image to be processed.
  • accordingly, the special effect whose function matches this may be determined as the target image special effect proposed in this embodiment.
  • for example, the user can be prompted to select the first target object and the second target object in the display page: the user selects the handheld object as the first target object by clicking on the position of the handheld object in the image to be processed, and selects the hair as the second target object by clicking on the position of the hair in the image to be processed.
  • Figure 4 is a schematic diagram comparing the effects before and after processing of an image to be processed provided by the present disclosure.
  • the left image in Figure 4 is an image in which no object is held in the hand, and the right image in Figure 4 represents the rendering obtained by image processing, based on images to be processed with different handheld objects, after the user selects and applies the target image special effect.
  • that is, the image on the right is the effect image obtained after the target image special effect is applied.
  • the handheld object shown in Figure 4 is a tissue bag, so the tissue bag can be used as the first target object, and the hair of the character in the image to be processed can be used as the second target object.
  • the paper bag is segmented from the image to be processed, and the center coordinates of the paper bag are calculated.
  • the determined center coordinates of the paper bag fall on dark lettering, so the font color corresponding to the center coordinates is determined as the color of the paper bag.
  • the hair of the character in Figure 4 is segmented through the hair segmentation algorithm, and the color of the handheld object is displayed on the character's hair.
  • the paper bag contains a variety of colors: the fonts and the periphery of the paper bag are dark, and the remaining part is white.
  • by comparison, the font color in the paper bag is darker, while the hair of the character in Figure 4 is displayed lighter and brighter.
  • it can be seen that the target image special effect can cause color interaction between the objects in the image to be processed, making the image display more interesting for the user and showing diverse results.
  • in this embodiment, by displaying the color-adjustment grayscale map on the second target object, the hierarchical distribution of the second target object when displayed is enriched; by performing Gaussian blur processing on the second target object, the displayed image of the second target object is made smoother and softer and image noise is reduced; and by adding a target filter effect to the second target object, the image processed by the target image special effect becomes more varied and interesting.
  • FIG. 5 is a schematic structural diagram of an image processing device provided in Embodiment 4 of the present disclosure.
  • the image processing device provided in this embodiment can be implemented by software and/or hardware, and can be configured in a terminal and/or server to implement the image processing method in the embodiments of the present disclosure.
  • the device may include: a to-be-processed image acquisition module 510 and a color display module 520; wherein,
  • the to-be-processed image acquisition module 510 is configured to acquire the image to be processed corresponding to the target image special effect in response to a special effect triggering operation for enabling the target image special effect;
  • the color display module 520 is configured to display the color of the first target object on the second target object if it is detected that the image to be processed includes a first target object and a second target object.
  • the to-be-processed image acquisition module 510 includes:
  • a first to-be-processed image acquisition unit, configured to acquire an image uploaded by the user or shot in a preset shooting mode for applying the target image special effect, as the image to be processed corresponding to the target image special effect; or,
  • a second to-be-processed image acquisition unit, configured to acquire each frame of a video uploaded by the user or recorded in real time for applying the target image special effect, as the image to be processed corresponding to the target image special effect; or,
  • a third to-be-processed image acquisition unit, configured to acquire an image containing the first target object and/or the second target object in a video, uploaded by the user or recorded in real time, to which the target image special effect is applied, as the image to be processed corresponding to the target image special effect.
  • the color display module 520 includes:
  • a color replacement unit configured to replace the color of the second target object with the color of the first target object
  • a color fusion unit configured to blend the color of the first target object with the color of the second target object.
  • the image processing device further includes:
  • a center pixel point determination module is configured to determine the center pixel point of the first target object, and use the color of the center pixel as the color of the first target object; or,
  • the average value calculation module is configured to traverse and read the color value of each pixel in the target area of the first target object, calculate the average of the color values of all pixels in the target area, and use the color corresponding to the average value as the color of the first target object.
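  • Both color-determination strategies above, the center pixel and the target-area average, can be sketched as follows, assuming a boolean segmentation mask; the function names are illustrative and not from the disclosure.

```python
import numpy as np

def average_region_color(image, mask):
    """Traverse the color values of every pixel inside the target area
    (mask == True) and return the color corresponding to their average."""
    pixels = image[mask]            # (N, 3) colors of the target area
    return pixels.mean(axis=0)

def center_pixel_color(image, mask):
    """Alternative: use the color of the target object's center pixel,
    taking the centroid of the segmentation mask as the center."""
    ys, xs = np.nonzero(mask)
    cy, cx = int(ys.mean()), int(xs.mean())
    return image[cy, cx]
```

  • The center-pixel variant needs only one lookup, while the average considers the whole target area's color characteristics, matching the trade-off described in the text.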
  • the central pixel point determination module includes:
  • the first target object segmentation unit is configured to segment the first target object from the image to be processed to obtain a target segmented image, and to determine the center pixel of the first target object based on the target segmented image.
  • the image processing device further includes:
  • the color-adjustment grayscale map acquisition module is configured to, after the color of the first target object is displayed on the second target object, obtain a preset color-adjustment grayscale map and display the color-adjustment grayscale map on the second target object.
  • the image processing device further includes:
  • the second target object Gaussian blur processing module is configured to perform Gaussian blur processing on the second target object after the color of the first target object is displayed on the second target object.
  • the image processing device further includes:
  • the second target object adding filter effect module is configured to add a target filter effect to the second target object after the color of the first target object is displayed on the second target object.
  • the first target object includes at least one of a handheld object, a salient object, and an image subject belonging to the first preset subject type in the image to be processed.
  • the second target object includes at least one of the body parts of the target user in the image to be processed, the wearables of the target user, the background area of the image to be processed, and an image subject belonging to a second preset subject type.
  • in this embodiment, when the special effect triggering operation for enabling the target image special effect is triggered, the image to be processed corresponding to the target image special effect is obtained; when it is detected that the image to be processed includes a first target object and a second target object, the color of the first target object is displayed on the second target object, completing the conversion of the display color of the second target object.
  • this avoids the dull and monotonous image display methods of the related art and achieves, based on the target image special effect, the purpose of enriching the display colors of the target object, improving the interest of the image display and meeting users' diverse needs.
  • the above-mentioned device can execute the method provided by any embodiment of the present disclosure, and has corresponding functional modules and beneficial effects for executing the method.
  • FIG. 6 is a schematic structural diagram of an electronic device provided by Embodiment 5 of the present disclosure.
  • Terminal devices in embodiments of the present disclosure may include, but are not limited to, mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (Personal Digital Assistant, PDA), tablet computers (PAD), portable multimedia players (Portable Media Player , PMP), mobile terminals such as vehicle-mounted terminals (such as vehicle-mounted navigation terminals), and fixed terminals such as digital televisions (Television, TV), desktop computers, etc.
  • the electronic device shown in FIG. 6 is only an example and should not impose any limitations on the functions and scope of use of the embodiments of the present disclosure.
  • the electronic device 600 may include a processing device (such as a central processing unit, a graphics processor, etc.) 601, which may perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage device 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the electronic device 600.
  • the processing device 601, ROM 602 and RAM 603 are connected to each other via a bus 605.
  • An input/output (I/O) interface 604 is also connected to the bus 605.
  • the following devices can be connected to the I/O interface 604: input devices 606 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 607 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 608 including a magnetic tape, a hard disk, etc.; and a communication device 609. The communication device 609 may allow the electronic device 600 to communicate wirelessly or by wire with other devices to exchange data.
  • while FIG. 6 illustrates an electronic device 600 with various means, it should be understood that implementing or providing all of the illustrated means is not required; more or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product including a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via communication device 609, or from storage device 608, or from ROM 602.
  • when the computer program is executed by the processing device 601, the above-described functions defined in the methods of the embodiments of the present disclosure are performed.
  • Embodiments of the present disclosure provide a computer storage medium on which a computer program is stored.
  • when the program is executed by a processor, the image processing method provided by the above embodiments is implemented.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • the computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or any combination thereof.
  • examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard drive, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM) or flash memory, optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code contained on a computer-readable medium can be transmitted using any appropriate medium, including but not limited to: wires, optical cables, radio frequency (Radio Frequency, RF), etc., or any suitable combination of the above.
  • the client and server can communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected with digital data communication in any form or medium (e.g., a communications network).
  • examples of communication networks include local area networks (LANs), wide area networks (WANs), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; it may also exist independently without being assembled into the electronic device.
  • the above computer-readable medium carries one or more programs.
  • when the above one or more programs are executed by the electronic device, the electronic device is caused to: in response to a special effect triggering operation for enabling a target image special effect, obtain the image to be processed corresponding to the target image special effect; and, if it is detected that the image to be processed includes a first target object and a second target object,
  • the color of the first target object is displayed on the second target object.
  • the storage medium may be a non-transitory storage medium.
  • computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as "C" or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in the flowchart or block diagram may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown one after another may actually execute substantially in parallel, or they may sometimes execute in the reverse order, depending on the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or can be implemented using a combination of special-purpose hardware and computer instructions.
  • the units involved in the embodiments of the present disclosure can be implemented in software or hardware.
  • the name of the unit does not constitute a limitation on the unit itself under certain circumstances.
  • the first acquisition unit can also be described as "the unit that acquires at least two Internet Protocol addresses.”
  • exemplary types of hardware logic components include: Field-Programmable Gate Arrays (FPGAs), Application-Specific Integrated Circuits (ASICs), Application-Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), etc.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
  • examples of machine-readable storage media include an electrical connection based on one or more wires, a portable computer disk, a hard drive, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM) or flash memory, optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • Example 1 provides an image processing method, which includes:
  • the color of the first target object is displayed on the second target object.
  • Example 2 provides an image processing method, which further includes:
  • Example 3 provides an image processing method, which further includes:
  • Example 4 provides an image processing method, which further includes:
  • Example 5 provides an image processing method, which further includes:
  • the first target object is segmented from the image to be processed to obtain a target segmented image, and the center pixel of the first target object is determined based on the target segmented image.
  • Example 6 provides an image processing method, which further includes:
  • Example 7 provides an image processing method, which further includes:
  • Gaussian blur processing is performed on the second target object.
  • Example 8 provides an image processing method, which further includes:
  • Example 9 provides an image processing method, which further includes:
  • the first target object includes at least one of a handheld object, a salient object, and an image subject belonging to a first preset subject type in the image to be processed.
  • Example 10 provides an image processing method, which further includes:
  • the second target object includes at least one of the body parts of the target user in the image to be processed, the wearables of the target user, the background area of the image to be processed, and an image subject belonging to a second preset subject type.
  • Example 11 provides an image processing device, which includes:
  • An image acquisition module to be processed configured to acquire an image to be processed corresponding to the target image special effect in response to a special effect triggering operation for enabling a target image special effect
  • a color display module configured to display the color of the first target object on the second target object if it is detected that the image to be processed includes a first target object and a second target object.


Abstract

Embodiments of the present disclosure disclose an image processing method and apparatus, an electronic device, and a storage medium. The method includes: in response to a special effect triggering operation for enabling a target image special effect, obtaining an image to be processed corresponding to the target image special effect; and, in response to detecting that the image to be processed includes a first target object and a second target object, displaying the color of the first target object on the second target object.

Description

Image processing method and apparatus, electronic device, and storage medium
This application claims priority to Chinese Patent Application No. 202210382300.5, filed with the China Patent Office on April 12, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to the field of image processing technology, for example, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
With the rapid development of new Internet media technology, short videos and images, as a means of disseminating Internet content, are deeply loved by a wide range of users.
At present, the content displayed in a short video or image may include multiple display objects such as people, items, and backgrounds; each display object has its own corresponding color attributes, and all are displayed independently of one another. However, the display methods of the related art are relatively monotonous and cannot meet users' diverse needs.
Summary
In a first aspect, embodiments of the present disclosure provide an image processing method, including:
in response to a special effect triggering operation for enabling a target image special effect, obtaining an image to be processed corresponding to the target image special effect; and
in response to detecting that the image to be processed includes a first target object and a second target object, displaying the color of the first target object on the second target object.
In a second aspect, embodiments of the present disclosure further provide an image processing apparatus, including:
a to-be-processed image acquisition module, configured to, in response to a special effect triggering operation for enabling a target image special effect, obtain an image to be processed corresponding to the target image special effect; and
a color display module, configured to, in response to detecting that the image to be processed includes a first target object and a second target object, display the color of the first target object on the second target object.
In a third aspect, embodiments of the present disclosure further provide an electronic device, including:
one or more processors; and
a storage apparatus configured to store one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the image processing method provided by any embodiment of the present disclosure.
In a fourth aspect, embodiments of the present disclosure further provide a computer-readable storage medium having stored thereon a computer program which, when executed by a processor, implements the image processing method provided by any embodiment of the present disclosure.
Brief Description of the Drawings
The drawings required in the description of the embodiments are briefly introduced below.
FIG. 1 is a schematic flowchart of an image processing method provided in Embodiment 1 of the present disclosure;
FIG. 2 is a schematic flowchart of an image processing method provided in Embodiment 2 of the present disclosure;
FIG. 3 is a schematic flowchart of an image processing method provided in Embodiment 3 of the present disclosure;
FIG. 4 is a schematic diagram comparing the effects of an image to be processed before and after processing, provided by the present disclosure;
FIG. 5 is a schematic structural diagram of an image processing apparatus provided in Embodiment 4 of the present disclosure;
FIG. 6 is a schematic structural diagram of an electronic device provided in Embodiment 5 of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described below with reference to the drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the steps described in the method embodiments of the present disclosure may be performed in different orders and/or in parallel. In addition, the method embodiments may include additional steps and/or omit performing the steps shown. The scope of the present disclosure is not limited in this respect.
As used herein, the term "include" and variants thereof are open-ended, i.e., "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not used to limit the order of, or interdependence between, the functions performed by these apparatuses, modules, or units. It should be noted that the modifiers "a/one" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
The names of the messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of these messages or information.
Embodiment 1
FIG. 1 is a schematic flowchart of an image processing method provided in Embodiment 1 of the present disclosure. In this embodiment, before an image is displayed, the color of a target object in the image to be processed can be displayed in a diversified manner according to the target image special effect that is responded to. The method may be performed by an image processing apparatus, which may be implemented by software and/or hardware and may be configured in a terminal and/or server to implement the image processing method in the embodiments of the present disclosure.
As shown in FIG. 1, the method of this embodiment may include:
S110: In response to a special effect triggering operation for enabling a target image special effect, obtain the image to be processed corresponding to the target image special effect.
The target image special effect can achieve the effect of displaying the color of the target object in the image to be processed in a diversified manner. For example, the target image special effect may change the display color of the target object to a user-defined color, change it to a color randomly selected from a preset color library, or change it to the color of another object contained in the image, thereby increasing, through the target image special effect, the diversity and freedom of the color of the target object in the image to be processed.
In this embodiment, the special effect triggering operation is an operation for enabling the target image special effect. Special effect triggering operations include two types: contact operations and non-contact operations.
(1) When the special effect triggering operation is a contact operation, a special effect trigger control, such as a button or a slider, may be displayed on the application display interface of the user terminal. The special effect trigger control corresponds to the special effect name of the target image special effect; the user completes the special effect triggering operation to start the target image special effect through a contact operation on the special effect trigger control, such as clicking the button or dragging the slider. For example, when the user terminal is a touchscreen terminal, the user can directly tap or swipe with a finger to select the position on the screen corresponding to the special effect trigger control; when the user terminal is a non-touchscreen terminal, an operation instruction may be sent to the user terminal through an external input device such as a mouse or keyboard, so that the special effect trigger control is selected, thereby implementing the special effect triggering operation.
To display image special effects more vividly, the special effect trigger control corresponding to an image special effect may be displayed in the form of an animated effect image, showing the image changes before and after the image special effect is used. Exemplarily, the displayed image special effects may include a special effect that replaces the color of a second example object with the color of a first example object; for instance, if the color of the first example object is white and the color of the second example object is red, then when the special effect trigger control is displayed with an animated effect image, the change process of the first example object changing from white to the red of the second example object may be shown, while the color of the second example object remains unchanged. Alternatively, the image special effects may also include a special effect in which two objects exchange colors; when this special effect trigger control is displayed with an animated effect image, the change process of the first example object and the second example object exchanging colors may be shown, so that the user intuitively and clearly understands the function of the image special effect.
In an example, a basic operation control displayed on the application display interface may also be associated with the target image special effect; when the basic operation control is triggered, the target image special effect can be enabled. Exemplarily, the basic operation control may include at least one of a shoot-image control, an upload-image control, and a call-image control; when an operation is performed on a basic operation control associated with the target image special effect, this is equivalent to completing the special effect triggering operation, and the target image special effect can be enabled. For example, a "Shoot" floating button is displayed on the application display interface and is associated with the target image special effect; when an image is shot by tapping the "Shoot" floating button, the captured image is determined as the image to be processed, and the target image special effect needs to be enabled for image processing before display.
(2) When the special effect triggering operation is a non-contact operation, it can be divided into a gesture recognition operation and/or a voice control operation. The user may make a movement according to the gesture indicated on the application display interface; when the user terminal detects the movement, the special effect triggering operation is completed. The user may also control the user terminal by voice to implement the special effect triggering operation; for example, when a voice instruction from the user such as "start the target image special effect" is detected, the special effect triggering operation is completed.
For example, the image to be processed is an image on which the target image special effect needs to be applied. The image to be processed may be obtained directly through an image collector, which includes devices such as a webcam, a camera, or a mobile phone or computer with a shooting function. An image stored in the user terminal may also be uploaded to obtain the image to be processed; a communication connection may also be established with another device, and an image stored in the other device may be shared as the image to be processed.
In one embodiment, implementations of obtaining the image to be processed corresponding to the target image special effect may include:
obtaining an image uploaded by the user or shot in a preset shooting mode for applying the target image special effect, as the image to be processed corresponding to the target image special effect; or,
obtaining each frame of a video uploaded by the user or recorded in real time for applying the target image special effect, as the image to be processed corresponding to the target image special effect; or,
obtaining an image containing the first target object and/or the second target object in a video, uploaded by the user or recorded in real time, to which the target image special effect is applied, as the image to be processed corresponding to the target image special effect.
In the first manner, the preset shooting mode is the manner of capturing the image to be processed; the user may select different shooting modes through customized settings in advance, or a preset default shooting mode may be adopted. For example, divided by capture time, preset shooting modes may include timed shooting and real-time shooting; divided by image capture mode, preset shooting modes may include at least one of panoramic shooting, portrait shooting, and night-scene shooting; divided by trigger type, preset shooting modes may include at least one of tap-to-shoot, voice-controlled shooting, and motion-recognition shooting.
For example, after the target image special effect is enabled, an image uploaded by the user or shot in the preset shooting mode may be determined as the image to be processed. When the user uploads or shoots images, situations such as multiple shots or multiple uploads may occur, providing multiple images to be processed; to avoid omissions when obtaining images to be processed, images uploaded by the user or shot in the preset shooting mode that satisfy a preset condition may be obtained. For example, the preset condition may be images obtained after the target image special effect is enabled and before an upload-end or shoot-end operation is performed; the preset condition may also be images obtained within a preset time period after the target image special effect is enabled. By obtaining images that satisfy the preset condition, it is ensured that the images to be processed can be obtained accurately and comprehensively.
In an example, the number of images to be processed may be at least one. When only one image uploaded or shot by the user is obtained, that image may be determined as the image to be processed. When multiple images are obtained, each image may be used as an image to be processed for image processing; alternatively, the degree of similarity between the images may be determined, and if the similarity is greater than a preset similarity threshold, the uploaded images can be understood as images of the same scene, and the image with the highest definition and best color display effect is selected as the image to be processed.
In the second manner, a video uploaded by the user or recorded in real time for applying the target image special effect may be obtained, and each frame of the video may be extracted as an image to be processed corresponding to the target image special effect.
For example, a partial number of frames in the video may also be determined as images to be processed. Exemplarily, specific frames of the video designated in advance by the user may be determined as images to be processed; one or more images may also be extracted from the video according to a preset extraction rule and determined as images to be processed. The preset extraction rule may be extraction at equal time intervals, such as extracting one frame every 1 second; the preset extraction rule may also be extraction at equal frame intervals, such as one extraction every 5 frames, or random extraction. By extracting images of only some frames, the amount of data processing is reduced.
In an example, one type of manner of determining a partial number of frames in the video as images to be processed further includes: obtaining an image containing the first target object and/or the second target object in a video, uploaded by the user or recorded in real time, to which the target image special effect is applied, as the image to be processed corresponding to the target image special effect. This manner requires the target objects to be set in advance; a target object is a constituent object in the image to be processed on which the target image special effect is applied. The number of target objects may be one or more, and a target object may be at least one of a plant, an animal, a person, and an item in the image to be processed, or may be a local component of an animal, a person, or an item.
In one embodiment, the target objects include a first target object and a second target object, and the first target object and the second target object may be different components of the image to be processed. Through the target image special effect, the color of the first target object can be displayed on the second target object. Based on the preset first target object and second target object, an image containing the first target object and/or the second target object in the obtained video, uploaded by the user or recorded in real time, may be used as the image to be processed.
S120: If it is detected that the image to be processed includes a first target object and a second target object, display the color of the first target object on the second target object.
In this embodiment, the specific content of the first target object and the second target object may be set in advance. The setting manner may include the user presetting the specific content of the first target object and the second target object, and/or the user presetting selection conditions for the first target object and the second target object, with the first target object and the second target object determined according to the selection conditions.
For example, the user may directly input the specific content and/or selection conditions of the objects into the user terminal through an application settings function, or may complete the presetting of the selection conditions or the specific content of the objects through candidate object information and/or candidate selection conditions displayed on the application interface.
Exemplarily, when responding to the special effect triggering operation of the target image special effect, candidate information such as candidate names and candidate types of the first target object and the second target object is displayed on the application display interface; the candidate information selected by the user for the first target object is used as the first target object, and the candidate information selected by the user for the second target object is used as the second target object. The candidate types may include at least one of an animal type, a plant type, a person type, and an item type; the candidate names may include face, tree, sky, etc.; and the candidate information may include information such as the area, shape, size, and color of the candidate objects. In response to the special effect triggering operation of the target image special effect, candidate selection conditions for the first target object and the second target object may also be displayed on the application display interface. For example, a candidate selection condition may include using a plant in the image to be processed as the first target object and using the part with the largest area in the image to be processed as the second target object; a candidate selection condition may also include using a round item in the image to be processed as the first target object and a square item in the image to be processed as the second target object, and so on.
It should be noted that the first target object may be an image subject included in the image to be processed, or may be at least one preset candidate object corresponding to the target image special effect. When the first target object is an image subject included in the image to be processed, after the image to be processed is obtained, the user may select the first target object from the displayed image to be processed, or the first target object may be determined according to a preset selection condition. When the preset candidate objects corresponding to the target image special effect are two or more objects with different colors, the first target object may be determined in a user-defined manner. For example, at least one type of candidate object control, such as the name, identifier, image, and color of each candidate object, may be displayed on the application display interface. When it is detected that one of the candidate object controls is triggered, the corresponding candidate object is determined as the first target object. To increase the diversity and flexibility of the color of the first target object, an input box may also be displayed directly on the application display interface so that the user can directly input information such as the name and identifier of the first target object; the user may also input a desired RGB color value of the first target object into the input box, and the object corresponding to the RGB color value may be determined as the first target object.
To illustrate the multiple choices available for the first target object and the second target object, in this embodiment, the first target object may include at least one of a handheld object, a salient object, and an image subject belonging to a first preset subject type in the image to be processed.
The handheld object is an item in contact with the user's hand; when the user's hand is in contact with multiple items, one of the items may be selected as the first target object, for example, the item with the largest area. Salient objects may include items whose area is greater than a preset area threshold, such as a wall, floor, lawn, desk, window, or bed; the preset area threshold may be determined according to the area of the obtained image to be processed, and the larger the area of the image to be processed, the larger the preset area threshold may be.
For example, an image subject belonging to the first preset subject type may also be determined as the first target object. The first preset subject type may be classified from multiple aspects. For example, classified by the external shape of the target object, image subjects belonging to the first preset subject type may include image subjects with shape elements such as squares, circles, stars, and rings; or, classified by the color of the target object, image subjects belonging to the first preset subject type may be image subjects containing cool-toned and/or warm-toned elements; or, classified by the object attributes of the target object, image subjects belonging to the first preset subject type may include image subjects belonging to fruit, clothing, or buildings. For example, any object that belongs to the first preset subject type and is an image subject in the image to be processed may be set as the first target object.
In an example, the second target object includes at least one of a body part of a target user in the image to be processed, a wearable of the target user, a background area of the image to be processed, and an image subject belonging to a second preset subject type. The body parts of the target user may include parts such as hair, face, facial features, neck, and hands; the wearables may include items such as tops, trousers, necklaces, and backpacks; and the background area may be the area excluding the target user, such as walls, buildings, the ground, and scenery. In addition, the second target object may also be an image subject belonging to the second preset subject type, which may be the same type as the first preset subject type or a different one.
例如,在获取待处理图像后,基于预先确定的第一目标对象和第二目标对象的具体内容,通过预设识别算法对待处理图像进行图像检测,以确定待处理图像中是否包含设定的第一目标对象和第二目标对象。也可预先通过样本处理图像,对神经网络模型进行训练,使训练后的神经网络模型可实现识别出样本处理图像中的第一目标对象和第二目标对象。将待处理图像输入至预先训练完成的用于进行图像识别的神经网络模型中,基于神经网络模型的输出结果,确定待处理图像中是否包含第一目标对象和第二目标对象。
当检测到待处理图像中包含第一目标对象和第二目标对象时,可通过预设分割算法或预先训练完成的分割模型,分割出第一目标对象和第二目标对象,提取第一目标对象的颜色,并将该颜色显示在第二目标对象上。
例如,提取第一目标对象的颜色时,可确定第一目标对象中是否仅包含一种颜色,如仅包含一种颜色时,则可将该唯一的颜色确定为第一目标对象的颜色。如第一目标对象包括两种或两种以上的颜色时,为减少计算量,提高颜色替换的效率,可在第一目标对象中确定出一个像素点,将该像素点对应的颜色作为第一目标对象的颜色。
示例性的,可随机确定出一个像素点,也可在第一目标对象中确定出满足预设条件的像素点,将该像素点对应的颜色确定为第一目标对象的颜色。例如,预设条件可为与第二目标对象的颜色值差距最大的颜色对应的像素点;还可为第二目标对象中距离第一目标对象最远的像素点;还可按照用户指定的第一目标对象的像素位置对应的颜色,确定为第一目标对象的颜色,提取该颜色显示至第二目标对象上。
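"与第二目标对象的颜色值差距最大"这一预设条件,可按颜色值的欧氏距离来示意实现。下面是一个 Python 草图,假设使用 numpy,且第一目标对象的像素已展开为 N×3 的 RGB 数组;距离度量仅为一种可行选择,并非本方案限定:

```python
import numpy as np

def pick_contrast_color(first_pixels, second_mean):
    """在第一目标对象的像素中,选出与第二目标对象平均颜色欧氏距离最大的像素颜色。"""
    diff = np.linalg.norm(first_pixels.astype(float) - second_mean, axis=1)
    return first_pixels[int(diff.argmax())]

first = np.array([[200, 200, 200],
                  [10, 10, 10],
                  [120, 120, 120]], dtype=np.uint8)   # 第一目标对象的若干像素
second_mean = np.array([230.0, 230.0, 230.0])          # 假设第二目标对象整体偏亮
color = pick_contrast_color(first, second_mean)        # 取与之差距最大的深色
```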
为了更全面地考虑第一目标对象的颜色特征,还可将第一目标对象中包含的全部颜色均显示至第二目标对象中。示例性的,可按照第一目标对象的像素排列顺序,依次确定对应像素位置上的颜色值,并将该颜色值对应的颜色显示至第二目标对象的对应像素位置上。在一示例中,颜色值可以为RGB色彩模式值。当第一目标对象的面积大于第二目标对象时,在第二目标对象的颜色被填满后即可无需再提取。当第一目标对象的面积小于第二目标对象时,在遍历第一目标对象的全部像素后,可重复提取第一目标对象的像素位置中的颜色,显示至第二目标对象中未添加颜色的像素位置。
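按像素排列顺序依次取色、像素不足时从头循环重复提取的过程,可用取模运算简洁地示意。以下 Python 草图假设使用 numpy,并假设两个对象的像素均已展开为一维颜色序列:

```python
import numpy as np

def transfer_colors(first_colors, second_size):
    """按像素排列顺序把第一目标对象的颜色依次映射到第二目标对象;
    第一目标对象像素不足时,从头循环重复提取,直至第二目标对象被填满。"""
    idx = np.arange(second_size) % len(first_colors)   # 循环取色的下标
    return first_colors[idx]

first_colors = np.array([[255, 0, 0],
                         [0, 255, 0]], dtype=np.uint8)  # 第一目标对象仅两个像素
filled = transfer_colors(first_colors, second_size=5)    # 第二目标对象有 5 个像素
```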
当在待处理图像中未检测到第一目标对象和/或第二目标对象时,则缺少目标图像特效的作用对象,无法实现特效的操作。为及时提示用户无法完成目标图像特效,可在应用显示界面展示提示信息,或通过语音播放的形式提示目标图像特效操作失败。在一示例中,还可显示或语音播报特效失败原因。如在应用显示界面显示“未检测到可作用对象,建议更换图像”等字样,进行提示。
本实施例,在启用目标图像特效的特效触发操作被触发时,获取目标图像特效对应的待处理图像,当检测到待处理图像中包括第一目标对象和第二目标对象时,通过将第一目标对象的颜色显示在第二目标对象上,完成对第二目标对象的显示颜色的变换操作,避免了相关技术中图像展示方式枯燥、单一的情况,实现了基于目标图像特效,丰富目标对象的展示颜色的目的,提高了图像展示时的趣味性,满足用户的多样化需求。
实施例二
图2为本公开实施例二所提供的一种图像处理方法的流程示意图,本实施例在本公开实施例中任一实施例的基础上,将第一目标对象的颜色显示在第二目标对象上,包括:将第二目标对象的颜色替换为第一目标对象的颜色;或者,将第一目标对象的颜色融合至第二目标对象上的颜色上。其中,与上述各实施例相同或相应的术语的解释在此不再赘述。
如图2所示,本实施例的方法可包括:
S210、响应于用于启用目标图像特效的特效触发操作,获取与目标图像特效对应的待处理图像。
S220、如果检测到待处理图像中包括第一目标对象和第二目标对象,则将第二目标对象的颜色替换为第一目标对象的颜色,或者,将第一目标对象的颜色融合至第二目标对象上的颜色上。
本实施例中,可在第一目标对象中确定出一种颜色,显示至第二目标对象上。例如,可确定第一目标对象的中心像素点,将中心像素点的颜色作为第一目标对象的颜色。将第一目标对象的中心像素点的颜色作为第一目标对象的颜色的好处在于,减少确定颜色的计算量的同时,可避免与第一目标对象临近的对象的颜色干扰,导致无法实现目标图像特效,或将临近的对象的颜色显示至第二目标对象上。
其中,对于形状规则的第一目标对象,如圆形、矩形、梯形等形状的对象,中心像素点可为第一目标对象中与第一目标对象的四周距离相等的像素点;如当第一目标对象为圆形对象时,中心像素点为第一目标对象的圆心。另外,本领域技术人员根据实际应用情况,还可将第一目标对象的重心、垂心和内心等像素点对应的颜色作为第一目标对象的颜色,对此本实施例不作限定。对于形状不规则的第一目标对象,可采用规则形状的边框框中第一目标对象,如通过第一目标对象的外接圆、外接矩形等边框,框中第一目标对象;可确定出规则形状的边框的重心、垂心和内心等像素点作为中心像素点;或者,可确定出第一目标对象的等面积分割线,将第一目标对象划分为等面积的两部分,将等面积分割线的中点确定为中心像素点。
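对形状不规则的第一目标对象,用规则边框近似确定中心像素点的思路可示意如下。该 Python 草图假设目标以二值掩码表示、使用 numpy,并采用外接矩形中心作为近似;当该中心落在目标之外时,退而取目标内离它最近的像素,这只是诸多可行实现中的一种:

```python
import numpy as np

def center_pixel(mask):
    """以目标掩码的外接矩形中心近似中心像素点;
    若该点不在目标内,则取目标内离它最近的像素。"""
    ys, xs = np.nonzero(mask)
    cy = (ys.min() + ys.max()) // 2   # 外接矩形的行中心
    cx = (xs.min() + xs.max()) // 2   # 外接矩形的列中心
    if mask[cy, cx]:
        return cy, cx
    d = (ys - cy) ** 2 + (xs - cx) ** 2
    i = int(d.argmin())
    return int(ys[i]), int(xs[i])

mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True             # 一个 3x3 的目标区域
cy, cx = center_pixel(mask)       # 中心应为 (2, 2)
```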
本实施例中,确定第一目标对象的中心像素点的一类示例,包括:将第一目标对象从待处理图像中分割出来,得到目标分割图像,并根据目标分割图像确定第一目标对象的中心像素点。
其中,目标分割图像为第一目标对象被分割出后得到的图像,可将目标分割图像的中心像素点确定为第一目标对象的中心像素点。例如,可采用图像分割算法或预先训练完成的神经网络模型,对待处理图像进行图像分割。示例性的,图像分割算法可包括图像边缘分割算法、基于区域的分割算法和图像阈值分割算法等。如,可采用Canny边缘检测、Sobel边缘检测等图像边缘分割算法完成对第一目标对象的分割。
在一实施例中,确定第一目标对象的颜色的方式还可包括:遍历读取第一目标对象中目标区域内每个像素点的颜色值,计算目标区域内所有像素点的颜色值的平均值,将平均值对应的颜色作为第一目标对象的颜色。
其中,目标区域可根据实际应用需求预先设定,可将位于第一目标对象的任意位置的任意形状对应的区域作为目标区域。如,将第一目标对象中的预设面积值的矩形区域、梯形区域、圆形区域或不规则形状区域作为目标区域。目标区域中包含的像素点的数量小于或等于第一目标对象的像素点的数量。
示例性的,目标区域可为与第一目标对象的中心像素点最接近的预设数量的像素组成的范围,例如,可设定目标区域为与中心像素点最接近的8个像素点组成的正方形区域;目标区域还可为中心像素点及其横坐标方向上相邻的两个像素点、纵坐标方向上相邻的两个像素点,共5个像素点组成的区域。
例如,为了更准确、全面地反映目标区域的颜色,可将目标区域内所有像素点的颜色值的平均值作为第一目标对象的颜色值。例如,可确定与中心像素点最接近的8个像素点和中心像素点组成的正方形区域的颜色值的平均值,将该平均值确定为第一目标对象的颜色。
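以中心像素点邻域作为目标区域并取颜色平均值的做法,可示意如下。该 Python 草图假设使用 numpy,邻域半径为示例参数(半径 1 即中心像素点与其最接近的 8 个像素点组成的 3x3 正方形区域):

```python
import numpy as np

def region_mean_color(img, cy, cx, radius=1):
    """遍历目标区域(中心像素点及其邻域)内的像素,取颜色值的平均值
    作为第一目标对象的颜色;边界处自动截断。"""
    region = img[max(cy - radius, 0):cy + radius + 1,
                 max(cx - radius, 0):cx + radius + 1]
    return region.reshape(-1, img.shape[-1]).mean(axis=0)

img = np.zeros((5, 5, 3), dtype=np.uint8)
img[2, 2] = [90, 90, 90]                    # 中心像素比邻域亮
mean_color = region_mean_color(img, 2, 2)   # 3x3 区域共 9 个像素的平均值
```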
本实施例中,确定出第一目标对象的颜色后,可采用颜色替换或者颜色融合两种方式,将第一目标对象的颜色显示至第二目标对象的颜色上,两种显示方式的步骤如下:
当采用颜色替换方式时,一类实施为:将第二目标对象的颜色清除,按照第一目标对象的颜色,重新添加至第二目标对象中,以将第二目标对象的颜色替换为第一目标对象的颜色。
当采用颜色融合方式时,一类实施为:确定第一目标对象的第一颜色值,遍历第二目标对象中的各像素点,分别确定出各像素点的第二颜色值,将第一颜色值与第二颜色值进行融合,得到第二目标对象的该像素点的融合后的颜色值。例如,将第一颜色值与第二颜色值进行融合可为将第一颜色值与第二颜色值进行相乘计算、相加计算和乘加混合计算等。还可为第一颜色值和第二颜色值分别分配权重,通过加权求和或加权后求乘积的方式得到融合颜色值,通过调整第一颜色值和第二颜色值的权重,丰富显示至第二目标对象上的颜色效果。
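上述相乘计算、相加计算与加权求和等融合方式,可统一写成如下 Python 草图。这里假设颜色为 RGB 三元组、使用 numpy,模式名与权重参数均为示例;融合结果截断到 [0, 255] 区间:

```python
import numpy as np

def blend_colors(c1, c2, mode="weighted", w1=0.5):
    """将第一颜色值 c1 与第二颜色值 c2 融合:相乘、相加或加权求和。"""
    c1 = np.asarray(c1, dtype=float)
    c2 = np.asarray(c2, dtype=float)
    if mode == "multiply":
        out = c1 * c2 / 255.0            # 乘法混合,归一化回 0~255
    elif mode == "add":
        out = c1 + c2                    # 加法混合
    else:
        out = w1 * c1 + (1 - w1) * c2    # 加权求和,w1 为第一颜色的权重
    return np.clip(out, 0, 255).astype(np.uint8)

fused = blend_colors([200, 0, 0], [0, 100, 0], mode="weighted", w1=0.5)
```

通过调整 w1,即可像正文所述那样丰富显示至第二目标对象上的颜色效果。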
本实施例中,通过颜色替换或颜色融合两种显示颜色的方式,更加丰富了显示至第二目标对象上的颜色效果,提高了图像展示时的趣味性;并且,通过对第一目标对象进行分割,能够更准确、高效地确定出第一目标对象的颜色。
实施例三
图3为本公开实施例三所提供的一种图像处理方法的流程示意图;本实施例在本公开实施例中任一实施例的基础上,在将第一目标对象的颜色显示在第二目标对象上之后,还包括:获取预设的颜色调整灰度图,并将颜色调整灰度图显示至第二目标对象上。在一示例中,为使第二目标对象的显示更平滑,在将第一目标对象的颜色显示在第二目标对象上之后,对第二目标对象进行高斯模糊处理。在一示例中,为增强第二目标对象颜色展示效果,可为第二目标对象添加目标滤镜效果。
其中,与上述各实施例相同或相应的术语的解释在此不再赘述。
如图3所示,本实施例的方法可包括:
S310、响应于用于启用目标图像特效的特效触发操作,获取与目标图像特效对应的待处理图像。
S320、如果检测到待处理图像中包括第一目标对象和第二目标对象,则将第一目标对象的颜色显示在第二目标对象上。
S330、获取预设的颜色调整灰度图,并将颜色调整灰度图显示至第二目标对象上。
为丰富第二目标对象的展示效果,使图像更具层次感,可将颜色调整灰度图显示至第二目标对象上。其中,颜色调整灰度图中可包括两种或两种以上的灰度值,灰度值的取值范围为[0,255]。颜色调整灰度图可通过调整预先存储的黑色图片各部分的透光率得到。
示例性的,颜色调整灰度图可仅包括灰度值为255和灰度值为0两部分,将颜色调整灰度图分为黑和白两部分,分别将颜色调整灰度图中黑色部分与白色部分与第二目标对象的颜色进行叠加,使第二目标对象显示时更具有层次感。颜色调整灰度图中还可包括两种以上的灰度值,通过灰阶组成颜色调整灰度图。
本实施例中,可通过调整颜色调整灰度图中灰阶的展示形状,丰富第二目标对象展示时的层次分布。例如,颜色调整灰度图中各灰阶可呈波纹状、条形状、环形状等展示形式,通过不同形状的灰阶拼接得到颜色调整灰度图。
本实施例中,还可通过设置颜色调整灰度图中灰阶分布情况,丰富第二目标对象展示时的层次。例如,可根据灰阶的灰度值由大到小的顺序,依次进行拼接组成颜色调整灰度图,则将颜色调整灰度图叠加后,得到的第二目标对象的颜色的灰度值呈规律变化;也可按照灰度值“大小大小”的间隔顺序,将对应的灰阶进行拼接组成颜色调整灰度图,则将颜色调整灰度图叠加后,增强了第二目标对象的对比度。在一示例中,还可依据待处理图像的背景场景,确定灰阶分布情况,如待处理图像的背景场景为白天室外场景时,则可按照日光散落时的分布模式,调整颜色调整灰度图中灰阶分布;待处理图像的背景场景为灯光室内场景时,则可按照灯光照射时形成的光波模式,调整颜色调整灰度图中灰阶分布,以使第二目标对象在展示时更好地与拍摄场景融合。
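将颜色调整灰度图叠加到第二目标对象,可以理解为用灰度值对颜色逐像素加权:灰度 255 保持原色,灰度 0 变黑,从而形成明暗层次。以下 Python 草图假设使用 numpy,以左黑右白的条形灰度图为例,仅作示意:

```python
import numpy as np

def apply_gray_map(color_img, gray_map):
    """将颜色调整灰度图叠加到已着色的第二目标对象上:
    逐像素按灰度值(0~255)缩放颜色,使显示更具层次感。"""
    scale = gray_map.astype(float) / 255.0
    return (color_img.astype(float) * scale[..., None]).astype(np.uint8)

img = np.full((2, 4, 3), 200, dtype=np.uint8)   # 已显示第一目标对象颜色的区域
gray = np.zeros((2, 4), dtype=np.uint8)
gray[:, 2:] = 255                                # 左黑右白的条形颜色调整灰度图
out = apply_gray_map(img, gray)
```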
S340、对第二目标对象进行高斯模糊处理。
为使第二目标对象展示时的图像更平滑柔和,降低图像噪声,可对第二目标对象进行高斯模糊处理。例如,可通过将第二目标对象与高斯分布作卷积计算,得到第二目标对象进行高斯模糊处理后得到的图像。
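将图像与高斯分布作卷积的高斯模糊处理,可用可分离的一维卷积来示意:先按行、再按列分别与一维高斯核卷积。以下 Python 草图假设使用 numpy,核大小与标准差均为示例参数:

```python
import numpy as np

def gaussian_kernel(size=5, sigma=1.0):
    """生成归一化的一维高斯核,用于可分离卷积。"""
    x = np.arange(size) - size // 2
    k = np.exp(-x ** 2 / (2 * sigma ** 2))
    return k / k.sum()

def gaussian_blur(gray, size=5, sigma=1.0):
    """将灰度图与高斯分布作卷积(先水平后垂直),得到高斯模糊后的图像。"""
    k = gaussian_kernel(size, sigma)
    g = gray.astype(float)
    g = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, g)
    g = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, g)
    return g

img = np.zeros((9, 9))
img[4, 4] = 255.0                 # 单个亮点,用于观察模糊扩散
blurred = gaussian_blur(img)      # 亮点被摊开,总亮度不变
```

彩色图像可对各颜色通道分别做同样的处理;实际工程中通常直接调用图像库提供的高斯模糊接口。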
S350、为第二目标对象添加目标滤镜效果。
其中,滤镜用来实现图像的各种特殊效果,目标滤镜效果可为用户指定的滤镜效果。
例如,将第一目标对象的颜色显示至第二目标对象上后,可在应用显示界面生成图像修饰信息,以提供各种可作用于第二目标对象的特殊效果供用户选择,当检测到用于启用目标滤镜效果的目标滤镜操作时,可在各特殊效果中确定出与目标滤镜操作对应的特殊效果作为目标滤镜效果。
示例性的,目标滤镜效果可包括杂色滤镜、扭曲滤镜、抽出滤镜、渲染滤镜、液化滤镜和模糊滤镜中的至少一种。其中,杂色滤镜用于矫正图像处理过程中的瑕疵,如用于去斑、添加杂色等;扭曲滤镜用于对图像中的选择区域进行变形、扭曲,以创造出三维效果或其它整体变化;抽出滤镜用于抠图,例如可抽出烦杂背景中的散乱发丝等;渲染滤镜用于创建纹理填充以产生类似3维光照效果;液化滤镜可用于推、拉、旋转、反射、折叠和膨胀图像的任意区域,修饰图像的艺术效果;模糊滤镜可用于使图像中太清晰或对比度太强的区域产生模糊效果。
上文中对于图像处理方法对应的实施例进行了描述,下文中给出示例性的应用场景。
当用户通过视频拍摄应用拍摄趣味图像时,可点击视频拍摄应用的显示界面中显示的用于表示开始拍摄的“○”形状的标识,通过实时拍摄的形式,获取到用户提供的包含用户面部头像及背景的待处理图像;在应用显示界面的下方可展示出可用于对待处理图像进行图像处理的图像特效控件,用户通过点击屏幕、隔空摆手势或语音控制等方式,对喜欢的图像特效控件进行特效触发操作,以启用对应的特效。例如,特效可包括用于变换目标对象的颜色的A特效、用于对目标对象进行脸部变形的B特效、用于对待处理图像中的人脸进行性别转换的C特效,和用于对待处理图像中出现的人物进行年龄预测的D特效等,应用显示界面的下方可分别展示出A特效、B特效、C特效和D特效的名称标识控件和/或图像标识控件,当检测到其中的名称标识控件和/或图像标识控件被触发时,则启用相应的特效。
示例性的,目标图像特效的作用为可将待处理图像中第一目标对象的颜色叠加显示或替换显示至待处理图像中的第二目标对象上。当确定出用户选择目标图像特效时,可将目标图像特效确定为本实施例中提出的目标图像特效。在一示例中,可在显示页面中提示用户选择第一目标对象和第二目标对象,例如,用户通过点击待处理图像中的手持物的位置,选中手持物作为第一目标对象;通过点击待处理图像中的头发的位置,选中头发作为第二目标对象。
图4为本公开提供的一种待处理图像处理前后的效果对比示意图,其中,图4中左侧图像为手中未拿物品时的图像,图4中的右侧图像为用于表示当用户选中目标图像特效后,根据具有不同手持物的待处理图像进行图像处理后得到的效果图。
如图4所示,右侧图像为进行目标图像特效后得到的效果图像。图4中展示的手持物为抽纸袋,则可将抽纸袋作为第一目标对象,将待处理图像中的人物头发作为第二目标对象。在目标图像特效的作用下,通过手持物体识别算法,从待处理图像中分割出抽纸袋,并计算抽纸袋的中心坐标,确定出的抽纸袋的中心坐标位于深色的字体部分,则将中心坐标对应的字体颜色确定为抽纸袋的颜色。接着,通过头发分割算法,对图4中人物的头发部分进行分割,将手持物的颜色显示至人物头发上。最后,对显示手持物颜色后得到的头发图像进行颜色渐变、添加滤镜和高斯模糊处理,得到图4中的右侧图像。由图4可知,抽纸袋中包含多种颜色,字体和抽纸袋周边采用深色,空余部分采用白色,抽纸袋中字体颜色较深、较暗;而图4中人物头发颜色较浅、较亮。抽纸袋中字体颜色显示至人物头发上后,头发相应颜色变暗,亮度降低;由此可见,目标图像特效可使待处理图像中的对象之间发生颜色交互,提高了用户拍摄图像的趣味性和图像展示结果的多样性。
本实施例,通过对第二目标对象叠加颜色调整灰度图,丰富了第二目标对象展示时的层次分布;通过对第二目标对象进行高斯模糊处理,使第二目标对象展示时的图像更平滑柔和,降低图像噪声;通过对第二目标对象添加目标滤镜效果,使经过目标图像特效处理的图像更多变、有趣。
实施例四
图5为本公开实施例四所提供的一种图像处理装置的结构示意图,本实施例所提供的图像处理装置可以通过软件和/或硬件来实现,可配置于终端和/或服务器中来实现本公开实施例中的图像处理方法。该装置可包括:待处理图像获取模块510和颜色显示模块520;其中,
待处理图像获取模块510,设置为响应于用于启用目标图像特效的特效触发操作,获取与所述目标图像特效对应的待处理图像;
颜色显示模块520,设置为如果检测到所述待处理图像中包括第一目标对象和第二目标对象,则将所述第一目标对象的颜色显示在所述第二目标对象上。
在本公开实施例中任一实施例的基础上,所述待处理图像获取模块510,包括:
第一待处理图像获取单元,设置为获取用户上传的或以预设拍摄方式拍摄的用于应用所述目标图像特效的图像,作为与所述目标图像特效对应的待处理图像;或者,
第二待处理图像获取单元,设置为获取用户上传或实时录制的用于应用所述目标图像特效的视频中的每一帧图像,作为与所述目标图像特效对应的待处理图像;或者,
第三待处理图像获取单元,设置为获取用户上传或实时录制的应用所述目标图像特效的视频中包含有第一目标对象和/或第二目标对象的图像,作为与所述目标图像特效对应的待处理图像。
在本公开实施例中任一实施例的基础上,所述颜色显示模块520,包括:
颜色替换单元,设置为将所述第二目标对象的颜色替换为所述第一目标对象的颜色;或者,
颜色融合单元,设置为将所述第一目标对象的颜色融合至所述第二目标对象上的颜色上。
在本公开实施例中任一实施例的基础上,所述图像处理装置,还包括:
中心像素点确定模块,设置为确定所述第一目标对象的中心像素点,将所述中心像素点的颜色作为所述第一目标对象的颜色;或者,
平均值计算模块,设置为遍历读取所述第一目标对象中目标区域内每个像素点的颜色值,计算所述目标区域内所有像素点的颜色值的平均值,将所述平均值对应的颜色作为所述第一目标对象的颜色。
在本公开实施例中任一实施例的基础上,所述中心像素点确定模块,包括:
第一目标对象分割单元,设置为将所述第一目标对象从所述待处理图像中分割出来,得到目标分割图像,并根据所述目标分割图像确定所述第一目标对象的中心像素点。
在本公开实施例中任一实施例的基础上,所述图像处理装置,还包括:
颜色调整灰度图获取模块,设置为在所述将所述第一目标对象的颜色显示在所述第二目标对象上之后,获取预设的颜色调整灰度图,并将所述颜色调整灰度图显示至所述第二目标对象上。
在本公开实施例中任一实施例的基础上,所述图像处理装置,还包括:
第二目标对象高斯模糊处理模块,设置为在所述将所述第一目标对象的颜色显示在所述第二目标对象上之后,对所述第二目标对象进行高斯模糊处理。
在本公开实施例中任一实施例的基础上,所述图像处理装置,还包括:
第二目标对象添加滤镜效果模块,设置为在所述将所述第一目标对象的颜色显示在所述第二目标对象上之后,为所述第二目标对象添加目标滤镜效果。
在本公开实施例中任一实施例的基础上,所述第一目标对象包括所述待处理图像中的手持物体、显著物体以及属于第一预设主体类型的图像主体中的至少一种。
在本公开实施例中任一实施例的基础上,所述第二目标对象包括所述待处理图像中目标用户的身体部位、所述目标用户的穿戴物、所述待处理图像的背景区域以及属于第二预设主体类型的图像主体中的至少一种。
本实施例,在启用目标图像特效的特效触发操作被触发时,获取目标图像特效对应的待处理图像,当检测到待处理图像中包括第一目标对象和第二目标对象时,通过将第一目标对象的颜色显示在第二目标对象上,完成对第二目标对象的显示颜色的变换操作,避免了相关技术中图像展示方式枯燥、单一的情况,实现了基于目标图像特效,丰富目标对象的展示颜色的目的,提高了图像展示时的趣味性,满足用户的多样化需求。
上述装置可执行本公开任意实施例所提供的方法,具备执行方法相应的功能模块和有益效果。
值得注意的是,上述装置所包括的各个单元和模块只是按照功能逻辑进行划分的,但并不局限于上述的划分,只要能够实现相应的功能即可;另外,各功能单元的具体名称也只是为了便于相互区分,并不用于限制本公开实施例的保护范围。
实施例五
图6为本公开实施例五所提供的一种电子设备的结构示意图。下面参考图6,其示出了适于用来实现本公开实施例的电子设备(例如图6中的终端设备或服务器)600的结构示意图。本公开实施例中的终端设备可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、个人数字助理(Personal Digital Assistant,PDA)、平板电脑(PAD)、便携式多媒体播放器(Portable Media Player,PMP)、车载终端(例如车载导航终端)等等的移动终端以及诸如数字电视(Television,TV)、台式计算机等等的固定终端。图6示出的电子设备仅仅是一个示例,不应对本公开实施例的功能和使用范围带来任何限制。
如图6所示,电子设备600可以包括处理装置(例如中央处理器、图形处理器等)601,其可以根据存储在只读存储器(Read-Only Memory,ROM)602中的程序或者从存储装置608加载到随机访问存储器(Random Access Memory,RAM)603中的程序而执行各种适当的动作和处理。在RAM 603中,还存储有电子设备600操作所需的各种程序和数据。处理装置601、ROM 602以及RAM 603通过总线605彼此相连。输入/输出(Input/Output,I/O)接口604也连接至总线605。
通常,以下装置可以连接至I/O接口604:包括例如触摸屏、触摸板、键盘、鼠标、摄像头、麦克风、加速度计、陀螺仪等的输入装置606;包括例如液晶显示器(Liquid Crystal Display,LCD)、扬声器、振动器等的输出装置607;包括例如磁带、硬盘等的存储装置608;以及通信装置609。通信装置609可以允许电子设备600与其他设备进行无线或有线通信以交换数据。虽然图6示出了具有各种装置的电子设备600,但是应理解的是,并不要求实施或具备所有示出的装置。可以替代地实施或具备更多或更少的装置。
特别地,根据本公开的实施例,上文参考流程图描述的过程可以被实现为计算机软件程序。例如,本公开的实施例包括一种计算机程序产品,其包括承载在非暂态计算机可读介质上的计算机程序,该计算机程序包含用于执行流程图所示的方法的程序代码。在这样的实施例中,该计算机程序可以通过通信装置609从网络上被下载和安装,或者从存储装置608被安装,或者从ROM 602被安装。在该计算机程序被处理装置601执行时,执行本公开实施例的方法中限定的上述功能。
本公开实施方式中的多个装置之间所交互的消息或者信息的名称仅用于说明性的目的,而并不是用于对这些消息或信息的范围进行限制。
本公开实施例提供的电子设备与上述实施例提供的图像处理方法属于同一发明构思,未在本实施例中详尽描述的技术细节可参见上述实施例,并且本实施例与上述实施例具有相同的有益效果。
实施例六
本公开实施例提供了一种计算机存储介质,其上存储有计算机程序,该程序被处理器执行时实现上述实施例所提供的图像处理方法。
需要说明的是,本公开上述的计算机可读介质可以是计算机可读信号介质或者计算机可读存储介质或者是上述两者的任意组合。计算机可读存储介质例如可以是——但不限于——电、磁、光、电磁、红外线、或半导体的系统、装置或器件,或者任意以上的组合。计算机可读存储介质的例子可以包括但不限于:具有一个或多个导线的电连接、便携式计算机磁盘、硬盘、随机访问存储器(RAM)、只读存储器(ROM)、可擦式可编程只读存储器(Erasable Programmable Read-Only Memory,EPROM)或闪存、光纤、便携式紧凑磁盘只读存储器(Compact Disc Read-Only Memory,CD-ROM)、光存储器件、磁存储器件、或者上述的任意合适的组合。在本公开中,计算机可读存储介质可以是任何包含或存储程序的有形介质,该程序可以被指令执行系统、装置或者器件使用或者与其结合使用。而在本公开中,计算机可读信号介质可以包括在基带中或者作为载波一部分传播的数据信号,其中承载了计算机可读的程序代码。这种传播的数据信号可以采用多种形式,包括但不限于电磁信号、光信号或上述的任意合适的组合。计算机可读信号介质还可以是计算机可读存储介质以外的任何计算机可读介质,该计算机可读信号介质可以发送、传播或者传输用于由指令执行系统、装置或者器件使用或者与其结合使用的程序。计算机可读介质上包含的程序代码可以用任何适当的介质传输,包括但不限于:电线、光缆、射频(Radio Frequency,RF)等等,或者上述的任意合适的组合。
在一些实施方式中,客户端、服务器可以利用诸如HTTP(HyperText Transfer Protocol,超文本传输协议)之类的任何当前已知或未来研发的网络协议进行通信,并且可以与任意形式或介质的数字数据通信(例如,通信网络)互连。通信网络的示例包括局域网(Local Area Network,LAN),广域网(Wide Area Network,WAN),网际网(例如,互联网)以及端对端网络(例如,ad hoc端对端网络),以及任何当前已知或未来研发的网络。
上述计算机可读介质可以是上述电子设备中所包含的;也可以是单独存在,而未装配入该电子设备中。
上述计算机可读介质承载有一个或者多个程序,当上述一个或者多个程序被该电子设备执行时,使得该电子设备:
响应于用于启用目标图像特效的特效触发操作,获取与所述目标图像特效对应的待处理图像;
如果检测到所述待处理图像中包括第一目标对象和第二目标对象,则将所述第一目标对象的颜色显示在所述第二目标对象上。
存储介质可以是非暂态(non-transitory)存储介质。
可以以一种或多种程序设计语言或其组合来编写用于执行本公开的操作的计算机程序代码,上述程序设计语言包括但不限于面向对象的程序设计语言—诸如Java、Smalltalk、C++,还包括常规的过程式程序设计语言—诸如“C”语言或类似的程序设计语言。程序代码可以完全地在用户计算机上执行、部分地在用户计算机上执行、作为一个独立的软件包执行、部分在用户计算机上部分在远程计算机上执行、或者完全在远程计算机或服务器上执行。在涉及远程计算机的情形中,远程计算机可以通过任意种类的网络——包括局域网(LAN)或广域网(WAN)—连接到用户计算机,或者,可以连接到外部计算机(例如利用因特网服务提供商来通过因特网连接)。
附图中的流程图和框图,图示了按照本公开各种实施例的系统、方法和计算机程序产品的可能实现的体系架构、功能和操作。在这点上,流程图或框图中的每个方框可以代表一个模块、程序段、或代码的一部分,该模块、程序段、或代码的一部分包含一个或多个用于实现规定的逻辑功能的可执行指令。也应当注意,在有些作为替换的实现中,方框中所标注的功能也可以以不同于附图中所标注的顺序发生。例如,两个接连地表示的方框实际上可以基本并行地执行,它们有时也可以按相反的顺序执行,这依所涉及的功能而定。也要注意的是,框图和/或流程图中的每个方框、以及框图和/或流程图中的方框的组合,可以用执行规定的功能或操作的专用的基于硬件的系统来实现,或者可以用专用硬件与计算机指令的组合来实现。
描述于本公开实施例中所涉及到的单元可以通过软件的方式实现,也可以通过硬件的方式来实现。其中,单元的名称在某种情况下并不构成对该单元本身的限定,例如,第一获取单元还可以被描述为“获取至少两个网际协议地址的单元”。
本文中以上描述的功能可以至少部分地由一个或多个硬件逻辑部件来执行。例如,非限制性地,可以使用的示范类型的硬件逻辑部件包括:现场可编程门阵列(Field Programmable Gate Array,FPGA)、专用集成电路(Application Specific Integrated Circuit,ASIC)、专用标准产品(Application Specific Standard Product,ASSP)、片上系统(System on Chip,SOC)、复杂可编程逻辑设备(Complex Programmable Logic Device,CPLD)等等。
在本公开的上下文中,机器可读介质可以是有形的介质,其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备,或者上述内容的任何合适组合。机器可读存储介质的示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM)或快闪存储器、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。
根据本公开的一个或多个实施例,【示例一】提供了一种图像处理方法,该方法包括:
响应于用于启用目标图像特效的特效触发操作,获取与所述目标图像特效对应的待处理图像;
如果检测到所述待处理图像中包括第一目标对象和第二目标对象,则将所述第一目标对象的颜色显示在所述第二目标对象上。
根据本公开的一个或多个实施例,【示例二】提供了一种图像处理方法,该方法,还包括:
获取用户上传的或以预设拍摄方式拍摄的用于应用所述目标图像特效的图像,作为与所述目标图像特效对应的待处理图像;或者,
获取用户上传或实时录制的用于应用所述目标图像特效的视频中的每一帧图像,作为与所述目标图像特效对应的待处理图像;或者,
获取用户上传或实时录制的应用所述目标图像特效的视频中包含有第一目标对象和/或第二目标对象的图像,作为与所述目标图像特效对应的待处理图像。
根据本公开的一个或多个实施例,【示例三】提供了一种图像处理方法,该方法,还包括:
将所述第二目标对象的颜色替换为所述第一目标对象的颜色;或者,将所述第一目标对象的颜色融合至所述第二目标对象上的颜色上。
根据本公开的一个或多个实施例,【示例四】提供了一种图像处理方法,该方法,还包括:
确定所述第一目标对象的中心像素点,将所述中心像素点的颜色作为所述第一目标对象的颜色;或者,
遍历读取所述第一目标对象中目标区域内每个像素点的颜色值,计算所述目标区域内所有像素点的颜色值的平均值,将所述平均值对应的颜色作为所述第一目标对象的颜色。
根据本公开的一个或多个实施例,【示例五】提供了一种图像处理方法,该方法,还包括:
将所述第一目标对象从所述待处理图像中分割出来,得到目标分割图像,并根据所述目标分割图像确定所述第一目标对象的中心像素点。
根据本公开的一个或多个实施例,【示例六】提供了一种图像处理方法,该方法,还包括:
获取预设的颜色调整灰度图,并将所述颜色调整灰度图显示至所述第二目标对象上。
根据本公开的一个或多个实施例,【示例七】提供了一种图像处理方法,该方法,还包括:
对所述第二目标对象进行高斯模糊处理。
根据本公开的一个或多个实施例,【示例八】提供了一种图像处理方法,该方法,还包括:
为所述第二目标对象添加目标滤镜效果。
根据本公开的一个或多个实施例,【示例九】提供了一种图像处理方法,该方法,还包括:
所述第一目标对象包括所述待处理图像中的手持物体、显著物体以及属于第一预设主体类型的图像主体中的至少一种。
根据本公开的一个或多个实施例,【示例十】提供了一种图像处理方法,该方法,还包括:
所述第二目标对象包括所述待处理图像中目标用户的身体部位、所述目标用户的穿戴物、所述待处理图像的背景区域以及属于第二预设主体类型的图像主体中的至少一种。
根据本公开的一个或多个实施例,【示例十一】提供了一种图像处理装置,该装置,包括:
待处理图像获取模块,设置为响应于用于启用目标图像特效的特效触发操作,获取与所述目标图像特效对应的待处理图像;
颜色显示模块,设置为如果检测到所述待处理图像中包括第一目标对象和第二目标对象,则将所述第一目标对象的颜色显示在所述第二目标对象上。
本领域技术人员应当理解,本公开中所涉及的公开范围,并不限于上述技术特征的特定组合而成的实施例,同时也应涵盖在不脱离上述公开构思的情况下,由上述技术特征或其等同特征进行任意组合而形成的其它实施例。例如上述特征与本公开中公开的(但不限于)具有类似功能的技术特征进行互相替换而形成的实施例。
此外,虽然采用特定次序描绘了各操作,但是这不应当理解为要求这些操作以所示出的特定次序或以顺序次序执行。在一定环境下,多任务和并行处理可能是有利的。同样地,虽然在上面论述中包含了若干具体实现细节,但是这些不应当被解释为对本公开的范围的限制。在单独的实施例的上下文中描述的某些特征还可以组合地实现在单个实施例中。相反地,在单个实施例的上下文中描述的各种特征也可以单独地或以任何合适的子组合的方式实现在多个实施例中。
尽管已经采用特定于结构特征和/或方法逻辑动作的语言描述了本主题,但是应当理解所附权利要求书中所限定的主题未必局限于上面描述的特定特征或动作。相反,上面所描述的特定特征和动作仅仅是实现权利要求书的示例形式。

Claims (13)

  1. 一种图像处理方法,包括:
    响应于用于启用目标图像特效的特效触发操作,获取与所述目标图像特效对应的待处理图像;
    响应于检测到所述待处理图像中包括第一目标对象和第二目标对象,将所述第一目标对象的颜色显示在所述第二目标对象上。
  2. 根据权利要求1所述的方法,其中,所述获取与所述目标图像特效对应的待处理图像,包括:
    获取用户上传的或以预设拍摄方式拍摄的用于应用所述目标图像特效的图像,作为与所述目标图像特效对应的待处理图像;或者,
    获取用户上传或实时录制的用于应用所述目标图像特效的视频中的每一帧图像,作为与所述目标图像特效对应的待处理图像;或者,
    获取用户上传或实时录制的应用所述目标图像特效的视频中包含有第一目标对象或第二目标对象中的至少一个的图像,作为与所述目标图像特效对应的待处理图像。
  3. 根据权利要求1所述的方法,其中,所述将所述第一目标对象的颜色显示在所述第二目标对象上,包括:
    将所述第二目标对象的颜色替换为所述第一目标对象的颜色;或者,
    将所述第一目标对象的颜色融合至所述第二目标对象上的颜色上。
  4. 根据权利要求1或3所述的方法,还包括:
    确定所述第一目标对象的中心像素点,将所述中心像素点的颜色作为所述第一目标对象的颜色;或者,
    遍历读取所述第一目标对象中目标区域内每个像素点的颜色值,计算所述目标区域内所有像素点的颜色值的平均值,将所述平均值对应的颜色作为所述第一目标对象的颜色。
  5. 根据权利要求4所述的方法,其中,所述确定所述第一目标对象的中心像素点,包括:
    将所述第一目标对象从所述待处理图像中分割出来,得到目标分割图像,并根据所述目标分割图像确定所述第一目标对象的中心像素点。
  6. 根据权利要求1所述的方法,在所述将所述第一目标对象的颜色显示在所述第二目标对象上之后,还包括:
    获取预设的颜色调整灰度图,并将所述颜色调整灰度图显示至所述第二目标对象上。
  7. 根据权利要求1所述的方法,在所述将所述第一目标对象的颜色显示在所述第二目标对象上之后,还包括:
    对所述第二目标对象进行高斯模糊处理。
  8. 根据权利要求1所述的方法,在所述将所述第一目标对象的颜色显示在所述第二目标对象上之后,还包括:
    为所述第二目标对象添加目标滤镜效果。
  9. 根据权利要求1所述的方法,其中,所述第一目标对象包括所述待处理图像中的手持物体、显著物体以及属于第一预设主体类型的图像主体中的至少一种。
  10. 根据权利要求1所述的方法,其中,所述第二目标对象包括所述待处理图像中目标用户的身体部位、所述目标用户的穿戴物、所述待处理图像的背景区域以及属于第二预设主体类型的图像主体中的至少一种。
  11. 一种图像处理装置,包括:
    待处理图像获取模块,设置为响应于用于启用目标图像特效的特效触发操作,获取与所述目标图像特效对应的待处理图像;
    颜色显示模块,设置为响应于检测到所述待处理图像中包括第一目标对象和第二目标对象,将所述第一目标对象的颜色显示在所述第二目标对象上。
  12. 一种电子设备,包括:
    一个或多个处理器;
    存储装置,设置为存储一个或多个程序,
    当所述一个或多个程序被所述一个或多个处理器执行,使得所述一个或多个处理器实现如权利要求1-10中任一所述的图像处理方法。
  13. 一种计算机可读存储介质,所述计算机可读存储介质上存储有计算机程序,所述计算机程序被处理器执行时实现如权利要求1-10中任一所述的图像处理方法。
PCT/CN2023/079974 2022-04-12 2023-03-07 图像处理方法、装置、电子设备及存储介质 WO2023197780A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210382300.5A CN114758027A (zh) 2022-04-12 2022-04-12 图像处理方法、装置、电子设备及存储介质
CN202210382300.5 2022-04-12

Publications (1)

Publication Number Publication Date
WO2023197780A1 true WO2023197780A1 (zh) 2023-10-19

Family

ID=82330350

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/079974 WO2023197780A1 (zh) 2022-04-12 2023-03-07 图像处理方法、装置、电子设备及存储介质

Country Status (2)

Country Link
CN (1) CN114758027A (zh)
WO (1) WO2023197780A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114758027A (zh) * 2022-04-12 2022-07-15 北京字跳网络技术有限公司 图像处理方法、装置、电子设备及存储介质
CN115937010B (zh) * 2022-08-17 2023-10-27 北京字跳网络技术有限公司 一种图像处理方法、装置、设备及介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2020098409A (ja) * 2018-12-17 2020-06-25 ヤフー株式会社 画像処理装置、画像処理方法および画像処理プログラム
CN112258605A (zh) * 2020-10-16 2021-01-22 北京达佳互联信息技术有限公司 特效添加方法、装置、电子设备及存储介质
CN113240777A (zh) * 2021-04-25 2021-08-10 北京达佳互联信息技术有限公司 特效素材处理方法、装置、电子设备及存储介质
CN113918442A (zh) * 2020-07-10 2022-01-11 北京字节跳动网络技术有限公司 图像特效参数处理方法、设备和存储介质
CN114758027A (zh) * 2022-04-12 2022-07-15 北京字跳网络技术有限公司 图像处理方法、装置、电子设备及存储介质

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107862657A (zh) * 2017-10-31 2018-03-30 广东欧珀移动通信有限公司 图像处理方法、装置、计算机设备及计算机可读存储介质
CN108635859A (zh) * 2018-05-04 2018-10-12 网易(杭州)网络有限公司 用于图像染色的方法及装置、存储介质、电子设备
CN110084154B (zh) * 2019-04-12 2021-09-17 北京字节跳动网络技术有限公司 渲染图像的方法、装置、电子设备和计算机可读存储介质
CN112184540A (zh) * 2019-07-02 2021-01-05 北京小米移动软件有限公司 图像处理方法、装置、电子设备和存储介质
CN111935418B (zh) * 2020-08-18 2022-12-09 北京市商汤科技开发有限公司 视频处理方法及装置、电子设备和存储介质
CN112883821B (zh) * 2021-01-27 2024-02-20 维沃移动通信有限公司 图像处理方法、装置及电子设备


Also Published As

Publication number Publication date
CN114758027A (zh) 2022-07-15

Similar Documents

Publication Publication Date Title
US11678050B2 (en) Method and system for providing recommendation information related to photography
US10255681B2 (en) Image matting using deep learning
EP3879843A1 (en) Video processing method and apparatus, electronic device, and computer-readable medium
WO2023197780A1 (zh) 图像处理方法、装置、电子设备及存储介质
US9811933B2 (en) Image editing using selective editing tools
US8692830B2 (en) Automatic avatar creation
US20180276882A1 (en) Systems and methods for augmented reality art creation
CN111541907B (zh) 物品显示方法、装置、设备及存储介质
US20120327172A1 (en) Modifying video regions using mobile device input
US20230360184A1 (en) Image processing method and apparatus, and electronic device and computer-readable storage medium
US20220326823A1 (en) Method and apparatus for operating user interface, electronic device, and storage medium
KR20160142742A (ko) 메이크업 거울을 제공하는 디바이스 및 방법
CN103997687B (zh) 用于向视频增加交互特征的方法及装置
CN104508680B (zh) 改善之视讯追踪
CN108111911B (zh) 基于自适应跟踪框分割的视频数据实时处理方法及装置
WO2022042624A1 (zh) 信息显示方法、设备及存储介质
CN111491187A (zh) 视频的推荐方法、装置、设备及存储介质
CN112884908A (zh) 基于增强现实的显示方法、设备、存储介质及程序产品
JP2024506014A (ja) 動画生成方法、装置、機器及び可読記憶媒体
US20160140748A1 (en) Automated animation for presentation of images
CN112906553B (zh) 图像处理方法、装置、设备及介质
CN115967823A (zh) 视频封面生成方法、装置、电子设备及可读介质
CN113395456A (zh) 辅助拍摄方法、装置、电子设备及程序产品
CN112083863A (zh) 图像处理方法、装置、电子设备及可读存储介质
WO2023154544A1 (en) Interactively defining an object segmentation

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23787428

Country of ref document: EP

Kind code of ref document: A1