CN115797287A - Image processing method and device - Google Patents
- Publication number
- CN115797287A (application CN202211514705.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- pixel
- object region
- color
- processed
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The present disclosure relates to an image processing method and apparatus. The image processing method comprises the following steps: determining an object segmentation mask image corresponding to an object region according to the object region in an image to be processed, wherein the object segmentation mask image is an image obtained by matting out the regions of the image to be processed other than the object region; determining boundary pixels of the object region based on the object segmentation mask image; determining a filling color of the object region in the image to be processed based on the boundary pixels; and rendering the object region in the image to be processed based on the filling color to obtain a first target image. With the image processing method and apparatus, the colors of the boundary background pixels on either side of each pixel in the portrait can be found and used to predict the color after background filling, which improves the efficiency of background color filling.
Description
Technical Field
The present disclosure relates to the field of image and video technology. More particularly, the present disclosure relates to an image processing method and apparatus.
Background
Removing a person from an image and filling the removed region according to the surrounding image is a common technique in the field of image processing. Applied to the field of special effects, this technique can present the user with a virtual effect in which a person gradually disappears and blends into the background. A common matting approach fills the person region through a neural network according to semantic information of the background image. Such a method has extremely high computational complexity and is difficult to apply to real-time scenarios such as video special effects.
Disclosure of Invention
An exemplary embodiment of the present disclosure is to provide an image processing method and apparatus to at least solve a problem of low filling efficiency due to high computational complexity when background color filling is performed on an object region in the related art.
According to an exemplary embodiment of the present disclosure, there is provided an image processing method including: determining an object segmentation mask image corresponding to an object region according to the object region in an image to be processed, wherein the object segmentation mask image is an image obtained by matting regions except the object region from the image to be processed; determining boundary pixels of the object region based on the object segmentation mask image; determining a filling color of the object region in the image to be processed based on the boundary pixels; rendering the object area in the image to be processed based on the filling color to obtain a first target image.
Optionally, after the object region in the image to be processed is processed based on the filling color to obtain the first target image, the image processing method may further include: performing mean-blur sampling on the first target image according to the texture coordinates of the first target image to obtain a mean-blur sampling result; and processing the object region in the image to be processed based on the mean-blur sampling result to obtain a second target image.
Optionally, the performing mean-blur sampling on the first target image according to the texture coordinates of the first target image to obtain a mean-blur sampling result may include: performing texture coordinate offset processing on each pixel in the object region of the first target image, respectively, to obtain offset texture coordinates; and performing mean-blur sampling based on the offset texture coordinates of the first target image to obtain the mean-blur sampling result.
Optionally, the performing texture coordinate offset processing on each pixel in the object region of the first target image may include: determining a texture coordinate offset value of the corresponding pixel in the first target image based on the color, in the image to be processed, of each pixel in the object region, respectively; and performing texture coordinate offset processing on the corresponding pixel in the first target image based on the texture coordinate offset value of the corresponding pixel in the first target image.
Optionally, the determining the texture coordinate offset value of the corresponding pixel in the first target image based on the color of each pixel in the object region in the image to be processed, respectively, may include performing the following operations for each pixel in the object region, respectively: determining the red channel value and the green channel value of the color, in the image to be processed, of the pixel in the object region; and determining a texture coordinate offset value of the pixel corresponding to that pixel in the first target image based on the red channel value and the green channel value, to obtain the texture coordinate offset value of the corresponding pixel in the first target image.
Optionally, the determining, based on the boundary pixel, a filling color of the object region in the image to be processed may include: determining a color of a boundary pixel corresponding to each pixel in the object region, respectively; and determining the filling color of each pixel in the object region based on the color of the boundary pixel corresponding to each pixel in the object region.
Optionally, the determining the filling color of each pixel in the object region based on the color of the boundary pixel corresponding to each pixel in the object region may include: determining any pixel in the object region as a target pixel; determining a mixing proportion of colors of the boundary pixels corresponding to the target pixel to obtain a color mixing proportion corresponding to the target pixel; and mixing the colors of the boundary pixels corresponding to the target pixel based on the color mixing proportion corresponding to the target pixel to obtain the filling color of the target pixel.
According to an exemplary embodiment of the present disclosure, there is provided an image processing apparatus including: the image segmentation unit is configured to determine an object segmentation mask image corresponding to an object region according to the object region in an image to be processed, wherein the object segmentation mask image is an image obtained by cutting out a region except the object region from the image to be processed; a boundary determining unit configured to determine boundary pixels of the object region based on the object segmentation mask image; a color determination unit configured to determine a filling color of the object region in the image to be processed based on the boundary pixels; and the image processing unit is configured to process the object area in the image to be processed based on the filling color to obtain a first target image.
Optionally, the image processing apparatus may further include a further processing unit configured to: perform mean-blur sampling on the first target image according to the texture coordinates of the first target image to obtain a mean-blur sampling result; and process the object region in the image to be processed based on the mean-blur sampling result to obtain a second target image.
Optionally, the further processing unit may be configured to: perform texture coordinate offset processing on each pixel in the object region of the first target image, respectively, to obtain offset texture coordinates; and perform mean-blur sampling based on the offset texture coordinates of the first target image to obtain the mean-blur sampling result.
Optionally, the further processing unit may be configured to: determine texture coordinate offset values of the corresponding pixels in the first target image based on the color, in the image to be processed, of each pixel in the object region, respectively; and perform texture coordinate offset processing on the corresponding pixels in the first target image based on their texture coordinate offset values.
Optionally, the further processing unit may be configured to perform the following operations for each pixel in the object region, respectively: determine the red channel value and the green channel value of the color of the pixel in the image to be processed; and determine, based on the red channel value and the green channel value, the texture coordinate offset value of the pixel corresponding to that pixel in the first target image.
Optionally, the color determination unit may be configured to: determine the color of the boundary pixels corresponding to each pixel in the object region, respectively; and determine the filling color of each pixel in the object region based on the colors of the boundary pixels corresponding to that pixel.
Optionally, the color determination unit may be configured to: determine any pixel in the object region as a target pixel; determine a mixing proportion of the colors of the boundary pixels corresponding to the target pixel to obtain a color mixing proportion corresponding to the target pixel; and mix the colors of the boundary pixels corresponding to the target pixel based on that color mixing proportion to obtain the filling color of the target pixel.
According to an exemplary embodiment of the present disclosure, there is provided an electronic device including: a processor; a memory for storing the processor-executable instructions; wherein the processor is configured to execute the instructions to implement an image processing method according to an exemplary embodiment of the present disclosure.
According to an exemplary embodiment of the present disclosure, there is provided a computer-readable storage medium having stored thereon a computer program which, when executed by a processor of an electronic device, causes the electronic device to execute an image processing method according to an exemplary embodiment of the present disclosure.
According to an exemplary embodiment of the present disclosure, a computer program product is provided, comprising computer programs/instructions which, when executed by a processor, implement an image processing method according to an exemplary embodiment of the present disclosure.
The technical scheme provided by the embodiment of the disclosure at least brings the following beneficial effects:
for each pixel in the portrait, the colors of the nearest background pixels on both sides are found first and used to predict the color after background filling, which improves the efficiency of background color filling;
and texture coordinates are shifted according to the colors of the portrait and combined with a mean blur to create a transparent-portrait effect, improving its visual quality.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure and are not to be construed as limiting the disclosure.
Fig. 1 illustrates an exemplary system architecture to which exemplary embodiments of the present disclosure may be applied.
Fig. 2 illustrates a flowchart of an image processing method according to an exemplary embodiment of the present disclosure.
Fig. 3 illustrates an example of an image to be processed according to an exemplary embodiment of the present disclosure.
Fig. 4 illustrates an example of an object segmentation mask image according to an exemplary embodiment of the present disclosure.
Fig. 5 illustrates an example of a color filling result according to an exemplary embodiment of the present disclosure.
Fig. 6 illustrates a flowchart of an image processing method according to another exemplary embodiment of the present disclosure.
Fig. 7 illustrates an example of a processing result according to an exemplary embodiment of the present disclosure.
Fig. 8 illustrates a block diagram of an image processing apparatus according to an exemplary embodiment of the present disclosure.
Fig. 9 illustrates a block diagram of an image processing apparatus according to another exemplary embodiment of the present disclosure.
Fig. 10 is a block diagram of an electronic device 1000 according to an example embodiment of the present disclosure.
Detailed Description
In order to make the technical solutions of the present disclosure better understood by those of ordinary skill in the art, the technical solutions in the embodiments of the present disclosure will be clearly and completely described below with reference to the accompanying drawings.
It should be noted that the terms "first," "second," and the like in the description and claims of the present disclosure and in the above-described drawings are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the disclosure described herein are capable of operation in sequences other than those illustrated or otherwise described herein. The embodiments described in the following examples do not represent all embodiments consistent with the present disclosure. Rather, they are merely examples of apparatus and methods consistent with certain aspects of the disclosure, as detailed in the appended claims.
In the present disclosure, the phrase "at least one of" a plurality of items covers three parallel cases: "any one of the items", "a combination of any several of the items", and "all of the items". For example, "including at least one of A and B" covers the following three parallel cases: (1) including A; (2) including B; and (3) including A and B. For another example, "performing at least one of step one and step two" covers the following three parallel cases: (1) performing step one; (2) performing step two; and (3) performing step one and step two.
In the related art, portraits are filled by extracting image features through methods such as neural networks; such methods have high computational complexity and are difficult to apply to real-time scenarios such as video special effects.
In addition, in the related art, an image generated by matting out the portrait and then filling it has no stereoscopic effect; it is difficult to convey the uneven texture of the figure's surface, and the effect of turning the figure into transparent glass cannot be achieved.
According to the present scheme, semantic features of the image do not need to be extracted with complex methods such as neural networks; instead, the colors of the portrait region are drawn efficiently in parallel by a graphics processing unit (GPU), so rendering can run in real time and meet the requirements of application scenarios such as video special effects.
The present scheme simulates the concave-convex relief of the figure's surface according to the colors of the figure in the image, so that the drawn transparent figure has a frosted-glass texture and the transparency effect has a stereoscopic quality.
Hereinafter, an image processing method and apparatus according to an exemplary embodiment of the present disclosure will be described in detail with reference to fig. 1 to 10.
Fig. 1 illustrates an exemplary system architecture 100 in which exemplary embodiments of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 is the medium used to provide communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber optic cables. A user may use the terminal devices 101, 102, 103 to interact with the server 105 over the network 104 to receive or send messages (e.g., image upload requests, image processing requests). Various image applications, such as audio/video playing software, audio/video recording software, audio/video processing software, and audio/video editing software, may be installed on the terminal devices 101, 102, and 103. The terminal devices 101, 102, and 103 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices having a display screen and capable of playing, recording, and editing audio and video, including but not limited to smartphones, tablet computers, laptop portable computers, and desktop computers. When the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above and may be implemented either as multiple pieces of software or software modules (for example, to provide distributed services) or as a single piece of software or software module, which is not particularly limited herein.
The terminal devices 101, 102, 103 may be equipped with an image capture device (e.g., a camera) to capture video data. In practice, the smallest visual unit that makes up a video is a Frame (Frame). Each frame is a static image. Temporally successive sequences of frames are composited together to form a motion video. Further, the terminal apparatuses 101, 102, 103 may also be mounted with a component (e.g., a speaker) for converting an electric signal into sound to play the sound, and may also be mounted with a device (e.g., a microphone) for converting an analog audio signal into a digital audio signal to pick up the sound.
The server 105 may be a server providing various services, such as a background server providing support for applications installed on the terminal devices 101, 102, 103. The background server can analyze and store received data such as image processing requests, and can also receive image processing requests sent by the terminal devices 101, 102, 103 and feed the processing results back to the terminal devices 101, 102, 103.
The server may be hardware or software. When the server is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or as a single server. When the server is software, it may be implemented as multiple pieces of software or software modules (e.g., to provide distributed services), or as a single piece of software or software module, which is not particularly limited herein.
It should be noted that the image processing method provided by the embodiment of the present disclosure is generally executed by a terminal device, but may also be executed by a server, or may also be executed by cooperation of the terminal device and the server. Accordingly, the image processing apparatus may be provided in the terminal device, the server, or both the terminal device and the server.
It should be understood that the number of terminal devices, networks, and servers in fig. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers as desired for an implementation, and the disclosure is not limited thereto.
Fig. 2 illustrates a flowchart of an image processing method according to an exemplary embodiment of the present disclosure. Fig. 3 illustrates an example of an image to be processed according to an exemplary embodiment of the present disclosure. Fig. 4 illustrates an example of an object segmentation mask image according to an exemplary embodiment of the present disclosure. Fig. 5 illustrates an example of a color filling result according to an exemplary embodiment of the present disclosure.
Referring to fig. 2, in step S201, according to an object region in an image to be processed, an object segmentation mask image corresponding to the object region is determined. Here, the object segmentation mask image is an image obtained by matting a region other than the object region from the image to be processed.
Here, the object in the object region may be a human, an animal, a plant, or the like. For example, the object in the image to be processed of fig. 3 is a person. As shown in fig. 4, in the object segmentation mask image, the object region is normally displayed, and the region other than the object region is black. In the background image, the object area is black, and the area other than the object area is normally displayed.
In an exemplary embodiment of the present disclosure, any object segmentation algorithm (or image segmentation algorithm) may be used to segment an object region in an image to be processed, which is not limited by the present disclosure.
In step S202, boundary pixels of the object region are determined based on the object segmentation mask image. For example, boundary pixels on preset opposite sides of the object region are determined based on the object segmentation mask image.
In an exemplary embodiment of the present disclosure, after the object segmentation mask image is obtained, an object segmentation mask texture map may further be determined based on it. In the object segmentation mask texture map, the grayscale of each pixel represents the probability, predicted by the object segmentation algorithm, that the corresponding pixel position in the image to be processed belongs to the object. Boundary pixels of the object region (e.g., boundary pixels on two preset opposite sides of the object region) may be determined based on the object segmentation mask texture map.
When the object in the object region is a human, the object segmentation mask texture map is also referred to as a human segmentation mask texture map, and the object region is also referred to as a human image region.
As an example, when determining boundary pixels on two preset opposite sides of the portrait region based on the human segmentation mask texture map M, for each pixel P with a grayscale greater than 0 in the human segmentation mask texture map M, the leftmost pixel P_left and the rightmost pixel P_right belonging to the portrait region are searched for to the left and to the right, respectively. The specific steps are as follows:
2.1 Create an Open Graphics Library (OpenGL) shader (Shader 1) for rendering the estimated fill colors of the portrait region. In Shader 1, the image to be processed I uploaded by the user and the human segmentation mask map M are each sampled at the current texture coordinates.
2.2 When the sampling result obtained in step 2.1 is greater than 0, iteratively translate the texture coordinate P to the left and to the right, offsetting by a fixed step of 0.1 each time, and sample the human segmentation mask map M at the offset coordinate.
2.3 Repeat step 2.2 until the color value of the sampling result equals 0, yielding the leftmost pixel P_left and the rightmost pixel P_right of the current portrait region to within a precision of 0.1.
2.4 Starting from the leftmost and rightmost pixels obtained in step 2.3 at a precision of 0.1, continue to iteratively translate the texture coordinate P to the left and to the right, offsetting by a step s each time and sampling the human segmentation mask map M, where the initial value of the offset step s is 0.05.
2.5 When the sampling result obtained in step 2.4 is greater than 0, shift the texture coordinate P to the left or to the right by s; when the sampling result equals 0, do not shift the texture coordinate. In either case, reduce the offset step s to 0.5 times its previous value.
2.6 Repeat steps 2.4 and 2.5 until the offset step s is less than the precision 0.001. The texture coordinate P obtained at this point is the leftmost pixel P_left or the rightmost pixel P_right of the portrait. A shader sketch of this search is given below.
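The coarse-then-refined search in steps 2.1 to 2.6 can be expressed directly in a fragment shader. The following is a minimal GLSL sketch, not code from the patent: the names uMask, vUv, and searchBoundary are assumptions, and the loop bounds are derived from the stated step sizes (ten coarse 0.1 steps span the whole texture; six halvings take s = 0.05 below 0.001).
```glsl
#version 300 es
precision highp float;

uniform sampler2D uMask;   // human segmentation mask map M (assumed name)
in vec2 vUv;               // current texture coordinate P

// Search horizontally from uv in direction dir (-1.0 = left, +1.0 = right)
// for the outermost coordinate still inside the portrait region.
vec2 searchBoundary(vec2 uv, float dir) {
    vec2 p = uv;
    // Steps 2.2-2.3: coarse scan, stepping 0.1 while the mask samples above 0.
    for (int i = 0; i < 10; i++) {
        if (texture(uMask, p + vec2(dir * 0.1, 0.0)).r <= 0.0) break;
        p.x += dir * 0.1;
    }
    // Steps 2.4-2.6: refine with s = 0.05, halving s after each trial and
    // committing the shift only when the mask still samples above 0.
    float s = 0.05;
    for (int i = 0; i < 6; i++) {      // after six trials s drops below 0.001
        if (texture(uMask, p + vec2(dir * s, 0.0)).r > 0.0) {
            p.x += dir * s;
        }
        s *= 0.5;
    }
    return p;   // P_left when dir = -1.0, P_right when dir = +1.0
}
```
The main() that calls this helper appears after the blending formula below, completing the Shader 1 sketch.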
In step S203, based on the boundary pixels, a filling color of the object region in the image to be processed is determined.
In an exemplary embodiment of the present disclosure, in determining the filling color of the object region in the image to be processed based on the boundary pixels, the color of the boundary pixels corresponding to each pixel in the object region may be first determined, respectively, and then the filling color of each pixel in the object region may be determined based on the color of the boundary pixels corresponding to each pixel in the object region.
In an exemplary embodiment of the present disclosure, when determining the filling color of each pixel in the object region based on the colors of the corresponding boundary pixels, any pixel in the object region may first be determined as a target pixel; a mixing proportion of the colors of the boundary pixels corresponding to the target pixel is then determined to obtain a color mixing proportion corresponding to the target pixel; and the colors of the boundary pixels corresponding to the target pixel are mixed based on this color mixing proportion to obtain the filling color of the target pixel.
As an example, in Shader 1, the image to be processed I uploaded by the user is sampled at P_left and P_right, respectively, to obtain the color values I_left and I_right. The mixing ratio of the leftmost and rightmost pixel colors of the portrait region is r.
The leftmost and rightmost pixel colors of the portrait region are mixed according to this mixing ratio to obtain the filling color I_P at pixel P:
I_P = I_left · (1 − r) + I_right · r
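Completing the hypothetical Shader 1 sketch started above, the main() below samples I at both boundary coordinates and applies the blend I_P = I_left·(1−r) + I_right·r. Treating the mixing ratio r as the relative horizontal position of P between P_left and P_right is an assumption; the patent does not spell out how r is obtained.
```glsl
uniform sampler2D uImage;  // image to be processed I (assumed name)
out vec4 fragColor;

void main() {
    if (texture(uMask, vUv).r > 0.0) {          // pixel lies in the portrait region
        vec2 pLeft  = searchBoundary(vUv, -1.0);
        vec2 pRight = searchBoundary(vUv,  1.0);
        vec4 iLeft  = texture(uImage, pLeft);   // I_left
        vec4 iRight = texture(uImage, pRight);  // I_right
        // Assumed mixing ratio: how far P sits between the two boundaries.
        float r = clamp((vUv.x - pLeft.x) / max(pRight.x - pLeft.x, 1e-5), 0.0, 1.0);
        fragColor = mix(iLeft, iRight, r);      // I_P = I_left·(1-r) + I_right·r
    } else {
        fragColor = texture(uImage, vUv);       // background pixels pass through
    }
}
```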
In step S204, the object region in the image to be processed is processed based on the filling color, so as to obtain a first target image.
For example, Shader 1 is used to perform color filling (i.e., rendering) on the object region, and drawing yields the portrait-region filling result map O (e.g., the color filling result in fig. 5).
Fig. 6 illustrates a flowchart of an image processing method according to another exemplary embodiment of the present disclosure. Fig. 7 illustrates an example of a processing result according to an exemplary embodiment of the present disclosure.
Referring to fig. 6, in step S601, an object segmentation mask image corresponding to an object region in an image to be processed is determined according to the object region. Here, the object segmentation mask image is an image obtained by matting a region other than the object region from the image to be processed.
Here, the object in the object region may be a human, an animal, a plant, or the like. For example, the object in the image to be processed of fig. 3 is a person.
In an exemplary embodiment of the present disclosure, any object segmentation algorithm (or image segmentation algorithm) may be used to segment an object region in an image to be processed, which is not limited by the present disclosure.
In step S602, boundary pixels of the object region are determined based on the object segmentation mask image. For example, boundary pixels on preset opposite sides of the object region are determined based on the object segmentation mask image.
In an exemplary embodiment of the present disclosure, after the object segmentation mask image is obtained, an object segmentation mask texture map may further be determined based on it. In the object segmentation mask texture map, the grayscale of each pixel represents the probability, predicted by the object segmentation algorithm, that the corresponding pixel position in the image to be processed belongs to the object. Boundary pixels of the object region (e.g., boundary pixels on two preset opposite sides of the object region) may be determined based on the object segmentation mask texture map.
When the object in the object region is a human, the object segmentation mask texture map is also referred to as a human segmentation mask texture map, and the object region is also referred to as a human image region.
For example, when determining boundary pixels on two preset opposite sides of the portrait region based on the human segmentation mask texture map M, for each pixel P with a grayscale greater than 0 in the human segmentation mask texture map M, the leftmost pixel P_left and the rightmost pixel P_right belonging to the portrait region may be searched for to the left and to the right, respectively.
In step S603, based on the boundary pixels, a filling color of the object region in the image to be processed is determined.
In an exemplary embodiment of the present disclosure, in determining the filling color of the object region in the image to be processed based on the boundary pixels, the color of the boundary pixels corresponding to each pixel in the object region may be first determined, respectively, and then the filling color of each pixel in the object region may be determined based on the color of the boundary pixels corresponding to each pixel in the object region.
In an exemplary embodiment of the present disclosure, when determining the filling color of each pixel in the object region based on the colors of the corresponding boundary pixels, any pixel in the object region may first be determined as a target pixel; a mixing proportion of the colors of the boundary pixels corresponding to the target pixel is then determined to obtain a color mixing proportion corresponding to the target pixel; and the colors of the boundary pixels corresponding to the target pixel are mixed based on this color mixing proportion to obtain the filling color of the target pixel.
For example, in Shader 1, the image to be processed I uploaded by the user is sampled at P_left and P_right, respectively, to obtain the color values I_left and I_right. The mixing ratio of the leftmost and rightmost pixel colors of the portrait region is r.
The leftmost and rightmost pixel colors of the portrait region are mixed according to this mixing ratio to obtain the filling color I_P at pixel P:
I_P = I_left · (1 − r) + I_right · r
In step S604, the object region in the image to be processed is processed based on the filling color, so as to obtain a first target image.
For example, the object region is rendered using the above Shader 1, and the portrait-region filling result map O shown in fig. 5 is obtained by drawing.
In step S605, mean-blur sampling is performed on the first target image according to the texture coordinates of the first target image, to obtain a mean-blur sampling result.
In an exemplary embodiment of the present disclosure, when performing mean-blur sampling on the first target image according to its texture coordinates to obtain a mean-blur sampling result, texture coordinate offset processing may first be performed on each pixel in the object region of the first target image to obtain offset texture coordinates, and mean-blur sampling may then be performed based on the offset texture coordinates of the first target image to obtain the mean-blur sampling result. For example, when performing mean-blur sampling based on the offset texture coordinates of the first target image, the offset texture coordinates may be sampled at preset intervals to obtain interval sampling results, and the mean of the interval sampling results may then be computed to obtain the mean-blur sampling result.
In an exemplary embodiment of the present disclosure, when texture coordinate offset processing is performed on each pixel in the object region of the first target image, a texture coordinate offset value of the corresponding pixel in the first target image may be first determined based on a color of each pixel in the object region in the image to be processed, respectively, and then texture coordinate offset processing may be performed on the corresponding pixel in the first target image based on the texture coordinate offset value of the corresponding pixel in the first target image.
In an exemplary embodiment of the present disclosure, when determining the texture coordinate offset value of the corresponding pixel in the first target image based on the color of each pixel in the object region in the image to be processed, respectively, the following operations may be performed for each pixel in the object region, respectively: first, determine the red channel value and the green channel value of the color, in the image to be processed, of the pixel in the object region; then, determine the texture coordinate offset value of the pixel corresponding to that pixel in the first target image based on the red channel value and the green channel value, to obtain the texture coordinate offset value of the corresponding pixel in the first target image.
As an example, for the image to be processed I uploaded by the user, an OpenGL shader (Shader 2) is created for drawing the portrait transparency effect. In Shader 2, the image to be processed I uploaded by the user, the human segmentation mask map M, and the portrait-region filling result map O are each sampled at the current texture coordinates. For each pixel P with a grayscale greater than 0 in the human segmentation mask texture map M, a texture coordinate offset value for pixel P in the portrait-region filling result map O is computed from the color I_P of pixel P in the image to be processed I uploaded by the user: Δuv = 0.1 · (I_P.r, I_P.g).
As an example, after the texture coordinate of pixel P in the portrait-region filling result map O has been offset, samples are taken every 90 degrees around the offset coordinate, and the color mean O'_P is computed.
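Combining the color-driven offset Δuv = 0.1·(I_P.r, I_P.g) with the four samples taken every 90 degrees, a hypothetical Shader 2 might look as follows. The uniform names and the sampling radius rad are assumptions; the patent fixes only the offset formula and the 90-degree sampling pattern.
```glsl
#version 300 es
precision highp float;

uniform sampler2D uImage;  // image to be processed I (assumed name)
uniform sampler2D uMask;   // human segmentation mask map M (assumed name)
uniform sampler2D uFill;   // portrait-region filling result map O (assumed name)
in vec2 vUv;
out vec4 fragColor;

void main() {
    vec4 src = texture(uImage, vUv);
    if (texture(uMask, vUv).r > 0.0) {
        // Offset driven by the red and green channels of the original color,
        // simulating the uneven surface of the figure.
        vec2 uv = vUv + 0.1 * vec2(src.r, src.g);
        // Sample the fill map every 90 degrees around the offset coordinate
        // at a small radius (the radius value is an assumption) and average.
        float rad = 0.01;
        vec4 sum = vec4(0.0);
        for (int i = 0; i < 4; i++) {
            float a = float(i) * 1.5707963;     // 90 degrees in radians
            sum += texture(uFill, uv + rad * vec2(cos(a), sin(a)));
        }
        fragColor = 0.25 * sum;                 // mean-blur sample O'_P
    } else {
        fragColor = src;                        // keep the original background
    }
}
```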
In step S606, the object region in the image to be processed is processed based on the mean-blur sampling result, to obtain a second target image.
The sampled color mean O'_P above is used as the processing (i.e., rendering) result of Shader 2. Each pixel of the portrait region in the image to be processed uploaded by the user is processed (i.e., rendered) using Shader 2, and drawing yields the portrait-region transparency effect map (e.g., the processing result in fig. 7).
The image processing method according to the exemplary embodiment of the present disclosure has been described above in conjunction with fig. 1 to 7. Hereinafter, an image processing apparatus and units thereof according to an exemplary embodiment of the present disclosure will be described with reference to fig. 8 and 9.
Fig. 8 illustrates a block diagram of an image processing apparatus according to an exemplary embodiment of the present disclosure.
Referring to fig. 8, the image processing apparatus includes an image segmentation unit 81, a boundary determination unit 82, a color determination unit 83, and an image processing unit 84.
The image segmentation unit 81 is configured to determine an object segmentation mask image corresponding to an object region in an image to be processed, wherein the object segmentation mask image is an image obtained by matting regions other than the object region from the image to be processed.
The boundary determining unit 82 is configured to determine boundary pixels of the object region based on the object segmentation mask image.
The color determination unit 83 is configured to determine a fill color of the object region in the image to be processed based on the boundary pixels.
In an exemplary embodiment of the present disclosure, the color determination unit 83 may be configured to: determine the color of the boundary pixels corresponding to each pixel in the object region, respectively; and determine the filling color of each pixel in the object region based on the colors of the boundary pixels corresponding to that pixel.
In an exemplary embodiment of the present disclosure, the color determination unit 83 may be configured to: determine any pixel in the object region as a target pixel; determine a mixing proportion of the colors of the boundary pixels corresponding to the target pixel to obtain a color mixing proportion corresponding to the target pixel; and mix the colors of the boundary pixels corresponding to the target pixel based on that color mixing proportion to obtain the filling color of the target pixel.
The image processing unit 84 is configured to process the object region in the image to be processed based on the filling color, resulting in a first target image.
Fig. 9 illustrates a block diagram of an image processing apparatus according to another exemplary embodiment of the present disclosure.
Referring to fig. 9, the image processing apparatus includes an image segmentation unit 91, a boundary determination unit 92, a color determination unit 93, an image processing unit 94, and a further processing unit 95.
The image segmentation unit 91 is configured to determine an object segmentation mask image corresponding to an object region in an image to be processed according to the object region, wherein the object segmentation mask image is an image obtained by matting regions of the image to be processed except the object region.
The boundary determining unit 92 is configured to determine boundary pixels of the object region based on the object segmentation mask image.
The color determination unit 93 is configured to determine a fill color of the object region in the image to be processed based on the boundary pixels.
In an exemplary embodiment of the present disclosure, the color determination unit 93 may be configured to: determine the color of the boundary pixels corresponding to each pixel in the object region, respectively; and determine the filling color of each pixel in the object region based on the colors of the boundary pixels corresponding to that pixel.
In an exemplary embodiment of the present disclosure, the color determination unit 93 may be configured to: determine any pixel in the object region as a target pixel; determine a mixing proportion of the colors of the boundary pixels corresponding to the target pixel to obtain a color mixing proportion corresponding to the target pixel; and mix the colors of the boundary pixels corresponding to the target pixel based on that color mixing proportion to obtain the filling color of the target pixel.
The image processing unit 94 is configured to process the object region in the image to be processed based on the filling color, resulting in a first target image.
The further processing unit 95 is configured to perform mean-blur sampling on the first target image according to the texture coordinates of the first target image to obtain a mean-blur sampling result, and to process the object region in the image to be processed based on the mean-blur sampling result to obtain a second target image.
In an exemplary embodiment of the present disclosure, the further processing unit 95 may be configured to: perform texture coordinate offset processing on each pixel in the object region of the first target image, respectively, to obtain offset texture coordinates; and perform mean-blur sampling based on the offset texture coordinates of the first target image to obtain the mean-blur sampling result.
In an exemplary embodiment of the present disclosure, the further processing unit 95 may be configured to: determine texture coordinate offset values of the corresponding pixels in the first target image based on the color, in the image to be processed, of each pixel in the object region, respectively; and perform texture coordinate offset processing on the corresponding pixels in the first target image based on their texture coordinate offset values.
In an exemplary embodiment of the present disclosure, the further processing unit 95 may be configured to perform the following operations for each pixel in the object region, respectively: determine the red channel value and the green channel value of the color of the pixel in the image to be processed; and determine, based on the red channel value and the green channel value, the texture coordinate offset value of the pixel corresponding to that pixel in the first target image.
With regard to the apparatus in the above-described embodiment, the specific manner in which each unit performs the operation has been described in detail in the embodiment related to the method, and will not be described in detail here.
The image processing apparatus according to the exemplary embodiment of the present disclosure has been described above in conjunction with fig. 8 and 9. Next, an electronic apparatus according to an exemplary embodiment of the present disclosure is described with reference to fig. 10.
Fig. 10 is a block diagram of an electronic device 1000 according to an example embodiment of the present disclosure.
Referring to fig. 10, the electronic device 1000 includes at least one memory 1001 and at least one processor 1002, the at least one memory 1001 having stored therein a set of computer-executable instructions that, when executed by the at least one processor 1002, perform a method of image processing according to an example embodiment of the present disclosure.
In an exemplary embodiment of the present disclosure, the electronic device 1000 may be a PC, a tablet device, a personal digital assistant, a smartphone, or another device capable of executing the above set of instructions. The electronic device 1000 need not be a single electronic device, but can be any collection of devices or circuits capable of executing the above instructions (or instruction sets), individually or jointly. The electronic device 1000 may also be part of an integrated control system or system manager, or may be configured as a portable electronic device that interfaces with local or remote systems (e.g., via wireless transmission).
In the electronic device 1000, the processor 1002 may include a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a programmable logic device, a dedicated processor system, a microcontroller, or a microprocessor. By way of example, and not limitation, processors may also include analog processors, digital processors, microprocessors, multi-core processors, processor arrays, network processors, and the like.
The processor 1002 may execute instructions or code stored in the memory 1001, wherein the memory 1001 may also store data. The instructions and data may also be transmitted or received over a network via a network interface device, which may employ any known transmission protocol.
The memory 1001 may be integrated with the processor 1002, for example, by having RAM or flash memory disposed within an integrated circuit microprocessor or the like. Further, memory 1001 may include a stand-alone device, such as an external disk drive, storage array, or any other storage device usable by a database system. The memory 1001 and the processor 1002 may be operatively coupled or may communicate with each other, e.g., through I/O ports, network connections, etc., to enable the processor 1002 to read files stored in the memory.
In addition, the electronic device 1000 may also include a video display (such as a liquid crystal display) and a user interaction interface (such as a keyboard, mouse, touch input device, etc.). All components of the electronic device 1000 may be connected to each other via a bus and/or a network.
There is also provided, in accordance with an exemplary embodiment of the present disclosure, a computer-readable storage medium, such as a memory 1001, including instructions executable by a processor 1002 of a device 1000 to perform the above-described method. Alternatively, the computer readable storage medium may be a ROM, a Random Access Memory (RAM), a CD-ROM, a magnetic tape, a floppy disk, an optical data storage device, and the like.
According to an exemplary embodiment of the present disclosure, a computer program product may also be provided, which comprises computer programs/instructions, which when executed by a processor, implement the method of image processing according to an exemplary embodiment of the present disclosure.
The image processing method and apparatus according to the exemplary embodiment of the present disclosure have been described above with reference to fig. 1 to 10. However, it should be understood that: the image processing apparatus and units thereof shown in fig. 8 and 9 may be respectively configured as software, hardware, firmware, or any combination thereof to perform a specific function, the electronic device shown in fig. 10 is not limited to including the above-shown components, but some components may be added or deleted as needed, and the above components may also be combined.
According to the image processing method and apparatus of the present disclosure, an object segmentation mask image corresponding to an object region in an image to be processed is determined according to the object region, the object segmentation mask image being an image obtained by matting out the regions of the image to be processed other than the object region. Boundary pixels of the object region are determined based on the object segmentation mask image, the filling color of the object region in the image to be processed is determined based on the boundary pixels, and the object region in the image to be processed is rendered based on the filling color to obtain a first target image. By searching for the boundaries on opposite sides of each pixel of the portrait region and estimating the fill color after matting from the background color values at those boundaries, the efficiency of background color filling can be improved.
In addition, according to the image processing method and apparatus of the present disclosure, texture coordinates can be shifted according to the colors of the portrait and combined with a mean blur to create a transparent-portrait effect, adding a frosted-glass texture to the transparent portrait and thereby improving the image processing effect.
In addition, the image processing method of the present disclosure can be implemented in parallel on the GPU; the computation is efficient enough for real-time application scenarios, and the generated image is refined, with a frosted-glass stereoscopic texture.
Other embodiments of the disclosure will be apparent to those skilled in the art from consideration of the specification and practice of the disclosure disclosed herein. This application is intended to cover any variations, uses, or adaptations of the disclosure following, in general, the principles of the disclosure and including such departures from the present disclosure as come within known or customary practice within the art to which the disclosure pertains. It is intended that the specification and examples be considered as exemplary only, with a true scope and spirit of the disclosure being indicated by the following claims.
It will be understood that the present disclosure is not limited to the precise arrangements described above and shown in the drawings and that various modifications and changes may be made without departing from the scope thereof. The scope of the present disclosure is limited only by the appended claims.
Claims (10)
1. An image processing method, comprising:
determining an object segmentation mask image corresponding to an object region according to the object region in an image to be processed, wherein the object segmentation mask image is an image obtained by cutting out a region except the object region from the image to be processed;
determining boundary pixels of the object region based on the object segmentation mask image;
determining a filling color of the object region in the image to be processed based on the boundary pixels;
and processing the object region in the image to be processed based on the filling color to obtain a first target image.
2. The image processing method according to claim 1, wherein after the object region in the image to be processed is processed based on the filling color to obtain the first target image, the method further comprises:
performing mean-blur sampling on the first target image according to texture coordinates of the first target image to obtain a mean-blur sampling result;
and processing the object region in the image to be processed based on the mean-blur sampling result to obtain a second target image.
3. The image processing method according to claim 2, wherein the performing mean-blur sampling on the first target image according to the texture coordinates of the first target image to obtain a mean-blur sampling result comprises:
performing texture coordinate offset processing on each pixel in the object region of the first target image, respectively, to obtain offset texture coordinates of the first target image;
and performing mean-blur sampling based on the offset texture coordinates of the first target image to obtain the mean-blur sampling result.
4. The image processing method according to claim 3, wherein the performing texture coordinate offset processing on each pixel in the object region of the first target image comprises:
respectively determining texture coordinate offset values of corresponding pixels in the first target image based on the color of each pixel in the object region in the image to be processed;
and performing texture coordinate offset processing on the corresponding pixel in the first target image based on the texture coordinate offset value of the corresponding pixel in the first target image.
5. The method according to claim 4, wherein the determining the texture coordinate offset value for the corresponding pixel in the first target image based on the color of each pixel in the object region in the image to be processed, respectively, comprises performing the following for each pixel in the object region, respectively:
determining red channel values and green channel values of colors of pixels in the object region in the image to be processed;
and determining a texture coordinate offset value of a pixel corresponding to the pixel in the first target image based on the red channel value and the green channel value to obtain the texture coordinate offset value of the corresponding pixel in the first target image.
6. The method according to claim 1, wherein the determining the filling color of the object region in the image to be processed based on the boundary pixel comprises:
determining a color of a boundary pixel corresponding to each pixel in the object region, respectively;
and determining the filling color of each pixel in the object region based on the color of the boundary pixel corresponding to each pixel in the object region.
7. The method according to claim 6, wherein the determining the filling color of each pixel in the object region based on the color of the boundary pixel corresponding to each pixel in the object region comprises:
determining any pixel in the object region as a target pixel;
determining a mixing proportion of the colors of the boundary pixels corresponding to the target pixel to obtain a color mixing proportion corresponding to the target pixel;
and mixing the colors of the boundary pixels corresponding to the target pixel based on the color mixing proportion corresponding to the target pixel to obtain the filling color of the target pixel.
8. An image processing apparatus characterized by comprising:
the image segmentation unit is configured to determine an object segmentation mask image corresponding to an object region according to the object region in an image to be processed, wherein the object segmentation mask image is an image obtained by matting regions except the object region from the image to be processed;
a boundary determining unit configured to determine boundary pixels of the object region based on the object segmentation mask image;
a color determination unit configured to determine a filling color of the object region in the image to be processed based on the boundary pixels; and
and the image processing unit is configured to process the object area in the image to be processed based on the filling color to obtain a first target image.
9. An electronic device, comprising:
a processor;
a memory for storing the processor-executable instructions;
wherein the processor is configured to execute the instructions to implement the image processing method of any one of claims 1 to 7.
10. A computer-readable storage medium storing a computer program, which, when executed by a processor of an electronic device, causes the electronic device to perform the image processing method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211514705.6A CN115797287A (en) | 2022-11-29 | 2022-11-29 | Image processing method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202211514705.6A CN115797287A (en) | 2022-11-29 | 2022-11-29 | Image processing method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN115797287A (en) | 2023-03-14
Family
ID=85443309
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202211514705.6A Pending CN115797287A (en) | 2022-11-29 | 2022-11-29 | Image processing method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN115797287A (en) |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |