CN112215854B - Image processing method and device - Google Patents

Image processing method and device

Info

Publication number
CN112215854B
CN112215854B (application CN202011120091.4A)
Authority
CN
China
Prior art keywords
image
processed
images
target image
style
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011120091.4A
Other languages
Chinese (zh)
Other versions
CN112215854A (en)
Inventor
史少桦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhuhai Kingsoft Digital Network Technology Co Ltd
Original Assignee
Zhuhai Kingsoft Digital Network Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhuhai Kingsoft Digital Network Technology Co Ltd filed Critical Zhuhai Kingsoft Digital Network Technology Co Ltd
Priority to CN202011120091.4A priority Critical patent/CN112215854B/en
Publication of CN112215854A publication Critical patent/CN112215854A/en
Application granted granted Critical
Publication of CN112215854B publication Critical patent/CN112215854B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/04Context-preserving transformations, e.g. by using an importance map
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)

Abstract

The application relates to an image processing method and device. The image processing method comprises the following steps: dividing an original image to be processed to obtain at least two first images to be processed; adding a pixel extension area to each of the at least two first images to be processed to obtain at least two second images to be processed; performing style migration processing on each of the at least two second images to be processed to obtain at least two target image units; and merging the at least two target image units based on a preset merging rule to obtain a target image. The image processing method provided by the application achieves a better style migration effect on the image to be processed.

Description

Image processing method and device
Technical Field
The present application relates to the field of internet technologies, and in particular, to a method and apparatus for image processing, a computing device, and a computer readable storage medium.
Background
In the prior art, when the style of a picture is migrated, a picture with an established style is typically taken as a template, and another picture is stylized through that template so that a processed picture is output. This prior-art approach works for pictures of smaller size.
However, the picture material in current games is often relatively large; resolutions of 1024×1024 or more are commonly used in the industry. If such an image is processed directly, the machine cannot operate normally due to the limited performance of its graphics card. Meanwhile, in the prior art, a large image is segmented and the segments are recombined after style migration processing is performed on each of them, but a residual gap remains at the combined boundary, so that a complete image cannot be formed and the appearance is affected.
Disclosure of Invention
In view of the above, the present application provides an image processing method and apparatus, a computing device and a computer readable storage medium, so as to solve the technical drawbacks in the prior art.
Specifically, the application provides the following technical scheme:
the application provides an image processing method, which comprises the following steps:
Dividing an original image to be processed to obtain at least two first images to be processed;
Performing pixel expansion area increasing processing on each of at least two first images to be processed to obtain at least two second images to be processed;
performing style migration processing on each of at least two second images to be processed to obtain at least two target image units;
and merging the at least two target image units based on a preset merging rule to obtain a target image.
Optionally, for the image processing method, splitting the original image to be processed to obtain at least two first images to be processed, including:
And dividing the image to be processed based on a preset dividing rule to obtain at least two first images to be processed, wherein each first image to be processed is a rectangular image.
Optionally, for the image processing method, performing processing of increasing a pixel extension area on each of at least two first images to be processed to obtain at least two second images to be processed, including:
Expanding a preset number of pixels outwards based on the image edge of each of at least two first images to be processed, wherein the expanded pixels form the pixel expansion area;
And obtaining at least two second images to be processed based on the image content area corresponding to each of the at least two first images to be processed and the pixel extension area.
Optionally, for the image processing method, the performing style migration processing on each of the at least two second images to be processed to obtain at least two target image units includes:
Acquiring a template image, and performing style migration processing on each of at least two second images to be processed based on the style of the template image to generate at least two target image units;
Wherein each of at least two target image units comprises a region of image content after style migration and a region of pixel extension after style migration.
Optionally, for the image processing method, merging the at least two target image units based on a preset merging rule to obtain a target image, including:
arranging the at least two target image units;
Combining the borders of every two adjacent target image units based on the style-migrated image content areas in the target image units to obtain a pre-combined image;
and deleting the style-migrated pixel extension areas from the pre-combined image to obtain a combined target image.
Optionally, for the image processing method, arranging the at least two target image units includes:
And arranging the at least two target image units based on the arrangement order, in the original image to be processed, of the at least two first images to be processed corresponding to the style-migrated image content areas in the at least two target image units.
Optionally, with the image processing method, when the image is a four-channel image including three primary color channels and an alpha channel, the method further includes:
and splitting the four-channel image to obtain an original three-primary-color channel, and generating an original image to be processed based on the original three-primary-color channel.
Optionally, with the image processing method, after obtaining the target image, the method further includes:
Obtaining three primary color channels of the target image;
and generating an initial format image with four channels based on the four-channel image, and replacing the three primary color channels of the initial format image with the three primary color channels of the target image to obtain a final target image.
Optionally, for the image processing method, performing style migration processing on each of at least two second images to be processed based on the style of the template image, including:
And performing style migration processing on each of at least two second images to be processed based on the style of the template image by adopting a convolutional neural network.
The present application provides an image processing apparatus, the apparatus comprising:
the segmentation module is configured to segment the original image to be processed to obtain at least two first images to be processed;
An expansion module configured to perform pixel expansion area increasing processing on each of at least two first images to be processed to obtain at least two second images to be processed;
The style migration module is configured to perform style migration processing on each of at least two second images to be processed to obtain at least two target image units;
and the merging module is configured to merge the at least two target image units based on a preset merging rule to obtain a target image.
The application provides a computing device comprising a memory, a processor and computer instructions stored on the memory and executable on the processor, characterized in that the processor implements the steps of the aforementioned image processing method when executing the instructions.
The present application provides a computer readable storage medium storing computer instructions which, when executed by a processor, implement the steps of any one of the image processing methods described above.
The application provides an image processing method. A large original image to be processed is first divided into at least two first images to be processed, so that style migration processing is performed separately on small images instead of one large image; this reduces the demand on graphics card performance and makes the processing easier to run. Then a pixel extension area is added to each of the at least two first images to be processed to obtain at least two second images to be processed, and style migration processing is performed on each of the at least two second images to be processed to obtain at least two target image units; because the pixel extension areas are added and then style-migrated, redundant data is generated at the combined border, filling the gaps that would otherwise appear at the border when the divided images are recombined. Finally, the at least two target image units are merged based on a preset merging rule to obtain a target image with a better style migration effect.
Drawings
Fig. 1 is a flowchart of an image processing method according to a first embodiment of the present application;
fig. 2a is a schematic diagram of image segmentation in an image processing method according to a first embodiment of the present application;
fig. 2b is a schematic diagram of image segmentation in an image processing method according to the first embodiment of the present application;
Fig. 3 is a schematic diagram of a structure for increasing a pixel extension area in an image processing method according to a first embodiment of the present application;
fig. 4 is a schematic flow chart of style migration processing in an image processing method according to a first embodiment of the present application;
Fig. 5 is a schematic flow chart of merging target image units in an image processing method according to a first embodiment of the present application;
fig. 6 is a flowchart of an image processing method according to a second embodiment of the present application;
Fig. 7 is a flowchart of an example of image processing according to the second embodiment of the present application;
Fig. 8 is a schematic structural view of an image processing apparatus according to a third embodiment of the present application;
fig. 9 is a schematic structural diagram of a computing device according to a fourth embodiment of the present application.
Detailed Description
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present application. The present application may be embodied in many other forms than those herein described, and those skilled in the art will readily appreciate that the present application may be similarly embodied without departing from the spirit or essential characteristics thereof, and therefore the present application is not limited to the specific embodiments disclosed below.
The terminology used in the one or more embodiments of the specification is for the purpose of describing particular embodiments only and is not intended to be limiting of the one or more embodiments of the specification. As used in this specification, one or more embodiments and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It should also be understood that the term "and/or" as used in one or more embodiments of the present specification refers to and encompasses any or all possible combinations of one or more of the associated listed items.
The word "if" as used herein may be interpreted as "at … …" or "at … …" or "in response to a determination" depending on the context.
First, terms related to one or more embodiments of the present invention will be explained.
Style migration: refers to transferring the style of one picture onto another picture. Style migration works by fusing the content of the original image with the style of a reference image, so that the output image approaches the original image in content and approaches the reference image in style.
Pixel (pixel): the basic unit of image display. Throughout an image, a pixel can be seen as a small single-color cell that cannot be subdivided into smaller elements or units; the more pixels per unit area, the higher the resolution and the clearer the displayed image.
Template image: in the present application, it refers to images that are used as style templates in the style migration process.
Channel: refers to a constituent part of an image. Each color channel records the state of one color.
Three primary color channels: refers to 3 color channels of red (R), green (G), and blue (B).
Alpha channel (A channel): refers to the transparency or translucency of a picture. The alpha value generally lies between 0 and 1, where 0 (black) indicates full transparency, 1 (white) indicates opacity, and values between 0 and 1 indicate translucency.
Initial format image: in the present application, it refers to a picture generated based on four channels, including TGA format.
Convolutional neural network: one of the deep neural network architectures most effective at image tasks. A convolutional neural network is a feedforward neural network composed of multiple network layers, each containing a number of computational units (neurons) that process visual information. The computational units of each layer can be understood as a collection of image filters, and each layer extracts different specific features of a picture.
In the present application, an image processing method and apparatus, a computing device, and a computer-readable storage medium are provided, and detailed description is given in the following embodiments.
Example 1
The present embodiment provides an image processing method, referring to fig. 1, fig. 1 shows a flowchart of the image processing method provided in the present embodiment, including steps S101 to S104.
S101, dividing an original image to be processed to obtain at least two first images to be processed.
In the application, the original image to be processed is a still image, and the applicable formats comprise JPEG and JPG formats.
Further, in the present application, the splitting processing is performed on the original image to be processed to obtain at least two first images to be processed, including:
And dividing the image to be processed based on a preset dividing rule to obtain at least two first images to be processed, wherein each first image to be processed is a rectangular image.
Specifically, in the present application, the preset division rule includes: equally dividing the image to be processed into n rectangular images with the same size; or dividing the image to be processed into n rectangular images with different sizes.
Specifically, as shown in fig. 2a, the resolution of the image to be processed is 1024×1024, in fig. 2a, the image to be processed is equally divided into 4 small-size resolution images a1 to a4, a1 to a4 are the first images to be processed, and the resolution of each first image to be processed is 512×512.
As shown in fig. 2b, in the image to be processed (resolution 1024×1024), the main content of the image lies in its upper right (for example, fig. 2b is a landscape photograph in which the main scenery, such as trees, is concentrated in the upper-right region, while grass and the like are scattered over the remaining regions). To preserve the content information of the image as completely as possible, continuous parts of the content information are cut into the same region during segmentation. Therefore, in fig. 2b the original image to be processed is not split equally; the small-size images b1 to b4 are the first images to be processed, and the resolution of the b2 region is 758×768.
The above-mentioned fig. 2a and fig. 2b are both schematic schemes for dividing the original image to be processed according to the present application, and may be specific according to practical situations in the specific application process, for example, dividing into 2 parts, 4 parts, 8 parts, or the like, which is not limited in this aspect of the present application.
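For illustration, the following Python sketch is one minimal, hypothetical way to perform this segmentation step with Pillow; the file name, tile boxes, and 2×2 layout are editorial assumptions, not values prescribed by the application.

    # Minimal sketch of S101 (hypothetical helper, not from the patent):
    # crop an image into rectangular first images to be processed.
    from PIL import Image

    def split_image(image, boxes):
        """Return one sub-image per (left, upper, right, lower) box."""
        return [image.crop(box) for box in boxes]

    original = Image.open("original.jpg")          # e.g. a 1024x1024 picture
    half_w, half_h = original.width // 2, original.height // 2
    # Equal 2x2 split as in fig. 2a; unequal boxes (fig. 2b) work the same way.
    boxes = [
        (0, 0, half_w, half_h),                              # a1: top-left
        (half_w, 0, original.width, half_h),                 # a2: top-right
        (0, half_h, half_w, original.height),                # a3: bottom-left
        (half_w, half_h, original.width, original.height),   # a4: bottom-right
    ]
    first_images = split_image(original, boxes)

An unequal split such as the one in fig. 2b is obtained simply by passing different bounding boxes.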
S102, performing pixel expansion area increasing processing on each of at least two first images to be processed to obtain at least two second images to be processed.
In the present application, the pixel extension area refers to the area formed by the pixels added along the edge of an image on the basis of the existing image.
Further, performing processing of increasing a pixel extension area on each of at least two first images to be processed to obtain at least two second images to be processed, including:
Expanding a preset number of pixels outwards based on the image edge of each of at least two first images to be processed, wherein the expanded pixels form the pixel expansion area;
And obtaining at least two second images to be processed based on the image content area corresponding to each of the at least two first images to be processed and the pixel extension area.
Specifically, in the image processing method provided by the application, after an original image to be processed is segmented, n first images to be processed are obtained, a preset number of pixels are added to each first image to be processed outwards along the edge of the image, and all added pixels are used as pixel extension areas. And because the newly added pixels do not contain image content and are only blank pixels, each first image to be processed is taken as an image content area (namely, a non-blank pixel area), and the extended pixel area corresponding to each first image to be processed and the image content area form the second image to be processed together.
As shown in fig. 3, fig. 3 shows a schematic structural diagram of a second image to be processed provided by the present application. Wherein two regions are included in fig. 3: a T region (diagonal line region) representing an image content region (i.e., a first image to be processed) and a K region (dot region) representing a pixel extension region. The T region and the K region together constitute the second image to be processed.
Specifically, in fig. 3, the resolution of the first to-be-processed image in the T region is 256×256 (i.e., the length of the image is 256 pixels and the width of the image is 256 pixels), and the resolution of the second to-be-processed image is 266×266 (i.e., the length of the image is 266 pixels and the width of the image is 266 pixels) based on the first to-be-processed image, wherein the region (K region) formed by the added pixels in the second to-be-processed image is the pixel extension region.
Further, in the image processing method provided by the present application, the number of pixels added may be adjusted according to the result of the image processing, for example, 1, 10, 30, etc. to achieve the best effect, which is not limited by the present application.
Further, since the original image to be processed is divided into a plurality of first images to be processed, the number of pixels added for each of the first images to be processed may be the same or different, and adjustment and optimization may be performed according to the image processing result.
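Continuing the sketch above, the pixel-extension step can be illustrated as follows; the choice of 5 blank pixels per side is an assumption made so that a 256×256 tile becomes 266×266 as in the fig. 3 example, and is not a value fixed by the application.

    # Minimal sketch of S102 (hypothetical): add a blank pixel extension area
    # around each first image to be processed.
    from PIL import ImageOps

    def add_pixel_extension(tile, pad=5):
        """Pad the tile with `pad` blank pixels on every side;
        a 256x256 tile becomes 266x266 when pad=5, as in the fig. 3 example."""
        return ImageOps.expand(tile, border=pad, fill=0)

    second_images = [add_pixel_extension(t, pad=5) for t in first_images]

Because the padding is blank, the original tile remains the image content area and the added border forms the pixel extension area of the second image to be processed.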
S103, performing style migration processing on each of at least two second images to be processed to obtain at least two target image units.
The style migration processing refers to transferring the style of one image onto another image. For example, as shown in fig. 4, picture 1 is a real building image whose style is to be migrated and picture 2 is the target style image; style migration is performed on picture 1 based on the style of picture 2 to generate picture 3. As can be seen from fig. 4, picture 3 fuses the pattern style of picture 2 onto the building of the original picture 1, yielding the building image after style migration processing.
Further, in the image processing method provided by the present application, performing style migration processing on each of at least two second images to be processed to obtain at least two target image units, including:
Acquiring a template image, and performing style migration processing on each of at least two second images to be processed based on the style of the template image to generate at least two target image units;
Wherein each of at least two target image units comprises a region of image content after style migration and a region of pixel extension after style migration.
Specifically, in the image processing method provided by the application, a plurality of first images to be processed are generated on the basis of the original image to be processed, and a pixel extension area is added to each first image to be processed, so that a plurality of second images to be processed are obtained. A template image is then acquired, and style migration processing is performed on each second image to be processed based on the style of the template image. In this process, not only the image content area but also the pixel extension area undergoes style migration, finally yielding a plurality of target image units, each having a corresponding style-migrated image content area and a style-migrated pixel extension area.
Further, in the image processing method provided by the present application, performing style migration processing on each of at least two second images to be processed based on a style of the template image, including:
And performing style migration processing on each of at least two second images to be processed based on the style of the template image by adopting a convolutional neural network.
In particular, the convolutional neural network is one of the most efficient deep neural networks for processing image tasks. Convolutional neural networks are feedforward neural networks composed of a plurality of network layers, each of which contains a number of computational units (neurons) for processing visual information. The computing unit of each layer may be understood as a collection of picture filters, each layer may extract different specific features of a picture.
In the field of image style migration processing, VGG (Visual Geometry Group) type convolutional neural networks can be adopted for style migration, wherein the VGG type convolutional neural networks comprise VGG-11, VGG-13, VGG-16, VGG-19 and the like.
The following specifically describes the training process of the image migration neural network model by taking VGG-19 as an example:
1) The image is read.
2) Extracting features with VGG 19:
VGG19 can be divided into 5 blocks, each consisting of several convolutional layers followed by a pooling layer, and all 5 pooling layers use max pooling. The blocks differ only in the number of convolutional layers: the first block has 2 convolutional layers (conv1_1 and conv1_2), the second block also has 2, and the remaining 3 blocks have 4 each. The network ends with fully connected layers (FC1 and FC2) and a softmax layer for classification, but the final fully connected and softmax layers are not needed in the style migration task.
Two pictures are taken: one as the content input (the image to be style-migrated) and the other as the style input (the template image); each is passed through the 5 blocks of VGG19 to obtain its feature maps.
3) Modeling, calculating loss value (loss):
A VGG19 model is defined, in which, in order to meet the input requirements of VGG19, the input style picture and the content picture are required to be identical in size.
4) Training with gradient descent.
Specifically, in the image processing method provided by the application, the style migration processing is performed on each of the second images to be processed by using a trained style migration model (for example, VGG 19). The style migration model already has a certain specific image style, and can perform style migration based on the input image to generate a new image with the specific style.
In the application, each second image to be processed is subjected to style migration processing through the style migration model, and because the second image to be processed comprises an image content area and a pixel extension area, not only the image content area is subjected to style migration, but also the corresponding extension pixel area is subjected to style migration in the processing process of adopting the style migration model; and as the pixel expansion area is added, the image content area can learn more style characteristics based on the template image in the style migration processing process, and the style migration effect is improved.
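As a rough illustration of this step, the following sketch applies a Gatys-style optimization on top of torchvision's pretrained VGG-19 feature extractor (torchvision ≥ 0.13 assumed). The layer indices, loss weights, optimizer settings, step count, and the omission of ImageNet normalization are editorial assumptions; the application's trained style migration model is not reproduced here.

    # Minimal Gatys-style sketch of S103 using torchvision's VGG-19 features.
    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg19, VGG19_Weights

    def gram(feat):
        b, c, h, w = feat.shape
        f = feat.view(b, c, h * w)
        return f @ f.transpose(1, 2) / (c * h * w)

    def style_migrate(content, style, steps=300):
        """content, style: (1, 3, H, W) tensors in [0, 1], identical size."""
        features = vgg19(weights=VGG19_Weights.DEFAULT).features.eval()
        for p in features.parameters():
            p.requires_grad_(False)
        content_layers, style_layers = {21}, {0, 5, 10, 19, 28}  # assumed conv indices

        def extract(x):
            content_f, style_f = {}, {}
            for i, layer in enumerate(features):
                x = layer(x)
                if i in content_layers:
                    content_f[i] = x
                if i in style_layers:
                    style_f[i] = gram(x)
            return content_f, style_f

        target_c, _ = extract(content)
        _, target_s = extract(style)
        image = content.clone().requires_grad_(True)
        opt = torch.optim.Adam([image], lr=0.02)
        for _ in range(steps):
            opt.zero_grad()
            c_f, s_f = extract(image)
            loss = sum(F.mse_loss(c_f[i], target_c[i]) for i in content_layers)
            loss = loss + 1e4 * sum(F.mse_loss(s_f[i], target_s[i]) for i in style_layers)
            loss.backward()
            opt.step()
        return image.detach().clamp(0, 1)

In such a scheme, each second image to be processed would be supplied as the content input and the template image as the style input; because the input includes the pixel extension area, the border pixels also receive style context during the migration.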
S104, merging the at least two target image units based on a preset merging rule to obtain a target image.
As can be seen from the foregoing, according to the present application, a plurality of target image units are obtained by performing style migration processing on a plurality of second images to be processed; and combining the plurality of target image units based on a preset combining rule to obtain a target image. The target image is an image after style migration corresponding to the original image to be processed, and the resolution of the target image is the same as the size of the original image to be processed.
Specifically, in the image processing method provided by the present application, merging the at least two target image units based on a preset merging rule to obtain a target image, including:
arranging the at least two target image units;
Combining the borders of every two adjacent target image units based on the style-migrated image content areas in the target image units to obtain a pre-combined image;
and deleting the style-migrated pixel extension areas from the pre-combined image to obtain a combined target image.
Specifically, referring to fig. 5, a schematic diagram of merging two adjacent target image units is shown. A1 and A2 are two adjacent target image units obtained through style migration processing; in A1 and A2, the hatched middle part represents the style-migrated image content area and the surrounding blank frame represents the style-migrated pixel extension area, where a and b are the borders of the style-migrated image content areas in A1 and A2, respectively. When merging, the units are joined along the borders of their style-migrated image content areas, that is, along border a and border b. As shown in fig. 5, at the merged border the style-migrated pixel extension areas of A1 and A2 overlap, so redundant data is generated at the merged border from these pixel extension areas, and the pre-combined image is obtained.
Redundant data refers to repetition between data, that is, the same data being stored in more than one place.
Further, the style-migrated pixel extension areas remaining in the pre-combined image are deleted to obtain the target image. Specifically, because redundant data exists at the combined borders (a, b), after the style-migrated extension pixels are cut off the redundant data fills the gap that would otherwise exist at the combined border, finally giving a target image without merge gaps and with a better style migration effect.
Further, arranging the at least two target image units includes:
And arranging the at least two target image units based on the arrangement order, in the original image to be processed, of the at least two first images to be processed corresponding to the style-migrated image content areas in the at least two target image units.
Specifically, for example, in fig. 2a, the original image to be processed is divided into four first images to be processed a1-a4, and four target image units a1'-a4' are generated based on a1-a4, and in the process of merging, the a1'-a4' are arranged according to the position sequence of the corresponding a1-a4 in the original image to be processed.
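A minimal sketch of the merging step follows, continuing the names from the earlier sketches (target_units, boxes, pad, original, all hypothetical). It reflects one simple reading of the preset merging rule: each style-migrated unit is cropped back to its content area and pasted at the original position of its first image to be processed.

    # Minimal sketch of S104 (hypothetical): crop the style-migrated pixel
    # extension area off each target image unit and paste its content area
    # back at the original position in a full-size canvas.
    from PIL import Image

    def merge_units(target_units, boxes, canvas_size, pad=5):
        """target_units: style-migrated tiles (content + extension);
        boxes: the original (left, upper, right, lower) crop boxes."""
        target = Image.new("RGB", canvas_size)
        for unit, (left, upper, right, lower) in zip(target_units, boxes):
            content = unit.crop((pad, pad, unit.width - pad, unit.height - pad))
            target.paste(content, (left, upper))
        return target

    target_image = merge_units(target_units, boxes, original.size, pad=5)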
Through the above steps S101 to S104, the application provides an image processing method. The large original image to be processed is first divided into at least two first images to be processed, so that style migration processing is performed separately on small images instead of one large image; this reduces the demand on graphics card performance and makes the processing easier to run. Then a pixel extension area is added to each of the at least two first images to be processed to obtain at least two second images to be processed, and style migration processing is performed on each of the at least two second images to be processed to obtain at least two target image units; because the pixel extension areas are added and then style-migrated, redundant data is generated at the combined border, filling the gaps that would otherwise appear at the border when the divided images are recombined. Finally, the at least two target image units are merged based on a preset merging rule to obtain a target image with a better style migration effect.
Example 2
The present embodiment provides an image processing method, referring to fig. 6, and fig. 6 shows a flowchart of the image processing method provided in the present embodiment, including steps S601 to S607.
S601, when the image is a four-channel image comprising three primary color channels and an alpha channel, splitting the four-channel image to obtain an original three primary color channel, and generating an original image to be processed based on the original three primary color channel.
Specifically, the format of the original image to be processed adopted in the image processing method provided by the application is required to be a picture with three primary color channels.
Wherein, the three primary color channels: refers to 3 color channels of red (R), green (G), and blue (B).
Alpha channel (abbreviated as the A channel): refers to the transparency or translucency of a picture. The alpha value generally lies between 0 and 1, where 0 (black) indicates full transparency, 1 (white) indicates opacity, and values between 0 and 1 indicate translucency.
Since game pictures usually carry an Alpha channel, and style migration processing cannot be performed directly on a four-channel picture with an Alpha channel, the channels must be split first.
Specifically, in this case, the image processing method provided by the present application includes the steps of:
firstly, splitting based on an original four-channel image to obtain an original three-primary-color channel, namely an RGB three-primary-color channel;
And then generating an original image to be processed in a target format based on the split original RGB three-primary-color channel, wherein the target format comprises JPEG and JPG formats.
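A minimal sketch of this channel-splitting step, assuming Pillow; the file names are illustrative only.

    # Minimal sketch of S601 (hypothetical): split the four-channel picture,
    # keep its RGB channels as the original image to be processed, and save
    # the alpha channel for the later recombination step.
    from PIL import Image

    rgba = Image.open("material.tga").convert("RGBA")
    r, g, b, a = rgba.split()
    original_to_process = Image.merge("RGB", (r, g, b))
    original_to_process.save("original.jpg", quality=95)   # JPG input for style migration
    a.save("alpha.png")                                     # alpha kept for S607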
S602, dividing the original image to be processed to obtain at least two first images to be processed.
Specifically, the splitting processing is performed on the original to-be-processed image to obtain at least two first to-be-processed images, including:
And dividing the image to be processed based on a preset dividing rule to obtain at least two first images to be processed, wherein each first image to be processed is a rectangular image.
S603, performing pixel expansion area increasing processing on each of at least two first images to be processed to obtain at least two second images to be processed.
Specifically, performing the pixel extension increasing process on each of the at least two first images to be processed to obtain at least two second images to be processed, including:
Expanding a preset number of pixels outwards based on the image edge of each of at least two first images to be processed, wherein the expanded pixels form the pixel expansion area;
And obtaining at least two second images to be processed based on the image content area corresponding to each of the at least two first images to be processed and the pixel extension area.
S604, performing style migration processing on each of at least two second images to be processed to obtain at least two target image units.
Specifically, performing style migration processing on each of at least two second images to be processed to obtain at least two target image units, including:
Acquiring a template image, and performing style migration processing on each of at least two second images to be processed based on the style of the template image to generate at least two target image units;
Wherein each of at least two target image units comprises a region of image content after style migration and a region of pixel extension after style migration.
Further, performing style migration processing on each of at least two second images to be processed based on the style of the template image, including:
And performing style migration processing on each of at least two second images to be processed based on the style of the template image by adopting a convolutional neural network.
S605, merging the at least two target image units based on a preset merging rule to obtain a target image.
Specifically, merging the at least two target image units based on a preset merging rule to obtain a target image, including:
arranging the at least two target image units;
Combining the borders of every two adjacent target image units based on the style-migrated image content areas in the target image units to obtain a pre-combined image;
and deleting the style-migrated pixel extension areas from the pre-combined image to obtain a combined target image.
Further, arranging the at least two target image units includes:
And arranging the at least two target image units based on the arrangement order, in the original image to be processed, of the at least two first images to be processed corresponding to the style-migrated image content areas in the at least two target image units.
The specific process of generating the target image based on the original image to be processed in steps S602 to S605 is described in detail in the foregoing embodiments, and thus will not be described herein.
S606, obtaining the three primary color channels of the target image.
Specifically, the three primary color channels are three primary color channels of the target image generated after the style migration processing, and are different from the original RGB three primary color channels.
S607, generating an initial format image with four channels based on the four-channel image, and replacing the three primary color channels of the initial format image with the three primary color channels of the target image to obtain a final target image.
Specifically, the initial format image generated based on the four-channel image includes: TGA format.
And then, replacing the three primary color channels of the original format image with the three primary color channels of the obtained target image to obtain a final target image corresponding to the style migration processing of the original four-channel image.
Further, by way of the following example, referring to fig. 7, fig. 7 shows a schematic flow chart of style migration processing for four-way images.
Firstly, splitting to obtain RGB three channels based on an original RGBA picture (four-channel picture), and generating a JPG format picture based on the split RGB three channels, wherein the JPG format picture is an original image to be processed; converting the original RGBA picture into a TGA format picture;
Then, performing image cutting, increasing an extended pixel area and style migration processing on the JPG format picture, and finally merging to obtain a JPG' format picture after style migration processing;
obtaining RGB three channels based on the obtained JPG' format picture;
And finally, replacing the RGB three channels in the TGA format picture with the RGB three channels obtained from the JPG' format picture, thereby obtaining the style-migrated TGA format picture, namely the final target image.
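A minimal sketch of this final recombination, continuing the hypothetical names from the sketch above; Pillow's TGA read/write support is assumed.

    # Minimal sketch of S606-S607 (hypothetical): take the RGB channels of the
    # style-migrated target image and recombine them with the saved alpha
    # channel into a four-channel TGA picture.
    from PIL import Image

    styled_rgb = Image.open("target.jpg").convert("RGB")   # the JPG' picture
    alpha = Image.open("alpha.png").convert("L")           # alpha saved in S601
    r, g, b = styled_rgb.split()
    final_target = Image.merge("RGBA", (r, g, b, alpha))
    final_target.save("final_target.tga")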
Through the above steps, the application provides an image processing method for a four-channel image: the three primary color channels are split out of the four-channel image and a three-channel image conforming to the processing format is regenerated; after a style-migrated target image is generated based on the three-channel image, its three primary color channels replace the three primary color channels of an initial-format image generated from the original four-channel image, giving the final target image. Style migration processing of a four-channel image can thus be realized. Meanwhile, the defects that the original image is too large and that gaps appear when the cut segments are recombined are overcome, improving the effect of the image style migration processing.
Example 3
The embodiment provides an image processing apparatus, referring to fig. 8, fig. 8 shows a structure diagram of the image processing apparatus provided by the present application, including the following modules:
The segmentation module 810 is configured to perform segmentation processing on the original to-be-processed image to obtain at least two first to-be-processed images;
an expansion module 820 configured to perform a pixel expansion area increasing process on each of the at least two first images to be processed to obtain at least two second images to be processed;
A style migration module 830 configured to perform style migration processing on each of the at least two second images to be processed to obtain at least two target image units;
The merging module 840 is configured to merge the at least two target image units based on a preset merging rule, so as to obtain a target image.
Specifically, the segmentation module 810 is further configured to: and dividing the image to be processed based on a preset dividing rule to obtain at least two first images to be processed, wherein each first image to be processed is a rectangular image.
Specifically, the expansion module 820 is further configured to:
Expanding a preset number of pixels outwards based on the image edge of each of at least two first images to be processed, wherein the expanded pixels form the pixel expansion area;
And obtaining at least two second images to be processed based on the image content area corresponding to each of the at least two first images to be processed and the pixel extension area.
Specifically, the style migration module 830 is further configured to:
Acquiring a template image, and performing style migration processing on each of at least two second images to be processed based on the style of the template image to generate at least two target image units;
Wherein each of at least two target image units comprises a region of image content after style migration and a region of pixel extension after style migration.
Specifically, the style migration module 830 is further configured to:
And performing style migration processing on each of at least two second images to be processed based on the style of the template image by adopting a convolutional neural network.
Specifically, the merging module 840 is further configured to:
arranging the at least two target image units;
Combining the borders of every two adjacent target image units based on the style-migrated image content areas in the target image units to obtain a pre-combined image;
and deleting the style-migrated pixel extension areas from the pre-combined image to obtain a combined target image.
Specifically, the merging module 840 is further configured to:
And arranging the at least two target image units based on the arrangement order, in the original image to be processed, of the at least two first images to be processed corresponding to the style-migrated image content areas in the at least two target image units.
Specifically, the image processing apparatus provided by the present application further includes a preprocessing module configured to:
When the image is a four-channel image comprising three primary color channels and an alpha channel, the original three primary color channels are obtained based on the four-channel image in a splitting mode, and an original image to be processed is generated based on the original three primary color channels.
Specifically, the image processing apparatus provided by the present application further includes a post-processing module configured to:
Obtaining three primary color channels of the target image;
and generating an initial format image with four channels based on the four-channel image, and replacing the three primary color channels of the initial format image with the three primary color channels of the target image to obtain a final target image.
The application provides an image processing device which can divide a large-size image into small-size images for style migration processing respectively, reduces the requirement on the performance of a machine display card and is easier to operate; and by adding the pixel extension area, the pixel extension area of style migration is obtained after style migration processing, redundant data is generated at the combined frame, gaps generated at the combined frame in the process of recombining the divided images are filled, and a target image with better style migration processing effect is obtained.
The above is a schematic scheme of an image processing apparatus of the present embodiment. It should be noted that, the technical solution of the apparatus and the technical solution of the method of the image processing apparatus belong to the same concept, and details of the technical solution of the image processing apparatus, which are not described in detail, can be referred to the description of the technical solution of the image processing method. The details of the image processing method are provided in the foregoing embodiments, and are not described herein.
Example 4
The present embodiment provides a computing device 900, as shown in FIG. 9.
Fig. 9 is a block diagram illustrating a configuration of a computing device 900 according to an embodiment of the present description. The components of computing device 900 include, but are not limited to, memory 910 and processor 920. Processor 920 is coupled to memory 910 via bus 930 with database 950 configured to hold data.
Computing device 900 also includes an access device 940, access device 940 enabling computing device 900 to communicate via one or more networks 960. Examples of such networks include the Public Switched Telephone Network (PSTN), a Local Area Network (LAN), a Wide Area Network (WAN), a Personal Area Network (PAN), or a combination of communication networks such as the internet. Access device 940 may include one or more of any type of network interface, wired or wireless (e.g., a Network Interface Card (NIC)), such as an IEEE802.11 Wireless Local Area Network (WLAN) wireless interface, a worldwide interoperability for microwave access (Wi-MAX) interface, an ethernet interface, a Universal Serial Bus (USB) interface, a cellular network interface, a bluetooth interface, a Near Field Communication (NFC) interface, and so forth.
In one embodiment of the present description, the above-described components of computing device 900 and other components not shown in FIG. 9 may also be connected to each other, for example, by a bus. It should be understood that the block diagram of the computing device illustrated in FIG. 9 is for exemplary purposes only and is not intended to limit the scope of the present description. Those skilled in the art may add or replace other components as desired.
Computing device 900 may be any type of stationary or mobile computing device including a mobile computer or mobile computing device (e.g., tablet, personal digital assistant, laptop, notebook, netbook, etc.), mobile phone (e.g., smart phone), wearable computing device (e.g., smart watch, smart glasses, etc.), or other type of mobile device, or a stationary computing device such as a desktop computer or PC. Computing device 900 may also be a mobile or stationary server.
The processor 920 may perform the steps in the image processing method provided in the foregoing embodiment. Specific steps are not described in this embodiment.
An embodiment of the application also provides a computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps in an image processing method as described above.
The above is an exemplary version of a computer-readable storage medium of the present embodiment. It should be noted that, the technical solution of the storage medium and the technical solution of the image processing method belong to the same concept, and details of the technical solution of the storage medium which are not described in detail can be referred to the description of the technical solution of the image processing method.
The computer instructions include computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content contained in the computer readable medium may be adjusted as required by legislation and patent practice in a given jurisdiction; for example, in some jurisdictions, in accordance with legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
It should be noted that, for the sake of simplicity of description, the foregoing method embodiments are all expressed as a series of combinations of actions, but it should be understood by those skilled in the art that the present application is not limited by the order of actions described, as some steps may be performed in other order or simultaneously in accordance with the present application. Further, those skilled in the art will appreciate that the embodiments described in the specification are all preferred embodiments, and that the acts and modules referred to are not necessarily all required for the present application.
In the foregoing embodiments, the descriptions of the embodiments are emphasized, and for parts of one embodiment that are not described in detail, reference may be made to the related descriptions of other embodiments.
The preferred embodiments of the application disclosed above are intended only to assist in the explanation of the application. Alternative embodiments are not intended to be exhaustive or to limit the application to the precise form disclosed. Obviously, many modifications and variations are possible in light of the above teaching. The embodiments were chosen and described in order to best explain the principles of the application and the practical application, to thereby enable others skilled in the art to best understand and utilize the application. The application is limited only by the claims and the full scope and equivalents thereof.

Claims (12)

1. An image processing method, comprising:
dividing an original image to be processed to obtain at least two first images to be processed, wherein continuous parts of the content information are cut into the same region during the dividing process;
performing pixel expansion area increasing processing on each of the at least two first images to be processed to obtain at least two second images to be processed, wherein the pixel expansion area is formed by blank pixels added along the edge of the image on the basis of the existing image, and the number of pixels added to each first image to be processed is adjusted and optimized according to the image processing result;
performing style migration processing on each of at least two second images to be processed to obtain at least two target image units;
and merging the at least two target image units based on a preset merging rule to obtain a target image.
2. The method according to claim 1, wherein the segmenting of the original image to be processed results in at least two first images to be processed, comprising:
and dividing the image to be processed based on a preset dividing rule to obtain at least two first images to be processed, wherein each first image to be processed is a rectangular image.
3. The method according to claim 1, wherein performing the pixel extension increase process on each of the at least two first images to be processed to obtain the at least two second images to be processed, comprises:
Expanding a preset number of pixels outwards based on the image edge of each of at least two first images to be processed, wherein the expanded pixels form the pixel expansion area;
And obtaining at least two second images to be processed based on the image content area corresponding to each of the at least two first images to be processed and the pixel extension area.
4. The method of claim 1, wherein performing style migration processing on each of the at least two second images to be processed to obtain at least two target image units comprises:
Acquiring a template image, and performing style migration processing on each of at least two second images to be processed based on the style of the template image to generate at least two target image units;
Wherein each of at least two target image units comprises a region of image content after style migration and a region of pixel extension after style migration.
5. The method of claim 4, wherein merging the at least two target image units based on a preset merging rule to obtain a target image, comprising:
arranging the at least two target image units;
Combining the borders of every two adjacent target image units based on the style-migrated image content areas in the target image units to obtain a pre-combined image;
and deleting the style-migrated pixel extension areas from the pre-combined image to obtain a combined target image.
6. The method of claim 5, wherein arranging the at least two target image units comprises:
And arranging the at least two target image units based on the arrangement order, in the original image to be processed, of the at least two first images to be processed corresponding to the style-migrated image content areas in the at least two target image units.
7. The method of claim 1, wherein when the image is a four-channel image comprising three primary color channels and an alpha channel, the method further comprises:
and splitting the four-channel image to obtain an original three-primary-color channel, and generating an original image to be processed based on the original three-primary-color channel.
8. The method of claim 7, wherein after obtaining the target image, the method further comprises:
Obtaining three primary color channels of the target image;
and generating an initial format image with four channels based on the four-channel image, and replacing the three primary color channels of the initial format image with the three primary color channels of the target image to obtain a final target image.
9. The method of claim 4, wherein performing the style migration processing on each of the at least two second images to be processed based on the style of the template image comprises:
performing the style migration processing on each of the at least two second images to be processed based on the style of the template image by using a convolutional neural network.
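The claim only requires "a convolutional neural network"; one well-known possibility is Gram-matrix style transfer on VGG-19 features (Gatys et al.). The sketch below assumes PyTorch/torchvision and operates on a single padded tile; the layer choices, weights, step count, and the omission of ImageNet normalization are illustrative simplifications, not taken from the patent.

```python
import torch
from torch import optim
from torch.nn.functional import mse_loss
from torchvision.models import vgg19, VGG19_Weights

def gram(feat):
    """Gram matrix of a (1, C, H, W) feature map, normalised by its size."""
    _, c, h, w = feat.shape
    f = feat.view(c, h * w)
    return (f @ f.t()) / (c * h * w)

def stylize_tile(content, template, steps=200, style_weight=1e5):
    """content, template: float tensors of shape (1, 3, H, W) in [0, 1]."""
    vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.eval()
    for p in vgg.parameters():
        p.requires_grad_(False)
    style_layers = {1, 6, 11, 20, 29}   # relu1_1 ... relu5_1 in torchvision's vgg19

    def features(x):
        feats = []
        for i, layer in enumerate(vgg):
            x = layer(x)
            if i in style_layers:
                feats.append(x)
        return feats

    target_grams = [gram(f) for f in features(template)]   # style targets from the template image
    content_feats = features(content)
    img = content.clone().requires_grad_(True)              # optimise the tile itself
    opt = optim.Adam([img], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        feats = features(img)
        loss = mse_loss(feats[2], content_feats[2])          # content term (relu3_1 here)
        loss = loss + style_weight * sum(mse_loss(gram(f), g)
                                         for f, g in zip(feats, target_grams))
        loss.backward()
        opt.step()
    return img.detach().clamp(0, 1)
```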
10. An image processing apparatus, characterized in that the apparatus comprises:
a segmentation module configured to segment an original image to be processed to obtain at least two first images to be processed, wherein, during segmentation, portions whose content information is continuous are cut into the same region;
an extension module configured to perform pixel-extension processing on each of the at least two first images to be processed to obtain at least two second images to be processed, wherein each pixel extension area consists of blank pixels added along the edges of the existing image, and the number of pixels added to a first image to be processed is adjusted and optimized according to the image processing result;
a style migration module configured to perform style migration processing on each of the at least two second images to be processed to obtain at least two target image units;
and a merging module configured to merge the at least two target image units based on a preset merging rule to obtain a target image.
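The apparatus mirrors the method; a minimal composition of the four modules might look like the following (all names hypothetical).

```python
class ImageProcessingApparatus:
    """Wires together the segmentation, extension, style migration and merging modules."""

    def __init__(self, segment, extend, migrate_style, merge):
        self.segment = segment              # original image -> list of first images
        self.extend = extend                # first image -> second image with pixel extension
        self.migrate_style = migrate_style  # second image -> target image unit
        self.merge = merge                  # list of target image units -> target image

    def process(self, original):
        firsts = self.segment(original)
        seconds = [self.extend(f) for f in firsts]
        units = [self.migrate_style(s) for s in seconds]
        return self.merge(units)
```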
11. A computing device comprising a memory, a processor and computer instructions stored on the memory and executable on the processor, wherein the processor, when executing the instructions, implements the steps of the image processing method of any of claims 1-9.
12. A computer-readable storage medium storing computer instructions which, when executed by a processor, implement the steps of the image processing method of any one of claims 1-9.
CN202011120091.4A 2020-10-19 2020-10-19 Image processing method and device Active CN112215854B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011120091.4A CN112215854B (en) 2020-10-19 2020-10-19 Image processing method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011120091.4A CN112215854B (en) 2020-10-19 2020-10-19 Image processing method and device

Publications (2)

Publication Number Publication Date
CN112215854A CN112215854A (en) 2021-01-12
CN112215854B (en) 2024-07-12

Family

ID=74055896

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011120091.4A Active CN112215854B (en) 2020-10-19 2020-10-19 Image processing method and device

Country Status (1)

Country Link
CN (1) CN112215854B (en)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113034348A (en) * 2021-03-24 2021-06-25 北京字节跳动网络技术有限公司 Image processing method, image processing apparatus, storage medium, and device
WO2023272432A1 (en) * 2021-06-28 2023-01-05 华为技术有限公司 Image processing method and image processing apparatus
CN113763233B (en) * 2021-08-04 2024-06-21 深圳盈天下视觉科技有限公司 Image processing method, server and photographing equipment
CN115272146B (en) * 2022-07-27 2023-04-07 天翼爱音乐文化科技有限公司 Stylized image generation method, system, device and medium
CN118071577A (en) * 2022-11-18 2024-05-24 北京字跳网络技术有限公司 Image generation method, device, electronic equipment and storage medium

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109859211A (en) * 2018-12-28 2019-06-07 努比亚技术有限公司 A kind of image processing method, mobile terminal and computer readable storage medium
CN111415299A (en) * 2020-03-26 2020-07-14 浙江科技学院 High-resolution image style migration method

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103632361B (en) * 2012-08-20 2017-01-18 阿里巴巴集团控股有限公司 An image segmentation method and a system
EP3413563A4 (en) * 2016-02-03 2019-10-23 Sharp Kabushiki Kaisha Moving image decoding device, moving image encoding device, and prediction image generation device
CN108156435B (en) * 2017-12-25 2020-03-13 Oppo广东移动通信有限公司 Image processing method and device, computer readable storage medium and computer device
CN111445387B (en) * 2020-06-16 2020-10-16 浙江科技学院 High-resolution image style migration method based on random rearrangement of image blocks
CN111652830A (en) * 2020-06-28 2020-09-11 Oppo广东移动通信有限公司 Image processing method and device, computer readable medium and terminal equipment

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109859211A (en) * 2018-12-28 2019-06-07 努比亚技术有限公司 A kind of image processing method, mobile terminal and computer readable storage medium
CN111415299A (en) * 2020-03-26 2020-07-14 浙江科技学院 High-resolution image style migration method

Also Published As

Publication number Publication date
CN112215854A (en) 2021-01-12

Similar Documents

Publication Publication Date Title
CN112215854B (en) Image processing method and device
US10672164B2 (en) Predicting patch displacement maps using a neural network
JP2023539691A (en) Human image restoration methods, devices, electronic devices, storage media, and program products
EP4092623A1 (en) Image processing method and apparatus, and device and storage medium
US20200302656A1 (en) Object-Based Color Adjustment
CN112733044B (en) Recommended image processing method, apparatus, device and computer-readable storage medium
CN111783735B (en) Steel document analytic system based on artificial intelligence
CN113905219B (en) Image processing apparatus and method, image processing system, control method, and medium
CN112949754B (en) Text recognition data synthesis method based on image fusion
WO2024131565A1 (en) Garment image extraction method and apparatus, and device, medium and product
CN109523558A (en) A kind of portrait dividing method and system
US11232607B2 (en) Adding color to digital images
CN113409411A (en) Rendering method and device of graphical interface, electronic equipment and storage medium
CN107533760A (en) A kind of image partition method and device
WO2023207454A1 (en) Image processing method, image processing apparatuses and readable storage medium
WO2020042467A1 (en) Image compression method, apparatus and device, and computer-readable storage medium
CN115544311A (en) Data analysis method and device
CN113240573B (en) High-resolution image style transformation method and system for local and global parallel learning
CN113344771B (en) Multifunctional image style migration method based on deep learning
CN114863435A (en) Text extraction method and device
CN114494467A (en) Image color migration method and device, electronic equipment and storage medium
CN103971365A (en) Extraction method for image saliency map
CN114022458A (en) Skeleton detection method and device, electronic equipment and computer readable storage medium
KR20180117826A (en) Method and apparatus for production of webtoon movies
JP4024744B2 (en) Trapping method, trapping apparatus, trapping program, and printing system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 519000 Room 102, 202, 302 and 402, No. 325, Qiandao Ring Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, Room 102 and 202, No. 327 and Room 302, No. 329

Applicant after: Zhuhai Jinshan Digital Network Technology Co.,Ltd.

Address before: 519000 Room 102, 202, 302 and 402, No. 325, Qiandao Ring Road, Tangjiawan Town, high tech Zone, Zhuhai City, Guangdong Province, Room 102 and 202, No. 327 and Room 302, No. 329

Applicant before: ZHUHAI KINGSOFT ONLINE GAME TECHNOLOGY Co.,Ltd.

GR01 Patent grant