CN111210487A - Pattern generation method and system

Pattern generation method and system

Info

Publication number
CN111210487A
CN111210487A
Authority
CN
China
Prior art keywords
style
migration
content
image
background
Prior art date
Legal status
Pending
Application number
CN202010127689.XA
Other languages
Chinese (zh)
Inventor
刘璐
Current Assignee
OneConnect Smart Technology Co Ltd
OneConnect Financial Technology Co Ltd Shanghai
Original Assignee
OneConnect Financial Technology Co Ltd Shanghai
Priority date
Filing date
Publication date
Application filed by OneConnect Financial Technology Co Ltd Shanghai
Priority to CN202010127689.XA
Publication of CN111210487A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/001 Texturing; Colouring; Generation of texture or colour
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/04 Context-preserving transformations, e.g. by using an importance map
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G06T7/12 Edge-based segmentation
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10004 Still image; Photographic image
    • G06T2207/20 Special algorithmic details
    • G06T2207/20092 Interactive image processing based on input by user
    • G06T2207/20104 Interactive definition of region of interest [ROI]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

An embodiment of the invention provides a pattern generation method, which includes the following steps: acquiring a first content picture input by a user and one style migration type of a plurality of preset style migration types, wherein the style migration types comprise a background migration type, a content migration type and a background and content migration type; performing style migration on the first content picture according to the style migration type to obtain a second content picture; and generating a corresponding pattern according to the second content picture. An embodiment of the invention also provides a pattern generation system, a computer device and a storage medium.

Description

Pattern generation method and system
Technical Field
Embodiments of the invention relate to the technical field of communications, and in particular to a pattern generation method, a pattern generation system, a computer device and a readable storage medium.
Background
Existing card face patterns are monotonous: the card face designs offered by different banks look much alike, so users experience aesthetic fatigue, and such uniform patterns can no longer meet the needs of today's young customers. The conventional pattern generation method performs style migration on an original picture: a content picture is extracted from the original picture and then synthesized with a style picture to complete the style migration and thereby generate a pattern. Although many style migration methods exist in the prior art, they apply only a single style to the whole picture, offer few style types, and yield poor richness in the migrated result. The invention therefore aims to solve the problem of how to apply multiple styles of migration to a picture uploaded by a user so as to generate a pattern.
Disclosure of Invention
In view of the above, it is desirable to provide a pattern generation method, a pattern generation system, a computer device and a readable storage medium that solve the problem of how to apply multiple styles of migration to a picture uploaded by a user so as to generate a pattern.
In order to achieve the above object, an embodiment of the present invention provides a pattern generation method, including:
acquiring a first content picture input by a user and one style migration type of a plurality of preset style migration types, wherein the style migration types comprise a background migration type, a content migration type and a background and content migration type;
performing style migration on the first content picture according to the style migration type to obtain a second content picture;
and generating a corresponding pattern according to the second content picture.
Preferably, the performing style migration on the first content picture according to a preset style migration type and a style migration model corresponding to the style migration type to obtain a second content picture includes:
processing the first content picture to extract a region of interest, boundary information of the region of interest, and a background region from the first content picture;
respectively carrying out style migration on the region of interest and the background region according to the boundary information and the style type to obtain a target style content migration image and a target style background migration image after the style migration;
and filling the target style content migration image into the target style background migration image to obtain the second content picture.
Preferably, the processing the first content picture to extract a region of interest, boundary information of the region of interest, and a background region from the first content picture includes:
performing semantic segmentation on the first content picture to obtain a segmented binary image;
extracting contour information from the binary image by using a boundary extraction method, wherein the contour information comprises contour coordinates;
and matting out, according to the outline information, the region of interest and the background region corresponding to the outline information from the first content picture, and recording the boundary information of the region of interest.
Preferably, the performing style migration on the region of interest and the background region according to the boundary information and the style type to obtain a target style content migration image and a target style background migration image after the style migration, includes:
performing style migration on the region of interest according to the style migration type to obtain a first style migration image;
matting a second style migration image from the first style migration image according to the boundary information;
acquiring first size information of the first content picture and second size information of the second content picture, and calculating the size ratio of the first content picture and the second content picture according to the first size information and the second size information;
and according to the size ratio, carrying out scaling processing on the second style migration image to obtain the target style migration image.
Preferably, the filling the target style content migration image into the target style background migration image to obtain the second content picture includes:
and filling the target style migration image to a position corresponding to the target style background migration image according to the contour coordinates to obtain the second content picture.
In order to achieve the above object, an embodiment of the present invention further provides a pattern generation system, including:
an acquisition module, which is used for acquiring a first content picture and a style migration type input by a user, wherein the style migration type comprises a background migration type, a content migration type and a background and content migration type;
the style migration module is used for carrying out style migration on the first content picture according to a preset style migration type so as to obtain a second content picture;
and the generating module is used for generating a corresponding pattern according to the second content picture.
Preferably, the style migration module is further configured to:
processing the first content picture to extract a region of interest, boundary information of the region of interest, and a background region from the first content picture;
respectively carrying out style migration on the region of interest and the background region according to the boundary information and the style type to obtain a target style content migration image and a target style background migration image after the style migration;
and filling the target style content migration image into the target style background migration image to obtain the second content picture.
To achieve the above object, an embodiment of the present invention further provides a computer device, which comprises a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the pattern generation method as described above.
To achieve the above object, an embodiment of the present invention further provides a computer-readable storage medium, in which a computer program is stored, the computer program being executable by at least one processor to cause the at least one processor to execute the steps of the pattern generation method as described above.
According to the pattern generation method, pattern generation system, computer device and readable storage medium provided by embodiments of the invention, a first content picture uploaded by a user and a style migration type input by the user are obtained, style migration is performed on the first content picture according to the style migration type to generate a second content picture, and a corresponding pattern is generated from the second content picture. Embodiments of the invention thus make it possible to apply multiple styles of migration to the user's picture, greatly improving the richness of picture style migration.
Drawings
FIG. 1 is a flowchart illustrating steps of a pattern generation method according to an embodiment of the present invention;
FIG. 2 is a flowchart illustrating an exemplary detailed step of step S102 in FIG. 1;
FIG. 3 is a flowchart illustrating an exemplary detailed step of step S200 in FIG. 2;
FIG. 4 is a flowchart illustrating an exemplary detailed step of step S202 in FIG. 2;
FIG. 5 is a diagram illustrating a hardware architecture of a computer device according to an embodiment of the present invention;
FIG. 6 is a block diagram of a process of a pattern generation system according to an embodiment of the present invention;
FIG. 7 is an exemplary first content picture;
FIG. 8 is the binary image of FIG. 7;
FIG. 9 is a schematic view of the process of obtaining style migration images after style migration is performed on the regions of interest in FIG. 7;
FIG. 10 is a schematic diagram of filling the style migration images obtained in FIG. 9 according to the contour coordinate correspondence;
FIG. 11 is a schematic diagram of the second content picture obtained after the style migration images obtained in FIG. 9 are filled in according to the contour coordinate correspondence.
The implementation, functional features and advantages of the objects of the present invention will be further explained with reference to the accompanying drawings.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that the descriptions relating to "first", "second" and the like in the present invention are for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first" or "second" may explicitly or implicitly include at least one such feature. In addition, the technical solutions of the various embodiments may be combined with each other, provided that a person skilled in the art can realize the combination; when the technical solutions are contradictory or the combination cannot be realized, the combination should be considered not to exist and falls outside the protection scope of the present invention.
Referring to fig. 1, fig. 1 is a flowchart illustrating steps of a pattern generation method according to an embodiment of the invention. It is to be understood that the flow charts in the embodiments of the present method are not intended to limit the order in which the steps are performed. It should be noted that the present embodiment is exemplarily described with the computer device 2 as an execution subject. The pattern generation method specifically includes steps S100 to S104.
Step S100: the method comprises the steps of obtaining a first content picture input by a user and one style migration type of a plurality of preset style migration types, wherein the style migration types comprise a background migration type, a content migration type and a background and content migration type.
Specifically, when a user wants to generate credit card surface patterns in different styles, the user uploads a favorite content picture and selects a corresponding style migration type, so that the content picture is migrated into a pattern of that style. After the user uploads the content picture and selects the style migration type, the computer device acquires the content picture and the style migration type. It should be noted that the user may type the desired migration type into a style migration type input box, or may select it by clicking a drop-down box.
It should be noted that each style migration type is trained with a corresponding style migration model in advance, and after a user selects a style migration type, the style migration model corresponding to the style migration type performs style migration on a content picture uploaded by the user.
Illustratively, when the user selects the style migration type that migrates only the content, the content in the content picture is synthesized with the style picture background of that migration type to complete the style migration of the content. When the user selects the style migration type that migrates both the background and the content, the content in the content picture is first synthesized with a preset style picture to complete the style migration of the content, and the migrated content is then migrated again onto another preset style picture background. Of course, when the content picture contains several pieces of content, each piece of content may be migrated into a different style before the background migration is performed.
It should be noted that, in this embodiment, further style migration types may also be set. For example: several different content migration types may be set according to how much content detail is retained, including a high-retention content migration type, a medium-retention content migration type and a low-retention content migration type; several different background migration types may be set according to how much background detail is retained, including a high-retention background migration type, a medium-retention background migration type and a low-retention background migration type; and several different background and content migration types may be set according to how much detail of the content and the background is retained, including high-retention, medium-retention and low-retention background and content migration types. When the user wants to retain a high degree of content detail, the user selects the high-retention content migration type, and the other types are selected in the same way. Of course, several different style pictures are preset for each style migration type, and a style picture is selected according to the chosen type for the style migration.
In the embodiment of the invention, the content picture comprises a plurality of contents, and the contents and the background are migrated in different styles.
Step S102: and performing style migration on the first content picture according to the style migration type to obtain a second content picture.
Specifically, after the user inputs a first content picture and the migration type to be applied to it, the style of that migration type (each style migration type provides several style backgrounds) is applied to the first content picture, so that the first content picture becomes a picture in the style of the chosen type. For example, the content and the background of the first content picture undergo migration operations of different styles to obtain a second content picture, in which the content has the picture style corresponding to the content style type and the background has the picture style corresponding to the background style type.
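To make this concrete, a minimal sketch of such a dispatch is given below, assuming the images are OpenCV/NumPy arrays. The MigrationType enum, the content_model/background_model callables and the helpers extract_regions and fill (sketched after steps S300 to S304 and S204 below) are illustrative assumptions rather than the patent's own implementation:

```python
from enum import Enum

class MigrationType(Enum):
    BACKGROUND = "background"          # migrate only the background
    CONTENT = "content"                # migrate only the content (regions of interest)
    BACKGROUND_AND_CONTENT = "both"    # migrate background and content separately

def migrate(first_picture, migration_type, content_model, background_model):
    """Route the first content picture through the models of the chosen migration type.

    content_model and background_model stand for style migration models trained in
    advance for the selected type; extract_regions and fill are sketched further below.
    """
    rois, contours, background = extract_regions(first_picture)

    if migration_type in (MigrationType.CONTENT, MigrationType.BACKGROUND_AND_CONTENT):
        rois = [content_model(roi) for roi in rois]      # stylize each region of interest
    if migration_type in (MigrationType.BACKGROUND, MigrationType.BACKGROUND_AND_CONTENT):
        background = background_model(background)        # stylize the background region

    # Paste the (possibly stylized) regions back at their contour coordinates.
    return fill(rois, contours, background)
```

In this sketch the migration type only decides which regions are stylized; the retention-degree variants described above would simply map to different pretrained models in the same kind of lookup.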
Referring to fig. 2, in an exemplary embodiment, the performing style migration on the first content picture according to a preset style migration type and a style migration model corresponding to the style migration type in step S102 to obtain a second content picture includes steps S200 to S204, where:
step S200: processing the first content picture to extract a region of interest, boundary information of the region of interest, and a background region from the first content picture.
Specifically, picture recognition is performed on the first content picture to recognize the different persons and objects in it, as well as the boundary information between different persons, between different objects, and between persons and objects. For example, the boy, the girl, the trees and the grass in the background of fig. 7, and the outline boundaries of the boy and the girl, are identified.
Referring to fig. 3, in an exemplary embodiment, step S200 specifically includes steps S300 to S304, where:
step S300: and performing semantic segmentation on the first content picture to obtain a segmented binary image.
Step S302: and extracting contour information from the binary image by using a boundary extraction method, wherein the contour information comprises contour coordinates.
Step S304: and according to the outline information, the interested region and the background region corresponding to the outline information are scratched from the first content picture, and the boundary information of the interested region is recorded.
Referring to fig. 7 and 8, in an exemplary embodiment, fig. 7 is an exemplary first content picture and fig. 8 is the binary image of fig. 7. Specifically, the computer device 2 applies a semantic segmentation algorithm to the first content picture of fig. 7 to separate the regions of interest, namely the boy and the girl, from the background, and obtains the binary image of the boy, the girl and the background shown in fig. 8. Then, using a boundary extraction method, the contour information of the boy is extracted from the binary image of the boy and the background, and the contour information of the girl is extracted from the binary image of the girl and the background, where the contour information includes contour coordinates. Finally, the regions of interest of the boy and the girl are matted out of the first content picture according to the contour information of the boy and of the girl, and the boundary information of these regions of interest is recorded.
It should be noted that the goal of semantic segmentation is to label the category to which each pixel of the image belongs; the background part and the person part are then determined according to the segmentation result.
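As one possible realization of steps S300 to S304 (a sketch only: the patent does not name a particular segmentation network or boundary extraction algorithm, so the use of torchvision's DeepLabV3 and OpenCV's findContours here is an assumption):

```python
import cv2
import torch
from torchvision import models, transforms

def extract_regions(image_bgr):
    """Return (regions of interest, their contours, background region) for a BGR image."""
    # S300: semantic segmentation to obtain a binary person/background image.
    seg_net = models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()
    prep = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])
    rgb = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2RGB)
    with torch.no_grad():
        logits = seg_net(prep(rgb).unsqueeze(0))["out"][0]
    binary = (logits.argmax(0) == 15).byte().numpy() * 255   # class 15 = person (VOC labels)

    # S302: boundary extraction; the contour information is a list of coordinate arrays.
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    # S304: mat out each region of interest and blank it from the background region.
    rois = []
    for cnt in contours:
        x, y, w, h = cv2.boundingRect(cnt)
        rois.append(image_bgr[y:y + h, x:x + w].copy())
    background = image_bgr.copy()
    background[binary == 255] = 0
    return rois, contours, background
```

Any segmentation network that produces a per-pixel class map could be substituted; only the binary image and its contours are consumed by the later steps.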
Step S202: and respectively carrying out style migration on the region of interest and the background region according to the boundary information and the style type to obtain a target style content migration image and a target style background migration image after the style migration.
Specifically, after the region of interest in the first content picture, the boundary information of the region of interest and the background region have been extracted, corresponding style migration is performed on the region of interest and on the background region according to the style migration type input by the user, so as to obtain a target style content migration image and a target style background migration image after the migration. For example, if the style migration type input by the user is the background and content migration type, content style migration is performed on the region of interest of the first content picture and background style migration is performed on its background, yielding a target style content migration image and a target style background migration image in the corresponding styles.
Referring to fig. 4, in an exemplary embodiment, step S202 specifically includes steps S400 to S406, where:
step S400: and performing style migration on the region of interest according to the style migration type to obtain a first style migration image.
Step S402: and matting a second style migration image from the first style migration image according to the boundary information.
Step S404: acquiring first size information of the first content picture and second size information of the second content picture, and calculating the size ratio of the first content picture and the second content picture according to the first size information and the second size information.
Step S406: and according to the size ratio, carrying out scaling processing on the second style migration image to obtain the target style migration image.
Referring to fig. 9, fig. 9 is a schematic diagram of the process of obtaining style migration images after style migration is performed on the regions of interest in fig. 7. With reference to fig. 9, a first boy style migration image (c) is obtained from the region of interest of the boy (a) according to the style migration type (b), and a first girl style migration image (f) is obtained from the region of interest of the girl (d) according to the style migration type (e). Then, a second boy style migration image is matted out of the first boy style migration image according to the boundary information of the boy's region of interest, and a second girl style migration image is matted out of the first girl style migration image according to the boundary information of the girl's region of interest. Next, the second boy style migration image and the second girl style migration image are adjusted according to the size ratio between the first content picture and the second content picture to obtain a boy target style migration image and a girl target style migration image. For example, if the ratio of the size of the first content picture to the size of the second content picture is 2.5:1, the second boy style migration image and the second girl style migration image are each reduced by a factor of 2.5; the reduced second boy style migration image is the boy target style migration image, the reduced second girl style migration image is the girl target style migration image, and the two are collectively called the target style migration images.
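Steps S400 to S406 for a single region of interest might then look as follows; style_model again stands for the pretrained model of the chosen migration type, and the 2.5:1 example above corresponds to size_ratio = 2.5 (all names here are illustrative assumptions):

```python
import cv2
import numpy as np

def stylize_roi(roi, contour, style_model, size_ratio):
    """S400 to S406: stylize a region of interest, mat it with its contour, then rescale."""
    # S400: style migration on the region of interest -> first style migration image.
    first_style = style_model(roi)        # assumed to keep the input's height and width

    # S402: mat the second style migration image out of the first one using the boundary.
    x, y, _, _ = cv2.boundingRect(contour)
    shifted = (contour - np.array([[x, y]])).astype(np.int32)   # contour in ROI coordinates
    mask = np.zeros(first_style.shape[:2], np.uint8)
    cv2.drawContours(mask, [shifted], -1, 255, thickness=cv2.FILLED)
    second_style = cv2.bitwise_and(first_style, first_style, mask=mask)

    # S404/S406: scale by the size ratio of the first to the second content picture,
    # e.g. a 2.5:1 ratio shrinks the matted image by a factor of 2.5.
    h, w = second_style.shape[:2]
    new_size = (max(1, int(w / size_ratio)), max(1, int(h / size_ratio)))
    return cv2.resize(second_style, new_size)
```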
Step S204: and filling the target style content migration image into the target style background migration image to obtain the second content picture.
When the first content picture is processed, the regions of interest (the boy and the girl), the boundary contours of the boy and of the girl, and the background region are extracted from the first content picture. Migration is then performed according to the boy's boundary contour, the girl's boundary contour and the background and content migration type selected by the user, so that the boy is migrated to a first style picture, the girl is migrated to a second style picture and the background is migrated to a third style picture, yielding a migrated boy target style migration image, a migrated girl target style migration image and a migrated background target style migration image. The boy target style migration image and the girl target style migration image are then filled into the background target style migration image.
In the drawings provided in the embodiments of the present invention, the second-style picture is the same as the third-style picture, and in other embodiments, the second-style picture is different from the third-style picture.
In an exemplary embodiment, step S204 specifically includes: and filling the target style migration image to a position corresponding to the target style background migration image according to the contour coordinates to obtain the second content picture.
Referring to fig. 10 and 11, fig. 10 is a schematic diagram of filling the style migration images obtained in fig. 9 according to the contour coordinate correspondence, and fig. 11 is a schematic diagram of the second content picture obtained after that filling. In an exemplary embodiment, with reference to fig. 10, if the extracted contour coordinates of the boy are a, the boy target style migration image is filled into the target style background migration image at the position with coordinates a; and if the extracted contour coordinates of the girl are b, the girl target style migration image is filled into the target style background migration image at the position with coordinates b. Referring to fig. 11, when both the boy and the girl have been filled in, the second content picture of fig. 11 is obtained.
It should be noted that each picture is composed of pixels, and pixel coordinates give the positions of pixels in the image. Once the coordinates of the boy and the girl are determined, the boy and the girl are filled into the corresponding coordinate positions of the target style background migration image through those coordinates.
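The filling of step S204 can then be sketched as pasting each target style migration image into the target style background migration image at its contour coordinates; this is again an illustrative sketch (bounds handling and blending at the seams are omitted):

```python
import cv2
import numpy as np

def fill(stylized_rois, contours, stylized_background, size_ratio=1.0):
    """Fill each target style migration image into the target style background
    migration image at the position given by its contour coordinates."""
    result = stylized_background.copy()
    for roi_img, cnt in zip(stylized_rois, contours):
        # The contour coordinates are scaled by the same ratio as the region itself.
        scaled = (np.asarray(cnt) / size_ratio).astype(np.int32)
        x, y, w, h = cv2.boundingRect(scaled)

        mask = np.zeros(result.shape[:2], np.uint8)
        cv2.drawContours(mask, [scaled], -1, 255, thickness=cv2.FILLED)

        patch = np.zeros_like(result)
        patch[y:y + h, x:x + w] = cv2.resize(roi_img, (w, h))   # align the patch to its box
        result[mask == 255] = patch[mask == 255]                # keep only pixels inside the contour
    return result
```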
Step S104: and generating a corresponding pattern according to the second content picture.
Illustratively, after the computer device generates the second content picture, the second content picture is printed onto a credit card face to generate the corresponding pattern. Of course, in other embodiments the second content picture may be printed on other products, such as a mobile phone case or other cards; the embodiment of the invention is not limited in this respect and is described by taking a credit card face as an example.
It should be noted that, the size of the second content picture is the same as the size of the card surface of the credit card, and the size of the first content picture may be different from the size of the card surface of the credit card.
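Since the first content picture may have any size while the second content picture must match the card face, a small helper along these lines could be used when producing the second content picture (the ISO/IEC 7810 ID-1 card face of 85.60 mm x 53.98 mm is a standard size; the 300 DPI rendering resolution is an assumption):

```python
import cv2

# 85.60 mm x 53.98 mm at 300 DPI is roughly 1011 x 638 pixels.
CARD_W_PX, CARD_H_PX = 1011, 638

def to_card_face(picture):
    """Resize a picture to the credit card face dimensions."""
    return cv2.resize(picture, (CARD_W_PX, CARD_H_PX), interpolation=cv2.INTER_AREA)
```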
According to the pattern generation method provided by the embodiment of the invention, a first content picture uploaded by the user and a style migration type input by the user are obtained, style migration is performed on the first content picture according to the style migration type to generate a second content picture, and a corresponding pattern is generated from the second content picture. Embodiments of the invention thus make it possible to apply multiple styles of migration to the user's picture, greatly improving the richness of picture style migration.
Referring to fig. 5, a hardware architecture diagram of a computer device according to an embodiment of the invention is shown. The computer device 2 includes, but is not limited to, a memory 21, a processor 22 and a network interface 23 that are communicatively coupled to one another via a system bus. Fig. 5 shows only the computer device 2 with components 21-23, but it should be understood that not all of the illustrated components are required, and that more or fewer components may be implemented instead.
The memory 21 includes at least one type of readable storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a Read Only Memory (ROM), an Electrically Erasable Programmable Read Only Memory (EEPROM), a Programmable Read Only Memory (PROM), a magnetic memory, a magnetic disk, an optical disk, etc. In some embodiments, the memory 21 may be an internal storage unit of the computer device 2, such as a hard disk or a memory of the computer device 2. In other embodiments, the memory may also be an external storage device of the computer device 2, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a flash Card (FlashCard), and the like, provided on the computer device 2. Of course, the memory 21 may also comprise both an internal storage unit of the computer device 2 and an external storage device thereof. In this embodiment, the memory 21 is generally used for storing an operating system installed in the computer device 2 and various types of application software, such as program codes of the pattern generation system 20. Further, the memory 21 may also be used to temporarily store various types of data that have been output or are to be output.
The processor 22 may be a Central Processing Unit (CPU), controller, microcontroller, microprocessor, or other data Processing chip in some embodiments. The processor 22 is typically used to control the overall operation of the computer device 2. In this embodiment, the processor 22 is configured to run the program code stored in the memory 21 or process data, for example, run the pattern generation system 20.
The network interface 23 may comprise a wireless network interface or a wired network interface, and is generally used for establishing a communication connection between the computer device 2 and other electronic devices. For example, the network interface 23 is used to connect the computer device 2 to an external terminal through a network and to establish a data transmission channel and communication connection between them. The network may be a wireless or wired network such as an intranet, the Internet, a Global System for Mobile Communications (GSM) network, a Wideband Code Division Multiple Access (WCDMA) network, a 4G network, a 5G network, Bluetooth or Wi-Fi.
Referring to fig. 6, a schematic diagram of program modules of a pattern generation system according to an embodiment of the invention is shown. The pattern generation system can be applied to computer equipment. In the present embodiment, the pattern generation system 20 may include or be divided into one or more program modules, which are stored in a storage medium and executed by one or more processors to implement the present invention and the above-described pattern generation method. The program modules referred to in the embodiments of the present invention refer to a series of computer program instruction segments that can perform specific functions, and are more suitable than the program itself for describing the execution process of the pattern generation system 20 in the storage medium. The following description will specifically describe the functions of the program modules of the present embodiment:
the obtaining module 201 is configured to obtain a first content picture input by a user and one of a plurality of preset genre migration types, where the plurality of genre migration types include a background migration type, a content migration type, and a background and content migration type.
Specifically, when a user wants to generate credit card surface patterns in different styles, the user uploads a favorite content picture and selects a corresponding style migration type, so that the content picture is migrated into a pattern of that style. After the user uploads the content picture and selects the style migration type, the obtaining module 201 acquires the content picture and the style migration type. It should be noted that the user may type the desired migration type into a style migration type input box, or may select it by clicking a drop-down box.
It should be noted that each style migration type is trained with a corresponding style migration model in advance, and after a user selects a style migration type, the style migration model corresponding to the style migration type performs style migration on a content picture uploaded by the user.
Illustratively, when the user selects the style migration type that migrates only the content, the content in the content picture is synthesized with the style picture background of that migration type to complete the style migration of the content. When the user selects the style migration type that migrates both the background and the content, the content in the content picture is first synthesized with a preset style picture to complete the style migration of the content, and the migrated content is then migrated again onto another preset style picture background. Of course, when the content picture contains several pieces of content, each piece of content may be migrated into a different style before the background migration is performed.
It should be noted that, in this embodiment, further style migration types may also be set. For example: several different content migration types may be set according to how much content detail is retained, including a high-retention content migration type, a medium-retention content migration type and a low-retention content migration type; several different background migration types may be set according to how much background detail is retained, including a high-retention background migration type, a medium-retention background migration type and a low-retention background migration type; and several different background and content migration types may be set according to how much detail of the content and the background is retained, including high-retention, medium-retention and low-retention background and content migration types. When the user wants to retain a high degree of content detail, the user selects the high-retention content migration type, and the other types are selected in the same way. Of course, several different style pictures are preset for each style migration type, and a style picture is selected according to the chosen type for the style migration.
In the embodiment of the invention, the content picture comprises a plurality of contents, and the contents and the background are migrated in different styles.
And the style migration module 202 is configured to perform style migration on the first content picture according to the style migration type to obtain a second content picture.
Specifically, after a user inputs a first content picture and a migration type for performing style migration on the first content picture, each style migration type is provided with a plurality of backgrounds, and the style of the migration type is applied to the first content picture, so that the first content picture forms a picture with the style of the migration type. For example, the style migration module 202 performs migration operations of different styles on the content and the background of the first content picture, and obtains a second content picture according to the migration result, where the content in the second content picture has a picture style corresponding to the content style type, and the background has a picture style corresponding to the background style type.
The style migration module 202 further includes a processing unit, a style migration unit, and a filling unit.
The processing unit is configured to process the first content picture to extract an area of interest, boundary information of the area of interest, and a background area from the first content picture.
Specifically, the processing unit performs picture recognition on the first content picture to recognize the different persons and objects in it, as well as the boundary information between different persons, between different objects, and between persons and objects. For example, the boy, the girl, the trees and the grass in the background of fig. 7, and the outline boundaries of the boy and the girl, are identified.
The processing unit is further used for performing semantic segmentation on the first content picture to obtain a segmented binary image; extracting contour information from the binary image by using a boundary extraction method, wherein the contour information comprises contour coordinates; and matting out, according to the outline information, the region of interest and the background region corresponding to the outline information from the first content picture, and recording the boundary information of the region of interest.
Referring to fig. 7 and 8, in an exemplary embodiment, fig. 7 is an exemplary first content picture and fig. 8 is the binary image of fig. 7. Specifically, the processing unit applies a semantic segmentation algorithm to the first content picture of fig. 7 to separate the regions of interest, namely the boy and the girl, from the background, and obtains the binary image of the boy, the girl and the background shown in fig. 8. Then, using a boundary extraction method, the contour information of the boy is extracted from the binary image of the boy and the background, and the contour information of the girl is extracted from the binary image of the girl and the background, where the contour information includes contour coordinates. Finally, the regions of interest of the boy and the girl are matted out of the first content picture according to the contour information of the boy and of the girl, and the boundary information of these regions of interest is recorded.
It should be noted that the goal of semantic segmentation is to label the category to which each pixel of the image belongs; the background part and the person part are then determined according to the segmentation result.
And the style migration unit is used for respectively performing style migration on the region of interest and the background region according to the boundary information and the style type so as to obtain a target style content migration image and a target style background migration image after the style migration.
Specifically, after the region of interest in the first content picture, the boundary information of the region of interest and the background region have been extracted, the style migration unit performs corresponding style migration on the region of interest and on the background region according to the style migration type input by the user, so as to obtain a target style content migration image and a target style background migration image after the migration. For example, if the style migration type input by the user is the background and content migration type, content style migration is performed on the region of interest of the first content picture and background style migration is performed on its background, yielding a target style content migration image and a target style background migration image in the corresponding styles.
The style migration unit is further configured to perform style migration on the region of interest according to the style migration type to obtain a first style migration image; matting a second style migration image from the first style migration image according to the boundary information; acquiring first size information of the first content picture and second size information of the second content picture, and calculating the size ratio of the first content picture and the second content picture according to the first size information and the second size information; and according to the size ratio, carrying out scaling processing on the second style migration image to obtain the target style migration image.
Referring to fig. 9, fig. 9 is a schematic diagram of the process of obtaining style migration images after style migration is performed on the regions of interest in fig. 7. With reference to fig. 9, the style migration unit obtains a first boy style migration image (c) from the region of interest of the boy (a) according to the style migration type (b), and obtains a first girl style migration image (f) from the region of interest of the girl (d) according to the style migration type (e). Then, a second boy style migration image is matted out of the first boy style migration image according to the boundary information of the boy's region of interest, and a second girl style migration image is matted out of the first girl style migration image according to the boundary information of the girl's region of interest. Next, the second boy style migration image and the second girl style migration image are adjusted according to the size ratio between the first content picture and the second content picture to obtain a boy target style migration image and a girl target style migration image. For example, if the ratio of the size of the first content picture to the size of the second content picture is 2.5:1, the second boy style migration image and the second girl style migration image are each reduced by a factor of 2.5; the reduced second boy style migration image is the boy target style migration image, the reduced second girl style migration image is the girl target style migration image, and the two are collectively called the target style migration images.
And the filling unit is used for filling the target style content migration image into the target style background migration image to obtain the second content picture.
When processing the first content picture, the processing unit extracts the regions of interest (the boy and the girl), the boundary contours of the boy and of the girl, and the background region from the first content picture. The style migration unit then performs migration according to the boy's boundary contour, the girl's boundary contour and the background and content migration type selected by the user, so that the boy is migrated to a first style picture, the girl is migrated to a second style picture and the background is migrated to a third style picture, yielding a migrated boy target style migration image, a migrated girl target style migration image and a migrated background target style migration image. The filling unit then fills the boy target style migration image and the girl target style migration image into the background target style migration image.
In the drawings provided in the embodiments of the present invention, the second-style picture is the same as the third-style picture, and in other embodiments, the second-style picture is different from the third-style picture.
In an exemplary embodiment, the filling unit is further configured to: and filling the target style migration image to a position corresponding to the target style background migration image according to the contour coordinates to obtain the second content picture.
Referring to fig. 10 and 11, fig. 10 is a schematic diagram of filling the style migration images obtained in fig. 9 according to the contour coordinate correspondence, and fig. 11 is a schematic diagram of the second content picture obtained after that filling. In an exemplary embodiment, with reference to fig. 10, if the extracted contour coordinates of the boy are a, the filling unit fills the boy target style migration image into the target style background migration image at the position with coordinates a; and if the extracted contour coordinates of the girl are b, the filling unit fills the girl target style migration image into the target style background migration image at the position with coordinates b. Referring to fig. 11, when both the boy and the girl have been filled in, the second content picture of fig. 11 is obtained.
It should be noted that each picture is composed of pixels, and pixel coordinates give the positions of pixels in the image. Once the coordinates of the boy and the girl are determined, the boy and the girl are filled into the corresponding coordinate positions of the target style background migration image through those coordinates.
And a generating module 203, configured to generate a corresponding pattern according to the second content picture.
Illustratively, after the second content picture is generated, the generating module 203 prints the second content picture onto the card surface of the credit card to generate the corresponding pattern. Of course, in other embodiments the second content picture may be printed on other products, such as a mobile phone case or other cards; the embodiment of the invention is not limited in this respect and is described by taking a credit card surface as an example.
It should be noted that, the size of the second content picture is the same as the size of the card surface of the credit card, and the size of the first content picture may be different from the size of the card surface of the credit card.
According to the pattern generation system provided by the embodiment of the invention, a first content picture uploaded by the user and a style migration type input by the user are obtained, style migration is performed on the first content picture according to the style migration type to generate a second content picture, and a corresponding pattern is generated from the second content picture. Embodiments of the invention thus make it possible to apply multiple styles of migration to the user's picture, greatly improving the richness of picture style migration.
The present invention also provides a computer device capable of executing programs, such as a smart phone, a tablet computer, a notebook computer, a desktop computer, a rack server, a blade server or a tower server (including an independent server or a server cluster composed of a plurality of servers). The computer device of this embodiment at least includes, but is not limited to, a memory and a processor communicatively coupled to each other via a system bus.
The present embodiment also provides a computer-readable storage medium, such as a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or DX memory, etc.), a Random Access Memory (RAM), a Static Random Access Memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, a server, an App application mall, etc., on which a computer program is stored, which when executed by a processor implements corresponding functions. The computer-readable storage medium of the present embodiment is used for storing the pattern generation system 20, and when executed by a processor, implements the steps of the pattern generation method of the above-described embodiment.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner.
The above description is only a preferred embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the present specification and the accompanying drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (10)

1. A method of pattern generation, the method comprising:
acquiring a first content picture input by a user and one style migration type of a plurality of preset style migration types, wherein the style migration types comprise a background migration type, a content migration type and a background and content migration type;
performing style migration on the first content picture according to the style migration type to obtain a second content picture;
and generating a corresponding pattern according to the second content picture.
2. The pattern generation method according to claim 1, wherein the performing style migration on the first content picture according to a preset style migration type and a style migration model corresponding to the style migration type to obtain a second content picture comprises:
processing the first content picture to extract a region of interest, boundary information of the region of interest, and a background region from the first content picture;
respectively carrying out style migration on the region of interest and the background region according to the boundary information and the style type to obtain a target style content migration image and a target style background migration image after the style migration;
and filling the target style content migration image into the target style background migration image to obtain the second content picture.
3. The pattern generation method according to claim 2, wherein the processing the first content picture to extract a region of interest, boundary information of the region of interest, and a background region from the first content picture includes:
performing semantic segmentation on the first content picture to obtain a segmented binary image;
extracting contour information from the binary image by using a boundary extraction method, wherein the contour information comprises contour coordinates;
and matting out, according to the outline information, the region of interest and the background region corresponding to the outline information from the first content picture, and recording the boundary information of the region of interest.
4. The pattern generation method according to claim 2, wherein the performing style migration on the region of interest and the background region according to the boundary information and the style type to obtain a target style content migration image and a target style background migration image after the style migration, respectively, comprises:
performing style migration on the region of interest according to the style migration type to obtain a first style migration image;
matting a second style migration image from the first style migration image according to the boundary information;
acquiring first size information of the first content picture and second size information of the second content picture, and calculating the size ratio of the first content picture and the second content picture according to the first size information and the second size information;
and according to the size proportion, carrying out scaling processing on the second style image to obtain the target style migration image.
5. The pattern generation method of claim 2, wherein the populating the target-style content migration image into the target-style background migration image to obtain the second content picture comprises:
and filling the target style migration image to a position corresponding to the target style background migration image according to the contour coordinates to obtain the second content picture.
6. A pattern generation system, comprising:
an acquisition module, which is used for acquiring a first content picture and a style migration type input by a user, wherein the style migration type comprises a background migration type, a content migration type and a background and content migration type;
the style migration module is used for carrying out style migration on the first content picture according to a preset style migration type so as to obtain a second content picture;
and the generating module is used for generating a corresponding pattern according to the second content picture.
7. The pattern generation system of claim 6, wherein the style migration module further comprises a processing unit, a style migration unit, and a filling unit, wherein:
the processing unit is configured to process the first content picture to extract a region of interest, boundary information of the region of interest, and a background region from the first content picture;
the style migration unit is configured to perform style migration on the region of interest and on the background region, respectively, according to the boundary information and the style migration type, to obtain a target style content migration image and a target style background migration image after the style migration;
and the filling unit is configured to fill the target style content migration image into the target style background migration image to obtain the second content picture.
8. The pattern generation system of claim 7, wherein the processing unit is further configured to:
performing semantic segmentation on the first content picture to obtain a segmented binary image;
extracting contour information from the binary image by using a boundary extraction method, wherein the contour information comprises contour coordinates;
and mat, according to the contour information, the region of interest and the background region corresponding to the contour information from the first content picture, and record the boundary information of the region of interest.
9. A computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the pattern generation method according to any one of claims 1 to 5.
10. A computer-readable storage medium having a computer program stored therein, wherein the computer program is executable by at least one processor to cause the at least one processor to perform the steps of the pattern generation method according to any one of claims 1 to 5.
CN202010127689.XA 2020-02-28 2020-02-28 Pattern generation method and system Pending CN111210487A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010127689.XA CN111210487A (en) 2020-02-28 2020-02-28 Pattern generation method and system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010127689.XA CN111210487A (en) 2020-02-28 2020-02-28 Pattern generation method and system

Publications (1)

Publication Number Publication Date
CN111210487A (en) 2020-05-29

Family

ID=70788604

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010127689.XA Pending CN111210487A (en) 2020-02-28 2020-02-28 Pattern generation method and system

Country Status (1)

Country Link
CN (1) CN111210487A (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110022463A (en) * 2019-04-11 2019-07-16 重庆紫光华山智安科技有限公司 Video interested region intelligent coding method and system are realized under dynamic scene
CN110598781A (en) * 2019-09-05 2019-12-20 Oppo广东移动通信有限公司 Image processing method, image processing device, electronic equipment and storage medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
MIAO YONGWEI et al.: "Image local style transfer based on convolutional neural network", Computer Science (计算机科学), pages 2 *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113160033A (en) * 2020-12-28 2021-07-23 武汉纺织大学 Garment style migration system and method
CN113469969A (en) * 2021-06-29 2021-10-01 深圳中科飞测科技股份有限公司 Image generation method, image generation device, detection device, and readable storage medium
CN113763233A (en) * 2021-08-04 2021-12-07 深圳盈天下视觉科技有限公司 Image processing method, server and photographing device
CN113763233B (en) * 2021-08-04 2024-06-21 深圳盈天下视觉科技有限公司 Image processing method, server and photographing equipment
WO2023151299A1 (en) * 2022-02-11 2023-08-17 华为云计算技术有限公司 Data generation method and apparatus, device, and storage medium
CN117522676A (en) * 2024-01-05 2024-02-06 北京市智慧水务发展研究院 Method and device for generating data set based on style migration water meter image

Similar Documents

Publication Publication Date Title
CN111210487A (en) Pattern generation method and system
CN110163198B (en) Table identification reconstruction method and device and storage medium
CN108010112B (en) Animation processing method, device and storage medium
CN111898411B (en) Text image labeling system, method, computer device and storage medium
CN108492338B (en) Compression method and device for animation file, storage medium and electronic device
CN111652796A (en) Image processing method, electronic device, and computer-readable storage medium
CN113627402B (en) Image identification method and related device
CN110728722A (en) Image color migration method and device, computer equipment and storage medium
CN111310710A (en) Face detection method and system
US20220343507A1 (en) Process of Image
CN116415298A (en) Medical data desensitization method and system
CN110008922B (en) Image processing method, device, apparatus, and medium for terminal device
CN115423936A (en) AI virtual character and image processing method, system, electronic device and storage medium
CN117036546B (en) Picture generation method and device, storage medium and computing equipment
CN115641397A (en) Method and system for synthesizing and displaying virtual image
CN112818820B (en) Image generation model training method, image generation device and electronic equipment
CN114627211A (en) Video business card generation method and device, computer equipment and storage medium
CN111688605B (en) Character batch graph cutting method applied to automobile instrument and related equipment
CN113554549A (en) Text image generation method and device, computer equipment and storage medium
CN111651969A (en) Style migration
CN112416191B (en) Screen recording processing method and device, computer equipment and computer readable storage medium
CN116385829B (en) Gesture description information generation method, model training method and device
CN116385597B (en) Text mapping method and device
CN112328073B (en) Bidding evaluation method, device and system based on augmented reality equipment and computer equipment
CN112464956A (en) Image contact ratio identification method, electronic device and computer readable storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination