CN114022593A - Automatic generation method and platform of paintings and storage medium - Google Patents

Automatic generation method and platform of paintings and storage medium

Info

Publication number
CN114022593A
Authority
CN
China
Prior art keywords
graphic
frame unit
picture frame
character
label
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202111359861.5A
Other languages
Chinese (zh)
Inventor
肖刚
肖骏清辰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to CN202111359861.5A priority Critical patent/CN114022593A/en
Publication of CN114022593A publication Critical patent/CN114022593A/en
Withdrawn legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/00 2D [Two Dimensional] image generation
    • G06T11/20 Drawing from basic elements, e.g. lines or circles
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/24 Classification techniques
    • G06F18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Computing Systems (AREA)
  • Molecular Biology (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)

Abstract

The invention discloses an automatic generation method, platform, and storage medium for pictorial works. A drawing instruction containing drawing content information is received; a graphic label for the target graphic element in each picture frame unit is determined according to the drawing content information; and the graphic label is input into a preset graphic content generation model to generate the corresponding target graphic element. The target graphic elements are then combined to generate the graphic content corresponding to each picture frame unit, and the graphic content is filled into the corresponding picture frame unit. By generating drawing work automatically and intelligently, the invention effectively reduces the time and social cost of drawing work, lowers the capital investment required, simplifies the complexity of drawing projects, reduces the painter's workload, and improves the painter's working efficiency.

Description

Automatic generation method and platform of paintings and storage medium
Technical Field
The invention relates to the technical field of graphics processing, and in particular to an automatic generation method, platform, and storage medium for pictorial works.
Background
Pictorial works encountered daily, whether on paper or on computers and mobile phones, are drawn by artists and designers by hand with paper and brush or with computer drawing tools. These works span entertainment, education, public welfare, storytelling, comics, public-service posters, promotional posters, and the like. Character design, action design, scene design, and so on are all drawn manually. Important elements such as characters (main and supporting), character traits, painting style, character actions, scenes, scene transitions, and typesetting require a great deal of painstaking manual labor. Painters have gradually moved from physical painting on paper to computer painting. However, computer drawing tools only provide professional tools such as a white board (electronic drawing board), various brush strokes, color palettes (color selection), and erasers; graphic content must still be drawn by hand stroke by stroke, similar graphics are drawn repeatedly, the workload is large, and the user experience is poor.
Disclosure of Invention
The invention provides an automatic generation method, platform, and storage medium for pictorial works, aiming to solve the technical problem that existing pictorial works are drawn manually by painters, so that similar graphics are drawn repeatedly and the workload is large.
In order to solve this technical problem, the invention provides an automatic generation method for pictorial works, which comprises the following steps:
receiving a drawing instruction, wherein the drawing instruction comprises drawing content information;
determining a graphic label of a target graphic element in each picture frame unit according to the drawing content information;
inputting the graphic label into a preset graphic content generation model to generate a corresponding target graphic element;
and combining the target graphic elements to generate the graphic content corresponding to the picture frame unit, and filling the graphic content into the corresponding picture frame unit.
Optionally, the drawing instruction is a voice drawing instruction or a text drawing instruction.
Optionally, the determining the graphic label of the target graphic element in each picture frame unit according to the drawing content information includes:
determining story line information corresponding to each picture frame unit according to the drawing content information;
and determining the graphic label of the target graphic element in each picture frame unit according to the story line information.
Optionally, the determining, according to the drawing content information, the story line information corresponding to each picture frame unit includes:
determining story line information corresponding to each picture frame unit according to the drawing content information and a preset picture frame unit layout;
or, alternatively,
receiving a picture frame unit layout instruction, and determining the story line information corresponding to each picture frame unit according to the drawing content information and the picture frame unit layout instruction.
Optionally, the determining, according to the story line information, the graphic label of the target graphic element in each picture frame unit includes:
analyzing the story line information through semantic analysis and emotion analysis techniques to determine the characters in the corresponding story line and the emotion of each character;
determining a character label for the corresponding character according to the emotion and the semantic analysis result;
and inputting the character label into the graphic content generation model to generate the corresponding target graphic element.
Optionally, the character label includes at least one of a character expression label, a character graphic size label, a character action label, and a character prop label.
Optionally, the graphic label further includes a scene label of the scene in which the character is located, and the determining the graphic label of the target graphic element in each picture frame unit according to the story line information includes:
analyzing the story line information through semantic analysis technology and directly determining the corresponding scene label.
Optionally, before combining the target graphic elements to generate the graphic content in the corresponding picture frame unit, the method further includes:
receiving a modification instruction for the target graphic element;
modifying the corresponding target graphic element according to the modification instruction to generate a new target graphic element;
the step of combining the target graphic elements to generate the graphic content in the corresponding picture frame unit comprises:
and generating the graphic content corresponding to the picture frame unit according to the new target graphic element.
Further, the present invention also provides an automatic generation platform of pictorial representations, comprising:
a processor, a memory, and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is configured to execute one or more programs stored in the memory to implement the steps of the method of automatically generating a pictorial representation, as described in any one of the preceding claims.
Further, the present invention also provides a storage medium storing one or more programs executable by one or more processors to implement the steps of the method of automatically generating a pictorial representation as set forth in any one of the preceding claims.
Advantageous effects
The invention provides an automatic generation method, platform, and storage medium for pictorial works. A drawing instruction containing drawing content information is received; a graphic label for the target graphic element in each picture frame unit is determined according to the drawing content information; and the graphic label is input into a preset graphic content generation model to generate the corresponding target graphic element. The target graphic elements are then combined to generate the graphic content corresponding to each picture frame unit, and the graphic content is filled into the corresponding picture frame unit. By generating drawing work automatically and intelligently, the invention effectively reduces the time and social cost of drawing work, lowers the capital investment required, simplifies the complexity of drawing projects, reduces the painter's workload, and improves the painter's working efficiency.
Drawings
The invention will be further described with reference to the accompanying drawings and examples, in which:
FIG. 1 is a schematic flow chart of a method for automatically generating a pictorial representation according to an embodiment of the present invention;
FIG. 2 is a diagram illustrating a target graphic element being placed in a frame unit according to an embodiment of the present invention;
FIG. 3 is a first schematic diagram of a layout of a picture frame unit according to an embodiment of the present invention;
FIG. 4 is a second schematic diagram of a layout of a picture frame unit according to an embodiment of the present invention;
FIG. 5 is a third schematic view of a layout of a picture frame unit according to an embodiment of the present invention;
FIG. 6 is a diagram illustrating a first information structure of a drawing command according to an embodiment of the present invention;
FIG. 7 is a schematic diagram of a second information structure of a drawing instruction according to an embodiment of the present invention.
Detailed Description
Example one:
it should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In order to solve the problem that existing pictorial works are drawn manually by painters, with similar graphics drawn repeatedly and a large workload, this embodiment provides an automatic generation method for pictorial works, applicable to an automatic generation platform for pictorial works. The flow of the method is shown in FIG. 1 and comprises the following steps:
s101: and receiving a drawing instruction, wherein the drawing instruction comprises drawing content information.
It should be noted that the drawing instruction in the present embodiment may be a voice drawing instruction or a text drawing instruction.
S102: determining the graphic label of the target graphic element in each picture frame unit according to the drawing content information.
The picture frame unit in this embodiment is a structural unit within a page displaying the pictorial work. Specifically, the story line information corresponding to each picture frame unit can be determined according to the drawing content information, and the graphic label of the target graphic element in each picture frame unit can then be determined according to the story line information.
S103: inputting the graphic label into a preset graphic content generation model to generate the corresponding target graphic element.
It can be understood that the graphic label in this embodiment may be a character label, or a scene label for the scene in which a character is located. It should be noted that, in some examples, the story line information may be analyzed directly through semantic analysis techniques to determine the characters in the respective story line and the emotion of each character.
In other examples, the story line information may be analyzed through emotion analysis techniques to determine the characters' emotions. The character label in this embodiment includes, but is not limited to, at least one of a character expression label, a character graphic size label, a character action label, a character prop label, and a character screen position label. Specifically, the story line information can be analyzed through semantic analysis and emotion analysis techniques to determine the characters in the story line and the emotion of each character; character labels for the corresponding characters are determined according to the emotion and the semantic analysis result; and the character labels are input into the graphic content generation model to generate the corresponding target graphic elements. In some embodiments, the drawing instruction may further include dialogue text between the characters, and combining the target graphic elements to generate the graphic content of the corresponding picture frame unit then comprises the following steps (illustrated by the sketch after these steps):
combining the target graphic elements to obtain a character graphic;
and displaying the dialogue text in a screen area associated with the screen position of the corresponding character graphic.
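For ease of understanding, the following Python sketch illustrates this combination step under stated assumptions: the class and field names (GraphicElement, CharacterGraphic, the offset convention in anchor_dialogue) are editorial placeholders, not part of the invention.

```python
from dataclasses import dataclass, field

@dataclass
class GraphicElement:
    """Hypothetical: one generated target graphic element."""
    label: str      # e.g. "happy-expression", "upper-body-shirt"
    layer: int      # stacking order inside the picture frame unit
    x: float = 0.0
    y: float = 0.0

@dataclass
class CharacterGraphic:
    """Hypothetical container for a combined character graphic."""
    name: str
    elements: list = field(default_factory=list)

    def combine(self):
        # "Combining" here means ordering the independent, layered
        # elements bottom-up so they render as one character graphic.
        self.elements.sort(key=lambda e: e.layer)
        return self

    def anchor_dialogue(self, text, dx=40, dy=-60):
        # Place the dialogue text in a screen area associated with the
        # character's screen position (the offset is an assumed convention).
        if not self.elements:
            return {"text": text, "x": dx, "y": dy}
        base = self.elements[0]
        return {"text": text, "x": base.x + dx, "y": base.y + dy}

xiaoming = CharacterGraphic("Xiaoming", [
    GraphicElement("body", layer=0, x=120, y=200),
    GraphicElement("happy-expression", layer=1, x=120, y=160),
]).combine()
bubble = xiaoming.anchor_dialogue("What a nice day!")
```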
It can be understood that, in this embodiment, graphic labels may be located based on semantic understanding technology. In brief, the text content is understood, the candidate graphic labels are ranked by matching degree from high to low, and the label with the highest matching degree is selected automatically. The graphic content generation model may generate the corresponding target graphic elements based on a preset graphic element library, and each graphic element in the library may carry many graphic labels, for example: gender, age group, education, nationality, living environment, social environment, mood, illness, hobbies, family background, parents' personality, and so on. Such basic attribute labels, biological attribute labels, and extended social-factor labels may all affect the selection of the target graphic element. For example, in material themed on the War of Resistance, a positive character may be thin but spirited and capable, while a negative traitor character may be lazy, obese, weak, exquisitely dressed, and slightly stooped in posture; all such character factors influence the selection of target graphic elements. In addition, after massive training, the platform provided by this embodiment can draw and generate scenes, climates, ecological environments, characters, body postures, and even dialogue and story content in an original, creative manner.
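A minimal sketch of the ranking just described, assuming each candidate graphic label can be scored against the understood text; the scoring function here is a toy stand-in, not the platform's matching degree algorithm (described below):

```python
def rank_graphic_labels(understood_text, candidate_labels, score_fn):
    """Score every candidate label against the understood text, sort the
    matching degrees from high to low, and return the best label plus
    the full ranking (a hypothetical helper, not the patented model)."""
    scored = [(label, score_fn(understood_text, label))
              for label in candidate_labels]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[0][0], scored

def keyword_overlap(text, label):
    # Toy stand-in score: shared keywords between text and label parts.
    return len(set(text.split()) & set(label.split("-")))

best, ranking = rank_graphic_labels(
    "thin spirited positive character",
    ["thin-spirited-positive", "obese-lazy-traitor"],
    keyword_overlap,
)
# best == "thin-spirited-positive"
```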
The graphic label in this embodiment may further include a scene label of the scene where the character is located. Determining the graphic label of the target graphic element according to the story line information then comprises: analyzing the story line information through semantic analysis technology and directly determining the corresponding scene label. The scene label here refers to a label corresponding to any background graphic element in the background scene where the character is located, including but not limited to a weather label, an environment label, and the like.
The preset graphic content generation model in this embodiment is a model trained in advance on a large number of graphic label samples, where each graphic label sample comprises a graphic material (i.e., graphic content) and the corresponding graphic labels; one graphic material may correspond to multiple graphic labels. During model training, graphic elements of different categories are trained separately. To obtain a more accurate model and greater diversity in the output, the graphic material is preferably classified down to the smallest category for training: for example, a character's upper-body clothing and lower-body clothing may be trained as separate categories rather than training the character's whole outfit, so that the target graphic element output by the model is also the smallest graphic element. The graphic label samples in this embodiment include, but are not limited to, at least one of character expression label samples, character graphic size label samples, character action label samples, character prop label samples, and character picture position label samples. For example, during training, the user may upload, for characters in different situations, the character name, gender, orientation, upper-limb action, lower-limb action, expression, upper-body clothing, lower-body clothing, shoes, jewelry, props, character image, and associated text, and perform training on these.
For ease of understanding, the graphical content generation model is specifically described herein and may include the following modules:
the graphic material library module: the basic composition of the graphic material library module is graphic content (including vector diagram and pixel diagram), character introduction, character label, associated graph (parent level node), classification, color system, style, machine learning state (times and time), warehousing time and uploading person information (including automatic generation).
Drawing semantic understanding library module: text information recorded in a multi-element network structure, where each record comprises the text, associated texts, a classification, attributes, and a timestamp. It expresses the descriptions of, and association relationships between, words and graphic labels. For example: winter is associated with ice and snow, cold, shivering, chattering with cold, a runny nose, and hands or cheeks frozen red.
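A minimal sketch of one record in such a semantic understanding library, assuming the record fields listed above; the field names are editorial assumptions:

```python
import time
from dataclasses import dataclass, field

@dataclass
class SemanticRecord:
    """Hypothetical record shape for the semantic understanding library."""
    text: str                  # the recorded word, e.g. "winter"
    associated_texts: list     # descriptions and associated graphic labels
    classification: str
    attributes: dict = field(default_factory=dict)
    timestamp: float = field(default_factory=time.time)

# The "winter" example from the description, encoded as one record.
winter = SemanticRecord(
    text="winter",
    associated_texts=["ice and snow", "cold", "shivering",
                      "chattering with cold", "runny nose",
                      "hands or cheeks frozen red"],
    classification="season",
)
```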
Brush module: comprising brush strokes, classification, labels, uploader, and timestamp; used for selection when the platform draws automatically.
Matching degree calculation module: highly usable graphic materials are stored in the graphic material library, and the matching degree calculation module selects materials with a high matching degree for direct use, based on semantic understanding of the uploaded text or speech. Graphic materials are scored through a multivariate recurrent neural algorithm; this score is the matching degree. Screening proceeds from coarse to fine, and the screened content is sorted by matching degree from high to low: classification, color scheme, and style are screened first, then screening proceeds by character labels and character introduction, and the matching degree of parent nodes is traversed along the association relationships of the associated graphics. Finally, the graphic material with the highest matching degree is selected and returned. If the matching degree is below a set matching threshold (a platform setting), i.e., no suitable content is matched, the platform performs original creation according to the learned content, automatically draws the related content, and returns the resulting graphic material. Automatic drawing establishes a discriminative model and a generative model based on a deep learning model (a generative adversarial network). The discriminative model learns to judge, classify, and summarize graphic content, and the data in the discriminative model is screened by the matching degree algorithm. The generative model generates the content to be drawn. The learning and drawing process is a continual back-and-forth between the discriminative model and the generative model: the generative model continually generates drawing content that deviates from the samples according to the multidimensional content of the graphic material library; the discriminative model compares the generated content with the selected material library and returns the sample difference to the generative model; and the two models repeat this cycle to continually improve the platform's drawing level. The resulting drawing capability is stored in the material library and marked. The training process and results are classified and stored on the storage device for subsequent use. A well-trained generative model is a high-quality automatic generator. Materials of different types and styles need to be trained and learned in separate stages, i.e., the adversarial training of the generative model and the discriminative model is repeated continually.
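For ease of understanding, the following toy sketch illustrates only the control flow described above: select the highest-matching material, or fall back to original generation when nothing clears the threshold, with a generator improved by the difference the discriminator returns. It is a numeric stand-in, not the patented generative adversarial model; MATCH_THRESHOLD and all names are assumptions.

```python
import random

MATCH_THRESHOLD = 0.75  # assumed platform setting

def select_or_generate(query, library, score_fn, draw_fn):
    """Return the stored material with the highest matching degree, or
    draw original content when nothing clears the threshold; the newly
    drawn capability is stored back into the library, as described."""
    if library:
        best = max(library, key=lambda m: score_fn(query, m))
        if score_fn(query, best) >= MATCH_THRESHOLD:
            return best
    drawn = draw_fn(query)
    library.append(drawn)
    return drawn

def train_toy_generator(samples, rounds=200, lr=0.1):
    """Numbers stand in for drawing content: the generator proposes a
    value, the 'discriminator' returns its difference from the real
    samples, and the feedback loop repeats until they agree."""
    target = sum(samples) / len(samples)        # what the discriminator knows
    guess = random.uniform(0.0, 1.0)            # generator's starting output
    for _ in range(rounds):
        fake = guess + random.gauss(0.0, 0.01)  # generative model output
        diff = target - fake                    # returned sample difference
        guess += lr * diff                      # generator improves
    return guess
```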
The method provided by this embodiment draws automatically based on the understanding of text or speech; it does not generate content by redrawing simple sketches drawn by the painter.
The automatic generation platform for pictorial works may store in advance a character label selection rule for selecting character labels according to character emotion and the semantic analysis result, and may define the priority of the semantic analysis result as higher than that of the character emotion. For example, the selection rule may stipulate: if the semantic analysis result contains no information about character expression labels or character action labels, the character expression label and character action label are determined directly from the emotion analysis result (for example, a happy emotion corresponds to a 'happy' expression label and a 'happy' action); if the semantic analysis result does contain character expression and character action label information, the character's expression and action are determined according to the semantic analysis result.
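A minimal sketch of this selection rule, assuming the semantic analysis result arrives as a dictionary that may or may not contain expression and action entries (the data shapes are editorial assumptions):

```python
def select_character_labels(semantic_result, emotion):
    """Selection rule from this embodiment: the semantic analysis result
    has priority over the character's emotion; the emotion fills in any
    label the semantic result does not specify."""
    return {
        "expression": semantic_result.get("expression", emotion),
        "action": semantic_result.get("action", emotion),
    }

# Emotion analysis alone: a happy emotion yields 'happy' expression/action.
assert select_character_labels({}, "happy") == {
    "expression": "happy", "action": "happy"}
# The semantic result specifies an action, so it takes priority.
assert select_character_labels({"action": "running"}, "happy") == {
    "expression": "happy", "action": "running"}
```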
For ease of understanding, a specific example is given here. When the story line is 'The weather is really nice today; Xiaoming went to the suburbs to play, very happily!', the platform determines through semantic analysis and emotion analysis that the character is Xiaoming and that the character's emotion and semantic analysis result is 'happy', so the character's expression label and action label are directly determined to be 'happy'. By analyzing the keywords 'the weather is really nice' and 'suburbs' through semantic analysis, the scene labels of the scene in which the character is located are determined. The determined character labels and scene labels are then input into the preset graphic content generation model to obtain the corresponding target graphic elements.
S104: combining the target graphic elements to generate the graphic content corresponding to each picture frame unit, and filling the graphic content into the corresponding picture frame unit.
In this embodiment, there are two ways of determining the story line information corresponding to each picture frame unit according to the drawing content information:
Way one: determining the story line information corresponding to each picture frame unit according to the drawing content information and a preset picture frame unit layout. Specifically, the story lines corresponding to the picture frame units may be determined in forward chronological order. The picture frame units may be predefined to be arranged uniformly or non-uniformly from left to right and top to bottom on the corresponding page, and the platform may combine the corresponding target graphic elements according to the story line and then place them into the corresponding picture frame units, as shown in FIG. 2.
Way two: receiving a picture frame unit layout instruction, and determining the story line information corresponding to each picture frame unit according to the drawing content information and the picture frame unit layout instruction (both ways are sketched below).
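For ease of understanding, a minimal sketch of both ways, assuming picture frame units are indexed in reading order; the identifiers and the shape of the layout instruction are editorial assumptions:

```python
def assign_storylines(storylines, preset_layout=None, layout_instruction=None):
    """Map story line segments to picture frame units.

    Way one: a preset layout, filled in forward chronological order
    (reading order: left to right, top to bottom).
    Way two: an explicit picture frame unit layout instruction, here
    assumed to be a list of frame indices."""
    if layout_instruction is not None:              # way two
        order = layout_instruction
    else:                                           # way one
        count = len(preset_layout) if preset_layout else len(storylines)
        order = list(range(count))
    return dict(zip(order, storylines))

frames = assign_storylines(["intro", "conflict", "ending"])
# {0: 'intro', 1: 'conflict', 2: 'ending'}
```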
Before the target graphic elements are combined to generate the graphic content of the corresponding picture frame unit, the generated target graphic elements can be displayed on the platform for the painter to confirm or modify, and the user can add a picture frame page layout structure by dragging a shape tool. Common picture frame unit layouts can be provided, such as a regular layout, a character-prominent layout, and a polygon layout; specifically, as shown in FIGS. 3, 4 and 5, the target graphic elements that can be dragged into a frame are on the left, and the picture frame units, whose size and position can be changed by dragging, are on the right. In some embodiments, before combining the target graphic elements to generate the corresponding pictorial work, the method may further comprise the following steps:
receiving a modification instruction for the target graphic element;
modifying the corresponding target graphic element according to the modification instruction to generate a new target graphic element;
at this time, the step of combining the target graphic elements to generate the graphic content in the corresponding picture frame unit includes:
and generating the graphic content corresponding to the picture frame unit according to the new target graphic element.
The graphic content generation model in this embodiment may match the most suitable target graphic element according to the matching degree and then place it at the corresponding position.
The user's drawing instruction may also include basic material information for the drawing. For example, character material can be uploaded, and the user can annotate the corresponding character material, such as designating and labeling the positions of body parts (face, limbs, etc.) in the uploaded material, so that, after determining the character's expression from the story line information, the platform can place the corresponding expression graphic element at the corresponding position on the canvas; for example, a selected expression can be placed at the character's designated face position.
Example two:
in order to better understand the scheme provided by the present invention, the present embodiment provides a specific method for automatically generating a pictorial work, which is applied to an automatic generation platform of a pictorial work, and the present embodiment specifically explains a story-type cartoon as an example.
The platform comprises a drawing instruction receiving module, a graphic label generating module, a target graphic element generating module and a drawing work generating module.
The drawing instruction receiving module is used to receive a drawing instruction; the user can input the drawing instruction to the platform through a text input tool, a voice input tool, or a graphic material uploading tool. The upload format may be a standard table format, and the drawing instruction may include character label information such as character name, gender, age group, region, hobbies, personality, temper, dress, and appearance characteristics, as shown in FIG. 6. In some embodiments, if the user needs a customized character image, the user may upload character drawings such as a front view, a side view, and a back view, and the platform can adjust the corresponding graphic elements according to the received character drawings and the character label information. The drawing instruction may further include information on the scene label of the scene in which the character is located, as shown in FIG. 7. Through the uploaded materials, the selected dialogue style, and the configured content, the platform can automatically select the electronic brush stroke, thickness, color, light and shadow, color scheme, and the like to generate the drawing content, and automatically draws pictorial works of the corresponding style, including detailed drawing capabilities such as scene color-matching style, work style, line thickness, line variation, line start-and-end style, dashed-line ratio, fill-color gradients, and fill-color techniques.
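A minimal sketch of one row of such a standard-table upload, with field names inferred from the description; they are editorial assumptions, not the actual format of FIG. 6:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CharacterLabelRow:
    """Hypothetical row of the standard-table upload format."""
    name: str
    gender: str
    age_group: str
    region: str = ""
    hobbies: str = ""
    personality: str = ""
    temper: str = ""
    dress: str = ""
    appearance: str = ""
    # Optional customized character image as front/side/back view files.
    front_view: Optional[str] = None
    side_view: Optional[str] = None
    back_view: Optional[str] = None

row = CharacterLabelRow(name="Xiaoming", gender="male",
                        age_group="child", hobbies="games")
```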
After the drawing content information has been uploaded and imported, the platform can determine the story line information corresponding to each picture frame unit based on the story title, story outline, and characters using semantic analysis technology, and then determine the graphic labels of the target graphic elements to be presented in each picture frame unit according to that story line information.
The target graphic element generation module is used to input the graphic labels into the preset graphic content generation model to generate the corresponding target graphic elements. Each generated target graphic element is independent and layered, and the user can click to modify a target graphic element or upload new graphic material to replace the currently generated material.
The target graphic element generation module in this embodiment may also automatically match elements in the drawing material library against the user's own drawing, and offer the element content with the highest matching degree for the user to select.
The pictorial work generation module is used to combine all the target graphic elements to generate the graphic content corresponding to each picture frame unit and to fill it into the corresponding picture frame units, obtaining the pictorial work.
The automatic drawing in this embodiment involves automatic selection and drawing across eight dimensions, including picture frame selection, scene drawing, and character drawing selection, in addition to automatic brush-stroke selection. The automatic drawing function is not limited to fully automatic drawing; it can also be used selectively, or used to imitate drawing from existing materials.
Eight dimensions:
(1) Scene: including location (indoor, outdoor, etc.), set content, season, climate, and the weather conditions fitting the scene;
(2) Character: including attributes such as gender, age group, education, nationality, living environment, social environment, emotion, illness, hobbies, family background, and parents' personality;
(3) Dialogue: extracting the uploaded text content, automatically selecting speech bubbles, and placing them at a position suitable for the main speaking character;
(4) Action: providing a character action material library, which the platform can select from through text recognition and semantic understanding; the platform can also automatically draw content, or draw the connection between a body part and the body (one of the adversarial model's functions) based on the brush-stroke selection for each part, from the uploaded graphic materials and the corresponding character labels;
(5) Expression: expression is the key content for expressing and shaping a character and for judging whether the character is vivid; expression content comprises graphics, names, types, text labels, associated graphics, and the like;
(6) Props: prop graphic materials at different angles, in different colors, and so on;
(7) Light and shadow: the position and brightness of a light source can be set in a scene; its direction can be set to a position in the scene and to a layer in front of or behind other content (in front of or behind a character, in front of the scene, etc.), and the position and darkness of shadows are determined by the light source;
(8) Purpose: the story content of each scene is analyzed to serve the ultimate goal or purpose (a combined sketch of these eight dimensions follows the list).
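For ease of understanding, the eight dimensions can be pictured as one configuration record per picture frame unit; the following sketch uses illustrative values loosely anticipating Example three below, and every key and value is an editorial assumption:

```python
# One picture frame unit described along the eight dimensions above.
frame_spec = {
    "scene":      {"location": "indoor (home)", "season": "autumn",
                   "weather": "clear"},
    "character":  {"name": "Mom", "emotion": "angry",
                   "age_group": "adult"},
    "dialogue":   {"text": "Finish your homework first!",
                   "bubble": "auto-placed near the speaker"},
    "action":     {"label": "arms on hips",
                   "source": "character action material library"},
    "expression": {"label": "serious", "associated_graphics": []},
    "props":      [{"name": "game console", "angle": "front"}],
    "light_shadow": {"source_xy": (0.2, 0.1), "brightness": 0.8,
                     "layer": "behind the characters"},
    "purpose":    "show the conflict over unfinished homework",
}
```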
The automatic generation method for pictorial works provided by this embodiment can, through automatic and intelligent means, effectively reduce the time and social cost of drawing work, reduce the capital investment required, simplify the complexity of drawing projects, improve painters' working efficiency, and enhance painters' focus and creativity. It can assist the drawing tool in automatically applying reusable drawing elements (such as expressions, actions, and props), and can intelligently and automatically generate structured, layered, element-based pictorial works, so that any unit can be modified at any time. The automatic application has training and learning functions: an automatic program can be taught to recognize graphics by manually configuring and importing graphic samples together with their types, names, labels, and other content, and the automatic program can also learn autonomously from uploaded basic materials. The amount of training determines the drawing level; the more training, the higher the quality. The automatic drawing module extracts the artistic styles of many materials through a deep neural network and, using the computer's GPU, automatically generates drawing results in the corresponding artistic style according to conditions such as the story text content.
Example three:
in order to better understand the solution provided by the present embodiment, the present embodiment is described with a more specific example, for example, the drawing instruction input by the painter is: the mother plays the game with a serious face and without completing the operation.
Through semantic analysis, the platform determines that there are two characters, 'Mom' and 'Xiaoming', and that the scene is 'home'. Based on each character's emotion, the relevant graphic labels such as the character's expression, size, picture position, action, and required props are determined automatically and input into the preset graphic content generation model to generate the corresponding target graphic elements. In this example, based on the dialogue content, Mom is in a moment of emotional agitation and needs to be highlighted, so the Mom character is picked and placed at the main position in the picture frame unit, a serious expression is assigned to the designated position, and a picture frame highlighting a specific character is automatically selected according to the story line (such a frame has the highest proportion on the current page and may occupy the full canvas). The second character is a supporting character on the page, and is automatically assigned the secondary position of the picture frame unit, with the action and expression of a scolded character selected.
Example four:
this embodiment provides an automatic generation platform of pictorial work, includes: a processor, a memory, and a communication bus; the communication bus is used for realizing connection communication between the processor and the memory; the processor is adapted to execute one or more programs stored in the memory to implement the steps of the method of automatically generating a pictorial representation as described in any of the above embodiments.
The present embodiment also provides a storage medium storing one or more programs executable by one or more processors to implement the steps of the method of automatically generating a pictorial representation, as described in any one of the above.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general-purpose hardware platform, and certainly also by hardware, though in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, or optical disk), including instructions for enabling a terminal (such as a mobile phone, computer, server, air conditioner, or network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.

Claims (10)

1. A method of automatically generating a pictorial representation, comprising:
receiving a drawing instruction, wherein the drawing instruction comprises drawing content information;
determining a graphic label of a target graphic element in each picture frame unit according to the drawing content information;
inputting the graphic label into a preset graphic content generation model to generate a corresponding target graphic element;
and combining the target graphic elements to generate the graphic content corresponding to the picture frame unit, and filling the graphic content into the corresponding picture frame unit.
2. A method of automatically generating a pictorial representation according to claim 1, wherein the drawing instruction is a voice drawing instruction or a text drawing instruction.
3. The method of automatically generating a pictorial representation of claim 1, wherein said determining from the pictorial content information the graphic label of the target graphic element in each picture frame unit comprises:
determining story line information corresponding to each picture frame unit according to the drawing content information;
and determining the graphic label of the target graphic element in each picture frame unit according to the story line information.
4. A method of automatically generating a pictorial representation according to claim 3, wherein said determining story line information corresponding to each picture frame unit in accordance with the pictorial content information comprises:
determining story line information corresponding to each picture frame unit according to the drawing content information and a preset picture frame unit layout;
or, alternatively,
receiving a picture frame unit layout instruction, and determining the story line information corresponding to each picture frame unit according to the drawing content information and the picture frame unit layout instruction.
5. A method of automatically generating a pictorial representation according to claim 3, wherein the graphic labels include character labels, and wherein said determining the graphic label of the target graphic element in each picture frame unit from the story line information comprises:
analyzing the story line information through semantic analysis and emotion analysis techniques to determine the characters in the corresponding story line and the emotion of each character;
determining a character label for the corresponding character according to the emotion and the semantic analysis result;
and inputting the character label into the graphic content generation model to generate the corresponding target graphic element.
6. A method of automatically generating a pictorial representation according to claim 5, wherein said character labels include at least one of a character expression label, a character graphic size label, a character action label, and a character prop label.
7. The method of automatically generating a pictorial representation of claim 5, wherein the graphic labels further include a scene label of the scene in which the character is located, and wherein determining the graphic label of the target graphic element in each picture frame unit from the story line information includes:
and analyzing the story line information through a semantic analysis technology, and directly determining a corresponding scene label.
8. The method of automatically generating a pictorial representation of any one of claims 1 to 7, further comprising, prior to said combining each of said target graphic elements to generate the graphic content in a corresponding picture frame unit:
receiving a modification instruction for the target graphic element;
modifying the corresponding target graphic element according to the modification instruction to generate a new target graphic element;
the step of combining the target graphic elements to generate the graphic content in the corresponding picture frame unit comprises:
and generating the graphic content corresponding to the picture frame unit according to the new target graphic element.
9. An automated platform for generating pictorial representations, comprising: a processor, a memory, and a communication bus;
the communication bus is used for realizing connection communication between the processor and the memory;
the processor is adapted to execute one or more programs stored in the memory to implement the steps of the method of automatically generating a pictorial representation according to any one of claims 1 to 8.
10. A storage medium, characterized in that it stores one or more programs executable by one or more processors to implement the steps of a method of automatic generation of a pictorial representation according to any one of claims 1 to 8.
CN202111359861.5A 2021-11-17 2021-11-17 Automatic generation method and platform of paintings and storage medium Withdrawn CN114022593A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111359861.5A CN114022593A (en) 2021-11-17 2021-11-17 Automatic generation method and platform of paintings and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111359861.5A CN114022593A (en) 2021-11-17 2021-11-17 Automatic generation method and platform of paintings and storage medium

Publications (1)

Publication Number Publication Date
CN114022593A (en) 2022-02-08

Family

ID=80064712

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111359861.5A Withdrawn CN114022593A (en) 2021-11-17 2021-11-17 Automatic generation method and platform of paintings and storage medium

Country Status (1)

Country Link
CN (1) CN114022593A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114722640A (en) * 2022-06-08 2022-07-08 广东时谛智能科技有限公司 Method and device for individually customizing shoe body model
CN114722640B (en) * 2022-06-08 2022-11-01 广东时谛智能科技有限公司 Method and device for individually customizing shoe body model
TWI808041B (en) * 2022-11-21 2023-07-01 凌網科技股份有限公司 Image display format conversion method

Similar Documents

Publication Publication Date Title
JP6763925B2 (en) Machine learning system for building rendering and building information modeling data
CN108121957B (en) Method and device for pushing beauty material
CN114022593A (en) Automatic generation method and platform of paintings and storage medium
CN108182232B (en) Personage's methods of exhibiting, electronic equipment and computer storage media based on e-book
JP4869340B2 (en) Character costume determination device, character costume determination method, and character costume determination program
CN109859095B (en) Automatic cartoon generation system and method
KR20200064591A (en) Webtoons color customizing programs and applications of deep learning
CN116824020A (en) Image generation method and device, apparatus, medium, and program
CN116595438A (en) Picture creation method, device, equipment and storage medium
CN117475031A (en) Image generation method, device and storage medium
CN112182194A (en) Method, system and readable storage medium for expressing emotional actions of television avatar
CN113838158B (en) Image and video reconstruction method and device, terminal equipment and storage medium
CN117058275B (en) Commodity propaganda drawing generation method and device, computer equipment and storage medium
CN118052907A (en) Text map generation method and related device
TWI698835B (en) Image processing method and device and computer-readable storage medium
CN113963306B (en) Courseware title making method and device based on artificial intelligence
CN113869960B (en) Poster generation method and device, storage medium and computer equipment
CN112613270A (en) Method, system, equipment and storage medium for recommending target text based on pre-trained drawing creation model
CN111582281A (en) Picture display optimization method and device, electronic equipment and storage medium
CN111324466A (en) Information processing method, device, system and storage medium
CN110223217A (en) A kind of planar design automatic addition decorative element information processing system and method
Wang et al. Paint ChatGPT: A Platform for Old Shanghai Calendar Card Generation
Jia Research on Digital Transformation Mode Analysis and Brand Influence Enhancement Strategy of Shanxi Intangible Cultural Heritage Guangling Paper Cutting
Ngan The Future History of Contemporary Chinese Art
Yu et al. Analysis of visual style of traditional Chinese painting by content analysis method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication
Application publication date: 20220208