CN111061902B - Drawing method and device based on text semantic analysis and terminal equipment

Info

Publication number
CN111061902B
Authority
CN
China
Prior art keywords
image
static
category
gallery
main body
Prior art date
Legal status
Active
Application number
CN201911288937.2A
Other languages
Chinese (zh)
Other versions
CN111061902A (en)
Inventor
邓立邦
Current Assignee
Guangdong Zhimeiyuntu Tech Corp ltd
Original Assignee
Guangdong Zhimeiyuntu Tech Corp ltd
Priority date
Filing date
Publication date
Application filed by Guangdong Zhimeiyuntu Tech Corp ltd filed Critical Guangdong Zhimeiyuntu Tech Corp ltd
Priority to CN201911288937.2A priority Critical patent/CN111061902B/en
Publication of CN111061902A publication Critical patent/CN111061902A/en
Application granted granted Critical
Publication of CN111061902B publication Critical patent/CN111061902B/en
Legal status: Active (current)


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 Information retrieval of still image data
    • G06F 16/58 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/5866 Retrieval using information manually generated, e.g. tags, keywords, comments, manually generated location and time information
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/30 Information retrieval of unstructured textual data
    • G06F 16/36 Creation of semantic tools, e.g. ontology or thesauri
    • G06F 16/367 Ontology
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/001 Texturing; Colouring; Generation of texture or colour

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Animal Behavior & Ethology (AREA)
  • Computational Linguistics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Library & Information Science (AREA)
  • Machine Translation (AREA)
  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a drawing method, apparatus and terminal device based on text semantic analysis, wherein the method comprises: acquiring a text to be identified, and extracting from it the keywords corresponding to the categories of a preset knowledge graph, the categories comprising a place category, a main body object category and a state category; extracting a painting main body image from a preset object gallery according to the keywords belonging to the main body object category and the state category; extracting the background image of the painting work from a preset background gallery according to the keywords belonging to the place category; and combining the painting main body image and the background image to obtain the painting work. By implementing the embodiments of the invention, paintings can be created from the semantic description of a text.

Description

Drawing method and device based on text semantic analysis and terminal equipment
Technical Field
The present invention relates to the field of automatic drawing technologies, and in particular, to a drawing method, apparatus, and terminal device based on text semantic analysis.
Background
Artistic painting has traditionally been the product of a burst of creative inspiration, reached after the artist, taking social life as the source material, passes through the three stages of life accumulation, creative conception and artistic expression. Automatic drawing technology has advanced considerably with the times, yet among existing automatic drawing techniques there is no scheme for drawing according to the semantics of a text.
Disclosure of Invention
The embodiments of the invention provide a drawing method, apparatus and terminal device based on text semantic analysis, which can create paintings from the semantic description of a text.
An embodiment of the present invention provides a drawing method based on text semantic analysis, including:
acquiring a text to be identified, and extracting keywords corresponding to the category of a preset knowledge graph from the text to be identified; the categories of the preset knowledge graph comprise a place category, a main object category and a state category;
extracting a painting main body image from a preset object gallery according to keywords belonging to main body object categories and state categories;
extracting background images of the painting from a preset background gallery according to keywords belonging to the place category;
and combining the painting main body image and the background image to obtain a painting work.
Further, the categories of the preset knowledge graph further comprise a color category and a time category.
Further, the method further comprises the following steps: rendering the background image according to the keywords belonging to the time category; and rendering the painting main body image according to the keywords belonging to the color categories.
Further, the preset object gallery comprises a static object gallery and a dynamic object gallery;
the construction method of the static object gallery specifically comprises the following steps: collecting static images of various static objects under different forms, rays and angles, and taking the static images as a static object sample set;
extracting the edges of each static image to obtain a tracing draft image corresponding to each static image;
taking the static object sample set as input, taking a tracing draft image corresponding to each static image in the static object sample set as output, performing learning training through a condition generation countermeasure network, and establishing the static object gallery;
the construction method of the dynamic object gallery specifically comprises the following steps: collecting dynamic object images of various dynamic objects under different sexes, ages and postures as a dynamic object sample set;
marking the joint parts and joint connection points of each limb of the dynamic object in each dynamic object image by using line segments and dots to obtain a match image corresponding to each dynamic object image;
and taking the dynamic object sample set as input, taking a match image corresponding to each dynamic object image in the dynamic object sample set as output, carrying out learning training through a condition generation countermeasure network, and establishing the dynamic object gallery.
Further, after combining the painting main body image and the rendered background image, the method further comprises: performing picture stylization on the painting work according to a preset artistic style; wherein the artistic style includes an oil-painting style and a sketch style.
On the basis of the above method embodiments, the invention correspondingly provides apparatus embodiments.
the embodiment of the invention provides a drawing device based on text semantic analysis, which comprises a keyword extraction module, a drawing main body acquisition module, a background image acquisition module and a drawing work generation module;
the keyword extraction module is used for acquiring a text to be identified and extracting keywords corresponding to the category of a preset knowledge graph from the text to be identified; the categories of the preset knowledge graph comprise a place category, a main object category and a state category;
the drawing main body acquisition module is used for extracting drawing main body images from a preset object gallery according to keywords belonging to main body object categories and state categories;
the background image acquisition module is used for extracting background images of painting works from a preset background gallery according to keywords belonging to the place category;
the painting generation module is used for combining the painting main body image and the background image to obtain a painting.
Further, the device also comprises a painting main body image rendering module and a background image rendering module;
the painting main body image rendering module is used for rendering the painting main body image according to the keywords belonging to the color category;
The background image rendering module is used for rendering the background image according to the keywords belonging to the time category.
Further, the device also comprises an artistic style processing module, which is used for performing picture stylization on the painting work according to a preset artistic style; wherein the artistic style includes an oil-painting style and a sketch style.
On the basis of the method embodiments, the invention also provides a corresponding terminal device embodiment.
another embodiment of the present invention provides a drawing terminal device based on text semantic analysis, which includes a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, where the processor executes the computer program to implement the drawing method based on text semantic analysis according to any one of the foregoing method embodiments of the present invention.
The embodiment of the invention has the following beneficial effects:
the embodiment of the invention provides a drawing method, a drawing device and terminal equipment based on text semantic analysis, wherein the method comprises the steps of firstly obtaining a text to be identified, and then extracting keywords corresponding to categories of preset knowledge maps in the text to be identified; then extracting the painting main body image of the painting to be generated from a preset object gallery according to the keywords belonging to the main body object category and the state category, then extracting the background image of the painting to be generated from a preset background gallery according to the keywords of the place category, and finally combining the painting main body image and the background image to automatically generate a painting according to the text (namely a section of literal descriptive content).
Drawings
Fig. 1 is a flow chart of a drawing method based on text semantic analysis according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of a drawing device based on text semantic analysis according to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. The described embodiments are obviously only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art on the basis of these embodiments without inventive effort fall within the scope of the invention.
As shown in fig. 1, an embodiment of the present invention provides a drawing method based on text semantic analysis, including:
step S101: acquiring a text to be identified, and extracting keywords corresponding to the category of a preset knowledge graph from the text to be identified; the categories of the preset knowledge graph comprise a place category, a main object category and a state category.
Step S102: and extracting the painting main body image from a preset object gallery according to the keywords belonging to the main body object category and the state category.
Step S103: and extracting the background image of the painting from a preset background gallery according to the keywords belonging to the place category.
Step S104: and combining the painting main body image and the background image to obtain a painting work.
For step S101, a preset knowledge graph is first described.
A large number of descriptive text paragraphs are first obtained from the network; the sentences are segmented and analyzed, the text content associated with attributes such as place, main body object and state is extracted, and a knowledge-graph association is established between each extracted piece of text content and its corresponding attribute classification.
In a preferred embodiment of the present invention, the categories of the preset knowledge graph may be divided into a place category, a main body object category and a state category. These three top-level categories may in turn contain several levels of subcategories, each level containing at least one subcategory. For example, suppose that in this embodiment the first level consists of the place category, the main body object category and the state category, each subdivided into a second level, as follows. The place category may include: seaside, desert, forest, street, and so on. The main body object category is divided into two types: dynamic objects, which include human beings and the various animal species (cats, dogs, birds, cattle, sheep, etc.), and static objects, which include the various still-life species (e.g., flowers, grass, trees, houses, pots, bowls, ladles, basins, etc.). The state category likewise covers two cases: for a dynamic object main body, the corresponding state categories include a sex category (male and female, for humans and animals alike), an age category (covering the different age groups) and a posture category (the actions of a dynamic object, such as standing, sitting or jumping); for a static object main body, the corresponding state categories include a shape category (the form, such as round or square), a lighting category (no light, bright light, strong light whose illumination intensity exceeds a preset value, dim light whose illumination intensity is below a preset value, etc.) and an angle category (front view, side view, etc.). Each second-level category may then correspond to different textual descriptions: the descriptions corresponding to "seaside" may be "at the seaside" or "by the Bohai Sea", and the descriptions corresponding to "cat" may be "Ragdoll cat", "silver-ear cat", "big tabby cat", "kitten" and the like; the textual descriptions corresponding to the other categories are not repeated here.
Accordingly, a large amount of text content is obtained from the network; after the sentences are segmented, each segmented word is classified as above and associated with its corresponding attribute classification, yielding the preset knowledge graph. At this point the preset knowledge graph contains a large number of textual descriptions together with the classification of each. After the text to be identified is obtained, it is likewise segmented; for each segmented word, the identical textual description is looked up in the preset knowledge graph, which gives the keywords and the knowledge-graph categories to which they belong, as illustrated by the sketch below.
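The following is a minimal, illustrative sketch of this lookup step, not the patent's implementation: jieba is one common Chinese word-segmentation library, and the miniature knowledge graph, its category names and the sample sentence are all hypothetical.

    # Minimal sketch: segment a text and match the words against a preset
    # knowledge-graph category map. Vocabulary and categories are hypothetical.
    import jieba  # common Chinese word-segmentation library

    # A miniature "knowledge graph": textual description -> (level-1, level-2 category)
    KNOWLEDGE_GRAPH = {
        "海边": ("place", "seaside"),
        "月亮": ("time", "night"),
        "小猫": ("main body object", "cat"),
        "绿色": ("color", "green"),
        "站着": ("state/posture", "standing"),
    }

    def extract_keywords(text):
        """Return (keyword, level-1 category, level-2 category) triples."""
        return [(w, *KNOWLEDGE_GRAPH[w])
                for w in jieba.lcut(text) if w in KNOWLEDGE_GRAPH]

    # e.g. extract_keywords("海边站着一只绿色的小猫") might yield entries for
    # "海边" (place/seaside), "绿色" (color/green) and "小猫" (main body object/cat).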
In a preferred embodiment, a color category and a time category are also included. Likewise, these may be subdivided into subcategories: the color category into red, yellow, blue, green, etc., and the time category into night and day. Each subcategory again corresponds to various textual descriptions; for example, the descriptions corresponding to the night category include "moon" and "night", those corresponding to the day category include "sun" and "daytime", and the descriptions corresponding to the subcategories of the color category are not listed one by one here.
It should be noted that the above classification examples are merely illustrative and may be adjusted according to the actual situation.
For step S102, the preset object gallery is first described. In a preferred embodiment,
the preset object gallery comprises a static object gallery and a dynamic object gallery;
the construction method of the static object gallery specifically comprises the following steps: collecting static images of various static objects under different forms, rays and angles, and taking the static images as a static object sample set;
extracting the edges of each static image to obtain a tracing draft image corresponding to each static image;
taking the static object sample set as input, taking a tracing draft image corresponding to each static image in the static object sample set as output, performing learning training through a condition generation countermeasure network, and establishing the static object gallery;
the construction method of the dynamic object gallery specifically comprises the following steps: collecting dynamic object images of various dynamic objects under different sexes, ages and postures as a dynamic object sample set;
marking the joint parts and joint connection points of each limb of the dynamic object in each dynamic object image by using line segments and dots to obtain a match image corresponding to each dynamic object image;
and taking the dynamic object sample set as input, taking a match image corresponding to each dynamic object image in the dynamic object sample set as output, carrying out learning training through a condition generation countermeasure network, and establishing the dynamic object gallery.
The definition of the static object and the dynamic object is the same as in step S101.
For the construction of the static object gallery, static images of various static objects under different forms, lighting conditions and angles are first collected as the static object sample set. It should be noted that the types of static object represented by "various static objects" here are consistent with the static object types in the preset knowledge graph of step S101 (i.e., flowers, grass, trees, houses, pots, bowls, ladles, basins, etc.), and the forms, lighting conditions and angles mentioned here are consistent with the specific classifications mentioned in step S101. For common scenery and the various indoor and outdoor static objects, a large number of pictures of the articles in different forms, lighting conditions and angles are collected from the major social network platforms as the learning sample set. The collected images are preprocessed to grayscale and then subjected to edge extraction to obtain the line-draft image of each article image (static image), as sketched below. Each article image (static image) and its extracted line-draft image are overlaid at a 1:1 scale so that they correspond pixel for pixel, and the surplus image regions are removed. Taking the article images (static images) as input items and the corresponding line-draft images as output items, a conditional generative adversarial network is used for learning training; training is repeated until the two models, generator G and discriminator D, reach a steady state, whereupon the generator G is used to establish the article image library, i.e., the static object gallery (each image carrying its corresponding classification labels).
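As a concrete illustration of the grayscale-plus-edge-extraction step, the short sketch below uses OpenCV; Canny is one possible edge extractor, chosen here as an assumption since the patent does not name a specific operator.

    # Build one (photo, line draft) training pair at identical 1:1 resolution.
    import cv2

    def make_line_draft(photo_path, draft_path):
        img = cv2.imread(photo_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)   # grayscale preprocessing
        gray = cv2.GaussianBlur(gray, (5, 5), 0)       # suppress noise before edge extraction
        edges = cv2.Canny(gray, 50, 150)               # edge extraction
        draft = cv2.bitwise_not(edges)                 # dark lines on a white background
        cv2.imwrite(draft_path, draft)                 # same pixel grid as the photo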
For the dynamic object gallery, dynamic object images of various dynamic objects of different sexes, ages and postures are collected as the dynamic object sample set. The specific classification of "various dynamic objects" here is likewise consistent with step S101, i.e., human beings and the various animal species (cats, dogs, birds, cattle, sheep, etc.), and the sexes, ages and postures mentioned above are also consistent with the classifications in step S101. For people and animals of different sexes, ages and postures, images at different angles (dynamic object images) are collected from the Internet, preprocessed, and classified and stored separately. The joints and joint connection points of the limbs in the images of the people and animals under each category are marked with line segments and dots to obtain the stick-figure images of the various people and animals. Taking the collected pictures of people and animals as the input learning sample set and the correspondingly marked stick-figure images, at a 1:1 scale, as output, a conditional generative adversarial network is used for learning training; training is repeated until the two models, generator G and discriminator D, reach a steady state, whereupon the generator G is used to establish the drawing image library of the various people and animals, i.e., the dynamic object gallery (each image carrying its corresponding classification labels).
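The conditional-GAN training of both galleries can be sketched as the following pix2pix-style training step. This is a hedged illustration, not the patent's code: the tiny convolutional stacks stand in for the real generator G and discriminator D, and the L1 weight of 100 is an assumption borrowed from common pix2pix practice.

    # One training step of a conditional GAN mapping a photo to its
    # line-draft / stick-figure target (PyTorch).
    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                      nn.Conv2d(16, 1, 3, padding=1), nn.Tanh())   # photo -> draft
    D = nn.Sequential(nn.Conv2d(4, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                      nn.Conv2d(16, 1, 3, padding=1))              # (photo, draft) -> real/fake map

    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
    bce, l1 = nn.BCEWithLogitsLoss(), nn.L1Loss()

    def train_step(photo, draft):  # photo: (N,3,H,W), draft: (N,1,H,W), values in [-1,1]
        fake = G(photo)
        # Discriminator: real (photo, draft) pairs vs generated pairs.
        d_real = D(torch.cat([photo, draft], dim=1))
        d_fake = D(torch.cat([photo, fake.detach()], dim=1))
        loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        # Generator: fool D while staying close to the marked target.
        d_fake = D(torch.cat([photo, fake], dim=1))
        loss_g = bce(d_fake, torch.ones_like(d_fake)) + 100 * l1(fake, draft)
        opt_g.zero_grad(); loss_g.backward(); opt_g.step()
        return loss_d.item(), loss_g.item()

Training repeats until the generator and discriminator losses stabilize (the "steady state" above), after which G alone is kept to populate the gallery.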
Accordingly, the painting main body image is extracted from the preset object gallery according to the keywords belonging to the main body object category and the state category; specifically, main body images of the same categories as the keywords are retrieved from the preset object gallery, yielding the painting main body image required for drawing the painting work.
For step S103, the preset background gallery is first described: according to the different scene description nouns, real photographs are captured by searching scene tags on social networks and various picture websites, and then classified and stored according to the place categories of step S101, giving the preset background gallery.
and searching background images with consistent classifications in a preset background gallery according to the keywords belonging to the place category, and taking the background images as background images of the painting work.
For step S104, in a preferred embodiment, further comprising: rendering the background image according to the keywords belonging to the time category; and rendering the painting main body image according to the keywords belonging to the color categories.
For example, if the time-category keyword is "night", the knowledge graph shows that it belongs to the category "night", and the background image is rendered as a night scene; similarly, if the color-category keyword is "green", the knowledge graph shows that it belongs to the category "green", and the painting main body image is rendered green.
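A minimal rendering-and-combination sketch using Pillow is given below. The darkening factor, the tint strength and the paste position are illustrative assumptions; the patent does not fix a particular rendering algorithm.

    # Render the background by time keyword, tint the main body by color keyword,
    # and combine the two into the painting work.
    from PIL import Image, ImageEnhance

    def render_and_combine(background_path, subject_path, night=False, tint=None):
        bg = Image.open(background_path).convert("RGBA")
        subject = Image.open(subject_path).convert("RGBA")
        if night:                                  # time category: darken toward a night scene
            bg = ImageEnhance.Brightness(bg).enhance(0.4)
        if tint:                                   # color category: blend the main body toward it
            overlay = Image.new("RGBA", subject.size, tint + (90,))
            subject = Image.alpha_composite(subject, overlay)
        bg.paste(subject, (bg.width // 4, bg.height // 3), mask=subject)
        return bg

    # e.g. render_and_combine("seaside_01.png", "cat_standing_front.png",
    #                         night=True, tint=(0, 160, 0))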
In a preferred embodiment, after combining the painting main body image and the rendered background image to obtain the painting work, the method further comprises: performing picture stylization on the painting work according to a preset artistic style; wherein the artistic style includes an oil-painting style and a sketch style.
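As one illustrative way to realize the sketch style, the classic grayscale/invert/blur/color-dodge pipeline works; this is a stand-in assumption, since the patent names the styles but not the stylization algorithm (the oil-painting style would typically call for a neural style-transfer model instead).

    # Pencil-sketch stylization of the finished painting work.
    import numpy as np
    from PIL import Image, ImageChops, ImageFilter

    def sketch_style(painting):
        gray = np.asarray(painting.convert("L"), dtype=np.float32)
        inv_blur = np.asarray(
            ImageChops.invert(painting.convert("L")).filter(ImageFilter.GaussianBlur(12)),
            dtype=np.float32)
        dodge = gray * 255.0 / np.clip(255.0 - inv_blur, 1.0, 255.0)  # color-dodge blend
        return Image.fromarray(np.clip(dodge, 0, 255).astype(np.uint8))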
The above method embodiments can be applied to robots, giving them a certain capability for painting creation from textual descriptions.
On the basis of the method embodiments, a corresponding apparatus embodiment is provided:
as shown in fig. 2, an embodiment of the present invention provides a drawing device based on text semantic analysis, which includes a keyword extraction module, a drawing main body acquisition module, a background image acquisition module, and a pictorial representation generation module;
the keyword extraction module is used for acquiring a text to be identified and extracting keywords corresponding to the category of a preset knowledge graph from the text to be identified; the categories of the preset knowledge graph comprise a place category, a main object category and a state category;
the drawing main body acquisition module is used for extracting drawing main body images from a preset object gallery according to keywords belonging to main body object categories and state categories;
the background image acquisition module is used for extracting background images of painting works from a preset background gallery according to keywords belonging to the place category;
the painting generation module is used for combining the painting main body image and the background image to obtain a painting.
In a preferred embodiment, the method further comprises a painting main body image rendering module and a background image rendering module;
the painting main body image rendering module is used for rendering the painting main body image according to the keywords belonging to the color categories
The background image rendering module is used for rendering the background image according to the keywords belonging to the time category.
In a preferred embodiment, the device further comprises an artistic style processing module, which is used for performing picture stylization on the painting work according to a preset artistic style; wherein the artistic style includes an oil-painting style and a sketch style.
It can be understood that the above embodiment of the apparatus item corresponds to the embodiment of the method item of the present invention, and may implement the drawing method based on text semantic analysis provided by any one of the above embodiments of the method item of the present invention.
It should be noted that the above-described apparatus embodiments are merely illustrative; units described as separate parts may or may not be physically separate, and parts shown as units/modules may or may not be physical units/modules, i.e., they may be located in one place or distributed over a plurality of network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of this embodiment. In the drawings of the apparatus embodiments provided by the invention, the connections between modules indicate communication links between them, which may be implemented as one or more communication buses or signal lines. Those of ordinary skill in the art can understand and implement the invention without inventive effort. The schematic diagram is merely an example of a drawing device based on text semantic analysis and does not limit it; the device may include more or fewer components than illustrated, combine certain components, or use different components.
On the basis of the method embodiments, a corresponding terminal device embodiment is provided:
an embodiment of the present invention provides a drawing terminal device based on text semantic analysis, including a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, where the processor executes the computer program to implement the drawing method based on text semantic analysis according to any one of the foregoing method embodiments of the present invention.
The computer program may be divided into one or more modules/units, which are stored in the memory and executed by the processor to carry out the invention. The one or more modules/units may be a series of computer-program instruction segments capable of performing specific functions, the instruction segments describing the execution of the computer program in the drawing terminal device based on text semantic analysis.
The drawing terminal device based on text semantic analysis may be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. It may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that it may also include input and output devices, network access devices, buses, and the like.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. The general-purpose processor may be a microprocessor or any conventional processor. The processor is the control center of the drawing terminal device based on text semantic analysis and connects the parts of the entire device using various interfaces and lines.
The memory may be used to store the computer program and/or the modules, and the processor implements the various functions of the drawing terminal device based on text semantic analysis by running or executing the computer program and/or modules stored in the memory and calling the data stored in the memory. The memory may mainly include a program storage area and a data storage area: the program storage area may store an operating system and the application programs required for at least one function (such as a sound-playing function or an image-playing function), and the data storage area may store data created according to the use of the device (such as audio data or a phone book). In addition, the memory may include high-speed random access memory, and may also include non-volatile memory such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a Flash Card, at least one magnetic disk storage device, a flash memory device, or other non-volatile solid-state storage device.
By implementing the embodiments of the invention, the semantic description carried by a passage of text can be analyzed and a painting work generated from that text, thereby realizing painting creation.
While the foregoing is directed to the preferred embodiments of the present invention, those of ordinary skill in the art may make improvements and modifications without departing from the principles of the invention, and such improvements and modifications are also within the scope of the invention.

Claims (5)

1. A method of drawing based on text semantic analysis, comprising:
acquiring a text to be identified, and extracting keywords corresponding to the category of a preset knowledge graph from the text to be identified; the categories of the preset knowledge graph comprise a place category, a main object category, a state category, a color category and a time category; the main object category comprises a dynamic object main body and a static object main body, and the state category comprises a state category corresponding to the dynamic object main body and a state category corresponding to the static object main body;
extracting a painting main body image from a preset object gallery according to keywords belonging to main body object categories and state categories;
extracting background images of the painting from a preset background gallery according to keywords belonging to the place category;
rendering the background image according to the keywords belonging to the time category;
rendering the painting main body image according to the keywords belonging to the color categories;
combining the painting main body image and the background image to obtain the painting work;
the preset object gallery comprises a static object gallery and a dynamic object gallery;
the construction method of the static object gallery specifically comprises: collecting static images of various static objects under different forms, lighting conditions and angles as a static object sample set;
extracting the edges of each static image to obtain a line-draft image corresponding to each static image;
and taking the static object sample set as input and the line-draft image corresponding to each static image as output, performing learning training with a conditional generative adversarial network, thereby establishing the static object gallery;
the construction method of the dynamic object gallery specifically comprises: collecting dynamic object images of various dynamic objects of different sexes, ages and postures as a dynamic object sample set;
marking the joints and joint connection points of each limb of the dynamic object in each dynamic object image with line segments and dots to obtain a stick-figure image corresponding to each dynamic object image;
and taking the dynamic object sample set as input and the stick-figure image corresponding to each dynamic object image as output, performing learning training with a conditional generative adversarial network, thereby establishing the dynamic object gallery.
2. The drawing method based on text semantic analysis according to claim 1, further comprising, after combining the painting main body image and the rendered background image to obtain the painting work: performing picture stylization on the painting work according to a preset artistic style; wherein the artistic style includes an oil-painting style and a sketch style.
3. A drawing device based on text semantic analysis, characterized by comprising a keyword extraction module, a drawing main body acquisition module, a background image acquisition module, a drawing work generation module, a drawing main body image rendering module, a background image rendering module, a static object gallery construction module and a dynamic object gallery construction module;
the keyword extraction module is used for acquiring a text to be identified and extracting keywords corresponding to the category of a preset knowledge graph from the text to be identified; the categories of the preset knowledge graph comprise a place category, a main object category and a state category;
the drawing main body acquisition module is used for extracting drawing main body images from a preset object gallery according to keywords belonging to main body object categories and state categories;
the background image acquisition module is used for extracting background images of painting works from a preset background gallery according to keywords belonging to the place category;
the painting generation module is used for combining the painting main body image and the background image to obtain a painting;
the painting main body image rendering module is used for rendering the painting main body image according to the keywords belonging to the color categories;
the background image rendering module is used for rendering the background image according to the keywords belonging to the time category;
the preset object gallery comprises a static object gallery and a dynamic object gallery;
the static object gallery construction module is used for the static object gallery construction method, and specifically comprises the following steps: collecting static images of various static objects under different forms, rays and angles, and taking the static images as a static object sample set;
extracting the edges of each static image to obtain a tracing draft image corresponding to each static image;
taking the static object sample set as input, taking a tracing draft image corresponding to each static image in the static object sample set as output, performing learning training through a condition generation countermeasure network, and establishing the static object gallery;
the dynamic object gallery construction module is used for the construction method of the dynamic object gallery, and specifically comprises the following steps: collecting dynamic object images of various dynamic objects under different sexes, ages and postures as a dynamic object sample set;
marking the joint parts and joint connection points of each limb of the dynamic object in each dynamic object image by using line segments and dots to obtain a match image corresponding to each dynamic object image;
and taking the dynamic object sample set as input, taking a match image corresponding to each dynamic object image in the dynamic object sample set as output, carrying out learning training through a condition generation countermeasure network, and establishing the dynamic object gallery.
4. The drawing device based on text semantic analysis according to claim 3, further comprising an artistic style processing module, wherein the artistic style processing module is used for performing picture stylization on the painting work according to a preset artistic style; and the artistic style includes an oil-painting style and a sketch style.
5. A drawing terminal device based on text semantic analysis, comprising a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, the processor implementing the drawing method based on text semantic analysis according to any one of claims 1 to 2 when the computer program is executed.
CN201911288937.2A 2019-12-12 2019-12-12 Drawing method and device based on text semantic analysis and terminal equipment Active CN111061902B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911288937.2A CN111061902B (en) 2019-12-12 2019-12-12 Drawing method and device based on text semantic analysis and terminal equipment

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911288937.2A CN111061902B (en) 2019-12-12 2019-12-12 Drawing method and device based on text semantic analysis and terminal equipment

Publications (2)

Publication Number Publication Date
CN111061902A CN111061902A (en) 2020-04-24
CN111061902B 2023-12-19

Family

ID=70301574

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911288937.2A Active CN111061902B (en) 2019-12-12 2019-12-12 Drawing method and device based on text semantic analysis and terminal equipment

Country Status (1)

Country Link
CN (1) CN111061902B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112308939B (en) * 2020-09-14 2024-04-16 北京沃东天骏信息技术有限公司 Image generation method and device

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008107904A (en) * 2006-10-23 2008-05-08 National Institute Of Information & Communication Technology Text and animation service apparatus, and computer program
US10074200B1 (en) * 2015-04-22 2018-09-11 Amazon Technologies, Inc. Generation of imagery from descriptive text
CN109472838A (en) * 2018-10-25 2019-03-15 广东智媒云图科技股份有限公司 A kind of sketch generation method and device
CN110097616A * 2019-04-17 2019-08-06 广东智媒云图科技股份有限公司 A kind of joint drawing method, device, terminal device and readable storage medium
CN110347823A (en) * 2019-06-06 2019-10-18 平安科技(深圳)有限公司 Voice-based user classification method, device, computer equipment and storage medium
WO2019210075A1 (en) * 2018-04-27 2019-10-31 Facet Labs, Llc Devices and systems for human creativity co-computing, and related methods

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060235809A1 (en) * 2005-04-18 2006-10-19 John Pearson Digital caricature
KR102356435B1 (en) * 2017-04-11 2022-01-28 라운드파이어, 인크. Natural language-based computer animation


Also Published As

Publication number Publication date
CN111061902A (en) 2020-04-24


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant