CN110097616B - Combined drawing method and device, terminal equipment and readable storage medium - Google Patents
- Publication number
- CN110097616B (application CN201910311177.6A)
- Authority
- CN
- China
- Prior art keywords
- image
- article
- main
- drawn
- hand
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
- Y—GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
- Y02—TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
- Y02P—CLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
- Y02P90/00—Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
- Y02P90/30—Computing systems specially adapted for manufacturing
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention discloses a combined drawing method, a combined drawing device, a terminal device and a readable storage medium. The method comprises: acquiring a hand-drawn line draft image drawn by a user and, when it is judged that the hand-drawn line draft image matches the item label of a given drawing prompt, selecting from a preset composition material library a set of composition distribution data containing that item label; selecting, from a preset line draft image library, the material line draft image corresponding to each subject item according to the item labels of the subject items contained in the composition distribution data; and, combining the item size data and coordinate position data of each subject item, proportionally scaling the hand-drawn line draft image and the material line draft images in turn and displaying them at the corresponding positions to obtain a combined drawing image. With the invention, the single object element drawn by the user is supplemented from the material library, so that people without professional drawing skills can also create drawing works with rich compositions and full elements.
Description
Technical Field
The present invention relates to the field of image processing technologies, and in particular to a combined drawing method, a combined drawing device, a terminal device and a readable storage medium.
Background
With the continuous development of society and of science and technology, many intelligent drawing products and tools have appeared on the market for people to draw with and to meet their mental and everyday needs. However, existing drawing products and tools all require a certain level of professional drawing skill and overall composition ability to produce good artistic works. Because most people lack professional drawing skills and can at best draw single object elements passably, existing drawing products cannot help them create satisfying works with full compositions and rich elements.
Disclosure of Invention
The technical problem to be solved by the embodiments of the invention is to provide a combined drawing method, a device, a terminal device and a readable storage medium. The method can be applied to a drawing product or tool so that the product or tool draws jointly with the user, helping people without professional drawing skills to create drawing works with rich, well-composed elements.
To solve the above technical problems, the invention provides a combined drawing method, which comprises the following steps:
acquiring a hand-drawn line draft image drawn by a user and, when it is judged that the hand-drawn line draft image matches the item label of a given drawing prompt, randomly selecting, from a preset composition material library, a set of composition distribution data containing the drawing-prompt item label; wherein the composition distribution data comprises the item label, item size data and coordinate position data of each subject item;
selecting, from a preset line draft image library, the material line draft image corresponding to each subject item according to the item labels of the subject items contained in the composition distribution data;
and, combining the item size data and the coordinate position data, proportionally scaling the hand-drawn line draft image and the material line draft images in turn and displaying them at the corresponding positions to obtain a combined drawing image.
Further, before the acquiring of the hand-drawn line draft image drawn by the user, the method further comprises:
obtaining photos of common subject items of different categories, with multiple photos of each subject item taken at different angles and under different lighting;
extracting a feature vector from the photos of each subject item according to the spatial density arrangement features of the subject item; wherein the spatial density arrangement features comprise one or more of composition structure, outline, and color and material;
and taking the extracted feature vectors as input and the item label of each classified subject item as output, performing recognition training with a convolutional neural network, and extracting and storing a standard template for each subject item under each category, to obtain a recognition model library of subject items.
Further, the method for establishing the composition material library comprises the following steps:
acquiring, from the network, a plurality of high-quality pictures whose like counts exceed a preset threshold, and extracting features from the high-quality pictures to obtain high-quality-picture feature vectors;
comparing the high-quality-picture feature vectors with the standard-template feature vectors of the subject items in the recognition model library, identifying the subject items contained in each high-quality picture, and marking each subject item with a bounding box;
using the recognition model library to obtain the item label of each box-marked subject item, and calculating the area of each box-marked subject item and its coordinate position in the high-quality picture, to obtain the item size data and coordinate position data of each subject item;
associating the item labels, item size data and coordinate position data of all the subject items to obtain the composition distribution data corresponding to the high-quality picture;
and acquiring and storing the composition distribution data of the plurality of high-quality pictures one by one, to obtain the composition material library.
Further, the method for establishing the line draft image library comprises the following steps:
obtaining a plurality of images of the same kind as each subject item marked with a bounding box in the high-quality pictures;
taking the plurality of similar images of each kind of subject item as input and generating the corresponding material line draft images with a preset line draft drawing model;
and storing, in association, the item label of each subject item and the plurality of material line draft images corresponding to that subject item, to obtain the line draft image library.
Further, the method for establishing the line draft drawing model comprises the following steps:
acquiring, from the network, images of subject items of various shapes;
acquiring manual line draft images of the subject items, drawn by users from the subject item images;
taking the subject item images as input and the line draft images corresponding to the subject item images as output, using the manual line draft images as output learning samples, and training with a generative adversarial network;
and, when the output of the discriminator in the generative adversarial network reaches a preset steady-state threshold, completing training to obtain the line draft drawing model.
Optionally, the method for judging whether the hand-drawn line draft image matches the item label of the given drawing prompt comprises the following steps:
using the discriminator of the generative adversarial network to judge the item category of the hand-drawn line draft image, obtaining the n candidate item labels with the highest probability, and judging whether at least one candidate item label matches the drawing-prompt item label; wherein n is an integer greater than 0;
if so, judging that the hand-drawn line draft image matches the drawing-prompt item label;
if not, judging that the hand-drawn line draft image does not match the drawing-prompt item label.
Further, the combined drawing method further comprises:
when the hand-drawn line draft image does not match the item label of the given drawing prompt, clearing the hand-drawn line draft image and prompting the user to redraw.
To solve the same technical problems, the invention also provides a combined drawing device, which comprises a composition distribution data selection module, a material line draft image selection module and a combined drawing module;
the composition distribution data selection module is configured to acquire a hand-drawn line draft image drawn by a user and, when it is judged that the hand-drawn line draft image matches the item label of a given drawing prompt, randomly select, from a preset composition material library, a set of composition distribution data containing the drawing-prompt item label; wherein the composition distribution data comprises the item label, item size data and coordinate position data of each subject item;
the material line draft image selection module is configured to select, from a preset line draft image library, the material line draft image corresponding to each subject item according to the item labels of the subject items contained in the composition distribution data;
and the combined drawing module is configured to, combining the item size data and the coordinate position data, proportionally scale the hand-drawn line draft image and the material line draft images in turn and display them at the corresponding positions, to obtain a combined drawing image.
To solve the same technical problem, the invention also provides a combined drawing terminal device, comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, the memory being coupled to the processor; the processor implements any one of the combined drawing methods when executing the computer program.
To solve the same technical problem, the invention also provides a computer-readable storage medium storing a computer program, wherein, when the computer program runs, it controls the device on which the computer-readable storage medium is located to perform any one of the combined drawing methods.
The embodiment of the invention has the following beneficial effects:
the invention provides a combined drawing method, a device, a terminal device and a readable storage medium, wherein the method comprises the steps of firstly, randomly selecting an article label from a preset article label question library as a drawing question, prompting a user to draw a corresponding article, randomly selecting a group of composition distribution data containing the article label from a preset composition material library to draw a composition when judging that a hand drawing line manuscript image of the user is matched with the article label corresponding to the given question, then respectively selecting material line manuscript images corresponding to the main articles from the preset line manuscript image library according to the article label of each main article contained in the composition distribution data, drawing according to article size data and position coordinate data in the composition distribution data, scaling and displaying the hand line manuscript image of the user in a canvas according to the size and the coordinate position, and then carrying out one-to-one complement display on the rest main articles in the canvas according to the size and the coordinate position, thereby completing the combined drawing.
The combined painting method provided by the invention has the advantages that the painting process is simple and easy to operate, a user does not need to have a very specialized painting foundation and does not need to carry out complex composition thinking, only a favorite single object element is drawn according to painting prompts at will, after the system acquires the painting element of the user, the image is analyzed, processed and judged, after the consistency of the object drawn by the user is confirmed, the corresponding element is matched with the drawing content of the user from the material library to carry out comprehensive composition, the drawing supplement is perfect on the basis of keeping the original drawing of the user, and the user is combined with the drawing, so that the full-element rich drawing work of the composition is completed, the ordinary person can draw a higher-level drawing work, and the interest of the drawing is increased.
Drawings
FIG. 1 is a schematic flow chart of a combined drawing method according to an embodiment of the invention;
FIG. 2 is a schematic diagram of a generative adversarial network according to an embodiment of the invention;
FIG. 3 is a schematic structural diagram of a combined drawing device according to an embodiment of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will now be described clearly and completely with reference to the accompanying drawings. The embodiments described are plainly only some, and not all, of the embodiments of the invention. All other embodiments obtained by a person skilled in the art from the embodiments of the invention without inventive effort fall within the scope of protection of the invention.
Referring to fig. 1, an embodiment of the present invention provides a combined drawing method, comprising:
Step S1: acquiring a hand-drawn line draft image drawn by a user and, when it is judged that the hand-drawn line draft image matches the item label of a given drawing prompt, randomly selecting, from a preset composition material library, a set of composition distribution data containing the drawing-prompt item label; wherein the composition distribution data comprises the item label, item size data and coordinate position data of each subject item;
In this embodiment of the invention, step S1 acquires the hand-drawn line draft image drawn by the user and selects composition distribution data. Specifically, an item label is first selected at random from a preset item-label question library as the drawing prompt, and the user is prompted to draw the corresponding item on an electronic touch screen. The hand-drawn line draft image drawn by the user is acquired and the item category to which it belongs is judged. If that category matches the given prompt, a set of composition distribution data containing the drawing-prompt item label is randomly selected from the preset composition material library, the composition distribution data comprising the item label, item size data and coordinate position data of each subject item in the composition. It will be appreciated that, for a given drawing-prompt item label, the composition material library may contain several sets of composition distribution data with that item label; in this embodiment one set is selected at random from among them.
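For illustration only, a minimal Python sketch of this random selection, assuming the composition material library is held as a list of dictionaries with the hypothetical fields label, size and pos (none of these names are prescribed by the invention):

```python
import random

# Hypothetical layout for one set of composition distribution data: each subject
# item carries its item label, item size data (w, h) and coordinate position (x, y).
composition_library = [
    {"items": [{"label": "tree", "size": (120, 200), "pos": (40, 60)},
               {"label": "house", "size": (180, 150), "pos": (220, 110)}]},
    {"items": [{"label": "tree", "size": (90, 160), "pos": (300, 80)},
               {"label": "sun", "size": (60, 60), "pos": (30, 20)}]},
]

def pick_composition(prompt_label):
    """Randomly pick one set of composition distribution data containing the prompt label."""
    candidates = [c for c in composition_library
                  if any(item["label"] == prompt_label for item in c["items"])]
    return random.choice(candidates) if candidates else None

print(pick_composition("tree"))
```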
Step S2: selecting, from a preset line draft image library, the material line draft image corresponding to each subject item according to the item labels of the subject items contained in the composition distribution data;
In this embodiment of the invention, step S2 selects, one by one, the material line draft image corresponding to each subject item from a preset line draft image library, according to the item labels of the subject items contained in the composition distribution data selected in step S1. It will be appreciated that, for each subject item, the line draft image library may contain several line draft images corresponding to its item label; this embodiment randomly selects one of them for drawing.
Step S3: combining the item size data and the coordinate position data, proportionally scaling the hand-drawn line draft image and the material line draft images in turn and displaying them at the corresponding positions, to obtain a combined drawing image.
In this embodiment of the invention, step S3 uses the item size data and coordinate position data of each subject item contained in the composition distribution data selected in step S1: the hand-drawn line draft image drawn by the user is first proportionally scaled and displayed on the canvas at its size and coordinate position, and the material line draft images of the remaining items are then scaled and displayed on the canvas one by one at their sizes and coordinate positions, completing the combined drawing.
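For illustration only, a minimal sketch of this scaling-and-placement step using Pillow; the file names, canvas size and the aspect-ratio-preserving fit are assumptions rather than requirements of the invention:

```python
from PIL import Image

def place_on_canvas(canvas, line_draft, size, pos):
    """Proportionally scale a line draft to the target item size and paste it at (x, y)."""
    target_w, target_h = size
    # Preserve the aspect ratio while fitting the box given by the composition data.
    scale = min(target_w / line_draft.width, target_h / line_draft.height)
    new_size = (max(1, int(line_draft.width * scale)), max(1, int(line_draft.height * scale)))
    resized = line_draft.resize(new_size, Image.LANCZOS)
    canvas.paste(resized, pos, resized if resized.mode == "RGBA" else None)

canvas = Image.new("RGB", (800, 600), "white")
user_draft = Image.open("user_sketch.png")   # the user's hand-drawn line draft (illustrative path)
material = Image.open("material_tree.png")   # a material line draft (illustrative path)
place_on_canvas(canvas, user_draft, (180, 150), (220, 110))  # place the user's item first
place_on_canvas(canvas, material, (120, 200), (40, 60))      # then complete the remaining items
canvas.save("combined_drawing.png")
```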
In this embodiment of the invention, before the acquiring of the hand-drawn line draft image drawn by the user, the method further comprises:
obtaining photos of common subject items of different categories, with multiple photos of each subject item taken at different angles and under different lighting;
extracting a feature vector from the photos of each subject item according to the spatial density arrangement features of the subject item; wherein the spatial density arrangement features comprise one or more of composition structure, outline, and color and material;
and taking the extracted feature vectors as input and the item label of each classified subject item as output, performing recognition training with a convolutional neural network, and extracting and storing a standard template for each subject item under each category, to obtain a recognition model library of subject items.
It can be understood that the recognition model library of subject items can be built in advance according to the above steps, as follows:
First, pictures of people, animals, plants and other common subject items, taken at various angles and under various lighting, are obtained through the Internet and preprocessed; preprocessing such as noise and interference removal, image cropping and normalization improves the server's image-processing and recognition performance.
Then, according to the different spatial density arrangement features of each item, such as its composition structure, outline, and color and material, the image of each item is divided into M x N grid cells, and the ratio of the number of points in each cell to the total number of points of the item image is calculated, giving an M x N-dimensional feature vector. The extracted feature vectors are used as input and the label of each classified item as output, a convolutional neural network is trained repeatedly for recognition, and a standard template for each item under each category is extracted and stored in a file, establishing the recognition model library of subject items.
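For illustration only, a minimal NumPy sketch of such an M x N grid-density feature, assuming a binarized item image in which non-zero pixels are drawing points; the 8 x 8 grid and the normalization by total foreground points are assumptions:

```python
import numpy as np

def grid_density_features(binary_img, m=8, n=8):
    """Split a binary item image into an m x n grid and return, for each cell,
    the ratio of its drawing points to the total drawing points of the image."""
    h, w = binary_img.shape
    total = max(1, int(np.count_nonzero(binary_img)))  # avoid division by zero
    features = np.zeros(m * n, dtype=np.float32)
    for i in range(m):
        for j in range(n):
            cell = binary_img[i * h // m:(i + 1) * h // m,
                              j * w // n:(j + 1) * w // n]
            features[i * n + j] = np.count_nonzero(cell) / total
    return features  # an m*n-dimensional feature vector

sketch = (np.random.rand(256, 256) > 0.9).astype(np.uint8)  # stand-in for a binarized item image
print(grid_density_features(sketch).shape)                   # (64,)
```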
In a preferred embodiment, the method for creating the composition material library includes:
acquiring, from the network, a plurality of high-quality pictures whose like counts exceed a preset threshold, and extracting features from the high-quality pictures to obtain high-quality-picture feature vectors;
comparing the high-quality-picture feature vectors with the standard-template feature vectors of the subject items in the recognition model library, identifying the subject items contained in each high-quality picture, and marking each subject item with a bounding box;
using the recognition model library to obtain the item label of each box-marked subject item, and calculating the area of each box-marked subject item and its coordinate position in the high-quality picture, to obtain the item size data and coordinate position data of each subject item;
associating the item labels, item size data and coordinate position data of all the subject items to obtain the composition distribution data corresponding to the high-quality picture;
and acquiring and storing the composition distribution data of the plurality of high-quality pictures one by one, to obtain the composition material library.
It should be noted that, preferably, the preset composition material library used in step S1 can be established through the above steps, specifically as follows:
First, high-quality pictures with high like counts (for example, more than 50 likes) are obtained from the major social-media websites and the like, preprocessed, and their features extracted. The extracted picture feature vectors are compared with the standard-template feature vectors of the classified items stored in the established recognition model library to determine which subject item elements the picture contains, and each is marked with a bounding box. One way to decide whether a marked element is a subject item of the image is: take a central region covering 3/4 of the image (the value can be set as required) as the subject region, and judge whether more than half of the marked image area lies within that region; if so, the marked element is determined to be a subject item element.
Then, the area of each box-marked subject item in each high-quality picture and its coordinate position distribution in the image are calculated and associated with the corresponding item label, giving composition distribution data that is stored.
Finally, the composition distribution data of the subject items identified in each image (the corresponding item labels, subject item sizes and coordinate position distribution data) is stored picture by picture, yielding the composition material library.
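For illustration only, a minimal sketch of the subject-region test and of turning a bounding box into the size and position record described above; the box format (x0, y0, x1, y1) and the 3/4 central region are assumptions:

```python
def is_subject_item(img_w, img_h, box, central_fraction=0.75):
    """Return True if more than half of the detected box lies inside the central
    subject region (a centered rectangle covering `central_fraction` of the image)."""
    cx0 = img_w * (1 - central_fraction) / 2
    cy0 = img_h * (1 - central_fraction) / 2
    cx1, cy1 = img_w - cx0, img_h - cy0
    x0, y0, x1, y1 = box
    ix = max(0.0, min(x1, cx1) - max(x0, cx0))   # overlap width with the central region
    iy = max(0.0, min(y1, cy1) - max(y0, cy0))   # overlap height with the central region
    box_area = max(1e-6, (x1 - x0) * (y1 - y0))
    return (ix * iy) / box_area > 0.5

def box_to_record(label, box):
    """Turn a bounding box into the item size / coordinate position record stored
    in the composition distribution data."""
    x0, y0, x1, y1 = box
    return {"label": label, "size": (x1 - x0, y1 - y0), "pos": (x0, y0)}

print(is_subject_item(800, 600, (300, 200, 500, 400)))  # a central box -> True
print(box_to_record("tree", (40, 60, 160, 260)))
```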
In a preferred embodiment, the method for creating the line draft image library includes:
obtaining a plurality of images of the same kind as each subject item marked with a bounding box in the high-quality pictures;
taking the plurality of similar images of each kind of subject item as input and generating the corresponding material line draft images with a preset line draft drawing model;
and storing, in association, the item label of each subject item and the plurality of material line draft images corresponding to that subject item, to obtain the line draft image library.
It should be noted that, preferably, the line draft image library used in step S2 can be established by the above method. Specifically, for each subject item marked with a bounding box in the high-quality pictures, a number of subject item images of the same kind are first obtained; these subject item images are then used as input to a preset line draft drawing model to generate the corresponding material line draft images; finally, the material line draft images obtained for each subject item image are stored in association with the item labels, yielding the line draft image library.
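For illustration only, a minimal sketch of building such a label-to-line-draft association, assuming `line_draft_model` is a callable wrapping the trained line draft drawing model (an assumption, not a defined API):

```python
from collections import defaultdict

def build_line_draft_library(images_by_label, line_draft_model):
    """Generate material line drafts with the line draft drawing model and store
    them keyed by item label, mirroring the association described above."""
    library = defaultdict(list)
    for label, images in images_by_label.items():
        for img in images:
            library[label].append(line_draft_model(img))
    return dict(library)

# Stand-in model (string transform) and data, just to show the resulting structure.
demo = build_line_draft_library({"tree": ["tree_photo_1", "tree_photo_2"]},
                                line_draft_model=lambda img: f"line_draft_of_{img}")
print(demo)  # {'tree': ['line_draft_of_tree_photo_1', 'line_draft_of_tree_photo_2']}
```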
In a preferred embodiment, the method for establishing the line draft drawing model includes:
acquiring, from the network, images of subject items of various shapes;
acquiring manual line draft images of the subject items, drawn by users from the subject item images;
taking the subject item images as input and the manual line draft images corresponding to them as output, and training with a generative adversarial network;
and, when the output of the discriminator in the generative adversarial network reaches a preset steady-state threshold, completing training to obtain the line draft drawing model.
It should be noted that the method for establishing the line draft drawing model includes, but is not limited to, the steps described above. Specifically:
First, images of subject items of various shapes are acquired through the Internet and converted to grayscale. A color image consists of pixels each represented by three RGB values; converting the image to grayscale does not affect its texture feature information, lets each pixel be represented by a single gray value, and greatly improves image-processing efficiency. The weighted-average method for grayscale conversion is given by the following formula:
f(i,j)=0.30R(i,j)+0.59G(i,j)+0.11B(i,j)
where i and j give the position of a pixel in the two-dimensional image, namely the i-th row and j-th column.
According to this formula, the gray value (in the range 0-255) of every pixel of each still image is calculated, putting the images into a black-and-white grayscale state; a manual line draft image corresponding to each item image is then drawn by hand.
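For illustration only, a minimal NumPy sketch of the weighted-average grayscale conversion above (applied to the whole array at once rather than pixel by pixel); the file names are illustrative:

```python
import numpy as np
from PIL import Image

def to_grayscale(rgb):
    """Weighted-average grayscale: f(i, j) = 0.30 R(i, j) + 0.59 G(i, j) + 0.11 B(i, j)."""
    weights = np.array([0.30, 0.59, 0.11], dtype=np.float32)
    gray = rgb[..., :3].astype(np.float32) @ weights
    return np.clip(gray, 0, 255).astype(np.uint8)  # gray values stay in 0..255

rgb = np.asarray(Image.open("subject_item.jpg").convert("RGB"))  # illustrative path
Image.fromarray(to_grayscale(rgb)).save("subject_item_gray.png")
```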
The subject item images are then used as input, the line draft images corresponding to them as output, and the manual line draft images as the output learning samples, and a generative adversarial network is trained. Training is complete when the output of the discriminator in the generative adversarial network reaches a preset steady-state threshold (roughly 1/2; the exact value can be set as required), giving the line draft drawing model.
Referring to fig. 2, it can be understood that a generative adversarial network (GAN, Generative Adversarial Networks) is a deep-learning model and one of the most promising recent methods for unsupervised learning on complex distributions; image generation is the field in which GANs are most commonly used. The basic principle of a GAN is as follows. Suppose there are two models, G (Generator) and D (Discriminator). G is the model that generates pictures: it receives a random noise vector z and generates a picture from it, denoted G(z). D is the discrimination model that judges whether a picture is "real": its input x is a picture, and its output D(x) is the probability that x is a real picture; an output of 1 means the picture is certainly real, and an output of 0 means it cannot be real. During training, the goal of the generation model G is to generate pictures realistic enough to deceive the discrimination model D, while the goal of D is to tell the pictures generated by G apart from the real ones. In the ideal case, G generates pictures G(z) "real" enough that D finds it hard to decide whether they are genuine, so that D(G(z)) = 0.5. When the two models reach this steady state, i.e. when the output of the discrimination model D is close to 0.5, training is complete and the generation model G, which can be used to generate pictures, is obtained.
In this embodiment of the invention, when training the generative adversarial network (GAN), the generator G captures the data distribution while the discriminator D estimates the probability that a sample comes from the real data rather than from the generator. The discriminator D is updated continuously, and its judgement is used as the optimization target that guides how the generator G is optimized, effectively acting as the generator's objective function. If a converted result is unclear, the discriminator D can easily tell it apart from a real picture, which in turn pushes the generator G to produce clearer, more plausible pictures. The GAN thus learns a generative objective on the one hand and optimizes the model against that learned objective on the other. When the generator G and the discriminator D reach a steady state, i.e. when the output of the discriminator D is close to 1/2, training is complete and the line draft drawing model is obtained.
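For illustration only, a minimal PyTorch sketch of such adversarial training, with toy convolutional networks standing in for the real generator and discriminator; the architectures, the stand-in data and the 0.5 steady-state check are assumptions rather than the claimed implementation:

```python
import torch
import torch.nn as nn

# Toy image-to-image generator and discriminator; real models would be deeper (e.g. U-Net / PatchGAN).
G = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(16, 1, 3, padding=1), nn.Sigmoid())
D = nn.Sequential(nn.Conv2d(2, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
                  nn.Conv2d(16, 1, 3, stride=2, padding=1),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Sigmoid())

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(photo, manual_draft):
    """One adversarial step: photo is a grayscale item image, manual_draft its hand-made line draft."""
    # Update the discriminator: real (photo, manual draft) pairs -> 1, generated pairs -> 0.
    fake = G(photo).detach()
    d_real = D(torch.cat([photo, manual_draft], dim=1))
    d_fake = D(torch.cat([photo, fake], dim=1))
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Update the generator: try to make the discriminator output 1 on generated pairs.
    fake = G(photo)
    d_out = D(torch.cat([photo, fake], dim=1))
    loss_g = bce(d_out, torch.ones_like(d_out))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return d_out.mean().item()

# Toy loop: stop once the discriminator output on generated drafts hovers around 0.5.
for step in range(200):
    photo = torch.rand(4, 1, 64, 64)        # stand-in grayscale item images
    manual_draft = (photo > 0.5).float()    # stand-in "manual" line drafts
    if abs(train_step(photo, manual_draft) - 0.5) < 0.02:   # preset steady-state threshold (illustrative)
        break
```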
In a preferred embodiment, whether the hand-drawn line draft image matches the item label of the given drawing prompt is judged as follows:
using the discriminator of the generative adversarial network to judge the item category of the hand-drawn line draft image, obtaining the n candidate item labels with the highest probability, and judging whether at least one candidate item label matches the drawing-prompt item label; wherein n is an integer greater than 0;
if so, judging that the hand-drawn line draft image matches the drawing-prompt item label;
if not, judging that the hand-drawn line draft image does not match the drawing-prompt item label.
In this embodiment of the invention, whether the hand-drawn line draft image matches the item label of the given drawing prompt can be judged in the above way. Specifically, n may be set to 5, that is: the discriminator judges the item category of the hand-drawn line draft image and returns the five item labels with the highest probability; as long as one of those five labels matches the drawing-prompt item label, the hand-drawn line draft image is considered to match it.
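For illustration only, a minimal sketch of this top-n label matching, assuming the classifier's per-label probabilities are available as a dictionary (an assumption about the surrounding code, not part of the invention):

```python
def matches_prompt(label_probs, prompt_label, n=5):
    """Take the n item labels with the highest predicted probability and report
    whether any of them equals the drawing-prompt item label."""
    top_n = sorted(label_probs, key=label_probs.get, reverse=True)[:n]
    return prompt_label in top_n

probs = {"tree": 0.41, "bush": 0.22, "house": 0.13, "sun": 0.10, "cat": 0.08, "car": 0.06}
print(matches_prompt(probs, "tree"))   # True  -> keep the sketch and continue composing
print(matches_prompt(probs, "fish"))   # False -> clear the canvas and prompt a redraw
```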
Further, the combined drawing method further comprises:
when the hand-drawn line draft image does not match the item label of the given drawing prompt, clearing the hand-drawn line draft image and prompting the user to redraw.
Referring to fig. 3, to solve the same technical problems, the invention further provides a combined drawing device, which comprises a composition distribution data selection module 1, a material line draft image selection module 2 and a combined drawing module 3;
the composition distribution data selection module 1 is configured to acquire a hand-drawn line draft image drawn by a user and, when it is judged that the hand-drawn line draft image matches the item label of a given drawing prompt, randomly select, from a preset composition material library, a set of composition distribution data containing the drawing-prompt item label; wherein the composition distribution data comprises the item label, item size data and coordinate position data of each subject item;
the material line draft image selection module 2 is configured to select, from a preset line draft image library, the material line draft image corresponding to each subject item according to the item labels of the subject items contained in the composition distribution data;
and the combined drawing module 3 is configured to, combining the item size data and the coordinate position data, proportionally scale the hand-drawn line draft image and the material line draft images in turn and display them at the corresponding positions, to obtain a combined drawing image.
It can be understood that this device embodiment corresponds to the method embodiments of the invention, and the combined drawing device provided by this embodiment of the invention can carry out the combined drawing method provided by any of the method embodiments of the invention.
To solve the same technical problem, the invention also provides a combined drawing terminal device, comprising a processor, a memory, and a computer program stored in the memory and configured to be executed by the processor, the memory being coupled to the processor; the processor implements any one of the combined drawing methods when executing the computer program.
The combined drawing terminal device may be a computing device such as a desktop computer, a notebook computer, a palmtop computer or a cloud server. The processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, and so on. A general-purpose processor may be a microprocessor or any conventional processor. The processor is the control center of the combined drawing terminal device and connects the parts of the whole device through various interfaces and lines.
The memory may mainly include a program storage area and a data storage area. The program storage area may store the operating system and the application programs required for at least one function; the data storage area may store data created during use of the device. In addition, the memory may include high-speed random-access memory, and may also include non-volatile memory such as a hard disk, a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, a flash card, at least one magnetic-disk storage device, a flash memory device, or another non-volatile solid-state storage device.
To solve the same technical problem, the invention also provides a computer-readable storage medium storing a computer program, wherein, when the computer program runs, it controls the device on which the computer-readable storage medium is located to perform any one of the combined drawing methods.
The computer program may be stored in a computer-readable storage medium, and when executed by a processor it can carry out the steps of the method embodiments described above. The computer program comprises computer program code, which may be in source-code form, object-code form, an executable file, some intermediate form, and so on. The computer-readable medium may include any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random-access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so on. It should be noted that the content of the computer-readable medium may be adjusted as required by legislation and patent practice in the relevant jurisdiction; for example, in some jurisdictions the computer-readable medium does not include electrical carrier signals and telecommunications signals.
It should be noted that the device embodiments described above are merely illustrative. Units described as separate components may or may not be physically separate, and components shown as units may or may not be physical units: they may be located in one place or distributed over several network units. Some or all of the modules may be selected as actually needed to achieve the purpose of the solution of the embodiment. In addition, in the drawings of the device embodiments provided by the invention, a connection between modules indicates a communication connection between them, which may in practice be implemented as one or more communication buses or signal lines. A person of ordinary skill in the art can understand and implement this without inventive effort.
In summary, the implementation of the embodiment of the invention has the following beneficial effects:
the invention provides a combined drawing method, a device, a terminal device and a readable storage medium, wherein the method comprises the steps of firstly, randomly selecting an article label from a preset article label question library as a drawing question, prompting a user to draw a corresponding article, randomly selecting a group of composition distribution data containing the article label from a preset composition material library to draw a composition when judging that a hand drawing line manuscript image of the user is matched with the article label corresponding to the given question, then respectively selecting material line manuscript images corresponding to the main articles from the preset line manuscript image library according to the article label of each main article contained in the composition distribution data, drawing according to article size data and position coordinate data in the composition distribution data, scaling and displaying the hand line manuscript image of the user in a canvas according to the size and the coordinate position, and then carrying out one-to-one complement display on the rest main articles in the canvas according to the size and the coordinate position, thereby completing the combined drawing.
The combined painting method provided by the invention has the advantages that the painting process is simple and easy to operate, a user does not need to have a very specialized painting foundation and does not need to carry out complex composition thinking, only a favorite single object element is drawn according to painting prompts at will, after the system acquires the painting element of the user, the image is analyzed, processed and judged, after the consistency of the object drawn by the user is confirmed, the corresponding element is matched with the drawing content of the user from the material library to carry out comprehensive composition, the drawing supplement is perfect on the basis of keeping the original drawing of the user, and the user is combined with the drawing, so that the full-element rich painting work of the composition is completed, the ordinary person can draw a higher-level painting work, and the interest of the drawing is increased.
While the foregoing is directed to the preferred embodiments of the present invention, it will be appreciated by those skilled in the art that changes and modifications may be made without departing from the principles of the invention, such changes and modifications are also intended to be within the scope of the invention.
Claims (10)
1. A combined drawing method, comprising:
acquiring a hand-drawn line draft image drawn by a user and, when it is judged that the hand-drawn line draft image matches the item label of a given drawing prompt, randomly selecting, from a preset composition material library, a set of composition distribution data containing the drawing-prompt item label; wherein the composition distribution data comprises the item label, item size data and coordinate position data of each subject item;
selecting, from a preset line draft image library, the material line draft image corresponding to each subject item according to the item labels of the subject items contained in the composition distribution data;
and, combining the item size data and the coordinate position data, proportionally scaling the hand-drawn line draft image and the material line draft images in turn and displaying them at the corresponding positions to obtain a combined drawing image;
wherein the combining of the item size data and the coordinate position data and the proportional scaling and displaying of the hand-drawn line draft image and the material line draft images at the corresponding positions comprises:
proportionally scaling and displaying the hand-drawn line draft image on the canvas according to the item size data and coordinate position data corresponding to the drawing-prompt item label in the composition distribution data;
and proportionally scaling and complementarily displaying the material line draft image corresponding to each item label in the composition distribution data other than the drawing-prompt item label, according to the item size data and coordinate position data corresponding to that item label.
2. The combined drawing method according to claim 1, further comprising, before the acquiring of the hand-drawn line draft image drawn by the user:
obtaining photos of common subject items of different categories, with multiple photos of each subject item taken at different angles and under different lighting;
extracting a feature vector from the photos of each subject item according to the spatial density arrangement features of the subject item; wherein the spatial density arrangement features comprise one or more of composition structure, outline, and color and material;
and taking the extracted feature vectors as input and the item label of each classified subject item as output, performing recognition training with a convolutional neural network, and extracting and storing a standard template for each subject item under each category, to obtain a recognition model library of subject items.
3. The combined drawing method according to claim 2, wherein the method for establishing the composition material library comprises:
acquiring, from the network, a plurality of high-quality pictures whose like counts exceed a preset threshold, and extracting features from the high-quality pictures to obtain high-quality-picture feature vectors;
comparing the high-quality-picture feature vectors with the standard-template feature vectors of the subject items in the recognition model library, identifying the subject items contained in each high-quality picture, and marking each subject item with a bounding box;
using the recognition model library to obtain the item label of each box-marked subject item, and calculating the area of each box-marked subject item and its coordinate position in the high-quality picture, to obtain the item size data and coordinate position data of each subject item;
associating the item labels, item size data and coordinate position data of all the subject items to obtain the composition distribution data corresponding to the high-quality picture;
and acquiring and storing the composition distribution data of the plurality of high-quality pictures one by one, to obtain the composition material library.
4. The combined drawing method according to claim 3, wherein the method for establishing the line draft image library comprises:
obtaining a plurality of images of the same kind as each subject item marked with a bounding box in the high-quality pictures;
taking the plurality of similar images of each kind of subject item as input and generating the corresponding material line draft images with a preset line draft drawing model;
and storing, in association, the item label of each subject item and the plurality of material line draft images corresponding to that subject item, to obtain the line draft image library.
5. The combined drawing method according to claim 4, wherein the method for establishing the line draft drawing model comprises:
acquiring, from the network, images of subject items of various shapes;
acquiring manual line draft images of the subject items, drawn by users from the subject item images;
taking the subject item images as input and the line draft images corresponding to the subject item images as output, using the manual line draft images as output learning samples, and training with a generative adversarial network;
and, when the output of the discriminator in the generative adversarial network reaches a preset steady-state threshold, completing training to obtain the line draft drawing model.
6. The combined drawing method according to claim 1, wherein judging whether the hand-drawn line draft image matches the item label of the given drawing prompt comprises:
using the discriminator of the generative adversarial network to judge the item category of the hand-drawn line draft image, obtaining the n candidate item labels with the highest probability, and judging whether at least one candidate item label matches the drawing-prompt item label; wherein n is an integer greater than 0;
if so, judging that the hand-drawn line draft image matches the drawing-prompt item label;
if not, judging that the hand-drawn line draft image does not match the drawing-prompt item label.
7. The combined drawing method according to claim 1, further comprising:
when the hand-drawn line draft image does not match the item label of the given drawing prompt, clearing the hand-drawn line draft image and prompting the user to redraw.
8. A combined drawing device, comprising a composition distribution data selection module, a material line draft image selection module and a combined drawing module;
wherein the composition distribution data selection module is configured to acquire a hand-drawn line draft image drawn by a user and, when it is judged that the hand-drawn line draft image matches the item label of a given drawing prompt, randomly select, from a preset composition material library, a set of composition distribution data containing the drawing-prompt item label; wherein the composition distribution data comprises the item label, item size data and coordinate position data of each subject item;
the material line draft image selection module is configured to select, from a preset line draft image library, the material line draft image corresponding to each subject item according to the item labels of the subject items contained in the composition distribution data;
and the combined drawing module is configured to, combining the item size data and the coordinate position data, proportionally scale the hand-drawn line draft image and the material line draft images in turn and display them at the corresponding positions, to obtain a combined drawing image.
9. A combined drawing terminal device, comprising a processor, a memory and a computer program stored in the memory and configured to be executed by the processor, the memory being coupled to the processor, wherein the processor implements the combined drawing method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium storing a computer program, wherein, when the computer program runs, it controls a device on which the computer-readable storage medium is located to perform the combined drawing method according to any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910311177.6A CN110097616B (en) | 2019-04-17 | 2019-04-17 | Combined drawing method and device, terminal equipment and readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910311177.6A CN110097616B (en) | 2019-04-17 | 2019-04-17 | Combined drawing method and device, terminal equipment and readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110097616A CN110097616A (en) | 2019-08-06 |
CN110097616B true CN110097616B (en) | 2023-11-28 |
Family
ID=67445151
Family Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910311177.6A Active CN110097616B (en) | 2019-04-17 | 2019-04-17 | Combined drawing method and device, terminal equipment and readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110097616B (en) |
Families Citing this family (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111008573B (en) * | 2019-11-15 | 2024-04-26 | 广东智媒云图科技股份有限公司 | Limb structure generation method and device, terminal equipment and readable storage medium |
CN111061902B (en) * | 2019-12-12 | 2023-12-19 | 广东智媒云图科技股份有限公司 | Drawing method and device based on text semantic analysis and terminal equipment |
CN111612864B (en) * | 2020-04-27 | 2023-05-09 | 厦门盈趣科技股份有限公司 | Drawing method and system based on photo and image recognition |
CN112148944A (en) * | 2020-09-22 | 2020-12-29 | 西京学院 | Manufacturing method, manufacturing system, terminal and storage medium of manual fiber picture |
CN112837396B (en) * | 2021-01-29 | 2024-05-07 | 深圳市天耀创想网络科技有限公司 | Line manuscript generation method and device based on machine learning |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9262853B2 (en) * | 2013-03-15 | 2016-02-16 | Disney Enterprises, Inc. | Virtual scene generation based on imagery |
US10521937B2 (en) * | 2017-02-28 | 2019-12-31 | Corel Corporation | Vector graphics based live sketching methods and systems |
- 2019-04-17: Application CN201910311177.6A filed in China (patent CN110097616B, status: active)
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011002875A (en) * | 2009-06-16 | 2011-01-06 | Nippon Telegr & Teleph Corp <Ntt> | Plotting support device, plotting support method, and plotting support program |
CN109389660A (en) * | 2018-09-28 | 2019-02-26 | 百度在线网络技术(北京)有限公司 | Image generating method and device |
CN109448079A (en) * | 2018-10-25 | 2019-03-08 | 广东智媒云图科技股份有限公司 | A kind of drawing bootstrap technique and equipment |
Also Published As
Publication number | Publication date |
---|---|
CN110097616A (en) | 2019-08-06 |
Similar Documents
Publication | Title |
---|---|
CN110097616B (en) | Combined drawing method and device, terminal equipment and readable storage medium | |
CN109359538B (en) | Training method of convolutional neural network, gesture recognition method, device and equipment | |
CN108460338B (en) | Human body posture estimation method and apparatus, electronic device, storage medium, and program | |
CN106778928B (en) | Image processing method and device | |
CN107273502B (en) | Image geographic labeling method based on spatial cognitive learning | |
Li et al. | SHREC’13 track: large scale sketch-based 3D shape retrieval | |
US9449253B2 (en) | Learning painting styles for painterly rendering | |
CN110738207A (en) | character detection method for fusing character area edge information in character image | |
CN112651438A (en) | Multi-class image classification method and device, terminal equipment and storage medium | |
US11436436B2 (en) | Data augmentation system, data augmentation method, and information storage medium | |
CN111009041B (en) | Drawing creation method, device, terminal equipment and readable storage medium | |
CN107153838A (en) | A kind of photo automatic grading method and device | |
CN109785400B (en) | Silhouette image manufacturing method and device, electronic equipment and storage medium | |
US11386589B2 (en) | Method and device for image generation and colorization | |
CN111126254A (en) | Image recognition method, device, equipment and storage medium | |
CN110969641A (en) | Image processing method and device | |
US11157765B2 (en) | Method and system for determining physical characteristics of objects | |
Arya et al. | Local triangular coded pattern: A texture descriptor for image classification | |
CN113838158B (en) | Image and video reconstruction method and device, terminal equipment and storage medium | |
CN111047632A (en) | Method and device for processing picture color of nail image | |
CN109993816B (en) | Combined painting method, device, terminal setting and computer readable storage medium | |
CN108287817B (en) | Information processing method and device | |
CN112884074B (en) | Image design method, equipment, storage medium and device based on decision tree | |
CN113837236A (en) | Method and device for identifying target object in image, terminal equipment and storage medium | |
CN110781884A (en) | Method for realizing intelligent reading of electric meter data |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |