WO2020228536A1 - Icon generation method and device, method for obtaining an icon, electronic device, and storage medium - Google Patents

Icon generation method and device, method for obtaining an icon, electronic device, and storage medium

Info

Publication number
WO2020228536A1
Authority
WO
WIPO (PCT)
Prior art keywords
icon
sample
target
graphics
keyword
Prior art date
Application number
PCT/CN2020/087806
Other languages
English (en)
French (fr)
Inventor
张丽杰
陈冠男
朱丹
刘瀚文
Original Assignee
京东方科技集团股份有限公司
Priority date
Filing date
Publication date
Application filed by 京东方科技集团股份有限公司
Publication of WO2020228536A1

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 - Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/50 - Information retrieval of still image data
    • G06F 16/58 - Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/583 - Retrieval characterised by using metadata automatically derived from the content
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 - Image analysis
    • G06T 7/10 - Segmentation; Edge detection
    • G06T 7/11 - Region-based segmentation
    • G06T 7/181 - Segmentation; Edge detection involving edge growing or edge linking
    • G06T 7/187 - Segmentation; Edge detection involving region growing, region merging or connected component labelling
    • G06T 2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 - Image acquisition modality
    • G06T 2207/10004 - Still image; Photographic image

Definitions

  • The present disclosure relates to the field of image processing technology, and in particular to an icon generation method and device, a method for obtaining an icon, an electronic device, and a computer-readable storage medium.
  • As computer graphics with a clear referential meaning, icons are widely used commercially, both on traditional Internet websites and in mobile Internet APPs (applications).
  • The present disclosure provides an icon generation method and device to address the long time and low efficiency of existing icon generation.
  • An icon generation method is disclosed, including: obtaining a target keyword; retrieving corresponding target graphics from a preset icon database according to the target keyword; inputting the target graphics into a preset neural network model to obtain a synthesized icon; and processing the synthesized icon to obtain a target icon.
  • The present disclosure also discloses an icon generation method, including: obtaining a target keyword; retrieving, according to the target keyword, multiple sample graphics corresponding to the target keyword from a preset icon database as multiple target graphics, where the icon database includes multiple sample icons and multiple sample graphics corresponding to each sample icon, and the sample keywords of the sample icons corresponding to the retrieved sample graphics are similar to the target keyword; and inputting the multiple target graphics into a preset neural network model to synthesize them into a target icon.
  • Optionally, processing the synthesized icon to obtain the target icon includes: segmenting the synthesized icon to obtain multiple icon area blocks; fitting the edges of the multiple icon area blocks; and coloring the fitted icon area blocks to obtain the target icon.
  • Optionally, processing the target icon to obtain a processed target icon further includes: segmenting the target icon according to its color boundaries to obtain multiple icon area blocks; fitting the edges of the multiple icon area blocks to obtain multiple closed icon area blocks with smooth edges; and coloring the multiple closed icon area blocks with smooth edges to obtain the target icon.
  • Optionally, the icon database includes multiple sample icons, a sample word vector corresponding to each sample icon, and multiple sample graphics corresponding to each sample icon; before retrieving the corresponding target graphics from the preset icon database according to the target keyword, the method further includes: obtaining sample icons; associating the obtained sample icons with sample keywords, and obtaining the sample keywords corresponding to the sample icons; inputting the sample keywords into a word vector model, and obtaining the sample word vectors corresponding to the sample keywords; processing the sample icons, and obtaining the multiple sample graphics corresponding to each sample icon; and generating the icon database according to the sample icons, the sample word vectors, and the multiple sample graphics.
  • Optionally, generating the preset icon database further includes: obtaining sample icons; associating the obtained sample icons with sample keywords, and obtaining the sample keywords corresponding to the sample icons; processing the sample icons, and obtaining the multiple sample graphics corresponding to each sample icon; and generating the icon database according to the sample icons, the sample keywords, and the multiple sample graphics.
  • Optionally, retrieving the corresponding target graphics from the preset icon database according to the target keyword includes: inputting the target keyword into the word vector model to obtain a target word vector; computing the similarity between the target word vector and each sample word vector in the icon database; and obtaining the multiple sample graphics corresponding to the sample word vectors whose similarity is greater than a set threshold, to obtain the target graphics.
  • Optionally, the neural network model is a generative adversarial network model, and training it includes: inputting the multiple sample graphics corresponding to the sample icon and random noise into an initial generation network unit to obtain a generated icon; inputting the generated icon and the sample icon into an initial discrimination network unit to obtain a discrimination result; and correcting the parameters of the initial generation network unit and/or the initial discrimination network unit according to the discrimination result to obtain the generative adversarial network model.
  • The present disclosure also discloses an icon generating device, including: a target keyword acquisition module configured to obtain a target keyword; a target graphic retrieval module configured to retrieve corresponding target graphics from a preset icon database according to the target keyword; a target graphic input module configured to input the target graphics into a preset neural network model to obtain a synthesized icon; and an icon processing module configured to process the synthesized icon to obtain a target icon.
  • Optionally, the icon processing module includes: an icon segmentation sub-module configured to segment the synthesized icon to obtain multiple icon area blocks; a fitting sub-module configured to fit the edges of the multiple icon area blocks; and a coloring sub-module configured to color the fitted icon area blocks to obtain the target icon.
  • Optionally, the icon database includes multiple sample icons, a sample word vector corresponding to each sample icon, and multiple sample graphics corresponding to each sample icon; the device further includes: a sample icon acquisition module configured to obtain sample icons; a keyword calibration module configured to associate the obtained sample icons with sample keywords and obtain the sample keywords corresponding to the sample icons; a sample keyword input module configured to input the sample keywords into a word vector model and obtain the sample word vectors corresponding to the sample keywords; a sample icon processing module configured to process the sample icons and obtain the multiple sample graphics corresponding to each sample icon; and an icon database generation module configured to generate the icon database according to the sample icons, the sample word vectors, and the multiple sample graphics.
  • Optionally, the target graphic retrieval module includes: a target keyword input sub-module configured to input the target keyword into the word vector model to obtain a target word vector; a similarity calculation sub-module configured to compute the similarity between the target word vector and each sample word vector in the icon database; and a target graphic acquisition sub-module configured to obtain the multiple sample graphics corresponding to the sample word vectors whose similarity is greater than a set threshold, to obtain the target graphics.
  • Optionally, the neural network model is a generative adversarial network model, and the device further includes: a sample graphic input module configured to input the multiple sample graphics corresponding to the sample icon and random noise into an initial generation network unit to obtain a generated icon; a discrimination result acquisition module configured to input the generated icon and the sample icon into an initial discrimination network unit to obtain a discrimination result; and a parameter correction module configured to correct the parameters of the initial generation network unit and/or the initial discrimination network unit according to the discrimination result to obtain the generative adversarial network model.
  • The present disclosure also discloses a method for obtaining an icon, including: obtaining a target keyword; and outputting a target icon based on the target keyword and a preset icon database, where the icon database includes multiple sample word vectors and a sample icon corresponding to each sample word vector; the distance between the word vector of the target keyword and at least one sample word vector of the multiple sample word vectors is less than a predetermined threshold, and the target icon is similar to the sample icon corresponding to the at least one sample word vector.
  • The present disclosure also discloses an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor; when the computer program is executed by the processor, the steps of the above icon generation method are realized.
  • The present disclosure additionally discloses a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above icon generation method are realized.
  • FIG. 1 shows a flowchart of an icon generation method according to an embodiment of the present disclosure;
  • FIG. 2 shows a flowchart of another icon generation method according to an embodiment of the present disclosure;
  • FIG. 3 shows a schematic diagram of the process of generating an icon database according to an embodiment of the present disclosure;
  • FIG. 4 shows a schematic diagram of the training process of a generative adversarial network model according to an embodiment of the present disclosure;
  • FIG. 5 shows a structural block diagram of an icon generating device according to an embodiment of the present disclosure;
  • FIG. 6 shows a structural block diagram of another icon generating device according to an embodiment of the present disclosure.
  • Step 101: Obtain a target keyword.
  • In this embodiment of the present disclosure, when an icon user wants to obtain a target icon, the user first enters a target keyword, which is then obtained. The target keyword is a word or character related to an attribute of the target icon.
  • Step 102: Retrieve corresponding target graphics from a preset icon database according to the target keyword.
  • In this embodiment, an icon database is created in advance; it stores multiple sample icons, a sample word vector corresponding to each sample icon, and multiple sample graphics corresponding to each sample icon.
  • Sample icons are existing icons that have already been designed.
  • The multiple sample graphics are the closed shapes obtained by processing a sample icon, such as triangles, rectangles, and ellipses.
  • For example, a sample icon can be decomposed by image processing into the multiple sample graphics that compose it.
  • A sample icon, its sample word vector, and its multiple sample graphics are in one-to-one correspondence.
  • Optionally, the icon database may store the sample word vector as the retrieval key and the sample icon as the corresponding value; looking up a sample word vector then returns the corresponding sample icon and the multiple sample graphics of that sample icon.
  • The obtained target keyword is compared against each sample word vector in the icon database, and the multiple sample graphics corresponding to the sample word vectors that meet a preset condition are selected, thereby retrieving the corresponding target graphics from the icon database.
  • For example, the word vector of the target keyword may be compared with each sample word vector in the icon database to obtain the multiple sample graphics corresponding to the sample word vectors that meet the preset condition. Comparing word vectors makes it faster to locate the corresponding sample icons and sample graphics.
  • Step 103: Input the target graphics into a preset neural network model to obtain a synthesized icon.
  • In this embodiment, a neural network model is obtained by training in advance on the multiple sample icons in the created icon database and the multiple sample graphics corresponding to each sample icon.
  • The neural network model can be based on a feedforward neural network.
  • A feedforward network can be implemented as an acyclic graph in which nodes are arranged in layers.
  • Typically, a feedforward network topology includes an input layer and an output layer separated by at least one hidden layer.
  • The hidden layer transforms the input received by the input layer into a representation useful for generating the output in the output layer.
  • Network nodes are fully connected to nodes in adjacent layers via edges, but there are no edges between nodes within a layer.
  • Data received at the nodes of the input layer of the feedforward network is propagated (i.e., "fed forward") to the nodes of the output layer via an activation function that computes the state of the nodes of each successive layer based on coefficients ("weights") respectively associated with each of the edges connecting those layers.
  • The output of the neural network model can take various forms, which the present disclosure does not limit.
  • The neural network model may also be another model, such as a convolutional neural network (CNN) model, a recurrent neural network (RNN) model, or a generative adversarial network (GAN) model, but is not limited to these; other neural network models well known to those skilled in the art may also be used.
  • For example, the neural network model may be a generative adversarial network (GAN) model.
  • Optionally, a generative adversarial network performs unsupervised learning by letting two neural networks play a game against each other.
  • A generative adversarial network can consist of a generation network and a discrimination network.
  • The generation network samples randomly from the latent space as input, and its output must imitate the real samples in the training set as closely as possible.
  • The input of the discrimination network is either a real sample or the output of the generation network, and its purpose is to distinguish the generation network's output from the real samples as well as possible.
  • The generation network, in turn, tries to deceive the discrimination network.
  • The two networks confront each other and continuously adjust their parameters until the discrimination network can no longer judge whether the generation network's output is real (i.e., whether it belongs to the sample set).
  • The retrieved target graphics are input into the preset neural network model to obtain a synthesized icon; the synthesized icon is different from the sample icons corresponding to the selected multiple sample graphics, but shares the same or similar characteristics with them.
  • Step 104: Process the synthesized icon to obtain a target icon.
  • In this embodiment, icons synthesized by the neural network model are usually irregular, whereas existing sample icons are composed of regular geometric shapes; the synthesized icon therefore needs to be processed so that the obtained target icon is composed of regular geometric shapes.
  • In this embodiment, a target keyword is obtained; corresponding target graphics are retrieved from a preset icon database according to the target keyword; the target graphics are input into a preset neural network model to obtain a synthesized icon; and the synthesized icon is processed to obtain the target icon.
  • By obtaining the target keyword entered by the icon user and using the preset icon database and neural network model, the desired target icon can be obtained, which greatly reduces the time spent on icon generation and improves the efficiency of icon generation.
  • Referring to FIG. 2, there is shown a flowchart of another icon generation method according to an embodiment of the present disclosure, which may include the following steps:
  • Step 201: Obtain sample icons.
  • Step 202: Associate the obtained sample icons with sample keywords, and obtain the sample keywords corresponding to the sample icons.
  • The sample keywords may be words describing attributes of the sample icons.
  • For example, a sample icon can be associated with its sample keyword by manually labeling the sample keyword of each sample icon.
  • Optionally, the sample keywords of the sample icons can also be labeled in other ways, which the present disclosure does not limit.
  • Step 203: Input the sample keywords into a word vector model, and obtain the sample word vectors corresponding to the sample keywords.
  • For example, words from the Chinese corpus of Wikipedia can be obtained in advance and used to train the word vector model.
  • The word vector model can be a word2vec model; word2vec models are a family of models used to produce word vectors.
  • A word2vec model can be a shallow two-layer neural network trained to reconstruct words/keywords.
  • Once trained, the word2vec model maps a word/keyword to a vector (for example, a multi-dimensional floating-point vector).
  • The distance between word vectors can express the similarity between words; for example, when the cosine distance between the word vectors of two words is small, the two words have similar semantics.
  • Inputting the sample keyword corresponding to a sample icon into the word vector model yields the sample word vector corresponding to that sample keyword.
  • Sample keywords and sample word vectors are in one-to-one correspondence.
  • Step 204: Process each sample icon to obtain the multiple sample graphics corresponding to it.
  • Specifically, the obtained sample icon is first resized to a set size, and the resized sample icon is then segmented into different area blocks. Since the edges of the segmented area blocks are not necessarily smooth, the edges of the segmented area blocks are fitted to form different closed graphics. Finally, the different closed graphics are colored, each filled with a different color, yielding the multiple sample graphics corresponding to the sample icon.
  • An image segmentation algorithm can be used to segment the sample icons: since different areas of a sample icon differ in color, the sample icon is divided into different area blocks according to its color distribution.
  • For example, the image segmentation algorithm may be a color gradient algorithm.
  • The specific steps of segmenting the sample icon with the color gradient algorithm are: convert the sample icon into a grayscale icon, and compute the gradient of the grayscale value at each pixel of the grayscale icon.
  • Normally, the gradient changes most at the boundaries between different colors in the sample icon.
  • The color boundaries of the icon are detected from the gradient values, and the icon can then be divided along those boundaries.
  • Thus, the computed gradient values divide the sample icon into different area blocks.
  • Note that line types such as straight lines, arcs, parabolas, and Bézier curves can be used to fit the edges of the segmented area blocks.
  • Step 205: Generate the icon database according to the sample icons, the sample word vectors, and the multiple sample graphics.
  • In this embodiment, the icon database is generated from the obtained sample icons, sample word vectors, and multiple sample graphics.
  • The icon database includes multiple sample icons, the sample word vector corresponding to each sample icon, and the multiple sample graphics corresponding to each sample icon; these three are in correspondence with one another.
  • Step 206: Input the multiple sample graphics corresponding to a sample icon, together with random noise, into the initial generation network unit to obtain a generated icon.
  • In this embodiment, the neural network model may be a generative adversarial network model, and an initial generative adversarial network model can be created in advance.
  • The initial generative adversarial network model includes an initial generation network unit and an initial discrimination network unit, whose parameters are set arbitrarily; for example, the weights and biases of the hidden layers in the initial generation network unit and the initial discrimination network unit can be set to zero.
  • The multiple sample graphics corresponding to a sample icon in the icon database, together with random noise, are input into the initial generation network unit, which generates a new icon.
  • Step 207: Input the generated icon and the sample icon into the initial discrimination network unit to obtain a discrimination result.
  • The discrimination result is a probability value between 0 and 1 used to judge whether the generated icon is a real icon: 1 means the result is true, and 0 means it is false.
  • Step 208: Correct the parameters of the initial generation network unit and/or the initial discrimination network unit according to the discrimination result, and obtain the generative adversarial network model.
  • In this embodiment, the parameters are corrected according to the discrimination result. For example, if the initial discrimination network unit judges the generated icon to be true, the parameters of the initial discrimination network unit should be adjusted so that it identifies more accurately which icons actually exist in the icon database; if the initial discrimination network unit judges the generated icon to be false, the parameters of the initial generation network unit should be adjusted so that the icons it generates come closer to the icons that actually exist in the icon database. Training continues until the absolute difference between the discrimination result and 0.5 is less than a set discrimination threshold.
  • For example, with a discrimination threshold of 0.01, suppose the obtained discrimination result is 0.499 (meaning the discrimination network unit judges that an icon generated by the generation network unit has a 49.9% probability of being a real icon in the icon database and a 50.1% probability of not being one).
  • The absolute difference between 0.499 and 0.5 is 0.001, which is less than the set discrimination threshold of 0.01, so the training of the generative adversarial network model is determined to be complete.
  • Specifically, when training the initial discrimination network unit, the parameters of the initial generation network unit are first initialized to fixed values; the multiple sample graphics corresponding to a sample icon, together with random noise, are then input into the initial generation network unit to generate a new icon; the generated icon and the sample icon are input into the initial discrimination network unit to obtain a discrimination result; and the parameters of the initial discrimination network unit are corrected according to the discrimination result.
  • When training the initial generation network unit, the parameters of the initial discrimination network unit are first initialized to fixed values; the multiple sample graphics corresponding to a sample icon, together with random noise, are input into the initial generation network unit to generate a new icon; the generated icon and the sample icon are input into the initial discrimination network unit to obtain a discrimination result; and the parameters of the initial generation network unit are corrected according to the discrimination result.
  • Step 209: Obtain a target keyword.
  • Step 210: Input the target keyword into the word vector model to obtain a target word vector.
  • Since the icon database stores the sample word vectors corresponding to the sample keywords, computing the semantic similarity between a sample word vector and the target keyword requires inputting the obtained target keyword into the word vector model to obtain the target word vector corresponding to the target keyword.
  • Step 211: Compute the similarity between the target word vector and each sample word vector in the icon database.
  • The cosine similarity between the target word vector corresponding to the target keyword and each sample word vector in the icon database is computed, yielding the similarity between the target word vector and each sample word vector; this similarity refers to the semantic similarity between the target word vector and a sample word vector.
  • Step 212: Obtain the multiple sample graphics corresponding to the sample word vectors whose similarity is greater than a set threshold, and obtain the target graphics.
  • A threshold can be set manually in advance. After the similarity between the target word vector and each sample word vector in the icon database is obtained, the multiple sample graphics corresponding to sample word vectors whose similarity is less than or equal to the set threshold are filtered out, and only the multiple sample graphics corresponding to sample word vectors whose similarity is greater than the set threshold are obtained as the target graphics.
  • Step 213: Input the target graphics into the preset neural network model to obtain a synthesized icon.
  • The principle of this step is similar to that of step 103 in the first embodiment and is not repeated here.
  • Step 214: Segment the synthesized icon to obtain multiple icon area blocks.
  • In this embodiment, an image segmentation algorithm can be used to divide the synthesized icon into multiple icon area blocks.
  • Since the input target graphics include multiple sample graphics filled with different colors, an icon synthesized by the generative adversarial network model also contains multiple colors, so the synthesized icon can be segmented into multiple icon area blocks based on the color gradient algorithm.
  • Step 215: Fit the edges of the multiple icon area blocks.
  • The edges of the multiple icon area blocks obtained by segmentation are fitted to form multiple closed graphics.
  • Line types such as straight lines, arcs, parabolas, and Bézier curves can be used to fit the edges of the multiple icon area blocks.
  • In practice, the simplest line type, a straight line, can be tried first: the line parameters are fitted by the least squares method, and the error between the fitted line and the edge of the icon area block is then computed.
  • For example, when the error between the fitted line and the edge of the icon area block is less than or equal to a preset error, the fit is considered complete, yielding a closed graphic with straight edges.
  • Otherwise, more complex line types, such as Bézier curves, are used for fitting to form the multiple closed graphics.
  • Step 216: Color the fitted icon area blocks to obtain the target icon.
  • In this embodiment, a color selected by the user may be received, and the fitted icon area blocks are colored accordingly.
  • The target icon thus obtained not only has a new shape and a corresponding color, but also better meets the needs of the icon user.
  • In this embodiment, the obtained sample icons are associated with sample keywords to obtain the sample keywords, and the sample keywords are input into the word vector model to obtain the sample word vectors.
  • The sample icons are processed to obtain the multiple sample graphics corresponding to each sample icon.
  • The icon database is generated according to the sample icons, the sample word vectors, and the multiple sample graphics.
  • The multiple sample graphics corresponding to a sample icon, together with random noise, are input into the initial generation network unit to obtain a generated icon, and the generative adversarial network model is obtained through training as described above.
  • Then, a target keyword is obtained and input into the word vector model to obtain a target word vector.
  • The similarity between the target word vector and each sample word vector in the icon database is computed, and the target graphics are obtained from the multiple sample graphics whose similarity exceeds the set threshold.
  • The target graphics are input into the preset neural network model (such as the aforementioned generative adversarial network model) to obtain a synthesized icon, which is then segmented, edge-fitted, and colored to obtain the target icon.
  • By creating the icon database in advance and obtaining the generative adversarial network model through training, the embodiment of the present disclosure can obtain the desired target icon from the target keyword entered by the icon user.
  • This greatly reduces the time spent on icon generation and improves the efficiency of icon generation.
  • The embodiment of the present disclosure also discloses a method for obtaining an icon, which includes: obtaining a target keyword; and outputting a target icon based on the target keyword and a preset icon database.
  • The icon database includes multiple sample word vectors and a sample icon corresponding to each sample word vector.
  • The distance between the word vector of the target keyword and at least one sample word vector of the multiple sample word vectors is less than a predetermined threshold, and the target icon is similar to the sample icon corresponding to the at least one sample word vector.
  • In this embodiment, an icon database is created in advance and a generative adversarial network model is obtained through training.
  • The desired target icon can be obtained from the target keyword entered by the icon user, using the icon database and the generative adversarial network model.
  • The target icon is similar to, but different from, the sample icons. This greatly reduces the time spent obtaining icons and improves the efficiency of icon generation.
  • Referring to FIG. 5, there is shown a structural block diagram of an icon generating device according to an embodiment of the present disclosure.
  • The icon generating device 500 of this embodiment of the present disclosure includes:
  • a target keyword acquisition module 501, configured to obtain a target keyword;
  • a target graphic retrieval module 502, configured to retrieve corresponding target graphics from a preset icon database according to the target keyword;
  • a target graphic input module 503, configured to input the target graphics into a preset neural network model to obtain a synthesized icon;
  • an icon processing module 504, configured to process the synthesized icon to obtain a target icon.
  • Referring to FIG. 6, there is shown a structural block diagram of another icon generating device according to an embodiment of the present disclosure.
  • On the basis of FIG. 5, optionally, the icon processing module 504 includes an icon segmentation sub-module 5041, a fitting sub-module 5042, and a coloring sub-module 5043:
  • the icon segmentation sub-module 5041, configured to segment the synthesized icon to obtain multiple icon area blocks;
  • the fitting sub-module 5042, configured to fit the edges of the multiple icon area blocks;
  • the coloring sub-module 5043, configured to color the fitted icon area blocks to obtain the target icon.
  • Optionally, the icon database includes multiple sample icons, a sample word vector corresponding to each sample icon, and multiple sample graphics corresponding to each sample icon.
  • The icon generating device 500 further includes a sample icon acquisition module 505, a keyword calibration module 506, a sample keyword input module 507, a sample icon processing module 508, and an icon database generation module 509:
  • the sample icon acquisition module 505, configured to obtain sample icons;
  • the keyword calibration module 506, configured to associate the obtained sample icons with sample keywords, and obtain the sample keywords corresponding to the sample icons;
  • the sample keyword input module 507, configured to input the sample keywords into a word vector model, and obtain the sample word vectors corresponding to the sample keywords;
  • the sample icon processing module 508, configured to process the sample icons, and obtain the multiple sample graphics corresponding to each sample icon;
  • the icon database generation module 509, configured to generate the icon database according to the sample icons, the sample word vectors, and the multiple sample graphics.
  • Optionally, the target graphic retrieval module 502 includes a target keyword input sub-module 5021, a similarity calculation sub-module 5022, and a target graphic acquisition sub-module 5023:
  • the target keyword input sub-module 5021, configured to input the target keyword into the word vector model to obtain a target word vector;
  • the similarity calculation sub-module 5022, configured to compute the similarity between the target word vector and each sample word vector in the icon database;
  • the target graphic acquisition sub-module 5023, configured to obtain the multiple sample graphics corresponding to the sample word vectors whose similarity is greater than a set threshold, and obtain the target graphics.
  • Optionally, the neural network model is a generative adversarial network model, and the icon generating device 500 further includes a sample graphic input module 510, a discrimination result acquisition module 511, and a parameter correction module 512:
  • the sample graphic input module 510, configured to input the multiple sample graphics corresponding to a sample icon, together with random noise, into the initial generation network unit to obtain a generated icon;
  • the discrimination result acquisition module 511, configured to input the generated icon and the sample icon into the initial discrimination network unit to obtain a discrimination result;
  • the parameter correction module 512, configured to correct the parameters of the initial generation network unit and/or the initial discrimination network unit according to the discrimination result, to obtain the generative adversarial network model.
  • In this embodiment, a target keyword is obtained; corresponding target graphics are retrieved from a preset icon database according to the target keyword; the target graphics are input into a preset neural network model to obtain a synthesized icon; and the synthesized icon is processed to obtain the target icon.
  • By obtaining the target keyword entered by the icon user and using the preset icon database and neural network model, the desired target icon can be obtained, which greatly reduces the time spent on icon generation and improves the efficiency of icon generation.
  • Correspondingly, an embodiment of the present disclosure also provides an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor; when the computer program is executed by the processor, the steps of the icon generation method described in the first and second embodiments of the present disclosure are realized.
  • The embodiment of the present disclosure also discloses a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the icon generation method described in the first and second embodiments of the present disclosure are realized.
  • As for the device embodiments, since they are basically similar to the method embodiments, the description is relatively simple; for relevant parts, refer to the corresponding description of the method embodiments.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Library & Information Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

An icon generation method and device, a method for obtaining an icon, an electronic device, and a storage medium. The icon generation method includes: obtaining a target keyword; retrieving, according to the target keyword, multiple sample graphics corresponding to the target keyword from a preset icon database as multiple target graphics, where the icon database includes multiple sample icons and multiple sample graphics corresponding to each sample icon, and the sample keywords of the sample icons corresponding to the multiple sample graphics corresponding to the target keyword are similar to the target keyword; and inputting the multiple target graphics into a preset neural network model to synthesize the multiple target graphics into a target icon. By obtaining the target keyword entered by the icon user, the present disclosure can directly obtain the target icon, greatly reducing the time spent on icon generation and improving the efficiency of icon generation.

Description

Icon generation method and device, method for obtaining an icon, electronic device, and storage medium
This application claims priority to Chinese Patent Application No. 201910403836.9, filed on May 15, 2019, the entire disclosure of which is incorporated herein by reference as part of this application.
Technical Field
The present disclosure relates to the field of image processing technology, and in particular to an icon generation method and device, a method for obtaining an icon, an electronic device, and a computer-readable storage medium.
Background
As computer graphics with a clear referential meaning, icons are widely used commercially, both on traditional Internet websites and in mobile Internet APPs (applications).
At present, generating an icon requires a designer to work step by step: first conceiving the idea, then composing the image, and finally completing the design. Generating a single icon costs a designer a great deal of time and effort, so current icon generation is time-consuming and inefficient.
Summary
The present disclosure provides an icon generation method and device to address the long time and low efficiency of existing icon generation.
To solve the above problem, the present disclosure discloses an icon generation method, including: obtaining a target keyword; retrieving corresponding target graphics from a preset icon database according to the target keyword; inputting the target graphics into a preset neural network model to obtain a synthesized icon; and processing the synthesized icon to obtain a target icon.
The present disclosure also discloses an icon generation method, including: obtaining a target keyword; retrieving, according to the target keyword, multiple sample graphics corresponding to the target keyword from a preset icon database as multiple target graphics, where the icon database includes multiple sample icons and multiple sample graphics corresponding to each sample icon, and the sample keywords of the sample icons corresponding to the multiple sample graphics corresponding to the target keyword are similar to the target keyword; and inputting the multiple target graphics into a preset neural network model to synthesize the multiple target graphics into a target icon.
Optionally, processing the synthesized icon to obtain the target icon includes: segmenting the synthesized icon to obtain multiple icon area blocks; fitting the edges of the multiple icon area blocks; and coloring the fitted icon area blocks to obtain the target icon.
Optionally, processing the target icon to obtain a processed target icon further includes: segmenting the target icon according to its color boundaries to obtain multiple icon area blocks; fitting the edges of the multiple icon area blocks to obtain multiple closed icon area blocks with smooth edges; and coloring the multiple closed icon area blocks with smooth edges to obtain the target icon.
Optionally, the icon database includes multiple sample icons, a sample word vector corresponding to each sample icon, and multiple sample graphics corresponding to each sample icon; before retrieving the corresponding target graphics from the preset icon database according to the target keyword, the method further includes: obtaining sample icons; associating the obtained sample icons with sample keywords, and obtaining the sample keywords corresponding to the sample icons; inputting the sample keywords into a word vector model, and obtaining the sample word vectors corresponding to the sample keywords; processing the sample icons, and obtaining the multiple sample graphics corresponding to the sample icons; and generating the icon database according to the sample icons, the sample word vectors, and the multiple sample graphics.
Optionally, generating the preset icon database further includes: obtaining sample icons; associating the obtained sample icons with sample keywords, and obtaining the sample keywords corresponding to the sample icons; processing the sample icons, and obtaining the multiple sample graphics corresponding to the sample icons; and generating the icon database according to the sample icons, the sample keywords, and the multiple sample graphics.
Optionally, retrieving the corresponding target graphics from the preset icon database according to the target keyword includes: inputting the target keyword into the word vector model to obtain a target word vector; computing the similarity between the target word vector and each sample word vector in the icon database; and obtaining the multiple sample graphics corresponding to the sample word vectors whose similarity is greater than a set threshold, to obtain the target graphics.
Optionally, the neural network model is a generative adversarial network model, and training the generative adversarial network model includes: inputting the multiple sample graphics corresponding to the sample icon and random noise into an initial generation network unit to obtain a generated icon; inputting the generated icon and the sample icon into an initial discrimination network unit to obtain a discrimination result; and correcting the parameters of the initial generation network unit and/or the initial discrimination network unit according to the discrimination result to obtain the generative adversarial network model.
The present disclosure also discloses an icon generating device. The device includes: a target keyword acquisition module configured to obtain a target keyword; a target graphic retrieval module configured to retrieve corresponding target graphics from a preset icon database according to the target keyword; a target graphic input module configured to input the target graphics into a preset neural network model to obtain a synthesized icon; and an icon processing module configured to process the synthesized icon to obtain a target icon.
Optionally, the icon processing module includes: an icon segmentation sub-module configured to segment the synthesized icon to obtain multiple icon area blocks; a fitting sub-module configured to fit the edges of the multiple icon area blocks; and a coloring sub-module configured to color the fitted icon area blocks to obtain the target icon.
Optionally, the icon database includes multiple sample icons, a sample word vector corresponding to each sample icon, and multiple sample graphics corresponding to each sample icon; the device further includes: a sample icon acquisition module configured to obtain sample icons; a keyword calibration module configured to associate the obtained sample icons with sample keywords and obtain the sample keywords corresponding to the sample icons; a sample keyword input module configured to input the sample keywords into a word vector model and obtain the sample word vectors corresponding to the sample keywords; a sample icon processing module configured to process the sample icons and obtain the multiple sample graphics corresponding to the sample icons; and an icon database generation module configured to generate the icon database according to the sample icons, the sample word vectors, and the multiple sample graphics.
Optionally, the target graphic retrieval module includes: a target keyword input sub-module configured to input the target keyword into the word vector model to obtain a target word vector; a similarity calculation sub-module configured to compute the similarity between the target word vector and each sample word vector in the icon database; and a target graphic acquisition sub-module configured to obtain the multiple sample graphics corresponding to the sample word vectors whose similarity is greater than a set threshold, to obtain the target graphics.
Optionally, the neural network model is a generative adversarial network model, and the device further includes: a sample graphic input module configured to input the multiple sample graphics corresponding to the sample icon and random noise into an initial generation network unit to obtain a generated icon; a discrimination result acquisition module configured to input the generated icon and the sample icon into an initial discrimination network unit to obtain a discrimination result; and a parameter correction module configured to correct the parameters of the initial generation network unit and/or the initial discrimination network unit according to the discrimination result to obtain the generative adversarial network model.
The present disclosure also discloses a method for obtaining an icon, including: obtaining a target keyword; and outputting a target icon based on the target keyword and a preset icon database; where the icon database includes multiple sample word vectors and a sample icon corresponding to each sample word vector; the distance between the word vector of the target keyword and at least one sample word vector of the multiple sample word vectors is less than a predetermined threshold, and the target icon is similar to the sample icon corresponding to the at least one sample word vector.
The present disclosure also discloses an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor; when the computer program is executed by the processor, the steps of the above icon generation method are realized.
The present disclosure additionally discloses a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the above icon generation method are realized.
Brief Description of the Drawings
FIG. 1 shows a flowchart of an icon generation method according to an embodiment of the present disclosure;
FIG. 2 shows a flowchart of another icon generation method according to an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of the process of generating an icon database according to an embodiment of the present disclosure;
FIG. 4 shows a schematic diagram of the training process of a generative adversarial network model according to an embodiment of the present disclosure;
FIG. 5 shows a structural block diagram of an icon generating device according to an embodiment of the present disclosure;
FIG. 6 shows a structural block diagram of another icon generating device according to an embodiment of the present disclosure.
Detailed Description
To make the objectives, technical solutions, and advantages of the embodiments of the present disclosure clearer, the technical solutions of the embodiments of the present disclosure are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present disclosure. All other embodiments obtained by those of ordinary skill in the art based on the described embodiments without creative effort fall within the protection scope of the present disclosure.
Referring to FIG. 1, there is shown a flowchart of an icon generation method according to an embodiment of the present disclosure, which may include the following steps:
Step 101: Obtain a target keyword.
In this embodiment of the present disclosure, when an icon user wants to obtain a target icon, the user first enters a target keyword, which is then obtained; the target keyword is a word or character related to an attribute of the target icon.
Step 102: Retrieve corresponding target graphics from a preset icon database according to the target keyword.
In this embodiment, an icon database is created in advance, storing multiple sample icons, a sample word vector corresponding to each sample icon, and multiple sample graphics corresponding to each sample icon. Sample icons are existing icons that have already been designed. The multiple sample graphics are the closed shapes, such as triangles, rectangles, and ellipses, obtained by processing a sample icon; for example, a sample icon can be decomposed by image processing into the multiple sample graphics that compose it. A sample icon, its sample word vector, and its multiple sample graphics are in one-to-one correspondence. Optionally, the icon database may store the sample word vector as the retrieval key and the sample icon as the corresponding value; looking up a sample word vector then returns the corresponding sample icon and the multiple sample graphics of that sample icon.
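To make the key/value layout described above concrete, here is a minimal Python sketch of such an icon database; the triple layout, vector values, and file names are invented for illustration and are not specified by the disclosure.

```python
import numpy as np

# Hypothetical key/value layout: the sample word vector is the retrieval key;
# the sample icon and its decomposed sample graphics are the stored value.
# Every concrete value here is made up for illustration.
icon_db = [
    (np.array([0.12, -0.53, 0.88]), "icons/cart.png", ["rect.svg", "ellipse.svg"]),
    (np.array([0.75, 0.10, -0.44]), "icons/home.png", ["triangle.svg", "rect.svg"]),
]
```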
The obtained target keyword is compared against each sample word vector in the icon database, and the multiple sample graphics corresponding to the sample word vectors that meet a preset condition are selected, thereby retrieving the corresponding target graphics from the icon database. For example, the word vector of the target keyword may be compared with each sample word vector in the icon database to obtain the multiple sample graphics corresponding to the sample word vectors that meet the preset condition; comparing word vectors makes it faster to locate the corresponding sample icons and sample graphics.
Step 103: Input the target graphics into a preset neural network model to obtain a synthesized icon.
In this embodiment of the present disclosure, a neural network model is obtained by training in advance on the multiple sample icons in the created icon database and the multiple sample graphics corresponding to each sample icon. The neural network model can be based on a feedforward neural network. A feedforward network can be implemented as an acyclic graph in which nodes are arranged in layers; typically, a feedforward network topology includes an input layer and an output layer separated by at least one hidden layer, and the hidden layer transforms the input received by the input layer into a representation useful for generating the output in the output layer. Network nodes are fully connected to nodes in adjacent layers via edges, but there are no edges between nodes within a layer. Data received at the nodes of the input layer of the feedforward network is propagated (i.e., "fed forward") to the nodes of the output layer via an activation function that computes the state of the nodes of each successive layer based on coefficients ("weights") respectively associated with each of the edges connecting those layers. The output of the neural network model can take various forms, which the present disclosure does not limit. The neural network model may also be another model, such as a convolutional neural network (CNN) model, a recurrent neural network (RNN) model, or a generative adversarial network (GAN) model, but is not limited to these; other neural network models well known to those skilled in the art may also be used.
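As a concrete illustration of the feedforward topology just described, the following PyTorch sketch builds a fully connected network with one hidden layer; the layer sizes and the activation function are arbitrary placeholders, since the disclosure does not fix an architecture.

```python
import torch
import torch.nn as nn

# Input layer -> hidden layer -> output layer; nodes are fully connected to
# nodes in adjacent layers, and an activation function propagates data forward.
mlp = nn.Sequential(
    nn.Linear(16, 32),  # weights on the edges between input and hidden layer
    nn.ReLU(),          # activation function applied at the hidden layer
    nn.Linear(32, 8),   # hidden layer to output layer
)
output = mlp(torch.randn(1, 16))  # data is "fed forward" from input to output
```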
For example, the neural network model may be a generative adversarial network (GAN) model. Optionally, a generative adversarial network performs unsupervised learning by letting two neural networks play a game against each other. A generative adversarial network can consist of a generation network and a discrimination network. The generation network samples randomly from the latent space as input, and its output must imitate the real samples in the training set as closely as possible. The input of the discrimination network is either a real sample or the output of the generation network, and its purpose is to distinguish the generation network's output from the real samples as well as possible; the generation network, in turn, tries to deceive the discrimination network. The two networks confront each other and continuously adjust their parameters until the discrimination network can no longer judge whether the generation network's output is real (i.e., whether it belongs to the sample set).
The retrieved target graphics are input into the preset neural network model to obtain a synthesized icon. The synthesized icon is different from the sample icons corresponding to the selected multiple sample graphics, but shares the same or similar characteristics with them.
Step 104: Process the synthesized icon to obtain a target icon.
In this embodiment of the present disclosure, icons synthesized by the neural network model are usually irregular, whereas existing sample icons are composed of regular geometric shapes; the synthesized icon therefore needs to be processed so that the obtained target icon is composed of regular geometric shapes.
In this embodiment of the present disclosure, a target keyword is obtained; corresponding target graphics are retrieved from a preset icon database according to the target keyword; the target graphics are input into a preset neural network model to obtain a synthesized icon; and the synthesized icon is processed to obtain the target icon. By obtaining the target keyword entered by the icon user and using the preset icon database and neural network model, the desired target icon can be obtained, which greatly reduces the time spent on icon generation and improves the efficiency of icon generation.
Referring to FIG. 2, there is shown a flowchart of another icon generation method according to an embodiment of the present disclosure, which may include the following steps:
Step 201: Obtain sample icons.
Step 202: Associate the obtained sample icons with sample keywords, and obtain the sample keywords corresponding to the sample icons.
As shown in FIG. 3, multiple existing sample icons are first obtained, and each obtained sample icon is then associated with a sample keyword, establishing a one-to-one correspondence between sample keywords and sample icons. The sample keywords may be words describing attributes of the sample icons. For example, a sample icon can be associated with its sample keyword by manually labeling the sample keyword of each sample icon; optionally, the sample keywords of the sample icons can also be labeled in other ways, which the present disclosure does not limit.
Step 203: Input the sample keywords into a word vector model, and obtain the sample word vectors corresponding to the sample keywords.
As shown in FIG. 3, words from the Chinese corpus of Wikipedia can be obtained in advance and used to train the word vector model, which can be a word2vec model. For example, word2vec models are a family of models used to produce word vectors; a word2vec model can be a shallow two-layer neural network trained to reconstruct words/keywords. Once training is complete, the word2vec model maps a word/keyword to a vector (for example, a multi-dimensional floating-point vector). The distance between word vectors can express the similarity between words; for example, when the cosine distance between the word vectors of two words is small, the two words have similar semantics.
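A minimal sketch of such a word vector model using the gensim library (the choice of gensim, the toy corpus, and the hyperparameters are assumptions; the disclosure only specifies a word2vec model trained on words from Wikipedia's Chinese corpus):

```python
from gensim.models import Word2Vec

# Toy tokenized corpus standing in for the Wikipedia training data.
sentences = [["shopping", "cart", "icon"], ["home", "page", "icon"]]
model = Word2Vec(sentences, vector_size=100, window=5, min_count=1)

vec = model.wv["icon"]                     # keyword -> multi-dimensional vector
sim = model.wv.similarity("cart", "home")  # cosine similarity between words
```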
Inputting the sample keyword corresponding to a sample icon into the word vector model yields the sample word vector corresponding to that sample keyword; sample keywords and sample word vectors are also in one-to-one correspondence.
Step 204: Process the sample icons to obtain the multiple sample graphics corresponding to each sample icon.
As shown in FIG. 3, the obtained sample icon is processed as follows: the sample icon is first resized to a set size, and the resized sample icon is then segmented into different area blocks. Since the edges of the segmented area blocks are not necessarily smooth, the edges of the segmented area blocks are fitted to form different closed graphics. Finally, the different closed graphics are colored, each filled with a different color, yielding the multiple sample graphics corresponding to the sample icon.
An image segmentation algorithm can be used to segment the sample icons: since different areas of a sample icon differ in color, the sample icon is divided into different area blocks according to its color distribution.
For example, the image segmentation algorithm may be a color gradient algorithm, which segments the sample icon as follows: convert the sample icon into a grayscale icon, and compute the gradient of the grayscale value at each pixel of the grayscale icon. Normally, the gradient changes most at the boundaries between different colors in the sample icon, so the color boundaries of the icon are detected from the gradient values, and the icon can then be divided along those boundaries; the computed gradient values thus divide the sample icon into different area blocks.
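One possible OpenCV/NumPy rendering of this color-gradient segmentation is sketched below; the Sobel operator, the gradient threshold, and the connected-component grouping are implementation assumptions, not choices fixed by the disclosure.

```python
import cv2
import numpy as np

def split_icon_by_color_edges(icon_bgr, grad_thresh=40):
    """Divide an icon into area blocks along its color boundaries (a sketch)."""
    gray = cv2.cvtColor(icon_bgr, cv2.COLOR_BGR2GRAY)  # icon -> grayscale
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)             # horizontal gradient
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)             # vertical gradient
    magnitude = cv2.magnitude(gx, gy)                  # largest at color edges
    interior = (magnitude < grad_thresh).astype(np.uint8)
    # Connected regions of low-gradient pixels become the area blocks.
    n_labels, labels = cv2.connectedComponents(interior)
    return [np.uint8(labels == k) * 255 for k in range(1, n_labels)]
```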
Note that line types such as straight lines, arcs, parabolas, and Bézier curves can be used to fit the edges of the segmented area blocks.
Step 205: Generate the icon database according to the sample icons, the sample word vectors, and the multiple sample graphics.
In this embodiment of the present disclosure, the icon database is generated from the obtained sample icons, sample word vectors, and multiple sample graphics. The icon database includes multiple sample icons, the sample word vector corresponding to each sample icon, and the multiple sample graphics corresponding to each sample icon; a sample icon, its sample word vector, and its multiple sample graphics are in correspondence with one another.
Step 206: Input the multiple sample graphics corresponding to a sample icon, together with random noise, into the initial generation network unit to obtain a generated icon.
In this embodiment of the present disclosure, the neural network model may be a generative adversarial network model, and an initial generative adversarial network model can be created in advance. The initial generative adversarial network model includes an initial generation network unit and an initial discrimination network unit, whose parameters are set arbitrarily; for example, the weights and biases of the hidden layers in the initial generation network unit and the initial discrimination network unit can be set to zero.
As shown in FIG. 4, the multiple sample graphics corresponding to a sample icon in the icon database, together with random noise, are input into the initial generation network unit, which generates a new icon.
Step 207: Input the generated icon and the sample icon into the initial discrimination network unit to obtain a discrimination result.
As shown in FIG. 4, the generated icon and the sample icons in the icon database are input into the initial discrimination network unit to obtain a discrimination result. The discrimination result is in fact a probability value between 0 and 1 used to judge whether the generated icon is a real icon: 1 means the result is true, and 0 means it is false.
Step 208: Correct the parameters of the initial generation network unit and/or the initial discrimination network unit according to the discrimination result, and obtain the generative adversarial network model.
In this embodiment of the present disclosure, the parameters of the initial generation network unit and/or the initial discrimination network unit are corrected according to the discrimination result. For example, if the initial discrimination network unit judges the generated icon to be true, the parameters of the initial discrimination network unit should be adjusted so that it identifies more accurately which icons actually exist in the icon database; if the initial discrimination network unit judges the generated icon to be false, the parameters of the initial generation network unit should be adjusted so that the icons it generates come closer to the icons that actually exist in the icon database.
Then the multiple sample graphics corresponding to a sample icon, together with random noise, are input into the parameter-corrected generation network unit to generate a new icon; the generated icon and the sample icons in the icon database are input into the parameter-corrected discrimination network unit to obtain a discrimination result, and the parameters of the generation network unit and the discrimination network unit are corrected again, until the absolute difference between the obtained discrimination result and 0.5 is less than a set discrimination threshold, completing the training of the generative adversarial network model. At this point, the parameters of the generation network unit and the discrimination network unit have converged to stable values (i.e., further iterations no longer cause drastic parameter changes): the discrimination network unit can accurately identify which icons exist in the icon database and which do not, while the generation network unit can generate icons good enough to confuse the discrimination network unit's judgment. Optionally, the discrimination threshold can be set manually.
For example, with a discrimination threshold of 0.01, suppose the multiple sample graphics corresponding to a sample icon are input into the generative adversarial network model and the obtained discrimination result is 0.499 (meaning the discrimination network unit judges that an icon generated by the generation network unit has a 49.9% probability of being a real icon in the icon database and a 50.1% probability of not being one). The absolute difference between 0.499 and 0.5 is 0.001, which is less than the set discrimination threshold of 0.01, so the training of the generative adversarial network model is determined to be complete.
Specifically, when training the initial discrimination network unit, the parameters of the initial generation network unit are first initialized to fixed values; the multiple sample graphics corresponding to a sample icon, together with random noise, are then input into the initial generation network unit to generate a new icon; the generated icon and the sample icon are input into the initial discrimination network unit to obtain a discrimination result; and the parameters of the initial discrimination network unit are corrected according to the discrimination result.
When training the initial generation network unit, the parameters of the initial discrimination network unit are first initialized to fixed values; the multiple sample graphics corresponding to a sample icon, together with random noise, are input into the initial generation network unit to generate a new icon; the generated icon and the sample icon are input into the initial discrimination network unit to obtain a discrimination result; and the parameters of the initial generation network unit are corrected according to the discrimination result.
Of course, it is also possible not to fix the parameters of either the initial generation network unit or the initial discrimination network unit, and to correct the parameters of both units with each obtained discrimination result.
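The alternating procedure above can be summarized in a minimal PyTorch sketch. The network architectures, optimizer settings, and batch handling are assumptions; only the alternating real-versus-generated updates and the |result - 0.5| stopping test come from the description.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 32 * 32))
D = nn.Sequential(nn.Linear(32 * 32, 128), nn.LeakyReLU(0.2),
                  nn.Linear(128, 1), nn.Sigmoid())
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

real = torch.rand(16, 32 * 32)        # stand-in for flattened sample icons
for step in range(10_000):
    noise = torch.randn(16, 64)       # stand-in for sample graphics + noise
    fake = G(noise)
    # Discrimination update: push real icons toward 1, generated ones toward 0.
    opt_d.zero_grad()
    loss_d = (bce(D(real), torch.ones(16, 1)) +
              bce(D(fake.detach()), torch.zeros(16, 1)))
    loss_d.backward()
    opt_d.step()
    # Generation update: adjust G so its icons are judged as real.
    opt_g.zero_grad()
    loss_g = bce(D(fake), torch.ones(16, 1))
    loss_g.backward()
    opt_g.step()
    # Stop once D can no longer tell: |mean score - 0.5| < threshold (0.01 here).
    with torch.no_grad():
        if abs(D(G(torch.randn(16, 64))).mean().item() - 0.5) < 0.01:
            break
```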
Step 209: Obtain a target keyword.
The principle of this step is similar to that of step 101 in the first embodiment and is not repeated here.
Step 210: Input the target keyword into the word vector model to obtain a target word vector.
In this embodiment of the present disclosure, since the icon database stores the sample word vectors corresponding to the sample keywords, computing the semantic similarity between a sample word vector and the target keyword requires inputting the obtained target keyword into the word vector model to obtain the target word vector corresponding to the target keyword.
Step 211: Compute the similarity between the target word vector and each sample word vector in the icon database.
In this embodiment of the present disclosure, the cosine similarity between the target word vector corresponding to the target keyword and each sample word vector in the icon database is computed, yielding the similarity between the target word vector and each sample word vector; this similarity refers to the semantic similarity between the target word vector and a sample word vector.
Step 212: Obtain the multiple sample graphics corresponding to the sample word vectors whose similarity is greater than a set threshold, and obtain the target graphics.
In this embodiment of the present disclosure, a threshold can be set manually in advance. After the similarity between the target word vector and each sample word vector in the icon database is obtained, the multiple sample graphics corresponding to sample word vectors whose similarity is less than or equal to the set threshold are filtered out, and only the multiple sample graphics corresponding to sample word vectors whose similarity is greater than the set threshold are obtained as the target graphics.
Of course, it is also possible to sort the similarities between the target word vector and the sample word vectors in the icon database in descending order, and to obtain, as the target graphics, the multiple sample graphics corresponding to one or more of the top-ranked sample word vectors.
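A NumPy sketch of steps 210-212, assuming the triple layout shown earlier for the icon database; the 0.7 threshold is an invented placeholder, since the disclosure leaves the threshold to be set manually.

```python
import numpy as np

def retrieve_target_graphics(target_vec, icon_db, threshold=0.7):
    """icon_db: list of (sample_word_vector, sample_icon, sample_graphics)."""
    target_graphics = []
    for vec, _icon, graphics in icon_db:
        # Cosine similarity between the target and sample word vectors.
        cos = np.dot(target_vec, vec) / (
            np.linalg.norm(target_vec) * np.linalg.norm(vec))
        if cos > threshold:              # keep only sufficiently similar ones
            target_graphics.extend(graphics)
    return target_graphics
```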
Step 213: Input the target graphics into the preset neural network model to obtain a synthesized icon.
The principle of this step is similar to that of step 103 in the first embodiment and is not repeated here.
Step 214: Segment the synthesized icon to obtain multiple icon area blocks.
In this embodiment of the present disclosure, since the shape of an icon synthesized by the generative adversarial network model may be irregular, whereas existing sample icons are composed of regular geometric shapes, an image segmentation algorithm can be used to divide the synthesized icon into multiple icon area blocks.
Note that since the input target graphics include multiple sample graphics filled with different colors, an icon synthesized by the generative adversarial network model also contains multiple colors, so the synthesized icon can be segmented into multiple icon area blocks based on the color gradient algorithm.
Step 215: Fit the edges of the multiple icon area blocks.
In this embodiment of the present disclosure, the edges of the multiple icon area blocks obtained by segmentation are fitted to form multiple closed graphics. Line types such as straight lines, arcs, parabolas, and Bézier curves can be used to fit the edges of the multiple icon area blocks.
In practice, the simplest line type, a straight line, can be tried first: the line parameters are fitted by the least squares method, and the error between the fitted line and the edge of the icon area block is then computed. For example, when the error between the fitted line and the edge of the icon area block is less than or equal to a preset error, the fit is considered complete, yielding a closed graphic with straight edges; when the error is greater than the preset error, a more complex line type, such as a Bézier curve, is used for fitting instead, to form the multiple closed graphics.
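A least-squares line-fitting sketch for one block edge follows; the error metric (mean vertical distance) and the tolerance value are assumptions, and the Bézier fallback is only indicated, not implemented.

```python
import numpy as np

def fit_edge_with_line(edge_points, max_error=1.0):
    """edge_points: (N, 2) array of edge pixel coordinates for one block."""
    x, y = edge_points[:, 0], edge_points[:, 1]
    k, b = np.polyfit(x, y, deg=1)          # least-squares line y = k*x + b
    error = np.abs(y - (k * x + b)).mean()  # error against the actual edge
    if error <= max_error:
        return "line", (k, b)               # straight-edge fit accepted
    return "needs_curve", None              # fall back to e.g. a Bezier fit
```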
Step 216: Color the fitted icon area blocks to obtain the target icon.
In this embodiment of the present disclosure, a color selected by the user may be received, and the fitted icon area blocks are colored accordingly. The target icon thus obtained not only has a new shape and a corresponding color, but also better meets the needs of the icon user.
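Coloring the fitted blocks can be as simple as filling each closed region mask with a user-chosen color, as in this sketch (the RGB values and canvas size are illustrative):

```python
import numpy as np

def color_regions(masks, colors, size=(64, 64)):
    """masks: list of (H, W) region masks; colors: list of RGB tuples."""
    canvas = np.zeros((*size, 3), dtype=np.uint8)
    for mask, rgb in zip(masks, colors):
        canvas[mask.astype(bool)] = rgb  # fill each closed block with a color
    return canvas
```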
In this embodiment of the present disclosure, the obtained sample icons are associated with sample keywords to obtain the sample keywords, and the sample keywords are input into the word vector model to obtain the sample word vectors. The sample icons are processed to obtain the multiple sample graphics corresponding to each sample icon. The icon database is generated according to the sample icons, the sample word vectors, and the multiple sample graphics. The multiple sample graphics corresponding to a sample icon, together with random noise, are input into the initial generation network unit to obtain a generated icon; the generated icon and the sample icon are input into the initial discrimination network unit to obtain a discrimination result; and the parameters of the initial generation network unit and/or the initial discrimination network unit are corrected according to the discrimination result to obtain the generative adversarial network model.
In this embodiment of the present disclosure, a target keyword is obtained and input into the word vector model to obtain a target word vector. The similarity between the target word vector and each sample word vector in the icon database is computed, and the multiple sample graphics corresponding to the sample word vectors whose similarity is greater than the set threshold are obtained. The target graphics obtained from these multiple sample graphics are input into the preset neural network model (such as the aforementioned generative adversarial network model) to obtain a synthesized icon. The synthesized icon is segmented to obtain multiple icon area blocks, the edges of the multiple icon area blocks are fitted, and the fitted icon area blocks are colored to obtain the target icon.
By creating the icon database and training the generative adversarial network model in advance, the embodiment of the present disclosure can, in practical use, obtain the desired target icon from the target keyword entered by the icon user, using the icon database and the generative adversarial network model, which greatly reduces the time spent on icon generation and improves the efficiency of icon generation.
The embodiment of the present disclosure also discloses a method for obtaining an icon, including: obtaining a target keyword; and outputting a target icon based on the target keyword and a preset icon database. The icon database includes multiple sample word vectors and a sample icon corresponding to each sample word vector. The distance between the word vector of the target keyword and at least one sample word vector of the multiple sample word vectors is less than a predetermined threshold, and the target icon is similar to the sample icon corresponding to the at least one sample word vector. In this embodiment, an icon database is created in advance and a generative adversarial network model is obtained through training; in practical use, the desired target icon can be obtained from the target keyword entered by the icon user, using the icon database and the generative adversarial network model. The target icon is similar to, but different from, the sample icons. This greatly reduces the time spent obtaining icons and improves the efficiency of icon generation.
Referring to FIG. 5, there is shown a structural block diagram of an icon generating device according to an embodiment of the present disclosure.
The icon generating device 500 of this embodiment of the present disclosure includes:
a target keyword acquisition module 501, configured to obtain a target keyword;
a target graphic retrieval module 502, configured to retrieve corresponding target graphics from a preset icon database according to the target keyword;
a target graphic input module 503, configured to input the target graphics into a preset neural network model to obtain a synthesized icon;
an icon processing module 504, configured to process the synthesized icon to obtain a target icon.
Referring to FIG. 6, there is shown a structural block diagram of another icon generating device according to an embodiment of the present disclosure.
On the basis of FIG. 5, optionally, the icon processing module 504 includes an icon segmentation sub-module 5041, a fitting sub-module 5042, and a coloring sub-module 5043:
the icon segmentation sub-module 5041, configured to segment the synthesized icon to obtain multiple icon area blocks;
the fitting sub-module 5042, configured to fit the edges of the multiple icon area blocks;
the coloring sub-module 5043, configured to color the fitted icon area blocks to obtain the target icon.
Optionally, the icon database includes multiple sample icons, a sample word vector corresponding to each sample icon, and multiple sample graphics corresponding to each sample icon.
The icon generating device 500 further includes a sample icon acquisition module 505, a keyword calibration module 506, a sample keyword input module 507, a sample icon processing module 508, and an icon database generation module 509:
the sample icon acquisition module 505, configured to obtain sample icons;
the keyword calibration module 506, configured to associate the obtained sample icons with sample keywords, and obtain the sample keywords corresponding to the sample icons;
the sample keyword input module 507, configured to input the sample keywords into a word vector model, and obtain the sample word vectors corresponding to the sample keywords;
the sample icon processing module 508, configured to process the sample icons, and obtain the multiple sample graphics corresponding to each sample icon;
the icon database generation module 509, configured to generate the icon database according to the sample icons, the sample word vectors, and the multiple sample graphics.
Optionally, the target graphic retrieval module 502 includes a target keyword input sub-module 5021, a similarity calculation sub-module 5022, and a target graphic acquisition sub-module 5023:
the target keyword input sub-module 5021, configured to input the target keyword into the word vector model to obtain a target word vector;
the similarity calculation sub-module 5022, configured to compute the similarity between the target word vector and each sample word vector in the icon database;
the target graphic acquisition sub-module 5023, configured to obtain the multiple sample graphics corresponding to the sample word vectors whose similarity is greater than a set threshold, and obtain the target graphics.
Optionally, the neural network model is a generative adversarial network model, and the icon generating device 500 further includes a sample graphic input module 510, a discrimination result acquisition module 511, and a parameter correction module 512:
the sample graphic input module 510, configured to input the multiple sample graphics corresponding to a sample icon, together with random noise, into the initial generation network unit to obtain a generated icon;
the discrimination result acquisition module 511, configured to input the generated icon and the sample icon into the initial discrimination network unit to obtain a discrimination result;
the parameter correction module 512, configured to correct the parameters of the initial generation network unit and/or the initial discrimination network unit according to the discrimination result, to obtain the generative adversarial network model.
In this embodiment of the present disclosure, a target keyword is obtained; corresponding target graphics are retrieved from a preset icon database according to the target keyword; the target graphics are input into a preset neural network model to obtain a synthesized icon; and the synthesized icon is processed to obtain the target icon. By obtaining the target keyword entered by the icon user and using the preset icon database and neural network model, the desired target icon can be obtained, which greatly reduces the time spent on icon generation and improves the efficiency of icon generation.
Correspondingly, an embodiment of the present disclosure also provides an electronic device, including a processor, a memory, and a computer program stored on the memory and executable on the processor; when the computer program is executed by the processor, the steps of the icon generation method described in the first and second embodiments of the present disclosure are realized.
An embodiment of the present disclosure also discloses a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the icon generation method described in the first and second embodiments of the present disclosure are realized.
As for the device embodiments, since they are basically similar to the method embodiments, the description is relatively simple; for relevant parts, refer to the corresponding description of the method embodiments.
For simplicity of description, each of the foregoing method embodiments is expressed as a series of action combinations; however, those skilled in the art should know that the present disclosure is not limited by the described order of actions, because according to the present disclosure some steps may be performed in other orders or simultaneously. Furthermore, those skilled in the art should also know that the embodiments described in the specification are all preferred embodiments, and the actions and modules involved are not necessarily required by the present disclosure.
The embodiments in this specification are described in a progressive manner; each embodiment focuses on its differences from the other embodiments, and for identical or similar parts the embodiments may be referred to one another.
Finally, it should also be noted that, herein, relational terms such as "first" and "second" are used only to distinguish one entity or operation from another and do not necessarily require or imply any actual relationship or order between these entities or operations. Moreover, the terms "comprise", "include", or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The icon generation method and device provided by the present disclosure have been introduced in detail above. Specific examples are used herein to explain the principles and implementations of the present disclosure, and the descriptions of the above embodiments are only intended to help understand the method of the present disclosure and its core idea. Meanwhile, for those of ordinary skill in the art, there will be changes in the specific implementation and scope of application according to the idea of the present disclosure. In summary, the content of this specification should not be construed as limiting the present disclosure.

Claims (13)

  1. An icon generation method, comprising:
    obtaining a target keyword;
    retrieving, according to the target keyword, multiple sample graphics corresponding to the target keyword from a preset icon database as multiple target graphics, wherein the icon database includes multiple sample icons and multiple sample graphics corresponding to each sample icon, and the sample keywords of the sample icons corresponding to the multiple sample graphics corresponding to the target keyword are similar to the target keyword;
    inputting the multiple target graphics into a preset neural network model to synthesize the multiple target graphics into a target icon.
  2. The method according to claim 1, further comprising: processing the target icon to obtain a processed target icon,
    wherein processing the target icon to obtain the processed target icon further comprises:
    segmenting the target icon according to the color boundaries of the target icon to obtain multiple icon area blocks;
    fitting the edges of the multiple icon area blocks to obtain multiple closed icon area blocks with smooth edges;
    coloring the multiple closed icon area blocks with smooth edges to obtain the target icon.
  3. The method according to any one of claims 1-2, further comprising: generating the preset icon database,
    wherein generating the preset icon database further comprises:
    obtaining sample icons;
    associating the obtained sample icons with sample keywords, and obtaining the sample keywords corresponding to the sample icons;
    processing the sample icons, and obtaining the multiple sample graphics corresponding to the sample icons;
    generating the icon database according to the sample icons, the sample keywords, and the multiple sample graphics.
  4. The method according to claim 3, wherein the icon database further includes a sample word vector corresponding to each sample icon,
    wherein generating the preset icon database further comprises:
    inputting the sample keywords into a word vector model, and obtaining the sample word vectors corresponding to the sample keywords;
    generating the icon database according to the sample icons, the sample word vectors, and the multiple sample graphics;
    wherein retrieving the multiple sample graphics corresponding to the target keyword from the preset icon database as multiple target graphics comprises:
    inputting the target keyword into the word vector model to obtain a target word vector;
    computing the similarity between the target word vector and each sample word vector in the icon database;
    obtaining the multiple sample graphics corresponding to the sample word vectors whose similarity is greater than a set threshold as the multiple target graphics.
  5. The method according to any one of claims 1-4, wherein the neural network model is a generative adversarial network model, and the method further comprises: training the generative adversarial network model,
    wherein training the generative adversarial network model further comprises:
    inputting the multiple sample graphics corresponding to the sample icon and random noise into an initial generation network unit of the generative adversarial network model, and obtaining a generated icon;
    inputting the generated icon and the sample icon into an initial discrimination network unit of the generative adversarial network model, and obtaining a discrimination result;
    correcting the parameters of the initial generation network unit and/or the initial discrimination network unit according to the discrimination result, and obtaining the generative adversarial network model.
  6. An icon generating device, comprising:
    a target keyword acquisition module, configured to obtain a target keyword;
    a target graphic retrieval module, configured to retrieve, according to the target keyword, multiple sample graphics corresponding to the target keyword from a preset icon database as multiple target graphics, wherein the icon database includes multiple sample icons and multiple sample graphics corresponding to each sample icon, and the sample keywords of the sample icons corresponding to the multiple sample graphics corresponding to the target keyword are similar to the target keyword;
    a target graphic input module, configured to input the multiple target graphics into a preset neural network model to synthesize the multiple target graphics into a target icon.
  7. The device according to claim 6, further comprising:
    an icon processing module, configured to process the synthesized icon to obtain a processed target icon,
    the icon processing module further comprising:
    an icon segmentation sub-module, configured to segment the target icon according to the color boundaries of the target icon to obtain multiple icon area blocks;
    a fitting sub-module, configured to fit the edges of the multiple icon area blocks to obtain multiple closed icon area blocks with smooth edges;
    a coloring sub-module, configured to color the multiple closed icon area blocks with smooth edges to obtain the target icon.
  8. The device according to any one of claims 6-7, further comprising:
    a sample icon acquisition module, configured to obtain sample icons;
    a keyword calibration module, configured to associate the obtained sample icons with sample keywords, and obtain the sample keywords corresponding to the sample icons;
    a sample icon processing module, configured to process the sample icons, and obtain the multiple sample graphics corresponding to the sample icons;
    an icon database generation module, configured to generate the icon database according to the sample icons, the sample keywords, and the multiple sample graphics.
  9. The device according to claim 8, wherein
    the icon database generation module further comprises:
    a sample keyword input sub-module, configured to input the sample keywords into a word vector model, and obtain the sample word vectors corresponding to the sample keywords;
    a sample word vector association module, configured to generate the icon database according to the sample icons, the sample word vectors, and the multiple sample graphics;
    the target graphic retrieval module further comprises:
    a target keyword input sub-module, configured to input the target keyword into the word vector model to obtain a target word vector;
    a similarity calculation sub-module, configured to compute the similarity between the target word vector and each sample word vector in the icon database;
    a target graphic acquisition sub-module, configured to obtain the multiple sample graphics corresponding to the sample word vectors whose similarity is greater than a set threshold, and obtain the target graphics.
  10. The device according to any one of claims 6-9, wherein the neural network model is a generative adversarial network model, and the device further comprises:
    a generative adversarial network model training module, configured to train the generative adversarial network model;
    wherein the generative adversarial network model training module further comprises:
    a sample graphic input module, configured to input the multiple sample graphics corresponding to the sample icon and random noise into an initial generation network unit of the generative adversarial network model, and obtain a generated icon;
    a discrimination result acquisition module, configured to input the generated icon and the sample icon into an initial discrimination network unit of the generative adversarial network model, and obtain a discrimination result;
    a parameter correction module, configured to correct the parameters of the initial generation network unit and/or the initial discrimination network unit according to the discrimination result, and obtain the generative adversarial network model.
  11. A method for obtaining an icon, comprising:
    obtaining a target keyword;
    outputting a target icon based on the target keyword and a preset icon database;
    wherein the icon database includes multiple sample word vectors and a sample icon corresponding to each sample word vector;
    wherein the distance between the word vector of the target keyword and at least one sample word vector of the multiple sample word vectors is less than a predetermined threshold, and the target icon is similar to the sample icon corresponding to the at least one sample word vector.
  12. An electronic device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein when the computer program is executed by the processor, the steps of the icon generation method according to any one of claims 1 to 5 are realized.
  13. A computer-readable storage medium on which a computer program is stored, wherein when the computer program is executed by a processor, the steps of the icon generation method according to any one of claims 1-5 are realized.
PCT/CN2020/087806 2019-05-15 2020-04-29 Icon generation method and device, method for obtaining an icon, electronic device, and storage medium WO2020228536A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910403836.9A CN110120059B (zh) 2019-05-15 2019-05-15 Icon generation method and device
CN201910403836.9 2019-05-15

Publications (1)

Publication Number Publication Date
WO2020228536A1 (zh)

Family

ID=67522478

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/087806 WO2020228536A1 (zh) Icon generation method and device, method for obtaining an icon, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN110120059B (zh)
WO (1) WO2020228536A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110120059B (zh) 2019-05-15 2023-03-10 京东方科技集团股份有限公司 Icon generation method and device
CN111124578B (zh) 2019-12-23 2023-09-29 中国银行股份有限公司 User interface icon generation method and device
CN116798053B (zh) 2023-07-20 2024-04-26 上海合芯数字科技有限公司 Icon generation method and device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101887416B (zh) 2010-06-29 2012-07-11 魔极科技(北京)有限公司 Method for converting text into graphics
CN103258037A (zh) 2013-05-16 2013-08-21 西安工业大学 Trademark recognition and retrieval method for multi-combination content
CN109523493A (zh) 2017-09-18 2019-03-26 杭州海康威视数字技术股份有限公司 Image generation method and device, and electronic device
CN109741423A (zh) 2018-12-28 2019-05-10 北京奇艺世纪科技有限公司 Emoticon pack generation method and system

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170315700A1 (en) * 2015-12-10 2017-11-02 Appelago Inc. Interactive dashboard for controlling delivery of dynamic push notifications
CN109685072A (zh) * 2018-12-22 2019-04-26 北京工业大学 High-quality reconstruction method for composite degraded images based on generative adversarial networks
CN109859291A (zh) * 2019-02-21 2019-06-07 北京一品智尚信息科技有限公司 Intelligent logo design method, system, and storage medium
CN110120059A (zh) * 2019-05-15 2019-08-13 京东方科技集团股份有限公司 Icon generation method and device
CN110287349A (zh) * 2019-06-10 2019-09-27 天翼电子商务有限公司 Graphic generation method and device, medium, and terminal

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114219875A (zh) * 2021-12-03 2022-03-22 北京艺源酷科技有限公司 StyleGAN-based intelligent LOGO generation method

Also Published As

Publication number Publication date
CN110120059B (zh) 2023-03-10
CN110120059A (zh) 2019-08-13

Similar Documents

Publication Publication Date Title
WO2020228536A1 (zh) Icon generation method and device, method for obtaining an icon, electronic device, and storage medium
CN111274811B (zh) Address text similarity determination method and address search method
CN111260740B (zh) Text-to-image generation method based on generative adversarial networks
US20220222920A1 (en) Content processing method and apparatus, computer device, and storage medium
CN108388651B (zh) Text classification method based on graph kernels and convolutional neural networks
CN106570141B (zh) Near-duplicate image detection method
US20190325628A1 (en) Ai-driven design platform
WO2020207074A1 (zh) Information pushing method and device
CN108694225A (zh) Image search method, feature vector generation method, device, and electronic device
CN109783666A (zh) Image scene graph generation method based on iterative refinement
CN109885796B (zh) Deep-learning-based method for detecting the match between online news and its illustrations
CN111274981B (zh) Object detection network construction method and device, and object detection method
Yao et al. A spatial co-location mining algorithm that includes adaptive proximity improvements and distant instance references
US20220245510A1 (en) Multi-dimensional model shape transfer
Sharma et al. High-level feature aggregation for fine-grained architectural floor plan retrieval
Zhang et al. Gaussian metric learning for few-shot uncertain knowledge graph completion
Huo et al. Semisupervised learning based on a novel iterative optimization model for saliency detection
CN110347853B (zh) Image hash code generation method based on recurrent neural networks
CN117315090A (zh) Image generation method and device based on cross-modal style learning
Parakal et al. Intrinsically Interpretable Document Classification via Concept Lattices
CN115758159A (zh) Zero-shot text stance detection method based on hybrid contrastive learning and generative data augmentation
CN106384127B (zh) Method and system for determining comparison point pairs and binary descriptors for image feature points
Yongxin et al. A study of learned KD tree based on learned index
Li et al. Contracting medial surfaces isotropically for fast extraction of centred curve skeletons
CN110826726B (zh) Target processing method, target processing device, target processing equipment, and medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20806613

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20806613

Country of ref document: EP

Kind code of ref document: A1

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 13.03.2024)