CN113345053B - Intelligent color matching method and system - Google Patents
- Publication number
- CN113345053B (application CN202110738545.2A)
- Authority
- CN
- China
- Prior art keywords
- color
- color matching
- matching model
- picture
- keywords
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/40—Filling a planar surface by adding surface attributes, e.g. colour or texture
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/23—Clustering techniques
- G06F18/232—Non-hierarchical techniques
- G06F18/2321—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions
- G06F18/23213—Non-hierarchical techniques using statistics or function optimisation, e.g. modelling of probability density functions with fixed number of clusters, e.g. K-means clustering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/20—Natural language analysis
- G06F40/279—Recognition of textual entities
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F40/00—Handling natural language data
- G06F40/30—Semantic analysis
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
- G06N3/084—Backpropagation, e.g. using gradient descent
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/60—Editing figures and text; Combining figures or text
Abstract
The invention relates to the technical field of color matching, and in particular to an intelligent color matching method and system, wherein the method comprises the following steps. Step 1: establishing a color semantic database, wherein the color semantic database stores color block RGB values and the color block keywords associated with the color blocks. Step 2: establishing a color matching model based on a BP neural network, taking the color block keywords of step 1 as the input data of the color matching model and the color block RGB values as its output data, and training the color matching model. Step 3: verifying and optimizing the color matching model using the color block RGB values and color block keywords in the color semantic database of step 1 as samples, to obtain an optimized color matching model. Step 4: inputting the client's demand keywords into the color matching model obtained in step 3, whereupon the color matching model outputs the demand keywords and the RGB values of the corresponding color blocks, yielding a color matching scheme.
Description
Technical Field
The invention relates to the technical field of color matching, in particular to an intelligent color matching method and system.
Background
Color matching is one of the most important elements of the design process. Excellent color matching not only increases the impact and emotional appeal of a work but also enhances its attractiveness, allowing viewers to better appreciate and understand it. A graphic designer must therefore have strong color matching skills. In graphic design, color can highlight the characteristics and theme of a work, beautify the layout of the design, enhance the aesthetic feeling of the overall design, and leave a deeper impression on viewers.
With the rapid development of information technology, the big data era has arrived. More and more governments and enterprises rely on multidimensional data analysis in their work, so the use of data visualization tools is becoming increasingly widespread. An excellent color scheme can both satisfy customers' aesthetic requirements and highlight the information value of the data. Research into color scheme systems can provide designers with basic color schemes, help them output renderings quickly, and accelerate tool development.
Therefore, to overcome the above shortcomings, there is a strong need for an intelligent color matching method and system.
Disclosure of Invention
The invention aims to provide an intelligent color matching method and system to solve the problems in the prior art.
The intelligent color matching method provided by the invention comprises the following steps. Step 1: establishing a color semantic database, wherein the color semantic database stores color block RGB values and the color block keywords associated with the color blocks. Step 2: establishing a color matching model based on a BP neural network, taking the color block keywords of step 1 as the input data of the color matching model and the color block RGB values as its output data, and training the color matching model. Step 3: verifying and optimizing the color matching model using the color block RGB values and color block keywords in the color semantic database of step 1 as samples, to obtain an optimized color matching model. Step 4: inputting the client's demand keywords into the color matching model obtained in step 3, whereupon the color matching model outputs the demand keywords and the RGB values of the corresponding color blocks, yielding a color matching scheme.
In the method described above, it is further preferable that the method for obtaining the color blocks and color block keywords in step 1 specifically includes: step 11: acquiring an image source and image source details, and extracting the pictures in the image source to obtain pictures and picture details; step 12: performing cluster analysis on the pictures obtained in step 11 using the K-means clustering algorithm to obtain picture color blocks; step 13: performing word segmentation on the picture details to obtain picture keywords; step 14: associating the picture color blocks with the picture keywords to obtain the color block RGB values and color block keywords.
In the above intelligent color matching method, it is further preferable that in step 11 the image sources are still pictures, videos, and moving pictures; the pictures are still pictures and are extracted frame by frame from the video and moving-picture image frames.
In the intelligent color matching method as described above, further preferably, step 12 specifically includes: step 121: acquiring the picture of step 11, setting the number of colors to be extracted, randomly selecting a plurality of pixels from the picture as initial cluster centers, and setting the vector values of the cluster centers; step 122: matching the pixels in the picture to the initial cluster centers using Euclidean distance; step 123: recalculating the vector value of each cluster center and the average value of each cluster's sample vectors from step 122, and computing the quasi-error sum of squares from the calculated cluster-center vector values and cluster sample averages; step 124: if the absolute difference between the quasi-error sums of squares of two adjacent iterations meets a preset condition, the algorithm has converged and the calculation ends; otherwise, returning to step 121 and recalculating the cluster centers; step 125: repeating the above steps, reclassifying iteration by iteration until the algorithm converges and the preset n dominant color blocks are extracted, wherein the dominant color blocks constitute the preset color scheme.
In the intelligent color matching method as described above, further preferably, the average value of the clustered sample vectors is calculated by:

$$\bar{P} = \frac{1}{n}\sum_{i=1}^{n} P_i$$

where $\bar{P}$ is the average value of the clustered sample vectors and $P_i$ is the $i$th cluster sample, $i \in [1, n]$.

The quasi-error sum of squares is calculated by:

$$SSE = \sum_{i=1}^{n}\sum_{p \in C_i} \left\| p - m_i \right\|^2$$

where $SSE$ is the quasi-error sum of squares; $m_i$ is the $i$th cluster center point, $i \in [1, n]$; $p$ is a cluster sample, $p \in C_i$, where $C_i$ denotes the $i$th cluster and there are $n$ clusters in total.
In the intelligent color matching method as described above, further preferably, step 13 specifically includes: step 131: segmenting the source picture details, performing word segmentation and part-of-speech tagging on each sentence, and retaining words of the specified parts of speech as candidate keywords, wherein the candidate keywords form a keyword graph model and each candidate keyword is a node of the graph model; step 132: calculating the weight of each node in the keyword graph model and taking a number of nodes in order of weight as candidate labels; if candidate labels form adjacent phrases, merging them into multi-word labels; step 133: selecting the candidate labels or multi-word labels of step 132 as the demand keywords.
In the intelligent color matching method described above, further preferably, step 132 specifically includes: step 1321: constructing edges between nodes using co-occurrence relations, and adding each pair of connected nodes to the graph model to form an undirected, unweighted graph; step 1322: determining the weight distributions of the word position, part-of-speech, and domain features of each node in the undirected unweighted graph, and obtaining a comprehensive weight through multi-feature fusion.
In the intelligent color matching method described above, it is further preferable that in step 1322 the comprehensive weight of each node is calculated using the following formula:

$$WS(V_i) = (1-d) + d \cdot \sum_{V_j \in In(V_i)} \frac{\omega_{ji}}{\sum_{V_k \in Out(V_j)} \omega_{jk}} \, WS(V_j)$$

where $WS(V_i)$ is the weight of node $V_i$ and $WS(V_j)$ is the weight of node $V_j$; $d$ is a damping factor representing the probability that any node in the graph jumps to another node; $In(V_i)$ is the set of nodes pointing to node $V_i$; $Out(V_j)$ is the set of all nodes that node $V_j$ points to; $\omega_{ji}$ is the weight of the edge from node $V_j$ to node $V_i$, and $\omega_{jk}$ is the weight of the edge from node $V_j$ to node $V_k$.
In the method described above, it is further preferable that establishing the color matching model based on the BP neural network in step 2 specifically includes: step 2.1: establishing the color matching model on the three-layer structure of the BP neural network, namely the input layer, hidden layer, and output layer; step 2.2: introducing a training algorithm into the color matching model established in step 2.1, inputting the color block keywords of step 1 into the color matching model, and having the color matching model output color block RGB values; step 2.3: comparing the color block RGB values of step 1 with the output of the color matching model in step 2.2, and adjusting the color matching model according to the comparison result; step 2.4: setting a comparison similarity between the extracted RGB values and the output results, and executing steps 2.2 to 2.3 cyclically over different source pictures until the comparison result exceeds the set comparison similarity, thereby completing the training of the color matching model.
In the above-mentioned intelligent color matching method, it is further preferable that in the process of building and training the color matching model, the hidden layer of the BP neural network is a single hidden layer, and the number of hidden layer nodes is set to be 5.
The invention also discloses an intelligent color matching system, which comprises: a collection and processing module for acquiring pictures and picture details and extracting the color blocks and keywords of the pictures, the color blocks corresponding one-to-one with the keywords; an establishing module for establishing a color matching model based on the BP neural network; a training module for taking the keywords from the collection and processing module as the input data of the color matching model and the color blocks as its output data, and training the color matching model; an input/output module for inputting keywords into the color matching model and sending the output of the color matching model to the verification module; a verification module for setting a practicality threshold for the color matching model and verifying, based on the color blocks extracted by the collection and processing module, whether the output of the input/output module meets the practicality threshold; an optimization module for optimizing the color matching model when the verification result fails the practicality threshold, obtaining a final optimized color matching model whose output meets the set threshold; and an application module for outputting a color matching scheme based on the final optimized color matching model and the user's demand keywords.
Compared with the prior art, the invention has the following advantages:
According to the invention, a color matching model is obtained by training on the details of existing pictures and the RGB values of their dominant color schemes. Through this model, a color matching scheme is derived automatically from the client's requirements, providing UI designers with a basic color matching basis and improving the efficiency of outputting renderings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are needed in the description of the embodiments or the prior art will be briefly described, and it is obvious that the drawings in the description below are some embodiments of the present invention, and other drawings can be obtained according to the drawings without inventive effort for a person skilled in the art.
FIG. 1 is a flow chart of the intelligent color matching method in the invention;
FIG. 2 is a flow chart of a color matching model in the present invention.
Detailed Description
As shown in fig. 1-2, this embodiment discloses an intelligent color matching method, which includes the following steps:
step 1: establishing a color semantic database, wherein the color semantic database stores color block RGB values and color block keywords associated with the color blocks;
step 2: establishing a color matching model based on the BP neural network, taking the color block keywords in the step 1 as input data of the color matching model, taking the RGB values of the color blocks as output data of the color matching model, and training the color matching model;
step 3: verifying and optimizing a color matching model by using the RGB values of the color blocks and the key words of the color blocks in the color semantic database in the step 1 as samples to obtain an optimized color matching model;
step 4: and (3) inputting the demand keywords of the clients into the color matching model obtained in the step (3), and outputting the demand keywords and the RGB values of the corresponding color blocks by the color matching model to obtain a color matching scheme.
Further, the method for obtaining the color block and the color block keyword in the step 1 specifically includes:
step 11: acquiring an image source and image source details, and extracting pictures in the image source to obtain pictures and picture details;
step 12: performing cluster analysis on the picture obtained in the step 11 by adopting a K-means mean value clustering algorithm to obtain a picture color block;
step 13: performing word segmentation on the picture details to obtain picture keywords;
step 14: associating the picture color blocks with the picture keywords to obtain the color block RGB values and color block keywords.
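Steps 11 to 14 amount to building a keyword-to-RGB association table. A minimal sketch of what the resulting color semantic database might look like follows; the helper name and sample entries are hypothetical illustrations, not taken from the patent:

```python
# Hypothetical sketch of the color semantic database built in steps 11-14:
# each processed picture contributes its dominant color blocks (step 12)
# and its detail keywords (step 13), which step 14 associates.
color_db = {}

def associate(keywords, color_blocks):
    """Step 14: associate a picture's keywords with its color blocks."""
    for kw in keywords:
        color_db.setdefault(kw, []).extend(color_blocks)

# One call per processed picture; the RGB triples are illustrative only.
associate(["spring", "fresh"], [(143, 188, 143), (240, 255, 240)])
associate(["fire", "warm"], [(220, 20, 60)])
```

Looking up a keyword such as `"fire"` then returns its associated color blocks, which is the raw material the BP model of step 2 is trained on.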
Further, in step 11, the image sources are still pictures, videos, and moving pictures; the pictures are still pictures, extracted frame by frame from the video and moving-picture image frames.
Further, step 12 specifically includes:
step 121: acquiring the picture in the step 11, setting the number of colors to be extracted, randomly selecting a plurality of pixels from the picture as an initial clustering center, and setting a vector value of the clustering center;
step 122: matching pixels in the picture with an initial clustering center by using an Euclidean distance method;
step 123: re-calculating the vector value of each cluster center and the average value of the sample vector of each cluster in step 122, and calculating to obtain a quasi error square sum according to the calculated vector value of the cluster center and the average value of the sample vector of the cluster;
step 124: if the absolute difference between the quasi-error sums of squares of two adjacent iterations meets the preset condition, the algorithm has converged and the calculation ends; otherwise, return to step 121 and recalculate the cluster centers;
step 125: repeating the above steps, reclassifying iteration by iteration until the algorithm converges and the preset n dominant color blocks are extracted, wherein the dominant color blocks constitute the preset color scheme.
Specifically, in step 121, the number of dominant colors to be extracted is set to n and the cluster centers are selected. To track how many iterative operations the cluster centers require, let $I$ denote the iteration count; the vector values of the cluster centers are written as:

$$Z_j(I), \quad j = 1, 2, \ldots, k$$

where $Z_j(I)$ is the vector value of the $j$th cluster center and $j$ indexes the clusters. The number of cluster centers equals the number of color schemes required; for example, if 5 color schemes are required, then $k = 5$.
In step 122, the distance between each pixel in the picture and every initial cluster center is calculated, and the pixel is assigned to the cluster whose center is nearest, namely:

$$D(x_i, Z_k(I)) = \min\{\, D(x_i, Z_j(I)),\ j = 1, 2, \ldots, n \,\} \;\Rightarrow\; x_i \in w_k$$

where $x_i$ is a pixel and $w_k$ is the cluster whose center is nearest to it.
In step 123, the vector value of each cluster center is recalculated as the mean of the samples assigned to that cluster:

$$Z_j(I+1) = \frac{1}{n_j} \sum_{x \in w_j} x, \quad j = 1, 2, \ldots, k$$

where $n_j$ is the number of samples in cluster $w_j$. The average value of the clustered sample vectors is calculated by:

$$\bar{P} = \frac{1}{n}\sum_{i=1}^{n} P_i$$

and the quasi-error sum of squares is calculated by:

$$J_c(I) = SSE = \sum_{i=1}^{n}\sum_{p \in C_i} \left\| p - m_i \right\|^2$$

If $\left| J_c(I) - J_c(I-1) \right| < \xi$, the algorithm converges and the calculation ends; otherwise, the new cluster centers are computed and the procedure returns to step 122.
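Steps 121 to 125 can be sketched as follows. This is an illustrative pure-Python version, not the patent's implementation; the convergence tolerance and test pixel data are assumptions:

```python
import random

def kmeans_colors(pixels, n, max_iter=100, tol=1e-4):
    """Extract n dominant colors from RGB pixel tuples via K-means (steps 121-125)."""
    centers = random.sample(pixels, n)  # step 121: random initial cluster centers
    prev_sse = float("inf")
    for _ in range(max_iter):
        clusters = [[] for _ in range(n)]
        for p in pixels:  # step 122: assign each pixel to the nearest center (Euclidean)
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[d.index(min(d))].append(p)
        sse = 0.0
        for i, cl in enumerate(clusters):  # step 123: recompute centers, then the SSE
            if cl:
                centers[i] = tuple(sum(v) / len(cl) for v in zip(*cl))
            sse += sum(sum((a - b) ** 2 for a, b in zip(p, centers[i])) for p in cl)
        if abs(prev_sse - sse) < tol:  # step 124: converged when |ΔSSE| is small enough
            break
        prev_sse = sse
    return centers  # step 125: the n dominant color blocks
```

Running it on a picture's pixel list with, say, n = 5 yields the five dominant color blocks that form the preset color scheme.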
further, step 13 specifically includes:
step 131: cutting source image details, performing word segmentation and part-of-speech tagging on each sentence, and reserving words with appointed parts-of-speech to form candidate keywords, wherein a plurality of candidate keywords form a keyword graph model, and each candidate keyword is a node for forming the graph model;
step 132: calculating the weight of each node in the keyword graph model, and acquiring a plurality of nodes according to the weight magnitude order to form candidate labels; if the candidate labels form adjacent phrases, combining the multi-word labels;
step 133: selecting the candidate labels or multi-word labels of step 132 as the demand keywords.
Specifically, step 132 specifically includes:
step 1321: constructing edges between nodes using co-occurrence relations, and adding each pair of connected nodes to the graph model to form an undirected, unweighted graph;
step 1322: determining the weight distributions of the word position, part-of-speech, and domain features of each node in the undirected unweighted graph, and obtaining the comprehensive weight through multi-feature fusion.
Specifically, in step 1322, the comprehensive weight of each node is calculated using the following formula:

$$WS(V_i) = (1-d) + d \cdot \sum_{V_j \in In(V_i)} \frac{\omega_{ji}}{\sum_{V_k \in Out(V_j)} \omega_{jk}} \, WS(V_j)$$

where $WS(V_i)$ is the weight of node $V_i$ and $WS(V_j)$ is the weight of node $V_j$; $d$ is a damping factor representing the probability that any node in the graph jumps to another node; $In(V_i)$ is the set of nodes pointing to node $V_i$; $Out(V_j)$ is the set of all nodes that node $V_j$ points to, and $V_k$ is a node pointed to by $V_j$; $\omega_{ji}$ is the weight of the edge from node $V_j$ to node $V_i$, and $\omega_{jk}$ is the weight of the edge from node $V_j$ to node $V_k$.
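The weight formula above is a TextRank-style iteration, and can be sketched as follows. The function name and the toy edge dictionary are hypothetical; the patent's multi-feature edge weights ω are represented here simply as numeric entries in `edges`:

```python
def textrank_weights(nodes, edges, d=0.85, iters=50):
    """Iterate WS(Vi) = (1-d) + d * sum over Vj in In(Vi) of
    (w_ji / sum over Vk in Out(Vj) of w_jk) * WS(Vj)."""
    ws = {v: 1.0 for v in nodes}          # initial node weights
    out_sum = {v: 0.0 for v in nodes}     # sum of each node's outgoing edge weights
    for (j, i), w in edges.items():       # edges[(j, i)] = weight of edge Vj -> Vi
        out_sum[j] += w
    for _ in range(iters):
        new = {}
        for v in nodes:
            s = sum(w / out_sum[j] * ws[j]
                    for (j, i), w in edges.items() if i == v)  # In(Vv) contributions
            new[v] = (1 - d) + d * s
        ws = new
    return ws
```

Nodes with more (or more heavily weighted) incoming co-occurrence edges end up with larger comprehensive weights, and the top-weighted nodes become the candidate labels of step 132.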
Further, the building of the color matching model based on the BP neural network in the step 2 specifically includes:
step 2.1, building the color matching model based on the three-layer structure of the BP neural network, namely the input layer, hidden layer and output layer;
step 2.2, introducing a training algorithm into the color matching model established in the step 2.1, inputting the color block keywords in the step 1 into the color matching model, and outputting RGB values of color blocks by the color matching model;
step 2.3, comparing the color block RGB values of step 1 with the output of the color matching model in step 2.2, and adjusting the color matching model according to the comparison result;
and 2.4, setting the comparison similarity between the extracted RGB values and the output results, and circularly executing the steps 2.2-2.3 based on different source images until the comparison results exceed the set comparison similarity, so as to complete the training of the color matching model.
Specifically, in the process of establishing and training the color matching model, the hidden layer of the BP neural network is a single hidden layer, and the number of hidden layer nodes is calculated according to the following formula:

$$m = \sqrt{n + l} + \alpha$$

where $m$ is the number of hidden layer nodes, $n$ is the number of input layer nodes, $l$ is the number of output layer nodes, and $\alpha$ is a constant between 1 and 10; here $\alpha = 7.5$ is taken.
In this embodiment, the number of hidden layer nodes is set to 5, the number of input layer nodes is 3, and the number of output layer nodes is 3, so the total number of weights and thresholds of the neural network, $n_\omega$, is 3×5 + 3×5 + 5 + 5 = 40.
Color blocks and color block keywords in the color semantic database are selected as training samples. The number of training samples P and the given training error ε satisfy:

$$P \geq \frac{n_\omega}{\varepsilon}$$

where $n_\omega$ is the total number of weights and thresholds of the neural network, $P$ is the number of training samples, and $\varepsilon$ is the given training error.
The neural network model is constructed with 2000 iteration steps, a training target of 0.001, the trainlm training function, the logsig hidden-node transfer function, and the purelin output-node transfer function, where the purelin function is y = x.
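The network configuration described above (3 input nodes, 5 logsig hidden nodes, 3 purelin output nodes) can be sketched as a forward pass. This is an illustrative reimplementation using the standard definitions of logsig and purelin, not the patent's MATLAB-style code, and the random weights merely stand in for trained values:

```python
import math
import random

def logsig(x):
    """Hidden-node transfer function: logistic sigmoid."""
    return 1.0 / (1.0 + math.exp(-x))

def purelin(x):
    """Output-node transfer function: y = x."""
    return x

def forward(x, w_ih, b_h, w_ho, b_o):
    """One forward pass of the 3-5-3 BP network described above."""
    h = [logsig(sum(wi * xi for wi, xi in zip(row, x)) + b)
         for row, b in zip(w_ih, b_h)]                 # 5 hidden activations
    return [purelin(sum(wi * hi for wi, hi in zip(row, h)) + b)
            for row, b in zip(w_ho, b_o)]              # 3 linear outputs (RGB)

random.seed(1)
w_ih = [[random.uniform(-1, 1) for _ in range(3)] for _ in range(5)]  # 3x5 = 15 weights
b_h = [0.0] * 5                                                       # 5 hidden thresholds
w_ho = [[random.uniform(-1, 1) for _ in range(5)] for _ in range(3)]  # 5x3 = 15 weights
b_o = [0.0] * 3                                                       # 3 output thresholds
rgb = forward([0.2, 0.5, 0.9], w_ih, b_h, w_ho, b_o)  # keyword features in, RGB out
```

The parameter counts in the comments line up with the weight-and-threshold tally given above; training these parameters (e.g. by backpropagation) is what steps 2.2 to 2.4 perform.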
As shown in fig. 1, the client's demand keywords may be split in terms of keywords, style, industry factors, and time.
In addition, the color semantic database also stores text, literary works, and classical poetry. After word segmentation, the text of a poem or literary work is matched against the color schemes in the database to obtain the color schemes or pictures for the specific scene or words described. For example, for the line of verse "At sunrise the river flowers are redder than fire; in spring the river water is as green as indigo," the segmented words are used to retrieve the corresponding color schemes or pictures matching the scene the poem describes, directly turning the poem into a visible picture.
Further, this embodiment also discloses an intelligent color matching system, which includes:
the collecting and processing module is used for acquiring the picture and the picture details, extracting color blocks and keywords of the picture, wherein the color blocks correspond to the keywords one by one;
the building module is used for building a color matching model based on the BP neural network;
the training module is used for taking the keywords of the collecting and processing module as input data of the color matching model, taking the color blocks as output data of the color matching model and training the color matching model;
the input/output module is used for inputting the keywords into the color matching model and sending the output result of the color matching model to the verification module;
the verification module is used for setting a practicality threshold for the color matching model and verifying, based on the color blocks extracted by the collection and processing module, whether the output result of the input/output module meets the practicality threshold;
the optimization module is used for optimizing the color matching model when the verification result fails the practicality threshold, obtaining a final optimized color matching model whose output result meets the set threshold;
and the application module is used for outputting a color matching scheme based on the final optimized color matching model and the user demand keywords.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.
Claims (9)
1. An intelligent color matching method is characterized by comprising the following steps:
step 1: establishing a color semantic database, wherein the color semantic database stores color block RGB values and the color block keywords associated with the color blocks; the color semantic database further comprises literary works, which, after word segmentation, are searched and converted into color matching pictures, a color matching picture being a picture of the correspondence between color block RGB values and color block keywords;
step 2: establishing a color matching model based on a BP neural network, taking the color block keywords of step 1 as input data of the color matching model, taking the color block RGB values as output data of the color matching model, and training the color matching model;
step 3: verifying and optimizing the color matching model, using the color block RGB values and color block keywords in the color semantic database of step 1 as samples, to obtain an optimized color matching model;
step 4: inputting a client's demand keywords into the color matching model obtained in step 3, the color matching model outputting the demand keywords and the corresponding color block RGB values, and/or identifying the client's demand keywords in the color matching picture, to obtain a color matching scheme; the method for acquiring the color blocks and color block keywords in step 1 specifically comprises:
step 11: acquiring image sources and image source details, and extracting pictures from the image sources to obtain the pictures and picture details; the image sources are static images, videos, and dynamic images, and the pictures are static images, extracted frame by frame from the videos and dynamic images;
step 12: performing cluster analysis on the pictures obtained in step 11 using the K-means clustering algorithm to obtain picture color blocks;
step 13: performing word segmentation on the picture details to obtain picture keywords;
step 14: associating the picture color blocks with the picture keywords to obtain the color block RGB values and the color block keywords.
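As an illustration of steps 1 and 4, the color semantic database can be sketched as a keyword-to-RGB mapping queried with demand keywords. All keywords and RGB values below are invented for the example, not taken from the patent:

```python
# Minimal sketch of the color semantic database of claim 1:
# color block RGB values keyed by their associated color block keywords.
color_semantic_db = {
    "ocean": (28, 107, 160),    # illustrative entries
    "forest": (34, 120, 58),
    "sunset": (236, 110, 76),
}

def match_keywords(keywords):
    """Return the RGB value for every demand keyword found in the database
    (a stand-in for the trained model's keyword -> RGB lookup of step 4)."""
    return {kw: color_semantic_db[kw] for kw in keywords if kw in color_semantic_db}

scheme = match_keywords(["ocean", "sunset", "unknown"])
print(scheme)  # keywords absent from the database are simply not matched
```

In the patented method this lookup is replaced by the trained BP network, which can also generalize to keywords not stored verbatim.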
2. The intelligent color matching method according to claim 1, wherein,
step 12 specifically includes:
step 121: acquiring the picture of step 11, setting the number of colors to be extracted, randomly selecting a plurality of pixels from the picture as initial cluster centers, and setting the vector values of the cluster centers;
step 122: matching the pixels in the picture to the initial cluster centers by the Euclidean distance method;
step 123: recalculating the vector value of each cluster center and the average value of the sample vectors of each cluster from step 122, and calculating the sum of squared errors from the recalculated cluster-center vector values and cluster sample-vector averages;
step 124: if the absolute difference between the sums of squared errors of two successive clustering iterations satisfies the preset condition, the algorithm converges and the calculation ends; otherwise, return to step 121 and recalculate the cluster centers;
step 125: repeating the above steps, reclassifying through successive iterations until the algorithm converges and the preset n main color blocks are extracted, the main color blocks being the colors of the preset color matching.
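Steps 121-125 can be sketched in numpy, assuming the picture's RGB pixels are given as an (N, 3) array. The function name, parameters, and defaults are ours, not the patent's:

```python
import numpy as np

def extract_color_blocks(pixels, n_colors=3, tol=1e-4, max_iter=100, seed=0):
    """K-means per steps 121-125: random initial centers (step 121),
    Euclidean-distance assignment (step 122), center and SSE recomputation
    (step 123), convergence when the SSE change is small (steps 124-125)."""
    pixels = np.asarray(pixels, dtype=float)
    rng = np.random.default_rng(seed)
    centers = pixels[rng.choice(len(pixels), n_colors, replace=False)]
    prev_sse = np.inf
    for _ in range(max_iter):
        # assign every pixel to its nearest cluster center (Euclidean distance)
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # recompute each center as the mean of its cluster (empty clusters stay put)
        centers = np.array([pixels[labels == k].mean(axis=0) if np.any(labels == k)
                            else centers[k] for k in range(n_colors)])
        sse = ((pixels - centers[labels]) ** 2).sum()
        if abs(prev_sse - sse) < tol:   # step 124: converged
            break
        prev_sse = sse
    return centers.round().astype(int)  # the n main color blocks
```

Random initialization means different seeds can yield different color blocks; production K-means implementations typically add smarter seeding (e.g. k-means++) and multiple restarts.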
3. The intelligent color matching method according to claim 2, wherein,
the average value of the cluster sample vectors is calculated by the following formula:

P̄ = (1/n) · Σ_{i=1}^{n} P_i

wherein P̄ is the average value of the cluster sample vectors, and P_i is the ith cluster sample, i ∈ [1, n];

the sum of squared errors is calculated by the following formula:

SSE = Σ_{i=1}^{n} Σ_{p ∈ C_i} |p − m_i|²

wherein SSE is the sum of squared errors; m_i is the ith cluster center point, i ∈ [1, n]; p is a cluster sample, p ∈ C_i, where C_i denotes the ith cluster, there being n clusters in total, i ∈ [1, n].
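The two formulas of this claim can be checked numerically on a toy clustering (the sample vectors below are invented for the illustration):

```python
import numpy as np

# two toy clusters C_1 and C_2 of RGB-like sample vectors
clusters = [
    np.array([[10.0, 10.0, 10.0], [14.0, 10.0, 10.0]]),
    np.array([[200.0, 200.0, 200.0], [204.0, 200.0, 200.0]]),
]

# cluster means: the average of the sample vectors of each cluster
centers = [c.mean(axis=0) for c in clusters]

# SSE: sum over clusters i, sum over samples p in C_i, of |p - m_i|^2
sse = sum(((c - m) ** 2).sum() for c, m in zip(clusters, centers))
print(centers, sse)  # each sample is 2 units from its center, so SSE = 4 * 2^2 = 16
```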
4. The intelligent color matching method according to claim 3, wherein the step 13 specifically comprises:
step 131: segmenting the source image details into sentences, performing word segmentation and part-of-speech tagging on each sentence, and retaining words of the specified parts of speech as candidate keywords, a plurality of candidate keywords forming a keyword graph model, each candidate keyword being a node of the graph model;
step 132: calculating the weight of each node in the keyword graph model, and selecting a number of nodes in descending order of weight as candidate labels; if candidate labels form adjacent phrases, combining them into multi-word labels;
step 133: selecting the candidate labels or multi-word labels of step 132 as the demand keywords.
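A toy sketch of steps 131-133, starting from words that are already tagged (a real system would first run word segmentation and part-of-speech tagging; the sentence, tags, and retained parts of speech here are invented):

```python
# words of one sentence with part-of-speech tags (illustrative)
tagged = [("modern", "adj"), ("minimalist", "adj"), ("style", "n"),
          ("is", "v"), ("warm", "adj"), ("and", "c"), ("bright", "adj")]
keep_pos = {"n", "adj"}  # the "specified parts of speech"

# step 131: retain nouns and adjectives as candidate keywords
candidates = {w for w, pos in tagged if pos in keep_pos}

# step 132 (merge phase): combine candidates adjacent in the sentence
# into multi-word labels; node weighting itself is detailed in claim 6
words = [w for w, _ in tagged]
labels, i = [], 0
while i < len(words):
    if words[i] in candidates:
        j = i
        while j + 1 < len(words) and words[j + 1] in candidates:
            j += 1
        labels.append(" ".join(words[i:j + 1]))
        i = j + 1
    else:
        i += 1

print(labels)  # ['modern minimalist style', 'warm', 'bright']
```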
5. The intelligent color matching method according to claim 4, wherein the step 132 specifically comprises:
step 1321: constructing an edge between any two nodes that have a co-occurrence relation, and adding the two nodes joined by the edge to the graph model to form an undirected, unweighted graph;
step 1322: determining the weight distributions of the word position, part of speech, and domain features of each node in the undirected, unweighted graph, and obtaining a comprehensive weight through multi-feature fusion.
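The co-occurrence edges of step 1321 can be sketched with a sliding window over the candidate-word sequence (the window size and words are invented; the feature-based weighting of step 1322 is covered by the formula in claim 6):

```python
from itertools import combinations

def cooccurrence_edges(words, window=2):
    """Step 1321 sketch: add an undirected edge between any two words
    that co-occur within a sliding window over the word sequence."""
    edges = set()
    for i in range(max(len(words) - window + 1, 1)):
        for a, b in combinations(words[i:i + window], 2):
            if a != b:                       # no self-loops
                edges.add(frozenset((a, b)))  # frozenset: undirected edge
    return edges

print(cooccurrence_edges(["warm", "bright", "warm", "cozy"]))
```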
6. The intelligent color matching method according to claim 5, wherein in step 1322, the comprehensive weight of each node is calculated by the following formula:

WS(V_i) = (1 − d) + d × Σ_{V_j ∈ In(V_i)} [ ω_ji / Σ_{V_k ∈ Out(V_j)} ω_jk ] × WS(V_j)

wherein WS(V_i) is the weight of node V_i and WS(V_j) is the weight of node V_j; d is the damping factor, representing the probability that any node in the graph jumps to another node; In(V_i) is the set of nodes pointing to node V_i; Out(V_j) is the set of all nodes to which node V_j points; ω_ji is the weight of the edge from node V_j to node V_i; ω_jk is the weight of the edge from node V_j to node V_k.
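This node-weight formula is the weighted TextRank update; a sketch of the iteration on an undirected graph with unit edge weights (ω = 1), where the graph, damping value, and tolerances are illustrative:

```python
def node_weights(edges, nodes, d=0.85, tol=1e-6, max_iter=200):
    """Iterate WS(Vi) = (1 - d) + d * sum over Vj in In(Vi) of
    (w_ji / sum over Vk in Out(Vj) of w_jk) * WS(Vj),
    with all edge weights w = 1 on an undirected graph."""
    neighbors = {v: set() for v in nodes}
    for a, b in edges:                 # undirected: In(V) == Out(V) == neighbors
        neighbors[a].add(b)
        neighbors[b].add(a)
    ws = dict.fromkeys(nodes, 1.0)     # initial weights
    for _ in range(max_iter):
        new = {v: (1 - d) + d * sum(ws[u] / len(neighbors[u]) for u in neighbors[v])
               for v in nodes}
        if max(abs(new[v] - ws[v]) for v in nodes) < tol:
            return new                 # converged
        ws = new
    return ws

w = node_weights([("warm", "bright"), ("bright", "cozy")], ["warm", "bright", "cozy"])
# "bright" connects to both other words, so it receives the highest weight
```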
7. The intelligent color matching method according to claim 6, wherein establishing the BP neural network-based color matching model in step 2 specifically comprises:
step 2.1: establishing a color matching model based on the three-layer structure of the BP neural network: an input layer, a hidden layer, and an output layer;
step 2.2: introducing a training algorithm into the color matching model established in step 2.1, inputting the color block keywords of step 1 into the color matching model, the color matching model outputting color block RGB values;
step 2.3: comparing the color block RGB values of step 1 with the output of the color matching model in step 2.2, and adjusting the color matching model according to the comparison result;
step 2.4: setting a comparison similarity between the extracted RGB values and the output results, and cyclically executing steps 2.2-2.3 on different image sources until the comparison result exceeds the set comparison similarity, thereby completing the training of the color matching model.
8. The intelligent color matching method according to claim 7, wherein, in building and training the color matching model, the hidden layer of the BP neural network is a single hidden layer, and the number of hidden-layer nodes is set to 5.
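A numpy sketch of the training loop of claims 7-8: one hidden layer with 5 nodes, keyword one-hot vectors as inputs, and RGB values scaled to [0, 1] as targets. The keywords, RGB targets, learning rate, and iteration count are all invented for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
keywords = ["ocean", "forest", "sunset"]           # illustrative color block keywords
X = np.eye(len(keywords))                          # one-hot encoded inputs
Y = np.array([[0.11, 0.42, 0.63],                  # illustrative target RGB / 255
              [0.13, 0.47, 0.23],
              [0.93, 0.43, 0.30]])

# step 2.1: input layer (3) -> single hidden layer (5 nodes, per claim 8) -> output layer (3)
W1 = rng.normal(0.0, 0.5, (3, 5)); b1 = np.zeros(5)
W2 = rng.normal(0.0, 0.5, (5, 3)); b2 = np.zeros(3)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10000):                             # steps 2.2-2.4: iterate until close
    H = sigmoid(X @ W1 + b1)                       # forward pass
    out = sigmoid(H @ W2 + b2)
    err = out - Y                                  # step 2.3: compare with target RGB
    dZ2 = err * out * (1 - out)                    # backpropagate the error
    dZ1 = (dZ2 @ W2.T) * H * (1 - H)
    W2 -= 0.5 * H.T @ dZ2; b2 -= 0.5 * dZ2.sum(axis=0)
    W1 -= 0.5 * X.T @ dZ1; b1 -= 0.5 * dZ1.sum(axis=0)

pred = sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2)     # trained model output per keyword
```

With only three orthogonal inputs the network simply memorizes the training pairs; the value of the trained model in the patent comes from training on the much larger keyword/RGB sample set of the color semantic database.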
9. An intelligent color matching system, characterized in that it comprises:
the collection and processing module, used for acquiring pictures and picture details and extracting color blocks and keywords from the pictures, the color blocks corresponding one-to-one with the keywords; the method for acquiring the color blocks and color block keywords specifically comprises: acquiring image sources and image source details, and extracting pictures from the image sources to obtain the pictures and picture details, the image sources being static images, videos, and dynamic images, and the pictures being static images extracted frame by frame from the videos and dynamic images; performing cluster analysis on the obtained pictures using the K-means clustering algorithm to obtain picture color blocks; performing word segmentation on the picture details to obtain picture keywords; and associating the picture color blocks with the picture keywords to obtain the color block RGB values and the color block keywords; the method for acquiring the color blocks and color block keywords further comprises: searching literary works and, after word segmentation, converting them into color matching pictures, a color matching picture being a picture of the correspondence between color block RGB values and color block keywords;
the building module, used for establishing a color matching model based on the BP neural network;
the training module, used for taking the keywords from the collection and processing module as input data of the color matching model, taking the color blocks as output data of the color matching model, and training the color matching model;
the input/output module, used for inputting the keywords into the color matching model and sending the output result of the color matching model to the verification module;
the verification module, used for setting a practicality threshold for the color matching model and verifying, based on the color blocks extracted by the collection and processing module, whether the output result of the input/output module meets the practicality threshold;
the optimization module, used for optimizing the color matching model when the verification result indicates that the model is not practical, to obtain a final optimized color matching model whose output result meets the set threshold; and
the application module, used for identifying the client's demand keywords in the color matching picture and outputting a color matching scheme based on the final optimized color matching model and the user's demand keywords.
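A skeletal sketch of how the claim-9 modules could fit together; every module body here is a stand-in (a real implementation would use the K-means extraction, keyword graph, and BP network of claims 1-8):

```python
class ColorMatchingSystem:
    """Minimal stand-ins for the collection/processing, building/training,
    verification, and application modules of claim 9."""

    def __init__(self, semantic_db):
        # collection and processing module output: keyword -> color block RGB
        self.semantic_db = semantic_db
        self.model = None

    def train(self):
        # building + training modules (stub: a lookup table instead of a BP network)
        self.model = dict(self.semantic_db)

    def verify(self):
        # verification module (stub practicality check): model reproduces its samples
        return all(self.model.get(k) == rgb for k, rgb in self.semantic_db.items())

    def apply(self, demand_keywords):
        # application module: demand keywords -> color matching scheme
        return {kw: self.model[kw] for kw in demand_keywords if kw in self.model}

system = ColorMatchingSystem({"warm": (230, 140, 90), "calm": (120, 160, 200)})
system.train()
print(system.verify(), system.apply(["warm", "bold"]))
```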
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110738545.2A CN113345053B (en) | 2021-06-30 | 2021-06-30 | Intelligent color matching method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113345053A CN113345053A (en) | 2021-09-03 |
CN113345053B true CN113345053B (en) | 2023-12-26 |
Family
ID=77481871
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113743109B * | 2021-09-09 | 2024-03-29 | Zhejiang University of Technology | Product intelligent color matching design system based on user emotion |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106202042A (en) * | 2016-07-06 | 2016-12-07 | Minzu University of China | A graph-based keyword extraction method
CN108549626A (en) * | 2018-03-02 | 2018-09-18 | Guangdong Polytechnic Normal University | A keyword extraction method for MOOCs
CN111080739A (en) * | 2019-12-26 | 2020-04-28 | 山东浪潮通软信息科技有限公司 | BI color matching method and system based on BP neural network |
Non-Patent Citations (1)
Title |
---|
Zhang Shanwen. Image Pattern Recognition. Xi'an Electronic Science Press, 2020, pp. 108-109. *
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||