CN113345053A - Intelligent color matching method and system - Google Patents
Intelligent color matching method and system
- Publication number: CN113345053A (application number CN202110738545.2A)
- Authority: CN (China)
- Prior art keywords: color, color matching, matching model, keywords, picture
- Legal status: Granted (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed)
Classifications
- G06T11/40 — Filling a planar surface by adding surface attributes, e.g. colour or texture
- G06F18/214 — Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/22 — Matching criteria, e.g. proximity measures
- G06F18/23213 — Non-hierarchical clustering techniques with a fixed number of clusters, e.g. K-means clustering
- G06F40/279 — Recognition of textual entities
- G06F40/30 — Semantic analysis
- G06N3/084 — Backpropagation, e.g. using gradient descent
- G06T11/60 — Editing figures and text; Combining figures or text
Abstract
The invention relates to the technical field of color matching, and in particular to an intelligent color matching method and system. The method comprises the following steps. Step 1: establishing a color semantic database storing color-block RGB values and the keywords associated with each color block. Step 2: establishing a color matching model based on a BP neural network, taking the color-block keywords of step 1 as the model's input data and the color-block RGB values as its output data, and training the model. Step 3: verifying and optimizing the color matching model, using the color-block RGB values and color-block keywords in the color semantic database of step 1 as samples, to obtain an optimized color matching model. Step 4: inputting a customer's demand keywords into the model obtained in step 3; the model outputs the RGB values of the color blocks corresponding to the demand keywords, yielding a color matching scheme.
Description
Technical Field
The invention relates to the technical field of color matching, in particular to an intelligent color matching method and system.
Background
Color combination is one of the key elements of the design process. Excellent color matching increases the impact and expressiveness of a work, improves its attractiveness, and helps viewers better appreciate and understand it. A strong color matching ability is therefore essential for an excellent graphic designer. Colors in graphic design not only highlight the characteristics and theme of a work, but also beautify the layout, enhance the aesthetic appeal of the whole design, and leave a deeper impression on viewers.
With the rapid development of information technology, the era of big data has arrived. More and more governments and enterprises rely on multidimensional data analysis in their work, so data visualization tools are increasingly widely used. An excellent color scheme not only meets customers' aesthetic requirements but also highlights the information value of the data. A color-scheme recommendation system can provide designers with basic color schemes, help them output effect drawings quickly, and accelerate tool development.
Therefore, in view of the above disadvantages, the present invention provides an intelligent color matching method and system.
Disclosure of Invention
The present invention aims to provide an intelligent color matching method and system to solve the above problems in the prior art.
The invention provides an intelligent color matching method, which comprises the following steps. Step 1: establishing a color semantic database storing color-block RGB values and the keywords associated with each color block. Step 2: establishing a color matching model based on a BP neural network, taking the color-block keywords of step 1 as the model's input data and the color-block RGB values as its output data, and training the model. Step 3: verifying and optimizing the color matching model, using the color-block RGB values and color-block keywords in the color semantic database of step 1 as samples, to obtain an optimized color matching model. Step 4: inputting the customer's demand keywords into the model obtained in step 3; the model outputs the RGB values of the color blocks corresponding to the demand keywords, yielding a color matching scheme.
In the above intelligent color matching method, further preferably, the method for obtaining the color blocks and color-block keywords in step 1 specifically includes: step 11: acquiring an image source and image-source details, and extracting the pictures in the image source to obtain the pictures and their details; step 12: performing cluster analysis on the pictures obtained in step 11 with the K-means clustering algorithm to obtain the pictures' color blocks; step 13: performing word segmentation on the picture details to obtain the picture keywords; step 14: associating the picture color blocks with the picture keywords to obtain the color-block RGB values and color-block keywords.
In the above intelligent color matching method, it is further preferable that in step 11 the image sources are still pictures, videos, and animated pictures; the extracted pictures are still images, obtained frame by frame from the image frames of the videos and animated pictures.
In the above intelligent color matching method, further preferably, step 12 specifically includes: step 121: acquiring the picture from step 11, setting the number of colors to be extracted, randomly selecting several pixels from the picture as initial cluster centers, and setting the vector values of the cluster centers; step 122: assigning the pixels in the picture to the nearest initial cluster centers by the Euclidean distance method; step 123: recomputing the vector value of each cluster center and the mean sample vector of each cluster from step 122, and calculating the sum of squared errors from the cluster-center vector values and the cluster sample means; step 124: if the absolute difference of the sum of squared errors between two adjacent iterations satisfies a preset condition, the algorithm has converged and the calculation ends; otherwise, returning to step 121 and recomputing the cluster centers; step 125: repeating the above steps iteratively and reclassifying until the algorithm converges, whereupon the preset n dominant color blocks are extracted as the resulting color scheme.
In the above intelligent color matching method, it is further preferable that the mean sample vector of a cluster is calculated by the following formula:

P = (1/n) Σ_{i=1}^{n} P_i

where P is the mean sample vector of the cluster and P_i is the i-th clustered sample, i ∈ [1, n].

The sum of squared errors is calculated from the following equation:

SSE = Σ_{i=1}^{n} Σ_{p ∈ C_i} ‖p − m_i‖²

where SSE is the sum of squared errors; m_i is the i-th cluster center point, i ∈ [1, n]; p is a clustered sample, p ∈ C_i, where C_i denotes the i-th cluster, there being n clusters in total.
In the above intelligent color matching method, further preferably, step 13 specifically includes: step 131: segmenting the source-image details, performing word segmentation and part-of-speech tagging on each sentence, and retaining words with specified parts of speech as candidate keywords; the candidate keywords form a keyword graph model, with each candidate keyword as a node of the graph; step 132: calculating the weight of each node in the keyword graph model and taking the top-weighted nodes as candidate labels; if several candidate labels form an adjacent phrase, combining them into a multi-word label; step 133: selecting the candidate labels or multi-word labels from step 132 as the demand keywords.
In the above intelligent color matching method, preferably, step 132 specifically includes: step 1321: constructing an edge between any two co-occurring nodes and adding both endpoint nodes to the graph model, forming an undirected, unweighted graph; step 1322: determining the weight contributions of the word position, part of speech, and domain features of each node in the undirected graph, and obtaining a comprehensive weight through multi-feature fusion.
In the above intelligent color matching method, it is further preferable that, in step 1322, the comprehensive weight of each node is calculated by the following formula:

WS(V_i) = (1 − d) + d × Σ_{V_j ∈ In(V_i)} [ ω_ji / ( Σ_{V_k ∈ Out(V_j)} ω_jk ) ] × WS(V_j)

where WS(V_i) is the weight of node V_i and WS(V_j) is the weight of node V_j; d is a damping factor, representing the probability that any node in the graph jumps to another node; In(V_i) is the set of all nodes pointing to node V_i; Out(V_j) is the set of all nodes pointed to by node V_j; ω_ji is the weight of the edge from node V_j to node V_i, and ω_jk is the weight of the edge from node V_j to node V_k.
In the above intelligent color matching method, it is further preferable that establishing the color matching model based on the BP neural network in step 2 specifically includes: step 2.1: establishing a color matching model based on the three-layer structure of the BP neural network (input layer, hidden layer, and output layer); step 2.2: introducing a training algorithm into the model established in step 2.1, inputting the color-block keywords from step 1 into the model, and having the model output color-block RGB values; step 2.3: comparing the color-block RGB values from step 1 with the model output from step 2.2, and adjusting the model according to the comparison result; step 2.4: setting a similarity threshold between the extracted RGB values and the output results, and repeating steps 2.2 to 2.3 on different source images until the comparison result exceeds the set similarity, thereby completing the training of the color matching model.
In the above-described intelligent color matching method, it is further preferable that, in the process of creating and training the color matching model, the hidden layer of the BP neural network is a single hidden layer, and the number of hidden nodes is set to 5.
The invention also discloses an intelligent color matching system, which comprises: a collection processing module, which acquires pictures and picture details and extracts the pictures' color blocks and keywords, the color blocks corresponding one-to-one with the keywords; a building module, which builds a color matching model based on the BP neural network; a training module, which trains the color matching model with the collection processing module's keywords as input data and the color blocks as output data; an input/output module, which inputs keywords into the color matching model and forwards the model's output to the setting-and-verification module; a setting-and-verification module, which sets a practicality threshold for the color matching model and verifies, against the color blocks extracted by the collection processing module, whether the output of the input/output module meets that threshold; an optimization module, which, when the verification result fails the practicality check, optimizes the color matching model until the output of the final optimized model meets the set threshold; and an application module, which outputs a color matching scheme based on the final optimized color matching model and the user's demand keywords.
Compared with the prior art, the invention has the following advantages:
According to the invention, a color matching model is trained on the details of existing pictures and the RGB values of their dominant color schemes. The model can then automatically derive a color matching scheme from a client's requirements, providing UI designers with a basic color matching basis and improving the efficiency of producing effect drawings.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
FIG. 1 is a flow chart of an intelligent color matching method according to the present invention;
FIG. 2 is a flow chart of the color matching model in the present invention.
Detailed Description
As shown in fig. 1-2, the present embodiment discloses an intelligent color matching method, which includes the following steps:
step 1: establishing a color semantic database, wherein color block RGB values and color block keywords related to color blocks are stored in the color semantic database;
step 2: establishing a color matching model based on a BP neural network, taking color block keywords in the step 1 as input data of the color matching model, taking color block RGB values as output data of the color matching model, and training the color matching model;
step 3: verifying and optimizing the color matching model, using the color-block RGB values and color-block keywords in the color semantic database of step 1 as samples, to obtain an optimized color matching model;
step 4: inputting the customer's demand keywords into the color matching model obtained in step 3; the model outputs the RGB values of the color blocks corresponding to the demand keywords, yielding a color matching scheme.
Further, the method for acquiring color blocks and color block keywords in step 1 specifically includes:
step 11: acquiring an image source and image source details, and extracting a picture in the image source to obtain the picture and the picture details;
step 12: performing cluster analysis on the picture obtained in step 11 with the K-means clustering algorithm to obtain the picture's color blocks;
step 13: performing word segmentation processing on the picture details to obtain picture keywords;
step 14: associating the picture color blocks with the picture keywords to obtain the color-block RGB values and color-block keywords.
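The database construction of steps 11 to 14 can be sketched minimally as follows: a keyword-to-RGB-list mapping serves as the "color semantic database". The dominant color blocks are hard-coded toy values standing in for K-means output, a naive whitespace split stands in for real word segmentation, and every name and RGB value here is invented for illustration.

```python
# A sketch of steps 11-14: build the "color semantic database" as a
# keyword -> list-of-RGB mapping. All example data are hypothetical.

def build_color_semantic_db(pictures):
    db = {}
    for details, color_blocks in pictures:
        keywords = details.lower().split()      # step 13: segment picture details
        for kw in keywords:                     # step 14: associate keywords with blocks
            db.setdefault(kw, []).extend(color_blocks)
    return db

pictures = [
    ("spring forest", [(34, 139, 34), (107, 142, 35)]),
    ("sunrise river", [(255, 99, 71), (70, 130, 180)]),
    ("spring river", [(60, 179, 113), (100, 149, 237)]),
]
db = build_color_semantic_db(pictures)
print(db["spring"])  # color blocks gathered from both "spring" pictures
```

Looking up a demand keyword in such a mapping then directly yields candidate color blocks for that keyword.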
Further, in step 11, the image sources are still pictures, videos, and animated pictures; the extracted pictures are still images, obtained frame by frame from the image frames of the videos and animated pictures.
Further, step 12 specifically includes:
step 121: acquiring the picture in the step 11, setting the number of colors to be extracted, randomly selecting a plurality of pixels from the picture as an initial clustering center, and setting a vector value of the clustering center;
step 122: assigning the pixels in the picture to the nearest initial cluster centers by the Euclidean distance method;
step 123: recomputing the vector value of each cluster center and the mean sample vector of each cluster from step 122, and calculating the sum of squared errors from the cluster-center vector values and the cluster sample means;
step 124: if the absolute difference of the sum of squared errors between two adjacent iterations satisfies the preset condition, the algorithm has converged and the calculation ends; otherwise, returning to step 121 and recomputing the cluster centers;
step 125: repeating the above steps iteratively and reclassifying until the algorithm converges, whereupon the preset n dominant color blocks are extracted as the resulting color scheme.
Specifically, in step 121, the number of dominant colors to be extracted is set to n, the initial cluster centers are selected, and their vector values are set to:

Z_j(I), j = 1, 2, …, k

where Z_j(I) denotes the vector value of the j-th cluster center at iteration I, and the number of clusters k equals the number of required color combinations; for example, if 5 color combinations are required, k = 5.

In step 122, the distance between each pixel in the picture and every initial cluster center is calculated, and each pixel is assigned to the nearest cluster center, that is:

if D(x_i, Z_k(I)) = min{ D(x_i, Z_j(I)), j = 1, 2, …, k }, then x_i ∈ w_k.

In step 123, the new vector value of each cluster center is the mean of the sample vectors of its cluster:

Z_j(I+1) = (1/n_j) Σ_{x ∈ w_j} x

where n_j is the number of samples in cluster w_j, and the sum of squared errors is calculated by:

J_c(I) = Σ_{j=1}^{k} Σ_{x ∈ w_j} ‖x − Z_j(I)‖².

In step 124, if |J_c(I) − J_c(I−1)| < ξ, j = 1, 2, …, k, the algorithm has converged and the calculation ends; otherwise, the procedure returns to the assignment step and the new cluster centers are computed.
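The K-means procedure of steps 121 to 125 can be sketched in Python roughly as follows; the function name, the toy pixel data, and the convergence threshold are illustrative assumptions, not part of the patent.

```python
import numpy as np

def dominant_colors(pixels, n_colors=5, xi=1e-4, max_iter=100, seed=0):
    """Extract dominant colors from an (N, 3) array of RGB pixels with
    plain K-means, following steps 121-125: random initial centers,
    Euclidean-distance assignment, center recomputation, and stopping
    when the SSE changes by less than xi between iterations."""
    rng = np.random.default_rng(seed)
    pixels = np.asarray(pixels, dtype=float)
    # step 121: randomly pick n_colors pixels as the initial cluster centers
    centers = pixels[rng.choice(len(pixels), n_colors, replace=False)]
    prev_sse = None
    for _ in range(max_iter):
        # step 122: assign each pixel to its nearest center (Euclidean distance)
        dists = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # step 123: recompute each center as the mean vector of its cluster
        for j in range(n_colors):
            members = pixels[labels == j]
            if len(members):
                centers[j] = members.mean(axis=0)
        sse = ((pixels - centers[labels]) ** 2).sum()  # sum of squared errors
        # step 124: converged once the SSE barely changes between iterations
        if prev_sse is not None and abs(prev_sse - sse) < xi:
            break
        prev_sse = sse
    # step 125: the converged centers are the extracted dominant color blocks
    return centers.round().astype(int), labels

# toy "picture": three tight pixel clouds around reddish, greenish, bluish colors
rng = np.random.default_rng(1)
pix = np.vstack([
    rng.normal([220, 40, 40], 5, (100, 3)),
    rng.normal([40, 200, 60], 5, (100, 3)),
    rng.normal([40, 60, 220], 5, (100, 3)),
])
palette, labels = dominant_colors(pix, n_colors=3)
print(palette)  # a 3 x 3 array: one RGB center per extracted dominant color
```

As in the patent, a poor random initialization can land two centers in one color cloud, which is why the iteration and convergence check of steps 123 to 125 are needed.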
further, step 13 specifically includes:
step 131: segmenting the source-image details, performing word segmentation and part-of-speech tagging on each sentence, and retaining words with specified parts of speech as candidate keywords; the candidate keywords form a keyword graph model, with each candidate keyword as a node of the graph;
step 132: calculating the weight of each node in the keyword graph model and taking the top-weighted nodes as candidate labels; if several candidate labels form an adjacent phrase, combining them into a multi-word label;
step 133: selecting the candidate labels or multi-word labels from step 132 as the demand keywords.
Specifically, step 132 specifically includes:
step 1321: constructing an edge between any two co-occurring nodes and adding both endpoint nodes to the graph model, forming an undirected, unweighted graph;
step 1322: determining the weight contributions of the word position, part of speech, and domain features of each node in the undirected graph, and obtaining a comprehensive weight through multi-feature fusion.
Specifically, in step 1322, the comprehensive weight of each node is calculated by the following formula:

WS(V_i) = (1 − d) + d × Σ_{V_j ∈ In(V_i)} [ ω_ji / ( Σ_{V_k ∈ Out(V_j)} ω_jk ) ] × WS(V_j)

where WS(V_i) is the weight of node V_i and WS(V_j) is the weight of node V_j; d is a damping factor, representing the probability that any node in the graph jumps to another node; In(V_i) is the set of all nodes pointing to node V_i; Out(V_j) is the set of all nodes pointed to by node V_j; V_k denotes a node pointed to by V_j; ω_ji is the weight of the edge from node V_j to node V_i, and ω_jk is the weight of the edge from node V_j to node V_k.
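The node-weight iteration above is essentially the TextRank formula. A minimal sketch follows, assuming unit edge weights (so the edge-weight ratio reduces to 1/|Out(V_j)|) and omitting the patent's multi-feature fusion of word position, part of speech, and domain; the function name and toy word list are invented.

```python
from collections import defaultdict

def textrank_keywords(words, window=2, d=0.85, iters=50, top_k=3):
    """Minimal TextRank over a co-occurrence graph (steps 1321-1322):
    words co-occurring within `window` positions share an undirected
    edge, and node weights are iterated with the damping-factor
    formula. All edge weights are 1 here."""
    # step 1321: build the undirected, unweighted co-occurrence graph
    neighbors = defaultdict(set)
    for i, w in enumerate(words):
        for j in range(i + 1, min(i + window + 1, len(words))):
            if words[j] != w:
                neighbors[w].add(words[j])
                neighbors[words[j]].add(w)
    # step 1322: iterate WS(Vi) = (1 - d) + d * sum over neighbors Vj of
    # WS(Vj) / |Out(Vj)| (unit edge weights, so the ratio simplifies)
    ws = {w: 1.0 for w in neighbors}
    for _ in range(iters):
        ws = {w: (1 - d) + d * sum(ws[v] / len(neighbors[v]) for v in neighbors[w])
              for w in neighbors}
    # the top-weighted nodes become the candidate labels (step 132)
    return sorted(ws, key=ws.get, reverse=True)[:top_k]

words = "spring river water green river flower fire sunrise river".split()
print(textrank_keywords(words))  # 'river' ranks first: it has the most co-occurrences
```

In the patent, the per-node feature weights would replace the unit edge weights before the iteration is run.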
Further, the step 2 of establishing a color matching model based on the BP neural network specifically includes:
step 2.1: establishing a color matching model based on the three-layer structure of the BP neural network (input layer, hidden layer, and output layer);
step 2.2: introducing a training algorithm into the model established in step 2.1, inputting the color-block keywords from step 1 into the model, and having the model output color-block RGB values;
step 2.3: comparing the color-block RGB values from step 1 with the model output from step 2.2, and adjusting the model according to the comparison result;
step 2.4: setting a similarity threshold between the extracted RGB values and the output results, and repeating steps 2.2 to 2.3 on different source images until the comparison result exceeds the set similarity, thereby completing the training of the color matching model.
Specifically, in the process of establishing and training the color matching model, the hidden layer of the BP neural network is a single hidden layer, and the number of hidden-layer nodes is calculated according to the following formula:

m = √(n + l) + α

where m is the number of hidden-layer nodes, n is the number of input-layer nodes, l is the number of output-layer nodes, and α is a constant between 1 and 10; here α is taken as 7.5.
In this embodiment, the number of hidden nodes is set to 5, the number of input nodes is 3, and the number of output nodes is 3; the total number of weights and thresholds of the neural network is then n_ω = 3×5 + 5 + 5×3 + 3 = 38.
Color blocks and color-block keywords are selected from the color semantic database as training samples. With P training samples and a given training error ε, the number of samples satisfies:

P = n_ω / ε

where n_ω is the total number of weights and thresholds of the neural network, P is the number of training samples, and ε is the given training error.
A neural network model is constructed with 2000 iteration steps, a training target of 0.001, the trainlm training function, the logsig transfer function for hidden nodes, and the purelin transfer function for output nodes, where the purelin function is y = x.
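A dependency-free sketch of the 3-5-3 BP network described above, with logsig hidden units and a purelin (identity) output, is given below. Plain gradient descent is substituted for MATLAB's trainlm (Levenberg-Marquardt), and the keyword features and RGB targets are random toy values, so this illustrates the network structure rather than the patent's exact training procedure.

```python
import numpy as np

def logsig(x):
    """Hidden-node transfer function named in the patent (the logistic sigmoid)."""
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
# 3-5-3 structure: 3 input nodes, 5 hidden nodes, 3 output (RGB) nodes
W1, b1 = rng.normal(0.0, 0.5, (5, 3)), np.zeros(5)   # input -> hidden
W2, b2 = rng.normal(0.0, 0.5, (3, 5)), np.zeros(3)   # hidden -> output

# toy samples: 3-dimensional keyword feature vectors -> RGB values scaled to [0, 1]
X = rng.random((20, 3))
Y = rng.random((20, 3))

lr, losses = 0.2, []
for epoch in range(2000):                  # iteration steps as in the patent
    H = logsig(X @ W1.T + b1)              # hidden layer (logsig)
    O = H @ W2.T + b2                      # output layer (purelin: y = x)
    E = O - Y
    losses.append(float((E ** 2).mean()))
    # backpropagation of the error, plain gradient-descent updates
    dW2 = E.T @ H / len(X); db2 = E.mean(axis=0)
    dH = (E @ W2) * H * (1.0 - H)          # derivative of logsig is H * (1 - H)
    dW1 = dH.T @ X / len(X); db1 = dH.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

print(f"training loss: {losses[0]:.4f} -> {losses[-1]:.4f}")
```

Scaling RGB targets to [0, 1] keeps them in a range the small network can fit; outputs would be multiplied by 255 to recover color-block RGB values.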
As shown in FIG. 1, the customer's demand keywords may be split into keywords, style, industry factors, and time.
In addition, characters, literary works, poems, and ci-poem titles are stored in the color semantic database. By performing word segmentation on such text and matching the results against the color schemes in the database, a color scheme or picture for a specific scene or phrase can be obtained. For example, for the poem line "At sunrise the river flowers are redder than fire; in spring the river water is as green as indigo", the color schemes or pictures matching the scene the poem describes are retrieved after word segmentation, directly converting the poem into an image-like picture.
Further, this embodiment also discloses an intelligent color matching system, which includes:
the collection processing module is used for acquiring pictures and picture details and extracting color blocks and keywords of the pictures, wherein the color blocks correspond to the keywords one by one;
the building module builds a color matching model based on the BP neural network;
the training module is used for training the color matching model by taking the keywords of the collection processing module as input data of the color matching model and taking the color blocks as output data of the color matching model;
the input/output module, for inputting keywords into the color matching model and forwarding the output of the color matching model to the setting-and-verification module;
the setting-and-verification module, for setting a practicality threshold for the color matching model and verifying, based on the color blocks extracted by the collection processing module, whether the output of the input/output module meets that threshold;
the optimization module, for optimizing the color matching model when the verification result fails the practicality check, obtaining a final optimized color matching model whose output meets the set threshold;
and the application module is used for outputting the color matching scheme based on the final optimized color matching model and the user requirement keywords.
Finally, it should be noted that: the above embodiments are only used to illustrate the technical solution of the present invention, and not to limit the same; while the invention has been described in detail and with reference to the foregoing embodiments, it will be understood by those skilled in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (10)
1. An intelligent color matching method is characterized by comprising the following steps:
step 1: establishing a color semantic database, wherein color block RGB values and color block keywords related to color blocks are stored in the color semantic database;
step 2: establishing a color matching model based on a BP neural network, taking color block keywords in the step 1 as input data of the color matching model, taking color block RGB values as output data of the color matching model, and training the color matching model;
step 3: verifying and optimizing the color matching model, using the color-block RGB values and color-block keywords in the color semantic database of step 1 as samples, to obtain an optimized color matching model;
step 4: inputting the customer's demand keywords into the color matching model obtained in step 3; the model outputs the RGB values of the color blocks corresponding to the demand keywords, yielding a color matching scheme.
2. The intelligent color matching method according to claim 1, wherein the method for obtaining color blocks and color block keywords in step 1 specifically comprises:
step 11: acquiring an image source and image source details, and extracting a picture in the image source to obtain the picture and the picture details;
step 12: performing cluster analysis on the picture obtained in step 11 with the K-means clustering algorithm to obtain the picture's color blocks;
step 13: performing word segmentation processing on the picture details to obtain picture keywords;
step 14: associating the picture color blocks with the picture keywords to obtain the color-block RGB values and color-block keywords.
3. The intelligent color matching method according to claim 2,
the step 12 specifically includes:
step 121: acquiring the picture from step 11, setting the number of colors to be extracted, randomly selecting a plurality of pixels from the picture as the initial clustering centers, and setting the vector value of each clustering center;
step 122: matching pixels in the picture with an initial clustering center by using an Euclidean distance method;
step 123: updating the vector value of each cluster center to the average value of the sample vectors of the cluster obtained in step 122, and calculating the sum of squared errors from the cluster center vector values and the cluster sample vector averages;
step 124: if the absolute difference between the sums of squared errors of two adjacent iterations satisfies a preset condition, the algorithm has converged and the calculation ends; otherwise, returning to step 121 and recalculating the cluster centers;
step 125: repeating the above steps, iterating and reclassifying until the algorithm converges, at which point the preset n classes of main color blocks have been extracted, and these main color blocks constitute the resulting color matching set.
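Steps 121-125 describe standard K-means clustering over pixel vectors. The following is a minimal sketch under that reading; the `init` parameter, iteration cap, and tolerance are illustrative assumptions added for reproducibility, not part of the claim.

```python
import random

def kmeans_colors(pixels, n_colors, init=None, max_iter=100, tol=1e-4):
    """Extract n dominant color blocks from a list of RGB pixels (steps 121-125)."""
    # step 121: random initial cluster centers (or caller-supplied, for reproducibility)
    centers = list(init) if init else random.sample(pixels, n_colors)
    prev_sse = None
    for _ in range(max_iter):
        # step 122: match each pixel to its nearest center by Euclidean distance
        clusters = [[] for _ in centers]
        for p in pixels:
            dists = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centers]
            clusters[dists.index(min(dists))].append(p)
        # step 123: move each center to the mean vector of its cluster
        for i, cl in enumerate(clusters):
            if cl:
                centers[i] = tuple(sum(ch) / len(cl) for ch in zip(*cl))
        # sum of squared errors of the current partition (the SSE of claim 4)
        sse = sum(sum((a - b) ** 2 for a, b in zip(p, centers[i]))
                  for i, cl in enumerate(clusters) for p in cl)
        # step 124: converge when the SSE change between iterations is small
        if prev_sse is not None and abs(prev_sse - sse) < tol:
            break
        prev_sse = sse
    return [tuple(round(ch) for ch in c) for c in centers]
```

For a picture with two well-separated color regions, the returned centers settle on the two region means, which become the extracted main color blocks of step 125.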
4. The intelligent color matching method of claim 3,
the average value of the clustered sample vectors is calculated by the following formula:

P = (1/n) · Σᵢ Pᵢ

where P is the average value of the cluster's sample vectors, Pᵢ is the i-th cluster sample, and i ∈ [1, n];
The sum of squared errors is calculated from the following equation:

SSE = Σᵢ₌₁ⁿ Σ_{p ∈ Cᵢ} ‖p − mᵢ‖²

where SSE is the sum of squared errors; mᵢ is the i-th cluster center point, i ∈ [1, n]; p is a cluster sample with p ∈ Cᵢ, where Cᵢ denotes the i-th of the n clusters.
5. The intelligent color matching method according to claim 4, wherein step 13 specifically comprises:
step 131: cutting source image details, performing word segmentation and part-of-speech tagging on each sentence, reserving words with specified parts-of-speech to form candidate keywords, forming a keyword graph model by a plurality of candidate keywords, and taking each candidate keyword as a node forming the graph model;
step 132: calculating the weight of each node in the keyword graph model, and acquiring a plurality of nodes according to the weight order to form a candidate label; if the candidate labels form adjacent phrases, combining the multi-word labels;
step 133: and selecting the candidate labels or multi-word labels obtained in step 132 as the requirement keywords.
6. The intelligent color matching method according to claim 5, wherein step 132 specifically comprises:
step 1321: constructing an edge between any two co-occurring nodes, and adding each pair of nodes joined by an edge to the graph model to form an undirected, unweighted graph;
step 1322: determining the weight contributions of the word position, part of speech, and domain features of each node in the undirected graph, and obtaining a comprehensive weight through multi-feature fusion.
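A minimal sketch of the co-occurrence graph construction of step 1321, assuming (as is conventional for such keyword graphs, though the claim does not fix it) that "co-occurrence" means two candidate words appearing within a sliding window over the token sequence:

```python
def build_cooccurrence_graph(words, window=2):
    """Step 1321 (sketch): build an undirected, unweighted graph whose
    nodes are candidate keywords and whose edges join words that
    co-occur within the given window."""
    edges = set()
    for i in range(len(words)):
        for j in range(i + 1, min(i + window, len(words))):
            if words[i] != words[j]:
                edges.add(frozenset((words[i], words[j])))
    # adjacency-set representation of the undirected graph
    graph = {}
    for e in edges:
        a, b = tuple(e)
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set()).add(a)
    return graph
```

The window size of 2 (adjacent words only) is an illustrative choice; larger windows yield denser graphs.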
7. The intelligent color matching method according to claim 6, wherein in step 1322, the comprehensive weight of each node is calculated using the following formula:

WS(Vᵢ) = (1 − d) + d · Σ_{Vⱼ ∈ In(Vᵢ)} [ ωⱼᵢ / Σ_{Vₖ ∈ Out(Vⱼ)} ωⱼₖ ] · WS(Vⱼ)

where WS(Vᵢ) represents the weight of node Vᵢ and WS(Vⱼ) represents the weight of node Vⱼ;
d is a damping factor representing the probability that any node in the graph jumps to another node;
In(Vᵢ) denotes the set of all nodes pointing to node Vᵢ; Out(Vⱼ) denotes the set of all nodes that node Vⱼ points to;
ωⱼᵢ is the weight of the edge from node Vⱼ to node Vᵢ; ωⱼₖ is the weight of the edge from node Vⱼ to node Vₖ.
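The node-weight formula of claim 7 is the TextRank/PageRank recurrence. A minimal sketch of its fixed-point iteration follows; for the undirected, unweighted graph of claim 6, In(V) = Out(V) = the neighbour set and every edge weight ω is 1, which is the simplification assumed here.

```python
def textrank_weights(graph, d=0.85, iters=50):
    """Iterate WS(Vi) = (1-d) + d * sum over Vj in In(Vi) of
    [w_ji / sum over Vk in Out(Vj) of w_jk] * WS(Vj).
    graph: {node: set_of_neighbours} (undirected, unweighted, so w = 1)."""
    ws = {v: 1.0 for v in graph}  # uniform initial weights
    for _ in range(iters):
        new = {}
        for vi in graph:
            # each neighbour Vj distributes its weight evenly over its edges
            contrib = sum(ws[vj] / len(graph[vj]) for vj in graph[vi])
            new[vi] = (1 - d) + d * contrib
        ws = new
    return ws
```

In step 132 the highest-weighted nodes would then be taken, in weight order, as candidate labels.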
8. The intelligent color matching method according to claim 7, wherein the establishing of the color matching model based on the BP neural network in the step 2 specifically comprises:
step 2.1: establishing a color matching model based on three-layer structures of an input layer, a hidden layer and an output layer of the BP neural network;
step 2.2: introducing a training algorithm into the color matching model established in the step 2.1, inputting the color block keywords in the step 1 into the color matching model, and outputting the RGB values of the color blocks by the color matching model;
step 2.3: comparing the color block RGB values from step 1 with the output results of the color matching model in step 2.2, and adjusting the color matching model according to the comparison result;
step 2.4: setting a similarity threshold between the extracted RGB values and the output results, and cyclically executing steps 2.2 to 2.3 on different source images until the comparison result exceeds the set similarity threshold, thereby completing the training of the color matching model.
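A minimal NumPy sketch of the three-layer BP training loop of steps 2.1-2.4, with the single hidden layer of 5 nodes specified in claim 9. The one-hot keyword encoding, sigmoid activations, learning rate, and epoch count are illustrative assumptions; the claims do not fix them.

```python
import numpy as np

def train_color_model(keyword_vecs, rgb_targets, hidden=5, lr=1.0,
                      epochs=10000, seed=0):
    """Steps 2.1-2.4 (sketch): train input-hidden-output BP network mapping
    keyword vectors to RGB values (scaled to [0, 1] for the sigmoid output)."""
    rng = np.random.default_rng(seed)
    X = np.asarray(keyword_vecs, dtype=float)
    Y = np.asarray(rgb_targets, dtype=float) / 255.0
    W1 = rng.normal(0.0, 0.5, (X.shape[1], hidden))  # input -> hidden
    W2 = rng.normal(0.0, 0.5, (hidden, 3))           # hidden -> RGB output
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W1)                   # hidden activations
        O = sig(H @ W2)                   # predicted RGB in [0, 1]
        dO = (O - Y) * O * (1 - O)        # output-layer error gradient (MSE)
        dH = (dO @ W2.T) * H * (1 - H)    # error back-propagated to hidden layer
        W2 -= lr * H.T @ dO               # step 2.3: adjust the model
        W1 -= lr * X.T @ dH
    # step 4 of claim 1: map a keyword vector to a color matching scheme
    return lambda x: np.round(sig(sig(np.asarray(x, dtype=float) @ W1) @ W2) * 255)
```

With two one-hot keyword vectors targeting pure red and pure blue, the trained predictor pushes the corresponding channels toward 255 and the others toward 0, illustrating the similarity-threshold training loop of step 2.4.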
9. The intelligent color matching method according to claim 8, wherein in the process of building and training the color matching model, the hidden layer of the BP neural network is a single hidden layer, and the number of hidden nodes is set to 5.
10. An intelligent color matching system, comprising:
the collection processing module is used for acquiring pictures and picture details and extracting color blocks and keywords of the pictures, wherein the color blocks and the keywords correspond one to one;
the building module builds a color matching model based on the BP neural network;
the training module is used for training the color matching model by taking the keywords of the collection processing module as input data of the color matching model and taking the color blocks as output data of the color matching model;
the input/output module is used for inputting the keywords into the color matching model and further sending the output result of the color matching model to the setting verification module;
the setting and verifying module is used for setting a practicability threshold for the color matching model and verifying, based on the color blocks extracted by the collection processing module, whether the output result of the input/output module meets the practicability threshold;
the optimization module is used for optimizing the color matching model when the verification result of the setting and verifying module does not meet the practicability threshold, so as to obtain a final optimized color matching model whose output results meet the set threshold;
and the application module is used for outputting the color matching scheme based on the final optimized color matching model and the user requirement keywords.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110738545.2A CN113345053B (en) | 2021-06-30 | 2021-06-30 | Intelligent color matching method and system |
Publications (2)
Publication Number | Publication Date |
---|---|
CN113345053A true CN113345053A (en) | 2021-09-03 |
CN113345053B CN113345053B (en) | 2023-12-26 |
Family
ID=77481871
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202110738545.2A Active CN113345053B (en) | 2021-06-30 | 2021-06-30 | Intelligent color matching method and system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN113345053B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106202042A (en) * | 2016-07-06 | 2016-12-07 | 中央民族大学 | A kind of keyword abstraction method based on figure |
CN108549626A (en) * | 2018-03-02 | 2018-09-18 | 广东技术师范学院 | A kind of keyword extracting method for admiring class |
CN111080739A (en) * | 2019-12-26 | 2020-04-28 | 山东浪潮通软信息科技有限公司 | BI color matching method and system based on BP neural network |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN113743109A (en) * | 2021-09-09 | 2021-12-03 | 浙江工业大学 | Product intelligent color matching design system based on user emotion |
CN113743109B (en) * | 2021-09-09 | 2024-03-29 | 浙江工业大学 | Product intelligent color matching design system based on user emotion |
Also Published As
Publication number | Publication date |
---|---|
CN113345053B (en) | 2023-12-26 |
Similar Documents
Publication | Title |
---|---|
CN111858954B (en) | Task-oriented text-generated image network model | |
WO2023093574A1 (en) | News event search method and system based on multi-level image-text semantic alignment model | |
US11106951B2 (en) | Method of bidirectional image-text retrieval based on multi-view joint embedding space | |
CN108197111B (en) | Text automatic summarization method based on fusion semantic clustering | |
CN112818906B (en) | Intelligent cataloging method of all-media news based on multi-mode information fusion understanding | |
CN111488734A (en) | Emotional feature representation learning system and method based on global interaction and syntactic dependency | |
WO2023065617A1 (en) | Cross-modal retrieval system and method based on pre-training model and recall and ranking | |
CN111027595B (en) | Double-stage semantic word vector generation method | |
CN114743020B (en) | Food identification method combining label semantic embedding and attention fusion | |
CN110619051B (en) | Question sentence classification method, device, electronic equipment and storage medium | |
CN112100346A (en) | Visual question-answering method based on fusion of fine-grained image features and external knowledge | |
CN114048350A (en) | Text-video retrieval method based on fine-grained cross-modal alignment model | |
CN112004111A (en) | News video information extraction method for global deep learning | |
CN102222101A (en) | Method for video semantic mining | |
CN114419304A (en) | Multi-modal document information extraction method based on graph neural network | |
CN110008365B (en) | Image processing method, device and equipment and readable storage medium | |
CN112148886A (en) | Method and system for constructing content knowledge graph | |
CN113408581A (en) | Multi-mode data matching method, device, equipment and storage medium | |
CN111985520A (en) | Multi-mode classification method based on graph convolution neural network | |
Pham et al. | Cross-media alignment of names and faces | |
CN116610778A (en) | Bidirectional image-text matching method based on cross-modal global and local attention mechanism | |
CN115115745A (en) | Method and system for generating self-created digital art, storage medium and electronic device | |
CN116662565A (en) | Heterogeneous information network keyword generation method based on contrast learning pre-training | |
CN113345053A (en) | Intelligent color matching method and system | |
CN114048314A (en) | Natural language steganalysis method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | PB01 | Publication | |
 | SE01 | Entry into force of request for substantive examination | |
 | GR01 | Patent grant | |