CN114926585B - Method for copying blind image based on 3D ink layer thickness and chromaticity algorithm - Google Patents
Method for copying blind image based on 3D ink layer thickness and chromaticity algorithm
- Publication number
- CN114926585B CN114926585B CN202210544773.0A CN202210544773A CN114926585B CN 114926585 B CN114926585 B CN 114926585B CN 202210544773 A CN202210544773 A CN 202210544773A CN 114926585 B CN114926585 B CN 114926585B
- Authority
- CN
- China
- Prior art keywords
- image
- training
- algorithm
- height
- chromaticity
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G06T 17/00 — Three-dimensional [3D] modelling, e.g. data description of 3D objects
- B29C 64/386 — Data acquisition or data processing for additive manufacturing
- B33Y 50/00 — Data acquisition or data processing for additive manufacturing
- G06F 18/214 — Generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06N 3/044 — Recurrent networks, e.g. Hopfield networks
- G06N 3/045 — Combinations of networks
- G06N 3/084 — Backpropagation, e.g. using gradient descent
Abstract
The invention provides a method for reproducing images for the blind based on a 3D ink-layer thickness and chromaticity algorithm, and belongs to the technical field of 3D printing. The technical problem to be solved is to provide an improved method of reproducing images for the blind based on a 3D ink-layer thickness and chromaticity algorithm. The technical scheme adopted is as follows: the original is divided by color into cyan, magenta, yellow and neutral regions; each region is subdivided by height and labeled in sequence with a structured marking method; the chromaticity values of the different regions of the original are measured; a model between 3D ink-layer thickness and chromaticity is built by MATLAB coding; the model predicts the height values of the original; and 3D modeling is carried out from the height data and printed. The invention is applied to reproducing pictures by 3D printing.
Description
Technical Field
The invention provides a method for copying blind images based on a 3D ink layer thickness and a chromaticity algorithm, and belongs to the technical field of 3D printing.
Background
As an innovative digital manufacturing technology, 3D printing is applied to personalized product customization across many industries, and as the color 3D printing process is optimized, customized products become more lifelike and less costly. Reproducing images for the blind by 3D printing, using a correlation model between 3D ink-layer thickness and chromaticity, promotes the development of full-color 3D printing technology.
The eye is the medium through which the human visual system acquires information, which is integrated by the visual nerves of the brain; a person with a visual impairment cannot acquire complete information, and a cognitive deficit results. The world in a blind person's eyes is not "dark": "black" is the visual concept produced when no visible light enters the visual field, and congenitally totally blind patients have neither a concept of color nor a concept of "seeing". At present, blind people can read characters by touch, but cannot perceive the tonal gradation and colors of an image; by constructing a correlation model between 3D ink-layer thickness and chromaticity, a blind person can perceive the tones of an image more clearly and also gain a certain cognition of its colors.
Disclosure of Invention
The invention aims to overcome the defects in the prior art. The technical problem to be solved is to provide an improved method for reproducing images for the blind based on a 3D ink-layer thickness and chromaticity algorithm.
In order to solve the technical problems, the invention adopts the following technical scheme: the method for copying the blind image based on the 3D ink layer thickness and chromaticity algorithm comprises the following steps:
1) Five images with distinct tonal gradations, colored cyan, magenta, yellow, neutral and mixed, are selected and marked as training group a (cyan), training group b (magenta), training group c (yellow), training group d (neutral) and experimental group e (mixed colors);
2) Numbering and marking are carried out according to the image tone level and the color characteristics in a training group to obtain a series of training numbers, and numbering and marking are carried out according to the image color types in an experiment group to obtain a series of experiment numbers;
3) Acquiring a chromaticity data set L of the different areas of the image surface in the training and experimental groups, wherein the chromaticity data set L is divided into a training chromaticity data set Lt and an experimental chromaticity data set Le;
4) Scanning images in a training group and an experimental group by adopting a 3D profiler to obtain height data sets H of different areas on the surface of the images, wherein the height data sets H are divided into a training height data set Ht and an experimental height data set He;
5) In a training group a, inputting the obtained chroma data set L and the corresponding height data set H, simultaneously inputting a compiled artificial neural network training algorithm, carrying out parameterization association training, constructing an association conversion model of an image cyan chroma value-height value, and so on, in a training group b, constructing an association conversion model of an image magenta chroma value-height value, in a training group c, constructing an association conversion model of an image yellow chroma value-height value, and in a training group d, constructing an association conversion model of an image neutral chroma value-height value;
6) In the experimental group, after the chromaticity data set L of the measured image is input into the associated conversion model obtained in the step 5) according to the category, the height data set G corresponding to the chromaticity data set L of different areas of the image is output, and then the height data set G is converted into a 3D model, and the image is copied by using a 3D printer technology.
Further comprises:
7) Comparing the tonal gradation and color of the copied manuscript with the original of the experimental group, and quantitatively comparing the image obtained in step 6) with the original by using a color-difference data-set analysis algorithm and a height data-set error analysis algorithm to obtain surface color differences and height differences;
the color differences and height differences of the different areas of the image surface are fed forward into the compiled artificial neural network training algorithm, and the weights are readjusted until the predicted blind-person image reproduced by 3D printing is obtained with the new weights;
8) After the adjustment, the experimental group manuscript is copied again by adopting the 3D printing technology.
The compiled artificial neural network training algorithm is a training algorithm compiled by adopting a MATLAB tool kit and with extensible neuron numbers at each layer.
The numbering in step 2) is as follows:
In training group a, numbering and marking are carried out according to the tonal gradation and color characteristics of the image: the group is divided into N areas according to image height, marked a-Nn with n = 1 to N; each area is further divided into M small areas, marked a-Nn-Mm with m = 1 to M; and by analogy the training numbers of the remaining training groups b, c and d are obtained;
in experimental group e, the image is divided by color category into four large areas, namely a cyan area, a magenta area, a yellow area and a neutral area; the cyan area is divided into N areas according to image height, marked e-cyan-Nn, and each of these is divided into M small areas, marked e-cyan-Nn-Mm; the numbers of the magenta, yellow and neutral areas are obtained by analogy.
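The spliced numbering scheme above can be sketched in a few lines. The patent does not give code for this step, so the Python helper below, including the name `make_labels`, is purely illustrative.

```python
# Illustrative sketch of the patent's spliced numbering scheme (step 2).
# The function name and signature are assumptions, not from the patent.

def make_labels(group, n_heights, n_subregions, color=None):
    """Build spliced labels such as 'a-N1-M2' or 'e-cyan-N1-M2'.

    group: group letter ('a'..'d' for training, 'e' for experiment)
    color: color-area name, used only for experiment group e (e.g. 'cyan')
    """
    prefix = f"{group}-{color}" if color else group
    return [
        f"{prefix}-N{n}-M{m}"
        for n in range(1, n_heights + 1)      # N height areas
        for m in range(1, n_subregions + 1)   # M small areas per height area
    ]

# Training group a: 2 height areas, each split into 3 small areas.
labels_a = make_labels("a", 2, 3)
# Experiment group e, cyan area, same subdivision.
labels_e = make_labels("e", 2, 3, color="cyan")
```

Each label carries the whole hierarchy, so a measurement can be filed under its region without any extra lookup table.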
The images of the training and experimental groups marked and numbered in step 2) carry a further layer of structural-attribute marks, represented respectively by a chromaticity value Ln and a height value Hn, where n = 1 to N.
In step 7), MATLAB GUI interface design and coding under a point-by-point comparison strategy are adopted to feed the color differences and height differences of the different areas of the image surface back into the compiled artificial neural network training algorithm.
The activation function of the artificial neural network training algorithm in step 5) adopts any one of a Sigmoid function, a Tanh function, a ReLU function or an ELU function.
Compared with the prior art, the invention has the following beneficial effects:
(1) The original-reproduction method numbers and marks the data sets of the different areas using a spliced multi-layer structural arrangement with corresponding initial marks; it can process data rapidly and in bulk, reduces operation time and cost, and improves the coding efficiency of the MATLAB GUI in characterizing large data-set errors and color differences.
(2) The invention provides a method for associating the chromaticity values of the different color systems of an original with the height values of the different areas of its surface. Its advantage is that the height values of the different areas are deduced directly from the chromaticity values of the original, so 3D modeling can be performed rapidly, the difficulty of measuring the original's height directly is avoided, and both the measurement error and the 3D-modeling difficulty are reduced.
(3) The color-difference data analysis algorithm and the height-data error analysis algorithm adopt MATLAB GUI interface design and coding under a point-by-point comparison strategy; the color differences and height differences of the different areas of the image surface are fed forward into the compiled artificial neural network training algorithm, and the weights are readjusted until the predicted blind-person image reproduced by 3D printing is obtained with those weights, so that the copied image is more lifelike in tonal gradation and color.
(4) The invention provides a method of copying images for the blind by 3D printing: based on the predicted height values of the different areas of the experimental-group original, the model height is preset and layered during 3D modeling, so 3D printing reduces overall production cost and is also environmentally friendly.
Drawings
The invention is further described below with reference to the accompanying drawings:
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
FIG. 1 shows the method provided by the invention for reproducing an image for the blind based on a 3D ink-layer thickness and chromaticity algorithm. The image is measured with high precision to obtain a chromaticity data set, and the blind person's perception of the tonal gradation and colors of the image is enhanced through a compiled artificial neural network algorithm, a color-difference analysis algorithm and a three-dimensional height-data error analysis algorithm. The original is divided by color into cyan, magenta, yellow and neutral regions; each region is subdivided by height and labeled in sequence with a structured marking method; the chromaticity values of the different regions of the original are measured; a model between 3D ink-layer thickness and chromaticity is built by MATLAB coding; the model predicts the height values of the original; and 3D modeling is carried out from the height data and printed. The method specifically comprises the following steps:
1) Firstly, selecting five images with obvious tone layers, wherein the tone is cyan, magenta, yellow, neutral and mixed colors respectively, and marking the images as training groups a (cyan), b (magenta), c (yellow) and d (neutral) and experimental group e (mixed colors);
2) Numbering and marking are carried out according to the image tone level and the color characteristics in a training group to obtain a series of training numbers, and numbering and marking are carried out according to the image color types in an experiment group to obtain a series of experiment numbers;
3) Then, in a training group and an experimental group, a spectrodensitometer is adopted to obtain a chromaticity data set L of different areas of the image surface, wherein the chromaticity data set L is divided into a training chromaticity data set Lt and an experimental chromaticity data set Le;
4) Then in a training group and an experimental group, scanning images by adopting a high-precision 3D profiler to obtain height data sets H of different areas of the surface, wherein the height data sets H are divided into a training height data set Ht and an experimental height data set He;
5) In a training group a, inputting the obtained chroma data set L and the corresponding height data set H, simultaneously inputting a compiled artificial neural network training algorithm, carrying out parameterization association training, constructing an association conversion model of an image cyan chroma value-height value, and so on, in a training group b, constructing an association conversion model of an image magenta chroma value-height value, in a training group c, constructing an association conversion model of an image yellow chroma value-height value, and in a training group d, constructing an association conversion model of an image neutral chroma value-height value;
The compiled artificial neural network training algorithm is a training algorithm compiled by adopting a MATLAB tool kit and with extensible neuron numbers at each layer;
6) In the experimental group, after the chromaticity data set L of the measured image is input into the associated conversion model obtained in the step 5) according to the category, a height data set G corresponding to the chromaticity data set L of different areas of the image is output, the height data set G is converted into a 3D model, and the image is copied by using a 3D printer technology;
7) Comparing the tonal gradation and color of the copied manuscript with the original of the experimental group, and quantitatively comparing the image obtained in step 6) with the original by using a color-difference data-set analysis algorithm and a height data-set error analysis algorithm to obtain surface color differences and height differences;
The compiled color difference data set analysis algorithm and the compiled height data set error analysis algorithm adopt MATLAB GUI interface design and coding based on a point-by-point comparison strategy;
And feeding the color differences and height differences of the different areas of the image surface forward into the compiled artificial neural network training algorithm, and readjusting the weights until the predicted blind-person image reproduced by 3D printing is obtained with those weights.
8) After the adjustment, the experimental group manuscript is copied again by adopting the 3D printing technology.
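The chromaticity-to-height association training of step 5) and its weight readjustment can be sketched as a one-hidden-layer network trained by backpropagation (the patent's classification G06N 3/084). The patent compiles its algorithm with a MATLAB toolkit; the NumPy version below — its network size, learning rate and synthetic data — is an illustrative assumption, not the patented implementation.

```python
# Illustrative one-hidden-layer backprop regressor mapping chromaticity
# rows to layer heights. All hyperparameters and data are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def train_model(L, H, hidden=8, lr=0.5, epochs=2000):
    """Fit chromaticity rows L (n x 3) to height column H (n x 1)."""
    W1 = rng.normal(0, 0.5, (L.shape[1], hidden))
    b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, 1))
    b2 = np.zeros(1)
    for _ in range(epochs):
        a1 = sigmoid(L @ W1 + b1)        # hidden activations
        pred = a1 @ W2 + b2              # linear output: predicted height
        err = pred - H                   # backpropagate squared error
        dW2 = a1.T @ err / len(L)
        db2 = err.mean(0)
        d1 = (err @ W2.T) * a1 * (1 - a1)
        dW1 = L.T @ d1 / len(L)
        db1 = d1.mean(0)
        W1 -= lr * dW1; b1 -= lr * db1
        W2 -= lr * dW2; b2 -= lr * db2
    return lambda x: sigmoid(x @ W1 + b1) @ W2 + b2

# Toy training data: chromaticity triples and a synthetic height relation.
L = rng.uniform(0, 1, (64, 3))
H = 0.3 * L.sum(1, keepdims=True) + 0.1
model = train_model(L, H)
```

One such model would be trained per color group (a, b, c, d), each then queried with the experimental group's chromaticity data to predict heights, as in steps 5) and 6).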
Preferably, five images with obvious tone levels are selected, the tone is cyan, magenta, yellow, neutral and mixed colors, and the tone levels and the color characteristics of the images are numbered and marked.
Preferably, in step 2) the numbers are organized with a spliced structure: for example, in training group a a certain area is numbered a-Nn-Mm, with n = 1 to N and m = 1 to M, and by analogy the training numbers of the remaining training groups b, c and d are obtained; in experimental group e a certain cyan area is numbered e-cyan-Nn-Mm, with n = 1 to N and m = 1 to M;
Preferably, the new marks in step 2), step 3) and step 4) append a further layer of structural attributes after the spliced number, adopting a chromaticity value Ln and a height value Hn, where n = 1 to N.
Preferably, the activation function of the training algorithm in step 5) is any one of Sigmoid function, tanh function, reLU function and ELU function.
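The four activation choices named for step 5) can be written out directly; the NumPy definitions below are the standard textbook forms, not taken from the patent.

```python
# Standard definitions of the four activation functions the patent allows.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))   # squashes to (0, 1)

def tanh(x):
    return np.tanh(x)                 # squashes to (-1, 1)

def relu(x):
    return np.maximum(0.0, x)         # zero for negative inputs

def elu(x, alpha=1.0):
    # smooth negative tail: alpha * (e^x - 1) for x <= 0
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))
```

Which one works best for the chromaticity-height fit would be an empirical choice; the patent leaves it open as "any one of" the four.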
Preferably, the color difference data analysis algorithm and the height data error analysis algorithm in the step 7) adopt MATLAB GUI interface design and coding under the point-by-point comparison strategy, forward feed the color differences and the height differences of different areas of the image surface back to the compiled artificial neural network training algorithm, and adjust the set new weight until the 3D printing prediction blind image is copied by the weight, so that the copied image is more lifelike in tone level and color.
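The point-by-point comparison of step 7) amounts to computing a per-region color difference and a per-region height error. A minimal sketch, assuming the CIE76 color-difference formula and an RMS height error (the patent names neither formula, so both are stand-ins):

```python
# Illustrative per-region comparison of a copied print against the original.
# delta_e uses the CIE76 formula; the tolerance choices are assumptions.
import math

def delta_e(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) triples."""
    return math.dist(lab1, lab2)

def height_errors(h_copy, h_orig):
    """Signed per-region height differences plus their RMS error."""
    diffs = [c - o for c, o in zip(h_copy, h_orig)]
    rms = math.sqrt(sum(d * d for d in diffs) / len(diffs))
    return diffs, rms

# Hypothetical measurements for two regions of copy and original.
copy_lab = [(52.0, 10.0, -3.0), (60.0, -5.0, 8.0)]
orig_lab = [(50.0, 12.0, -2.0), (61.0, -5.0, 7.0)]
dEs = [delta_e(c, o) for c, o in zip(copy_lab, orig_lab)]
diffs, rms = height_errors([1.2, 0.8], [1.0, 0.9])
```

The resulting per-region `dEs` and `diffs` are exactly the quantities step 7) feeds back into the network training to readjust the weights.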
The invention is further illustrated by the following two examples.
Description of the preferred embodiments
Copying a 3D oil-painting model: an oil painting with varying surface relief is selected and divided by color into four large areas, namely a cyan area, a magenta area, a yellow area and a neutral area; each of the four areas is divided by height into five areas, marked N1, N2, N3, N4 and N5, which are in turn subdivided into M1, M2, M3, M4 and M5. In the cyan-N1-M1 area, the surface chromaticity value is measured several times and averaged, and the average is substituted, through MATLAB programming, into the cyan chromaticity-height association conversion model to obtain the corresponding height value. By analogy, the other areas are substituted into the magenta, yellow and neutral chromaticity-height association conversion models respectively, giving the height values of the different areas of the painting surface; the painting is then obtained by layered 3D modeling combined with 3D printing. The correlation model between 3D ink-layer thickness and chromaticity is further optimized through color-difference analysis and height-data error analysis.
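The per-region workflow of this example — measure the chromaticity of the cyan N1-M1 region several times, average, then substitute into the cyan association model — can be sketched as follows. The readings and the stand-in linear model (`cyan_height_model` and its coefficients) are invented for illustration; the real model would be the trained network.

```python
# Illustrative Example-1 workflow: average repeated chromaticity readings
# for one region, then look up the height from the (stand-in) cyan model.

def average_measurement(samples):
    """Average repeated (L*, a*, b*) readings of one region."""
    n = len(samples)
    return tuple(sum(s[i] for s in samples) / n for i in range(3))

def cyan_height_model(lab):
    """Stand-in for the trained cyan chromaticity->height model."""
    L, a, b = lab
    return 0.02 * L - 0.01 * b   # hypothetical coefficients

# Three hypothetical readings of the cyan-N1-M1 region.
readings = [(55.1, -30.2, -28.9), (54.9, -30.0, -29.1), (55.0, -29.8, -29.0)]
lab_mean = average_measurement(readings)
height_mm = cyan_height_model(lab_mean)
```

Repeating this over every region of every color area yields the full height map that is layered for 3D modeling.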
Second embodiment
Copying a 3D map model: a map at 1:1 scale is selected and divided by color into four large areas, namely a cyan area, a magenta area, a yellow area and a neutral area; each of the four areas is divided by height into eight areas, marked N1 through N8, which are in turn subdivided into M1 through M8. In the cyan-N1-M1 area, the surface chromaticity value is measured several times and averaged, and the average is substituted, through MATLAB programming, into the cyan chromaticity-height association conversion model to obtain the corresponding height value. By analogy, the other areas are substituted into the magenta, yellow and neutral chromaticity-height association conversion models respectively, giving the height values of the different areas of the map surface; the map is then obtained by layered 3D modeling combined with 3D printing. The correlation model between 3D ink-layer thickness and chromaticity is further optimized through color-difference analysis and height-data error analysis.
Regarding the specific structure of the invention, it should be noted that the connection relationships between the component modules adopted by the invention are definite and realizable; in addition to what is specifically described in the embodiments, these connection relationships bring the corresponding technical effects and solve the technical problems addressed by the invention without depending on the execution of corresponding software programs.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.
Claims (7)
1. The method for copying the blind image based on the 3D ink layer thickness and the chromaticity algorithm is characterized by comprising the following steps of: the method comprises the following steps:
1) Five images with distinct tonal gradations, colored cyan, magenta, yellow, neutral and mixed, are selected and marked as training group a (cyan), training group b (magenta), training group c (yellow), training group d (neutral) and experimental group e (mixed colors);
2) Numbering and marking are carried out according to the image tone level and the color characteristics in a training group to obtain a series of training numbers, and numbering and marking are carried out according to the image color types in an experiment group to obtain a series of experiment numbers;
3) Acquiring a chromaticity data set L of the different areas of the image surface in the training and experimental groups, wherein the chromaticity data set L is divided into a training chromaticity data set Lt and an experimental chromaticity data set Le;
4) Scanning images in a training group and an experimental group by adopting a 3D profiler to obtain height data sets H of different areas on the surface of the images, wherein the height data sets H are divided into a training height data set Ht and an experimental height data set He;
5) In a training group a, inputting the obtained chroma data set L and the corresponding height data set H, simultaneously inputting a compiled artificial neural network training algorithm, carrying out parameterization association training, constructing an association conversion model of an image cyan chroma value-height value, and so on, in a training group b, constructing an association conversion model of an image magenta chroma value-height value, in a training group c, constructing an association conversion model of an image yellow chroma value-height value, and in a training group d, constructing an association conversion model of an image neutral chroma value-height value;
6) In the experimental group, after the chromaticity data set L of the measured image is input into the associated conversion model obtained in the step 5) according to the category, the height data set G corresponding to the chromaticity data set L of different areas of the image is output, and then the height data set G is converted into a 3D model, and the image is copied by using a 3D printer technology.
2. The method for reproducing an image for the blind based on the 3D ink layer thickness and chromaticity algorithm as recited in claim 1, wherein: further comprises:
7) Comparing the tonal gradation and color of the copied manuscript with the original of the experimental group, and quantitatively comparing the image obtained in step 6) with the original by using a color-difference data-set analysis algorithm and a height data-set error analysis algorithm to obtain surface color differences and height differences;
the color differences and height differences of the different areas of the image surface are fed forward into the compiled artificial neural network training algorithm, and the weights are readjusted until the predicted blind-person image reproduced by 3D printing is obtained with the new weights;
8) After the adjustment, the experimental group manuscript is copied again by adopting the 3D printing technology.
3. The method for reproducing an image for the blind based on the 3D ink layer thickness and chromaticity algorithm as recited in claim 1, wherein: the compiled artificial neural network training algorithm is a training algorithm compiled by adopting a MATLAB tool kit and with extensible neuron numbers at each layer.
4. The method for reproducing an image for the blind based on the 3D ink layer thickness and chromaticity algorithm as recited in claim 1, wherein: the numbering in step 2) is as follows:
In training group a, numbering and marking are carried out according to the tonal gradation and color characteristics of the image: the group is divided into N areas according to image height, marked a-Nn with n = 1 to N; each area is further divided into M small areas, marked a-Nn-Mm with m = 1 to M; and by analogy the training numbers of the remaining training groups b, c and d are obtained;
in experimental group e, the image is divided by color category into four large areas, namely a cyan area, a magenta area, a yellow area and a neutral area; the cyan area is divided into N areas according to image height, marked e-cyan-Nn, and each of these is divided into M small areas, marked e-cyan-Nn-Mm; the numbers of the magenta, yellow and neutral areas are obtained by analogy.
5. The method for reproducing an image for the blind based on the 3D ink layer thickness and chromaticity algorithm as recited in claim 1, wherein: the images of the training and experimental groups marked and numbered in step 2) carry a further layer of structural-attribute marks, represented respectively by a chromaticity value Ln and a height value Hn, where n = 1 to N.
6. The method for reproducing an image for the blind based on the 3D ink layer thickness and chromaticity algorithm as recited in claim 2, wherein: in step 7), a MATLAB GUI is designed and coded under a point-by-point comparison strategy to feed the color difference and the height difference of the different areas of the image surface back into the compiled artificial neural network training algorithm.
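The point-by-point comparison of claim 6 amounts to computing, for each numbered region, the chromaticity difference and height difference between the printed result and the target. A Python sketch of that comparison step (region names and the (L, H) values are hypothetical; the patent's GUI is written in MATLAB):

```python
def pointwise_differences(measured, target):
    """Compare measured (L, H) against target (L, H) region by region,
    returning the (color difference, height difference) pairs that are
    fed back to the training algorithm."""
    return {
        region: (target[region][0] - measured[region][0],   # chromaticity diff
                 target[region][1] - measured[region][1])   # height diff
        for region in target
    }

target   = {"e-cyan-N1-M1": (52.0, 0.30), "e-cyan-N1-M2": (48.0, 0.25)}
measured = {"e-cyan-N1-M1": (50.0, 0.28), "e-cyan-N1-M2": (48.5, 0.25)}
diffs = pointwise_differences(measured, target)
```

A region with a zero pair needs no correction; nonzero pairs drive the weight adjustment of step 7).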
7. The method for reproducing an image for the blind based on the 3D ink layer thickness and chromaticity algorithm as recited in claim 1, wherein: in step 5), the activation function of the artificial neural network training algorithm is any one of the Sigmoid, Tanh, ReLU, or ELU functions.
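The four candidate activation functions named in claim 7 have standard definitions, which can be written out directly in Python (α = 1.0 for the ELU is an assumed default; the claim does not fix it):

```python
import math

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

def tanh(x):
    return math.tanh(x)

def relu(x):
    return max(0.0, x)

def elu(x, alpha=1.0):
    return x if x > 0 else alpha * (math.exp(x) - 1)
```

Sigmoid and Tanh saturate for large inputs, while ReLU and ELU do not on the positive side; which behaves best for the ink-thickness prediction is left open by the claim.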
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210544773.0A CN114926585B (en) | 2022-05-19 | 2022-05-19 | Method for copying blind image based on 3D ink layer thickness and chromaticity algorithm |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114926585A CN114926585A (en) | 2022-08-19 |
CN114926585B true CN114926585B (en) | 2024-06-18 |
Family
ID=82807700
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210544773.0A Active CN114926585B (en) | 2022-05-19 | 2022-05-19 | Method for copying blind image based on 3D ink layer thickness and chromaticity algorithm |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114926585B (en) |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109253862A (en) * | 2018-08-31 | 2019-01-22 | 武汉精测电子集团股份有限公司 | A kind of colour measurement method neural network based |
CN109902592A (en) * | 2019-01-30 | 2019-06-18 | 浙江大学 | A kind of blind person's secondary row path method based on deep learning |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3360358B2 (en) * | 1993-06-30 | 2002-12-24 | 東洋インキ製造株式会社 | How to determine printing color material amount |
- 2022-05-19: CN application CN202210544773.0A filed; patent CN114926585B granted, status Active
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2480929B1 (en) | Colored contact lens based on amorphous images | |
CN100464566C (en) | Method, equipment and computer program of changed digital color image | |
CA2415009C (en) | Color reproduction process | |
CN102120384A (en) | Multiple primitive color printing quality control method | |
CN1946563A (en) | N-ink color gamut construction | |
JPH07160871A (en) | Method and device for correcting color picture | |
US5386496A (en) | Method and device for nonlinear transformation of colour information by neural network | |
EP1743477A1 (en) | Multi-color printing using a halftone screen | |
JP2007535865A (en) | Hybrid dot line halftone synthesis screen | |
CN102110428B (en) | Method and device for converting color space from CMYK to RGB | |
CN114926585B (en) | Method for copying blind image based on 3D ink layer thickness and chromaticity algorithm | |
CN108274748A (en) | Layer-cutting printing method for multicolor 3D object | |
JP2004188973A (en) | System and method for processing multicolor image | |
GB2234410A (en) | Converting image densities between different types of reproduction process | |
CN110120005B (en) | Displaying and hiding method for three-dimensional effect of halftone hidden image | |
CN115762326A (en) | Method for manufacturing color anti-counterfeiting label made of transparent material | |
CN113409206A (en) | High-precision digital printing color space conversion method | |
Nack et al. | Colour picking: the pecking order of form and function | |
CN106097288A (en) | For generating the method for the contrast image of object structures and relevant device thereof | |
CN110163789A (en) | Halftoning based on Moire effect duplicates Fragile Watermarking Technique method | |
JPH10243247A (en) | Color information conversion method and device to match observation results of colors with each other | |
JP2001352457A (en) | Information color system and printing method for the information color system | |
Silapasuphakornwong et al. | An Exploration into Color Reproduction for Inkjet FDM Color 3D Printing. | |
US20050195418A1 (en) | Method for processing a multi-colour image | |
Yanzhe | A novel algorithm based on improved BP neural network and its application in color management |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
GR01 | Patent grant ||