CN114926585B - Method for copying blind image based on 3D ink layer thickness and chromaticity algorithm


Info

Publication number: CN114926585B
Authority: CN (China)
Prior art keywords: image, training, algorithm, height, chromaticity
Legal status: Active
Application number: CN202210544773.0A
Other languages: Chinese (zh)
Other versions: CN114926585A
Inventors: 陈广学, 姚丹阳, 袁江平, 田婕妮, 王俪儒
Current assignee: South China University of Technology (SCUT)
Original assignee: South China University of Technology (SCUT)
Application filed by South China University of Technology (SCUT); priority to CN202210544773.0A
Publication of application CN114926585A; application granted; publication of CN114926585B

Classifications

    • G06T 17/00 (Physics > Computing > Image data processing or generation, in general): Three dimensional [3D] modelling, e.g. data description of 3D objects
    • B29C 64/386 (Performing operations > Working of plastics > Shaping or joining of plastics > Additive manufacturing > Auxiliary operations or equipment): Data acquisition or data processing for additive manufacturing
    • B33Y 50/00 (Additive manufacturing technology): Data acquisition or data processing for additive manufacturing
    • G06F 18/214 (Physics > Computing > Electric digital data processing > Pattern recognition > Analysing > Design or setup of recognition systems): Generating training patterns; bootstrap methods, e.g. bagging or boosting
    • G06N 3/044 (Physics > Computing > Computing arrangements based on biological models > Neural networks > Architecture): Recurrent networks, e.g. Hopfield networks
    • G06N 3/045 (Neural networks > Architecture): Combinations of networks
    • G06N 3/084 (Neural networks > Learning methods): Backpropagation, e.g. using gradient descent


Abstract

The invention provides a method for copying blind images based on a 3D ink layer thickness and chromaticity algorithm, and belongs to the technical field of 3D printing. The technical problem to be solved is to provide an improved method of reproducing images for the blind based on a 3D ink layer thickness and chromaticity algorithm. The technical scheme adopted to solve this problem is as follows: the original is divided by color into a cyan region, a magenta region, a yellow region and a neutral region; each region is subdivided by height and labeled sequentially with a structural marking method; the chromaticity values of the different regions of the original are measured; a model between 3D ink layer thickness and chromaticity is constructed by MATLAB coding; the model predicts the original's height values; 3D modeling is carried out from the height data, and the result is printed. The invention is applied to copying pictures by 3D printing.

Description

Method for copying blind image based on 3D ink layer thickness and chromaticity algorithm
Technical Field
The invention provides a method for reproducing images for the blind based on a 3D ink layer thickness and chromaticity algorithm, and belongs to the technical field of 3D printing.
Background
As an innovative digital manufacturing technology, 3D printing is applied to personalized product customization across many industries, and as the color 3D printing process is optimized, customized products become more realistic and less costly. Reproducing images for the blind with 3D printing, using a model correlating 3D ink layer thickness with chromaticity, promotes the development of full-color 3D printing technology.
The eye is the medium through which the human visual system acquires the information that is integrated by the visual nerves of the brain; a person with a visual impairment cannot acquire complete information, so a cognitive deficit occurs. The world in a blind person's eyes is not "dark": the visual concept of "black" is the absence of the color produced when visible light enters the visual field, and congenitally totally blind patients have neither a concept of color nor a concept of "seeing". At present, blind people can learn characters through touch but cannot perceive an image's tonal gradation or color; by constructing a model correlating 3D ink layer thickness with chromaticity, a blind person can perceive the tones of an image more clearly and also gain a certain cognition of its colors.
Disclosure of Invention
The invention aims to overcome the defects in the prior art and solves the technical problem of providing an improved method for copying blind images based on a 3D ink layer thickness and chromaticity algorithm.
In order to solve the above technical problem, the invention adopts the following technical scheme: a method for copying blind images based on a 3D ink layer thickness and chromaticity algorithm, comprising the following steps:
1) Five images with obvious tonal gradation, in cyan, magenta, yellow, neutral and mixed colors respectively, are selected and marked as training group a (cyan), training group b (magenta), training group c (yellow), training group d (neutral) and experimental group e (mixed colors);
2) Numbering and marking are carried out according to the image tone level and the color characteristics in a training group to obtain a series of training numbers, and numbering and marking are carried out according to the image color types in an experiment group to obtain a series of experiment numbers;
3) Acquiring a chromaticity data set L of different areas of the image surface in the training groups and the experimental group, wherein the chromaticity data set L is divided into a training chromaticity data set Lt and an experimental chromaticity data set Le;
4) Scanning images in a training group and an experimental group by adopting a 3D profiler to obtain height data sets H of different areas on the surface of the images, wherein the height data sets H are divided into a training height data set Ht and an experimental height data set He;
5) In training group a, the obtained chromaticity data set L and the corresponding height data set H are input, together with the compiled artificial neural network training algorithm, and parameterized association training is carried out to construct an associated conversion model between the image's cyan chromaticity values and height values; by analogy, an associated magenta chromaticity value-height value conversion model is constructed in training group b, a yellow chromaticity value-height value conversion model in training group c, and a neutral chromaticity value-height value conversion model in training group d;
6) In the experimental group, the measured image's chromaticity data set L is input, by category, into the associated conversion models obtained in step 5); the corresponding height data set G for the different areas of the image is output, converted into a 3D model, and the image is copied using 3D printing technology.
Further comprises:
7) Performing tone gradation and color comparison on the copy manuscript and the manuscript of the experimental group, and performing quantization comparison on the image obtained in the step 6) and the manuscript by using a color difference data set analysis algorithm and a height data set error analysis algorithm to obtain surface color difference and height difference;
the color differences and height differences of the different areas of the image surface are fed forward to the compiled artificial neural network training algorithm, and the newly set weights are adjusted until the predicted blind image is 3D-printed and copied with those weights;
8) After the adjustment, the experimental group manuscript is copied again by adopting the 3D printing technology.
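Step 6) converts the predicted height data set G into a printable 3D model. The patent does not name a model format, so the sketch below is only an assumption for illustration: a small height grid is triangulated into an ASCII STL surface, two triangles per grid cell, of the kind a 3D printing toolchain can slice.

```python
# Hypothetical sketch of step 6): turning a height data set G into a 3D model.
# The ASCII STL format and the function name are assumptions, not taken from
# the patent, which leaves the modeling format unspecified.

def heightmap_to_stl(grid, cell=1.0, name="relief"):
    """Triangulate a 2D height grid (heights in mm) into an ASCII STL string."""
    def facet(a, b, c):
        # Normals are left as (0, 0, 0); slicers recompute them from vertices.
        verts = "".join(f"    vertex {p[0]} {p[1]} {p[2]}\n" for p in (a, b, c))
        return "  facet normal 0 0 0\n   outer loop\n" + verts + "   endloop\n  endfacet\n"
    rows, cols = len(grid), len(grid[0])
    body = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            # Four corner vertices of one grid cell; z comes from the heights.
            p00 = (j * cell, i * cell, grid[i][j])
            p10 = ((j + 1) * cell, i * cell, grid[i][j + 1])
            p01 = (j * cell, (i + 1) * cell, grid[i + 1][j])
            p11 = ((j + 1) * cell, (i + 1) * cell, grid[i + 1][j + 1])
            body.append(facet(p00, p10, p11))   # upper triangle of the cell
            body.append(facet(p00, p11, p01))   # lower triangle of the cell
    return "solid " + name + "\n" + "".join(body) + "endsolid " + name + "\n"

# A 2x2 height grid gives one cell, i.e. two triangular facets:
stl = heightmap_to_stl([[0.0, 0.5], [0.5, 1.0]])
print(stl.count("facet normal"))   # 2
```

In practice the grid resolution would match the M small areas of each region, and a base plate would be added so the relief is printable on its own.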
The compiled artificial neural network training algorithm is a training algorithm written with a MATLAB toolkit, with an extensible number of neurons in each layer.
The numbering in step 2) is as follows:
In training group a, numbering and marking are carried out according to image tonal gradation and color characteristics: the group is divided into N areas according to image height, marked a-Nn (n = 1 to N); each area is further divided into M small areas, marked a-Nn-Mm (m = 1 to M); the training numbers of the remaining training groups b, c and d are obtained by analogy;
in experimental group e, the image is divided by color category into four large areas, namely a cyan area, a magenta area, a yellow area and a neutral area; the cyan area is divided into N areas according to image height, marked e-cyan-Nn, and each of those areas is divided into M small areas, marked e-cyan-Nn-Mm; the numbers of the magenta, yellow and neutral areas are obtained by analogy.
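The spliced numbering above can be generated mechanically. The sketch below is an illustration only; the function name and the example counts are invented, while the label format group-Nn-Mm follows the patent's notation.

```python
# Illustrative sketch of the step 2) numbering scheme; the label format
# "group-Nn-Mm" follows the patent, the function name is invented.

def region_labels(group, n_height_areas, m_subareas):
    """Generate spliced labels such as 'a-N1-M2' for every sub-area."""
    labels = []
    for n in range(1, n_height_areas + 1):      # height areas N1..NN
        for m in range(1, m_subareas + 1):      # sub-areas M1..MM within each
            labels.append(f"{group}-N{n}-M{m}")
    return labels

# Training group a with 2 height areas of 3 sub-areas each:
print(region_labels("a", 2, 3))
# In experimental group e the color name is spliced in as well:
print(region_labels("e-cyan", 2, 2))
```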
The images of the training groups and the experimental group marked and numbered in step 2) receive a new mark carrying a layer of structural attributes, represented respectively by a chromaticity value Ln and a height value Hn, where n = 1 to N.
In step 7), MATLAB GUI interface design and coding under a point-by-point comparison strategy are adopted to feed the color differences and height differences of the different areas of the image surface back to the compiled artificial neural network training algorithm.
The activation function of the artificial neural network training algorithm in step 5) is any one of a Sigmoid, Tanh, ReLU or ELU function.
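A minimal sketch of the step 5) association training, under stated assumptions: the patent compiles its network with a MATLAB toolkit, whereas this pure-Python fragment trains one hidden layer by back-propagation on an invented linear chromaticity-to-height relation, using the Tanh activation (one of the four options named above).

```python
import math
import random

# Sketch of step 5): a one-hidden-layer network trained by back-propagation
# to map a chromaticity value (lightness L*) to a height value. The training
# data is invented: height is assumed to fall linearly as L* rises. The patent
# builds its network in MATLAB; everything below is illustrative only.

random.seed(0)
DATA = [(l, 2.0 - 0.02 * l) for l in range(0, 101, 2)]  # (L*, height in mm)

HIDDEN = 6                                   # "extensible" neuron count
w1 = [random.uniform(-0.5, 0.5) for _ in range(HIDDEN)]
b1 = [0.0] * HIDDEN
w2 = [random.uniform(-0.5, 0.5) for _ in range(HIDDEN)]
b2 = 0.0
lr = 0.05

def forward(l_star):
    x = l_star / 100.0                       # normalize the input
    h = [math.tanh(w1[i] * x + b1[i]) for i in range(HIDDEN)]
    return h, sum(w2[i] * h[i] for i in range(HIDDEN)) + b2

for _ in range(300):                         # back-propagation over the data
    for l_star, target in DATA:
        h, y = forward(l_star)
        err = y - target
        x = l_star / 100.0
        for i in range(HIDDEN):
            grad_h = err * w2[i] * (1.0 - h[i] * h[i])   # tanh' = 1 - tanh^2
            w2[i] -= lr * err * h[i]
            b1[i] -= lr * grad_h
            w1[i] -= lr * grad_h * x
        b2 -= lr * err

def predict_height(l_star):
    """Associated conversion model: chromaticity value -> height value."""
    return forward(l_star)[1]

# The network should roughly recover the invented relation 2.0 - 0.02 * L*:
print(round(predict_height(50.0), 2))
```

One such model would be trained per color system (cyan, magenta, yellow, neutral), each on its own training group's measured chromaticity and height data.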
Compared with the prior art, the invention has the following beneficial effects:
(1) The reproduction method numbers and marks the data sets of different areas with a spliced multi-layer structural arrangement and corresponding initial marks, so data can be processed rapidly and in bulk, operation time and cost are reduced, and the coding efficiency of the MATLAB GUI in characterizing large data-set errors and color differences is improved.
(2) The invention associates the chromaticity values of the original's different color systems with the height values of the different areas of the original's surface. Its advantage is that the height values of different areas are deduced directly from the original's chromaticity values, so 3D modeling can be performed rapidly; the difficulty of directly measuring the original's height is avoided, and both measurement error and 3D modeling difficulty are reduced.
(3) The color difference data analysis algorithm and the height data error analysis algorithm adopt MATLAB GUI interface design and coding under a point-by-point comparison strategy; the color differences and height differences of the different areas of the image surface are fed forward to the compiled artificial neural network training algorithm, and the newly set weights are adjusted until the predicted blind image is 3D-printed with those weights, making the copied image more lifelike in tonal gradation and color.
(4) The invention copies blind images by 3D printing technology: based on the predicted height values of the different areas of the original, the model height is preset and layered during 3D modeling, so the overall production cost of 3D printing is reduced and the method is environmentally friendly.
Drawings
The invention is further described below with reference to the accompanying drawings:
FIG. 1 is a flow chart of the method of the present invention.
Detailed Description
FIG. 1 shows the method provided by the invention for copying blind images based on a 3D ink layer thickness and chromaticity algorithm. The image is measured with high precision to obtain a chromaticity data set, and the blind person's perception of the image's tonal gradation and color is enhanced through a compiled artificial neural network algorithm, a color difference analysis algorithm and a three-dimensional height data error analysis algorithm. The original is divided by color into a cyan region, a magenta region, a yellow region and a neutral region; each region is subdivided by height and labeled sequentially with a structural marking method; the chromaticity values of the different regions of the original are measured; a model between 3D ink layer thickness and chromaticity is constructed by MATLAB coding; the model predicts the original's height values; 3D modeling is carried out from the height data, and the result is printed. The method specifically comprises the following steps:
1) Firstly, selecting five images with obvious tone layers, wherein the tone is cyan, magenta, yellow, neutral and mixed colors respectively, and marking the images as training groups a (cyan), b (magenta), c (yellow) and d (neutral) and experimental group e (mixed colors);
2) Numbering and marking are carried out according to the image tone level and the color characteristics in a training group to obtain a series of training numbers, and numbering and marking are carried out according to the image color types in an experiment group to obtain a series of experiment numbers;
3) Then, in a training group and an experimental group, a spectrodensitometer is adopted to obtain a chromaticity data set L of different areas of the image surface, wherein the chromaticity data set L is divided into a training chromaticity data set Lt and an experimental chromaticity data set Le;
4) Then in a training group and an experimental group, scanning images by adopting a high-precision 3D profiler to obtain height data sets H of different areas of the surface, wherein the height data sets H are divided into a training height data set Ht and an experimental height data set He;
5) In training group a, the obtained chromaticity data set L and the corresponding height data set H are input, together with the compiled artificial neural network training algorithm, and parameterized association training is carried out to construct an associated conversion model between the image's cyan chromaticity values and height values; by analogy, an associated magenta chromaticity value-height value conversion model is constructed in training group b, a yellow chromaticity value-height value conversion model in training group c, and a neutral chromaticity value-height value conversion model in training group d;
The compiled artificial neural network training algorithm is a training algorithm written with a MATLAB toolkit, with an extensible number of neurons in each layer;
6) In the experimental group, the measured image's chromaticity data set L is input, by category, into the associated conversion models obtained in step 5); the corresponding height data set G for the different areas of the image is output, converted into a 3D model, and the image is copied using 3D printing technology;
7) Performing tone gradation and color comparison on the copy manuscript and the manuscript of the experimental group, and performing quantization comparison on the image obtained in the step 6) and the manuscript by using a color difference data set analysis algorithm and a height data set error analysis algorithm to obtain surface color difference and height difference;
The compiled color difference data set analysis algorithm and the compiled height data set error analysis algorithm adopt MATLAB GUI interface design and coding based on a point-by-point comparison strategy;
The color differences and height differences of the different areas of the image surface are fed forward to the compiled artificial neural network training algorithm, and the newly set weights are adjusted until the predicted blind image is 3D-printed and copied with those weights.
8) After the adjustment, the experimental group manuscript is copied again by adopting the 3D printing technology.
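A sketch of step 7)'s point-by-point comparison. The patent implements it through a MATLAB GUI; here the CIE76 color difference formula is assumed for the color comparison, and the tolerances and sample measurements are invented for illustration.

```python
import math

# Hypothetical sketch of step 7): per-region color difference and height
# error. CIE76 Delta E, the tolerances and the measurements are assumptions.

def delta_e76(lab1, lab2):
    """CIE76 color difference between two (L*, a*, b*) triples."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(lab1, lab2)))

def compare(original, copy, de_tol=3.0, h_tol=0.05):
    """Point-by-point report: (label, Delta E, height error, needs feedback)."""
    report = []
    for label in original:
        lab_o, h_o = original[label]
        lab_c, h_c = copy[label]
        de = delta_e76(lab_o, lab_c)
        dh = abs(h_o - h_c)
        report.append((label, round(de, 2), round(dh, 3), de > de_tol or dh > h_tol))
    return report

# Invented measurements: {region label: ((L*, a*, b*), height in mm)}
original = {"e-cyan-N1-M1": ((54.0, -30.0, -50.0), 1.20)}
copy = {"e-cyan-N1-M1": ((57.0, -28.0, -49.0), 1.31)}
print(compare(original, copy))
# Delta E = sqrt(3^2 + 2^2 + 1^2), about 3.74, exceeds the assumed tolerance
# of 3.0, so this region would be fed back for weight adjustment.
```

Regions flagged in the report are the ones whose differences are fed forward to the training algorithm before reprinting.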
Preferably, five images with obvious tonal gradation are selected, in cyan, magenta, yellow, neutral and mixed colors respectively, and the images are numbered and marked according to their tonal gradation and color characteristics.
Preferably, in step 2) the numbers are organized in a spliced structure; for example, in training group a, an area number is a-Nn-Mm (n = 1 to N, m = 1 to M), and the training numbers of the remaining training groups b, c and d are obtained by analogy; in experimental group e, a cyan region number is e-cyan-Nn-Mm (n = 1 to N, m = 1 to M);
Preferably, the new marks in steps 2), 3) and 4) add a layer of structural attributes after the spliced number, using a chromaticity value Ln and a height value Hn, where n = 1 to N.
Preferably, the activation function of the training algorithm in step 5) is any one of the Sigmoid, Tanh, ReLU and ELU functions.
Preferably, the color difference data analysis algorithm and the height data error analysis algorithm in step 7) adopt MATLAB GUI interface design and coding under a point-by-point comparison strategy, feed the color differences and height differences of the different areas of the image surface forward to the compiled artificial neural network training algorithm, and adjust the newly set weights until the predicted blind image is 3D-printed with those weights, making the copied image more lifelike in tonal gradation and color.
The invention is further illustrated by the following two examples.
Description of the preferred embodiments
For copying a 3D oil painting model: an oil painting with varying surface relief is selected and divided by color into four large areas, namely a cyan area, a magenta area, a yellow area and a neutral area; each area is divided into five areas by height, marked N1, N2, N3, N4 and N5, and each of those is subdivided in turn into M1, M2, M3, M4 and M5. In the cyan-N1-M1 area, the surface chromaticity value is measured several times and averaged, and the average is substituted, through MATLAB programming, into the cyan chromaticity value-height value associated conversion model to obtain the corresponding height value. By analogy, the other areas are substituted into the magenta, yellow and neutral chromaticity value-height value associated conversion models respectively, giving the height values of the different areas of the painting's surface; the painting is then obtained through 3D modeling, layering and 3D printing. The associated model between 3D ink layer thickness and chromaticity is further optimized through color difference analysis and height data error analysis.
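The measurement-and-prediction step of this embodiment can be sketched as follows; the repeated readings and the linear stand-in for the trained cyan conversion model are invented purely for illustration.

```python
from statistics import mean

# Illustration of the first embodiment's measurement step: the chromaticity
# of the cyan-N1-M1 area is measured several times, averaged, and fed to the
# cyan chromaticity-height associated conversion model. Both the readings and
# the linear stand-in model below are invented for illustration.

def cyan_height_model(l_star):
    """Stand-in for the trained cyan chromaticity-to-height model."""
    return 2.0 - 0.02 * l_star        # assumed relation, height in mm

readings = [54.2, 53.8, 54.1, 54.3, 53.6]     # repeated L* measurements
avg = mean(readings)                           # average before prediction
height_mm = cyan_height_model(avg)
print(round(avg, 1), round(height_mm, 2))
```

Averaging the repeated readings before prediction reduces the influence of single-measurement noise on the deduced height, which is the point of measuring each area several times.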
Second embodiment
For copying a 3D map model: a map reproduced at 1:1 scale is selected and divided by color into four large areas, namely a cyan area, a magenta area, a yellow area and a neutral area; each area is divided into eight areas by height, marked N1, N2, N3, N4, N5, N6, N7 and N8, and each of those is subdivided in turn into M1 through M8. In the cyan-N1-M1 area, the surface chromaticity value is measured several times and averaged, and the average is substituted, through MATLAB programming, into the cyan chromaticity value-height value associated conversion model to obtain the corresponding height value. By analogy, the other areas are substituted into the magenta, yellow and neutral chromaticity value-height value associated conversion models respectively, giving the height values of the different areas of the map's surface; the map is then obtained through 3D modeling, layering and 3D printing. The associated model between 3D ink layer thickness and chromaticity is further optimized through color difference analysis and height data error analysis.
Regarding the specific structure of the invention, it should be noted that the connection relations between the component modules adopted by the invention are definite and realizable; beyond the specific descriptions in the embodiments, these connection relations bring corresponding technical effects and solve the technical problem of the invention without depending on the execution of corresponding software programs.
Finally, it should be noted that: the above embodiments are only for illustrating the technical solution of the present invention, and not for limiting the same; although the invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical scheme described in the foregoing embodiments can be modified or some or all of the technical features thereof can be replaced by equivalents; such modifications and substitutions do not depart from the spirit of the invention.

Claims (7)

1. A method for copying blind images based on a 3D ink layer thickness and chromaticity algorithm, characterized by comprising the following steps:
1) Five images with obvious tonal gradation, in cyan, magenta, yellow, neutral and mixed colors respectively, are selected and marked as training group a (cyan), training group b (magenta), training group c (yellow), training group d (neutral) and experimental group e (mixed colors);
2) Numbering and marking are carried out according to the image tone level and the color characteristics in a training group to obtain a series of training numbers, and numbering and marking are carried out according to the image color types in an experiment group to obtain a series of experiment numbers;
3) Acquiring a chromaticity data set L of different areas of the image surface in the training groups and the experimental group, wherein the chromaticity data set L is divided into a training chromaticity data set Lt and an experimental chromaticity data set Le;
4) Scanning images in a training group and an experimental group by adopting a 3D profiler to obtain height data sets H of different areas on the surface of the images, wherein the height data sets H are divided into a training height data set Ht and an experimental height data set He;
5) In training group a, the obtained chromaticity data set L and the corresponding height data set H are input, together with the compiled artificial neural network training algorithm, and parameterized association training is carried out to construct an associated conversion model between the image's cyan chromaticity values and height values; by analogy, an associated magenta chromaticity value-height value conversion model is constructed in training group b, a yellow chromaticity value-height value conversion model in training group c, and a neutral chromaticity value-height value conversion model in training group d;
6) In the experimental group, the measured image's chromaticity data set L is input, by category, into the associated conversion models obtained in step 5); the corresponding height data set G for the different areas of the image is output, converted into a 3D model, and the image is copied using 3D printing technology.
2. The method for reproducing an image for the blind based on the 3D ink layer thickness and chromaticity algorithm as recited in claim 1, wherein: further comprises:
7) Performing tone gradation and color comparison on the copy manuscript and the manuscript of the experimental group, and performing quantization comparison on the image obtained in the step 6) and the manuscript by using a color difference data set analysis algorithm and a height data set error analysis algorithm to obtain surface color difference and height difference;
the color differences and height differences of the different areas of the image surface are fed forward to the compiled artificial neural network training algorithm, and the newly set weights are adjusted until the predicted blind image is 3D-printed and copied with those weights;
8) After the adjustment, the experimental group manuscript is copied again by adopting the 3D printing technology.
3. The method for reproducing an image for the blind based on the 3D ink layer thickness and chromaticity algorithm as recited in claim 1, wherein: the compiled artificial neural network training algorithm is a training algorithm written with a MATLAB toolkit, with an extensible number of neurons in each layer.
4. The method for reproducing an image for the blind based on the 3D ink layer thickness and chromaticity algorithm as recited in claim 1, wherein: the numbering in step 2) is as follows:
In training group a, numbering and marking are carried out according to image tonal gradation and color characteristics: the group is divided into N areas according to image height, marked a-Nn (n = 1 to N); each area is further divided into M small areas, marked a-Nn-Mm (m = 1 to M); the training numbers of the remaining training groups b, c and d are obtained by analogy;
in experimental group e, the image is divided by color category into four large areas, namely a cyan area, a magenta area, a yellow area and a neutral area; the cyan area is divided into N areas according to image height, marked e-cyan-Nn, and each of those areas is divided into M small areas, marked e-cyan-Nn-Mm; the numbers of the magenta, yellow and neutral areas are obtained by analogy.
5. The method for reproducing an image for the blind based on the 3D ink layer thickness and chromaticity algorithm as recited in claim 1, wherein: the images of the training groups and the experimental group marked and numbered in step 2) receive a new mark carrying a layer of structural attributes, represented respectively by a chromaticity value Ln and a height value Hn, where n = 1 to N.
6. The method for reproducing an image for the blind based on the 3D ink layer thickness and chromaticity algorithm as recited in claim 2, wherein: in step 7), MATLAB GUI interface design and coding under a point-by-point comparison strategy are adopted to feed the color differences and height differences of the different areas of the image surface back to the compiled artificial neural network training algorithm.
7. The method for reproducing an image for the blind based on the 3D ink layer thickness and chromaticity algorithm as recited in claim 1, wherein: the activation function of the artificial neural network training algorithm in step 5) is any one of a Sigmoid, Tanh, ReLU or ELU function.
CN202210544773.0A, filed 2022-05-19 (priority date 2022-05-19): Method for copying blind image based on 3D ink layer thickness and chromaticity algorithm. Status: Active. Granted as CN114926585B.

Priority Applications (1)

CN202210544773.0A (priority date 2022-05-19, filing date 2022-05-19): Method for copying blind image based on 3D ink layer thickness and chromaticity algorithm


Publications (2)

Publication Number Publication Date
CN114926585A CN114926585A (en) 2022-08-19
CN114926585B true CN114926585B (en) 2024-06-18

Family

ID=82807700

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210544773.0A Active CN114926585B (en) 2022-05-19 2022-05-19 Method for copying blind image based on 3D ink layer thickness and chromaticity algorithm

Country Status (1)

Country Link
CN (1) CN114926585B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109253862A (en) * 2018-08-31 2019-01-22 武汉精测电子集团股份有限公司 A neural-network-based color measurement method
CN109902592A (en) * 2019-01-30 2019-06-18 浙江大学 A deep-learning-based walking-assistance method for the blind

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3360358B2 (en) * 1993-06-30 2002-12-24 東洋インキ製造株式会社 How to determine printing color material amount


Also Published As

Publication number Publication date
CN114926585A (en) 2022-08-19

Similar Documents

Publication Publication Date Title
EP2480929B1 (en) Colored contact lens based on amorphous images
CN100464566C (en) Method, equipment and computer program of changed digital color image
CA2415009C (en) Color reproduction process
CN102120384A (en) Multiple primitive color printing quality control method
CN1946563A (en) N-ink color gamut construction
JPH07160871A (en) Method and device for correcting color picture
US5386496A (en) Method and device for nonlinear transformation of colour information by neural network
EP1743477A1 (en) Multi-color printing using a halftone screen
JP2007535865A (en) Hybrid dot line halftone synthesis screen
CN102110428B (en) Method and device for converting color space from CMYK to RGB
CN114926585B (en) Method for copying blind image based on 3D ink layer thickness and chromaticity algorithm
CN108274748A (en) Layer-cutting printing method for multicolor 3D object
JP2004188973A (en) System and method for processing multicolor image
GB2234410A (en) Converting image densities between different types of reproduction process
CN110120005B (en) Displaying and hiding method for three-dimensional effect of halftone hidden image
CN115762326A (en) Method for manufacturing color anti-counterfeiting label made of transparent material
CN113409206A (en) High-precision digital printing color space conversion method
Nack et al. Colour picking: the pecking order of form and function
CN106097288A (en) For generating the method for the contrast image of object structures and relevant device thereof
CN110163789A (en) Halftoning based on Moire effect duplicates Fragile Watermarking Technique method
JPH10243247A (en) Color information conversion method and device to match observation results of colors with each other
JP2001352457A (en) Information color system and printing method for the information color system
Silapasuphakornwong et al. An Exploration into Color Reproduction for Inkjet FDM Color 3D Printing.
US20050195418A1 (en) Method for processing a multi-colour image
Yanzhe A novel algorithm based on improved BP neural network and its application in color management

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant