CN102582242A - Ink key opening prediction method in digital printing working process - Google Patents

Ink key opening prediction method in digital printing working process

Info

Publication number
CN102582242A
CN102582242A (application CN201110426515A)
Authority
CN
China
Prior art keywords
printing
ink
dot area
training sample
area percentage
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN2011104265154A
Other languages
Chinese (zh)
Inventor
王民
王敏杰
昝涛
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing University of Technology
Original Assignee
Beijing University of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing University of Technology filed Critical Beijing University of Technology
Priority to CN2011104265154A priority Critical patent/CN102582242A/en
Publication of CN102582242A publication Critical patent/CN102582242A/en
Pending legal-status Critical Current

Links

Images

Abstract

The invention relates to an ink key opening prediction method in the digital printing workflow. Prior-art methods consider neither the mutual influence among ink keys nor the influence of the printing conditions, and therefore fail to achieve the expected effect. In the invention, a three-layer BP (back propagation) neural network is trained with supervision on training samples and then predicts ink key openings for untrained samples. The complete page data of the customer's original is rasterized to produce bitmap data, which software converts into graphic-and-text information, i.e. dot area percentages. The printing field conditions (comprising the on-site temperature, on-site humidity, and press speed) and the dot area percentages of the ink zones serve as the raw input data of the BP network, and a nonlinear mapping from these inputs to the ink key openings is established; the trained BP network then predicts the untrained samples. The method effectively shortens the start-up adjustment time and improves printing efficiency.

Description

An ink key opening prediction method in a digital printing workflow
Technical field
The invention belongs to the field of digital printing, and specifically relates to an ink key opening prediction method in a digital printing workflow.
Background technology
Ink presetting generates ink control data in advance, during the prepress plate-making process, before the press starts. These data are used to control and adjust the opening of each ink key on the press, and thereby the ink quantity supplied to each ink zone, so that the inking of the first printed sheet comes as close as possible to the finished print; ink presetting is the technology of setting the ink quantity of every ink key of the press in advance. It is the representative technology by which digitization enters the print-production link, one of the important key technologies in the digital printing workflow, and plays a decisive role in print quality and printing efficiency.
Ink presetting comprises two processes: pre-computing and correcting the ink requirement, and loading the data into the press; that is, a "pre" part and a "setting" part, of which the "pre" process matters most. The benefit of ink presetting comes from setting the ink quantity accurately before start-up. The advance adjustment it realizes has two characteristics: first, it is accurate; second, it is anticipatory, and this is the core of the ink-presetting idea. "In advance" means the ink quantity of each ink key is set before printing starts, rather than adjusted after start-up, so the ink-quantity adjustment is forward-looking.
The press controls ink usage by dividing the printable part of the plate, along the direction perpendicular to its long side, into many long, narrow regions called ink zones, or ink tracks. The ink quantity of each zone can be regulated accurately according to the percentage of the zone's area occupied by the graphic-and-text portion of the plate, the dot area percentage: the higher this percentage, the more ink is needed.
In the traditional printing mode, ink adjustment relies on the press operator inspecting printed sheets by eye, estimating the ink quantity, entering it manually on the console, and then running a trial print. Proofs are pulled and inspected, and the ink quantity is adjusted further, until the sheet is close enough to the digital original to begin formal printing. This method is very time-consuming (20 to 30 minutes) and produces a large amount of waste paper. The ink-presetting technology of the digital printing workflow instead computes the sheet's dot area percentages through prepress color separation, RIP rasterization, and similar processing, and then predicts the ink key openings with an ink-presetting algorithm derived from tests, which takes only 1 to 3 minutes. Compared with the traditional approach it reduces start-up waste, shortens start-up time, lowers operating difficulty and labor intensity, and, because the presetting data can be backed up, allows the data to be reused. It thus stabilizes product quality and advances the standardization, digitization, and normalization of ink presetting.
Tests on a BR624A offset press of the Beiren Group, studying the ink-presetting technology of the digital printing workflow and above all the relation between the sheet's dot area percentage and the ink key opening, show that the printing conditions, namely the on-site temperature, on-site humidity, and press speed, have the greatest influence on the ink key opening. With the other conditions fixed, the ink key opening increases with printing speed, and rises overall as the dot area percentage grows.
Summary of the invention
The present invention relates to an ink key opening prediction method in the digital printing workflow. To further improve print quality and production efficiency, some domestic printing enterprises have introduced various foreign ink-presetting systems that pre-adjust the press's ink keys before start-up; in practice, however, these systems are unsatisfactory, because they consider neither the mutual influence among ink keys nor the influence of the printing conditions, and so fail to achieve the expected effect.
An ink key opening prediction method in the digital printing workflow, characterized by comprising the following steps:
1) Take as training samples printed sheets whose solid density (the surface color density of an evenly printed, non-blank area) meets the GB national printing standard. Run printing tests on the training samples in the four process colors CMYK, i.e. cyan (Cyan), magenta (Magenta), yellow (Yellow), and black (Black), in a fixed overprint order: black, magenta, cyan, yellow; or alternatively cyan, magenta, yellow, black. Record the printing conditions of the test, i.e. on-site temperature, on-site humidity, and press speed. Obtain the dot area percentages of the training samples: the complete page data of the customer's original is rasterized by a RIP (raster image processor) to produce bitmap data, which is converted by software into graphic-and-text information, i.e. the dot area percentages. At the same time obtain the ink key openings corresponding to the training samples' dot area percentages;
2) Normalize the printing conditions (on-site temperature, on-site humidity, press speed) and the dot area percentages of the 20-30 ink zones and use them as the input data of the BP neural network's input layer; normalize the ink key openings and use them as the target data of the output layer. Set the number of hidden-layer nodes to 21-35. Using a three-layer (input, hidden, output) BP neural network training module, perform supervised training on the training samples. When the convergence error of the BP network falls below 10⁻⁴, the network has finally converged; save the weights and thresholds of the BP algorithm's nonlinear mapping to a database;
3) For a printed sheet that is not a training sample, feed its dot area percentages and printing conditions (on-site temperature, on-site humidity, and press speed) to the prediction module of the BP algorithm. Using the stored weights and thresholds, the BP program performs prediction on the input temperature, humidity, press speed, and dot area percentages, and so predicts the ink key openings corresponding to the untrained sample's dot area percentages. The predicted ink key openings are sent to the press console over a network or on storage media; the console receives the data, automatically sets the corresponding ink keys, and supplies ink to the press to complete the print run.
The present invention trains the BP algorithm on qualified actual printed sheets, establishing a nonlinear mapping from the sheet's graphic-and-text digital information and the printing conditions (on-site temperature, on-site humidity, press speed) to the ink key openings. It can effectively shorten the start-up adjustment time and improve printing efficiency.
Description of drawings
Fig. 1 Schematic diagram of the ink-presetting technique.
Fig. 2 Flow chart of the learning and training of the BP neural network.
Fig. 3 Predicted and actual ink key openings.
The specific embodiment
BP neural network learning is a form of supervised learning. Its main idea is: input the learning samples, then use the back-propagation algorithm to repeatedly adjust the weights and thresholds of the network so that the actual output vector comes as close as possible to the desired output vector; when the sum of squared errors at the output layer is less than the specified error, training is complete and the network's weights and thresholds are saved. The concrete steps are as follows:
1. Parameters
Network input vector P = (a_1, a_2, …, a_n)^T: the normalized dot area percentages, on-site temperature, on-site humidity, and press speed;
Network desired output vector T = (s_1, s_2, …, s_q)^T: the normalized ink key openings, with q = n − 3 and q taken as 20-30;
Hidden-layer input vector S = (s_1, s_2, …, s_p)^T and output vector B = (b_1, b_2, …, b_p)^T, with p taken as 21-35;
Output-layer input vector L = (l_1, l_2, …, l_q)^T and actual output vector C = (c_1, c_2, …, c_q)^T;
Input-to-hidden connection weights W, an n × p matrix with elements w_ij (rows w_11 … w_1p through w_n1 … w_np), i = 1, 2, …, p, j = 1, 2, …, n;
Hidden-to-output connection weights V, a q × p matrix with elements V_ti, t = 1, 2, …, q, i = 1, 2, …, p;
Output thresholds of the hidden-layer units, θ = (θ_1, θ_2, …, θ_p)^T, i = 1, 2, …, p;
Output thresholds of the output-layer units, y = (y_1, y_2, …, y_q)^T, t = 1, 2, …, q;
Momentum factor α, 0 < α < 1;
Learning rate β of the BP neural network, 0 < β < 1.
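As a minimal sketch, the parameters above can be set up in Python with NumPy as follows (the dimensions n = 23, p = 23, q = 20 are those of the embodiment described later in the text; all variable names are ours, not the patent's):

```python
import numpy as np

rng = np.random.default_rng(0)

n, p, q = 23, 23, 20               # input, hidden, output layer sizes (from the embodiment)
W = rng.uniform(-1, 1, (n, p))     # input-to-hidden weights w_ij, drawn from (-1, 1)
V = rng.uniform(-1, 1, (q, p))     # hidden-to-output weights V_ti
theta = rng.uniform(-1, 1, p)      # hidden-layer thresholds θ_i
y = rng.uniform(-1, 1, q)          # output-layer thresholds y_t
alpha, beta, eps = 0.9, 0.4, 1e-4  # momentum factor, learning rate, convergence error
```

Here W stores w_ij with input index j along rows and hidden index i along columns, matching the n × p matrix above.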
2. Concrete learning process:
(1) Initialization. Assign every connection weight w_ij and V_ti and every threshold θ_i and y_t a random number in (−1, 1), and set the convergence error ε of the BP algorithm;
(2) Choose one input/target sample pair P = (a_1, a_2, …, a_n)^T, T = (s_1, s_2, …, s_q)^T, normalize it, and present it to the input layer and output layer of the BP network respectively;
(3) From the input-layer data P, the connection weights w_ij, and the thresholds θ_i, compute the input s_i of each hidden unit, then pass s_i through the transfer function to obtain each hidden unit's output b_i. The transfer function is the sigmoid, f(x) = 1 / (1 + e^(−x)):
s_i = Σ_{j=1}^{n} w_ij a_j − θ_i, i = 1, 2, …, p;
b_i = f(s_i), i = 1, 2, …, p;
(4) From the hidden outputs b_i, the connection weights V_ti, and the thresholds y_t, compute the input l_t of each output unit, then apply the transfer function to obtain each output unit's response c_t:
l_t = Σ_{i=1}^{p} V_ti b_i − y_t, t = 1, 2, …, q;
c_t = f(l_t), t = 1, 2, …, q;
(5) From the network's actual output C = (c_1, c_2, …, c_q)^T and desired output T = (s_1, s_2, …, s_q)^T, compute the objective function (error) E. If E is less than the set convergence error ε, the network has converged: finish iterating and save the weights w_ij, V_ti and thresholds θ_i, y_t; otherwise continue with step (6) and revise the weight and threshold matrices:
E = (1/2) Σ_{t=1}^{q} (s_t − c_t)²;
(6) From the actual output C and the desired output T, compute the generalized error d_t of each output unit:
d_t = (s_t − c_t) · c_t · (1 − c_t), t = 1, 2, …, q;
(7) From the output-layer generalized errors d_t, the connection weights V_ti, and the hidden outputs b_i, compute the generalized error e_i of each hidden unit:
e_i = [Σ_{t=1}^{q} d_t V_ti] · b_i · (1 − b_i), i = 1, 2, …, p;
(8) Use the output-layer generalized errors d_t and the hidden outputs b_i to revise the connection weights V_ti and thresholds y_t:
V_ti′ = α V_ti + β d_t b_i, t = 1, 2, …, q, i = 1, 2, …, p;
y_t′ = y_t + β d_t, t = 1, 2, …, q;
where α is the momentum factor, 0 < α < 1, and β is the learning rate of the network, 0 < β < 1;
(9) Use the hidden-layer generalized errors e_i and the input-layer values P = (a_1, a_2, …, a_n) to revise the connection weights w_ij and thresholds θ_i:
w_ij′ = α w_ij + β e_i a_j, j = 1, 2, …, n, i = 1, 2, …, p;
θ_i′ = θ_i + β e_i, i = 1, 2, …, p;
(10) Recompute the hidden-layer inputs and outputs with the revised weights w_ij′ and thresholds θ_i′, again using the sigmoid transfer function:
s_i′ = Σ_{j=1}^{n} w_ij′ a_j − θ_i′, i = 1, 2, …, p;
b_i′ = f(s_i′), i = 1, 2, …, p;
(11) From the hidden outputs b_i′, the connection weights V_ti′, and the thresholds y_t′, recompute the input l_t′ of each output unit, then apply the transfer function to obtain each output unit's response c_t′:
l_t′ = Σ_{i=1}^{p} V_ti′ b_i′ − y_t′, t = 1, 2, …, q;
c_t′ = f(l_t′), t = 1, 2, …, q;
(12) From the network's new actual output C′ = (c_1′, c_2′, …, c_q′)^T and desired output T, compute the objective function (error) E′. If E′ is less than the set convergence error ε, the network has converged: finish iterating and save the weights and thresholds; otherwise return to step (6), revise the weight and threshold matrices, and continue iterating:
E′ = (1/2) Σ_{t=1}^{q} (s_t − c_t′)²;
(13) Present the next training sample pair to the BP network and repeat steps (3)-(12) until all training samples have been trained.
The values of the learning rate β and the momentum factor α are critical to the convergence speed, and the best choice varies from sample to sample; at present there is no unified method for selecting them. When tuning these two parameters, the general principles are: increasing β lengthens the learning step, but the weight coefficients may oscillate; increasing α slows learning, but suppresses oscillation of the weight coefficients.
This process is represented by the learning-and-training flow chart of the BP neural network in Fig. 2.
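One supervised training pass over a single sample, steps (3) through (9) above, can be sketched in NumPy as follows. This is a hedged illustration under our reading of the text, not the patent's actual program; in particular, step (8) as written folds the momentum factor α into the retained weight term (V′ = αV + βd b), and the sketch implements it literally. All function and variable names are ours.

```python
import numpy as np

def sigmoid(x):
    # Transfer function f(x) = 1 / (1 + e^(-x)) from step (3)
    return 1.0 / (1.0 + np.exp(-x))

def forward(a, W, V, theta, y):
    # Steps (3)-(4): hidden and output activations.
    # W is n x p (w_ij, j = input, i = hidden); V is q x p (V_ti).
    s = W.T @ a - theta   # s_i = sum_j w_ij a_j - θ_i
    b = sigmoid(s)        # b_i = f(s_i)
    l = V @ b - y         # l_t = sum_i V_ti b_i - y_t
    c = sigmoid(l)        # c_t = f(l_t)
    return b, c

def train_step(a, t, W, V, theta, y, alpha=0.9, beta=0.4):
    # One pass of steps (3)-(9) for a single sample (input a, target t).
    b, c = forward(a, W, V, theta, y)
    E = 0.5 * np.sum((t - c) ** 2)          # step (5): objective function
    d = (t - c) * c * (1 - c)               # step (6): output generalized error
    e = (V.T @ d) * b * (1 - b)             # step (7): hidden generalized error
    V2 = alpha * V + beta * np.outer(d, b)  # step (8), as written in the text
    y2 = y + beta * d
    W2 = alpha * W + beta * np.outer(a, e)  # step (9): w_ij' = α w_ij + β e_i a_j
    theta2 = theta + beta * e
    return E, W2, V2, theta2, y2
```

Steps (10)-(12) then amount to calling `forward` again with the revised weights and recomputing E′, looping until E′ < ε.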
A concrete example follows. First the training-sample original is digitized to obtain the complete page data; rasterization by a RIP (raster image processor) then produces bitmap data, which software converts into layout information, namely the dot area percentages. The on-site relative humidity during printing is 30%, the press speed 4000 sheets/hour, and the temperature 25 °C. Because the collected quantities have inconsistent units, the data must be normalized to [0, 1] to speed up convergence of the training network; the three conditions above normalize to 0.3, 0.4, and 0.25 respectively. The ink fountain used has 100 increments, so the actual ink key opening is normalized simply by dividing by 100, and the dot area percentage is taken directly as its fractional value in [0, 100%] without further normalization. On-site temperature, on-site humidity, and press speed are the principal factors influencing the ink key opening, and adjacent ink keys also influence one another, so the dot area percentage of any single zone cannot serve alone as the BP network's input. The number of input-layer neurons is therefore set to 23, comprising the on-site temperature, on-site humidity, press speed, and the dot area percentages of 20 ink zones. The output-layer nodes are, in order, the 20 ink key openings corresponding to the ink-zone dot area percentages, and the number of hidden-layer nodes is set to 23. Learning rate β = 0.4, momentum factor α = 0.9, expected error ε = 10⁻⁴.
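The normalization just described can be sketched as below. The divisors (100 for °C, 100 for % relative humidity, 10000 for sheets/hour, 100 fountain increments) are inferred from the example values 30% → 0.3, 4000 → 0.4, 25 °C → 0.25; they are our assumption, not stated explicitly in the text.

```python
def normalize_conditions(temp_c, humidity_pct, speed_sph):
    """Map the printing conditions into [0, 1].
    Divisors are inferred from the worked example, in the order the
    input rows list them: humidity, press speed, temperature."""
    return humidity_pct / 100.0, speed_sph / 10000.0, temp_c / 100.0

def normalize_key_opening(opening, fountain_steps=100):
    """Ink key opening over a 100-increment fountain scale."""
    return opening / fountain_steps
```

With the example conditions, `normalize_conditions(25, 30, 4000)` reproduces the leading triple 0.3, 0.4, 0.25 of each input row below.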
The learning and training process of the BP neural network:
(1) Use a random function to assign every connection weight w_ij (a 23 × 23 matrix) and V_ti (a 20 × 23 matrix) and every threshold θ_i (a 23 × 1 vector) and y_t (a 20 × 1 vector) a random number in (−1, 1);
(2) Normalize the training samples. In four-color printing, CMYK stands for the four printing-ink names: C, M, and Y are the initials of Cyan, Magenta, and Yellow, and K is the last letter of black. The graphics and text of each ink zone are formed by overprinting these four colors in a certain order; the order adopted in this printing test is cyan, magenta, yellow, black. Taking the cyan of each ink zone as an example, six groups of training-sample data were chosen and normalized; the processed data follow. One of the six groups is picked at random, its input and target data assigned to P = (a_1, a_2, …, a_23) and T = (s_1, s_2, …, s_20) respectively, and presented to the BP network;
Under identical printing conditions, the dot area percentages of the printed sheets serve as the network's input-layer values:
0.3,0.4,0.25,0,0,0.10,0.19,0.11,0.09,0.22,0.19,0.25,0.23,0.25,0.19,0.20,0.26,0.15,0.06,0.13,0.15,0,0;
0.3,0.4,0.25,0,0,0.01,0.14,0.10,0.11,0.23,0.21,0.19,0.20,0.20,0.18,0.22,0.14,0.06,0.11,0.10,0,0,0;
0.3,0.4,0.25,0,0,0.10,0.18,0.14,0.08,0.26,0.19,0.23,0.22,0.28,0.35,0.33,0.32,0.18,0.10,0.23,0.09,0,0;
0.3,0.4,0.25,0,0,0.10,0.18,0.13,0.09,0.24,0.19,0.25,0.23,0.24,0.19,0.27,0.31,0.15,0.08,0.19,0.09,0,0;
0.3,0.4,0.25,0,0,0.15,0.22,0.14,0.12,0.33,0.28,0.31,0.28,0.31,0.25,0.32,0.36,0.19,0.09,0.19,0.06,0,0;
0.3,0.4,0.25,0,0,0.10,0.19,0.12,0.09,0.22,0.19,0.25,0.23,0.25,0.19,0.20,0.26,0.15,0.09,0.13,0.15,0,0.
Under identical printing conditions, the ink key openings corresponding to each sheet's dot area percentages serve as the network's output-layer target values:
0.02,0.02,0.18,0.28,0.23,0.18,0.26,0.25,0.30,0.26,0.29,0.25,0.28,0.32,0.23,0.15,0.21,0.24,0.02,0.02;
0.02,0.02,0.03,0.22,0.18,0.22,0.27,0.27,0.26,0.26,0.27,0.24,0.28,0.25,0.11,0.22,0.19,0.02,0.02,0.02;
0.02,0.02,0.20,0.25,0.22,0.16,0.30,0.25,0.28,0.27,0.31,0.34,0.33,0.33,0.25,0.20,0.28,0.18,0.02,0.02;
0.02,0.02,0.20,0.25,0.22,0.16,0.30,0.26,0.30,0.28,0.28,0.25,0.30,0.33,0.23,0.16,0.27,0.17,0.02,0.02;
0.02,0.02,0.23,0.27,0.22,0.21,0.33,0.31,0.33,0.32,0.32,0.29,0.33,0.34,0.25,0.18,0.26,0.18,0.02,0.02;
0.02,0.02,0.19,0.28,0.25,0.17,0.28,0.24,0.30,0.26,0.30,0.26,0.28,0.30,0.23,0.17,0.22,0.24,0.02,0.02.
(3) From the input-layer data P = (a_1, a_2, …, a_23)^T, the connection weights w_ij, and the thresholds θ_i, compute the input s_i of each hidden unit, then pass s_i through the transfer function to obtain each hidden unit's output b_i. The transfer function is the sigmoid, f(x) = 1 / (1 + e^(−x)):
s_i = Σ_{j=1}^{23} w_ij a_j − θ_i, i = 1, 2, …, 23;
b_i = f(s_i), i = 1, 2, …, 23;
(4) From the hidden outputs b_i, the connection weights V_ti, and the thresholds y_t, compute the input l_t of each output unit, then apply the transfer function to obtain each output unit's response c_t:
l_t = Σ_{i=1}^{23} V_ti b_i − y_t, t = 1, 2, …, 20;
c_t = f(l_t), t = 1, 2, …, 20;
(5) From the network's actual output C = (c_1, c_2, …, c_20)^T and desired output T = (s_1, s_2, …, s_20)^T, compute the objective function (error) E. If E is less than the set convergence error ε, the network has converged: finish iterating and save the weights w_ij, V_ti and thresholds θ_i, y_t; otherwise continue with step (6) and revise the weight and threshold matrices:
E = (1/2) Σ_{t=1}^{20} (s_t − c_t)²;
(6) From the actual output C and the desired output T, compute the generalized error d_t of each output unit:
d_t = (s_t − c_t) · c_t · (1 − c_t), t = 1, 2, …, 20;
(7) From the output-layer generalized errors d_t, the connection weights V_ti, and the hidden outputs b_i, compute the generalized error e_i of each hidden unit:
e_i = [Σ_{t=1}^{20} d_t V_ti] · b_i · (1 − b_i), i = 1, 2, …, 23;
(8) Use the output-layer generalized errors d_t and the hidden outputs b_i to revise the connection weights V_ti and thresholds y_t:
V_ti′ = α V_ti + β d_t b_i, t = 1, 2, …, 20, i = 1, 2, …, 23;
y_t′ = y_t + β d_t, t = 1, 2, …, 20;
where α is the momentum factor, α = 0.9, and β is the learning rate of the network, β = 0.4;
(9) Use the hidden-layer generalized errors e_i and the input-layer values P = (a_1, a_2, …, a_23) to revise the connection weights w_ij and thresholds θ_i:
w_ij′ = α w_ij + β e_i a_j, j = 1, 2, …, 23, i = 1, 2, …, 23;
θ_i′ = θ_i + β e_i, i = 1, 2, …, 23;
(10) Recompute the hidden-layer inputs and outputs with the revised weights w_ij′ and thresholds θ_i′, again using the sigmoid transfer function:
s_i′ = Σ_{j=1}^{23} w_ij′ a_j − θ_i′, i = 1, 2, …, 23;
b_i′ = f(s_i′), i = 1, 2, …, 23;
(11) From the hidden outputs b_i′, the connection weights V_ti′, and the thresholds y_t′, recompute the input l_t′ of each output unit, then apply the transfer function to obtain each output unit's response c_t′:
l_t′ = Σ_{i=1}^{23} V_ti′ b_i′ − y_t′, t = 1, 2, …, 20;
c_t′ = f(l_t′), t = 1, 2, …, 20;
(12) From the network's new actual output C′ = (c_1′, c_2′, …, c_20′)^T and desired output T, compute the objective function (error) E′. If E′ is less than the set convergence error ε, the network has converged: finish iterating and save the weights and thresholds; otherwise return to step (6), revise the weight and threshold matrices, and continue iterating:
E′ = (1/2) Σ_{t=1}^{20} (s_t − c_t′)²;
(13) Present the next training sample pair to the BP network and repeat steps (3)-(12) until all training samples have been trained.
After 1086 iterations, the convergence error of the BP network fell below 10⁻⁴; at that point the BP network had completed its training on the six groups of training samples.
Taking the cyan dot area percentages and printing conditions of printed sheets outside the six groups above as input, ink key openings were predicted with the trained BP network; the predicted and the actual ink key openings are shown in Fig. 3.
Comparing the predicted and the actual ink key openings, the predicted trend agrees with the actual results and the error is small. A printing test based on the predictions measured a cyan solid density within the GB standard range, meeting the print-quality requirement. The method can thus effectively predict the ink key openings before start-up, improving the accuracy of ink presetting and achieving its intended effect.
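Prediction for an untrained sheet (step 3 of the method) amounts to a single forward pass with the stored weights and thresholds, followed by de-normalization back to fountain increments. A minimal sketch, with zeroed placeholder arrays standing in for the values saved to the database; all names are ours:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def predict_key_openings(sample, W, V, theta, y, fountain_steps=100):
    """Forward pass with stored weights/thresholds, then de-normalize the
    outputs to ink key openings on a 100-increment fountain scale.
    `sample` holds the 23 normalized inputs (conditions + 20 dot area pcts)."""
    b = sigmoid(W.T @ sample - theta)  # hidden layer: b_i = f(sum_j w_ij a_j - θ_i)
    c = sigmoid(V @ b - y)             # output layer: normalized openings c_t
    return c * fountain_steps          # openings in fountain increments
```

The 20 resulting values would then be sent to the press console, one per ink key.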

Claims (1)

1. An ink key opening prediction method in a digital printing workflow, characterized by comprising the following steps:
1) Take as training samples printed sheets whose solid density (the surface color density of an evenly printed, non-blank area) meets the GB national printing standard. Run printing tests on the training samples in the four process colors CMYK, i.e. cyan (Cyan), magenta (Magenta), yellow (Yellow), and black (Black), in a fixed overprint order: black, magenta, cyan, yellow; or alternatively cyan, magenta, yellow, black. Record the printing conditions of the test, i.e. on-site temperature, on-site humidity, and press speed. Obtain the dot area percentages of the training samples: the complete page data of the customer's original is rasterized by a raster image processor to produce bitmap data, which is converted by software into graphic-and-text information, i.e. the dot area percentages. At the same time obtain the ink key openings corresponding to the training samples' dot area percentages;
2) Normalize the printing conditions (on-site temperature, on-site humidity, press speed) and the dot area percentages of the 20-30 ink zones and use them as the input data of the BP neural network's input layer; normalize the ink key openings and use them as the target data of the output layer. Set the number of hidden-layer nodes to 21-35. Using a three-layer (input, hidden, output) BP neural network training module, perform supervised training on the training samples. When the convergence error of the BP network falls below 10⁻⁴, the network has finally converged; save the weights and thresholds of the BP algorithm's nonlinear mapping to a database;
3) For a printed sheet that is not a training sample, feed its dot area percentages and printing conditions (on-site temperature, on-site humidity, and press speed) to the prediction module of the BP algorithm. Using the stored weights and thresholds, the BP program performs prediction on the input temperature, humidity, press speed, and dot area percentages, and so predicts the ink key openings corresponding to the untrained sample's dot area percentages. The predicted ink key openings are sent to the press console over a network or on storage media; the console receives the data, automatically sets the corresponding ink keys, and supplies ink to the press to complete the print run.
CN2011104265154A 2011-12-19 2011-12-19 Ink key opening prediction method in digital printing working process Pending CN102582242A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN2011104265154A CN102582242A (en) 2011-12-19 2011-12-19 Ink key opening prediction method in digital printing working process

Publications (1)

Publication Number Publication Date
CN102582242A (en) 2012-07-18

Family

ID=46471727

Family Applications (1)

Application Number Title Priority Date Filing Date
CN2011104265154A Pending CN102582242A (en) 2011-12-19 2011-12-19 Ink key opening prediction method in digital printing working process

Country Status (1)

Country Link
CN (1) CN102582242A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1367079A (en) * 2001-01-24 2002-09-04 海德堡印刷机械股份公司 Method for regulating printing technical parameters of printing machine and other parameters related to printing process
CN101284445A (en) * 2008-05-29 2008-10-15 西安理工大学 Ink resetting method based on JDF digitalization process
CN101837675A (en) * 2009-03-13 2010-09-22 海德堡印刷机械股份公司 Be used for controlling the method for the inking of printing machine


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
ZAN Tao et al., "Intelligent ink presetting technology based on neural networks", Journal of Beijing University of Technology *
WANG Min et al., "Ink presetting technology in digital printing workflows", Journal of Beijing University of Technology *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106079882A (en) * 2016-06-06 2016-11-09 南京理工大学 A kind of sheet metal print is coated with intelligence ink-providing control system and method thereof
CN106079882B (en) * 2016-06-06 2018-06-12 南京理工大学 A kind of sheet metal print applies intelligent ink-providing control system and its method
CN106113936A (en) * 2016-08-02 2016-11-16 昆明理工大学 A kind of printer intelligence ink-feeding device and ink supply method


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C02 Deemed withdrawal of patent application after publication (patent law 2001)
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20120718