EP1741060A2 - Method for comparing an image with at least one reference image - Google Patents

Method for comparing an image with at least one reference image

Info

Publication number
EP1741060A2
EP1741060A2 (application EP05740075A)
Authority
EP
European Patent Office
Prior art keywords
image
color
logic
logic unit
signal
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
EP05740075A
Other languages
German (de)
English (en)
Inventor
Carsten Diederichs
Volker Lohweg
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Koenig and Bauer AG
Original Assignee
Koenig and Bauer AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Koenig and Bauer AG filed Critical Koenig and Bauer AG
Publication of EP1741060A2

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/0002 - Inspection of images, e.g. flaw detection
    • G06T7/0004 - Industrial image inspection
    • G06T7/001 - Industrial image inspection using an image reference approach
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30108 - Industrial image inspection
    • G06T2207/30144 - Printing quality

Definitions

  • the invention relates to a method for comparing an image with at least one reference image according to the preamble of claim 1.
  • Camera systems are increasingly being used for different applications, for example in inspection systems, web monitoring systems or register measuring systems, these systems being arranged in or on a printing press or a machine that processes a printing material. Furthermore, there is a requirement that these systems perform their function "inline", i.e. integrated in the work process of the printing press. Because of the large amount of data supplied by the camera system and the fast process flow of the machine, this is a considerable challenge for the respective camera system: even for spectrophotometric identification features that are difficult to identify, a reliable assessment, preferably of each individual identification feature, must be reached in the short time available, despite the high transport speed of the material, for example in quality control.
  • Use is often made of electronic image sensors, in particular color cameras with an image sensor consisting of a CCD chip, whose light-sensitive pixels provide an output signal corresponding to the color recorded in the observation area, e.g. in three separate signal channels, mostly for the colors red, green and blue.
  • the position of a recognition feature to be assessed in the test process varies within certain tolerance limits within a defined expectation range.
  • For example, the position of a window thread, as is used e.g. for banknotes or tokens, can vary relative to the printed image of the banknotes or tokens on a printed sheet due to the properties of the production process for producing the window thread.
  • Positional tolerances of certain identification features, which are in principle tolerable, can generate an error message, since, when comparing a print pattern defined as a target value with the current print image, image position after image position is compared one after the other, so that positional deviations of identification features are identified as errors even though they are none.
  • DE 101 32 589 A1 discloses a method for the qualitative assessment of printed material with at least one identification feature, in which an image sensor records an image of the material to be assessed and for this image the geometric contour and / or the relative arrangement in an evaluation device of several identifying features is evaluated with one another.
  • The post-published DE 103 14 071 B3 discloses a method for the qualitative assessment of a material with at least one identification feature, in which a color image is recorded at least of the identification feature with an electronic image sensor, at least one first electrical signal correlating directly or indirectly with the color image being provided by the image sensor, and an evaluation device connected to the image sensor evaluating the first electrical signal. A second electrical signal is obtained from at least one reference image and stored in a data memory, the second electrical signal containing a setpoint for the first electrical signal for each of at least two different properties of the reference image. The first signal is compared with at least two of the setpoints contained in the second electrical signal, whereby in the comparison at least the color image of the recognition feature is checked for a color deviation from the reference image and the identification feature is checked for belonging to a specific class of identification features, or for a specific geometric contour, or for a relative arrangement to at least one other identification feature of the material, with at least two of the checks of the color image being carried out after the evaluation device has been switched from a learning mode to a working mode.
  • Using a narrow-band excitation light source, e.g. a tunable laser, the object is irradiated with light of a narrow frequency range at a selected location, with light reflected from the object, or an emission induced in the object by its irradiation, being captured e.g. by a photometrically calibrated CCD camera with a multitude of pixels, digitized, sent to a computer as a data set identifying each pixel and stored in a memory. The photographically recorded object can also be measured, so that information on the geometrical arrangement of different objects, their distance from each other or the depth of their relief structure can be added to the data set.
  • The data set created from this image acquisition can, e.g., be made available over the Internet for a comparison of this data record with a data record created from another object, in order to check the other object elsewhere for a match with the first object, i.e. with the original, and thus to check its authenticity.
  • CH 684 222 A5 discloses a device for classifying a pattern, in particular a banknote or a coin, whereby a multi-stage learning classification system sequentially carries out at least three tests on the pattern by comparing feature vectors with vector nominal values, a light source illuminating the pattern and a sensor measuring the radiation reflected from the pattern at discrete times.
  • methods for pattern recognition determine similarities, such as distance measurements for segmented objects, or they calculate global threshold distributions. These methods are based on translation-invariant output spectra. In reality, however, situations often occur, such as object shifts under the recording system, different substrates during the recording or aliasing effects, so that in many cases a direct comparison of these output spectra with stored target values cannot be carried out.
  • the invention is based on the object of providing a method for comparing an image with at least one reference image, it being possible to carry out a complex assessment of the quality of a printed matter, in particular in real time in the ongoing printing process of a printing press.
  • The logic unit of the image processing system that carries out the image comparison, despite the complex inspection processes to be performed by it, is constructed in a space-saving manner and with low energy consumption in a single chip, so that the logic unit is easily adaptable to a printing press.
  • the logic unit of the image processing system can be flexibly adjusted to the different conditions in the printing process, which leads to a cost-effective solution for different applications.
  • the image processing system offers a sufficiently high bandwidth for processing video signals, in particular also for fast communication with other systems interacting with the image processing system.
  • The material is preferably a printed matter, in particular printed material with at least one identification feature.
  • The material is reliably and qualitatively assessed even if the color image recorded from the material, in particular from the identification feature, has optical properties that cannot be reliably identified with spectrophotometric methods alone. Since the methods preferably used here do not presuppose that the material to be assessed qualitatively has a pronounced reflectivity, practically any optically perceptible property or nature of the material can be defined as its distinguishing feature, which results in a significantly expanded field of application for the method. What the identification feature should consist of can therefore be decided based on the application. The test is aimed solely at the fact that there is an optically perceptible difference between the identification feature and its surroundings. This difference is used to assess the material qualitatively, which may also include, e.g., identifying the material or checking its authenticity.
  • the printing press is preferably designed as a rotary printing press, in particular as a printing press printing in an offset printing process, in a steel engraving process, in a screen printing process or in a hot stamping process.
  • If the printing press is designed as a sheet-fed printing press, the proposed design of the image processing system ensures that preferably the entire sheet can be inspected even at a machine speed of e.g. 18,000 sheets/h.
  • If the material to be printed is a web of material, the image processing system is able to subject printed matter that runs through the printing press at a machine speed of e.g. 15 m/s to a single-piece quality control.
  • the methods described below can be implemented in an FPGA (Field Programmable Gate Array) and executed there.
  • the implementation of the image processing system in an FPGA miniaturizes its apparatus structure and thereby facilitates its installation in or attachment to the printing press.
  • Access, e.g. to cylinders or other devices of the printing unit, is not impeded or restricted by the image processing system attached to the printing press, which is an important advantage for acceptance of the image processing system given the narrow, limited installation space in a printing unit of the printing press.
  • As a color space, e.g. in particular the so-called CIELAB color space can be used, which has found widespread use in printing technology.
  • An important parameter for a color deviation in the CIELAB color space is the color difference ΔE between the target and actual values of the parameters L, a and b characterizing the CIELAB color space, where L stands for the brightness, a for the red-green value and b for the yellow-blue value.
  • These parameters are also called CIE values.
  • Further parameters are the hue difference ΔH and the saturation difference ΔC, whereby in multicolor printing in particular the hue difference ΔH is important as a parameter, because a color cast is subjectively more disturbing than a saturation difference ΔC, which manifests itself as a difference in brightness.
  • For example, a ΔE of up to 1 represents an invisible color difference, from 2 a slight difference, from 3 a recognizable difference, from 4 a clear difference and from 5 a strong difference.
  • The range of values for the CIE values a and b extends from -100 for green or blue to +100 for red or yellow; the range for the brightness L extends from 0 (black; total absorption) to 100 (white; total reflection).
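The color-difference parameters above can be sketched in a few lines. The formulas below are the standard CIE76 definitions (ΔE as Euclidean distance in L, a, b; ΔC from the chroma C = √(a² + b²); ΔH derived from ΔE, ΔL and ΔC), not wording taken from the patent itself:

```python
import math

def delta_e(lab_ref, lab_act):
    """CIE76 color difference dE: Euclidean distance in (L, a, b)."""
    return math.sqrt(sum((r - a) ** 2 for r, a in zip(lab_ref, lab_act)))

def delta_c(lab_ref, lab_act):
    """Saturation (chroma) difference dC, with C = sqrt(a^2 + b^2)."""
    return math.hypot(lab_ref[1], lab_ref[2]) - math.hypot(lab_act[1], lab_act[2])

def delta_h(lab_ref, lab_act):
    """Hue difference dH, from the CIE76 decomposition dE^2 = dL^2 + dC^2 + dH^2."""
    dl = lab_ref[0] - lab_act[0]
    rad = delta_e(lab_ref, lab_act) ** 2 - dl ** 2 - delta_c(lab_ref, lab_act) ** 2
    return math.sqrt(max(rad, 0.0))
```

Applied to a target/actual pair, ΔE values map directly onto the visibility scale given above (up to 1 invisible, from 2 slight, and so on).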
  • The retina of the human eye contains three cone types: S cones, M cones and L cones.
  • The maximum absorption of the S cone type is in the blue range, namely at 420 nm.
  • The M cone type absorbs maximally in the green spectral range, namely at 534 nm.
  • The L cone type has its absorption maximum at 564 nm in the yellow/red spectral range.
  • Vision with these three cone types is called trichromatic vision.
  • The individual color impressions are triggered by differently strong stimuli of the individual cone types. An equally strong stimulation of all cone types leads to the impression of the color white.
  • However, color sensation phenomena such as color antagonism and color constancy are not explained by the trichromatic model alone.
  • Color antagonism means that certain colors can never be seen in transitions, so that no color transition between these colors is possible. Colors that show the color antagonism are called counter or complementary colors. The color pairs red / green and blue / yellow and black / white are worth mentioning here.
  • the color constancy compensates for the different spectral distribution of the light, which is dependent, for example, on weather or daylight conditions.
  • the counter-color model assumes that the cones are arranged in receptive fields, namely in blue / yellow fields and red / green fields.
  • Receptive fields are to be understood here as neurons and the way in which the stimuli of the cones are processed further by the neurons.
  • Two types of receptive fields are essentially responsible for color vision. The first receptive field gets its input from the L and M cones, the second receptive field from the S cones together with differently weighted stimuli from the L and M cones. It is assumed that at the level of the neurons or receptive fields, a subtractive color mixture is used to stimulate the cones.
  • the trichromatic model most commonly used in technology to describe additive color images is the RGB model.
  • the color space is described by the three primary colors red, green and blue.
  • the disadvantage of this model is in particular that the description made by the RGB model does not correspond to the sensation of the human eye, since in particular the behavior of human perception, that is to say the perception by the sensory organs, is not taken into account.
  • Electronic image sensors, in particular CCD chips for color cameras, generally have a large number of light-sensitive pixels arranged in an array, i.e. individual picture elements, e.g. a million or more, each of which as a rule provides a first electrical signal that correlates with the color image in accordance with the colored light recorded in the observation area.
  • This first electrical signal is, e.g., divided into three separate signal channels, each signal channel usually providing, at the time of viewing, the part of the first electrical signal corresponding to one of the primary colors red, green or blue. Such a signal is referred to as an RGB signal.
  • The spectral sensitivity of each signal channel (R; G; B) is matched to the spectral sensitivity of the human eye.
  • the first electrical signal as a whole is also adapted to the color perception of the human eye with regard to hue, saturation and brightness. A color image recorded with such a color camera is consequently composed of a large number of pixels.
  • The method for assessing the quality of a printed matter produced by a printing press is characterized in that a second electrical signal is obtained from at least one reference image and stored in a data memory, the second electrical signal forming at least one desired value for the first electrical signal, and in that at least the color image of the identification feature is checked for a color deviation from the reference image and/or the identification feature is checked for its belonging to a specific class of identification features and/or for a specific geometric contour and/or for a relative arrangement to at least one further identification feature of the material, in each case by comparing the first signal with the second signal for reaching the target value or for a correspondence with it.
  • the material and / or its identification feature is preferably always checked simultaneously with respect to at least two of the aforementioned criteria.
  • At least two of the checks of the color image are carried out, in particular the checking of the identification feature for a color deviation from a reference image and the checking of the identification feature for its belonging to a specific class of identification features, or for a specific geometric contour, or for a relative arrangement to other identification features of the material, preferably at the same time in parallel and independent test processes.
  • The color image is preferably checked for a color deviation from the reference image in that the part of the first signal belonging to the color image provided in the first signal channel is linked to the part provided in the second signal channel by means of a first calculation rule, thereby generating an output signal of a first counter-color channel; in that the part of the first signal belonging to the color image provided in the third signal channel is linked to the parts in the first and the second signal channel by means of a second calculation rule, thereby generating an output signal of a second counter-color channel; and in that the output signals of the counter-color channels are classified by a comparison with target values.
  • The checking of the recognition feature for its belonging to a certain class of recognition features is preferably carried out in that the first electrical signal provided by the image sensor is converted into a translation-invariant signal with at least one feature value by means of at least one calculation rule; in that the feature value is weighted with at least one fuzzy membership function; in that a superordinate fuzzy membership function is generated by linking all membership functions using a calculation rule consisting of at least one rule; in that a sympathy value is determined from the superordinate fuzzy membership function; in that the sympathy value is compared with a threshold value; and in that, depending on the result of this comparison, a decision is made about the belonging of the identification feature to a certain class of identification features.
  • The checking of the recognition feature for a specific geometric contour and/or for a relative arrangement to at least one further recognition feature of the material is preferably carried out in that at least one background setpoint and at least one mask setpoint are stored in the data memory, the background setpoint representing at least one property of the material to be assessed in at least part of a surrounding area surrounding the identification feature, and the mask setpoint representing the geometric contour of the identification feature or the relative arrangement of a number of identification features with respect to one another; in that, when testing the material, a difference value is formed at least for the expectation range from the first electrical signal provided by the image sensor and the background setpoint; in that the current position of the recognition feature is derived from a comparison of the difference value with the mask setpoint; and in that, in the qualitative assessment of the material, the area of the material to be assessed which results from the current position of the identification feature is masked out.
  • The adaptation of the first electrical signal to the color perception of the human eye takes place in that the RGB signal provided by the image sensor at every point in time is interpreted as a vectorial output signal, the coefficients of the RGB signal vector being multiplied in particular by a square correction matrix, so that all parts of the first electrical signal representing a signal channel are approximated to the color perception of the human eye.
  • By multiplying the RGB signal vector by a correction matrix, it is possible, on the one hand, to classify all printing inks relatively precisely into any color space.
  • An adaptation of the RGB signal vector by means of multiplication with the correction matrix is easy to implement in terms of data technology, so that an implementation in a real system is possible even with the large number of RGB signals that are provided simultaneously by a plurality of pixels of the image sensor.
  • the coefficients of the correction matrix are of crucial importance for the quality of the proposed correction of the RGB signals, since the RGB signal vectors are transformed in different ways depending on the choice of these coefficients.
  • the coefficients of the correction matrix can consist, for example, of empirical values. They are stored in a data store.
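Applied to a single pixel, the correction described above is a plain matrix-vector product. The sketch below assumes the RGB signal vector is a 3-tuple and the square correction matrix is a nested list; the names are illustrative:

```python
def correct_rgb(rgb, correction_matrix):
    """Multiply an RGB signal vector (3-tuple) by a 3x3 correction matrix."""
    return tuple(sum(correction_matrix[i][j] * rgb[j] for j in range(3))
                 for i in range(3))
```

With the identity matrix as coefficients, the signal passes through unchanged; any other choice of coefficients transforms the RGB signal vectors accordingly.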
  • an iterative approximation algorithm is proposed.
  • To carry out this approximation algorithm, a reference color chart, for example an IT8 chart with 288 color patches, is specified.
  • the different reference colors are shown in the color fields.
  • The first step is the classification of the different reference colors in a suitable color space, for example the CIELAB color space.
  • Known transformations can be used to calculate corresponding target values for the three signal channels from these predetermined CIELAB values for the different reference colors of the reference color chart.
  • a reference color chart is thus specified as the input variable for the approximation algorithm and a vector for each reference color with a target value for each signal channel as the desired result of the conversion.
  • When the approximation algorithm for determining the coefficients of the correction matrix is carried out, the reference color chart is recorded with the image sensor of the color camera and the RGB signal vector is determined for each color field. The difference between these RGB signal vectors of the color camera and the vectors with the predetermined target values corresponds to the difference between the color perception of the human eye and the sensitivity distribution of the color camera.
  • a further correction step can be carried out.
  • In this correction step, the coefficients of the RGB signal vectors are converted in such a way that the result corresponds to those RGB signal vectors that would be obtained by illuminating the observation area with a standard illuminant.
  • the color correction values for adapting the RGB signal vectors to different illumination sources and changes thereof can advantageously be calculated as follows.
  • the starting point of the iteration is a correction matrix, the coefficients of which are specified as output values. These initial values can either be selected purely by chance or selected according to certain empirical values.
  • this correction matrix is then multiplied by all the RGB signal vectors provided by the image sensor and the corrected RGB signal vectors thus obtained are temporarily stored in a data memory. Then the coefficients of the correction matrix are changed slightly and the multiplication is carried out again. The change in the coefficients of the correction matrix is only accepted if the corrected RGB signal vectors approach the vectors with the specified target values.
  • the approximation of the corrected RGB signal vectors to the vectors with the specified target values is evaluated for each iteration step in order to be able to decide on the basis of this evaluation whether the change in the coefficients of the correction matrix made in this iteration step should be adopted or rejected.
  • An advantageous evaluation method provides that, for each color field of the reference color chart, the difference value between the corrected RGB signal vector and the vector with the specified target values for this color field is determined, and all these difference values are summed.
  • the change in the correction coefficients of the correction matrix in the current iteration step is only adopted if the sum of all difference values in this current iteration step has become smaller compared to the sum of all difference values in the previous iteration step.
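The accept-or-reject iteration described in the bullets above can be sketched as a simple random-perturbation hill climb. The starting matrix, step size and number of steps are assumptions for illustration, not values from the patent:

```python
import random

def fit_correction_matrix(camera_rgbs, target_rgbs, steps=2000, step_size=0.05):
    """Random-perturbation hill climb: a small change to one coefficient of
    the correction matrix is kept only if the summed difference between the
    corrected camera vectors and the target vectors becomes smaller."""
    matrix = [[1.0 if i == j else 0.0 for j in range(3)] for i in range(3)]

    def total_error(m):
        err = 0.0
        for cam, tgt in zip(camera_rgbs, target_rgbs):
            corrected = [sum(m[i][j] * cam[j] for j in range(3)) for i in range(3)]
            err += sum(abs(c - t) for c, t in zip(corrected, tgt))
        return err

    best = total_error(matrix)
    for _ in range(steps):
        i, j = random.randrange(3), random.randrange(3)
        delta = random.uniform(-step_size, step_size)
        matrix[i][j] += delta              # slightly change one coefficient
        err = total_error(matrix)
        if err < best:
            best = err                     # improvement: keep the change
        else:
            matrix[i][j] -= delta          # otherwise reject it
    return matrix
```

Because only improving changes are accepted, the summed difference value is monotonically non-increasing over the iteration steps, exactly as the evaluation rule above requires.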
  • Another problem with camera systems is the correct adjustment of the color balance, i.e. the correct weighting of the different signal channels relative to each other.
  • To this end, the coefficients of each RGB signal vector can each be multiplied by a signal-channel-dependent correction factor.
  • a correction vector is added to each RGB signal vector. This correction of the three signal channels of each RGB signal vector corresponds to a linear shift of the individual coefficients of the RGB signal vectors.
  • A particularly good color balance is achieved if the correction vector and the signal-channel-dependent correction factors are selected such that, for the two color fields with the reference gray values black and white, the corrected RGB signal vectors obtained by applying the correction vector and the correction factors correspond essentially exactly to the vectors with the specified target values for these two color fields.
  • the linear shift of the RGB signal vectors is selected such that corrected results are obtained for the two reference gray values black and white, which correspond to the contrast perception of the human eye. This linear shift is preferably applied to all RGB signal vectors, as a result of which brightness and contrast in the entire color spectrum are automatically corrected as well.
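A minimal sketch of this black/white balance: a per-channel gain (the correction factor) and offset (the correction vector) are chosen so that the measured black and white reference patches map exactly onto their target values. Names and signature are illustrative assumptions:

```python
def balance_from_gray_refs(black_meas, white_meas, black_tgt, white_tgt):
    """Per-channel linear correction v' = gain * v + offset, chosen so that
    the measured black and white reference patches map exactly onto their
    target values."""
    gains, offsets = [], []
    for bm, wm, bt, wt in zip(black_meas, white_meas, black_tgt, white_tgt):
        gain = (wt - bt) / (wm - bm)
        gains.append(gain)
        offsets.append(bt - gain * bm)
    return gains, offsets

def apply_balance(rgb, gains, offsets):
    """Linear shift and weighting of the three signal channels."""
    return tuple(gain * v + off for v, gain, off in zip(rgb, gains, offsets))
```

Applying the same gains and offsets to all RGB signal vectors then corrects brightness and contrast across the entire color spectrum, as described above.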
  • The RGB signal vector which has the coefficients with the highest values, and which thus represents the brightest point in the observation area, is then filtered out of all these RGB signal vectors.
  • Ideally, all pixels would have to deliver RGB signal vectors that are essentially identical to one another. The respective differences are based on color distortions or a design-related drop in intensity. To compensate for this, correction factors are now selected for each signal channel of each individual pixel which ensure that, when the homogeneously colored material is recorded, all RGB signal vectors correspond to the RGB signal vector at the brightest point in the observation area.
  • Color distortions in particular depend heavily on the lighting conditions in the observation area. In order to rule out sources of error due to a change in the lighting conditions, the lighting during the experimental determination of the pixel-specific, signal-channel-dependent correction factors should therefore correspond to the lighting during the later use of the camera system.
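The pixel-specific correction factors described above amount to a flat-field correction. The sketch below assumes the recording of the homogeneously colored material is given as a 2-D array of RGB tuples and takes the pixel with the largest coefficient sum as the brightest point:

```python
def flatfield_factors(flat_image):
    """flat_image: 2-D list of (R, G, B) tuples recorded from a homogeneously
    colored material. Returns per-pixel, per-channel correction factors that
    map every pixel onto the brightest pixel in the observation area."""
    # brightest point = pixel whose coefficients sum to the largest value
    brightest = max((px for row in flat_image for px in row), key=sum)
    return [[tuple(b / v if v else 1.0 for b, v in zip(brightest, px))
             for px in row]
            for row in flat_image]

def apply_flatfield(image, factors):
    """Multiply each pixel channel-wise by its pixel-specific factor."""
    return [[tuple(f * v for f, v in zip(fpx, px))
             for fpx, px in zip(frow, row)]
            for frow, row in zip(factors, image)]
```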
  • the corrected RGB signal vectors which are obtained by correcting the RGB signal vectors originally provided by the color camera, are used to control the separate signal channels of a color monitor.
  • The display of colors on a color monitor also poses the problem that the display characteristics of most color monitors do not correspond to the color perception of the human eye. This is based in particular on the fact that the brightness behavior of color monitors is generally not linear, i.e. the intensity of the light reproduced on the color monitor is a non-linear function of the electrical input signals present at the color monitor, here the RGB signal vectors.
  • To compensate for this, the coefficients of the corrected RGB signal vector taken as the basis can each be raised to the power of a factor γ.
  • the nonlinearity of the brightness behavior of most color monitors can be compensated for by this nonlinear conversion of the coefficients of the corrected RGB signal vectors.
  • For the factor γ, a value in the range between 0.3 and 0.5, in particular approximately 0.45, is to be selected.
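This γ compensation amounts to raising each normalized coefficient to the power γ. A sketch, assuming 8-bit-style values in the range 0…255:

```python
def gamma_correct(rgb, gamma=0.45, max_val=255.0):
    """Raise each normalized coefficient to the power gamma (~0.45) to
    pre-compensate the non-linear brightness behavior of a color monitor."""
    return tuple(max_val * (v / max_val) ** gamma for v in rgb)
```

Because γ < 1, mid-tone values are brightened while the endpoints 0 and 255 stay fixed, which counteracts the monitor's roughly γ ≈ 2.2 darkening.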
  • the processing of the stimuli in human color vision is simulated.
  • a signal vector is provided, the coefficients of which preferably represent three separate signal channels.
  • Each of the three signal channels has a characteristic spectral sensitivity.
  • the two receptive fields, which represent the second stage of color processing in human vision, are simulated by correspondingly linking the three separate signal channels.
  • the red / green field of human color perception represents the first counter-color channel in the technical model.
  • the output signal of the first counter-color channel is generated by linking the part of the signal vector in the first signal channel with the part of the signal vector in the second signal channel.
  • The link is made by means of a first calculation rule which consists of at least one arithmetic rule.
  • the blue / yellow field is generated in the technical model by linking the part of the signal vector in the third signal channel with a combination of the parts of the first and the second signal channel. In the technical model, the blue / yellow field corresponds to the second counter-color channel.
  • the output signal of the second counter-color channel is generated by the link described above.
  • The link is made by means of a second calculation rule, which consists of at least one arithmetic rule. In order to evaluate the signal vector of the examined pixel, the output signals of the two counter-color channels are classified in the next step. In this way it is decided whether the signal vector of the examined pixel, and thus ultimately also the color image, corresponds to a certain class, whereby a good/bad classification can be made.
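A minimal sketch of the two counter-color channels: the red/green channel as a weighted difference of the second signal channel from the first, and the blue/yellow channel as the third channel minus a weighted sum of the first two ("yellow"). The weights are illustrative assumptions, not values from the patent:

```python
def opponent_channels(rgb, w_rg=(1.0, 1.0), w_by=(0.5, 0.5, 1.0)):
    """First counter-color channel (red/green): weighted difference of the
    second signal channel from the first. Second counter-color channel
    (blue/yellow): weighted sum of the first two channels ('yellow')
    subtracted from the third. Weights are illustrative assumptions."""
    r, g, b = rgb
    rg = w_rg[0] * r - w_rg[1] * g
    by = w_by[2] * b - (w_by[0] * r + w_by[1] * g)
    return rg, by
```

For a neutral gray input both channels output zero, matching the intuition that opponent channels encode only chromatic contrast.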
  • The spectral range in which the signal channels of the method operate is of no essential importance for the principle of the method, as long as the signal channels have different spectral sensitivities. It is advantageous if the signal channels correspond to the three primary colors of the RGB model, namely red, green and blue, because this makes use of a widely used color model. Advantageously, the spectral sensitivity of each signal channel is adjusted to the spectral sensitivity of one of the cone types of the retina of the human eye.
  • the manner in which the two output signals of the counter-color channels are generated is of secondary importance for the principle of the invention.
  • One possibility is that an arithmetic rule of the first calculation rule comprises a weighted subtraction of the part of the signal vector in the second signal channel from the part of the signal vector in the first signal channel, and/or an arithmetic rule of the second calculation rule comprises a weighted subtraction of the weighted sum of the parts of the first and the second signal channel from the part of the third signal channel.
  • At least one signal in at least one counter-color channel is preferably subjected to a transformation rule after and / or before the linkage, in particular a non-linear transformation rule.
  • Such a transformation has the particular advantage that the digital character of electronically generated color images can be taken into account. Transformation rules also make it possible to transform a signal from the color space into a space in which the stimulation of the cones can be described.
  • the signals are preferably subjected to a transformation in both counter-color channels.
  • Since the receptive fields in human vision are characterized by a low-pass behavior, it makes sense if at least one signal in at least one counter-color channel is filtered using a low-pass filter.
  • the output signal of each counter-color channel is preferably filtered by means of a low-pass filter.
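The low-pass filtering of a counter-color channel's output can be sketched as a single-pole IIR filter; the smoothing constant alpha is an assumption for illustration:

```python
def low_pass(signal, alpha=0.2):
    """Single-pole IIR low-pass: y[n] = alpha * x[n] + (1 - alpha) * y[n-1].
    Mimics the low-pass behavior attributed to the receptive fields."""
    out, y = [], None
    for x in signal:
        y = x if y is None else alpha * x + (1 - alpha) * y
        out.append(y)
    return out
```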
  • the method preferably has a learning mode and a working mode.
• An evaluation device processing the signals of the image sensor can be switched between these two operating modes, i.e. the learning mode and the working mode.
• In the learning mode, at least one reference image, e.g. the recording of at least one individual printed sheet, is checked pixel by pixel, and the output signals of the two counter-color channels generated by the reference image are stored in a data memory as a second electrical signal forming a setpoint.
  • the output signals of each counter color channel are then stored pixel by pixel in the data memory.
  • the output signals of the corresponding pixel generated by a color image to be checked are then compared with the corresponding values stored in the data memory as the desired value, and a classification decision is then made.
• The values stored in the data memory are formed by several reference data records, so that an allowable tolerance window is defined for each value in the data memory, within which an output signal value of a counter-color channel generated during image verification may fluctuate.
  • the target value of the output signal of a counter-color channel can be determined, for example, by arithmetic averaging of the individual values, the individual values resulting from the reference data sets.
  • the tolerance window can be determined, for example, by the minimum and maximum values or by the standard deviation of the output signals of the counter-color channels of each pixel generated by the examined reference images.
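The pixel-by-pixel setpoint formation and both variants of the tolerance window named above (min/max and standard deviation) can be sketched as follows; the function names are illustrative:

```python
import numpy as np

def learn_setpoints(reference_stack):
    """Derive per-pixel setpoints and tolerance windows from several
    reference data records (axis 0 = index of the reference image).

    The setpoint is the arithmetic mean of the individual values; the
    window is returned both as min/max and as mean +/- one standard
    deviation, matching the two alternatives mentioned in the text.
    """
    setpoint = reference_stack.mean(axis=0)
    minmax = (reference_stack.min(axis=0), reference_stack.max(axis=0))
    sigma = reference_stack.std(axis=0)
    return setpoint, minmax, (setpoint - sigma, setpoint + sigma)

def within_tolerance(value, window):
    """True where a working-mode output value lies inside the window."""
    lo, hi = window
    return (value >= lo) & (value <= hi)
```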
• The method for checking the identification feature for its belonging to a specific class of identification features preferably proceeds in the following essential procedural steps: feature formation, fuzzification, inference, defuzzification and decision about a class affiliation.
• In feature formation, the first electrical signal provided by the image sensor is converted into a translation-invariant signal in a feature space by means of at least one calculation rule.
  • the aim of the feature formation is to determine such quantities by which typical signal properties of the color image are characterized.
  • the typical signal properties of the color image are represented by so-called features.
  • the characteristics can be represented by values in the characteristics space or by linguistic variables.
  • the affiliation of a feature value to a feature is described by at least one fuzzy affiliation function.
• This is a soft or fuzzy assignment: depending on its value, the characteristic value belongs to the characteristic to a degree within the standardized interval between 0 and 1.
• The concept of the membership function means that a characteristic value can no longer be assigned to a characteristic either completely or not at all, but rather can assume a fuzzy membership which lies between the Boolean truth values 1 and 0.
• The step just described is called fuzzification. In fuzzification, therefore, a crisp feature value is essentially converted into one or more fuzzy memberships.
• During inference, a superordinate membership function is generated by means of a calculation rule consisting of at least one rule, all membership functions being linked to one another. The result is a superordinate membership function for each window.
• In defuzzification, a numerical value, also called the sympathy value, is determined from the superordinate membership function formed during inference.
  • a comparison of the sympathy value with a predetermined threshold takes place, on the basis of which the affiliation of the window to a particular class is decided.
  • the threshold value forms a further setpoint contained in the second electrical signal.
• The type of the characteristic values in the characteristic space is of minor importance for the basic sequence of the procedure. In the case of time signals, for example, their mean value or variance can be determined as characteristic values. If the method for checking the identification feature for its belonging to a certain class of identification features is subject to the requirement that it should process the color images without errors regardless of the prevailing signal intensity, and that small but permissible fluctuations in the color image should not lead to disturbances, it makes sense to carry out the conversion of the first electrical signal from the two-dimensional spatial domain by means of a two-dimensional spectral transformation. Examples of suitable spectral transformations are the two-dimensional Fourier, Walsh, Hadamard or circular transformation. The two-dimensional spectral transformation yields translation-invariant feature values. The magnitude of the spectral coefficients obtained by a spectral transformation is preferably used as the feature value.
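The translation invariance of such feature values can be illustrated with the two-dimensional Fourier transform, one of the transforms named above: a cyclic shift of a window leaves the magnitudes of the spectral coefficients unchanged.

```python
import numpy as np

def spectral_features(window):
    """Translation-invariant feature values for one image window:
    the magnitudes of the 2-D Fourier spectral coefficients.
    """
    return np.abs(np.fft.fft2(window))

# Demonstration: a cyclically shifted window yields identical features.
win = np.random.default_rng(0).random((8, 8))
shifted = np.roll(win, shift=(2, 3), axis=(0, 1))
assert np.allclose(spectral_features(win), spectral_features(shifted))
```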
  • the membership functions are preferably unimodal potential functions.
  • the higher-level membership function is preferably a multimodal potential function.
• Since the membership function has positive and negative slopes, it is advantageous if the parameters of the positive and negative slopes can be determined separately. This guarantees a better adaptation of the parameters to the data records to be examined.
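A unimodal potential function with separately determinable positive and negative slopes might look as follows; the concrete parameterization is an assumption, since the text only specifies the qualitative shape:

```python
import numpy as np

def potential_membership(x, center, d_neg, d_pos, p=2.0):
    """Unimodal potential membership function with independently
    tunable left (negative-slope) and right (positive-slope) widths.
    Returns 1.0 at the center and decays toward 0 on both sides.
    """
    x = np.asarray(x, dtype=float)
    d = np.where(x < center, d_neg, d_pos)        # side-dependent width
    return 1.0 / (1.0 + np.abs((x - center) / d) ** p)
```

Choosing `d_neg != d_pos` gives the asymmetric slopes described above.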
• The method for checking a recognition feature of the printed matter for belonging to a certain class of recognition features is again subdivided into two different operating modes, namely a learning mode and a working mode. If the membership functions are parameterized, the parameters of the membership functions can be determined in the learning mode from measured data records. In the learning mode, the parameters of the membership functions are matched to so-called reference images, i.e. in the learning mode an association of the feature values, which result from the reference images, with the corresponding features is derived by means of the membership functions and their parameters.
  • the characteristic values that result from the subsequently measured data sets are weighted with the membership functions, the parameters of which were determined in the learning mode, whereby the characteristic values of the now measured data sets are associated with the corresponding characteristics.
  • the parameters of the membership functions are determined on the basis of measured reference data records.
  • the data records to be checked are weighted and evaluated with the membership functions defined in the learning mode.
  • At least one rule by means of which the membership functions are linked to one another is preferably a conjunctive rule in the sense of an IF ... THEN link.
  • the generation of the higher-level fuzzy membership function is preferably divided into the following sub-steps: premise evaluation, activation and aggregation.
• In premise evaluation, a membership value is determined for each IF part of a rule; during activation, a membership function is set for each IF ... THEN rule.
  • the superordinate membership function is generated during the aggregation by superimposing all membership functions generated during activation.
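The three sub-steps premise evaluation, activation and aggregation can be sketched as follows. The use of min/max operators for the conjunctive linkage and the superposition is a common choice and an assumption here, as the text does not fix the operators:

```python
import numpy as np

def aggregate(rules, x):
    """Sketch of inference for conjunctive IF...THEN rules.

    Each rule is a pair (premise_degrees, consequent_fn):
    premise_degrees is a list of membership values of the IF parts,
    consequent_fn maps the grid x to the THEN membership function.
    """
    activated = []
    for premise_degrees, consequent in rules:
        w = min(premise_degrees)                      # premise evaluation (AND)
        activated.append(np.minimum(w, consequent(x)))  # activation (clipping)
    return np.maximum.reduce(activated)               # aggregation (superposition)
```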
• The testing of the recognition feature for a specific geometric contour and/or for a relative arrangement to at least one further recognition feature of the material is based on the basic idea that, when evaluating a position-variant recognition feature whose optical properties, for example the reflectivity, do not permit sufficiently reliable identification, known information about this identification feature is included in the evaluation. It is assumed as a premise that the position-variant identification feature, for example a colored window thread, differs from the other material to be inspected, e.g. the printed image surrounding the identification feature, at least in partial areas of its optical properties, for example in the gray value, to such an extent that there is at least no complete agreement between the identification feature and the printed image.
  • additional information about the known geometric contour of the identification feature or the relative arrangement of several identification features present in the printed image is thus evaluated.
• This additional information is stored as mask setpoints in a mask reference for each material to be evaluated, the mask reference representing the geometric data in a suitable form.
• In addition, a background setpoint is stored in the data memory as a reference, which represents the optical properties of the print image in at least part of a surrounding area that surrounds the identification feature.
  • the background setpoint must differ at least slightly from the optical properties of the identification feature to be identified.
  • the current first electrical signal provided by the image sensor and the background setpoint are then used to form a difference value representing a difference image, at least for the expected range.
• In the difference image, essentially all features of the print image whose optical properties correspond to the background setpoint are faded out. Due to their deviation from the background reference value, only position-variant regions of the identification feature and also other elements, such as printing errors or edge deviations, are depicted in the difference image, the regions of the position-variant identification feature having particularly high amplitudes.
  • the difference values are compared with the mask setpoints of the mask reference and the current position of the identification feature is inferred from the result of the comparison.
• This method step is based on the consideration that the difference image is essentially determined by the depiction of the position-variant identification feature, so that the actual position of the position-variant identification feature can be inferred from extensive overlap between mask reference and difference image. If it is not possible to determine sufficient overlap between mask setpoints and difference values due to other error influences, this is harmless, since it e.g. only leads to an error display in the print image control and to the removal of the corresponding printed sheet.
• The areas of the printed image that result from the current position of the identification feature are preferably hidden during the subsequent qualitative assessment of the material, so that disturbances of the examination of the printed image by the position-variant arrangement of the identification feature are excluded.
• The recognition of the position-variant recognition feature can be improved when this method is carried out by storing a binarization threshold in the data memory. After the difference image has been formed from the current first electrical signal and the background setpoint, all image data whose values are below the binarization threshold can be filtered out from the difference image, i.e. only those pixels are preserved in the difference image that differ sufficiently significantly from the rest of the print image, so that the other usual deviations, for example printing errors or edge deviations, can be hidden from the difference image.
  • the procedure can be such that the mask reference is shifted until a maximum overlap between the mask reference and the difference image results.
  • Various mathematical evaluation methods can be used to evaluate the coverage between the mask reference and the difference image and to find the corresponding coverage maximum.
  • the overlap between the difference image and the mask reference should be calculated using suitable mathematical operations, if possible using electronic data processing methods.
• One way of evaluating the overlap between the mask reference and the difference image consists in calculating centers of gravity according to the optical distribution of the pixels in the difference image and comparing these centers of gravity with the centers of gravity of the mask reference. Maximum coverage occurs when the sum of the center-of-gravity differences between the mask reference and the difference image is minimized.
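The steps above (difference image, binarization, center-of-gravity comparison) can be sketched as follows; the names and the direct centroid-matching criterion are illustrative assumptions:

```python
import numpy as np

def locate_feature(image, background_setpoint, mask, threshold):
    """Sketch of the position search for a position-variant feature.

    Forms the difference image from the current image and the background
    setpoint, binarizes it with the stored threshold, and returns the
    (row, col) shift that aligns the center of gravity of the mask
    reference with that of the difference image.
    """
    diff = np.abs(image - background_setpoint)
    diff[diff < threshold] = 0.0                  # binarization threshold
    ys, xs = np.nonzero(diff)
    c_diff = np.array([ys.mean(), xs.mean()])     # centroid of difference image
    ys, xs = np.nonzero(mask)
    c_mask = np.array([ys.mean(), xs.mean()])     # centroid of mask reference
    return np.round(c_diff - c_mask).astype(int)  # shift minimizing centroid distance
```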
  • a prerequisite for carrying out the method for checking the identification feature for a specific geometric contour and / or for a relative arrangement to at least one other identification feature of the material is the storage of a suitable background setpoint in the data memory.
• The background setpoint can simply be specified as a process parameter, for example based on one or more empirical values.
  • the background setpoint is specifically determined in a learning mode depending on the respective print image of the material to be tested. Two alternatives are given below.
  • reference material is used in the learning mode that does not contain the position-variant identification feature.
  • printed sheets with banknotes or tokens can be used for which the window thread is not present.
• The background setpoint can be derived by evaluating this reference material without a distinguishing feature.
• The learning mode can also be carried out with reference material that contains the position-variant identification feature. If, when evaluating the printed image of the reference material, the position-variant identification features appear bright in comparison to the surrounding area, then a threshold value is selected as the background setpoint which corresponds to the values of the darkest pixels of the identification feature. When the material is subsequently checked, it is then assumed, based on the threshold value, that at least in the expected range all pixels that are darker than the setpoint value do not belong to the position-variant identification feature. If, on the other hand, the identification feature appears dark in comparison to the surrounding area, a threshold value is selected as the background setpoint whose value corresponds to the brightest pixels of the identification feature.
  • FIG. 1 shows a block diagram with functional units relevant to the method
  • FIG. 2 is a block diagram of the logic unit of the image processing system
• FIG. 6 shows a flowchart of the method for checking the identification feature for its belonging to a specific class of identification features;
• FIG. 7 shows a schematically represented difference image in a view from above;
  • FIG. 8 shows the difference image according to FIG. 7 after binarization has been carried out
  • FIG. 9 shows the mask reference for determining the position of the position-variant identification feature in the difference image according to FIG. 8;
  • FIG. 10 the overlap between the difference image according to FIG. 8 and the mask reference according to FIG. 9;
  • FIG. 11 shows a second mask reference in a schematically represented side view
• An image recording unit 01, preferably a color camera 01, which is e.g. fixed in or on a printing press or a machine processing a printed matter so that it can record color images of the material 19 to be assessed moving past the color camera 01 with its image sensor 02, preferably during the running printing process, is connected to an evaluation device 03, i.e. an image processing system 03. If necessary, the image data recorded by the color camera 01 and evaluated in the evaluation device 03 can be displayed on a color monitor 04, the color monitor 04 being able to be arranged in or on a control station belonging to the printing press.
• The color monitor 04 can have control elements for entering parameters and for setting and operating the image processing system 03 and be in active connection with it, e.g. by being designed as a so-called touchscreen with corresponding operating masks.
• The test methods carried out for the qualitative assessment of the printed material 19 are shown in connection with the evaluation device 03 in e.g. three parallel signal paths, in particular if the assessment is to be based on a test of several criteria to increase the reliability of the test, the test processes in the respective signal paths preferably taking place independently of one another in the same evaluation device 03.
• The tests preferably run at least approximately at the same time, i.e. the test processes start at least at the same time.
  • the test processes can begin after the evaluation device 03 having at least two operating modes has changed from its learning mode 48 (FIG. 5) to its working mode 49 (FIG. 5).
• Each signal path relates to one of the following: a functional unit 06 for checking at least the color image of the identification feature for a color deviation from the reference image, a functional unit 07 for checking the identification feature for its belonging to a specific class of identification features, and a functional unit 08 for checking the identification feature for a specific geometric contour or a relative arrangement to at least one further identification feature of the material 19. Each test includes a comparison, carried out at a comparison point 11; 12; 13 provided in the respective signal path, of the first signal 09 provided and suitably prepared by the image sensor 02 of the color camera 01 with a respectively suitably defined setpoint 16; 17; 18, the setpoints 16; 17; 18 being stored in a data memory 14 belonging to the evaluation device 03. The respective test results in the individual signal paths are reported back to the evaluation device 03 for storage there.
• The functional units 06; 07; 08 relevant for the method for the qualitative assessment of a printed material 19 with at least one identification feature can also be implemented in a machine processing the material 19, this machine preferably being arranged downstream of e.g. a printing press, preferably a sheet-fed printing press, in particular a sheet-fed rotary printing press, but it can also be arranged upstream.
• The material 19, i.e. a printed sheet 19 printed e.g. with multiple identification features, is printed in the sheet-fed printing machine at a speed of e.g. 18,000 or more sheets per hour and/or is then further processed at this speed in the machine processing the printed sheet 19.
• Although the test processes for assessing the quality in the printing press or in the machine processing the material 19 are computationally intensive and the speed of movement of the material 19 is high, a reliable assessment is achieved with the proposed method. Since the functional units 06; 07; 08 relevant for the method for the qualitative assessment of a printed material 19 with at least one identification feature are arranged in or on the printing press or the machine processing the material 19, the location of the provision of the reference signal and the location of the test are identical. The color image and its reference image can be recorded with the same functional units 06; 07; 08, in particular with the same color camera 01, at the same location and evaluated in the same evaluation device 03.
  • Fig. 2 shows a simplified block diagram of an example of the electronic logic unit of the image processing system 03.
• The logic unit, supplied with a voltage of e.g. only 1.8 V or 3.3 V, is designed as a single field-programmable logic circuit with several configurable logic blocks, the logic unit executing the entire image comparison in its logic blocks and assessing the quality of the printed matter.
• Such field-programmable logic circuits are also referred to as field programmable gate arrays, abbreviated FPGA according to the English-language name.
  • An FPGA preferably has a matrix structure made up of logic blocks.
• E.g. 20,000 or even 50,000 logic blocks can be provided, each with freely configurable gate systems.
• An FPGA can e.g. have a total of more than 2 million gate systems.
• The software required for the configuration of the logic blocks can be created using a hardware description language, e.g. VHDL (very high speed integrated circuit hardware description language).
• The FPGA has an internal structure width of preferably 0.25 μm or less, e.g. 0.18 μm or 0.15 μm, so that its structures are well below 1 μm.
• The FPGA is preferably reprogrammable several times, so that the FPGA is e.g. also able to self-configure its programming, which carries out the image comparison and assesses the quality of the printed matter.
• Due to the speed of the printing process in the printing press, the FPGA has to carry out the test processes required to assess the quality of the printed matter within less than 100 ms in order to arrive at a qualified assessment inline, i.e. during the running printing process.
• The test processes running in the FPGA are preferably designed in such a way that, with an image format of 2048 x 1536 pixels (i.e. more than 3 million pixels), the FPGA requires a maximum computing time of less than 80 ms for the entire image comparison, individual test processes being carried out much faster. Even if the printing press sequentially produces a large number of copies of a printed matter, the FPGA preferably assesses the quality of each individual copy produced in the current production process of the printing press.
• The FPGA's assessment of the quality of the copy produced in the current production process of the printing press includes checking the printed matter with regard to several criteria, the various tests preferably being carried out simultaneously in quasi-parallel test processes, each of the test processes assessing the printed matter with respect to at least one different criterion.
• The image recording unit 01 has an image sensor 02 with a large number of individual pixels (e.g. more than four million pixels); the image recording unit 01 transfers the image data recorded with its image sensor 02 preferably pixel by pixel as digital data, e.g. with a clock rate of 40 MHz or more, to the FPGA.
• For communication with the image acquisition unit 01 or with another control unit, the FPGA has suitable interfaces; e.g. a VME bus interface is provided as an interface to another control unit. An input and output for a system bus enables e.g. the exchange of e.g. 32-bit-wide data with the image acquisition unit 01. Via a connection interface formed on the FPGA, e.g. two FPGAs can be interconnected.
• The FPGA has an input for reading e.g. 32-bit-wide data from a data memory, e.g. an SDRAM memory, and an output via which e.g. 32-bit-wide data can be stored in this data memory.
  • the FPGA each has at least one SDRAM controller, one SRAM controller and one FIFO controller.
• The FIFO memory stores e.g. 32 lines with a length of 2048 pixels at an 8-bit quantization.
• An SRAM memory, which is connected to the FPGA and is controlled by the SRAM controller, serves to store parameters which are required to carry out at least one of the methods for assessing the quality of the printed matter produced by the printing press.
• An SDRAM memory connected to the FPGA and controlled by the SDRAM controller is used for local image storage and for storing geometric objects, e.g. location-based symbols for an error display.
• The FPGA is preferably operated in such a way that the result from at least one of the methods for assessing the quality of the printed matter produced by the printing press is present after a total of less than 10 μs, preferably after about 6 μs.
• The rapid processing of the process steps for assessing the quality of the printed matter produced by the printing press is achieved in that the process is divided into several process units and each process unit is programmed as a macro, i.e. as a program part consisting of a coherent instruction set, in a coherent group of logic blocks in the FPGA, the logic blocks combined into a group each being clocked by a clock signal stabilized in a phase-locked loop (PLL).
  • the method for checking the identification feature for its belonging to a specific class of identification features is divided into the method units explained in connection with FIG. 6, with each method unit being assigned at least one method step to be processed in the method.
• The logic blocks combined into a group are preferably connected to one another. Because the process for assessing the quality of the printed matter produced by the printing press is divided within the FPGA into at least two process units, each process unit is implemented as a macro in logic blocks combined into a group, and all logic blocks of each group are clocked with a clock signal stabilized in a phase-locked loop, the need to set up an otherwise very complex signal flow diagram for the timing of the clock signal clocking the FPGA is eliminated.
  • the clock signal is usually generated by an oscillator arranged in or on the FPGA.
• With the color camera 01, a color image of a color-printed material 19 arranged in the observation area 21 is recorded.
  • the color camera 01 has an image sensor 02, preferably designed as a CCD chip 02, which converts the image information acquired in the observation area 21 into electronic image data, which form a first electrical, preferably digital signal 09 provided by the color camera 01 or its image sensor 02.
  • a signal vector 22 is generated from each light-sensitive pixel of the CCD chip 02.
• Corresponding to the number of pixels of the CCD chip 02, the color camera 01 provides a corresponding number of signal vectors 22, each identified by a counting index, to the evaluation device 03 for further processing.
  • Each signal vector 22 preferably has three coefficients R, G and B.
  • the coefficients R, G and B correspond to the color values of the three signal channels red, green and blue, the vectorial first electrical signal 09 emitted by a pixel correlating with the recorded color of the printed material 19 at the corresponding position in the observation area 21.
• In a first correction module 23, each signal vector 22 is multiplied by signal-channel-dependent correction factors K1, K2 and K3, and a correction vector 24 with the fixed-value coefficients a1, a2 and a3 is added to the resulting result vector. This arithmetic operation generates first corrected signal vectors 26, which improve the color balance, the brightness and the contrast of the image data.
• This goal is achieved in that the signal-channel-dependent correction factors K1, K2 and K3 and the coefficients a1, a2 and a3 of the correction vector 24 are selected such that, when the reference gray values black and white are recorded, the signal vectors 22 generated by the color camera 01 are transformed in such a way that the corrected signal vectors 26 obtained correspond to target values that result as vectors from the conversion of the known CIELAB color values.
  • the first corrected signal vectors 26 are then fed to a second correction module 27.
• In the second correction module 27, the first corrected signal vectors 26 are multiplied by a correction matrix 28; the second corrected signal vectors 29 result from this multiplication.
• The coefficients K4 to K12 of the correction matrix 28 were previously determined in a suitable iteration process in such a way that the image information contained in the first corrected signal vectors 26 is approximated to the color perception of the human eye.
  • the second corrected signal vectors 29 are then forwarded to a third correction module 31.
• In the third correction module 31, signal-channel-dependent correction factors are stored in a data memory 14 for each pixel; these are multiplied by the coefficients R, G and B to adapt the intensity values depending on the position of the respective pixels.
  • This correction of the second corrected signal vectors 29 is preferably carried out for all pixels of the image sensor 02.
  • the third corrected signal vectors 32 are then forwarded to a fourth correction module 33.
• In the fourth correction module 33, the coefficients R; G; B of the third corrected signal vectors 32 are raised to the power γ, and the fourth corrected signal vectors 34 are calculated therefrom.
• The exponentiation by the factor γ takes into account the non-linear brightness transfer function of a color monitor 04, to which the fourth corrected signal vectors 34 are transferred for display.
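The exponentiation in the fourth correction module can be sketched as follows; the exponent value 1/2.2 is a typical display gamma and an assumption, not a value stated in the text:

```python
import numpy as np

def gamma_correct(signal_vectors, gamma=1.0 / 2.2):
    """Fourth correction step: raise each coefficient R; G; B to the
    power gamma to pre-compensate the non-linear brightness transfer
    function of the color monitor. Inputs are assumed normalized to
    the range [0, 1].
    """
    return np.clip(signal_vectors, 0.0, 1.0) ** gamma
```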
• The result of the correction of the signal vectors 22 in the correction modules 23, 27, 31 and 33 is that the color images displayed on the color monitor 04 are adapted to the color perception of the human eye in such a way that the visual impression when viewing the display on the color monitor 04 corresponds well to the perception of color that would arise if the printed material 19 were viewed directly.
  • the image signal is recorded by an image sensor 02 in separate signal channels R; G; B.
• The signal channels R; G; B are the three signal channels red R, green G and blue B.
  • Each of the signal channels R; G; B has an adjustable spectral sensitivity. This has the advantage that each signal channel R; G; B can be adapted in its spectral sensitivity to the spectral sensitivity of the respective cone of the retina of the human eye.
  • the spectral content of an image is analyzed pixel by pixel.
• For this purpose, the image sensor signals of the signal channels R; G; B are linked together.
• Each image sensor signal in the counter-color channels 38; 39 is subjected to a non-linear transformation 41. This takes into account the digital character of the electronically generated recordings.
• The generation of the output signals 43; 44 of the counter-color channels 38; 39 is carried out analogously to the generation of the signals of the receptive fields in the human retina. That is, a linkage of the signal channels R; G; B is performed by means of the calculation rules 36; 37 in accordance with the linkage of the cones of the human retina.
  • the image sensor signals of the red signal channel R and the green signal channel G are linked to one another by means of the first calculation rule 36.
  • the image sensor signal of the blue signal channel B is linked to the minimum 46 of the image sensor signals of the red signal channel R and the green signal channel G by means of the calculation rule 37.
• The receptive fields of the human retina are characterized by a low-pass behavior. Accordingly, in the present exemplary embodiment, the signals are subjected to a low-pass filtering 47, e.g. with a Gaussian low-pass filter.
• The learning mode 48 has the aim of generating, pixel by pixel, target values as reference data values, which are compared in the subsequent working mode 49 with the output signals 43; 44 of the counter-color channels 38; 39 of the corresponding pixels.
• The image contents of one reference image 52 or of a plurality of reference images 52 are analyzed in that the image contents of each pixel are recorded in three signal channels R; G; B, a subsequent perceptual adaptation of the image signals of each signal channel R; G; B is carried out, and the image sensor signals are then further processed according to the previously described counter-color method.
  • the output signals 43; 44 of the counter-color channels 38; 39 are then stored in a data memory 14.
• In order to take account of permissible fluctuations in the reference images 52, it is useful if several reference images 52 are taken into account in the learning mode 48. This makes it possible for the stored target values of each pixel to have a certain permissible fluctuation tolerance.
  • the fluctuation tolerance can be determined either by the minimum / maximum values or the standard deviation from the data obtained from the image contents of the reference images 52 of each pixel.
• In the working mode 49, a pixel-by-pixel comparison of the output values 43; 44 of the counter-color channels 38; 39 of an inspection image 53 with the target values from the data memory 14 then takes place.
  • the comparison can be carried out using a linear or non-linear classifier 54, in particular using threshold value classifiers, Euclidean - distance classifiers, Bayesian classifiers, fuzzy classifiers or artificial neural networks. Then a good / bad decision is made.
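Of the classifiers 54 listed above, the threshold value classifier is the simplest. A pixel-by-pixel good/bad decision against the learned tolerance band might look like this; a sketch under the tolerance-band assumption, not the patent's implementation:

```python
import numpy as np

def classify(inspection, lo, hi):
    """Working mode 49: pixel-by-pixel good/bad decision with a simple
    threshold classifier - a pixel is 'bad' when its value leaves the
    learned tolerance band [lo, hi]."""
    bad = (inspection < lo) | (inspection > hi)
    return not bad.any(), bad   # overall good/bad flag plus error map

lo = np.full((4, 4), 98.0)
hi = np.full((4, 4), 102.0)
good_img = np.full((4, 4), 100.0)
bad_img = good_img.copy()
bad_img[2, 2] = 150.0                 # one defective pixel
ok_good, _ = classify(good_img, lo, hi)
ok_bad, bad_map = classify(bad_img, lo, hi)
```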
  • FIG. 6 shows a flowchart for signal evaluation in the method for checking the identification feature for its belonging to a specific class of identification features.
  • A grid of M x N windows 56 is placed over the entire color image to be checked, where M, N > 1.
  • Each window 56 advantageously consists of m x n pixels with m; n > 1.
  • A square grid of N x N windows 56 is preferably selected, each window 56 consisting of n x n pixels.
  • The signal of each window 56 is checked separately in the test process.
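The windowing step above can be sketched as follows, assuming for simplicity that the image dimensions are divisible by M and N; `split_into_windows` is an illustrative name:

```python
import numpy as np

def split_into_windows(image, M, N):
    """Place a grid of M x N windows 56 over the image; each window then
    holds m x n pixels and is checked separately in the test process."""
    H, W = image.shape
    m, n = H // M, W // N           # pixels per window
    return [image[i * m:(i + 1) * m, j * n:(j + 1) * n]
            for i in range(M) for j in range(N)]

img = np.arange(64).reshape(8, 8)
windows = split_into_windows(img, 4, 4)   # 4 x 4 grid of 2 x 2 windows
```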
  • The two-dimensional color image in the spatial domain is transformed into a two-dimensional image in the frequency domain by one or more spectral transformations 58.
  • The spectrum obtained is called the frequency spectrum. Since the present exemplary embodiment uses a discrete transformation, the frequency spectrum is also discrete.
  • The frequency spectrum is formed by the spectral coefficients 59 - also called spectral values 59.
  • The magnitude 61 of the spectral values 59 is formed.
  • The magnitude of a spectral value 59 is called the spectral amplitude value 62.
  • The spectral amplitude values 62 form the feature values 62, i.e. they are identical to the feature values 62.
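The excerpt leaves open which spectral transformation 58 is used. Assuming a two-dimensional discrete Fourier transform for illustration, the chain from window to feature values 62 reduces to a transform followed by taking magnitudes:

```python
import numpy as np

def feature_values(window):
    """Spectral transformation 58 of one window: the 2-D DFT yields the
    discrete frequency spectrum, whose complex spectral values 59 are
    reduced to spectral amplitude values 62 by forming the magnitude 61.
    These amplitude values are the feature values 62."""
    spectrum = np.fft.fft2(window.astype(float))   # spectral values 59
    return np.abs(spectrum)                        # amplitude values 62

# a constant window has all its energy in the DC coefficient
win = np.ones((4, 4))
amps = feature_values(win)
```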
  • The feature selection 63 follows as a further method step.
  • The aim of the feature selection 63 is to select those features 64 which are characteristic of the image content of the color image to be checked.
  • Both characteristic spectral amplitude values 62, which define the feature 64 by their position in the frequency domain and by their amplitude, and linguistic variables such as “gray”, “black” or “white” are possible as features 64.
  • In the fuzzification 66, the association of each spectral amplitude value 62 with a feature 64 is determined by a soft or fuzzy membership function 67; i.e. a weighting takes place.
  • So that the membership functions 67 can be adapted in a learning mode to target values stored as reference data records, it makes sense to design the membership functions 67 as parameterized monomodal, that is to say one-dimensional, potential functions, in which the parameters of the positive and the negative slope can be adjusted separately to the target values to be examined.
  • The data records of the image content, from which the feature values 62 of the color images to be checked result, are weighted with the respective membership functions 67, whose parameters were determined in the previous learning mode. In other words, for each feature 64 there is a kind of target/actual comparison between a reference data record, expressed in the parameters of the membership functions 67, and the data record of the color image to be checked.
  • The membership functions 67 thus create a soft or fuzzy association between the respective feature value 62 and the feature 64.
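The exact form of the parameterized monomodal potential function 67 is not given in this excerpt; a common choice with separately adjustable negative and positive slopes is 1 / (1 + |(x - c)/d|^p) with a different width d on each side of the center. The function name and parameter values below are illustrative assumptions:

```python
import numpy as np

def membership(x, center, d_left, d_right, p=2.0):
    """Fuzzification 66: a parameterized monomodal potential function 67
    with separately adjustable negative (left) and positive (right)
    slopes; returns a soft membership of feature value x in [0, 1]."""
    d = np.where(x < center, d_left, d_right)
    return 1.0 / (1.0 + np.abs((x - center) / d) ** p)

# a feature value exactly at the learned center gets full membership;
# one to the right is weighted down by the positive-slope parameter
mu_hit = membership(np.array(10.0), center=10.0, d_left=2.0, d_right=5.0)
mu_off = membership(np.array(14.0), center=10.0, d_left=2.0, d_right=5.0)
```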
  • In the inference 68 there is essentially a conjunctive linking 69 - also called aggregation 69 - of all membership functions 67 of the features 64, as a result of which a superordinate membership function 71 is generated.
  • The next method step, the defuzzification 72, determines a specific membership value 73 or sympathy value 73 from the superordinate membership function 71.
  • This sympathy value 73 is compared with a previously set threshold value 76, whereby a classification statement can be made.
  • The threshold value 76 is set either manually or automatically.
  • The threshold value 76 is likewise set in the learning mode.
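The inference and defuzzification steps can be condensed into a toy sketch: the conjunctive aggregation 69 is assumed here to be the minimum operator (the classical fuzzy AND), and the superordinate membership function 71 is collapsed directly into a scalar sympathy value 73 that is compared with the threshold value 76. This simplifies the patent's pipeline considerably:

```python
import numpy as np

def sympathy(memberships):
    """Inference 68: conjunctive aggregation 69 of all membership values,
    sketched as the minimum (fuzzy AND), followed by defuzzification 72
    to a single sympathy value 73."""
    return float(np.min(memberships))

def decide(memberships, threshold=0.5):
    """Compare the sympathy value 73 with the threshold value 76 set in
    the learning mode to obtain the classification statement."""
    return sympathy(memberships) >= threshold

ok = decide([0.9, 0.8, 0.95], threshold=0.5)    # all features agree
nok = decide([0.9, 0.2, 0.95], threshold=0.5)   # one feature deviates
```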
  • The method for checking the identification feature for a specific geometric contour and/or for a relative arrangement with respect to at least one further identification feature of the material essentially takes place in the following steps.
  • A difference image 77 is formed from printed sheets printed with bank notes 19; only a section of the difference image 77 in the area of one bank note 19 is shown in FIG. 7. It can be seen from FIG. 7 that the normal printed image of the bank note 19 is hidden in the difference image 77 and that only those areas of the printed image that differ significantly from the background reference value are depicted as dark fields in the difference image.
  • A dashed, strip-shaped expectation region 78 marks the position at which, e.g., the window thread openings 79 are expected.
  • FIG. 9 shows a mask reference 82 in its geometric shape.
  • The data for the width 83 and the length 84 of the window thread openings 79 are stored in the mask reference 82.
  • The values for the distance 86 between the window thread openings 79 and for the number of window thread openings 79 per bank note 19 are also stored in the mask reference 82.
  • The mask reference 82 is shifted relative to the difference image 77 during the evaluation by data-processing operations until there is a maximum overlap between the mask reference 82 and the dark fields 79 in the difference image 77. Once this maximum overlap is reached, the current position of the window thread in the printed image can be concluded from the distances 87; 88, which result, e.g., from the current positions in the X and Y directions of the mask reference 82 relative to the edges of the bank note 19, so that the areas of the window thread openings 79 can be hidden during a subsequent check of the printed image.
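The search for the maximum overlap between the mask reference 82 and the dark fields in the difference image 77 is, in essence, an unnormalized cross-correlation. A brute-force sketch (the patent does not prescribe how the shifting is implemented, and the function name is illustrative):

```python
import numpy as np

def best_shift(difference_image, mask):
    """Shift the mask reference 82 over the difference image 77 and return
    the (dy, dx) offset with the maximum overlap between the mask and the
    dark fields - a plain exhaustive cross-correlation."""
    H, W = difference_image.shape
    h, w = mask.shape
    best, best_pos = -1.0, (0, 0)
    for dy in range(H - h + 1):
        for dx in range(W - w + 1):
            overlap = float((difference_image[dy:dy + h, dx:dx + w] * mask).sum())
            if overlap > best:
                best, best_pos = overlap, (dy, dx)
    return best_pos

diff = np.zeros((8, 8))
diff[3:5, 4:6] = 1.0        # one 2 x 2 dark field
mask = np.ones((2, 2))      # mask reference of the same shape
pos = best_shift(diff, mask)
```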
  • FIG. 11 shows a second mask reference 89, which represents dark fields corresponding to eight window thread openings 91 when checking a bank note 19 on a concavely curved contact surface.
  • FIG. 12 schematically shows a difference image 92 in which the window thread openings 91 have appeared as dark fields 93, e.g. as window threads 93.
  • The dark field 94 was caused by a printing error 94 and not by a window thread opening 91.
  • A window thread opening 91 in the middle did not appear in the difference image 92 owing to the insufficient color difference between the background and the window thread 93.
  • The mask reference 89 is projected onto a projection line 96, and the resulting light-dark distribution is compared with the light-dark distribution that results from projecting the difference image 92 onto a projection line 97.
  • This one-dimensional comparison of the light-dark distributions enables the position of the window thread 93 to be determined in one direction.
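The projection onto a line and the one-dimensional comparison can be sketched as column sums followed by a 1-D correlation; `project` and `thread_position` are illustrative names, not from the patent:

```python
import numpy as np

def project(image, axis=0):
    """Project a 2-D image onto a projection line (96/97) by summing
    along one axis; the result is the light-dark distribution used for
    the one-dimensional comparison."""
    return image.sum(axis=axis)

def thread_position(mask_profile, diff_profile):
    """Locate the window thread in one direction by sliding the mask
    profile over the difference profile (1-D correlation sketch)."""
    n, m = len(diff_profile), len(mask_profile)
    scores = [float(np.dot(diff_profile[s:s + m], mask_profile))
              for s in range(n - m + 1)]
    return int(np.argmax(scores))

diff = np.zeros((6, 10))
diff[:, 4:6] = 1.0               # a vertical dark stripe (window thread)
profile = project(diff)          # light-dark distribution along x
pos = thread_position(np.ones(2), profile)
```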

Abstract

The invention relates to a method for comparing an image with a reference image, using a system comprising at least one image-recording unit and an image-processing system, containing a logic unit, which evaluates the images present in the image-recording unit. An image recorded in the image-recording unit is compared with at least one reference image, the logic unit, formed from logic blocks that can be configured as a field-programmable circuit, controlling the comparison of the images in the logic blocks. The logic unit carrying out the comparison judges the quality of a print according to several criteria, the judgment being performed in parallel by checking mechanisms operated in parallel. Each checking mechanism judges the print according to at least one different criterion.
EP05740075A 2004-04-29 2005-04-08 Procede pour comparer une image avec au moins une image reference Withdrawn EP1741060A2 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102004021047A DE102004021047B3 (de) 2004-04-29 2004-04-29 Verfahren zum Vergleich eines Bildes mit mindestens einem Referenzbild
PCT/EP2005/051571 WO2005106792A2 (fr) 2004-04-29 2005-04-08 Procede pour comparer une image avec au moins une image reference

Publications (1)

Publication Number Publication Date
EP1741060A2 true EP1741060A2 (fr) 2007-01-10

Family

ID=34966184

Family Applications (1)

Application Number Title Priority Date Filing Date
EP05740075A Withdrawn EP1741060A2 (fr) 2004-04-29 2005-04-08 Procede pour comparer une image avec au moins une image reference

Country Status (3)

Country Link
EP (1) EP1741060A2 (fr)
DE (1) DE102004021047B3 (fr)
WO (1) WO2005106792A2 (fr)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2939548B1 (fr) * 2008-12-09 2010-12-10 Thales Sa Dispositif de detection d'anomalie dans une image numerique
DE102012215114B4 (de) * 2012-08-24 2015-03-19 Koenig & Bauer Aktiengesellschaft Verfahren zur Inspektion eines Druckerzeugnisses
DE102015203628A1 (de) 2014-03-31 2015-10-01 Heidelberger Druckmaschinen Ag Verfahren zur automatischen Prüfparameterwahl eines Bildinspektionssystems

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07117498B2 (ja) * 1991-12-11 1995-12-18 インターナショナル・ビジネス・マシーンズ・コーポレイション 検査システム
US5809171A (en) * 1996-01-05 1998-09-15 Mcdonnell Douglas Corporation Image processing method and apparatus for correlating a test image with a template
US5894565A (en) * 1996-05-20 1999-04-13 Atmel Corporation Field programmable gate array with distributed RAM and increased cell utilization
WO2001050236A1 (fr) * 2000-01-03 2001-07-12 Array Ab Publ. Dispositif et procede d'impression
DE10132589B4 (de) * 2001-07-05 2005-11-03 Koenig & Bauer Ag Verfahren zur qualitativen Beurteilung von Material
US7073158B2 (en) * 2002-05-17 2006-07-04 Pixel Velocity, Inc. Automated system for designing and developing field programmable gate arrays
DE10234085B4 (de) * 2002-07-26 2012-10-18 Koenig & Bauer Aktiengesellschaft Verfahren zur Analyse von Farbabweichungen von Bildern mit einem Bildsensor
DE10234086B4 (de) * 2002-07-26 2004-08-26 Koenig & Bauer Ag Verfahren zur Signalauswertung eines elektronischen Bildsensors bei der Mustererkennung von Bildinhalten eines Prüfkörpers
DE10314071B3 (de) * 2003-03-28 2004-09-30 Koenig & Bauer Ag Verfahren zur qualitativen Beurteilung eines Materials mit mindestens einem Erkennungsmerkmal

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See references of WO2005106792A2 *

Also Published As

Publication number Publication date
WO2005106792A3 (fr) 2006-01-26
DE102004021047B3 (de) 2005-10-06
WO2005106792A2 (fr) 2005-11-10

Similar Documents

Publication Publication Date Title
DE10314071B3 (de) Verfahren zur qualitativen Beurteilung eines Materials mit mindestens einem Erkennungsmerkmal
EP0012724B1 (fr) Procédé mécanisé d'estimation de la qualité d'impression d'un produit imprimé ainsi que dispositf pour sa mise en oeuvre
DE102011106052B4 (de) Schattenentfernung in einem durch eine fahrzeugbasierte Kamera erfassten Bild unter Verwendung eines nichtlinearen beleuchtungsinvarianten Kerns
DE102011106050A1 (de) Schattenentfernung in einem durch eine fahrzeugbasierte Kamera erfassten Bild zur Detektion eines freien Pfads
DE29521937U1 (de) Prüfsystem für bewegtes Material
DE102011106072A1 (de) Schattenentfernung in einem durch eine fahrzeugbasierte kamera erfassten bild unter verwendung einer optimierten ausgerichteten linearen achse
EP3289398B1 (fr) Procédé de génération d'une image de contraste à réflexion réduite et dispositifs associés
WO2012045472A2 (fr) Procédé pour contrôler un signe de sécurité optique d'un document de valeur
DE3878282T2 (de) Fotomechanischer apparat unter verwendung von fotoelektrischer abtastung.
EP0012723B1 (fr) Procédé d'examen mécanique de la qualité d'impression d'une épreuve et dispositif pour sa mise en action
DD274786A1 (de) Verfahren und anordnung zur ueberwachung der druckqualitaet mehrfarbiger rasterdruckvorlagen einer offset-druckmaschine
EP1741060A2 (fr) Procede pour comparer une image avec au moins une image reference
EP1291179A2 (fr) Agent d'essai et procédé pour contrôler l'impression offset et numerique
WO2018007619A1 (fr) Procédé et dispositif pour catégoriser une surface de rupture d'un élément
EP1024400B1 (fr) Masque et procédé pour modifier la distribution de luminance d'un tirage photographique dans une imprimante photographique ou digitale
DE19838806A1 (de) Verfahren und Vorrichtung zur Erfassung von Objektfarben
DE10234085A1 (de) Verfahren zur Analyse von Farbabweichungen von Bildern mit einem Bildsensor
DE202004020463U1 (de) Systeme zur Beurteilung einer Qualität einer von einer Druckmaschine produzierten Drucksache
DE10250705A1 (de) Verfahren und Vorrichtung zur Schattenkompensation in digitalen Bildern
DE4419395A1 (de) Verfahren und Einrichtung zur Analyse und Verarbeitung von Farbbildern
EP3316216B1 (fr) Procédé de vérification d'un objet
DE69127135T2 (de) Anordnung zur Verarbeitung von Bildsignalen
DE102019120938B4 (de) Druckinspektionsvorrichtung und Verfahren zur optischen Inspektion eines Druckbildes eines Druckobjekts
DE102021130505A1 (de) Verfahren zur Bestimmung einer Beleuchtungsfarbe, Bildverarbeitungsvorrichtung und Aufnahmesystem
DE102019207739B4 (de) Systeme und verfahren zur farbkorrektur für unkalibriertematerialien

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20061018

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LI LT LU MC NL PL PT RO SE SI SK TR

DAX Request for extension of the european patent (deleted)
17Q First examination report despatched

Effective date: 20070814

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE APPLICATION HAS BEEN WITHDRAWN

18W Application withdrawn

Effective date: 20110113