WO2011114447A1 - Paper discriminating device and method of discriminating paper - Google Patents
Paper discriminating device and method of discriminating paper
- Publication number
- WO2011114447A1 (PCT/JP2010/054491)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- edge
- image
- paper sheet
- dictionary
- pixel
- Prior art date
Classifications
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07D—HANDLING OF COINS OR VALUABLE PAPERS, e.g. TESTING, SORTING BY DENOMINATIONS, COUNTING, DISPENSING, CHANGING OR DEPOSITING
- G07D7/00—Testing specially adapted to determine the identity or genuineness of valuable papers or for segregating those which are unacceptable, e.g. banknotes that are alien to a currency
- G07D7/20—Testing patterns thereon
- G07D7/2016—Testing patterns thereon using feature extraction, e.g. segmentation, edge detection or Hough-transformation
- G—PHYSICS
- G07—CHECKING-DEVICES
- G07D—HANDLING OF COINS OR VALUABLE PAPERS, e.g. TESTING, SORTING BY DENOMINATIONS, COUNTING, DISPENSING, CHANGING OR DEPOSITING
- G07D7/00—Testing specially adapted to determine the identity or genuineness of valuable papers or for segregating those which are unacceptable, e.g. banknotes that are alien to a currency
- G07D7/181—Testing mechanical properties or condition, e.g. wear or tear
Definitions
- The present invention relates to a paper sheet discriminating apparatus and a paper sheet discriminating method for discriminating whether a paper sheet such as a banknote is a correct note or a damaged note, and in particular to a paper sheet discriminating apparatus and a paper sheet discriminating method capable of performing optimum damage discrimination and type discrimination with high accuracy even when there is diversity in the design and quality of the paper sheets.
- Conventionally, a banknote discriminating apparatus is known that uses an image sensor or the like to distinguish banknotes exhibiting fouling such as dirt, wrinkles, and folds from banknotes without such fouling.
- In the following, a banknote in which such fouling has occurred is referred to as a “damaged note”, and a banknote free of such fouling is referred to as a “correct note”.
- Such a bill discriminating apparatus generally performs discrimination by comparing and analyzing a captured image of a genuine note (hereinafter referred to as a “reference image”) acquired using an image sensor or the like against a captured image of the bill to be discriminated (hereinafter referred to as a “discrimination target image”).
- Here, an edge refers to a portion where the density of the image changes remarkably, and the density gradient direction at the edge is referred to as the edge direction.
- For example, Patent Document 1 discloses a technique in which an area where stains are conspicuous in the design is specified in advance as a stain detection target region based on the reference image, and damage discrimination is performed by analyzing the edge differences within that region.
- Here, an area where stains are conspicuous in the design refers to an area where edges are not concentrated, that is, a simple design area with little change in shading.
- Further, Patent Document 2 discloses a technique for comparing the reference image and the discrimination target image pixel by pixel with respect to the edge direction, and performing damage discrimination by analyzing the differences in edge direction detected by the comparison.
- JP 2006-302109 A (Patent Document 1)
- Japanese Patent No. 3207931 (Patent Document 2)
- The present invention has been made to solve the above-described problems of the prior art, and an object of the present invention is to provide a paper sheet discriminating apparatus and a paper sheet discriminating method capable of performing optimum damage discrimination and type discrimination with high accuracy even when there is diversity in the design and quality of paper sheets.
- To achieve this object, the present invention is a paper sheet discriminating apparatus that discriminates a paper sheet based on a captured image of the paper sheet, comprising: direction-specific dictionary image generation means that, when an edge position and an edge direction at the edge position are detected from a learning image, which is a captured image of a legitimate paper sheet, generates direction-specific dictionary images by allocating the edge position to the corresponding position in the direction-specific dictionary image corresponding to the detected edge direction, among a plurality of direction-specific dictionary images prepared for each predetermined edge direction range; and discriminating means that discriminates the damage and type of the paper sheet by comparing an input image, which is a captured image of the paper sheet to be discriminated, with the direction-specific dictionary images.
- Further, the present invention, in the above invention, further comprises: direction-specific input image generation means that, when an edge position and an edge direction at the edge position are detected from the input image, generates direction-specific input images by separating the edge positions into the edge direction ranges based on their edge directions; and remaining edge extraction means that extracts a remaining edge region by superimposing, over all edge direction ranges, the edge positions that exist only in the direction-specific input image when the direction-specific input image and the direction-specific dictionary image for the same edge direction range are compared; wherein the discriminating means discriminates whether the paper sheet to be discriminated is damaged based on the remaining edge region extracted by the remaining edge extraction means.
- Further, the present invention is characterized in that, in the above invention, when the edge direction at the detected edge position is within a predetermined range from the boundary between two adjacent edge direction ranges, the direction-specific dictionary image generation means allocates the edge position to the corresponding position in each of the direction-specific dictionary images corresponding to the two edge direction ranges.
- Further, the present invention, in the above invention, further comprises density detection means that detects the density of the remaining edge region extracted by the remaining edge extraction means, and is characterized in that the discriminating means discriminates whether the paper sheet to be discriminated is damaged based on the density detected by the density detection means.
- Further, the present invention is characterized in that, in the above invention, the direction-specific dictionary image generation means treats as a valid edge region any pixel to which the edge position has been allocated a predetermined number of times or more across a predetermined number of learning images.
- Further, the present invention is characterized in that, in the above invention, the direction-specific dictionary image generation means performs, for each direction-specific dictionary image, an expansion process that expands the edge position or the valid edge region to surrounding pixels.
- Further, the present invention is characterized in that, in the above invention, the direction-specific dictionary image generation means performs the expansion process giving priority to the edge direction range corresponding to the direction-specific dictionary image.
- Further, the present invention, in the above invention, further comprises: peripheral edge counting means that counts, for each pixel in the direction-specific dictionary image, the number of pixels within a predetermined neighborhood range to which edge positions have been allocated; and statistic calculation means that calculates a statistic over a plurality of learning images from the counts obtained for each pixel by the peripheral edge counting means; wherein the direction-specific dictionary image generation means generates the direction-specific dictionary image by setting the pixel value of each pixel in the direction-specific dictionary image based on the statistic calculated by the statistic calculation means.
- The present invention is also a paper sheet discrimination method for discriminating a paper sheet based on a captured image of the paper sheet, comprising: a direction-specific dictionary image generation step of, when an edge position and an edge direction at the edge position are detected from a learning image, which is a captured image of a legitimate paper sheet, generating direction-specific dictionary images by allocating the edge position to the corresponding position in the direction-specific dictionary image corresponding to the edge direction, among a plurality of direction-specific dictionary images prepared for each predetermined edge direction range; and a discrimination step of discriminating the damage and type of the paper sheet by comparing an input image, which is a captured image of the paper sheet to be discriminated, with the direction-specific dictionary images.
- According to the present invention, when an edge position and an edge direction at the edge position are detected from a learning image, which is a captured image of a valid paper sheet, direction-specific dictionary images are generated by allocating the detected edge position to the corresponding position in the direction-specific dictionary image corresponding to the detected edge direction, among a plurality of direction-specific dictionary images prepared for each predetermined edge direction range, and the input image, which is a captured image of the paper sheet to be discriminated, is compared with the direction-specific dictionary images to determine the damage and type of the paper sheet. This has the effect that optimum damage discrimination and type discrimination can be performed with high accuracy even when there is diversity in the design and quality of the paper sheets.
- Further, according to the present invention, the detected edge positions are separated into the edge direction ranges based on their edge directions, so the remaining edge region can be extracted more clearly for each of a plurality of edge directions. This has the effect that highly accurate, optimum damage discrimination and type discrimination can be performed even for paper sheets whose complicated design makes stains inconspicuous.
- Further, according to the present invention, when the edge direction at the detected edge position is within a predetermined range from the boundary between two adjacent edge direction ranges, the detected edge position is allocated to the corresponding position in each of the direction-specific dictionary images corresponding to the two edge direction ranges. This has the effect that a dictionary image resistant to rotational deviation can be generated, taking a predetermined rotational deviation absorption amount into account for the detected edge direction.
- Further, according to the present invention, the density of the extracted remaining edge region is detected, and whether the paper sheet to be discriminated is damaged is determined based on the detected density. This makes it possible to narrow the damage discrimination down to fouled portions having both width and height, and has the effect of preventing erroneous discrimination of, for example, wrinkles extending linearly across the sheet surface.
- Further, according to the present invention, since the edge positions or the valid edge regions are expanded to the surrounding pixels for each direction-specific dictionary image, a dictionary image resistant to positional deviation can be created without reducing the extraction capability for remaining edge regions. There is also the effect that the expansion amount can be made larger than when no direction-specific dictionary images are used.
- Further, according to the present invention, since the expansion process is performed giving priority to the edge direction range corresponding to the direction-specific dictionary image, there is the effect that a dictionary image expanded in the direction in which edge detection deviation occurs can be created.
- Further, according to the present invention, the number of pixels within a predetermined neighborhood range of each pixel in the direction-specific dictionary image to which edge positions have been allocated is counted, a statistic of the per-pixel counts is calculated over a plurality of learning images, and the direction-specific dictionary image is generated by setting the pixel value of each pixel based on that statistic. This has the effect that a dictionary image can be created that reflects the peripheral characteristics of each pixel (such as design diversity).
- FIG. 1 is a diagram showing an outline of a paper sheet discrimination method according to the present invention.
- FIG. 2 is a block diagram illustrating the configuration of the bill discriminating apparatus according to the first embodiment.
- FIG. 3 is a diagram for explaining edge direction detection processing performed by the edge direction detection unit.
- FIG. 4 is a diagram for explaining the valid edge determination process performed by the valid edge determination unit.
- FIG. 5 is a diagram for explaining expansion processing performed by the expansion processing unit.
- FIG. 6 is a diagram illustrating a modification of the expansion process.
- FIG. 7 is a diagram for explaining the remaining edge extraction processing performed by the remaining edge extraction unit.
- FIG. 8 is a diagram for explaining the remaining edge analysis process performed by the remaining edge analysis unit and the determination process performed by the determination unit.
- FIG. 9 is a flowchart illustrating a processing procedure executed by the bill discriminating apparatus according to the first embodiment.
- FIG. 10 is a flowchart illustrating a processing procedure of dictionary image creation processing executed by the banknote determination apparatus according to the first embodiment.
- FIG. 11 is a flowchart illustrating the procedure of the damage determination process executed by the banknote determination apparatus according to the first embodiment.
- FIG. 12 is a diagram illustrating an outline of dictionary image creation of the bill discriminating apparatus according to the second embodiment.
- FIG. 13 is a block diagram illustrating the configuration of the bill discriminating apparatus according to the second embodiment.
- FIG. 14 is a flowchart illustrating a processing procedure of dictionary image creation processing executed by the banknote discriminating apparatus according to the second embodiment.
- FIG. 15 is a flowchart illustrating the procedure of the damage determination process executed by the banknote determination apparatus according to the second embodiment.
- In the following description, the direction-specific dictionary images that the banknote discriminating apparatus creates for each direction by processing a learning image, which is a captured image of a valid paper sheet, are referred to as “dictionary images”.
- FIG. 1 is a diagram showing an outline of a paper sheet discrimination method according to the present invention.
- First, the paper sheet discriminating method according to the present invention is characterized in that edge points and edge directions are detected from genuine note images and a dictionary image is created for each edge direction.
- Here, an edge point is a pixel corresponding to an edge in the image.
- Then, in damage discrimination, the difference between the discrimination target image and the dictionary images is taken to extract the edge points that exist only in the discrimination target image (hereinafter referred to as “remaining edge points”), and the density of the remaining edge points within a region of predetermined size is calculated.
- The paper sheet discriminating method according to the present invention can be broadly divided into a stage of creating dictionary images using the genuine note images 1 as input data and a stage of performing damage discrimination using the discrimination target image 5 as input data.
- Specifically, the edge direction is divided into six directions in 60-degree increments of the full 360 degrees, and each of the six divided directions is assigned a unique designation from A to F.
- In the following, this division of direction is referred to as “quantization”, and the number of divisions as the “quantization number”.
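- As a concrete illustration of this quantization, the following is a minimal sketch, not taken from the patent itself; the angle origin, the label order, and the optional 180-degree folding described below are assumptions for illustration:

```python
# Quantize an edge direction angle (degrees, 0-360) into one of six
# 60-degree ranges designated A-F. Label order and 0-degree origin are
# illustrative assumptions.
LABELS = ["A", "B", "C", "D", "E", "F"]
QUANTIZATION_NUMBER = 6
RANGE_WIDTH = 360.0 / QUANTIZATION_NUMBER  # 60 degrees per direction

def quantize_direction(angle_deg: float) -> str:
    return LABELS[int((angle_deg % 360.0) // RANGE_WIDTH)]

# Optional folding of 180-degree symmetric directions onto the same plane
# (A/D -> A, B/E -> B, C/F -> C), halving the number of direction planes.
def fold_direction(label: str) -> str:
    return LABELS[LABELS.index(label) % 3]

print(quantize_direction(75.0))                   # -> B
print(fold_direction(quantize_direction(255.0)))  # 255 deg is E -> B
```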
- Then, for each quantized direction, a plane corresponding to it (hereinafter referred to as a “direction plane”) is prepared, and for each detected edge point of the genuine note image 1, the fact that an edge point was detected is stored at the corresponding coordinates on the direction plane corresponding to its edge direction.
- A dictionary image is then created with the coordinates whose stored count is equal to or greater than a predetermined number taken as effective edge points.
- That is, direction planes A to F corresponding to the directions A to F are prepared, each edge point is stored in the direction plane A to F corresponding to its edge direction, and a dictionary image is created for each direction. Details of the dictionary image creation will be described later with reference to FIGS. 4 and 5.
- Note that edge points of opposite directions may be stored in the same direction plane. This is because edge points whose edge directions are 180 degrees symmetrical are often detected in the vicinity of each other.
- edge points in the A and D directions can be stored in the direction plane A
- edge points in the B and E directions can be stored in the direction plane B
- edge points in the C and F directions can be stored in the direction plane C, respectively.
- In this case, the number of direction planes is 3 for a quantization number of 6.
- Further, the paper sheet discrimination method according to the present invention expands each pixel that is an edge point by a predetermined amount.
- (2) of FIG. 1 shows an example in which the pixel Px in the portion surrounded by the circle o is expanded to the expansion range D0 and registered in the dictionary image. Details of the predetermined expansion amount will be described later with reference to FIG. 5.
- In contrast, the conventional technique using the edge direction (for example, Patent Document 2) does not create a dictionary image for each edge direction, unlike the paper sheet discrimination method according to the present invention. The dictionary image is therefore a single plane of data for one paper sheet, and each position must be determined to have one edge point and one edge direction.
- In the paper sheet discrimination method according to the present invention, on the other hand, a dictionary image is created for each edge direction.
- Therefore, an edge point whose edge direction is near a quantization boundary can be handled as spanning two edge directions; that is, one coordinate of the dictionary image can have two edge directions. This makes it possible to perform damage discrimination that absorbs rotational deviation and the like occurring during printing or scanning of a paper sheet.
- In the following, the degree of deviation from a quantization boundary within which an edge point is handled as spanning two edge directions is referred to as the “rotational deviation absorption amount”.
- Further, since the pixel value and edge direction of one edge point are spread to surrounding pixels by the expansion process, damage discrimination that absorbs design diversity and various positional and rotational deviations can be performed. Moreover, when a pixel that was not originally an edge point is included, through such expansion, in the expansion ranges of different direction planes (that is, within the “predetermined amount” of each), that pixel can have an edge direction for each of those direction planes. This will be described later with reference to FIG. 5. Furthermore, since coordinates having a predetermined number or more of edge points across the plurality of genuine note images are determined to be effective edge points by the effective edge determination, a plurality of edge directions can be given to one coordinate of the dictionary image. Therefore, damage discrimination that absorbs the diversity of paper sheet designs and a variety of positional and rotational deviations can be performed.
- Then, in damage discrimination, edge points and edge directions are detected from the discrimination target image 5 in the same way as for the genuine note image 1, and the result is compared with each dictionary image (see (3) in FIG. 1). Damage discrimination is then performed using the density, within a region of predetermined size, of the remaining edge points extracted by the comparison (see (4) in FIG. 1). Details of this point will be described later with reference to FIG. 8.
- In this way, since the paper sheet discrimination method according to the present invention uses the density of the remaining edge points within a region of predetermined size, only stains having both width and height are discriminated, and erroneous discrimination of, for example, linear wrinkles can be prevented.
- As described above, in the paper sheet discrimination method according to the present invention, a dictionary image is created for each edge direction with the edge points expanded by a predetermined amount, and damage discrimination is performed using the density of the remaining edge points within a region of predetermined size. Therefore, even if there is diversity in the design and quality of the paper sheets, optimum damage discrimination can be performed with high accuracy.
- In the following, an embodiment of a banknote discriminating apparatus that creates dictionary images in which each edge point is expanded by a predetermined amount will be described as Example 1, and an embodiment of a banknote discriminating apparatus that creates dictionary images based on the number of peripheral edges of each pixel will be described as Example 2.
- In both examples, the case where the banknote discriminating apparatus captures an image of a banknote using an image line sensor (hereinafter referred to as a “line sensor”) will be described.
- FIG. 2 is a block diagram illustrating the configuration of the bill discriminating apparatus 10 according to the first embodiment.
- Note that only the components necessary for explaining the features of the bill discriminating apparatus 10 are shown, and the description of general components is omitted.
- the banknote discriminating apparatus 10 includes a line sensor unit 11, a control unit 12, and a storage unit 13.
- The control unit 12 further includes a setting unit 12a, an image data acquisition unit 12b, an edge direction detection unit 12c, an effective edge determination unit 12d, an expansion processing unit 12e, a remaining edge extraction unit 12f, a remaining edge analysis unit 12g, and a determination unit 12h.
- The storage unit 13 stores setting information 13a, a filter group 13b, direction-plane edge point amounts 13c, direction-specific dictionary images 13d, and discrimination reference information 13e.
- the line sensor unit 11 is a sensor that receives transmitted light or reflected light from a bill conveyed by a conveyance mechanism (not shown), and is configured by arranging a plurality of light receiving sensors in a straight line. The line sensor unit 11 also performs a process of outputting the acquired data to the image data acquisition unit 12b of the control unit 12.
- The setting unit 12a is a processing unit that extracts various parameters, such as the quantization number and the rotational deviation absorption amount, included in the setting information 13a of the storage unit 13 and performs the initial settings related to dictionary creation and damage discrimination.
- the setting unit 12a also performs processing for outputting the extracted various parameters to the edge direction detection unit 12c.
- The image data acquisition unit 12b is a processing unit that combines the outputs of the line sensor unit 11 for each banknote and generates image data of the entire banknote.
- In creating dictionary images, the image data acquisition unit 12b generates the genuine note images, and in damage discrimination, it generates the discrimination target image.
- The image data acquisition unit 12b also performs a process of outputting the generated image data to the edge direction detection unit 12c.
- the edge direction detection unit 12c is a processing unit that performs processing for smoothing the image data output from the image data acquisition unit 12b and detecting all edge points and directions in the smoothed image data.
- For this detection, the edge direction detection unit 12c uses various filters stored in advance in the filter group 13b of the storage unit 13. This will be described later with reference to FIG. 3.
- The edge direction detection unit 12c also performs a process of outputting the detected edge points and their directions to the effective edge determination unit 12d when creating dictionary images, and to the remaining edge extraction unit 12f when performing banknote damage discrimination.
- FIG. 3 is a diagram for explaining edge direction detection processing performed by the edge direction detection unit 12c.
- Edge direction detection processing using filters is shown here; when such a filter is applied to an image, the pixel on the image that the center pixel of the filter overlaps is hereinafter referred to as the “target pixel”.
- the edge direction detection unit 12c smoothes the genuine note image 1 or the discrimination target image 5 output by the image data acquisition unit 12b (see (1) in the figure).
- For this smoothing, for example, a smoothing filter f1 whose coefficients are all 1 over the 3 × 3 neighborhood centered on the target pixel can be used.
- the application of the smoothing filter f1 may be repeated a plurality of times (for example, three times) until an optimum noise removal result is obtained.
- the size of the smoothing filter f1 may be increased (for example, in the vicinity of 7 ⁇ 7) and applied only once. Even when the smoothing filter f1 is not applied, it is possible to detect edge points and edge directions.
- the edge direction detection unit 12c detects an edge point of the correct ticket image 1 or the discrimination target image 5 (see (2) in the figure).
- a Laplacian filter f2 can be used for the detection of the edge point. That is, the zero cross point detected by applying the Laplacian filter f2 becomes the edge point of the correct ticket image 1 or the discrimination target image 5.
- the edge direction detection unit 12c detects the edge direction of the correct ticket image 1 or the discrimination target image 5 by using, for example, the Prewitt filters f3 and f4 (see (3) in the figure).
- the white arrows in the five black pixels included in the rectangle r shown in the figure indicate the detected edge directions.
- the edge direction at each edge point is detected (see (4) in the figure).
- Although the Prewitt filters f3 and f4 are shown here, other filters such as a Sobel filter may be used, and various filters may be combined as appropriate. The sizes and coefficient values of the filters are not particularly limited.
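- As a concrete illustration of the flow of FIG. 3, the following is a minimal sketch assuming NumPy/SciPy; the zero-cross test, boundary handling, and the number of smoothing repetitions are illustrative choices consistent with the text above, not values fixed by the patent:

```python
import numpy as np
from scipy import ndimage

def detect_edge_points_and_directions(image: np.ndarray):
    """Smoothing, Laplacian zero-cross edge points, Prewitt edge directions."""
    # (1) Smooth with a 3x3 all-ones averaging filter, repeated three times.
    smooth = image.astype(float)
    box = np.ones((3, 3)) / 9.0
    for _ in range(3):
        smooth = ndimage.convolve(smooth, box, mode="nearest")

    # (2) Laplacian filter; edge points are taken at zero crossings,
    # approximated here as a sign change against any 4-neighbor.
    lap = ndimage.convolve(smooth,
                           np.array([[0., 1., 0.],
                                     [1., -4., 1.],
                                     [0., 1., 0.]]),
                           mode="nearest")
    sign = lap > 0
    edge = np.zeros_like(sign)
    edge[1:-1, 1:-1] = ((sign[1:-1, 1:-1] != sign[:-2, 1:-1]) |
                        (sign[1:-1, 1:-1] != sign[2:, 1:-1]) |
                        (sign[1:-1, 1:-1] != sign[1:-1, :-2]) |
                        (sign[1:-1, 1:-1] != sign[1:-1, 2:]))

    # (3) Prewitt filters f3/f4 give the density gradient; its angle is
    # the edge direction at each edge point.
    gx = ndimage.convolve(smooth, np.array([[-1., 0., 1.]] * 3), mode="nearest")
    gy = ndimage.convolve(smooth, np.array([[-1., -1., -1.],
                                            [0., 0., 0.],
                                            [1., 1., 1.]]), mode="nearest")
    angle = np.degrees(np.arctan2(gy, gx)) % 360.0
    return edge, angle
```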
- The effective edge determination unit 12d is a processing unit that determines whether each pixel of each direction plane is an effective edge point, and performs this determination using the direction-plane edge point amounts 13c stored for each pixel of each direction plane.
- the effective edge determination unit 12d also performs a process of outputting each direction plane storing the effective edge points to the expansion processing unit 12e.
- FIG. 4 is a diagram for explaining the effective edge determination process performed by the effective edge determination unit 12d.
- (A) in the figure shows the edge point amount counting process at an arbitrary edge point P1 of a genuine note image, (B) shows the edge point amount counting process at an arbitrary edge point P2, and (C) shows the effective edge determination process at the edge points P1 and P2.
- When it is detected that the edge direction of the edge point P1 falls within the A direction, the effective edge determination unit 12d counts up the edge point amount of the pixel P1 at the same position in the direction plane A.
- When it is detected that the edge direction of the edge point P2 is within the rotational deviation absorption amount from the boundary between the A direction and the F direction, the effective edge determination unit 12d counts up the edge point amounts of both the pixel P2 at the same position in the direction plane A and the pixel P2 at the same position in the direction plane F.
- That is, when the edge direction of an edge point is near a quantization boundary, the effective edge determination unit 12d counts up the edge point amounts of the pixels at the same position in the two adjacent direction planes.
- The effective edge determination unit 12d performs this edge point amount counting for every edge point of all the genuine note images acquired for dictionary creation, and stores, in each direction plane, the pixels whose edge point amount is equal to or greater than a predetermined determination threshold as effective edge points of that direction plane.
- For example, in (C) of the figure, the pixel P1 in the direction plane A has obtained an edge point amount of “200”, which is equal to or greater than the determination threshold, and is therefore stored as an effective edge point of the direction plane A.
- Likewise, the pixel P2 in the direction plane A has obtained an edge point amount of “180”, which is equal to or greater than the determination threshold, and is also stored as an effective edge point of the direction plane A.
- In contrast, the pixel P2 in the direction plane F has obtained only an edge point amount of “3”, which does not satisfy the determination threshold, and is therefore not stored as an effective edge point of the direction plane F.
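- A minimal sketch of this counting and thresholding follows; the plane layout, the boundary test, and the threshold and absorption values are illustrative assumptions, not values from the patent:

```python
import numpy as np

Q = 6                  # quantization number
RANGE_WIDTH = 360.0 / Q
ABSORB = 5.0           # assumed rotational deviation absorption amount (deg)
THRESHOLD = 100        # assumed determination threshold on edge point amounts

def count_edge_points(planes: np.ndarray, edge: np.ndarray, angle: np.ndarray):
    """Accumulate one genuine note image into per-direction edge point
    amounts; planes has shape (Q, H, W)."""
    for y, x in zip(*np.nonzero(edge)):
        a = angle[y, x] % 360.0
        k = int(a // RANGE_WIDTH)
        planes[k, y, x] += 1
        # Near a quantization boundary, count the adjacent range as well,
        # so the dictionary absorbs small rotational deviations.
        offset = a - k * RANGE_WIDTH
        if offset < ABSORB:
            planes[(k - 1) % Q, y, x] += 1
        elif RANGE_WIDTH - offset < ABSORB:
            planes[(k + 1) % Q, y, x] += 1

def effective_edges(planes: np.ndarray) -> np.ndarray:
    """Pixels counted at least THRESHOLD times become effective edge points."""
    return planes >= THRESHOLD

# Usage: accumulate all N genuine note images, then threshold.
#   planes = np.zeros((Q, H, W), int)
#   for edge, angle in learning_images:
#       count_edge_points(planes, edge, angle)
#   effective = effective_edges(planes)
```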
- the expansion processing unit 12e is a processing unit that performs a process of expanding all effective edge points on each direction plane by a predetermined amount.
- the expansion processing unit 12e also performs a process of storing each direction plane after performing the expansion processing in the storage unit 13 as a dictionary image for each direction.
- FIG. 5 is a diagram for explaining expansion processing performed by the expansion processing unit 12e.
- (A) in the figure shows a basic example of the expansion process, (B) shows the procedure from the expansion process to the creation of a dictionary image, and (C) shows an explanation of pixels at the same position included in the expansion ranges of a plurality of direction planes.
- In the following, the pixel whose “m” is “i” and whose “n” is “j” is assumed to be the target pixel. When indicating the position of a pixel, the pixel is referred to as “P” and written as “P(i, j)”.
- the expansion processing refers to processing for converting the pixel value of a neighboring pixel adjacent to the target pixel into the pixel value of the target pixel.
- a filter can be used for the conversion of the pixel value.
- For example, when the target pixel is expanded to its eight adjacent neighbors (hereinafter referred to as “1-pixel expansion”), the pixel values of P(i-1, j-1), P(i-1, j), P(i-1, j+1), P(i, j-1), P(i, j+1), P(i+1, j-1), P(i+1, j), and P(i+1, j+1) are each converted into the pixel value of the target pixel P(i, j) (see the black pixels in (A) of FIG. 5).
- This expansion process is mainly intended to absorb misalignment and rotational displacement due to expansion of the pixel range (see FIG. 1).
- Here, 1-pixel expansion has been described as an example, but such 1-pixel expansion can be repeated by specifying a predetermined expansion amount. For example, when 3 pixels are designated as the expansion amount, the 1-pixel expansion is repeated three times. When such a 3-pixel expansion is performed, the pixel value of the target pixel finally spreads to every pixel in the 7 × 7 neighborhood centered on the target pixel.
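- A minimal sketch of this expansion, assuming SciPy's binary dilation as the implementation vehicle (the text above only specifies repeating the 1-pixel expansion):

```python
import numpy as np
from scipy import ndimage

def expand(plane: np.ndarray, amount: int) -> np.ndarray:
    """Repeat the 1-pixel expansion `amount` times: each effective edge
    point spreads to its 8 neighbors per iteration, so a single point
    grows to a (2*amount+1)-square block (7x7 for amount=3)."""
    eight_neighbors = np.ones((3, 3), bool)
    return ndimage.binary_dilation(plane, structure=eight_neighbors,
                                   iterations=amount)

# Usage on a toy direction plane: one effective edge point, 3-pixel expansion.
plane = np.zeros((9, 9), bool)
plane[4, 4] = True
print(expand(plane, 3).sum())  # -> 49, i.e. the full 7x7 neighborhood
```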
- the expansion processing unit 12e performs the above-described expansion processing on all effective edge points on each direction plane (see (B-1) in FIG. 5). Then, after all the effective edge points are expanded, each direction plane is stored in the storage unit 13 as a dictionary image for each direction (see (B-2) in FIG. 5).
- the pixels at the same position included in the expansion range of a plurality of direction planes will be described.
- the pixel P3 at a predetermined position is included in both the arbitrary expansion range D1 of the directional plane A and the arbitrary expansion range D2 of the directional plane F that have undergone the expansion process.
- Here, the pixel P3, when viewed in a single genuine note image rather than by direction, has two edge directions spanning the A and F directions (the range of the angle θ shown in the figure). Therefore, whether the pixel P3 of the discrimination target image has an edge in the A direction or the F direction, it can be determined that such an edge direction exists in the genuine note image.
- FIG. 6 is a diagram illustrating a modification of the expansion process.
- Here too, the pixel whose “m” is “i” and whose “n” is “j” (see FIG. 5) is described as the target pixel, and pixel positions are written as in the description of FIG. 5 above.
- In this modification, the expansion process can be performed only on the neighboring pixels corresponding to the A or D direction. Specifically, the expansion process may be performed on the pixels P(i-2, j), P(i-2, j+1), P(i-2, j+2), P(i-1, j), P(i-1, j+1), P(i-1, j+2), P(i, j-1), P(i, j+1), P(i+1, j-2), P(i+1, j-1), P(i+1, j), P(i+2, j-2), P(i+2, j-1), and P(i+2, j).
- The remaining edge extraction unit 12f is a processing unit that, in banknote damage discrimination, performs a process of generating a difference image by taking the difference between the discrimination target image and the dictionary images and extracting the remaining edge points that exist only in the discrimination target image.
- the remaining edge extracting unit 12f also performs a process of outputting the generated difference image to the remaining edge analyzing unit 12g.
- FIG. 7 is a diagram for explaining the remaining edge extraction processing performed by the remaining edge extraction unit 12f.
- As shown in the figure, the remaining edge extraction unit 12f stores each edge point of the discrimination target image 5 detected by the edge direction detection unit 12c in the direction plane corresponding to its edge direction (see (1) in the figure). Note that the edge points of the discrimination target image 5 include edge points other than the effective edge points constituting the dictionary images.
- the remaining edge extraction unit 12f extracts the direction-specific dictionary image of the banknote as the discrimination target from the direction-specific dictionary image 13d stored in the storage unit 13 (see (2) in the figure).
- Then, the remaining edge extraction unit 12f takes the differences obtained by subtracting each dictionary image from the corresponding direction plane in which the edge points of the discrimination target image 5 are stored (see (3) in the figure).
- the remaining edge extraction unit 12f generates a difference image 5 ′ by combining the differences (see (4) in the figure).
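- A minimal sketch of this per-plane difference and synthesis (boolean direction planes are an illustrative representation of the stored edge points):

```python
import numpy as np

def extract_remaining_edges(target_planes: np.ndarray,
                            dictionary_planes: np.ndarray) -> np.ndarray:
    """target_planes, dictionary_planes: boolean arrays of shape (Q, H, W).
    Keep, per direction plane, only the edge points of the discrimination
    target image that the dictionary does not contain, then synthesize all
    planes into one difference image of remaining edge points."""
    per_plane_difference = target_planes & ~dictionary_planes
    return per_plane_difference.any(axis=0)
```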
- The remaining edge analysis unit 12g is a processing unit that removes isolated points from the remaining edge points extracted by the remaining edge extraction unit 12f and generates a density image in which the density of the remaining edge points within a region of predetermined size is detected.
- For this analysis, the remaining edge analysis unit 12g uses various filters stored in advance in the filter group 13b of the storage unit 13. This will be described later with reference to FIG. 8.
- the remaining edge analysis unit 12g also performs a process of outputting the generated density image to the determination unit 12h.
- The determination unit 12h is a processing unit that finally determines whether the banknote to be discriminated is a correct note based on the density image binarized with a predetermined threshold.
- At this time, the determination unit 12h refers to the discrimination reference information 13e in the storage unit 13.
- FIG. 8 is a diagram for explaining the remaining edge analysis process performed by the remaining edge analysis unit 12g and the determination process performed by the determination unit 12h.
- First, the remaining edge analysis unit 12g removes, as isolated points, the remaining edge points that are isolated from their surroundings among the remaining edge points of the difference image 5′ output by the remaining edge extraction unit 12f (see (1) in the figure). For example, the figure shows a case where, among the remaining edge points of the difference image 5′, the remaining edge points surrounded by the broken line R2 are removed as isolated points, and the remaining edge points surrounded by the broken line R1 remain as the final remaining edge points.
- an isolated point removal filter f5 can be used to remove such isolated points (see (1) in the figure).
- the isolated point removal filter f5 converts the target pixel into white when all the eight neighboring pixels of the target pixel are white.
- the remaining edge analysis unit 12g applies density detection filters f6 and f7 to the difference image 5 ′ (see (2) in the figure) to detect the density of the remaining edge portion as a pixel value.
- the density detection filters f6 and f7 are filters of a predetermined size composed of L1 ⁇ L2 filter elements.
- the predetermined size is the minimum size of the contamination that is desired to be detected in determining whether the damage is correct or not.
- all the filter coefficients can be set to 1, but when considering the density of the remaining edge points, the coefficients near the center of the filter may be increased.
- a horizontally long density detection filter f6 is used to detect the density of the remaining edge portion that is horizontally long
- a vertically long density detection filter f7 is used to detect the density of the remaining edge portion that is vertically long.
- (2-a) shown in the figure is a case where the horizontally long density detection filter f6 is applied to the difference image 5 ′.
- the density of the remaining edge portion that is long in the horizontal direction is expressed as each pixel value of the portion surrounded by the broken line Ry.
- the remaining edge analysis unit 12g outputs a density image 5 ′′ obtained by synthesizing the application results of the density detection filters f6 and f7 to the determination unit 12h.
- the determination unit 12h binarizes the density image 5 ′′ with a predetermined threshold (see (3) in the figure).
- the predetermined threshold is a pixel value based on the predetermined size of the density detection filters f6 and f7 and the predetermined reference density.
- Here, the predetermined reference density is the ratio, to the total of the filter coefficients, of the number of remaining edge points predicted to be detected within the predetermined size in the case of a damaged note.
- Then, the determination unit 12h discriminates whether the banknote to be discriminated is a correct note or a damaged note based on the binarized density image.
- Note that the density detection filters f6 and f7 are rectangular filters here; however, when performing damage discrimination with emphasis on the density rather than on the shape and size of the stain, a circular Gaussian filter may be used.
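- A minimal sketch of the analysis and determination flow of FIG. 8 follows; the filter sizes, reference density, and threshold derivation are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def analyze_and_judge(diff: np.ndarray, l1: int = 5, l2: int = 15,
                      reference_density: float = 0.3) -> str:
    """diff: boolean difference image of remaining edge points."""
    # (1) Isolated point removal: a remaining edge point whose 8 neighbors
    # are all background is cleared (cf. the isolated point removal filter f5).
    ring = np.array([[1, 1, 1], [1, 0, 1], [1, 1, 1]])
    neighbor_count = ndimage.convolve(diff.astype(int), ring, mode="constant")
    cleaned = diff & (neighbor_count > 0)

    # (2) Density detection with a horizontally long (f6) and a vertically
    # long (f7) box filter; take the stronger response per pixel.
    f6 = np.ones((l1, l2))  # horizontally long
    f7 = np.ones((l2, l1))  # vertically long
    dens = np.maximum(
        ndimage.convolve(cleaned.astype(float), f6, mode="constant"),
        ndimage.convolve(cleaned.astype(float), f7, mode="constant"))

    # (3) Binarize with a threshold derived from the filter size and the
    # predetermined reference density; any pixel at or above it -> damaged.
    threshold = reference_density * f6.sum()
    return "damaged note" if (dens >= threshold).any() else "correct note"
```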
- the storage unit 13 is a storage unit configured by a storage device such as a hard disk drive or a non-volatile memory.
- The storage unit 13 stores the setting information 13a, the filter group 13b, the direction-plane edge point amounts 13c, the direction-specific dictionary images 13d, and the discrimination reference information 13e.
- the setting information 13a is information relating to the initial setting such as the quantization number and the rotational deviation absorption amount.
- The filter group 13b defines in advance the various filters used in the edge direction detection process (see FIG. 3) and the remaining edge analysis process (see FIG. 8), and is referred to by the edge direction detection unit 12c and the remaining edge analysis unit 12g.
- The direction-plane edge point amounts 13c are the edge point amounts in each direction plane corresponding to the edge points of the N genuine note images, and are registered and updated by the effective edge determination unit 12d.
- The direction-specific dictionary images 13d are a collection of dictionary images by direction, registered by the expansion processing unit 12e for each quantized direction, and referred to by the remaining edge extraction unit 12f for comparison with the discrimination target image.
- the discrimination reference information 13e is information relating to a predetermined threshold value in the final damage discrimination, and is referred to by the discrimination unit 12h.
- The setting information 13a, the filter group 13b, and the discrimination reference information 13e are definition information defined in advance, but they can be changed as appropriate even during operation of the bill discriminating apparatus 10.
- FIG. 9 is a flowchart illustrating a processing procedure executed by the banknote discriminating apparatus 10 according to the first embodiment.
- First, the setting unit 12a performs initial settings such as the quantization number and the rotational deviation absorption amount (step S101). Then, for dictionary image creation, the image data acquisition unit 12b acquires the genuine note images of N banknotes (step S102).
- Subsequently, the banknote discriminating apparatus 10 performs the dictionary image creation process for creating a dictionary image for each edge direction using the N genuine note images as input data (step S103). The processing procedure of the dictionary image creation process will be described later with reference to FIG. 10.
- the image data acquisition unit 12b acquires an image of the banknote that is the target of determination (step S104).
- Subsequently, the banknote discriminating apparatus 10 performs the damage discrimination process using the discrimination target image as input data (step S105). The processing procedure of the damage discrimination process will be described later with reference to FIG. 11.
- Then, the banknote discriminating apparatus 10 determines whether there is no next banknote to be discriminated (step S106). When there is no next banknote to be discriminated (step S106, Yes), the process ends. On the other hand, when the determination condition of step S106 is not satisfied (step S106, No), the banknote discriminating apparatus 10 repeats the processes from step S104 onward.
- Note that the dictionary image creation stage (the part surrounded by the broken line F1 in the figure) and the damage discrimination stage (the part surrounded by the broken line F2 in the figure) may be performed separately.
- FIG. 10 is a flowchart illustrating a processing procedure of dictionary image creation processing executed by the banknote determination apparatus 10 according to the first embodiment.
- The dictionary image creation process here corresponds to the process of creating the direction-specific dictionary images from the N genuine note images.
- First, the edge direction detection unit 12c smoothes the N genuine note images (step S201) and detects each edge point and its direction for the N genuine note images (step S202).
- Then, the effective edge determination unit 12d counts up the edge point amount in each corresponding direction plane (the direction-plane edge point amounts 13c of the storage unit 13) for each edge point of the N genuine note images (step S203). The effective edge determination unit 12d then stores, in each direction plane, the pixels whose edge point amount is equal to or greater than a predetermined determination threshold as effective edge points (step S204).
- Subsequently, the expansion processing unit 12e expands each effective edge point stored in each direction plane by a predetermined expansion amount (step S205). The expansion processing unit 12e then stores each expanded direction plane in the storage unit 13 as the direction-specific dictionary images 13d, and the process ends.
- FIG. 11 is a flowchart illustrating the processing procedure of the damage determination process executed by the banknote determination apparatus 10 according to the first embodiment.
- The damage discrimination process here corresponds to the process of comparing and analyzing the discrimination target image against the dictionary images and determining whether the banknote to be discriminated is a correct note.
- the edge direction detection unit 12c smoothes the discrimination target image (step S301), and detects each edge point of the discrimination target image and its direction (step S302).
- Then, the remaining edge extraction unit 12f stores each edge point of the discrimination target image in the corresponding direction plane (step S303). The remaining edge extraction unit 12f then takes the differences between each direction plane and the corresponding dictionary image of the direction-specific dictionary images 13d (step S304), and synthesizes the differences to extract the remaining edge points (step S305).
- the remaining edge analysis unit 12g removes isolated points from the remaining edge points (step S306) and then analyzes the density of the remaining edge points (step S307).
- the determination unit 12h determines whether the density of the remaining edge points analyzed by the remaining edge analysis unit 12g is less than a predetermined threshold (step S308).
- the discrimination criterion information 13e is referred to.
- If the density is less than the predetermined threshold (step S308, Yes), the determination unit 12h determines that the banknote to be discriminated is a correct note (step S309), and ends the process.
- On the other hand, if the density is equal to or greater than the predetermined threshold (step S308, No), the determination unit 12h determines that the banknote to be discriminated is a damaged note (step S310), and ends the process.
- As described above, in Example 1, the setting unit performs initial settings such as quantization; the image data acquisition unit generates a plurality of genuine note image data in dictionary creation and the discrimination target image data in damage discrimination; the edge direction detection unit detects the edge points and directions of the generated image data; the effective edge determination unit determines the effective edge points that are the components of the dictionary images; the expansion processing unit expands the effective edge points by a predetermined amount; the remaining edge extraction unit extracts the remaining edge points from the discrimination target image; the remaining edge analysis unit detects the density of the remaining edge points; and the determination unit discriminates whether the banknote is a correct note based on the detected density of the remaining edge points. Therefore, even if there is diversity in the design and quality of the paper sheets, the banknote discriminating apparatus can perform optimum damage discrimination with high accuracy.
- In Example 1, the case was described in which the banknote discriminating apparatus creates dictionary images in which each edge point is expanded by a predetermined amount; however, the banknote discriminating apparatus may instead create dictionary images based on the number of peripheral edges of each pixel. In the following, an embodiment of a banknote discriminating apparatus that creates dictionary images based on the number of peripheral edges of each pixel will be described as Example 2.
- FIG. 12 is a diagram illustrating an outline of dictionary image creation of the bill discriminating apparatus according to the second embodiment.
- The banknote discriminating apparatus according to the second embodiment has, as its main characteristic, that it counts the number of peripheral edges for each pixel of each direction plane and creates a dictionary image for each direction based on statistics of the peripheral edge counts.
- Here, a peripheral edge refers to an edge point included within a predetermined neighborhood range of a given pixel.
- Specifically, the banknote discriminating apparatus counts the number of peripheral edges for each pixel of each direction plane for each of the N genuine note images that are the input data for the dictionary images (see (1) in the figure).
- a counting filter (not shown) having 9 ⁇ 9 pixels and all coefficients being 1 can be used.
- (1) in the figure shows an example of the peripheral edge counts for each pixel on each direction plane of the direction plane groups 1 to N for counting peripheral edges, corresponding to the 1st to Nth genuine note images. Although this example does not limit the content of the dictionary image creation process of the banknote discriminating apparatus according to the second embodiment, the following description is based on it.
- Next, the banknote discriminating apparatus calculates statistics over the N direction plane groups 1 to N for the counted peripheral edge numbers (see (2) in the figure).
- Then, the banknote discriminating apparatus determines a reference pixel value for each pixel of each direction plane based on the calculated statistics, and stores the reference pixel values in the direction planes for dictionary images (see (3) in the figure). The determination of the reference pixel value will be described later.
- FIG. 13 is a block diagram illustrating the configuration of the banknote discriminating apparatus 10a according to the second embodiment.
- In the following, the same components as those in Example 1 are given the same reference numerals, and their description is omitted.
- The banknote discriminating apparatus 10a differs from the banknote discriminating apparatus 10 according to Example 1 described above in that the control unit 12 further includes a peripheral edge counting unit 12i and includes a statistical processing unit 12j in place of the effective edge determination unit 12d and the expansion processing unit 12e (see FIG. 2). It also differs from the banknote discriminating apparatus 10 according to Example 1 in that the storage unit 13 includes a peripheral edge counting plane 13f.
- the peripheral edge counting unit 12i is a processing unit that performs processing for counting the number of edge points detected by the edge direction detection unit 12c within a predetermined vicinity range for each pixel.
- The peripheral edge counting unit 12i is also a processing unit that stores the counted peripheral edge number of each pixel of each direction plane in the peripheral edge counting plane 13f when creating dictionary images, and outputs it to the remaining edge extraction unit 12f when performing damage discrimination.
- The statistical processing unit 12j is a processing unit that refers to the peripheral edge counting plane 13f for the peripheral edge counts of each pixel of each direction plane counted by the peripheral edge counting unit 12i, and calculates statistics over the N genuine note images. Such statistics include the maximum value, average value, variance, standard deviation, and the like.
- The statistical processing unit 12j is also a processing unit that determines a reference pixel value for each pixel of each direction plane based on the calculated statistics. In determining the reference pixel value, the various values and parameters included in the statistics can be combined.
- For example, let the maximum value included in the statistics be V, the average value be μ, the standard deviation be σ, and a parameter be α.
- Then, the reference pixel value can be set to “V + α” based on the maximum value of the statistics.
- Alternatively, the reference pixel value may be set to “μ + ασ” based on the average value and standard deviation of the statistics.
- the calculation formula for the reference pixel value can be appropriately changed according to the design of the banknote, the required discrimination accuracy, and the like.
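- A minimal sketch of this statistic-based dictionary creation for one direction plane follows; the 9 × 9 counting filter follows the text above, while the parameter value and the choice between the two formulas are illustrative assumptions:

```python
import numpy as np
from scipy import ndimage

def make_reference_plane(edge_planes, alpha: float = 2.0,
                         use_max: bool = True) -> np.ndarray:
    """edge_planes: list of N boolean (H, W) arrays, one per genuine note
    image, for a single direction. Count peripheral edges per pixel with a
    9x9 all-ones counting filter, then derive the reference pixel value."""
    counting_filter = np.ones((9, 9))
    counts = np.stack([ndimage.convolve(p.astype(float), counting_filter,
                                        mode="constant")
                       for p in edge_planes])  # shape (N, H, W)
    if use_max:
        return counts.max(axis=0) + alpha                         # "V + alpha"
    return counts.mean(axis=0) + alpha * counts.std(axis=0)       # "mu + alpha*sigma"
```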
- The statistical processing unit 12j then stores the determined reference pixel value of each pixel of each direction plane in the direction plane for dictionary images, and is also a processing unit that stores each resulting direction plane in the storage unit 13 as the direction-specific dictionary images 13d.
- FIG. 14 is a flowchart illustrating a processing procedure of dictionary image creation processing executed by the banknote discriminating apparatus 10a according to the second embodiment.
- First, the edge direction detection unit 12c smoothes the N genuine note images (step S401) and detects each edge point and its direction for the N genuine note images (step S402).
- the peripheral edge counting unit 12i counts the number of edge points (the number of peripheral edges) within a predetermined neighborhood range for each pixel in each direction plane of the direction plane groups 1 to N for N sheets of genuine images. (Step S403).
- The statistical processing unit 12j then calculates statistics over the N direction plane groups 1 to N for the peripheral edge counts of each pixel counted by the peripheral edge counting unit 12i (step S404). Then, the statistical processing unit 12j determines a reference pixel value for each pixel of each direction plane based on the calculated statistics (step S405).
- the statistical processing unit 12j stores the determined reference pixel value for each pixel in the storage unit 13 as the direction-specific dictionary image 13d (step S406), and ends the process.
- FIG. 15 is a flowchart illustrating the processing procedure of the damage determination process executed by the banknote determination apparatus 10a according to the second embodiment.
- the edge direction detection unit 12c smoothes the discrimination target image (step S501), and detects each edge point and the direction of the discrimination target image (step S502).
- the peripheral edge counting unit 12i stores each edge point in the corresponding direction plane for the discrimination target image (step S503).
- the peripheral edge counting unit 12i then counts the number of peripheral edges for each pixel in each direction plane, and sets the count value as a pixel value for each pixel in each direction plane (step S504).
- Subsequently, the remaining edge extraction unit 12f compares each direction plane with the corresponding dictionary image of the direction-specific dictionary images 13d pixel by pixel (step S505), and sets the pixels whose pixel values exceed the reference pixel value in each direction plane as the remaining edge points (step S506).
- the remaining edge extraction unit 12f synthesizes the remaining edge points on each direction plane (step S507).
- the remaining edge analysis unit 12g removes isolated points from the remaining edge points (step S508) and then analyzes the density of the remaining edge points (step S509).
- the determination unit 12h determines whether the density of the remaining edge points analyzed by the remaining edge analysis unit 12g is less than a predetermined threshold (Step S510).
- the discrimination criterion information 13e is referred to.
- If the density is less than the predetermined threshold (step S510, Yes), the determination unit 12h determines that the banknote to be discriminated is a correct note (step S511), and ends the process.
- On the other hand, if the density is equal to or greater than the predetermined threshold (step S510, No), the determination unit 12h determines that the banknote to be discriminated is a damaged note (step S512), and ends the process.
- As described above, in Example 2, the statistical processing unit determines the reference pixel value for each pixel based on the statistics of the peripheral edge counts of each pixel of each direction plane counted by the peripheral edge counting unit. This makes it possible to add a macro element, the statistic, on top of a micro element, the per-pixel peripheral edge count, so that even if there is diversity in the design and quality of the paper sheets, optimum damage discrimination can be performed with high accuracy.
- Each of the above embodiments has mainly described damage discrimination, but the method according to the present invention may also be applied to paper sheet type discrimination.
- In this case, a dictionary image is generated for each denomination of banknote, and the discrimination target image is compared with the dictionary images of all denominations (that is, as many comparisons as there are denominations). The denomination of the dictionary image for which the density of the remaining edge points is equal to or less than a predetermined value can then be determined to be the denomination of the discrimination target image, as sketched below.
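- A minimal sketch of such type discrimination (the function names reuse the sketches above and are assumptions; `remaining_edge_density` stands in for the density analysis of FIG. 8):

```python
def discriminate_type(target_planes, dictionaries_by_denomination,
                      remaining_edge_density, max_density=0.1):
    """Compare the discrimination target against the dictionary images of
    every denomination; the first denomination whose remaining edge density
    stays at or below the predetermined value is taken as the result."""
    for denomination, dictionary in dictionaries_by_denomination.items():
        remaining = extract_remaining_edges(target_planes, dictionary)
        if remaining_edge_density(remaining) <= max_density:
            return denomination
    return None  # no denomination matched
```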
- As described above, the paper sheet discriminating apparatus and the paper sheet discriminating method according to the present invention are useful when highly accurate, optimum damage discrimination and type discrimination are desired even when there is diversity in the design and quality of the paper sheets, and they are particularly suitable for application to an apparatus that discriminates heavily circulated paper sheets such as banknotes.
Abstract
A paper sheet discriminating device generates a direction-specific dictionary image for each edge direction, based on the edge regions detected in learning images, which are captured images of genuine paper sheets, and on the edge directions within those regions. A remaining edge region existing only in the input image, which is a captured image of the paper sheet to be discriminated, is extracted by comparing the input image with the direction-specific dictionary images; the density of the remaining edge region within a region of a given size is detected; and damage discrimination and type discrimination are performed.
Description
However, when the technique of Patent Document 1 described above is used, the comparison is narrowed down to the areas where stains are conspicuous, so there is the problem that damage discrimination cannot be performed when stains occur outside those areas.
When the technique of Patent Document 2 described above is used, the reference image and the discrimination target image are compared pixel by pixel, so that when a printing shift due to multi-plate printing has occurred, the banknote is likely to be discriminated as a damaged note even if the shift is minute.
In this respect, discriminating such a case as a damaged note could be called highly accurate discrimination; on the other hand, it also means treating banknotes that pose no problem for circulation as damaged notes, which is not appropriate.
For these reasons, how to realize a banknote discriminating apparatus or banknote discriminating method capable of performing optimal damage discrimination with high accuracy even when there is diversity in the design and quality of banknotes has become a major issue. Diversity here refers, for example, to the variety of designs, such as whether or not a pattern has many edges so that stains are inconspicuous, and to variations in print quality due to multi-plate printing.
This issue is not limited to banknotes; it arises in the same way when there is diversity in the design and quality of paper sheets such as gift certificates and passbooks, and it likewise arises in the type discrimination of paper sheets under such diversity.
The present invention has been made to solve the above-described problems of the prior art, and its object is to provide a paper sheet discriminating apparatus and a paper sheet discriminating method capable of performing optimal damage discrimination and type discrimination with high accuracy even when there is diversity in the design and quality of paper sheets.
To solve the above problems and achieve the object, the present invention is a paper sheet discriminating apparatus that discriminates paper sheets based on captured images of the paper sheets, comprising: direction-specific dictionary image generating means for generating a plurality of direction-specific dictionary images prepared for respective predetermined edge direction ranges, by distributing, when an edge position and the edge direction at that edge position are detected from a learning image that is a captured image of a genuine paper sheet, the edge position to the corresponding position of the direction-specific dictionary image that corresponds to the detected edge direction; and discriminating means for discriminating the damage state and type of a paper sheet to be discriminated by comparing an input image, which is a captured image of that paper sheet, with the direction-specific dictionary images.
In the above invention, the present invention further comprises: direction-specific input image generating means for generating direction-specific input images by separating, when an edge position and the edge direction at that edge position are detected from the input image, the edge position into the edge direction ranges based on that edge direction; and remaining edge extracting means for extracting a remaining edge region by superimposing, over all the edge direction ranges, those edge positions that exist only in the direction-specific input image out of the direction-specific input image and the direction-specific dictionary image relating to the same edge direction range, wherein the discriminating means discriminates the damage state of the paper sheet to be discriminated based on the remaining edge region extracted by the remaining edge extracting means.
In the above invention, when the edge direction at a detected edge position lies within a predetermined range of the boundary between two adjacent edge direction ranges, the direction-specific dictionary image generating means of the present invention distributes that edge position to the corresponding positions of both direction-specific dictionary images corresponding to those two edge direction ranges.
In the above invention, the present invention further comprises density detecting means for detecting the density of the remaining edge region extracted by the remaining edge extracting means, and the discriminating means discriminates the damage state of the paper sheet to be discriminated based on the density detected by the density detecting means.
In the above invention, the direction-specific dictionary image generating means of the present invention uses a predetermined number of learning images and treats pixels to which a predetermined number or more of edge positions have been distributed as an effective edge region.
In the above invention, the direction-specific dictionary image generating means of the present invention performs, for each direction-specific dictionary image, dilation processing that extends the edge positions or the effective edge region to surrounding pixels.
In the above invention, the direction-specific dictionary image generating means of the present invention performs the dilation processing with priority given to the edge direction range corresponding to the direction-specific dictionary image.
In the above invention, the present invention further comprises: peripheral edge counting means for counting, for each pixel of a direction-specific dictionary image, the number of pixels within a predetermined neighborhood of that pixel to which edge positions have been distributed; and statistic calculating means for calculating a statistic of the counts obtained for each pixel by the peripheral edge counting means over a plurality of learning images, wherein, when the statistic has been calculated by the statistic calculating means, the direction-specific dictionary image generating means generates the direction-specific dictionary image by setting the pixel values of its pixels based on that statistic.
The present invention is also a paper sheet discriminating method for discriminating paper sheets based on captured images of the paper sheets, comprising: a direction-specific dictionary image generating step of generating a plurality of direction-specific dictionary images prepared for respective predetermined edge direction ranges, by distributing, when an edge position and the edge direction at that edge position are detected from a learning image that is a captured image of a genuine paper sheet, the edge position to the corresponding position of the direction-specific dictionary image that corresponds to the detected edge direction; and a discriminating step of discriminating the damage state and type of a paper sheet to be discriminated by comparing an input image, which is a captured image of that paper sheet, with the direction-specific dictionary images.
According to the present invention, when an edge position and the edge direction at that edge position are detected from a learning image, which is a captured image of a genuine paper sheet, direction-specific dictionary images are generated by distributing the detected edge position to the corresponding position of the dictionary image that corresponds to the detected edge direction, among the plurality of direction-specific dictionary images prepared for the respective predetermined edge direction ranges, and the damage state and type of a paper sheet to be discriminated are discriminated by comparing an input image, which is a captured image of that paper sheet, with the direction-specific dictionary images. This has the effect that optimal damage discrimination and type discrimination can be performed with high accuracy even when there is diversity in the design and quality of paper sheets.
According to the present invention, when an edge position and the edge direction at that edge position are detected from the input image, direction-specific input images are generated by separating the edge position into the edge direction ranges based on the detected edge direction; a remaining edge region is extracted by superimposing, over all the edge direction ranges, those edge positions that exist only in the direction-specific input image out of the direction-specific input image and the direction-specific dictionary image relating to the same edge direction range; and the damage state of the paper sheet to be discriminated is discriminated based on the extracted remaining edge region. The remaining edge region can therefore be extracted more clearly from a plurality of edge directions, so that optimal damage discrimination and type discrimination can be performed with high accuracy even for paper sheets whose intricate designs make stained portions inconspicuous.
According to the present invention, when the edge direction at a detected edge position is within a predetermined range of the boundary between two adjacent edge direction ranges, the detected edge position is distributed to the corresponding positions of both direction-specific dictionary images corresponding to those two edge direction ranges. This has the effect that dictionary images resistant to rotational displacement, in which a predetermined rotational displacement absorption amount is added to the detected edge direction, can be generated.
According to the present invention, the density of the extracted remaining edge region is detected, and the damage state of the paper sheet to be discriminated is discriminated based on the detected density. Damage discrimination can therefore be narrowed down to stained portions having a certain width and height, which has the effect of preventing erroneous discrimination of, for example, wrinkles extending linearly across the sheet surface.
According to the present invention, pixels to which a predetermined number or more of edge positions have been distributed, using a predetermined number of learning images, are treated as an effective edge region. This has the effect that dictionary images can be created that take into account printing shifts between genuine notes.
According to the present invention, dilation processing that extends the edge positions or the effective edge region to surrounding pixels is performed for each direction-specific dictionary image. This has the effect that dictionary images resistant to positional displacement can be created without lowering the ability to extract the remaining edge region, and also the effect that the dilation amount can be made larger than when dictionary images are not held for each direction.
According to the present invention, the dilation processing is performed with priority given to the edge direction range corresponding to the direction-specific dictionary image, which has the effect that dictionary images dilated in the directions in which edge detection displacement occurs can be created.
According to the present invention, the number of pixels within a predetermined neighborhood of each pixel of a direction-specific dictionary image to which edge positions have been distributed is counted, a statistic of the counts obtained for each pixel is calculated over a plurality of learning images, and, when the statistic has been calculated, the direction-specific dictionary image is generated by setting the pixel values of its pixels based on that statistic. This has the effect that dictionary images corresponding to the characteristics around each pixel (such as design diversity) can be created.
Preferred embodiments of the paper sheet discriminating method according to the present invention are described in detail below with reference to the accompanying drawings. The outline of the paper sheet discriminating method according to the present invention is first described with reference to FIG. 1, after which embodiments of a banknote discriminating apparatus to which the method is applied are described with reference to FIGS. 2 to 15. The description below mainly covers the case where, among the kinds of paper sheet discrimination, damage discrimination is performed.
In the following, the direction-specific dictionary images that the banknote discriminating apparatus creates for each direction by processing learning images, which are captured images of genuine paper sheets, are referred to simply as "dictionary images".
First, the outline of the paper sheet discriminating method according to the present invention is described with reference to FIG. 1. FIG. 1 is a diagram showing the outline of the paper sheet discriminating method according to the present invention.
As shown in the figure, the paper sheet discriminating method according to the present invention is characterized in that the edge points of genuine note images and their edge directions are detected and a dictionary image is created for each edge direction. Here, an edge point is a pixel corresponding to an edge in the image.
A further feature is that, in comparing and analyzing the discrimination target image and the dictionary images, the difference between them (hereinafter referred to as the "remaining edge points") is taken, and damage discrimination is performed using the density of the remaining edge points within a region of a predetermined size.
The paper sheet discriminating method according to the present invention is described concretely below. The method can be broadly divided into a dictionary image creation stage that takes genuine note images 1 as input data and a damage discrimination stage that takes a discrimination target image 5 as input data.
In the dictionary image creation stage, the paper sheet discriminating method according to the present invention first detects all edge points and their edge directions for a plurality (= N) of genuine note images 1 (see (1) in the figure). It then creates a dictionary image for each edge direction based on the edge points and edge directions of the N genuine note images (see (2) in the figure).
For example, as shown in (2) of the figure, the 360 degrees of edge direction are divided into six directions in 60-degree increments, and each of the six resulting directions is given one of the unique names A to F. In the following, this division of directions is referred to as "quantization" and the number of divisions as the "quantization number".
The paper sheet discriminating method according to the present invention then prepares planes corresponding to this quantization (hereinafter referred to as "direction planes") and records, at the corresponding coordinates of the direction plane corresponding to the edge direction of each detected edge point of the genuine note image 1, the fact that an edge point was detected there. A dictionary image is then created in which the coordinates recorded a predetermined number of times or more are treated as effective edge points.
For example, as shown in (2) of the figure, a direction plane A corresponding to direction A, a direction plane B corresponding to direction B, a direction plane C corresponding to direction C, a direction plane D corresponding to direction D, a direction plane E corresponding to direction E, and a direction plane F corresponding to direction F are prepared; each edge point is recorded in the direction plane A to F matching its edge direction, and a dictionary image is created for each direction. Details of this dictionary image creation are described later with reference to FIGS. 3 to 5.
Directions facing each other may also be recorded in the same direction plane, because edge points whose edge directions are symmetric by 180 degrees are often detected in the vicinity of each edge point. For example, edge points in the A and D directions can be recorded in direction plane A, edge points in the B and E directions in direction plane B, and edge points in the C and F directions in direction plane C. In that case, the number of direction planes is three for a quantization number of six.
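Purely as an illustrative sketch of this quantization (the 15-degree absorption margin and the numeric bin labels are assumptions; the patent fixes neither), an edge direction can be mapped to one or two direction planes, optionally folding 180-degree-opposed directions onto a shared plane:
```python
def direction_planes(angle_deg, absorb_deg=15.0, fold_opposites=False):
    """Map an edge direction (0-360 deg) to direction-plane indices.

    Six 60-degree bins (A=0 .. F=5); an angle within `absorb_deg` of a bin
    boundary is assigned to both adjacent bins (rotational displacement
    absorption). With fold_opposites=True, 180-degree-opposed bins share a
    plane (A/D, B/E, C/F), giving three planes for a quantization number of six.
    """
    n_bins, width = 6, 60.0
    a = angle_deg % 360.0
    bins = {int(a // width)}
    offset = a % width
    if offset < absorb_deg:                    # near the lower bin boundary
        bins.add(int((a - width) // width) % n_bins)
    elif offset > width - absorb_deg:          # near the upper bin boundary
        bins.add(int((a + width) // width) % n_bins)
    if fold_opposites:
        bins = {b % 3 for b in bins}
    return sorted(bins)

# e.g. direction_planes(5.0) -> [0, 5]: counted in both planes A and F
```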
At this dictionary image creation stage, the paper sheet discriminating method according to the present invention also dilates each pixel that is an edge point by a predetermined amount. For example, (2) of FIG. 1 shows an example in which the pixel Px in the portion surrounded by the circle o is dilated to the dilation range D0 before being registered in the dictionary image. Details of this predetermined-amount dilation are described later with reference to FIG. 5.
Conventional techniques using edge directions (for example, Patent Document 2) did not create a dictionary image for each edge direction as the paper sheet discriminating method according to the present invention does. The dictionary image was therefore one plane of data for a single paper sheet, and a single edge point with a single edge direction had to be fixed for each position.
In contrast, because the paper sheet discriminating method according to the present invention creates a dictionary image for each edge direction, an edge point whose edge direction lies near a quantization boundary, for example, can be handled as straddling two edge directions; that is, a single position in the dictionary images can carry two edge directions. This makes it possible to perform damage discrimination that absorbs the rotational displacements occurring when paper sheets are printed and scanned. In the following, the extent of deviation from a quantization boundary within which an edge point is handled as straddling two edge directions is referred to as the "rotational displacement absorption amount".
The same applies to the predetermined-amount dilation described above: by extending the pixel value and edge direction of a single edge point through dilation, damage discrimination that absorbs design diversity and various positional and rotational displacements becomes possible. When a pixel that was not originally an edge point comes to be included, through this dilation, in the dilation ranges of different direction planes (that is, within the "predetermined amount"), that pixel can carry the edge directions corresponding to each of those direction planes; this point is described later with reference to FIG. 5. Furthermore, since the effective edge determination treats coordinates that obtained a predetermined number or more of edge points from a plurality of genuine note images as effective edge points, a single coordinate of the dictionary images can carry a plurality of edge directions. It is therefore possible to perform damage discrimination that absorbs the design diversity of paper sheets as well as multiple rotational and positional displacements.
Next, in the damage discrimination stage, the paper sheet discriminating method according to the present invention compares each dictionary image with the discrimination target image 5, whose edge points and edge directions have been detected in the same way as for the genuine note images 1 (see (3) in FIG. 1). Damage discrimination is then performed using the density, within a region of a predetermined size, of the remaining edge points extracted by this comparison (see (4) in FIG. 1). Details of this point are described later with reference to FIG. 8.
Conventional techniques using remaining edge points (for example, Patent Document 1) performed damage discrimination according to whether the number of pixels detected as remaining edge points exceeded a predetermined threshold. As a result, wrinkles extending linearly across the sheet surface were often discriminated as stains.
In contrast, the paper sheet discriminating method according to the present invention uses the density of the remaining edge points within a region of a predetermined size, so that only stains having both width and height are discriminated, and erroneous discrimination of the aforementioned wrinkles and the like can be prevented.
As described above, the paper sheet discriminating method according to the present invention creates, for each edge direction, a dictionary image in which the edge points are dilated by a predetermined amount, and performs damage discrimination using the density of the remaining edge points within a region of a predetermined size. Optimal damage discrimination can therefore be performed with high accuracy even when there is diversity in the design and quality of paper sheets.
Embodiments of a banknote discriminating apparatus to which the paper sheet discriminating method according to the present invention described with reference to FIG. 1 is applied are described in detail below. An embodiment of a banknote discriminating apparatus that creates dictionary images in which each edge point is dilated by a predetermined amount is described as Embodiment 1, and an embodiment of a banknote discriminating apparatus that creates dictionary images based on the number of peripheral edges of each edge point is described as Embodiment 2. In each of the following embodiments, the banknote discriminating apparatus captures banknote images with an image line sensor (hereinafter referred to as a "line sensor").
FIG. 2 is a block diagram showing the configuration of the banknote discriminating apparatus 10 according to Embodiment 1. The figure shows only the components necessary for explaining the features of the banknote discriminating apparatus 10, and general components are omitted.
As shown in the figure, the banknote discriminating apparatus 10 includes a line sensor unit 11, a control unit 12, and a storage unit 13. The control unit 12 further includes a setting unit 12a, an image data acquisition unit 12b, an edge direction detection unit 12c, an effective edge determination unit 12d, a dilation processing unit 12e, a remaining edge extraction unit 12f, a remaining edge analysis unit 12g, and a determination unit 12h. The storage unit 13 stores setting information 13a, a filter group 13b, per-direction-plane edge point amounts 13c, direction-specific dictionary images 13d, and discrimination criterion information 13e.
The line sensor unit 11 is a sensor that receives light transmitted through or reflected from banknotes conveyed by a conveyance mechanism (not shown), and is configured by arranging a plurality of light receiving sensors in a straight line. The line sensor unit 11 also outputs the acquired data to the image data acquisition unit 12b of the control unit 12.
The control unit 12 is a processing unit that generates banknote image data based on the output from the line sensor unit 11 and detects the edge points and their directions in the generated image data. When the image data are a plurality (= N) of genuine note images, it creates dictionary images based on the detected edge points and their directions; when the image data are a discrimination target image, it compares and analyzes the discrimination target image and the dictionary images to perform banknote damage discrimination.
The setting unit 12a is a processing unit that extracts the various parameters contained in the setting information 13a of the storage unit 13, such as the quantization number and the rotational displacement absorption amount, and performs the initial settings for dictionary creation and damage discrimination. The setting unit 12a also outputs the extracted parameters to the edge direction detection unit 12c.
The image data acquisition unit 12b is a processing unit that synthesizes the output from the line sensor unit 11 for each banknote and generates image data covering the whole banknote. When dictionary images are to be created, the image data acquisition unit 12b generates the genuine note image data for all N notes at once.
The image data acquisition unit 12b also outputs the generated image data to the edge direction detection unit 12c.
The edge direction detection unit 12c is a processing unit that smooths the image data output by the image data acquisition unit 12b and detects all edge points and their directions in the smoothed image data.
For this smoothing and for the detection of edge points and their directions, the edge direction detection unit 12c uses the various filters stored in advance in the filter group 13b of the storage unit 13. This point is described later with reference to FIG. 3.
The edge direction detection unit 12c also outputs the detected edge points and their directions to the effective edge determination unit 12d when dictionary images are created, and to the remaining edge extraction unit 12f when banknote damage discrimination is performed.
The edge direction detection processing performed by the edge direction detection unit 12c is now described in more detail with reference to FIG. 3. FIG. 3 is a diagram for explaining the edge direction detection processing performed by the edge direction detection unit 12c.
Edge direction detection processing using filters is described below. When such a filter is applied to an image, the pixel on the image that the center pixel of the filter overlaps is hereinafter referred to as the "pixel of interest".
As shown in the figure, the edge direction detection unit 12c smooths the genuine note image 1 or the discrimination target image 5 output by the image data acquisition unit 12b (see (1) in the figure). For this smoothing, a smoothing filter f1 whose coefficients are all 1 over, for example, the 3×3 neighborhood centered on the pixel of interest can be used.
Performing this smoothing removes noise components from the genuine note image 1 or the discrimination target image 5. The application of the smoothing filter f1 may therefore be repeated several times (for example, three times) until an optimal noise removal result is obtained, or a larger smoothing filter f1 (for example, a 7×7 neighborhood) may be applied only once. Edge points and edge directions can still be detected even when the smoothing filter f1 is not applied.
Subsequently, the edge direction detection unit 12c detects the edge points of the genuine note image 1 or the discrimination target image 5 (see (2) in the figure). For this edge point detection, a Laplacian filter f2 can be used, for example: the zero-cross points detected by applying the Laplacian filter f2 become the edge points of the genuine note image 1 or the discrimination target image 5.
Next, the edge direction detection unit 12c detects the edge directions of the genuine note image 1 or the discrimination target image 5 by using, for example, the Prewitt filters f3 and f4 (see (3) in the figure). The white arrows on the five black pixels contained in the rectangle r shown in the figure represent the detected edge directions, and the edge direction at each edge point is detected in this way (see (4) in the figure).
Although the description based on the figure shows an example in which edge points and their directions are detected using the Laplacian filter f2 and the Prewitt filters f3 and f4, other filters such as a Sobel filter may be used, and the various filters may be combined as appropriate. The sizes and coefficient values of the filters are likewise not limited.
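As a hedged illustration of this filtering pipeline (NumPy/SciPy are assumed; the coefficients follow standard Laplacian and Prewitt kernels, and the zero-cross test is one simple variant the patent does not prescribe):
```python
import numpy as np
from scipy.signal import convolve2d

def edge_points_and_directions(img, smooth_passes=3):
    """3x3 mean smoothing (f1), Laplacian zero-cross edge points (f2),
    Prewitt-based edge directions (f3, f4) -- all illustrative choices."""
    f1 = np.ones((3, 3)) / 9.0                                  # smoothing filter f1
    f2 = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]], float)    # Laplacian filter f2
    f3 = np.array([[-1, 0, 1], [-1, 0, 1], [-1, 0, 1]], float)  # Prewitt f3 (one axis)
    f4 = f3.T                                                   # Prewitt f4 (other axis)
    for _ in range(smooth_passes):
        img = convolve2d(img, f1, mode="same", boundary="symm")
    lap = convolve2d(img, f2, mode="same", boundary="symm")
    # crude zero-cross test: sign change against the right/lower neighbor
    zc = np.zeros(img.shape, bool)
    zc[:, :-1] |= lap[:, :-1] * lap[:, 1:] < 0
    zc[:-1, :] |= lap[:-1, :] * lap[1:, :] < 0
    gx = convolve2d(img, f3, mode="same", boundary="symm")
    gy = convolve2d(img, f4, mode="same", boundary="symm")
    angles = np.degrees(np.arctan2(gy, gx)) % 360.0  # density gradient direction
    return zc, angles                                 # edge mask + per-pixel direction
```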
Returning to the description of FIG. 2, the effective edge determination unit 12d is a processing unit that determines whether each pixel of a direction plane is an effective edge point, using the per-direction-plane edge point amounts 13c stored for each pixel of the direction planes.
The effective edge determination unit 12d also outputs each direction plane in which the effective edge points are stored to the dilation processing unit 12e.
The effective edge determination processing performed by the effective edge determination unit 12d is now described in more detail with reference to FIG. 4. FIG. 4 is a diagram for explaining the effective edge determination processing performed by the effective edge determination unit 12d. (A) of the figure shows the edge point amount counting processing at an arbitrary edge point P1 of a genuine note image, (B) shows the same processing at an arbitrary edge point P2, and (C) shows the effective edge determination processing at the edge points P1 and P2.
As shown in (A) of the figure, when the edge direction detection unit 12c has detected that the edge direction of the edge point P1 is direction A, the effective edge determination unit 12d counts up the edge point amount of the pixel P1 at the same position in direction plane A.
As shown in (B) of the figure, when it has been detected that the edge direction of the edge point P2 is within the rotational displacement absorption amount of the boundary between directions A and F, the effective edge determination unit 12d counts up the edge point amounts of the pixel P2 at the same position in direction plane A and of the pixel P2 at the same position in direction plane F.
That is, as shown in (B) of the figure, when the edge direction of an edge point is within the rotational displacement absorption amount of a boundary, the effective edge determination unit 12d counts up the edge point amounts of the pixels at the same position in both direction planes.
The effective edge determination unit 12d performs this per-edge-point counting for all the genuine note images acquired for dictionary image creation, and stores, for each direction plane, the pixels whose edge point amount is equal to or greater than a predetermined determination threshold as effective edge points.
Specifically, as shown in (C) of the figure, when the number of genuine notes read is "200" and the determination threshold is "5" for every pixel, the pixel P1 in direction plane A has obtained an edge point amount of "200", which is at least the determination threshold, and is therefore stored as an effective edge point in direction plane A.
Similarly, the pixel P2 in direction plane A has obtained an edge point amount of "180", which is also at least the determination threshold, and is therefore stored as an effective edge point in direction plane A.
The pixel P2 in direction plane F, however, has obtained an edge point amount of only "3", which is below the determination threshold, and is therefore not stored as an effective edge point in direction plane F.
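A minimal sketch of this voting scheme, assuming NumPy and the two sketches given earlier (the threshold of 5 mirrors the example of FIG. 4 but is otherwise arbitrary):
```python
import numpy as np

def effective_edge_planes(learning_images, threshold=5, n_planes=6):
    """Accumulate per-plane edge point amounts over N genuine note images,
    then keep pixels counted at least `threshold` times (illustrative)."""
    h, w = learning_images[0].shape
    counts = np.zeros((n_planes, h, w), dtype=np.int32)
    for img in learning_images:
        mask, angles = edge_points_and_directions(img)   # sketched earlier
        for y, x in zip(*np.nonzero(mask)):
            for p in direction_planes(angles[y, x]):     # 1 or 2 planes per point
                counts[p, y, x] += 1
    return counts >= threshold   # boolean effective-edge plane per direction
```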
Returning to the description of FIG. 2, the dilation processing unit 12e is a processing unit that dilates all effective edge points of each direction plane by a predetermined amount. After performing this dilation, the dilation processing unit 12e also stores each direction plane in the storage unit 13 as the dictionary image for its direction.
The dilation processing performed by the dilation processing unit 12e is now described in more detail with reference to FIG. 5. FIG. 5 is a diagram for explaining the dilation processing performed by the dilation processing unit 12e. (A) of the figure shows a basic example of dilation processing, (B) shows the procedure from dilation processing to dictionary image creation, and (C) is an explanatory diagram of a pixel at the same position contained in a plurality of dilation ranges.
As shown in (A) of FIG. 5, in a coordinate system consisting of an m axis and an n axis, assume that the pixel whose "m" is "i" and whose "n" is "j" is the pixel of interest. In the following, when the position of a pixel is indicated, the pixel is denoted "P" and written as "P(i,j)".
Here, dilation processing refers to processing that converts the pixel values of the neighboring pixels adjacent to the pixel of interest into the pixel value of the pixel of interest. A filter can be used for this pixel value conversion.
For example, when the pixel of interest is dilated to its eight adjacent neighboring pixels (hereinafter referred to as "one-pixel dilation"), the pixel values of P(i-1,j-1), P(i-1,j), P(i-1,j+1), P(i,j-1), P(i,j+1), P(i+1,j-1), P(i+1,j), and P(i+1,j+1) are converted into the pixel value of the pixel of interest P(i,j) (see the black pixels in (A) of FIG. 5).
The main purpose of this dilation processing is to absorb positional and rotational displacements by extending the pixel range (see FIG. 1).
Although one-pixel dilation has been described here as an example, the one-pixel dilation can be repeated by specifying a predetermined dilation amount: when three pixels are specified as the dilation amount, for example, the one-pixel dilation is simply repeated three times. When such three-pixel dilation is performed, the pixel value of the pixel of interest ultimately spreads to every pixel of the 7×7 neighborhood centered on the pixel of interest.
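For a binary direction plane, this repeated one-pixel dilation is ordinary morphological dilation with an 8-neighbor structuring element; a sketch assuming SciPy:
```python
import numpy as np
from scipy.ndimage import binary_dilation

def dilate_plane(effective_plane, amount=3):
    """Repeat one-pixel (8-neighbor) dilation `amount` times; amount=3
    spreads each effective edge point over its 7x7 neighborhood."""
    struct = np.ones((3, 3), bool)   # pixel of interest + 8 adjacent neighbors
    return binary_dilation(effective_plane, structure=struct, iterations=amount)
```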
As shown in (B) of FIG. 5, the dilation processing unit 12e performs the dilation processing described above for all effective edge points on each direction plane (see (B-1) in the figure). After dilating all effective edge points, it stores each direction plane in the storage unit 13 as the dictionary image for that direction (see (B-2) of FIG. 5).
A pixel at the same position contained in the dilation ranges of a plurality of direction planes is now considered. As shown in (C) of FIG. 5, assume, for example, that a pixel P3 at a given position is contained both in an arbitrary dilation range D1 of direction plane A and in an arbitrary dilation range D2 of direction plane F after the dilation processing.
In this case, the pixel P3, viewed not per direction but as part of a single genuine note image, has the two edge directions A and F (the range of the angle θ shown in the figure). Therefore, whether the pixel P3 of the discrimination target image has direction A or direction F, its edge direction can be determined to exist in the genuine note image.
In the description based on FIG. 5, the dilation processing was performed on all the adjacent neighboring pixels centered on the pixel of interest. The dilation processing may, however, be performed only on the adjacent neighboring pixels in the directions corresponding to the edge direction.
A modification of the dilation processing for such a case is therefore described below with reference to FIG. 6. FIG. 6 is a diagram showing a modification of the dilation processing. In the following description as well, the pixel whose "m" is "i" and whose "n" is "j" (see FIG. 5) is the pixel of interest, and pixel positions are written as in the description of FIG. 5 above.
As shown in (A) of FIG. 6, when the pixel of interest has an edge direction of A or D, the dilation processing can be performed only on the adjacent neighboring pixels corresponding to the A or D direction. Specifically, the dilation processing may be performed on pixels such as P(i-2,j), P(i-2,j+1), P(i-2,j+2), P(i-1,j), P(i-1,j+1), P(i-1,j+2), P(i,j-1), P(i,j+1), P(i+1,j-2), P(i+1,j-1), P(i+1,j), P(i+2,j-2), P(i+2,j-1), and P(i+2,j).
As shown in (B) of the figure, when the pixel of interest has an edge direction of B or E, the dilation processing may be performed on the pixels corresponding to the B or E direction, such as P(i-1,j-2), P(i-1,j-1), P(i-1,j), P(i-1,j+1), P(i-1,j+2), P(i,j-2), P(i,j-1), P(i,j+1), P(i,j+2), P(i+1,j-2), P(i+1,j-1), P(i+1,j), P(i+1,j+1), and P(i+1,j+2).
As shown in (C) of the figure, when the pixel of interest has an edge direction of C or F, the dilation processing may be performed on the pixels corresponding to the C or F direction, such as P(i-2,j-2), P(i-2,j-1), P(i-2,j), P(i-1,j-2), P(i-1,j-1), P(i-1,j), P(i,j-1), P(i,j+1), P(i+1,j), P(i+1,j+1), P(i+1,j+2), P(i+2,j), P(i+2,j+1), and P(i+2,j+2).
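This direction-prioritized variant amounts to replacing the square structuring element with an oriented one. A sketch assuming SciPy, using the A/D-direction offsets enumerated above (the other direction pairs would use their own offset sets in the same way):
```python
import numpy as np
from scipy.ndimage import binary_dilation

# offsets (di, dj) for the A/D-direction variant, taken from FIG. 6 (A)
AD_OFFSETS = [(-2, 0), (-2, 1), (-2, 2), (-1, 0), (-1, 1), (-1, 2),
              (0, -1), (0, 1), (1, -2), (1, -1), (1, 0),
              (2, -2), (2, -1), (2, 0)]

def oriented_structure(offsets, radius=2):
    """Build a (2r+1)x(2r+1) structuring element from pixel offsets."""
    s = np.zeros((2 * radius + 1, 2 * radius + 1), bool)
    s[radius, radius] = True                 # keep the pixel of interest itself
    for di, dj in offsets:
        s[radius + di, radius + dj] = True
    return s

def dilate_plane_oriented(effective_plane, offsets=AD_OFFSETS):
    return binary_dilation(effective_plane, structure=oriented_structure(offsets))
```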
図2の説明に戻り、残存エッジ抽出部12fについて説明する。残存エッジ抽出部12fは、紙幣の正損判別を行う場合に、判別対象画像と辞書画像との差分をとって、判別対象画像にのみ存在する残存エッジ点を抽出した差分画像を生成する処理を行う処理部である。
Returning to the description of FIG. 2, the remaining edge extraction unit 12f will be described. The remaining edge extraction unit 12f performs a process of generating a difference image obtained by extracting a difference between the determination target image and the dictionary image and extracting a remaining edge point that exists only in the determination target image when determining whether the bill is correct or not. It is a processing part to perform.
また、残存エッジ抽出部12fは、生成した差分画像を残存エッジ解析部12gに対して出力する処理を併せて行う。
The remaining edge extracting unit 12f also performs a process of outputting the generated difference image to the remaining edge analyzing unit 12g.
ここで、残存エッジ抽出部12fが行う残存エッジ抽出処理について、図7を用いてさらに詳細に説明する。図7は、残存エッジ抽出部12fが行う残存エッジ抽出処理を説明するための図である。
Here, the remaining edge extraction processing performed by the remaining edge extraction unit 12f will be described in more detail with reference to FIG. FIG. 7 is a diagram for explaining the remaining edge extraction processing performed by the remaining edge extraction unit 12f.
同図に示したように、残存エッジ抽出部12fは、エッジ方向検出部12cが検出した判別対象画像5の各エッジ点を、エッジ方向ごとに各方向平面へ記憶する(同図の(1)参照)。なお、判別対象画像5の各エッジ点には、辞書画像を構成する有効エッジ点以外のエッジ点が含まれる。
As shown in the figure, the remaining edge extraction unit 12f stores each edge point of the discrimination target image 5 detected by the edge direction detection unit 12c in each direction plane for each edge direction ((1) in the figure). reference). Note that each edge point of the discrimination target image 5 includes an edge point other than the effective edge points constituting the dictionary image.
そして、残存エッジ抽出部12fは、記憶部13が記憶する方向別辞書画像13dから、判別対象である紙幣の方向別辞書画像を抽出する(同図の(2)参照)。
Then, the remaining edge extraction unit 12f extracts the direction-specific dictionary image of the banknote as the discrimination target from the direction-specific dictionary image 13d stored in the storage unit 13 (see (2) in the figure).
The remaining edge extraction unit 12f then takes each difference by subtracting, from each direction plane in which the edge points of the discrimination target image 5 are stored, the dictionary image corresponding to that direction plane (see (3) in the figure).
The remaining edge extraction unit 12f then combines these differences to generate a difference image 5′ (see (4) in the figure). As a result, the remaining edge points that existed only in the discrimination target image 5 (including edge points other than the valid edge points) are extracted into the difference image 5′ (see (4) in the figure).
In the figure, the portions surrounded by the broken lines R1 and R2 are shown as remaining edge points.
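The per-direction difference of steps (1) to (4) can be sketched as follows in Python; representing the direction planes as lists of 0/1 NumPy arrays is our assumption, not something the disclosure fixes.

```python
import numpy as np

def extract_residual_edges(target_planes, dictionary_planes):
    """Steps (1)-(4): keep the edge points of the discrimination target
    image that are absent from the corresponding direction-specific
    dictionary image, then superimpose all directions into one
    difference image."""
    diffs = [np.logical_and(tgt == 1, dic == 0)
             for tgt, dic in zip(target_planes, dictionary_planes)]
    return np.logical_or.reduce(diffs).astype(np.uint8)
```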
Returning to the description of FIG. 2, the remaining edge analysis unit 12g will be described. The remaining edge analysis unit 12g is a processing unit that removes isolated points from the remaining edge points extracted by the remaining edge extraction unit 12f and generates a density image in which the density of the remaining edge points within a predetermined size is detected.
In removing the isolated points and generating the density image, the remaining edge analysis unit 12g uses the various filters stored in advance in the filter group 13b of the storage unit 13. This point will be described later with reference to FIG. 8.
The remaining edge analysis unit 12g also outputs the generated density image to the determination unit 12h.
The determination unit 12h is a processing unit that makes the final damage determination of the banknote to be discriminated based on the density image binarized with a predetermined threshold. For the predetermined threshold, the determination unit 12h refers to the determination criterion information 13e in the storage unit 13.
Here, the remaining edge analysis process performed by the remaining edge analysis unit 12g and the determination process performed by the determination unit 12h will be described in more detail with reference to FIG. 8. FIG. 8 is a diagram for explaining the remaining edge analysis process performed by the remaining edge analysis unit 12g and the determination process performed by the determination unit 12h.
As shown in the figure, the remaining edge analysis unit 12g removes, as isolated points, those remaining edge points of the difference image 5′ output by the remaining edge extraction unit 12f that are isolated from their surroundings (see (1) in the figure). For example, the figure shows a case where, among the remaining edge points of the difference image 5′, the remaining edge points surrounded by the broken line R2 are removed as isolated points and the remaining edge points surrounded by the broken line R1 remain as the final remaining edge points.
An isolated point removal filter f5 can be used to remove such isolated points (see (1) in the figure). The isolated point removal filter f5 converts the pixel of interest to white when all eight neighboring pixels of the pixel of interest are white.
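A minimal sketch of filter f5, assuming the difference image is a 0/1 array in which 1 marks a residual edge point and 0 (white) the background:

```python
import numpy as np

def remove_isolated_points(diff):
    """Filter f5 as described above: a residual edge point is turned
    white (removed) when all eight of its neighbours are white."""
    h, w = diff.shape
    padded = np.pad(diff, 1)  # zero (white) border
    # Sum of the 8 neighbours of every pixel.
    neigh = sum(padded[1 + di:1 + di + h, 1 + dj:1 + dj + w]
                for di in (-1, 0, 1) for dj in (-1, 0, 1)
                if (di, dj) != (0, 0))
    # A point survives only if at least one neighbour is an edge point.
    return np.where(neigh == 0, 0, diff)
```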
The remaining edge analysis unit 12g then applies the density detection filters f6 and f7 to the difference image 5′ (see (2) in the figure) to detect the density of the remaining edge portions as pixel values. The density detection filters f6 and f7 are filters of a predetermined size composed of L1 × L2 filter elements.
Here, the predetermined size is preferably set to the minimum size of the stain to be detected in the damage determination. All the filter coefficients can be set to 1, but when the concentration of the remaining edge points is to be taken into account, the coefficients near the center of the filter may be increased.
The horizontally long density detection filter f6 is applied to the difference image 5′ to detect the density of remaining edge portions that are long in the horizontal direction, and the vertically long density detection filter f7 is applied to detect the density of remaining edge portions that are long in the vertical direction.
For example, (2-a) in the figure shows the case where the horizontally long density detection filter f6 is applied to the difference image 5′. In this case, the density of the horizontally long remaining edge portion appears as the pixel values of the portion surrounded by the broken line Ry.
Likewise, as shown in (2-b) of the figure, when the vertically long density detection filter f7 is applied to the difference image 5′, the density of the vertically long remaining edge portion appears as the pixel values of the portion surrounded by the broken line Rt.
The remaining edge analysis unit 12g then outputs a density image 5″ obtained by combining the application results of the density detection filters f6 and f7 to the determination unit 12h.
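Assuming the same array representation, one plausible reading of the filtering and synthesis steps is the following sketch; the disclosure does not specify how the two filter responses are combined, so the per-pixel maximum used here is an assumption.

```python
import numpy as np
from scipy.ndimage import convolve

def density_image(diff, l1=15, l2=51):
    """Apply the horizontally long filter f6 (l1 x l2) and the vertically
    long filter f7 (l2 x l1), all coefficients 1, to the difference image
    and combine the two responses into the density image."""
    f6 = np.ones((l1, l2))  # wide: responds to horizontally long stains
    f7 = np.ones((l2, l1))  # tall: responds to vertically long stains
    horiz = convolve(diff.astype(float), f6, mode='constant')
    vert = convolve(diff.astype(float), f7, mode='constant')
    # The text only says the two results are combined; a per-pixel
    # maximum is one plausible way to synthesize them.
    return np.maximum(horiz, vert)
```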
The determination unit 12h then binarizes the density image 5″ with a predetermined threshold (see (3) in the figure). The predetermined threshold is a pixel value based on the predetermined size of the density detection filters f6 and f7 described above and on a predetermined reference density. The predetermined reference density is the ratio of the number of remaining edge points expected to be detected within the predetermined size if the banknote is a damaged note to the total value of the filter coefficients.
Here is a specific example of the predetermined threshold. Suppose that, for the density detection filters f6 and f7 shown in (2) of the figure, "L1" is "15", "L2" is "51", and the filter coefficients are all "1". Further, assume that the predetermined reference density is "0.05". In this case, the predetermined threshold can be set to "38 (≈ 765 × 0.05)", the product of the total value of the filter coefficients, "765 (= 15 × 51 × 1)", and the predetermined reference density, "0.05".
Therefore, when the predetermined threshold is set to "38", the black portions surrounded by the broken line R shown in (3) of the figure are all portions whose pixel values exceed "38". When such black portions surrounded by the broken line R exist, the determination unit 12h determines that the banknote to be discriminated is a damaged note.
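The worked threshold and the final decision can be reproduced in a few lines; the helper name is_damaged is ours, introduced only for illustration.

```python
# Reproducing the worked example: 15 x 51 filter, all coefficients 1,
# reference density 0.05, so the threshold is int(765 * 0.05) = 38.
coeff_sum = 15 * 51 * 1
threshold = int(coeff_sum * 0.05)   # 38

def is_damaged(density):
    """A note is judged damaged if any pixel of the density image
    exceeds the threshold (density is a NumPy array)."""
    return bool((density > threshold).any())
```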
In the description based on this figure, the density detection filters f6 and f7 have been shown as rectangular filters; however, when performing a damage determination that places more emphasis on the concentration of the remaining edge points than on the shape or size of the stain, a circular Gaussian filter may be used instead.
Returning to the description of FIG. 2, the storage unit 13 will be described. The storage unit 13 is a storage unit composed of a storage device such as a hard disk drive or a non-volatile memory, and stores the setting information 13a, the filter group 13b, the edge point amounts by direction plane 13c, the direction-specific dictionary images 13d, and the determination criterion information 13e.
The setting information 13a is information relating to initial settings such as the quantization number and the rotational deviation absorption amount. The filter group 13b defines in advance the various filters used in the edge direction detection process (see FIG. 3) and the remaining edge analysis process (see FIG. 8), and is referred to by the edge direction detection unit 12c and the remaining edge analysis unit 12g.
The edge point amounts by direction plane 13c are the edge point amounts in the direction planes corresponding to the edge points of the N genuine note images, and are registered and updated by the valid edge determination unit 12d. The direction-specific dictionary images 13d are a collection of direction-specific dictionary images registered by the expansion processing unit 12e for each quantized direction; they are referred to by the remaining edge extraction unit 12f when compared with the discrimination target image. The determination criterion information 13e is information relating to the predetermined threshold used in the final damage determination, and is referred to by the determination unit 12h.
Note that the setting information 13a, the filter group 13b, and the determination criterion information 13e are definition information defined in advance, but they can be changed as appropriate even while the banknote discriminating apparatus 10 is in operation.
Next, the procedure of the processing executed by the banknote discriminating apparatus 10 according to the first embodiment will be described with reference to FIG. 9. FIG. 9 is a flowchart illustrating the processing procedure executed by the banknote discriminating apparatus 10 according to the first embodiment.
As shown in FIG. 9, the setting unit 12a performs initial settings such as the quantization number and the rotational deviation absorption amount (step S101). Then, in creating the dictionary images, the image data acquisition unit 12b acquires the genuine note images of N banknotes (step S102).
The banknote discriminating apparatus 10 then performs a dictionary image creation process that creates a dictionary image for each edge direction using the N genuine note images as input data (step S103). The procedure of this dictionary image creation process will be described later with reference to FIG. 10.
Then, to perform the damage determination of a banknote, the image data acquisition unit 12b acquires an image of the banknote to be discriminated (step S104).
The banknote discriminating apparatus 10 then performs a damage determination process using the discrimination target image as input data (step S105). The procedure of this damage determination process will be described later with reference to FIG. 11.
The banknote discriminating apparatus 10 then determines whether there is no next banknote to be discriminated (step S106); when there is no next banknote to be discriminated (step S106, Yes), the processing ends. On the other hand, when the determination condition of step S106 is not satisfied (step S106, No), the banknote discriminating apparatus 10 repeats the processing from step S104 onward.
In actual operation, the dictionary image creation stage (see the portion surrounded by the broken line F1 in the figure) and the damage determination stage (see the portion surrounded by the broken line F2 in the figure) may be performed separately.
Next, the detailed procedure of the dictionary image creation process shown in FIG. 9 (see step S103 in FIG. 9) will be described with reference to FIG. 10. FIG. 10 is a flowchart illustrating the procedure of the dictionary image creation process executed by the banknote discriminating apparatus 10 according to the first embodiment. The dictionary image creation process referred to here corresponds to the processing from the N genuine note images up to the creation of a dictionary image for each edge direction.
As shown in the figure, the edge direction detection unit 12c smooths the N genuine note images (step S201) and detects each edge point and its direction in the N genuine note images (step S202).
Subsequently, the valid edge determination unit 12d counts up, for each edge point of the N genuine note images, the edge point amount in the corresponding direction plane (the edge point amounts by direction plane 13c in the storage unit 13) (step S203). The valid edge determination unit 12d then stores, on each direction plane, the pixels whose edge point amount in that direction plane is equal to or greater than a predetermined determination threshold, as valid edge points (step S204).
Subsequently, the expansion processing unit 12e expands each valid edge point stored in each direction plane by a predetermined expansion amount (step S205). The expansion processing unit 12e then stores the direction planes in the storage unit 13 as the direction-specific dictionary images 13d (step S206), and the processing ends.
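Steps S203 to S206 can be summarized in the following sketch, which reuses dilate_direction_plane from the earlier sketch; the data layout (one list of 0/1 direction planes per genuine note image) is assumed for illustration only.

```python
import numpy as np

def build_dictionary(edge_planes_per_note, vote_threshold, offsets_by_dir):
    """Accumulate per-direction edge-point counts over the N genuine note
    images (S203), keep pixels at or above the determination threshold as
    valid edge points (S204), then dilate them per direction (S205)."""
    n_dirs = len(edge_planes_per_note[0])
    dictionary = []
    for d in range(n_dirs):
        votes = sum(note[d] for note in edge_planes_per_note)   # S203
        valid = (votes >= vote_threshold).astype(np.uint8)      # S204
        dictionary.append(
            dilate_direction_plane(valid, offsets_by_dir[d]))   # S205
    return dictionary                                           # S206
```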
Next, the detailed procedure of the damage determination process shown in FIG. 9 (see step S105 in FIG. 9) will be described with reference to FIG. 11. FIG. 11 is a flowchart illustrating the procedure of the damage determination process executed by the banknote discriminating apparatus 10 according to the first embodiment. The damage determination process referred to here corresponds to the processing from comparing and analyzing the discrimination target image and the dictionary images up to determining whether the banknote to be discriminated is a correct note or a damaged note.
As shown in the figure, the edge direction detection unit 12c smooths the discrimination target image (step S301) and detects each edge point of the discrimination target image and its direction (step S302).
Subsequently, the remaining edge extraction unit 12f stores each edge point of the discrimination target image in the corresponding direction plane (step S303). The remaining edge extraction unit 12f then takes each difference between each direction plane and the corresponding dictionary image of the direction-specific dictionary images 13d (step S304), and combines the differences to extract the remaining edge points (step S305).
Subsequently, the remaining edge analysis unit 12g removes the isolated points from the remaining edge points (step S306) and then analyzes the density of the remaining edge points (step S307).
The determination unit 12h then determines whether the density of the remaining edge points analyzed by the remaining edge analysis unit 12g is less than the predetermined threshold (step S308). For the predetermined threshold, the determination criterion information 13e is referred to.
When the density is less than the predetermined threshold (step S308, Yes), the determination unit 12h determines that the banknote to be discriminated is a correct note (step S309), and the processing ends.
On the other hand, when the determination condition of step S308 is not satisfied (step S308, No), the determination unit 12h determines that the banknote to be discriminated is a damaged note (step S310), and the processing ends.
As described above, in the first embodiment, the banknote discriminating apparatus is configured such that the setting unit performs initial settings such as quantization; the image data acquisition unit generates a plurality of genuine note image data items when the dictionary images are created and generates discrimination target image data when the damage determination of a banknote is performed; the edge direction detection unit detects the edge points of the generated image data and their directions; the valid edge determination unit determines the valid edge points that constitute the dictionary images; the expansion processing unit expands the valid edge points by a predetermined amount; the remaining edge extraction unit extracts the remaining edge points from the discrimination target image; the remaining edge analysis unit detects the density of the remaining edge points; and the determination unit performs the damage determination of the banknote based on the detected density of the remaining edge points. Therefore, even when there is diversity in the design and quality of paper sheets, an optimum damage determination can be performed with high accuracy.
In the first embodiment described above, the case where the banknote discriminating apparatus creates dictionary images in which each edge point is expanded by a predetermined amount has been described; however, the banknote discriminating apparatus may instead create the dictionary images based on the number of peripheral edges of each pixel. In the following, an embodiment of a banknote discriminating apparatus that creates dictionary images based on the number of peripheral edges of each pixel will be described as a second embodiment.
FIG. 12 is a diagram illustrating an outline of the dictionary image creation of the banknote discriminating apparatus according to the second embodiment. As shown in the figure, the main feature of the banknote discriminating apparatus according to the second embodiment is that it counts the number of peripheral edges for each pixel in each direction plane and creates a dictionary image for each direction based on statistics of the numbers of peripheral edges.
The peripheral edges referred to here are the edge points included in a predetermined neighborhood range of a given pixel.
Specifically, the banknote discriminating apparatus according to the second embodiment counts, for each of the N genuine note images serving as the input data of the dictionary images, the number of peripheral edges for each pixel in each direction plane (see (1) in the figure). For counting the number of peripheral edges, a counting filter (not shown) of, for example, 9 × 9 pixels with all coefficients set to 1 can be used.
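With the same array representation as before, the 9 × 9 all-ones counting filter can be sketched as a convolution:

```python
import numpy as np
from scipy.ndimage import convolve

def count_peripheral_edges(plane, size=9):
    """Count, for every pixel, the edge points inside its size x size
    neighbourhood; size=9 matches the 9 x 9 counting filter with all
    coefficients 1 mentioned above."""
    kernel = np.ones((size, size))
    return convolve(plane.astype(float), kernel, mode='constant')
```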
Note that (1) in the figure shows an example in which the number of peripheral edges of each pixel is stored, pixel by pixel, in the direction planes of the peripheral edge counting direction plane groups 1 to N corresponding to the 1st to Nth genuine note images, respectively. This example does not limit the contents of the dictionary image creation process of the banknote discriminating apparatus according to the second embodiment, but the following description is based on this assumption.
The banknote discriminating apparatus according to the second embodiment then calculates, for the counted numbers of peripheral edges, statistics over the N images (direction plane groups 1 to N) (see (2) in the figure).
The banknote discriminating apparatus according to the second embodiment then determines a reference pixel value for each pixel in each direction plane based on the calculated statistics, and stores the reference pixel values in the direction planes for the dictionary images (see (3) in the figure). The determination of the reference pixel values will be described later with reference to FIG. 13.
Next, the configuration of the banknote discriminating apparatus 10a according to the second embodiment will be described with reference to FIG. 13. FIG. 13 is a block diagram illustrating the configuration of the banknote discriminating apparatus 10a according to the second embodiment. In the figure, the same reference numerals are given to the same components as those of the banknote discriminating apparatus 10 according to the first embodiment shown in FIG. 2, and descriptions of the components overlapping with those of the first embodiment are omitted below.
As shown in FIG. 13, the banknote discriminating apparatus 10a according to the second embodiment differs from the banknote discriminating apparatus 10 according to the first embodiment described above in that the control unit 12 further includes a peripheral edge counting unit 12i and includes a statistical processing unit 12j in place of the valid edge determination unit 12d and the expansion processing unit 12e (see FIG. 2). It also differs from the banknote discriminating apparatus 10 according to the first embodiment in that the storage unit 13 includes peripheral edge counting planes 13f in place of the edge point amounts by direction plane 13c.
The peripheral edge counting unit 12i is a processing unit that counts, for each pixel, the number of edge points detected by the edge direction detection unit 12c within a predetermined neighborhood range.
The peripheral edge counting unit 12i is also a processing unit that stores the counted number of peripheral edges of each pixel in each direction plane into the peripheral edge counting planes 13f in the case of dictionary image creation, and outputs it to the remaining edge extraction unit 12f in the case of the damage determination.
The statistical processing unit 12j is a processing unit that refers to the peripheral edge counting planes 13f and calculates, for the number of peripheral edges of each pixel in each direction plane counted by the peripheral edge counting unit 12i, statistics over the N genuine note images. Such statistics include the maximum value, the average value, the variance, and the standard deviation.
The statistical processing unit 12j is also a processing unit that determines a reference pixel value for each pixel in each direction plane based on the calculated statistics. In determining the reference pixel values, various values and parameters included in the statistics can be combined.
Here, let V be the maximum value included in the statistics, μ the average value, σ the standard deviation, and α a parameter. For example, the reference pixel value can be set to "V + α" based on the maximum value of the statistics. Alternatively, the reference pixel value may be set to "μ + ασ" based on the average value and the standard deviation of the statistics. The formula for calculating the reference pixel value can be changed as appropriate according to the design of the banknote, the required discrimination accuracy, and the like.
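Both formulas can be sketched as follows, where counts is assumed to be an (N, H, W) array holding the peripheral-edge counts of one direction plane over the N genuine note images:

```python
import numpy as np

def reference_pixel_values(counts, alpha, rule='mean_sigma'):
    """Derive the per-pixel reference values of one direction plane.

    rule='max'        -> V + alpha          (maximum-based formula)
    rule='mean_sigma' -> mu + alpha * sigma (mean/standard-deviation formula)
    """
    if rule == 'max':
        return counts.max(axis=0) + alpha
    return counts.mean(axis=0) + alpha * counts.std(axis=0)
```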
The statistical processing unit 12j also stores the determined reference pixel value of each pixel in each direction plane into the corresponding direction plane for the dictionary images, and stores the resulting direction planes in the storage unit 13 as the direction-specific dictionary images 13d.
Next, the procedure of the dictionary image creation process executed by the banknote discriminating apparatus 10a according to the second embodiment will be described with reference to FIG. 14. FIG. 14 is a flowchart illustrating the procedure of the dictionary image creation process executed by the banknote discriminating apparatus 10a according to the second embodiment.
As shown in the figure, the edge direction detection unit 12c smooths the N genuine note images (step S401) and detects each edge point and its direction in the N genuine note images (step S402).
Subsequently, the peripheral edge counting unit 12i counts, for the N genuine note images, the number of edge points within a predetermined neighborhood range (the number of peripheral edges) for each pixel in each direction plane of the direction plane groups 1 to N (step S403).
The statistical processing unit 12j then calculates, for the number of peripheral edges of each pixel counted by the peripheral edge counting unit 12i, statistics over the N images (direction plane groups 1 to N) (step S404). The statistical processing unit 12j then determines a reference pixel value for each pixel in each direction plane based on the calculated statistics (step S405).
The statistical processing unit 12j then stores the determined reference pixel values of the pixels in the storage unit 13 as the direction-specific dictionary images 13d (step S406), and the processing ends.
Next, the procedure of the damage determination process executed by the banknote discriminating apparatus 10a according to the second embodiment will be described with reference to FIG. 15. FIG. 15 is a flowchart illustrating the procedure of the damage determination process executed by the banknote discriminating apparatus 10a according to the second embodiment.
As shown in the figure, the edge direction detection unit 12c smooths the discrimination target image (step S501) and detects each edge point of the discrimination target image and its direction (step S502).
Subsequently, the peripheral edge counting unit 12i stores each edge point of the discrimination target image in the corresponding direction plane (step S503). The peripheral edge counting unit 12i then counts the number of peripheral edges for each pixel in each direction plane, and sets the count value as the pixel value of each pixel in each direction plane (step S504).
The remaining edge extraction unit 12f then compares each direction plane with the corresponding dictionary image of the direction-specific dictionary images 13d pixel by pixel (step S505), and sets the pixels whose pixel values exceed the reference pixel values as the remaining edge points in each direction plane (step S506). The remaining edge extraction unit 12f then combines the remaining edge points of the direction planes (step S507).
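Steps S505 to S507 reduce to a per-pixel comparison against the reference values followed by a superimposition of the direction planes; a sketch under the same array assumptions as before:

```python
import numpy as np

def residual_edges_by_count(count_planes, reference_planes):
    """Mark pixels whose peripheral-edge count exceeds the reference
    pixel value of the dictionary (S505-S506), then combine all
    direction planes into one residual-edge image (S507)."""
    residual = [cnt > ref
                for cnt, ref in zip(count_planes, reference_planes)]
    return np.logical_or.reduce(residual).astype(np.uint8)
```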
Subsequently, the remaining edge analysis unit 12g removes the isolated points from the remaining edge points (step S508) and then analyzes the density of the remaining edge points (step S509).
The determination unit 12h then determines whether the density of the remaining edge points analyzed by the remaining edge analysis unit 12g is less than the predetermined threshold (step S510). For the predetermined threshold, the determination criterion information 13e is referred to.
When the density is less than the predetermined threshold (step S510, Yes), the determination unit 12h determines that the banknote to be discriminated is a correct note (step S511), and the processing ends.
On the other hand, when the determination condition of step S510 is not satisfied (step S510, No), the determination unit 12h determines that the banknote to be discriminated is a damaged note (step S512), and the processing ends.
As described above, in the second embodiment, the statistical processing unit determines the reference pixel value of each pixel based on statistics of the number of peripheral edges of each pixel in each direction plane counted by the peripheral edge counting unit. A macroscopic element, the statistics, can thus be incorporated on top of a microscopic element, the per-pixel number of peripheral edges, so that an optimum damage determination can be performed with high accuracy even when there is diversity in the design and quality of paper sheets.
In each of the embodiments described above, the case of performing the damage determination of banknotes has mainly been described, but the paper sheets subject to the damage determination are not particularly limited; they may be, for example, gift certificates or passbooks.
Further, in each of the embodiments described above, the case of performing the damage determination of paper sheets has mainly been described, but the method according to the present invention may also be applied to the type discrimination of paper sheets. In such a case, for banknotes for example, a dictionary image is generated for each denomination, and the discrimination target image is compared with the dictionary images of all denominations (that is, as many comparisons as there are denominations). The denomination of the dictionary image for which the density of the remaining edge points is equal to or less than a predetermined value can then be determined to be the denomination of the discrimination target image.
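A sketch of this denomination loop; detect_edge_planes and residual_density are hypothetical wrappers around the edge detection and density analysis steps sketched earlier, and the iteration order over denominations is arbitrary.

```python
def classify_denomination(target_image, dictionaries_by_denom, limit):
    """Try the dictionary of every denomination and return the first one
    whose residual-edge density does not exceed the limit.

    detect_edge_planes and residual_density are hypothetical helpers
    standing in for the edge detection and density analysis above."""
    planes = detect_edge_planes(target_image)
    for denom, dictionary in dictionaries_by_denom.items():
        if residual_density(planes, dictionary) <= limit:
            return denom
    return None  # no denomination matched
```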
As described above, the paper sheet discriminating apparatus and the paper sheet discriminating method according to the present invention are useful when optimum damage discrimination and type discrimination are to be performed with high accuracy even when there is diversity in the design and quality of paper sheets, and they are particularly suitable for application to an apparatus that discriminates highly circulated paper sheets such as banknotes.
DESCRIPTION OF SYMBOLS
1 genuine note image
5 discrimination target image
10 banknote discriminating apparatus
10a banknote discriminating apparatus
11 line sensor unit
12 control unit
12a setting unit
12b image data acquisition unit
12c edge direction detection unit
12d valid edge determination unit
12e expansion processing unit
12f remaining edge extraction unit
12g remaining edge analysis unit
12h determination unit
12i peripheral edge counting unit
12j statistical processing unit
13 storage unit
13a setting information
13b filter group
13c edge point amounts by direction plane
13d direction-specific dictionary images
13e determination criterion information
13f peripheral edge counting plane
Claims (9)
- A paper sheet discriminating apparatus that discriminates a paper sheet based on a captured image of the paper sheet, the apparatus comprising:
direction-specific dictionary image generating means for, when an edge position and an edge direction at the edge position are detected from a learning image that is a captured image of a valid paper sheet, generating each of a plurality of direction-specific dictionary images prepared for respective predetermined edge direction ranges by allocating the edge position to the corresponding position of the direction-specific dictionary image corresponding to the edge direction; and
discriminating means for discriminating the damage and type of a paper sheet to be discriminated by comparing an input image, which is a captured image of the paper sheet, with the direction-specific dictionary images.
- The paper sheet discriminating apparatus according to claim 1, further comprising:
direction-specific input image generating means for, when an edge position and an edge direction at the edge position are detected from the input image, generating direction-specific input images by separating the edge positions into the respective edge direction ranges based on the edge directions; and
remaining edge extracting means for extracting a remaining edge region by superimposing, over all the edge direction ranges, the edge positions that exist only in the direction-specific input image out of the direction-specific input image and the direction-specific dictionary image relating to the same edge direction range,
wherein the discriminating means discriminates the damage of the paper sheet to be discriminated based on the remaining edge region extracted by the remaining edge extracting means.
- The paper sheet discriminating apparatus according to claim 1, wherein, when the edge direction at the detected edge position is within a predetermined range from the boundary between two adjacent edge direction ranges, the direction-specific dictionary image generating means allocates the edge position to the corresponding positions of the direction-specific dictionary images respectively corresponding to the two edge direction ranges.
- The paper sheet discriminating apparatus according to claim 2, further comprising density detecting means for detecting the density of the remaining edge region extracted by the remaining edge extracting means, wherein the discriminating means discriminates the damage of the paper sheet to be discriminated based on the density detected by the density detecting means.
- The paper sheet discriminating apparatus according to any one of claims 1 to 4, wherein the direction-specific dictionary image generating means uses a predetermined number of the learning images and sets, as a valid edge region, the pixels to which a predetermined number or more of the edge positions have been allocated.
- The paper sheet discriminating apparatus according to claim 5, wherein the direction-specific dictionary image generating means performs, for each of the direction-specific dictionary images, an expansion process that extends the edge positions or the valid edge region to surrounding pixels.
- The paper sheet discriminating apparatus according to claim 6, wherein the direction-specific dictionary image generating means performs the expansion process giving priority to the edge direction range corresponding to the direction-specific dictionary image.
- The paper sheet discriminating apparatus according to claim 1, further comprising:
peripheral edge counting means for counting the number of pixels to which the edge positions included in a predetermined neighborhood range of each pixel in the direction-specific dictionary image have been allocated; and
statistic calculating means for calculating statistics, over a plurality of the learning images, of the numbers counted for each pixel by the peripheral edge counting means,
wherein, when the statistics are calculated by the statistic calculating means, the direction-specific dictionary image generating means generates the direction-specific dictionary images by setting the pixel values of the pixels in the direction-specific dictionary images based on the statistics.
- A paper sheet discriminating method for discriminating a paper sheet based on a captured image of the paper sheet, the method comprising:
a direction-specific dictionary image generating step of, when an edge position and an edge direction at the edge position are detected from a learning image that is a captured image of a valid paper sheet, generating each of a plurality of direction-specific dictionary images prepared for respective predetermined edge direction ranges by allocating the edge position to the corresponding position of the direction-specific dictionary image corresponding to the edge direction; and
a discriminating step of discriminating the damage and type of a paper sheet to be discriminated by comparing an input image, which is a captured image of the paper sheet, with the direction-specific dictionary images.
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2010/054491 WO2011114447A1 (en) | 2010-03-17 | 2010-03-17 | Paper discriminating device and method of discriminating paper |
CN201080065507.5A CN102804233B (en) | 2010-03-17 | 2010-03-17 | Paper discriminating gear and paper method of discrimination |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2010/054491 WO2011114447A1 (en) | 2010-03-17 | 2010-03-17 | Paper discriminating device and method of discriminating paper |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011114447A1 true WO2011114447A1 (en) | 2011-09-22 |
Family
ID=44648577
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2010/054491 WO2011114447A1 (en) | 2010-03-17 | 2010-03-17 | Paper discriminating device and method of discriminating paper |
Country Status (2)
Country | Link |
---|---|
CN (1) | CN102804233B (en) |
WO (1) | WO2011114447A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2016192026A (en) * | 2015-03-31 | 2016-11-10 | グローリー株式会社 | Paper sheets determination device and paper sheets determination method |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104464078B (en) * | 2014-12-08 | 2017-06-30 | 深圳怡化电脑股份有限公司 | By the method and system of photochromatic printing ink identification of damage paper money |
CN107680246B (en) * | 2017-10-24 | 2020-01-14 | 深圳怡化电脑股份有限公司 | Method and equipment for positioning curve boundary in paper money pattern |
JP6924413B2 (en) * | 2017-12-25 | 2021-08-25 | オムロン株式会社 | Data generator, data generation method and data generation program |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH01161137A (en) * | 1987-12-16 | 1989-06-23 | Fujitsu Ltd | Recognizing device |
JPH04294260A (en) * | 1991-03-22 | 1992-10-19 | Takaoka Electric Mfg Co Ltd | Inspecting apparatus for printed pattern quality |
JPH10105649A (en) * | 1996-09-30 | 1998-04-24 | Glory Ltd | Pattern discrimination device |
JP2001338304A (en) * | 1999-08-26 | 2001-12-07 | Nano Geometry Kenkyusho:Kk | Device and method for pattern inspection, and recording medium |
JP2008152450A (en) * | 2006-12-15 | 2008-07-03 | Toshiba Corp | Authenticating device for paper sheet and verifying method for paper sheet |
Family Cites Families (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN1653492B (en) * | 2002-08-30 | 2010-05-12 | 富士通株式会社 | Device and method for identifying paper sheet |
JP4563740B2 (en) * | 2004-07-13 | 2010-10-13 | グローリー株式会社 | Image collation device, image collation method, and image collation program. |
JP4679953B2 (en) * | 2005-04-22 | 2011-05-11 | グローリー株式会社 | Paper sheet damage ticket determination device, damage ticket determination method, and damage ticket determination program |
WO2009031242A1 (en) * | 2007-09-07 | 2009-03-12 | Glory Ltd. | Paper sheet identification device and paper sheet identification method |
- 2010-03-17 WO PCT/JP2010/054491 patent/WO2011114447A1/en active Application Filing
- 2010-03-17 CN CN201080065507.5A patent/CN102804233B/en active Active
Also Published As
Publication number | Publication date |
---|---|
CN102804233B (en) | 2015-08-12 |
CN102804233A (en) | 2012-11-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5616958B2 (en) | Method for banknote detector device and banknote detector device | |
KR101215278B1 (en) | Detection of document security marks using run profiles | |
JP5108018B2 (en) | Paper sheet identification device and paper sheet identification method | |
JP5174513B2 (en) | Paper sheet stain detection apparatus and stain detection method | |
WO2011114447A1 (en) | Paper discriminating device and method of discriminating paper | |
KR20140133860A (en) | Security element and method to inspect authenticity of a print | |
CN106599923B (en) | Method and device for detecting seal anti-counterfeiting features | |
WO2014201438A2 (en) | Printed authentication for low resolution reproductions | |
Wu et al. | A printer forensics method using halftone dot arrangement model | |
WO2016158392A1 (en) | Paper sheet detection device and paper sheet detection method | |
US7738690B2 (en) | Verification method for determining areas within an image corresponding to monetary banknotes | |
JP2006338330A (en) | Device and method for identifying slip of paper | |
Tkachenko et al. | Fighting against forged documents by using textured image | |
JP4679953B2 (en) | Paper sheet damage ticket determination device, damage ticket determination method, and damage ticket determination program | |
EP3367347B1 (en) | Validation of damaged banknotes | |
CN102592151A (en) | Blind detection method for median filter in digital image | |
US7844098B2 (en) | Method for performing color analysis operation on image corresponding to monetary banknote | |
EP1791081B1 (en) | Method for detecting perforation on the edge of an image of a form | |
JP4858046B2 (en) | Image recognition apparatus, copying apparatus, and image recognition method | |
JP4743065B2 (en) | Image recognition apparatus, copying apparatus, and image recognition method | |
JP2006338331A (en) | Device and method for registering slip | |
JP4187043B2 (en) | Image processing device | |
EP2911123B1 (en) | A method and device for characterising the state of use of banknotes, and their classification as fit and unfit for circulation | |
JP2005031802A (en) | Medium authenticity discriminating device | |
JP6732428B2 (en) | Image processing device, halftone dot determination method, and program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | WWE | Wipo information: entry into national phase | Ref document number: 201080065507.5; Country of ref document: CN |
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 10847862; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: DE |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 10847862; Country of ref document: EP; Kind code of ref document: A1 |
 | NENP | Non-entry into the national phase | Ref country code: JP |