CN108256475B - Bill image inversion detection method

Bill image inversion detection method

Info

Publication number
CN108256475B
Authority
CN
China
Prior art keywords
image
gray value
gray
average value
value
Prior art date
Legal status
Active
Application number
CN201810044894.2A
Other languages
Chinese (zh)
Other versions
CN108256475A (en)
Inventor
韦海成
肖明霞
祝玲
许亚杰
杨懋
王蓉
钞一非
Current Assignee
North Minzu University
Original Assignee
North Minzu University
Priority date
Filing date
Publication date
Application filed by North Minzu University
Priority to CN201810044894.2A
Publication of CN108256475A
Application granted
Publication of CN108256475B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G06V10/242 Aligning, centring, orientation detection or correction of the image by image rotation, e.g. by 90 degrees
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10 Character recognition
    • G06V30/40 Document-oriented image-based pattern recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Image Analysis (AREA)
  • Inspection Of Paper Currency And Valuable Securities (AREA)

Abstract

The invention provides a bill image inversion detection method, which comprises: preprocessing an acquired bill image; uniformly segmenting the preprocessed bill image into regions and calculating the gray value of each pixel point in each region; selecting the same gray value interval from the 0-255 gray values for each region of the bill image and calculating, for each region, the average gray value of all pixel points within that interval; and determining the comparison relation of the regions with respect to the average gray values and comparing it with the comparison relation of the regions in the judgment model to judge whether the bill image is inverted. The method uses the comparison relation of the regions of the bill image with respect to their average gray values as the judgment basis; the principle is simple, the amount of computation is small, the accuracy is high, and the detection result is clear, so the method better meets real-time and accuracy requirements in application.

Description

Bill image inversion detection method
Technical Field
The invention relates to the technical field of image processing, in particular to a bill image inversion detection method.
Background
In image or character recognition, the orientation of the captured image has an important influence on the final recognition result. In particular, when image information is segmented using a standard template, the image cannot be correctly recognized if its orientation cannot be identified. Traditional solutions mainly rely on OCR recognition or image feature projection; both methods involve a large amount of computation and insufficient accuracy, and thus have difficulty meeting real-time and accuracy requirements in use.
Disclosure of Invention
The invention aims to provide a bill image inversion detection method that detects bill image inversion with a simple algorithm based on multi-region histogram feature analysis, and that can improve both the speed and the accuracy of bill image inversion detection.
In order to achieve the above object, the present invention provides the following first technical means: a bill image inversion detection method comprises the following steps:
preprocessing the acquired bill image to make the length and width of the bill image consistent with the length and width of the standard image in the judgment model;
uniformly performing region segmentation on the preprocessed bill image, wherein the segmentation mode is consistent with that of a standard image when a judgment model is established; calculating the gray value of each pixel point in each region;
selecting the same gray value interval from the 0-255 gray values for each area in the bill image, wherein the selected gray value interval is the same as the gray value interval selected when the judgment model is established; respectively calculating the gray value average value of all pixel points of each region in the gray value interval;
and determining the comparison relation of each area with respect to the gray value average value according to the gray value average value calculated by each area, comparing the comparison relation with the comparison relation of each area with respect to the gray value average value in the judgment model, and if the two comparison relations are consistent, determining that the direction of the bill image is consistent with the direction of the standard image in the judgment model.
Based on the first technical scheme of the invention, the first implementation mode is as follows: the method also comprises the steps of establishing a judgment model and determining the comparison relation of each area in the judgment model about the gray value average value, and specifically comprises the following steps:
preprocessing a plurality of collected bill images in the same direction to enable the length and width of each bill image to be equal to the length and width of a bill, and taking the preprocessed bill images as standard images;
uniformly performing region segmentation on each standard image, wherein the segmentation modes of each standard image are consistent, and calculating the gray value of each pixel point in each region aiming at each standard image;
selecting the same gray value interval from 0-255 gray values aiming at each area of each standard image, wherein the selected gray value intervals of each standard image are consistent; respectively calculating the gray value average value of all pixel points in each area of each standard image in the gray value interval;
determining the comparison relation of each area with respect to the gray value average value according to the gray value average value calculated by each area aiming at each standard image; and counting the comparison relation of each area in each standard image about the gray value average value, and selecting the comparison relation with the largest proportion from the comparison relations as the comparison relation of each area in the judgment model about the gray value average value.
Based on the first technical solution of the present invention, the second embodiment is: preprocessing the acquired bill image, specifically comprising bill image size adjustment and inclination angle adjustment;
the bill image size adjustment and inclination angle adjustment specifically comprise the following steps: establishing a plane rectangular coordinate system on the plane of the bill image; according to the formulas x = a0 + a1u + a2v + a3uv and y = b0 + b1u + b2v + b3uv, wherein (u, v) denotes the known coordinates of each pixel point in the bill image before size and inclination adjustment, and (x, y) denotes the coordinates of each pixel point in the bill image after size and inclination adjustment, selecting four pixel points and setting their coordinates after size and inclination adjustment, and calculating the transformation coefficients a0, a1, a2, a3, b0, b1, b2, b3 from the known coordinates of the four pixel points before adjustment and their set coordinates after adjustment; and calculating the adjusted coordinates of each remaining pixel point from the obtained transformation coefficients, thereby completing the size adjustment and inclination angle adjustment of the bill image.
Based on the first technical aspect of the present invention and the second embodiment of the first technical aspect, a third embodiment is: uniformly performing region segmentation on the preprocessed bill image, wherein the segmentation mode is consistent with that of a standard image when a judgment model is established; calculating the gray value of each pixel point in each region; the method specifically comprises the following steps:
dividing the preprocessed bill image into at least two regions, wherein the area occupied by each region is equal; the position of each region in the plane rectangular coordinate system corresponds to the position of each region in the judgment model in the plane rectangular coordinate system one by one; each region corresponds to one number, and the number of each region is consistent with the number of the corresponding region in the judgment model;
by the formula
Figure BDA0001550602290000041
Figure BDA0001550602290000042
Calculating a Gray value Gray [ i, j ]]Wherein i is 1,2,3.. W, j is 1,2,3.. H, W represents the total row number of pixel points in the bill image, H represents the total column number of pixel points in the bill image, Gray [ i, j ]. H]Representing a pixel point [ i, j]Of gray value of R [ i, j ]]Representing a pixel point [ i, j]Of the RGB color space, G [ i, j]Representing a pixel point [ i, j]B [ i, j ] of the RGB color space]Representing a pixel point [ i, j]The blue component value of the RGB color space.
Based on the first technical aspect of the present invention and the third embodiment of the first technical aspect, the fourth embodiment is: determining a comparison relation of each region with respect to the gray value average value according to the gray value average value calculated by each region, comparing the comparison relation with the comparison relation of each region with respect to the gray value average value in the judgment model, and if the two comparison relations are consistent, determining that the direction of the bill image is consistent with the direction of the standard image in the judgment model; the method specifically comprises the following steps:
comparing the gray value average values calculated in all the areas in the bill image, selecting the maximum or minimum gray value average value from the gray value average values, and taking the number of the area corresponding to the maximum or minimum gray value average value as a comparison relation;
comparing the number of the area corresponding to the maximum gray level average value in the bill image with the number of the area corresponding to the maximum gray level average value in the judgment model, and if the two numbers are consistent, the direction of the bill image is consistent with the direction of the standard image in the judgment model;
or comparing the number of the area corresponding to the minimum gray value average value in the bill image with the number of the area corresponding to the minimum gray value average value in the judgment model, and if the two numbers are consistent, the direction of the bill image is consistent with the direction of the standard image in the judgment model.
Based on the first technical aspect of the present invention and the third embodiment of the first technical aspect, a fifth embodiment is: determining a comparison relation of each region with respect to the gray value average value according to the gray value average value calculated by each region, comparing the comparison relation with the comparison relation of each region with respect to the gray value average value in the judgment model, and if the two comparison relations are consistent, determining that the direction of the bill image is consistent with the direction of the standard image in the judgment model; the method specifically comprises the following steps:
comparing the gray value average values calculated by all the areas in the bill image, sequencing all the areas of the bill image according to the gray value average values, and taking the sequencing sequence as a comparison relation;
and comparing the sequencing sequence of each region in the bill image with the sequencing sequence of each region in the judgment model according to the gray value average value, wherein if the two sequencing sequences are consistent, the direction of the bill image is consistent with the direction of the standard image in the judgment model.
Based on the first technical solution of the present invention, the sixth embodiment is: before carrying out region segmentation on the preprocessed bill image, the method also comprises the step of judging the bill version in advance according to the RGB color components of the bill image, and specifically comprises the following steps:
respectively calculating the average value of the red component values, the average value of the green component values and the average value of the blue component values of all pixel points in the bill image for the preprocessed bill image;
and respectively comparing the average value of the red component values, the average value of the green component values and the average value of the blue component values of the bill image with the average value of the red component values, the average value of the green component values and the average value of the blue component values of the standard images of all versions in the judgment model, and selecting the version in the judgment model with the closest color components as the version of the bill image.
According to the first aspect of the present invention and the sixth embodiment of the first aspect, a seventh embodiment is: the method further comprises calculating in advance the average value of the red component values, the average value of the green component values and the average value of the blue component values of the standard images of each version in the judgment model.
Compared with the prior art, the bill image inversion detection method provided by the invention uses the comparison relation of the regions of the bill image with respect to their average gray values as the judgment basis; the principle is simple, the amount of computation is small, the accuracy is high, and the detection result is clear, so the method better meets real-time and accuracy requirements in application.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings that are required to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and it will be apparent to those skilled in the art that other relevant drawings can be obtained from the drawings without inventive effort.
Fig. 1 is a flowchart of a bill image inversion detection method according to an embodiment.
Fig. 2 illustrates a method for establishing a judgment model according to an embodiment.
FIG. 3 is a schematic diagram showing segmentation of a bill image region in an embodiment.
Fig. 4 is a gray level histogram of the a region in the bill image shown in fig. 3.
Fig. 5 is a gray level histogram of the B region in the bill image shown in fig. 3.
Fig. 6 shows a gray level histogram of the C region in the bill image shown in fig. 3.
Fig. 7 is a gray level histogram of the D region in the bill image shown in fig. 3.
FIG. 8 is a histogram of gray values in the range of 0-40 gray values for region A of the bill image shown in FIG. 3.
FIG. 9 is a histogram of gray values in the range of 0-40 gray values for region B of the bill image shown in FIG. 3.
FIG. 10 is a histogram of gray values in the range of 0-40 gray values for region C of the bill image shown in FIG. 3.
FIG. 11 is a histogram of gray values in the range of 0-40 gray values for region D of the bill image shown in FIG. 3.
Detailed Description
The technical solution in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the invention without inventive step, are within the scope of the invention.
In view of the fact that existing bill image inversion detection methods involve a large amount of computation and insufficient accuracy, and thus have difficulty meeting real-time and accuracy requirements in application, this embodiment provides a bill image inversion detection method, described below by taking a ticket image as an example.
Referring to fig. 1, the method according to the embodiment includes four steps S101, S102, S103 and S104, wherein a judgment model is needed in step S104.
Referring to fig. 2, the method for establishing the judgment model includes:
s201: and preprocessing a plurality of collected bill images in the same direction to enable the length and width of each bill image to be equal to the length and width of the bill, wherein the preprocessed bill images are used as standard images.
In this example, 104 tickets of 90 × 60 mm are selected; the 104 tickets are each collected while placed in the upright position, and the size and angle of each ticket image are adjusted. Specifically: according to the formulas X = A0 + A1U + A2V + A3UV and Y = B0 + B1U + B2V + B3UV, wherein (U, V) denotes the known coordinates of each pixel point of a ticket image before size and inclination adjustment and (X, Y) denotes the coordinates of that pixel point after size and inclination adjustment. In this embodiment, the pixel points at the four corner points of the ticket image are selected, and their coordinates after size and inclination adjustment are set to (0, 0), (0, 60), (90, 60) and (90, 0) respectively (considering that the ticket size is 90 × 60 mm). Since the coordinates of the four pixel points before size and inclination adjustment are also known, the transformation coefficients A0, A1, A2, A3, B0, B1, B2, B3 can be calculated from the coordinates of the four pixel points before and after adjustment; from the obtained transformation coefficients, the adjusted coordinates of the remaining pixel points can be rapidly calculated, quickly completing the size adjustment and inclination angle adjustment of the ticket image.
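For illustration, the following Python/NumPy sketch solves the eight transformation coefficients from the four corner-point correspondences and then maps a remaining pixel coordinate. Only the target corners (0, 0), (0, 60), (90, 60) and (90, 0) come from the text; the source corner coordinates in the example are assumed values for a skewed capture.

```python
import numpy as np

def solve_bilinear_coeffs(src_pts, dst_pts):
    """Solve A0..A3, B0..B3 of X = A0 + A1*U + A2*V + A3*U*V and
    Y = B0 + B1*U + B2*V + B3*U*V from four point correspondences."""
    # Design matrix with one row [1, U, V, U*V] per source corner point.
    M = np.array([[1.0, u, v, u * v] for (u, v) in src_pts])
    xs = np.array([x for (x, _) in dst_pts], dtype=float)
    ys = np.array([y for (_, y) in dst_pts], dtype=float)
    a = np.linalg.solve(M, xs)   # A0, A1, A2, A3
    b = np.linalg.solve(M, ys)   # B0, B1, B2, B3
    return a, b

def map_point(a, b, u, v):
    """Map one remaining pixel coordinate (U, V) with the solved coefficients."""
    basis = np.array([1.0, u, v, u * v])
    return float(basis @ a), float(basis @ b)

# Assumed corner coordinates of a skewed ticket capture, mapped onto the
# 90 x 60 standard rectangle used in the embodiment.
src = [(12, 8), (9, 310), (452, 318), (455, 5)]
dst = [(0, 0), (0, 60), (90, 60), (90, 0)]
a, b = solve_bilinear_coeffs(src, dst)
print(map_point(a, b, 12, 8))   # approximately (0.0, 0.0)
```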
S202: and uniformly carrying out region segmentation on each standard image, wherein the segmentation modes of each standard image are consistent, and calculating the gray value of each pixel point in each region aiming at each standard image.
Referring to fig. 3, in the present embodiment each standard image is divided equally into four regions A, B, C and D. The gray value Gray[i, j] of each pixel point is calculated by the grayscale conversion formula (presented as an image in the original publication), wherein i = 1, 2, 3, ..., W, j = 1, 2, 3, ..., H, W denotes the total number of rows of pixel points in the standard image, H denotes the total number of columns of pixel points in the standard image, Gray[i, j] denotes the gray value of pixel point [i, j], R[i, j] denotes the red component value of pixel point [i, j] in the RGB color space, G[i, j] denotes the green component value, and B[i, j] denotes the blue component value. Referring to fig. 4 to 7, histograms are used to summarize the calculated gray values of each region.
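Because the conversion formula itself appears only as an embedded image in the published document, its exact weights cannot be reproduced here; the sketch below uses the common BT.601 luminance weights as an assumption to show the shape of the computation.

```python
import numpy as np

def to_gray(rgb_image: np.ndarray) -> np.ndarray:
    """Convert an H x W x 3 RGB image (uint8) to an H x W array of gray values.

    The 0.299/0.587/0.114 weights are the widely used BT.601 coefficients; the
    patent's own formula is given only as an image, so these are an assumption.
    """
    r = rgb_image[..., 0].astype(np.float64)
    g = rgb_image[..., 1].astype(np.float64)
    b = rgb_image[..., 2].astype(np.float64)
    gray = 0.299 * r + 0.587 * g + 0.114 * b
    return np.clip(np.rint(gray), 0, 255).astype(np.uint8)
```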
S203: selecting the same gray value interval from 0-255 gray values aiming at each area of each standard image, wherein the selected gray value intervals of each standard image are consistent; and respectively calculating the gray value average value of all pixel points in each region of each standard image in the gray value interval.
Referring to fig. 8 to 11, in the present embodiment the gray value interval 0-40 is selected from the 0-255 gray values for each area of each standard image, and every standard image uses this same interval. The average gray value of all pixel points whose gray values lie between 0 and 40 in area A of each standard image is calculated, and the same calculation is carried out for areas B, C and D of each standard image.
It should be understood that the gray value interval selected from the 0-255 gray values is not limited to the above example; for instance, the embodiment may even use the entire 0-255 range as the selected gray value interval.
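A minimal sketch of this per-region interval averaging follows; the 2 x 2 arrangement of regions A, B, C and D is an assumption standing in for the segmentation of fig. 3, which is not reproduced here.

```python
import numpy as np

def region_interval_means(gray: np.ndarray, lo: int = 0, hi: int = 40) -> dict:
    """Split a gray image into four equal regions and return, per region, the
    mean gray value of the pixels whose value lies in the interval [lo, hi]."""
    h, w = gray.shape
    regions = {
        "A": gray[: h // 2, : w // 2],
        "B": gray[: h // 2, w // 2:],
        "C": gray[h // 2:, : w // 2],
        "D": gray[h // 2:, w // 2:],
    }
    means = {}
    for name, block in regions.items():
        selected = block[(block >= lo) & (block <= hi)]
        # Report NaN rather than failing if no pixel falls inside the interval.
        means[name] = float(selected.mean()) if selected.size else float("nan")
    return means
```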
S204: determining the comparison relation of each area with respect to the gray value average value according to the gray value average value calculated by each area aiming at each standard image; and counting the comparison relation of each area in each standard image about the gray value average value, and selecting the comparison relation with the largest proportion from the comparison relations as the comparison relation of each area in the judgment model about the gray value average value.
In this embodiment, after calculation and comparison, 102 of the 104 standard images follow the same rule: the average gray values of the four areas over the 0-40 gray value interval are ranked C > A > B > D, and the average gray values over the 0-255 gray value interval are likewise ranked C > A > B > D. In every standard image, the area with the largest average gray value is area C.
In this embodiment, the characteristic that the region with the largest gray value average value in the range of 0 to 40 gray values is the C region is selected as the comparison relationship between the gray value average values of the regions in the judgment model.
Or in the embodiment, the characteristic that the gray value average value of the gray value intervals of 0-40 of the four regions is ranked as C > A > B > D can be selected as the comparison relation of each region in the judgment model about the gray value average value.
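The selection of the most frequent comparison relation across the standard images can be sketched as below; it assumes per-image region means computed as in the earlier sketch and uses the ordering form of the relation.

```python
from collections import Counter

def build_comparison_relation(per_image_means: list) -> tuple:
    """Pick the most frequent ordering of regions (largest mean first) over
    all upright standard images as the judgment model's comparison relation."""
    orderings = []
    for means in per_image_means:
        orderings.append(tuple(sorted(means, key=means.get, reverse=True)))
    relation, _count = Counter(orderings).most_common(1)[0]
    return relation   # e.g. ('C', 'A', 'B', 'D') in this embodiment
```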
After the judgment model is established, the ticket image inversion detection method is described in detail in this embodiment by taking one 90 × 60 mm ticket as an example.
S101: and preprocessing the acquired bill image to ensure that the length and the width of the bill image are consistent with those of the standard image in the judgment model.
The image preprocessing must adjust the length and width of the ticket image to be consistent with the length and width of the standard image in the judgment model, and at the same time adjust the angle of the ticket image so that the ticket image is free of deflection. In this embodiment, a plane rectangular coordinate system is first established on the plane where the bill image is located; then, on the premise of ensuring adjustment accuracy, the ticket image is linearly transformed, that is, its size and inclination angle are adjusted.
The size and inclination angle of the ticket image are adjusted as follows: according to the formulas x = a0 + a1u + a2v + a3uv and y = b0 + b1u + b2v + b3uv, wherein (u, v) denotes the known coordinates of each pixel point in the ticket image before size and inclination adjustment and (x, y) denotes the coordinates of that pixel point after adjustment, four pixel points are selected and their coordinates after size and inclination adjustment are set; the transformation coefficients a0, a1, a2, a3, b0, b1, b2, b3 are calculated from the known coordinates of the four pixel points before adjustment and their set coordinates after adjustment; and the adjusted coordinates of each remaining pixel point are calculated from the obtained transformation coefficients, completing the size adjustment and inclination angle adjustment of the ticket image.
In this embodiment, the pixel points at the four corner points of the ticket image are selected. To make the length and width of the adjusted ticket image consistent with those of the standard image in the judgment model, the coordinates of the four pixel points after size and inclination adjustment are set to (0, 0), (0, 60), (90, 60) and (90, 0) respectively. Since the coordinates of the four pixel points before adjustment are known, the transformation coefficients a0, a1, a2, a3, b0, b1, b2, b3 can be calculated from the coordinates of the four points before and after adjustment; the adjusted coordinates of the remaining pixel points can then be calculated quickly from the obtained coefficients, rapidly completing the size adjustment and inclination angle adjustment of the ticket image.
S102: uniformly performing region segmentation on the preprocessed bill image, wherein the segmentation mode is consistent with that of a standard image when a judgment model is established; and calculating the gray value of each pixel point in each region.
In this embodiment, the preprocessed ticket image is equally divided into four areas A, B, C and D corresponding to the judgment model, and these four areas correspond one to one, in the plane rectangular coordinate system, with the positions of areas A, B, C and D in the judgment model.
The gray value gray[i, j] of each pixel point is calculated by the grayscale conversion formula (presented as an image in the original publication), wherein i = 1, 2, 3, ..., w, j = 1, 2, 3, ..., h, w denotes the total number of rows of pixel points in the ticket image, h denotes the total number of columns of pixel points in the ticket image, gray[i, j] denotes the gray value of pixel point [i, j], R[i, j] denotes the red component value of pixel point [i, j] in the RGB color space, G[i, j] denotes the green component value, and B[i, j] denotes the blue component value.
S103: selecting the same gray value section from 0-255 gray values aiming at each area in the bill image, wherein the selected gray value section is the same as the gray value section selected when the judgment model is established; and respectively calculating the gray value average value of all pixel points of each region in the gray value interval.
In this embodiment, corresponding to the judgment model, the gray value interval 0-40 is selected from the 0-255 gray values for each region of the ticket image; the average gray value of all pixel points whose gray values lie between 0 and 40 in area A of the ticket image is calculated, and the same calculation is performed for areas B, C and D of the ticket image.
Alternatively, corresponding to the judgment model, the entire 0-255 gray value interval may be selected for each area of the ticket image; the average gray value of all pixel points in area A of the ticket image is then calculated, and the same calculation is performed for areas B, C and D of the ticket image.
S104: and determining the comparison relation of each area with respect to the gray value average value according to the gray value average value calculated by each area, comparing the comparison relation with the comparison relation of each area with respect to the gray value average value in the judgment model, and if the two comparison relations are consistent, determining that the direction of the bill image is consistent with the direction of the standard image in the judgment model.
As an example of the first possible implementation manner, the gray value average values calculated in each region in the bill image are compared, the maximum or minimum gray value average value is selected from the gray value average values, and the number of the region corresponding to the maximum or minimum gray value average value is used as the comparison relationship; comparing the number of the area corresponding to the maximum gray level average value in the bill image with the number of the area corresponding to the maximum gray level average value in the judgment model, and if the two numbers are consistent, the direction of the bill image is consistent with the direction of the standard image in the judgment model; or comparing the number of the area corresponding to the minimum gray value average value in the bill image with the number of the area corresponding to the minimum gray value average value in the judgment model, and if the two numbers are consistent, the direction of the bill image is consistent with the direction of the standard image in the judgment model.
The judgment model selects, as the comparison relation of the areas with respect to the average gray values, the characteristic that the area with the largest average gray value over the 0-40 gray value interval is area C. Correspondingly, in this embodiment the ratio of the average gray values of the four areas A, B, C and D over the 0-40 interval is calculated to be 1.1377:1.1264:1.3387:1, so the average gray value of area C is the largest. Since the average gray value of area C is the largest both in the ticket image and in the judgment model, the direction of the ticket image coincides with the direction of the standard image in the judgment model; and because 104 upright ticket images were used when establishing the judgment model, the ticket image can be judged to be upright.
If instead the judgment model selects the characteristic that the area with the largest average gray value over the 0-255 gray value interval is area C as the comparison relation, then correspondingly the ratio of the average gray values of the four areas A, B, C and D over the 0-255 interval is calculated to be 1:1.1105:1.4933:1.1568, and the average gray value of area C is again the largest. From this result, the ticket image is upright.
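A sketch of this first implementation, assuming region means computed as in the earlier sketches; only the label of the extreme region is compared with the one stored in the judgment model.

```python
def is_upright_by_extreme_region(ticket_means: dict, model_region: str,
                                 use_max: bool = True) -> bool:
    """Compare the region holding the largest (or smallest) average gray value
    with the region recorded in the judgment model."""
    pick = max if use_max else min
    return pick(ticket_means, key=ticket_means.get) == model_region

# Ratios reported in the embodiment for regions A:B:C:D over the 0-40 interval.
means = {"A": 1.1377, "B": 1.1264, "C": 1.3387, "D": 1.0}
print(is_upright_by_extreme_region(means, "C"))   # True -> upright
```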
As an example of the second possible implementation manner, comparing the gray value average values calculated by each region in the bill image, sorting each region of the bill image according to the gray value average values, and taking the sorting order as the comparison relationship; and comparing the sequencing sequence of each region in the bill image with the sequencing sequence of each region in the judgment model according to the gray value average value, wherein if the two sequencing sequences are consistent, the direction of the bill image is consistent with the direction of the standard image in the judgment model.
If the judgment model selects, as the comparison relation of the areas with respect to the average gray values, the characteristic that the average gray values of the four areas over the 0-40 gray value interval are ranked C > A > B > D, then correspondingly, in this embodiment, the ratio of the average gray values of the four areas A, B, C and D over the 0-40 interval is 1.1377:1.1264:1.3387:1, so the ordering of the four areas' average gray values over the 0-40 interval is likewise C > A > B > D; from this result, the ticket image is upright.
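The second implementation compares the full ordering instead of a single region; a minimal sketch, reusing the same assumed helpers:

```python
def is_upright_by_ordering(ticket_means: dict, model_ordering: tuple) -> bool:
    """The ticket image is taken as upright when the ordering of its region
    means (largest first) matches the ordering stored in the judgment model."""
    ordering = tuple(sorted(ticket_means, key=ticket_means.get, reverse=True))
    return ordering == model_ordering

means = {"A": 1.1377, "B": 1.1264, "C": 1.3387, "D": 1.0}
print(is_upright_by_ordering(means, ("C", "A", "B", "D")))   # True -> upright
```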
Considering that ticket images include tickets of a red version and a blue version, and that the two versions differ in size and layout, the method further comprises, before region segmentation of the preprocessed bill image, judging the bill version in advance according to the RGB color components of the bill image. Once the ticket versions are distinguished, tickets of different versions correspond to different region segmentation modes and comparison relations.
In this embodiment, determining the ticket version specifically includes: for the preprocessed ticket image, respectively calculating the average red component value, the average green component value and the average blue component value of all pixel points in the ticket image;
and respectively comparing the average red, green and blue component values of the ticket image with the average red, green and blue component values of the standard images of all versions in the judgment model, and selecting the version in the judgment model whose color components are closest as the version of the ticket image.
For example, the average red, green and blue component values of the ticket image may be differenced in turn with the corresponding averages of each version's standard images in the judgment model, and the absolute values of the three differences added; if the ticket has two versions, two such sums are obtained, the two sums are compared, and the version corresponding to the smaller sum is taken as the version of the ticket image.
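The version check can be sketched as follows; the per-version mean component values in the example are illustrative placeholders, not values from the patent.

```python
import numpy as np

def detect_version(rgb_image: np.ndarray, version_models: dict) -> str:
    """Choose the version whose stored mean R/G/B values are closest, by the
    sum of absolute differences described above, to the ticket image's means."""
    means = rgb_image.reshape(-1, 3).mean(axis=0)   # mean R, G, B over all pixels
    def distance(model_means):
        return float(np.abs(means - np.asarray(model_means, dtype=float)).sum())
    return min(version_models, key=lambda name: distance(version_models[name]))

# Illustrative per-version mean component values for a red and a blue version.
models = {"red": (190.0, 120.0, 110.0), "blue": (110.0, 130.0, 195.0)}
# version = detect_version(ticket_rgb, models)   # with a loaded H x W x 3 image
```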
Before the bill version is judged in advance according to the RGB color components of the bill image, the average red component value, the average green component value and the average blue component value of the standard images of each version are calculated in advance in the judgment model.
The above description is only an embodiment of the present invention, but the scope of the present invention is not limited thereto; any change or substitution that a person skilled in the art can easily conceive within the technical scope of the present invention should be covered by the scope of the present invention.

Claims (7)

1. A bill image inversion detection method is characterized by comprising the following steps:
preprocessing the acquired bill image, including size adjustment and inclination angle adjustment of the bill image, so that the length and width of the bill image are consistent with those of the standard image in the judgment model; the bill image size adjustment and inclination angle adjustment specifically comprise: establishing a plane rectangular coordinate system on the plane of the bill image; according to the formulas x = a0 + a1u + a2v + a3uv and y = b0 + b1u + b2v + b3uv, wherein (u, v) denotes the known coordinates of each pixel point in the bill image before size and inclination adjustment, and (x, y) denotes the coordinates of each pixel point in the bill image after size and inclination adjustment, selecting four pixel points and setting their coordinates after size and inclination adjustment, and calculating the transformation coefficients a0, a1, a2, a3, b0, b1, b2, b3 from the known coordinates of the four pixel points before adjustment and their set coordinates after adjustment; and calculating the adjusted coordinates of each remaining pixel point from the obtained transformation coefficients, thereby completing the size adjustment and inclination angle adjustment of the bill image;
uniformly performing region segmentation on the preprocessed bill image, wherein the segmentation mode is consistent with that of a standard image when a judgment model is established; calculating the gray value of each pixel point in each region;
selecting the same gray value interval from the 0-255 gray values for each area in the bill image, wherein the selected gray value interval is the same as the gray value interval selected when the judgment model is established; respectively calculating the gray value average value of all pixel points of each region in the gray value interval;
and determining the comparison relation of each area with respect to the gray value average value according to the gray value average value calculated by each area, comparing the comparison relation with the comparison relation of each area with respect to the gray value average value in the judgment model, and if the two comparison relations are consistent, determining that the direction of the bill image is consistent with the direction of the standard image in the judgment model.
2. The method according to claim 1, further comprising establishing a judgment model and determining a comparison relationship between each region in the judgment model and the average value of the gray values, and specifically comprising the following steps:
preprocessing a plurality of collected bill images in the same direction to enable the length and width of each bill image to be equal to the length and width of a bill, and taking the preprocessed bill images as standard images;
uniformly performing region segmentation on each standard image, wherein the segmentation modes of each standard image are consistent, and calculating the gray value of each pixel point in each region aiming at each standard image;
selecting the same gray value interval from 0-255 gray values aiming at each area of each standard image, wherein the selected gray value intervals of each standard image are consistent; respectively calculating the gray value average value of all pixel points in each area of each standard image in the gray value interval;
determining the comparison relation of each area with respect to the gray value average value according to the gray value average value calculated by each area aiming at each standard image; and counting the comparison relation of each area in each standard image about the gray value average value, and selecting the comparison relation with the largest proportion from the comparison relations as the comparison relation of each area in the judgment model about the gray value average value.
3. The method according to claim 1, characterized in that the preprocessed bill image is uniformly divided into regions, and the division mode is consistent with that of the standard image when the judgment model is established; calculating the gray value of each pixel point in each region; the method specifically comprises the following steps:
dividing the preprocessed bill image into at least two regions, wherein the area occupied by each region is equal; the position of each region in the plane rectangular coordinate system corresponds to the position of each region in the judgment model in the plane rectangular coordinate system one by one; each region corresponds to one number, and the number of each region is consistent with the number of the corresponding region in the judgment model;
calculating the gray value Gray[i, j] of each pixel point by the grayscale conversion formula (presented as an image in the original publication), wherein i = 1, 2, 3, ..., W, j = 1, 2, 3, ..., H, W denotes the total number of rows of pixel points in the bill image, H denotes the total number of columns of pixel points in the bill image, Gray[i, j] denotes the gray value of pixel point [i, j], R[i, j] denotes the red component value of pixel point [i, j] in the RGB color space, G[i, j] denotes the green component value of pixel point [i, j] in the RGB color space, and B[i, j] denotes the blue component value of pixel point [i, j] in the RGB color space.
4. The method according to claim 3, wherein the comparison relationship of each region with respect to the mean value of the gray value is determined according to the mean value of the gray value calculated by each region, the comparison relationship is compared with the comparison relationship of each region with respect to the mean value of the gray value in the judgment model, and if the two comparison relationships are consistent, the direction of the bill image is consistent with the direction of the standard image in the judgment model; the method specifically comprises the following steps:
comparing the gray value average values calculated in all the areas in the bill image, selecting the maximum or minimum gray value average value from the gray value average values, and taking the number of the area corresponding to the maximum or minimum gray value average value as a comparison relation;
comparing the number of the area corresponding to the maximum gray level average value in the bill image with the number of the area corresponding to the maximum gray level average value in the judgment model, and if the two numbers are consistent, the direction of the bill image is consistent with the direction of the standard image in the judgment model;
or comparing the number of the area corresponding to the minimum gray value average value in the bill image with the number of the area corresponding to the minimum gray value average value in the judgment model, and if the two numbers are consistent, the direction of the bill image is consistent with the direction of the standard image in the judgment model.
5. The method according to claim 3, wherein the comparison relationship of each region with respect to the mean value of the gray value is determined according to the mean value of the gray value calculated by each region, the comparison relationship is compared with the comparison relationship of each region with respect to the mean value of the gray value in the judgment model, and if the two comparison relationships are consistent, the direction of the bill image is consistent with the direction of the standard image in the judgment model; the method specifically comprises the following steps:
comparing the gray value average values calculated by all the areas in the bill image, sequencing all the areas of the bill image according to the gray value average values, and taking the sequencing sequence as a comparison relation;
and comparing the sequencing sequence of each region in the bill image with the sequencing sequence of each region in the judgment model according to the gray value average value, wherein if the two sequencing sequences are consistent, the direction of the bill image is consistent with the direction of the standard image in the judgment model.
6. The method of claim 1, further comprising, before the area segmentation of the preprocessed document image, pre-determining a document version according to RGB color components of the document image, and specifically comprising:
respectively calculating the average value of the red component values, the average value of the green component values and the average value of the blue component values of all pixel points in the bill image for the preprocessed bill image;
and respectively comparing the average value of the red component values, the average value of the green component values and the average value of the blue component values of the bill image with the average value of the red component values, the average value of the green component values and the average value of the blue component values of the standard images of all versions in the judgment model, and selecting the version in the judgment model with the closest color components as the version of the bill image.
7. The method according to claim 6, further comprising the step of calculating in advance an average value of red component values, an average value of green component values and an average value of blue component values of each version of the standard image in the judgment model.
CN201810044894.2A 2018-01-17 2018-01-17 Bill image inversion detection method Active CN108256475B (en)

Priority Applications (1)

Application Number: CN201810044894.2A (publication CN108256475B); Priority Date: 2018-01-17; Filing Date: 2018-01-17; Title: Bill image inversion detection method

Applications Claiming Priority (1)

Application Number: CN201810044894.2A (publication CN108256475B); Priority Date: 2018-01-17; Filing Date: 2018-01-17; Title: Bill image inversion detection method

Publications (2)

Publication Number Publication Date
CN108256475A CN108256475A (en) 2018-07-06
CN108256475B true CN108256475B (en) 2021-05-11

Family

ID=62741444

Family Applications (1)

Application Number: CN201810044894.2A (publication CN108256475B, Active); Priority Date: 2018-01-17; Filing Date: 2018-01-17; Title: Bill image inversion detection method

Country Status (1)

Country Link
CN (1) CN108256475B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109447067B (en) * 2018-10-24 2021-07-20 北方民族大学 Bill direction detection and correction method and automatic bill checking system
CN114445467A (en) * 2021-12-21 2022-05-06 贵州大学 Specific target identification and tracking system of quad-rotor unmanned aerial vehicle based on vision

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102419817A (en) * 2010-09-27 2012-04-18 贵州黔驰电力信息技术有限公司 Automatic document scanning, analyzing and processing system based on intelligent image identification
CN106530483A (en) * 2016-11-10 2017-03-22 深圳怡化电脑股份有限公司 Banknote face direction identification method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8995012B2 (en) * 2010-11-05 2015-03-31 Rdm Corporation System for mobile image capture and processing of financial documents


Also Published As

Publication number Publication date
CN108256475A (en) 2018-07-06


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant