WO2016165574A1 - A method for identifying a document of value - Google Patents

A method for identifying a document of value Download PDF

Info

Publication number
WO2016165574A1
Authority
WO
WIPO (PCT)
Prior art keywords
color
feature
data
color data
sub
Prior art date
Application number
PCT/CN2016/078566
Other languages
English (en)
French (fr)
Inventor
岳许要
肖助明
王丹丹
黄晓群
Original Assignee
广州广电运通金融电子股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 广州广电运通金融电子股份有限公司 filed Critical 广州广电运通金融电子股份有限公司
Priority to RU2017135352A priority Critical patent/RU2668731C1/ru
Priority to EP16779549.1A priority patent/EP3285210A4/en
Priority to US15/564,936 priority patent/US10235595B2/en
Publication of WO2016165574A1 publication Critical patent/WO2016165574A1/zh
Priority to HK18109044.7A priority patent/HK1249626A1/zh

Links

Images

Classifications

    • GPHYSICS
    • G07CHECKING-DEVICES
    • G07DHANDLING OF COINS OR VALUABLE PAPERS, e.g. TESTING, SORTING BY DENOMINATIONS, COUNTING, DISPENSING, CHANGING OR DEPOSITING
    • G07D7/00Testing specially adapted to determine the identity or genuineness of valuable papers or for segregating those which are unacceptable, e.g. banknotes that are alien to a currency
    • G07D7/20Testing patterns thereon
    • G07D7/202Testing patterns thereon using pattern matching
    • G07D7/2033Matching unique patterns, i.e. patterns that are unique to each individual paper
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/10Image acquisition
    • G06V10/12Details of acquisition arrangements; Constructional details thereof
    • G06V10/14Optical characteristics of the device performing the acquisition or on the illumination arrangements
    • G06V10/143Sensing or illuminating at different wavelengths
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/50Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; Projection analysis
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/40Extraction of image or video features
    • G06V10/56Extraction of image or video features relating to colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/74Image or video pattern matching; Proximity measures in feature spaces
    • G06V10/75Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
    • G06V10/76Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries based on eigen-space representations, e.g. from pose or different illumination conditions; Shape manifolds
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • H04N1/60Colour correction or control
    • H04N1/6016Conversion to subtractive colour signals
    • H04N1/6022Generating a fourth subtractive colour signal, e.g. under colour removal, black masking
    • H04N1/6025Generating a fourth subtractive colour signal, e.g. under colour removal, black masking using look-up tables

Definitions

  • The invention relates to valuable-document identification technology, and in particular to a method for identifying a document of value.
  • In the prior art, a document of value is usually identified by acquiring image features with a contact image sensor.
  • In the field of image-based pattern recognition, a sensor first captures the image to be recognized. If the image resolution is insufficient, the accuracy of the recognition result drops sharply, especially among easily confused objects, for example the characters "O" and "D" in character recognition. Although computer technology advances rapidly, cost considerations rule out very high-end hardware platforms; the hardware merely needs to meet actual requirements. Under these conditions, differences between hardware units and the diversity of the recognition space lead to frequent recognition errors among easily confused objects in actual production. A method that identifies the type of a document of value from color data was therefore developed.
  • Color data refers to a set of color values, comprising the three components R, G and B, collected by a color sensor. Identifying a document of value by its color data is an intuitive and fast approach.
  • The process of color-data recognition is divided into two parts, feature extraction and recognition, and the robustness of the extracted features directly affects the accuracy of recognition.
  • The extraction of color features mainly suffers from the following problems. On the one hand, because the color data results from the combined effect of the colors within a certain area of the document of value, the color data deviates considerably from the true color when the surface of the signal-acquisition area is not a solid color. On the other hand, for acquisition areas of the same color, different textures reflect with different intensities, so the color sensor receives signals of different strengths, making the color data unstable.
  • For these reasons, the features obtained by feature extraction are difficult to make robust, and the type of the document cannot be identified accurately.
  • The present invention provides a method for identifying a document of value based on color data, which identifies the document by the trend of the set of stable-sub-segment means of the color data, so as to overcome the color-cast problem of color data and achieve accurate identification of the document of value.
  • The method for identifying the document of value includes: step 1, collecting color data of the document of value to be detected with a color-collection device comprising multiple color sensors, and preprocessing the collected color data; step 2, extracting corresponding features from the preprocessed color data, where a feature extracted from the color data is a one-dimensional vector composed of the means of all sub-segments, within the hue data corresponding to the color data, in which the hue varies little;
  • step 3, matching the extracted features against the feature-template set corresponding to each type of document of value to obtain matching scores, and taking the highest-scoring feature template as the matching template for the color data, wherein
  • the front and back images of the color document are divided into multiple sub-regions, the simulated color data of each sub-region is obtained by simulating the working mode of the color sensor, the feature corresponding to the simulated color data of each sub-region is a feature template, and the feature templates of all sub-regions of the front and back images form the feature-template set of the document of value; step 4, determining the type of the document of value from the matching result.
  • Preferably, before step 1 the method further includes presetting the feature-template sets corresponding to each type of document of value. Extracting the feature-template set from a true-color document image specifically includes:
  • step 01, dividing the color image of one face of the document into multiple sub-regions according to the complexity of the image information;
  • step 02, simulating the working mode of the color sensor and converting each sub-region obtained by the division into color data;
  • step 03, performing color-space conversion on the converted color data to obtain the hue data of the region;
  • step 04, locating the stable sub-segments in the hue data;
  • step 05, obtaining the hue mean of each stable sub-segment;
  • step 06, the hue means of all stable sub-segments constituting the feature template of the region;
  • step 07, the feature templates of all sub-regions of one face constituting the feature-template set of that face, and the feature-template sets of all faces constituting the feature-template set of the document.
  • the conversion method is described as:
  • Specifically, the method of locating the stable sub-segments in the hue data in step 04 is described as: obtain the integral map of the hue data SH:
  • SMAPi={smap0,smap1,...smapj,...smapL} (1<j<L);
  • then search the hue data for stable sub-segments with a sliding window, where SP is the number of stable sub-segments in the signal SH
  • and sparts can be expressed as:
  • sparts={sts,ends}, where sts and ends respectively denote the starting position and ending position of the stable sub-segment, and:
  • sts is the first value of l that satisfies the following formula:
  • sts=firstl(abs(2×mapl+step/2-(mapl+step+mapl))<thres),(ends-1<l<L);
  • ends is the last value of l that satisfies the corresponding formula;
  • thres is a preset threshold for examining the stability of the signal within a segment.
  • In step 05 the hue mean fs of each stable sub-segment is obtained.
  • Preferably, in step 1 the preprocessing of the color data comprises: locating the starting point and ending point of the effective area of the color data, i.e. the data collected by the color sensor within the banknote; and filtering the located color data to remove noise.
  • The preprocessed color data are expressed as Si={Ri,Gi,Bi} (1<i<M),
  • where M is the number of color sensors;
  • for the robustness of the method, M should be greater than 1;
  • Ri, Gi, Bi are the red, green and blue components of the i-th channel signal;
  • and Ni is the signal length of color data i.
  • Preferably, in step 2, performing feature extraction on the preprocessed color data includes: step 21, performing color-space conversion on the preprocessed color data to obtain hue data; step 22, locating the set of stable sub-segments in the hue data,
  • a stable sub-segment being a segment in which the hue varies little; and step 23, obtaining the hue mean within each stable sub-segment, the hue means of all sub-segments in the set of stable sub-segments constituting the feature vector Fi of that channel of color data,
  • where N is the number of color sensors.
  • When matching against the feature-template set of each type of document of value, the front and back template sets of the document template set are matched separately, and forward and reverse matching are each performed, where the forward matching score of the color-data feature Fi against
  • the feature Sfk of a template is described as:
  • The method further includes: step 31, obtaining the distances between the color sensors from the position information of the multiple color sensors in the color-collection device, where the position information of a color sensor refers to the relative positions between the color sensors obtained from the structural information of the color-collection device; step 32, obtaining the distance between each pair of matched feature templates from the position information of the matched feature templates, where the position information of a feature template refers to the relative positions between the centers of the sub-regions obtained by division when the feature-template set was created; step 33, comparing whether the distances between the matched feature templates are consistent with the distances between the corresponding color sensors; if they are consistent, the matching is considered successful; otherwise, the matching is unsuccessful.
  • In step 33 the similarity between the distances between the color sensors and the distances between the corresponding matched templates is examined according to the following formula, where DistSi,j is the distance between the color sensors corresponding to color data i and j, and DistMi,j is
  • the distance between the templates matched by color data i and j, and Tdist is the preset distance threshold:
  • if the similarity is greater than a preset threshold, the matching is considered successful; otherwise, the matching is unsuccessful.
  • In the method, a feature-template set is first generated from the document image; the color data are then preprocessed; the color data are then converted into hue data, and the set of the means of the stable sub-segments in the hue data forms the features of the color data; the features extracted from the color data are matched against the feature-template set, and the type of the document of value is finally obtained. Since the present invention identifies the document of value by the trend of the set of stable-sub-segment means of the color data, the color-cast problem of the color data can be overcome and accurate identification of the document of value achieved.
  • The invention simulates the working principle of the color sensor and extracts simulated color data from the color document image to form the feature-template set, which is fast and practical.
  • The invention identifies by template matching and confirms the matching result by comparing the distances between the color sensors with the distances between the matched templates, finally achieving accurate identification, so the algorithm is fast and effective.
  • FIG. 1 is a flowchart of a method for identifying a value document based on color data according to an embodiment of the present invention.
  • FIG. 2 is a schematic diagram of segmenting tone data in an embodiment of the present invention.
  • FIG. 3 is a schematic diagram of the area division of the front image of the banknote, in the "extracting the feature-template set of the document of value" part of the embodiment of the present invention.
  • FIG. 4 is a schematic diagram for obtaining the width of the receiving surface of the color sensor.
  • FIG. 5 is a schematic diagram of the "extracting the feature-template set of the document of value" part, in which the signal collection of a color sensor is simulated by a sliding-window method.
  • An embodiment of the present invention provides a banknote recognition method based on color data, which identifies a banknote by matching the collected color data against a feature-template set extracted from a known true-color banknote image.
  • The color-data-based method for identifying a document of value in the embodiment of the present invention can be used not only for identifying banknotes but also for identifying sheet-like documents such as checks, which is not limited herein.
  • Below, banknote identification is taken as an example to explain the method of the embodiment of the present invention.
  • Although only banknote identification is described, this should not be construed as limiting the method of the present invention.
  • The banknote identification method based on color data in an embodiment of the present invention includes:
  • dividing the color image of one face of a banknote into regions to obtain multiple sub-regions; simulating the working mode of the color sensor to convert each sub-region obtained by the division into color data; performing color-space conversion on the converted color data to
  • obtain the hue data of the region; locating the stable sub-segments in the hue data; obtaining the hue mean of each stable sub-segment; the hue means of all stable sub-segments constituting the feature template of the region;
  • the feature templates of all sub-regions of one face constituting the feature-template set of that face; and the feature-template sets of all faces constituting the feature-template set of the banknote.
  • The color data obtained by the localization are filtered to remove noise.
  • Step 101, extracting the banknote feature-template set, can be performed independently of the other steps; that is, once the feature-template set of each banknote type has been extracted in advance, it does not need to be re-extracted during each detection of a banknote.
  • Instead, the feature-template sets of the banknote types previously extracted and stored in the identification system are reused.
  • The front and back of the banknote are divided into K+ and K- equal parts.
  • Figure 3 shows the division of the front of the banknote, where SL is the width of each area.
  • To prevent the case in which no matching template can be found
  • when a color sensor collects color data across two areas, adjacent parts overlap during division.
  • The length of the overlapping area is the width of the collection surface of the color sensor, i.e. SenserW.
  • The effective collection surface of the color sensor is determined by the distance h between the sensor and the banknote surface and by the effective acquisition angle θ, and the height of the effective collection surface can be expressed as W=2h×tan(θ/2).
  • For each area, a sliding window of width SL and height W is moved step by step, and the color mean of all pixels within the window is the simulated signal value at the current position; when the window has finished sliding,
  • the simulated color data of the current area is obtained,
  • as shown in FIG. 5 for sectionk.
  • The features of the simulated color data of each sub-region are obtained, finally forming the feature sets of the front and back of the banknote.
  • The simulated color data generated by step 2 are RGB data: the signal intensity of each sampling point is described by three parameters, which is awkward to process and sensitive to brightness, so the simulated color data are converted to the HSL space, and feature extraction is performed on the hue data corresponding to the simulated color data.
  • The feature extraction can be described as:
  • SH={sh0,sh1,...shj,...shL} (1<j<L);
  • the conversion method can be described as:
  • SMAPi={smap0,smap1,...smapj,...smapL} (1<j<L);
  • where SP is the number of stable sub-segments in the signal SH
  • and sparts can be expressed as:
  • sparts={sts,ends}, where sts and ends respectively denote the starting position and ending position of the stable sub-segment, and:
  • sts is the first value of l that satisfies the following formula:
  • sts=firstl(abs(2×mapl+step/2-(mapl+step+mapl))<thres),(ends-1<l<L);
  • ends is the last value of l that satisfies the corresponding formula;
  • thres is a preset threshold for examining the signal stability within a segment.
  • A schematic diagram of the segmented simulated data is shown in Figure 2.
  • As in step (1), the features of the simulated color data of each region are extracted, finally forming the feature sets of the front and back of the banknote.
  • The preprocessing includes locating the starting point and ending point of the color data with a preset threshold. To filter out the influence of noise such as electromagnetic interference on the color data, median filtering with a window of 5 is applied to the color data.
  • The preprocessed color data are expressed as:
  • Si={Ri,Gi,Bi} (1<i<M), where M is the number of color sensors;
  • for robustness, M should be greater than 1;
  • Ri, Gi, Bi are the red, green and blue components of the i-th channel signal;
  • and Ni is the signal length of color data i.
  • Features are extracted from the actually collected color data and expressed as a feature vector,
  • where N is the number of color sensors.
  • Each channel of color data is matched against the front feature-template set and the back feature-template set of the banknote feature-template set; the template with the highest matching score is the matching template, and the position information of these templates is recorded. Since the feeding direction of the banknote is unknown, both forward and reverse matching are required when matching one face of the banknote.
  • The forward matching score of the color-data feature Fi against the feature Sfk of a template can be described as:
  • the reverse matching score of the color-data feature Fi against the feature Sfk of a template can be described as:
  • DistSi,j is the distance between the color sensors corresponding to color data i and j,
  • DistMi,j is
  • the distance between the templates matched by color data i and j, and Tdist is the preset distance threshold.
  • In the embodiment, a feature-template set is first generated from the document image; the color data are then preprocessed; the color data are converted into hue data, and the set of stable sub-segments in the hue data is found by the integral-map sliding-window method; the mean hue of each sub-segment is obtained and finally forms the feature vector of the color data; the feature vector extracted from the color data is matched against the feature-template set, and the recognition result is finally obtained. Since the embodiment of the present invention identifies the banknote by the trend features of the color data, the color-cast problem of the color data can be effectively overcome and accurate identification of the banknote achieved.
  • A trend feature refers to the order relation between the means of two adjacent stable sub-segments in the feature vector corresponding to the color data.
  • The embodiment of the invention simulates the working principle of the color sensor and extracts simulated color data from the color banknote image to form a feature-template set, which is fast and practical.
  • The embodiment of the invention identifies by template matching and confirms the matching result by comparing the distances between the color sensors with the distances between the matched templates, finally achieving accurate identification, so the algorithm is fast and effective.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)
  • Character Input (AREA)
  • Inspection Of Paper Currency And Valuable Securities (AREA)

Abstract

A method for identifying a document of value, comprising: step 1, collecting color data of the document of value to be detected with a color-collection device comprising multiple color sensors, and preprocessing the collected color data (102); step 2, extracting corresponding features from the preprocessed color data (103); step 3, matching the extracted features against the feature-template set corresponding to each type of document of value to obtain matching scores, and taking the highest-scoring feature template as the matching template for the color data; step 4, determining the type of the document of value from the matching result (104). The method identifies the document of value by the trend of the set of stable-sub-segment means of the color data, so it can overcome the color-cast problem of color data and achieve accurate identification of the document of value.

Description

A method for identifying a document of value
This application claims priority to Chinese patent application No. 201510176330.0, entitled "A method for identifying a document of value", filed with the Chinese Patent Office on April 13, 2015, the entire contents of which are incorporated herein by reference.
Technical field
The present invention relates to valuable-document identification technology, and in particular to a method for identifying a document of value.
Background
In the prior art, a document of value is usually identified by acquiring image features with a contact image sensor. In the field of image-based pattern recognition, a sensor first captures the image to be recognized. If the image resolution is insufficient, the accuracy of the recognition result drops sharply, especially among easily confused objects, for example the characters "O" and "D" in character recognition. Although computer technology advances rapidly, cost considerations rule out very high-end hardware platforms; the hardware merely needs to meet actual requirements. Under these conditions, differences between hardware units and the diversity of the recognition space lead to frequent recognition errors among easily confused objects in actual production. A method that identifies the type of a document of value from color data was therefore developed.
Color data refers to a set of color values, comprising the three components R, G and B, collected by a color sensor. Identifying a document of value by its color data is an intuitive and fast approach. The process of color-data recognition is divided into two parts, feature extraction and recognition, and the robustness of the extracted features directly affects the accuracy of recognition. The extraction of color features mainly suffers from the following problems: on the one hand, because the color data results from the combined effect of the colors within a certain area of the document of value, the color data deviates considerably from the true color when the surface of the signal-acquisition area is not a solid color; on the other hand, for acquisition areas of the same color, different textures reflect with different intensities, so the color sensor receives signals of different strengths, making the color data unstable.
For these reasons, the features obtained by feature extraction are difficult to make robust, and the type of the document cannot be identified accurately.
Summary of the invention
To solve the problem in the prior art that the color features of color-data-based methods for identifying the type of a document of value are difficult to make robust, leading to low identification accuracy, the present invention provides a method for identifying a document of value based on color data, which identifies the document by the trend of the set of stable-sub-segment means of the color data, so as to overcome the color-cast problem of color data and achieve accurate identification of the document of value.
The method for identifying a document of value includes: step 1, collecting color data of the document of value to be detected with a color-collection device comprising multiple color sensors, and preprocessing the collected color data; step 2, extracting corresponding features from the preprocessed color data, where a feature extracted from the color data is a one-dimensional vector composed of the means of all sub-segments, within the hue data corresponding to the color data, in which the hue varies little; step 3, matching the extracted features against the feature-template set corresponding to each type of document of value to obtain matching scores, and taking the highest-scoring feature template as the matching template for the color data, wherein the front and back images of the color document are divided into multiple sub-regions, the simulated color data of each sub-region is obtained by simulating the working mode of the color sensor, the feature corresponding to the simulated color data of each sub-region is a feature template, and the set of feature templates corresponding to all sub-regions of the front and back images of the color document of value is the feature-template set of the document of value; step 4, determining the type of the document of value from the matching result.
Preferably, before step 1 the method further includes presetting several feature-template sets corresponding to each type of document of value, in which extracting the feature-template set of the document of value from a true-color document image specifically includes: step 01, dividing the color image of one face of the document into multiple sub-regions according to the complexity of the image information; step 02, simulating the working mode of the color sensor and converting each sub-region obtained by the division into color data; step 03, performing color-space conversion on the converted color data to obtain the hue data of the region; step 04, locating the stable sub-segments in the hue data; step 05, obtaining the hue mean of each stable sub-segment; step 06, the hue means of all stable sub-segments constituting the feature template of the region; step 07, the feature templates corresponding to all sub-regions of one face constituting the feature-template set of that face, and the feature-template sets of all faces constituting the feature-template set of the document.
Specifically, in step 03 the simulated color data are converted to the HSL color space, and the hue data SH of the color data SS are obtained: SH={sh0,sh1,...shj,...shL}  (1<j<L);
The conversion method is described as:
Figure PCTCN2016078566-appb-000001
Specifically, the method of locating the stable sub-segments in the hue data in step 04 is described as:
Obtain the integral map of the hue data SH:
SMAPi={smap0,smap1,...smapj,...smapL}  (1<j<L);
where:
Figure PCTCN2016078566-appb-000002
Search the hue data for stable sub-segments by a sliding-window method:
Let the set of stable sub-segments of the signal SH be:
SPARTi={spart0,spart1,...sparts,...spartSP}  (1<s<SP);
where SP is the number of stable sub-segments in the signal SH, and sparts can be expressed as:
sparts={sts,ends};
where sts and ends respectively denote the starting position and ending position of the stable sub-segment, and:
sts is the first value of l that satisfies:
sts=firstl(abs(2×mapl+step/2-(mapl+step+mapl))<thres),(ends-1<l<L);
ends is the last value of l that satisfies:
Figure PCTCN2016078566-appb-000003
Thres is a preset threshold for examining the stability of the signal within a segment.
Specifically, the mean fs of each stable sub-segment in step 05 is:
Figure PCTCN2016078566-appb-000004
The hue means of all stable sub-segments constitute the feature template of the region, expressed as:
Figure PCTCN2016078566-appb-000005
The features of the simulated color data of each region are extracted, finally forming the feature sets of the front and back of the banknote, expressed as:
Figure PCTCN2016078566-appb-000006
Figure PCTCN2016078566-appb-000007
where:
Figure PCTCN2016078566-appb-000008
Figure PCTCN2016078566-appb-000009
Preferably, in step 1 the preprocessing of the color data includes: locating the starting point and ending point of the effective area of the color data, i.e. the data collected by the color sensor within the banknote; and filtering the located color data to remove noise. The preprocessed color data are expressed as:
Si={Ri,Gi,Bi}(1<i<M);
Figure PCTCN2016078566-appb-000010
Figure PCTCN2016078566-appb-000011
Figure PCTCN2016078566-appb-000012
where M is the number of color sensors; for the robustness of the method of this embodiment of the invention, M should be greater than 1; Ri, Gi and Bi are the red, green and blue components of the i-th channel signal; and Ni is the signal length of color data i.
Preferably, in step 2, performing feature extraction on the preprocessed color data includes: step 21, performing color-space conversion on the preprocessed color data to obtain hue data; step 22, locating the set of stable sub-segments in the hue data, a stable sub-segment being a segment in which the hue varies little; step 23, obtaining the hue mean within each stable sub-segment, the hue means of all sub-segments in the set of stable sub-segments constituting the feature vector of that channel of color data, the feature vector Fi being expressed as:
Figure PCTCN2016078566-appb-000013
where N is the number of color sensors.
Preferably, when matching against the feature-template set of each type of document in step 3, the front and back template sets of the document template set are matched separately, and forward and reverse matching are each performed, where the forward matching score of the color-data feature Fi against the feature Sfk of a template is described as:
Figure PCTCN2016078566-appb-000014
where flag marks whether the template is a front or back template, S(z) is expressed as follows, and T is a preset threshold:
Figure PCTCN2016078566-appb-000015
The reverse matching score of the color-data feature Fi against the feature Sfk of a template is described as:
Figure PCTCN2016078566-appb-000016
where flag marks whether the template is a front or back template, S'(z) is expressed as follows, and T is a preset threshold:
Figure PCTCN2016078566-appb-000017
Preferably, step 3 further includes: step 31, obtaining the distances between the color sensors from the position information of the multiple color sensors in the color-collection device, where the position information of a color sensor refers to the relative positions between the color sensors obtained from the structural information of the color-collection device; step 32, obtaining the distance between each pair of matched feature templates from the position information of the matched feature templates, where the position information of a feature template refers to the relative positions between the centers of the sub-regions obtained by division when the feature-template set was created; step 33, comparing whether the distances between the matched feature templates are consistent with the distances between the corresponding color sensors; if they are consistent, the matching is considered successful; otherwise, the matching is unsuccessful.
Specifically, in step 33 the similarity between the distances between the color sensors and the distances between the corresponding matched templates is examined according to the following formula, where DistSi,j is the distance between the color sensors corresponding to color data i and j, DistMi,j is the distance between the templates matched by color data i and j, and Tdist is the preset distance threshold:
Figure PCTCN2016078566-appb-000018
Figure PCTCN2016078566-appb-000019
If the similarity is greater than a preset threshold Tsim, the matching is considered successful; otherwise, the matching is unsuccessful.
In the method for identifying a document of value provided by the present invention, a feature-template set is first generated from the document image; the color data are then preprocessed; the color data are then converted into hue data, and the set of the means of the stable sub-segments in the hue data forms the features of the color data; the features extracted from the color data are matched against the feature-template set, and the type of the document of value is finally obtained. Since the present invention identifies the document of value by the trend of the set of stable-sub-segment means of the color data, the color-cast problem of the color data can be overcome and accurate identification of the document of value achieved. The invention simulates the working principle of the color sensor and extracts simulated color data from the color document image to form the feature-template set, which is fast and practical. The invention identifies by template matching and confirms the matching result by comparing the distances between the color sensors with the distances between the matched templates, finally achieving accurate identification, so the algorithm is fast and effective.
Brief description of the drawings
FIG. 1 is a flowchart of the method for identifying a document of value based on color data in an embodiment of the present invention.
FIG. 2 is a schematic diagram of segmenting the hue data in an embodiment of the present invention.
FIG. 3 is a schematic diagram of the area division of the front image of a banknote, in the "extracting the feature-template set of the document of value" part of an embodiment of the present invention.
FIG. 4 is a schematic diagram for obtaining the width of the receiving surface of the color sensor.
FIG. 5 is a schematic diagram of the "extracting the feature-template set of the document of value" part, in which the signal collection of a color sensor is simulated by a sliding-window method.
Detailed description of the embodiments
An example of the present invention provides a banknote recognition method based on color data, which identifies a banknote by matching the collected color data against a feature-template set extracted from a known true-color banknote image.
It should be noted that the color-data-based method for identifying a document of value in the embodiment of the present invention can be used not only for identifying banknotes but also for identifying sheet-like documents such as checks, which is not limited herein. Below, banknote identification is taken as an example to explain the method of the embodiment of the present invention; although only banknote identification is described, this should not be construed as limiting the method of the present invention.
Referring to FIG. 1, the banknote identification method based on color data in the embodiment of the present invention includes:
101. Extract the banknote feature-template set:
According to the complexity of the image information, the color image of one face of the banknote is divided into regions to obtain multiple sub-regions; the working mode of the color sensor is simulated to convert each sub-region into color data; color-space conversion is performed on the converted color data to obtain the hue data of the region; the stable sub-segments in the hue data are located; the hue mean of each stable sub-segment is obtained; the hue means of all stable sub-segments constitute the feature template of the region; the feature templates of all sub-regions of one face constitute the feature-template set of that face; and the feature-template sets of all faces constitute the feature-template set of the banknote.
102. Preprocess the color data:
Locate the starting point and ending point of the effective area of the color data, i.e. the data collected by the color sensor within the banknote. Filter the located color data to remove noise.
103. Extract features from the color data:
Perform color-space conversion on the preprocessed color data to obtain hue data; locate the set of stable sub-segments in the hue data, a stable sub-segment being a segment in which the hue varies little; obtain the hue mean within each stable sub-segment, the hue means of all sub-segments in the set constituting the feature vector of that channel of color data.
104. Matching and identification:
Match the features of the color data against every feature template in the feature-template set and obtain the corresponding matching scores; take the highest-scoring feature template as the template matching the color data; obtain the distances between the color sensors from the position information of the multiple color sensors, where the position information of a color sensor refers to the relative positions between the color sensors obtained from the structural information of the color-collection device; obtain the distance between each pair of matched feature templates from the position information of the matched feature templates, where the position information of a feature template refers to the relative positions between the centers of the sub-regions obtained by division when the feature-template set was created; compare whether the distances between the matched feature templates are consistent with the distances between the corresponding color sensors; if they are consistent, the matching is considered successful, otherwise unsuccessful.
Since the face of the banknote being fed is unknown, when matching against a banknote's template set, the front and back template sets must each be matched; in addition, since the feeding direction of the banknote is unknown, forward and reverse matching must each be performed when matching against the template set of one face.
It should be noted that step 101, extracting the banknote feature-template set, can be performed independently of the other steps; that is, once the feature-template sets of the banknote types have been extracted in advance, they do not need to be re-extracted during the identification of each banknote to be detected; the feature-template sets previously extracted and stored in the identification system are simply reused.
The embodiment is described in detail below:
101. Extracting the banknote feature-template set
1. Banknote image area division:
According to the complexity of the banknote pattern, the front and back of the banknote are divided into K+ and K- equal parts. FIG. 3 is a schematic diagram of the division of the front of the banknote, where SL is the width of each area. To prevent the case in which no matching template can be found when a color sensor collects color data across two areas, adjacent parts overlap during division, and the length of the overlapping area is the width of the collection surface of the color sensor, i.e. SenserW.
2. Simulated color-sensor signal generation
As shown in FIG. 4, the effective collection surface of the color sensor is determined by the distance h between the sensor and the banknote surface and by the effective acquisition angle θ, so the height of the effective collection surface can be expressed as:
W=2h×tan(θ/2);
As shown in FIG. 5, for sectionk, a sliding window of width SL and height W is moved step by step; the color mean of all pixels within the window is the simulated signal value at the current position, and when the window has finished sliding, the simulated color data of the current area is obtained.
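The sliding-window averaging above can be sketched in a few lines. This is a minimal illustration rather than the patent's implementation: the pixel layout (rows of (R, G, B) tuples), the step size, and the helper names are assumptions, and the window is taken to span the full region width SL. The effective height W = 2h×tan(θ/2) is computed as in FIG. 4.

```python
import math

def collection_height(h, theta):
    """Effective collection-surface height W = 2*h*tan(theta/2), as in FIG. 4."""
    return 2 * h * math.tan(theta / 2)

def simulate_sensor_signal(section, window_h, step=1):
    """Slide a window of height window_h down one sub-region (section_k) and
    record the mean (R, G, B) of all pixels inside the window at each stop.

    section: list of pixel rows, each row a list of (r, g, b) tuples,
    assumed to already be exactly one region of width SL.
    """
    samples = []
    for top in range(0, len(section) - window_h + 1, step):
        pixels = [px for row in section[top:top + window_h] for px in row]
        n = len(pixels)
        # mean color inside the window = simulated signal at this position
        samples.append(tuple(sum(p[k] for p in pixels) / n for k in range(3)))
    return samples
```

Running this over every sub-region of the front and back images yields the simulated color data from which the feature templates are built.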
3. Extracting the banknote feature-template set
Feature extraction is performed on the simulated color data generated in step 2 to obtain the features of the simulated color data of each sub-region, finally forming the feature sets of the front and back of the banknote:
(1) Feature extraction
The simulated color data generated by step 2 are RGB data: the signal intensity of each sampling point is described by three parameters, which is awkward to process and sensitive to brightness. The simulated color data are therefore converted to the HSL space, and feature extraction is performed on the hue data corresponding to the simulated color data. For the color data SS obtained from a divided region, the feature extraction can be described as:
1) Color-space conversion
Convert the simulated color data to the HSL color space and obtain the hue data SH of the color data SS:
SH={sh0,sh1,...shj,...shL}  (1<j<L);
The conversion method can be described as:
Figure PCTCN2016078566-appb-000020
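The conversion formula itself is reproduced only as an image above; since the text specifies conversion to the HSL space, the standard RGB-to-HSL hue computation can be sketched as follows (the function names are illustrative, and whether the patent scales hue to degrees or to [0, 1) is not stated in the text):

```python
import colorsys

def rgb_to_hue(r, g, b):
    """Hue component of the HSL representation, in [0, 1); r, g, b in [0, 255]."""
    h, _l, _s = colorsys.rgb_to_hls(r / 255.0, g / 255.0, b / 255.0)
    return h

def hue_data(samples):
    """Convert a sequence of (R, G, B) samples into the hue data SH."""
    return [rgb_to_hue(r, g, b) for r, g, b in samples]
```

Only the hue channel is kept, which is what makes the features insensitive to brightness.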
2) Sub-segment search
After the conversion is complete, the hue data are searched for stable sub-segments, which can be described as follows:
Obtain the integral map of the hue data SH:
SMAPi={smap0,smap1,...smapj,...smapL}  (1<j<L);
where:
Figure PCTCN2016078566-appb-000021
Search the hue data for stable sub-segments by a sliding-window method:
Let the set of stable sub-segments of the signal SH be:
SPARTi={spart0,spart1,...sparts,...spartSP}  (1<s<SP);
where SP is the number of stable sub-segments in the signal SH, and sparts can be expressed as:
sparts={sts,ends};
where sts and ends respectively denote the starting position and ending position of the stable sub-segment, and:
sts is the first value of l that satisfies:
sts=firstl(abs(2×mapl+step/2-(mapl+step+mapl))<thres),(ends-1<l<L);
ends is the last value of l that satisfies:
Figure PCTCN2016078566-appb-000022
Thres is a preset threshold for examining the stability of the signal within a segment; a schematic diagram of the segmented simulated data is shown in FIG. 2.
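The segmentation just described can be sketched as below. This is an illustrative reading of the formulas, under stated assumptions: `step` and `thres` are the quantities named in the text, the stability test takes `map` to be the integral map and checks that it is locally linear (the mid-point second difference |2·smap[l+step/2] − (smap[l+step] + smap[l])| stays small, which holds exactly when the hue is locally constant), and the boundary handling of firstl and the last l is guessed. The hue mean of a sub-segment is then read directly off the integral map.

```python
def integral_map(sh):
    """SMAP: smap[j] is the running sum of the hue data sh[0..j]."""
    smap, total = [], 0.0
    for v in sh:
        total += v
        smap.append(total)
    return smap

def find_stable_segments(sh, step=8, thres=0.05):
    """Scan the hue data with a sliding window and return (st, end) index
    pairs of sub-segments on which the hue is judged stable."""
    smap = integral_map(sh)
    L = len(sh)

    def stable(l):  # integral map locally linear <=> hue locally constant
        mid = 2 * smap[l + step // 2] - (smap[l + step] + smap[l])
        return abs(mid) < thres * step

    segs, l = [], 0
    while l + step < L:
        if stable(l):
            st = l                      # first l satisfying the test (firstl)
            while l + step < L and stable(l):
                l += 1                  # extend to the last satisfying l
            segs.append((st, l + step))
        else:
            l += 1
    return segs

def segment_mean(sh, smap, st, end):
    """Hue mean f_s of the sub-segment sh[st..end-1] via the integral map."""
    return (smap[end - 1] - (smap[st] - sh[st])) / (end - st)
```

The list of `segment_mean` values over all stable sub-segments is the one-dimensional feature vector described in the text.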
3) Feature extraction:
The features of the simulated data can be expressed as:
Figure PCTCN2016078566-appb-000023
where fs is the mean of each stable sub-segment, i.e.:
Figure PCTCN2016078566-appb-000024
(2) Generating the feature-template set
Using the same feature-extraction method as in step (1), the features of the simulated color data of each region are extracted, finally forming the feature sets of the front and back of the banknote, expressed as:
Figure PCTCN2016078566-appb-000025
Figure PCTCN2016078566-appb-000026
其中:
Figure PCTCN2016078566-appb-000027
Figure PCTCN2016078566-appb-000028
102. Preprocessing
The preprocessing includes locating the starting point and ending point of the color data with a preset threshold. To filter out the influence of noise such as electromagnetic interference on the color data, median filtering with a window of 5 is applied to the color data.
The preprocessed color data are expressed as:
Si={Ri,Gi,Bi}  (1<i<M);
Figure PCTCN2016078566-appb-000029
Figure PCTCN2016078566-appb-000030
Figure PCTCN2016078566-appb-000031
where M is the number of color sensors; for the robustness of the method of this embodiment of the invention, M should be greater than 1; Ri, Gi and Bi are the red, green and blue components of the i-th channel signal; and Ni is the signal length of color data i.
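The preprocessing can be sketched as below. The window of 5 comes from the text; the rule for locating the valid area (the signal rising above the preset threshold) is an assumption, since the patent only says the start and end points are located by a threshold.

```python
def locate_valid_region(channel, threshold):
    """Indices of the first and last samples above the preset threshold,
    i.e. where the sensor is actually over the banknote (None if absent)."""
    idx = [i for i, v in enumerate(channel) if v > threshold]
    return (idx[0], idx[-1]) if idx else None

def median_filter(channel, window=5):
    """Median filtering with a window of 5 to suppress noise such as
    electromagnetic interference; edges use a shrunken window."""
    half = window // 2
    out = []
    for i in range(len(channel)):
        win = sorted(channel[max(0, i - half):i + half + 1])
        out.append(win[len(win) // 2])
    return out
```

Each of R, G and B would be cropped to the valid region and filtered in this way before the color-space conversion.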
103. Feature extraction
Following the feature-extraction method for the simulated color data in step 101, features are extracted from the actually collected color data, expressed as:
Figure PCTCN2016078566-appb-000032
where N is the number of color sensors.
104. Matching and identification
(1) Template matching
The features of each channel of color data are matched against the front feature-template set and the back feature-template set of the banknote feature-template set; the template with the highest matching score is the matching template, and the position information of these templates is recorded. Since the feeding direction of the banknote is unknown, both forward and reverse matching must be performed when matching one face of the banknote.
The forward matching score of the color-data feature Fi against the feature Sfk of a template can be described as:
Figure PCTCN2016078566-appb-000033
where flag marks whether the template is a front or back template, S(z) can be expressed as follows, and T is a preset threshold:
Figure PCTCN2016078566-appb-000034
The reverse matching score of the color-data feature Fi against the feature Sfk of a template can be described as:
Figure PCTCN2016078566-appb-000035
where flag marks whether the template is a front or back template, S'(z) can be expressed as follows, and T is a preset threshold:
Figure PCTCN2016078566-appb-000036
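The matching formulas are reproduced only as images above, so the following is a hedged sketch of one plausible reading based on the trend feature defined at the end of this description (the order relation between adjacent stable-sub-segment means): the score is the fraction of adjacent-pair relations on which a feature vector and a template agree, and the reverse match simply uses the reversed feature. The threshold T and the exact scoring rule S(z) are assumptions, not the patent's formulas.

```python
def trend(feature, T=0.0):
    """Order relation between adjacent sub-segment means:
    +1 rising, -1 falling, 0 roughly equal (within T)."""
    return [0 if abs(b - a) <= T else (1 if b > a else -1)
            for a, b in zip(feature, feature[1:])]

def match_score(feature, template, T=0.0):
    """Fraction of adjacent-pair trend relations on which the color-data
    feature and the template agree."""
    ft, tt = trend(feature, T), trend(template, T)
    n = min(len(ft), len(tt))
    return sum(a == b for a, b in zip(ft, tt)) / n if n else 0.0

def best_match(feature, templates, T=0.0):
    """Forward and reverse matching (feeding direction unknown) against all
    templates of one face; returns (template index, best score)."""
    best = (-1, -1.0)
    for k, tpl in enumerate(templates):
        s = max(match_score(feature, tpl, T),
                match_score(list(reversed(feature)), tpl, T))
        if s > best[1]:
            best = (k, s)
    return best
```

Because only order relations are compared, a uniform color cast that shifts all means together does not change the score, which is the robustness property the description claims.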
(2) Comparing position information
After the template-feature matching is completed, the similarity between the distances between the color sensors and the distances between the corresponding matched templates is examined according to the following formula, where DistSi,j is the distance between the color sensors corresponding to color data i and j, DistMi,j is the distance between the templates matched by color data i and j, and Tdist is the preset distance threshold.
Figure PCTCN2016078566-appb-000037
Figure PCTCN2016078566-appb-000038
If the similarity is greater than a preset threshold Tsim, the matching is considered successful, which completes the identification process.
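The position check can be sketched as follows. Since the similarity formula itself appears only as an image above, counting the fraction of sensor pairs whose sensor distance DistS and matched-template distance DistM agree within Tdist is an assumed reading, and one-dimensional positions along the sensor bar are also an assumption.

```python
def distance_consistency(sensor_pos, template_pos, t_dist):
    """Fraction of channel pairs (i, j) for which the distance between
    sensors i and j (DistS) agrees, within t_dist, with the distance
    between the centers of their matched templates (DistM)."""
    n = len(sensor_pos)
    pairs = consistent = 0
    for i in range(n):
        for j in range(i + 1, n):
            dist_s = abs(sensor_pos[i] - sensor_pos[j])      # DistS i,j
            dist_m = abs(template_pos[i] - template_pos[j])  # DistM i,j
            pairs += 1
            consistent += abs(dist_s - dist_m) < t_dist
    return consistent / pairs if pairs else 0.0

def confirm_match(sensor_pos, template_pos, t_dist, t_sim):
    """Accept the matching only when the similarity exceeds Tsim."""
    return distance_consistency(sensor_pos, template_pos, t_dist) > t_sim
```

This rejects spurious per-channel matches whose template positions are geometrically inconsistent with the known sensor layout.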
In the embodiment of the present invention, a feature-template set is first generated from the document image; the color data are then preprocessed; the color data are then converted into hue data, and the set of stable sub-segments in the color data is found by the integral-map sliding-window method; the mean hue of each sub-segment is obtained and finally forms the feature vector of the color data; the feature vector extracted from the color data is matched against the feature-template set, and the recognition result is finally obtained. Since the embodiment of the present invention identifies the banknote by the trend features of the color data, the color-cast problem of the color data can be effectively overcome and accurate identification of the banknote achieved. A trend feature refers to the order relation between the means of two adjacent stable sub-segments in the feature vector corresponding to the color data. The embodiment of the invention simulates the working principle of the color sensor and extracts simulated color data from the color banknote image to form a feature-template set, which is fast and practical. The embodiment of the invention identifies by template matching and confirms the matching result by comparing the distances between the color sensors with the distances between the matched templates, finally achieving accurate identification, so the algorithm is fast and effective.
The above are merely preferred embodiments of the present invention. It should be noted that the preferred embodiments above should not be regarded as limiting the present invention, whose scope of protection is defined by the claims. Those of ordinary skill in the art can make several improvements and refinements without departing from the spirit and scope of the present invention, and such improvements and refinements shall also be regarded as falling within the scope of protection of the present invention.

Claims (10)

  1. A method for identifying a value bill, comprising:
    step 1: acquiring color data of the value bill to be detected by a color acquisition device comprising a plurality of color sensors, and preprocessing the acquired color data;
    step 2: extracting corresponding features from the preprocessed color data, wherein a feature extracted from the color data specifically refers to a one-dimensional vector composed of the means of all sub-segments, within the hue data corresponding to the color data, in which the hue varies little;
    step 3: matching the extracted features against the feature template set corresponding to each type of value bill to obtain corresponding matching scores, and taking the feature template with the highest score as the matched template of the color data, wherein the front-face and back-face images of a color bill are divided into a plurality of sub-regions, the simulated color data of each sub-region are obtained by simulating the working mode of a color sensor, the feature corresponding to the simulated color data of each sub-region is a feature template, and the set of feature templates corresponding to all sub-regions of the front-face and back-face images of the color value bill is the feature template set of the value bill;
    step 4: determining the type of the value bill according to the matching result.
  2. The method for identifying a value bill according to claim 1, wherein, before step 1, the method further comprises presetting several feature template sets corresponding to each type of value bill, and extracting the feature template set of the value bill from a true-color bill image in this step specifically comprises:
    step 01: dividing the color image of a given face of the bill into a plurality of sub-regions according to the complexity of the image information;
    step 02: simulating the working mode of a color sensor and converting each sub-region obtained by the division into color data;
    step 03: performing color space conversion on the converted color data to obtain the hue data of the region;
    step 04: locating the stable sub-segments in the hue data;
    step 05: calculating the hue mean of each stable sub-segment;
    step 06: the hue means of all stable sub-segments constituting the feature template corresponding to the region;
    step 07: the feature templates corresponding to all sub-regions of a given face constituting the feature template set of that face of the bill, and the feature template sets of all faces of the bill constituting the feature template set of the bill.
  3. The method for identifying a value bill according to claim 2, wherein in step 03 the simulated color data are converted into the HSL color space to obtain the hue data SH of the color data SS:
    SH = {sh_0, sh_1, ..., sh_j, ..., sh_L} (1 < j < L);
    the conversion is described as:
    Figure PCTCN2016078566-appb-100001
  4. The method for identifying a value bill according to claim 3, wherein locating the stable sub-segments in the hue data in step 04 is described as:
    calculating the integral image of the hue data SH:
    SMAP_i = {smap_0, smap_1, ..., smap_j, ..., smap_L} (1 < j < L);
    where:
    smap_j = Σ_{k=0}^{j} sh_k;
    searching the hue data for stable sub-segments with a sliding window:
    let the set of stable sub-segments of signal SH be:
    SPART_i = {spart_0, spart_1, ..., spart_s, ..., spart_SP} (1 < s < SP);
    where SP is the number of stable sub-segments in signal SH, and spart_s can be expressed as:
    spart_s = {st_s, end_s};
    where st_s and end_s denote the start position and end position of the stable sub-segment respectively, and:
    st_s is the first value of l satisfying:
    st_s = first_l( abs( 2×smap_{l+step/2} - (smap_{l+step} + smap_l) ) < thres ), (end_{s-1} < l < L);
    end_s is the last value of l satisfying the same stability condition:
    end_s = last_l( abs( 2×smap_{l+step/2} - (smap_{l+step} + smap_l) ) < thres ), (st_s < l < L);
    thres is a preset threshold measuring the stability of the signal within a segment.
  5. The method for identifying a value bill according to claim 4, wherein in step 05 the mean f_s of each stable sub-segment is:
    f_s = (smap_{end_s} - smap_{st_s}) / (end_s - st_s);
    the hue means of all stable sub-segments constitute the feature template corresponding to the region, expressed as:
    Figure PCTCN2016078566-appb-100005
    the features of the simulated color data of each region are extracted, finally forming the feature sets of the front and back faces of the banknote, expressed as:
    Figure PCTCN2016078566-appb-100006
    Figure PCTCN2016078566-appb-100007
    where:
    Figure PCTCN2016078566-appb-100008
    Figure PCTCN2016078566-appb-100009
  6. The method for identifying a value bill according to any one of claims 1 to 5, wherein in step 1 preprocessing the color data comprises: locating the start point and end point of the valid region of the color data, so as to locate the data acquired by the color sensors on the banknote; and filtering the located color data to remove noise; the preprocessed color data are expressed as:
    S_i = {R_i, G_i, B_i} (1 < i < M);
    Figure PCTCN2016078566-appb-100010
    Figure PCTCN2016078566-appb-100011
    Figure PCTCN2016078566-appb-100012
    where M is the number of color sensors (for the robustness of the method of this embodiment, M should be greater than 1), R_i, G_i and B_i are the red, green and blue components of signal channel i, and N_i is the signal length of color data i.
  7. The method for identifying a value bill according to claim 6, wherein in step 2 extracting features from the preprocessed color data comprises:
    step 21: performing color space conversion on the preprocessed color data to obtain hue data;
    step 22: locating the set of stable sub-segments in the hue data, a stable sub-segment being a segment in which the hue varies little;
    step 23: calculating the hue mean within each stable sub-segment, the hue means of all sub-segments in the stable sub-segment set constituting the feature vector of that channel of color data, the feature vector F_i being expressed as:
    Figure PCTCN2016078566-appb-100013
    where N is the number of color sensors.
  8. The method for identifying a value bill according to claim 7, wherein in step 3, when matching against the feature template set of each type of bill, the front-face and back-face template sets of the bill template set are matched separately, and forward and reverse matching are each performed, wherein the forward matching degree between color data feature F_i and the feature Sf_k of a template is described as:
    Figure PCTCN2016078566-appb-100014
    where flag marks front/back templates, T is a preset threshold, and S(z) is expressed as follows:
    Figure PCTCN2016078566-appb-100015
    the reverse matching degree between color data feature F_i and the feature Sf_k of a template is described as:
    Figure PCTCN2016078566-appb-100016
    where flag marks front/back templates, T is a preset threshold, and S'(z) is expressed as follows:
    Figure PCTCN2016078566-appb-100017
  9. The method for identifying a value bill according to claim 8, wherein step 3 further comprises:
    step 31: obtaining the distances between the color sensors from the position information of the plurality of color sensors in the color acquisition device, the position information of the color sensors referring to the relative positions between the color sensors obtained from the structural information of the color acquisition device;
    step 32: obtaining the distance between each pair of matched feature templates from the position information of the matched feature templates, the position information of a feature template referring to the relative positions between the centers of the sub-regions obtained by division when the feature template set was created;
    step 33: comparing whether the distances between the matched feature templates are consistent with the distances between the corresponding color sensors; if they are consistent, the match is considered successful; otherwise the match fails.
  10. The method for identifying a value bill according to claim 9, wherein in step 33 the similarity between the sensor distances and the corresponding matched-template distances is examined according to the following formulas, where DistS_{i,j} is the distance between the color sensors of color data i and j, DistM_{i,j} is the distance between the templates matched to color data i and j, and Tdist is a preset distance threshold:
    Figure PCTCN2016078566-appb-100018
    Figure PCTCN2016078566-appb-100019
    if the similarity is greater than a preset threshold Tsim, the match is considered successful; otherwise the match fails.
PCT/CN2016/078566 2015-04-13 2016-04-06 Value bill identifying method WO2016165574A1 (zh)

Priority Applications (4)

Application Number Priority Date Filing Date Title
RU2017135352A RU2668731C1 (ru) 2015-04-13 2016-04-06 Method for recognizing a banknote
EP16779549.1A EP3285210A4 (en) 2015-04-13 2016-04-06 Value bill identifying method
US15/564,936 US10235595B2 (en) 2015-04-13 2016-04-06 Value bill identifying method
HK18109044.7A HK1249626A1 (zh) 2018-07-12 Value bill identifying method

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510176330.0A CN104732231B (zh) 2015-04-13 2015-04-13 Value bill identifying method
CN201510176330.0 2015-04-13

Publications (1)

Publication Number Publication Date
WO2016165574A1 true WO2016165574A1 (zh) 2016-10-20

Family

ID=53456106

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/078566 WO2016165574A1 (zh) 2015-04-13 2016-04-06 Value bill identifying method

Country Status (6)

Country Link
US (1) US10235595B2 (zh)
EP (1) EP3285210A4 (zh)
CN (1) CN104732231B (zh)
HK (1) HK1249626A1 (zh)
RU (1) RU2668731C1 (zh)
WO (1) WO2016165574A1 (zh)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106803307A * 2016-12-16 2017-06-06 恒银金融科技股份有限公司 Template matching-based method for recognizing banknote denomination and orientation

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104732231B (zh) 2015-04-13 2019-02-26 广州广电运通金融电子股份有限公司 Value bill identifying method
CN105894655B (zh) 2016-04-25 2018-05-22 浙江大学 Banknote detection and recognition method in complex environments based on an RGB-D camera
CN107689006B (zh) 2017-03-13 2020-02-14 平安科技(深圳)有限公司 Claim bill recognition method and apparatus
WO2019210237A1 (en) 2018-04-27 2019-10-31 Alibaba Group Holding Limited Method and system for performing machine learning
CN110674863B (zh) 2019-09-19 2022-06-21 北京迈格威科技有限公司 Hamming code recognition method and apparatus, and electronic device
CN111444793A (zh) 2020-03-13 2020-07-24 安诚迈科(北京)信息技术有限公司 OCR-based bill recognition method, device, storage medium and apparatus
CN111899411B (zh) 2020-08-14 2022-02-25 中国工商银行股份有限公司 Bill data recognition method and apparatus

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1507232A2 (en) * 2003-07-15 2005-02-16 STMicroelectronics S.r.l. Method for classifying a digital image
CN102005078A * 2010-12-23 2011-04-06 北京新岸线软件科技有限公司 Paper money and ticket recognition method and device
CN103208004A * 2013-03-15 2013-07-17 北京英迈杰科技有限公司 Method and device for automatic recognition and extraction of bill information regions
CN104156732A * 2014-08-01 2014-11-19 北京利云技术开发公司 Paper authenticity identification system and method
CN104732231A * 2015-04-13 2015-06-24 广州广电运通金融电子股份有限公司 Value bill identifying method

Family Cites Families (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
AUPN310195A0 (en) * 1995-05-22 1995-06-15 Canon Kabushiki Kaisha Template formation method
US20050276458A1 (en) * 2004-05-25 2005-12-15 Cummins-Allison Corp. Automated document processing system and method using image scanning
JP2001126107A (ja) * 1999-10-29 2001-05-11 Nippon Conlux Co Ltd Paper sheet identification method and device
GB0313002D0 (en) 2003-06-06 2003-07-09 Ncr Int Inc Currency validation
EP1730705A1 (en) 2004-03-09 2006-12-13 Council Of Scientific And Industrial Research Improved fake currency detector using visual and reflective spectral response
JP5354842B2 (ja) * 2006-05-26 2013-11-27 キヤノン株式会社 Image processing method and image processing apparatus
US7916924B2 (en) * 2006-09-19 2011-03-29 Primax Electronics Ltd. Color processing method for identification of areas within an image corresponding to monetary banknotes
EP1944737A1 (en) * 2006-12-29 2008-07-16 NCR Corporation Validation template for valuable media of multiple classes
EP2338150B1 (de) * 2008-08-05 2021-03-17 Giesecke+Devrient Currency Technology GmbH Security arrangement
US8780206B2 (en) * 2008-11-25 2014-07-15 De La Rue North America Inc. Sequenced illumination
US8265346B2 (en) * 2008-11-25 2012-09-11 De La Rue North America Inc. Determining document fitness using sequenced illumination
CN102054168B (zh) * 2010-12-23 2012-11-14 武汉大学苏州研究院 Circular seal recognition method for valuable bills
DE102010055974A1 (de) * 2010-12-23 2012-06-28 Giesecke & Devrient Gmbh Method and device for determining a class reference data set for the classification of value documents
EP2505356A1 (en) * 2011-03-30 2012-10-03 KBA-NotaSys SA Device for offline inspection and color measurement of printed sheets
US8531652B2 (en) * 2011-04-08 2013-09-10 Dri-Mark Products Three way desktop UV counterfeit detector
CN102890840B (zh) * 2012-08-22 2016-03-23 山东新北洋信息技术股份有限公司 Banknote identification method and device
CN103136845B (zh) * 2013-01-23 2015-09-16 浙江大学 RMB counterfeit detection method based on crown word number image features

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1507232A2 (en) * 2003-07-15 2005-02-16 STMicroelectronics S.r.l. Method for classifying a digital image
CN102005078A * 2010-12-23 2011-04-06 北京新岸线软件科技有限公司 Paper money and ticket recognition method and device
CN103208004A * 2013-03-15 2013-07-17 北京英迈杰科技有限公司 Method and device for automatic recognition and extraction of bill information regions
CN104156732A * 2014-08-01 2014-11-19 北京利云技术开发公司 Paper authenticity identification system and method
CN104732231A * 2015-04-13 2015-06-24 广州广电运通金融电子股份有限公司 Value bill identifying method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP3285210A4 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106803307A * 2016-12-16 2017-06-06 恒银金融科技股份有限公司 Template matching-based method for recognizing banknote denomination and orientation
CN106803307B * 2016-12-16 2023-05-02 恒银金融科技股份有限公司 Template matching-based method for recognizing banknote denomination and orientation

Also Published As

Publication number Publication date
CN104732231A (zh) 2015-06-24
EP3285210A1 (en) 2018-02-21
US20180101749A1 (en) 2018-04-12
CN104732231B (zh) 2019-02-26
HK1249626A1 (zh) 2018-11-02
EP3285210A4 (en) 2018-06-20
RU2668731C1 (ru) 2018-10-02
US10235595B2 (en) 2019-03-19

Similar Documents

Publication Publication Date Title
WO2016165574A1 (zh) Value bill identifying method
Silva et al. License plate detection and recognition in unconstrained scenarios
JP4932177B2 (ja) Coin classification device and coin classification method
CN104182973A Image copy-move detection method based on the circular description operator CSIFT
CN111126240B Three-channel feature fusion face recognition method
US8744189B2 (en) Character region extracting apparatus and method using character stroke width calculation
JP5372183B2 (ja) Coin classification device and coin classification method
Liang et al. A new wavelet-Laplacian method for arbitrarily-oriented character segmentation in video text lines
Arya et al. Fake currency detection
CN113011426A Certificate recognition method and apparatus
CN103198299A Face recognition method combining multi-directional multi-scale features with Gabor phase projection features
Raghunandan et al. New sharpness features for image type classification based on textual information
WO2016011640A1 (zh) Identity recognition method based on hand image texture
Feng et al. Scene text detection based on multi-scale SWT and edge filtering
Nguyen et al. Robust car license plate localization using a novel texture descriptor
Ren et al. A novel scene text detection algorithm based on convolutional neural network
Creusen et al. A semi-automatic traffic sign detection, classification, and positioning system
Raghavendra et al. Improved face recognition by combining information from multiple cameras in Automatic Border Control system
Xu et al. Coin recognition method based on SIFT algorithm
KR101306576B1 (ko) Face recognition system robust to illumination changes considering difference components
Wendel et al. Estimating hidden parameters for text localization and recognition
He et al. Scene text detection based on skeleton-cut detector
Nguyen et al. Scene text detection based on structural features
Ali et al. Detection and extraction of pantograph region from bank cheque images
Tang et al. A novel similar background components connection algorithm for colorful text detection in natural images

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16779549

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2017135352

Country of ref document: RU

WWE Wipo information: entry into national phase

Ref document number: 15564936

Country of ref document: US

NENP Non-entry into the national phase

Ref country code: DE