CN110765910A - Bill region identification method and device in dense scene - Google Patents

Bill region identification method and device in dense scene

Info

Publication number
CN110765910A
Authority
CN
China
Prior art keywords
bill
area
detection model
trained
sample set
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910973913.4A
Other languages
Chinese (zh)
Inventor
张汉宁
苏斌
弋渤海
田福康
王长辉
张俊杰
任会
方红超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xi'an Network Computing Data Technology Co Ltd
Original Assignee
Xi'an Network Computing Data Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xi'an Network Computing Data Technology Co Ltd filed Critical Xi'an Network Computing Data Technology Co Ltd
Priority to CN201910973913.4A priority Critical patent/CN110765910A/en
Publication of CN110765910A publication Critical patent/CN110765910A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00 Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/40 Document-oriented image-based pattern recognition
    • G06V30/41 Analysis of document content
    • G06V30/414 Extracting the geometrical structure, e.g. layout tree; Block segmentation, e.g. bounding boxes for graphics or text
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • Artificial Intelligence (AREA)
  • General Physics & Mathematics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Computation (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention belongs to the technical field of image processing and provides a bill region identification method and device for dense scenes, comprising the steps of: obtaining a plurality of bill shooting pictures; obtaining all labeling areas on the bill shooting pictures, wherein each labeling area comprises bill area information; dividing the labeled bill shooting pictures by number into sample set one and sample set two; obtaining a trained bill region detection model from sample set one; obtaining a tested bill region detection model from sample set two and the trained bill region detection model; and obtaining all the bill area information on the plurality of bill shooting pictures with the tested bill region detection model, wherein the bill area information comprises the center coordinates, the height and the width of the bill area frame. Through this technical scheme, the problem that bills are difficult to identify in dense scenes in the prior art is solved.

Description

Bill region identification method and device in dense scene
Technical Field
The invention belongs to the technical field of image processing, and relates to a bill area identification method and device in a dense scene.
Background
In the finance and taxation field, business personnel often photograph various types of bills with imaging equipment to obtain the corresponding bill picture information. Such a bill picture generally contains a plurality of bill regions, and the bill sample in each region is different; when there are dozens of bill regions or even more, detecting all of them in the picture becomes very difficult. The schemes currently available in the field can only handle the case of few bill regions; when there are many bill regions the effect is very poor, which shows itself in two ways: on the one hand, a large number of bill regions are missed or over-detected; on the other hand, a large number of overlapping areas exist between adjacent bill regions. In extreme conditions, for example when 50-100 different bill regions are arranged compactly in one bill picture, traditional bill region detection methods often perform poorly and cannot meet the usage requirements. How to segment the bill regions in a bill picture when the bill regions are very dense is therefore very challenging, and a new technique is needed.
Disclosure of Invention
The invention provides a bill region identification method and device in a dense scene, and solves the problem that bills are difficult to identify in the dense scene in the prior art.
The technical scheme of the invention is realized as follows:
A bill region identification method and device in a dense scene, the method comprising the following steps:
S1: acquiring a plurality of bill shooting pictures;
S2: labeling each bill shooting picture and obtaining labeling areas, wherein each labeling area comprises bill area information;
S3: dividing the labeled bill shooting pictures by number into sample set one and sample set two;
S4: training the initial bill region detection model with sample set one and obtaining a trained bill region detection model;
S5: testing the trained bill region detection model with sample set two and obtaining a tested bill region detection model;
S6: obtaining all bill areas on the plurality of bill shooting pictures with the tested bill region detection model, wherein the bill area information of each bill area comprises the center coordinates, the height and the width of the bill area frame.
As a further technical scheme, the training method of the bill region detection model comprises the following steps:
S41: acquiring the bill area information of all labeling areas in sample set one;
S42: judging whether each labeling area is empty and obtaining the bill type of the labeling area;
S43: calculating the overlap between the bill area and the labeling area;
S44: combining the results of steps S42-S43 as joint constraints to obtain the optimal labeling area.
As a further technical solution, the method for acquiring the tested bill region detection model comprises the following steps:
S51: obtaining all optimal labeling areas in sample set two with the trained bill region detection model;
S52: obtaining the deviation of the bill area information of each actual bill area from the optimal labeling area;
S53: obtaining the tested bill region detection model, whose deviation meets the preset threshold range, by adjusting the total number of bill shooting pictures.
As a further technical scheme, the overlap between the bill area and the labeling area is obtained by calculating the intersection-over-union of each labeling area with the bill area.
As a further technical scheme, the method for acquiring the actual bill area information comprises the following steps:
inputting the bill shooting pictures into the tested bill region detection model, starting the Web interface service of the tested bill region detection model, and returning the bill area information of each bill shooting picture in Base64-encoded form.
As a further technical scheme, the sizes of sample set one and sample set two are respectively 80% and 20% of the total number of pictures.
As a further technical scheme, each labeling area is rectangular.
As a further technical solution, the method for obtaining the tested bill region detection model with deviation meeting the preset threshold range comprises:
if the deviation is larger than the preset threshold range, the total number of bill shooting pictures is increased, the initial bill region detection model is then trained and tested again, and the tested bill region detection model is obtained again, until the deviation is smaller than the preset threshold.
A bill region identification device in a dense scene comprises:
the acquisition module is used for acquiring a plurality of bill shot pictures;
the marking module is used for acquiring all marking areas on the bill shot picture;
the classification module is used for dividing the labeled bill shooting pictures by number into sample set one and sample set two;
the training module is used for obtaining a trained bill region detection model from sample set one;
the test module is used for testing the trained bill region detection model with sample set two and obtaining a tested bill region detection model;
and the working module is used for obtaining all the bill area information on the plurality of bill shooting pictures with the tested bill region detection model, wherein the bill area information comprises the center coordinates, the height and the width of the bill area frame.
The working principle and the beneficial effects of the invention are as follows:
1. In the invention, an innovative solution is provided for this problem; the method is also suitable for scenes with few bill areas and generalizes well to other similar applications. In the first step, a plurality of bill shooting pictures are obtained by photographing bills. A bill shooting picture file uploading service is started, and multiple types of paper bills are arranged together in one shot (more than 50 bill areas and more than 3 bill types are required per picture), with no overlap between the bill areas. A photographer shoots the bills with an imaging device, which includes but is not limited to a mobile phone, a digital camera or a scanner, and stores the picture files locally; a browser is then opened, the uploading service address is entered, and the shot bill picture files are selected from the album of the imaging device and uploaded. The batch of paper bills and the shooting background are then changed, and the above process is repeated until the number of bill shooting picture files reaches the preset number.
Secondly, the bill regions of all bill pictures in the dense-scene bill shooting picture set are labeled with a picture labeling tool from the field of deep learning.
Thirdly, 80% of the picture files in the labeled bill shooting picture set are randomly selected to form sample set one, namely the training sample set, and the remaining 20% of the picture files form sample set two, namely the testing sample set. Sample set one is used for training the bill region detection model, and sample set two is used for testing the correctness of the trained bill region detection model.
Fourthly, sample set one is fed as input data into a pre-constructed feature pyramid network with three coarsening layers, which outputs the frame information of the bills and the accuracy information of the labeled areas, and the bill region detection model is trained.
Fifthly, sample set two is input to test the trained bill region detection model; when the test result meets the threshold, the tested bill region detection model is obtained.
Sixthly, a bill shooting picture is input into the tested bill region detection model to obtain the accurate information of each bill area, including the center coordinates, the height and the width of the bill area.
2. In the invention, a feature pyramid network with three coarsening layers is constructed, and it outputs three kinds of bill information: (1) the frame information of each labeled area, represented by a (x, y, h, w) 4-tuple, wherein x and y are the horizontal and vertical coordinates of the frame center, h is the frame height and w is the frame width; (2) judgment information: each bill category is labeled, and the category scores are converted by the softmax function into probability values in (0, 1) whose sum over all bill categories is 1, from which it is judged whether each labeled area is an invoice area and, if so, of which invoice type; (3) overlap information: the overlap between each detection area and the actual bill area is calculated, and the labeled areas whose overlap meets the threshold range are retained, their number being large in a dense scene. (4) The loss functions of the three heads are combined into a joint loss function (a sketch of such a combination is given below). (5) The area closest to each target area is selected by calculating an intersection-over-union function; in a dense scene the candidate areas are heavily duplicated and each is close to a target area, so the duplicates cannot simply be removed; the duplicated frames are instead reduced to the optimal ones with a maximum expectation algorithm, so that each actual bill area corresponds to exactly one labeled area.
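The combination of the three head losses can be illustrated with a short sketch. This is only an assumption of how such a joint loss might be written in PyTorch; the loss choices follow the description below (cross-entropy for the classification head, a Euclidean-style loss for the regression head), and the tensor shapes and names are illustrative, not the patent's implementation.

```python
# Hedged sketch (assumption): combining the three head losses into one joint loss.
import torch.nn as nn

regression_loss = nn.MSELoss()               # Euclidean-style loss for the (x, y, h, w) box head
classification_loss = nn.CrossEntropyLoss()  # standard cross-entropy for the bill-category head
soft_iou_loss = nn.BCELoss()                 # assumed form: predicted overlap vs. actual IoU in [0, 1]

def joint_loss(pred_boxes, gt_boxes, pred_logits, gt_labels, pred_iou, gt_iou):
    """L = L_regression + L_classification + L_Soft-IoU, as in the formula given below."""
    l_reg = regression_loss(pred_boxes, gt_boxes)        # (N, 4) float tensors
    l_cls = classification_loss(pred_logits, gt_labels)  # (N, C) logits vs. (N,) class indices
    l_iou = soft_iou_loss(pred_iou, gt_iou)              # (N,) values, both in [0, 1]
    return l_reg + l_cls + l_iou
```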
3. In the invention, sample set two is input into the trained bill region detection model to obtain the information of the optimal labeling areas; the actual bill area information is compared with the information of the optimal labeling areas, and if the deviation is within the threshold range, the tested bill region detection model meeting the requirements is obtained; if the deviation exceeds the threshold range, the total number of pictures in the sample sets is increased, and training and testing are carried out again until the deviation is smaller than the threshold.
4. In the invention, a bill shooting picture set is acquired by the acquisition module and labeled by the labeling module; the labeled bill shooting pictures are divided into two parts, sample set one and sample set two; sample set one is input into the training module to train the bill region detection model, and sample set two is then input into the test module to test the trained bill region detection model until the preset threshold is met, yielding the tested bill region detection model; a bill shooting picture can then be input into the working module to carry out the actual bill area segmentation.
Drawings
The present invention will be described in further detail with reference to the accompanying drawings and specific embodiments.
FIG. 1 is a schematic view of a bill of the present invention;
FIG. 2 is a block diagram of a bill region identification method according to the present invention;
FIG. 3 is a schematic diagram of the bill region detection model training and testing process of the present invention;
in the figure:
1-bill shooting picture, 2-labeling area, 3-bill area information, 4-sample set one, 5-sample set two, 6-trained bill region detection model, 7-tested bill region detection model, 8-optimal labeling area, 10-initial bill region detection model, 11-bill area.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
As shown in fig. 1-3, the present invention provides a method for identifying a bill area in a dense scene, which comprises the following steps:
S1: acquiring a plurality of bill shooting pictures (1);
S2: labeling each bill shooting picture (1) and obtaining labeling areas (2), wherein each labeling area (2) comprises bill area information (3);
S3: dividing the plurality of labeled bill shooting pictures (1) by number into sample set one (4) and sample set two (5);
S4: training the initial bill region detection model (10) with sample set one (4) and obtaining a trained bill region detection model (6);
S5: testing the trained bill region detection model (6) with sample set two (5) and obtaining a tested bill region detection model (7);
S6: obtaining all bill areas (11) on the plurality of bill shooting pictures (1) with the tested bill region detection model (7), wherein the bill area information (3) of each bill area (11) comprises the center coordinates, the height and the width of the frame of the bill area (11).
In this embodiment, in the first step, a plurality of bill shooting pictures (1) are obtained by photographing bills. A bill shooting picture file uploading service is started, and multiple types of paper bills are arranged together in one shot (more than 50 bill areas and more than 3 bill types are required per picture), with no overlap between the bill areas. A photographer shoots the bills with an imaging device, which includes but is not limited to a mobile phone, a digital camera or a scanner, and stores the picture files locally; a browser is then opened, the uploading service address is entered, and the shot bill picture files are selected from the album of the imaging device and uploaded. The batch of paper bills and the shooting background are then changed, and the above process is repeated until the number of bill shooting picture (1) files reaches the preset number.
In the second step, the bill regions of all bill shooting pictures (1) in the dense-scene bill picture set are labeled with a picture labeling tool from the field of deep learning.
In the third step, 80% of the picture files in the labeled bill shooting picture set are randomly selected to form sample set one (4), namely the training sample set, and the remaining 20% of the picture files form sample set two (5), namely the testing sample set. Sample set one (4) is used for training the bill region detection model, and sample set two (5) is used for testing the correctness of the trained bill region detection model (6).
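The random 80%/20% division in the third step can be sketched as follows; the function name and seed are illustrative assumptions, not part of the patent.

```python
# Hedged sketch (assumption): randomly splitting the labeled picture files into
# sample set one (training, 80%) and sample set two (testing, 20%).
import random

def split_samples(picture_files, train_ratio=0.8, seed=42):
    files = list(picture_files)
    random.Random(seed).shuffle(files)   # random selection of the training files
    cut = int(len(files) * train_ratio)
    sample_set_one = files[:cut]         # 80%: used to train the bill region detection model
    sample_set_two = files[cut:]         # 20%: used to test the trained model
    return sample_set_one, sample_set_two
```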
In the fourth step, sample set one (4) is fed as input data into a pre-constructed feature pyramid network with three coarsening layers, which outputs the frame information of the bills and the accuracy information of the labeling areas (2), and the initial bill region detection model (10) is trained.
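The patent does not spell out the network layers, so the following PyTorch sketch is only a generic illustration of a feature pyramid with three coarsening (downsampling) levels, not the patent's actual architecture; the layer sizes and the top-down merge are assumptions.

```python
# Hedged sketch (assumption): a generic three-level feature pyramid.
import torch.nn as nn
import torch.nn.functional as F

class TinyFPN(nn.Module):
    def __init__(self, channels=64):
        super().__init__()
        # bottom-up path: three coarsening stages, each halving the spatial resolution
        self.c1 = nn.Conv2d(3, channels, 3, stride=2, padding=1)
        self.c2 = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
        self.c3 = nn.Conv2d(channels, channels, 3, stride=2, padding=1)
        # lateral 1x1 convolutions for the top-down merge
        self.l1 = nn.Conv2d(channels, channels, 1)
        self.l2 = nn.Conv2d(channels, channels, 1)

    def forward(self, x):
        f1 = F.relu(self.c1(x))   # 1/2 resolution
        f2 = F.relu(self.c2(f1))  # 1/4 resolution
        f3 = F.relu(self.c3(f2))  # 1/8 resolution (coarsest level)
        p3 = f3
        p2 = self.l2(f2) + F.interpolate(p3, size=f2.shape[-2:], mode="nearest")
        p1 = self.l1(f1) + F.interpolate(p2, size=f1.shape[-2:], mode="nearest")
        return p1, p2, p3         # pyramid levels fed to the detection heads
```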
In the fifth step, sample set two (5) is input to test the trained bill region detection model (6); when the test result meets the threshold, the tested bill region detection model (7) is obtained.
In the sixth step, a bill shooting picture (1) is input into the tested bill region detection model (7) to obtain the accurate information (3) of each bill area, including the center coordinates, the height and the width of the bill area.
Further, the method for acquiring the trained bill region detection model (6) comprises the following steps:
S41: acquiring the bill area information (3) of all labeling areas (2) in sample set one (4);
S42: judging whether each labeling area (2) is empty and obtaining the bill type of the labeling area (2);
S43: calculating the overlap between the bill area (11) and the labeling area (2);
S44: combining the results of steps S42-S43 as joint constraints to obtain the optimal labeling area (8).
In this embodiment, a feature pyramid network with three coarsening layers is constructed; after the convolution calculations, the bill-related information is output through fully connected layers. There are three kinds of output in total, respectively:
(1) the frame information of the labeling area (2), represented by a (x, y, h, w) 4-tuple, wherein x and y are the horizontal and vertical coordinates of the frame center, h is the frame height, and w is the frame width;
(2) judgment information: each bill category is labeled, and the category scores are converted by the softmax function into probability values in (0, 1) whose sum over all bill categories is 1; from these it is judged whether each labeling area (2) is an actual invoice area and, if so, of which invoice type.
(3) Overlap information: the overlap between each labeling area (2) and the actual bill area (11) is calculated, and the labeling areas (2) whose overlap meets the threshold range are retained; in a dense scene their number is large. (4) The loss functions of the three output heads are combined into a joint loss function:
L = L_regression head + L_classification head + L_Soft-IoU head
wherein L_classification head is a standard cross-entropy loss function and L_regression head is a Euclidean loss function. (5) The area closest to each target area is selected by calculating the intersection-over-union function. In a dense scene such areas are heavily duplicated: every candidate area is close to a target area, so the duplicates cannot simply be discarded. The duplicated frames are therefore reduced to the optimal ones with a maximum expectation (EM) algorithm, so that each actual bill area corresponds to exactly one labeling area.
Specifically, each detected bill area is modeled as a two-dimensional Gaussian N(p; μ_i, Σ_i), where p is a 2-dimensional image coordinate point, μ_i = (x_i, y_i) is the position coordinate of the center point of the i-th detection target, and the diagonal covariance Σ_i is derived from the size (h_i, w_i) of the detection bounding box. From all the detections a joint Gaussian mixture is constructed,
f(p) = Σ_i α_i · N(p; μ_i, Σ_i),
with coefficients α_i (the coefficients of the reduced mixture are denoted β_j). The goal is to find a set of K Gaussian distributions
g(p) = Σ_{j=1..K} β_j · N(p; μ'_j, Σ'_j)
such that the distance d(f, g) between the two mixtures reaches a minimum, where KL represents the Kullback-Leibler distance used to measure this distance. The minimum of d(f, g) is solved with the maximum expectation (EM) algorithm: the responsibility of every reduced component for every detection is first calculated, and the parameters β_j, μ'_j and Σ'_j are then re-estimated from these responsibilities (the exact formulas appear as equation images in the original publication). These two steps are repeated until d(f, g) ≤ 10^-10, at which point the calculation stops. Finally, K is the total number of bill areas, μ'_j is the center point of the j-th bill area, and the height and width of the bill area are the positive values obtained by multiplying the diagonal elements of Σ'_j by 16 and taking the square root.
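As a numerical illustration of the mixture reduction just described, the sketch below fits K Gaussians to the N detection Gaussians with EM-style updates. The initialization, the exact update formulas and the fixed iteration count are assumptions for illustration; they may differ from the formulas given as equation images in the original publication, which iterates until d(f, g) ≤ 10^-10.

```python
# Hedged sketch (assumption): reducing N detection Gaussians to K bill-area Gaussians.
import numpy as np

def reduce_mog(alpha, mu, sigma, K, n_iter=100, eps=1e-12):
    """alpha: (N,) detection weights; mu: (N, 2) box centers (x, y);
    sigma: (N, 2) diagonal variances, e.g. ((h/4)**2, (w/4)**2); K: number of bill areas."""
    order = np.argsort(alpha)[::-1][:K]          # initialise from the K strongest detections
    beta, mu_k, sigma_k = alpha[order].copy(), mu[order].copy(), sigma[order].copy()
    N = len(alpha)
    for _ in range(n_iter):                      # fixed iteration count for simplicity
        # E-step: responsibility of reduced component k for detection i (diagonal Gaussians)
        log_r = np.zeros((N, K))
        for k in range(K):
            diff2 = (mu - mu_k[k]) ** 2
            log_r[:, k] = (np.log(beta[k] + eps)
                           - 0.5 * np.sum(np.log(2 * np.pi * sigma_k[k]) + diff2 / sigma_k[k], axis=1))
        log_r -= log_r.max(axis=1, keepdims=True)
        r = np.exp(log_r)
        r /= r.sum(axis=1, keepdims=True)
        # M-step: re-estimate the coefficients beta_j, centers mu'_j and covariances Sigma'_j
        w = alpha[:, None] * r
        beta = w.sum(axis=0) + eps
        mu_k = (w.T @ mu) / beta[:, None]
        for k in range(K):
            diff2 = (mu - mu_k[k]) ** 2
            sigma_k[k] = (w[:, k] @ (sigma + diff2)) / beta[k]
    # height and width recovered as in the text: multiply the diagonal by 16, take the root
    heights_widths = np.sqrt(16.0 * sigma_k)
    return mu_k, heights_widths, beta
```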
Further, the method for acquiring the tested bill region detection model (7) comprises the following steps:
S51: obtaining all optimal labeling areas (8) in sample set two (5) with the trained bill region detection model (6);
S52: obtaining the deviation of the bill area information (3) of each actual bill area (11) from the optimal labeling area (8);
S53: obtaining the tested bill region detection model (7), whose deviation meets the preset threshold range, by adjusting the total number of bill shooting pictures (1).
In this embodiment, sample set two (5) is input into the trained bill region detection model (6) to obtain the information of the optimal labeling areas (8), and the actual bill area (11) information is compared with the information of the optimal labeling areas (8). If the deviation is within the threshold range, the tested bill region detection model (7) meeting the requirements is obtained; if the deviation exceeds the threshold range, the total number of pictures in the sample sets is increased, and training and testing are carried out again until the deviation is smaller than the threshold.
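The test-and-augment loop of this embodiment can be sketched as follows. `train`, `evaluate` and `collect_more_pictures` are hypothetical helpers standing in for the training, testing and picture-collection steps described above; they are not functions defined by the patent.

```python
# Hedged sketch (assumption): obtain the tested model by retraining until the deviation
# between predicted bill areas and the optimal labeling areas meets the threshold.
def build_tested_model(sample_set_one, sample_set_two, deviation_threshold,
                       train, evaluate, collect_more_pictures):
    while True:
        model = train(sample_set_one)                # train on sample set one
        deviation = evaluate(model, sample_set_two)  # compare with the optimal labeling areas
        if deviation <= deviation_threshold:         # deviation within the preset range
            return model                             # this is the tested bill region detection model
        # otherwise enlarge the picture set, split it again, and retrain
        sample_set_one, sample_set_two = collect_more_pictures()
```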
Further, the overlap between the bill area (11) and the labeling area (2) is obtained by calculating the intersection-over-union of each labeling area (2) with the bill area (11).
In this embodiment, the intersection-over-union function (the Soft-IoU function) is computed over each batch, where n denotes the number of samples in the batch and IoU_i denotes the intersection-over-union between the i-th actual bill area and its labeling area.
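For reference, an intersection-over-union computation for boxes in the (center x, center y, height, width) format used above might look like the sketch below; this is a generic formulation, not the patent's code.

```python
# Hedged sketch: IoU between an actual bill area and a labeling area.
def iou(box_a, box_b):
    def to_corners(box):
        x, y, h, w = box
        return x - w / 2, y - h / 2, x + w / 2, y + h / 2
    ax1, ay1, ax2, ay2 = to_corners(box_a)
    bx1, by1, bx2, by2 = to_corners(box_b)
    inter_w = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    inter_h = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = inter_w * inter_h
    union = box_a[2] * box_a[3] + box_b[2] * box_b[3] - inter
    return inter / union if union > 0 else 0.0
```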
Further, the method for obtaining the actual bill area information (3) comprises: inputting the bill shooting pictures (1) into the tested bill region detection model (7), starting the Web interface service of the tested bill region detection model (7), and returning the bill area information (3) of each bill shooting picture (1) in Base64-encoded form.
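A minimal sketch of such a Web interface service, assuming Flask, is given below; `detect_bill_regions` is a hypothetical stand-in for the tested bill region detection model, and the endpoint name and response layout are illustrative only.

```python
# Hedged sketch (assumption): a Web interface that accepts an uploaded bill shooting
# picture and returns the bill area information in Base64-encoded form.
import base64
import json
from flask import Flask, request

app = Flask(__name__)

def detect_bill_regions(image_bytes):
    """Hypothetical stand-in: run the tested bill region detection model and
    return a list of (x, y, h, w) tuples, one per bill area."""
    raise NotImplementedError("plug in the tested bill region detection model here")

@app.route("/detect", methods=["POST"])
def detect():
    image_bytes = request.files["image"].read()   # the uploaded bill shooting picture
    regions = detect_bill_regions(image_bytes)
    payload = json.dumps([{"x": x, "y": y, "h": h, "w": w} for (x, y, h, w) in regions])
    return base64.b64encode(payload.encode("utf-8"))  # Base64-encoded bill area information

# app.run(host="0.0.0.0", port=8080)  # start the Web interface service
```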
Further, the sizes of sample set one (4) and sample set two (5) are respectively 80% and 20% of the total number of pictures.
In this embodiment, sample set one (4) is the training sample set and sample set two (5) is the testing sample set; the bill shooting pictures (1) are randomly divided by number into sample set one (4) and sample set two (5), which are used respectively for training and testing the initial bill region detection model (10).
Further, each labeling area (2) is rectangular.
A bill region identification device in a dense scene comprises:
the acquisition module is used for acquiring a plurality of bill shot pictures;
the marking module is used for acquiring all marking areas on the bill shot picture;
the classification module is used for dividing the labeled bill shooting pictures by number into sample set one and sample set two;
the training module is used for obtaining a trained bill region detection model from sample set one;
the test module is used for testing the trained bill region detection model with sample set two and obtaining a tested bill region detection model;
and the working module is used for obtaining all the bill area information on the plurality of bill shooting pictures with the tested bill region detection model, wherein the bill area information comprises the center coordinates, the height and the width of the bill area frame.
In this embodiment, the bill shooting picture set is acquired by the acquisition module and labeled by the labeling module; the classification module divides the labeled bill shooting pictures into two parts, sample set one and sample set two; sample set one is input into the training module to train the bill region detection model, and sample set two is then input into the test module to test the trained bill region detection model until the preset threshold is met, yielding the tested bill region detection model; a bill shooting picture can then be input into the working module to carry out the actual bill area segmentation.
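As an illustration only, the modules listed above could be wired together roughly as in the sketch below; the callables are hypothetical stand-ins, not the patent's implementation.

```python
# Hedged sketch (assumption): wiring the acquisition, labeling, classification,
# training, test and working modules of the device together.
class BillRegionRecognitionDevice:
    def __init__(self, acquire, label, split, train, test):
        self.acquire, self.label, self.split = acquire, label, split
        self.train, self.test = train, test
        self.tested_model = None

    def fit(self):
        pictures = self.acquire()                    # acquisition module
        labeled = self.label(pictures)               # labeling module
        set_one, set_two = self.split(labeled)       # classification module (80% / 20%)
        trained_model = self.train(set_one)          # training module
        self.tested_model = self.test(trained_model, set_two)  # test module
        return self.tested_model

    def work(self, picture):
        # working module: all bill area information (center coordinates, height, width)
        return self.tested_model(picture)
```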
The present invention is not limited to the above preferred embodiments, and any modifications, equivalent substitutions, improvements, etc. within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A bill area identification method under a dense scene is characterized by comprising the following steps:
S1: acquiring a plurality of bill shooting pictures (1);
S2: labeling each bill shooting picture (1) and obtaining labeling areas (2), wherein each labeling area (2) comprises bill area information (3);
S3: dividing the labeled bill shooting pictures (1) by number into sample set one (4) and sample set two (5);
S4: training an initial bill region detection model (10) with sample set one (4) and obtaining a trained bill region detection model (6);
S5: testing the trained bill region detection model (6) with sample set two (5) and obtaining a tested bill region detection model (7);
S6: obtaining all bill areas (11) on the plurality of bill shooting pictures (1) with the tested bill region detection model (7), wherein the bill area information (3) of each bill area (11) comprises the center coordinates, the height and the width of the frame of the bill area (11).
2. The bill area recognition method under the dense scene as claimed in claim 1, wherein the acquisition method of the trained bill area detection model (6) comprises the following steps:
S41: acquiring the bill region information (3) of all the labeling areas (2) in sample set one (4);
S42: judging whether each labeling area (2) is empty and obtaining the bill type of the labeling area (2);
S43: calculating the overlap between the bill area (11) and the labeling area (2);
S44: combining the results of steps S42-S43 as joint constraints to obtain the optimal labeling area (8).
3. The bill area recognition method in a dense scene as claimed in claim 1, wherein the acquisition method of the tested bill area detection model (7) comprises the following steps:
S51: obtaining all optimal labeling areas (8) in sample set two (5) with the trained bill area detection model (6);
S52: obtaining the deviation of the bill area information (3) of each actual bill area (11) from the optimal labeling area (8);
S53: obtaining the tested bill area detection model (7), whose deviation meets the preset threshold range, by adjusting the total number of bill shooting pictures (1).
4. The method for identifying the bill area in the dense scene according to claim 1, wherein the overlapping degree of the marking area (2) and the bill area (11) is obtained by calculating the intersection ratio of each marking area (2) and the bill area (11).
5. The method for identifying bill regions in dense scenes according to claim 1, wherein the method for obtaining the actual bill region information (3) comprises the following steps:
inputting the bill shooting pictures (1) into the tested bill region detection model (7), starting a Web interface service of the tested bill region detection model (7), and returning the bill area information (3) of each bill shooting picture (1) in Base64-encoded form.
6. The method for identifying bill sections in dense scenes according to claim 1, wherein the number of the sample set one (4) and the number of the sample set two (5) are respectively 80% and 20% of the total amount.
7. The bill area identification method in the dense scene according to claim 1, wherein each of the labeling areas (2) is rectangular.
8. The bill region identification method in a dense scene as claimed in claim 3, wherein the method for obtaining the tested bill region detection model (7) with deviation meeting the preset threshold range comprises:
if the deviation is larger than the preset threshold range, the total number of bill shooting pictures (1) is increased, the initial bill region detection model (10) is then trained and tested again, and the tested bill region detection model (7) is obtained again, until the deviation is smaller than the preset threshold.
9. A bill region recognition device in a dense scene is characterized by comprising:
the acquisition module is used for acquiring a plurality of bill shot pictures;
the marking module is used for acquiring all marking areas on the bill shot picture;
the classification module is used for dividing the labeled bill shooting pictures by number into sample set one and sample set two;
the training module is used for obtaining a trained bill region detection model from sample set one;
the test module is used for testing the trained bill region detection model with sample set two and obtaining a tested bill region detection model;
and the working module is used for obtaining all the bill area information on the plurality of bill shooting pictures according to the tested bill area detection model, wherein the bill area information comprises the center coordinates, the height and the width of the bill area frame.
CN201910973913.4A 2019-10-14 2019-10-14 Bill region identification method and device in dense scene Pending CN110765910A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910973913.4A CN110765910A (en) 2019-10-14 2019-10-14 Bill region identification method and device in dense scene

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910973913.4A CN110765910A (en) 2019-10-14 2019-10-14 Bill region identification method and device in dense scene

Publications (1)

Publication Number Publication Date
CN110765910A true CN110765910A (en) 2020-02-07

Family

ID=69330961

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910973913.4A Pending CN110765910A (en) 2019-10-14 2019-10-14 Bill region identification method and device in dense scene

Country Status (1)

Country Link
CN (1) CN110765910A (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103208004A (en) * 2013-03-15 2013-07-17 北京英迈杰科技有限公司 Automatic recognition and extraction method and device for bill information area
CN108446621A (en) * 2018-03-14 2018-08-24 平安科技(深圳)有限公司 Bank slip recognition method, server and computer readable storage medium
CN109117814A (en) * 2018-08-27 2019-01-01 北京京东金融科技控股有限公司 Image processing method, device, electronic equipment and medium

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
ERAN GOLDMAN et al.: "Precise Detection in Densely Packed Scenes", arXiv:1904.00853v3 [cs.CV] *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111695558A (en) * 2020-04-28 2020-09-22 深圳市跨越新科技有限公司 Logistics waybill picture rectification method and system based on YoloV3 model
CN111695558B (en) * 2020-04-28 2023-08-04 深圳市跨越新科技有限公司 Logistics shipping list picture correction method and system based on YoloV3 model

Similar Documents

Publication Publication Date Title
CN109657665B (en) Invoice batch automatic identification system based on deep learning
CN104112128B (en) Digital image processing system and method applied to bill image character recognition
CN105046200B (en) Electronic paper marking method based on straight line detection
CN106033535B (en) Electronic paper marking method
CN109117885B (en) Stamp identification method based on deep learning
CN111695486A (en) High-precision direction signboard target extraction method based on point cloud
CN110689013A (en) Automatic marking method and system based on feature recognition
CN110163211B (en) Image recognition method, device and storage medium
CN105184225B (en) A kind of multinational banknote image recognition methods and device
CN114155527A (en) Scene text recognition method and device
CN112819748B (en) Training method and device for strip steel surface defect recognition model
CN110929746A (en) Electronic file title positioning, extracting and classifying method based on deep neural network
CN113688821B (en) OCR text recognition method based on deep learning
CN111553422A (en) Automatic identification and recovery method and system for surgical instruments
CN113159014A (en) Objective question reading method, device, equipment and storage medium based on handwritten question numbers
CN106778717A (en) A kind of test and appraisal table recognition methods based on image recognition and k nearest neighbor
CN112364883A (en) American license plate recognition method based on single-stage target detection and deptext recognition network
CN116092231A (en) Ticket identification method, ticket identification device, terminal equipment and storage medium
CN110728269A (en) High-speed rail contact net support pole number plate identification method
CN115239737A (en) Corrugated paper defect detection method based on image processing
CN110765910A (en) Bill region identification method and device in dense scene
CN113159029A (en) Method and system for accurately capturing local information in picture
CN112418262A (en) Vehicle re-identification method, client and system
CN110766001B (en) Bank card number positioning and end-to-end identification method based on CNN and RNN
CN115880683A (en) Urban waterlogging ponding intelligent water level detection method based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination