CN110210478A - Commodity outer-packaging character recognition method - Google Patents
Commodity outer-packaging character recognition method — Download PDF / Info
- Publication number: CN110210478A
- Application number: CN201910482146.7A
- Authority
- CN
- China
- Prior art keywords
- text
- character recognition
- outer packing
- recognition method
- administrative division
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06F18/214 — Pattern recognition; analysing; design or setup of recognition systems or techniques; generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/253 — Pattern recognition; analysing; fusion techniques of extracted features
- G06V20/63 — Scenes; type of objects; text, e.g. of license plates, overlay texts or captions; scene text, e.g. street names
- G06V30/40 — Character recognition; document-oriented image-based pattern recognition
Abstract
This application relates to a commodity outer-packaging character recognition method. The method comprises: recognizing an image to be detected with a trained text detection neural network to obtain a text prediction region map; and recognizing the processed text prediction region map with a trained text recognition neural network to obtain a text recognition result. The method can recognize the text on commodities and thereby improve recognition accuracy when identifying commodities. The application further relates to a device for recognizing text.
Description
Technical field
This application relates to the field of image recognition technology, and in particular to a commodity outer-packaging character recognition method.
Background art
At present, retail convenience stores and chain supermarkets are widely equipped with self-service checkout machines. A self-service checkout machine requires no cashier when the consumer settles the bill, which greatly reduces a supermarket's labour cost and is efficient and convenient. The key to self-service checkout is the machine's fast and accurate identification of the commodity, from which the commodity's category and price are obtained. Existing commodity identification methods fall broadly into two kinds: the first is based on RFID electronic tags; in the second, the user holds the barcode on the commodity under a scanner and identification is achieved by machine barcode scanning.
In the course of realizing the embodiments of the present disclosure, at least the following problem was found in the related art: recognition accuracy when identifying commodities is not high.
Summary of the invention
In order to give a basic understanding of some aspects of the disclosed embodiments, a brief summary is presented below. The summary is not an extensive overview, and is not intended to identify key or critical components or to delimit the protection scope of these embodiments, but serves as a preamble to the detailed description that follows.
The embodiments of the present disclosure provide a commodity outer-packaging character recognition method, to solve the technical problem that recognition accuracy when identifying commodities is not high.
A commodity outer-packaging character recognition method, characterized by comprising:
recognizing an image to be detected with a trained text detection neural network to obtain a text prediction region map;
recognizing the processed text prediction region map with a trained text recognition neural network to obtain a text recognition result.
Preferably, recognizing the image to be detected with the trained text detection neural network to obtain the text prediction region map comprises:
obtaining a fused feature map of the image to be detected;
obtaining a text-class score feature map and a text-position feature map of the fused feature map;
obtaining the text prediction region map from the text-class score feature map and the text-position feature map.
Preferably, obtaining the fused feature map of the image to be detected comprises:
obtaining high-level features and low-level features of the image to be detected through a preset convolutional network;
fusing the high-level features and the low-level features to obtain the fused feature map.
Preferably, the preset convolutional network is a ResNet-50 network.
Preferably, recognizing the processed text prediction region map with the trained text recognition neural network comprises:
extracting a feature sequence from the processed text prediction region map;
making a prediction for each frame of the feature sequence;
converting the per-frame predictions into a label sequence.
Preferably, the training set for training the text detection neural network contains position annotations and content annotations.
Preferably, the position annotation is made with the coordinates of any one of the four vertices of the text region as the starting point.
Preferably, the processed text prediction region map is obtained by rotating the detected text prediction region map.
Preferably, during training of the text detection neural network, the loss function comprises a classification loss and a regression loss.
The method for recognizing text provided by the embodiments of the present disclosure can achieve the following technical effect:
the text information on a commodity's outer packaging can provide a highly valuable clue for commodity identification; by accurately recognizing the text on the outer packaging, recognition accuracy when identifying the commodity can be improved.
The general description above and the detailed description below are merely exemplary and explanatory, and do not limit the application.
Brief description of the drawings
One or more embodiments are illustrated by the corresponding drawings. These exemplary illustrations and drawings do not constitute a limitation of the embodiments; elements with the same reference numerals in the drawings are shown as similar elements, and the drawings are not drawn to limit scale, wherein:
Fig. 1 is a flow diagram of a commodity outer-packaging character recognition method provided by an embodiment of the present disclosure;
Fig. 2 is a flow diagram of a commodity outer-packaging character recognition method provided by an embodiment of the present disclosure;
Fig. 3 is a flow diagram of a commodity outer-packaging character recognition method provided by an embodiment of the present disclosure;
Fig. 4 is a flow diagram of a commodity outer-packaging character recognition method provided by an embodiment of the present disclosure;
Fig. 5 is a block diagram of a device for recognizing text provided by an embodiment of the present disclosure;
Fig. 6 is a block diagram of an electronic apparatus for recognizing text provided by an embodiment of the present disclosure.
Specific embodiment
In order to understand the characteristics and technical content of the embodiments of the present disclosure more fully, the realization of the embodiments of the present disclosure is described in detail below with reference to the accompanying drawings, which are attached for reference only and are not intended to limit the embodiments of the present disclosure.
In the following technical description, for ease of explanation, numerous details are provided to give a full understanding of the disclosed embodiments; however, one or more embodiments may still be practised without these details. In other cases, well-known structures and devices may be shown in simplified form in order to simplify the drawings.
The embodiments of the present disclosure provide a commodity outer-packaging character recognition method.
As shown in Fig. 1, in some embodiments, the method for recognizing text comprises:
S101: recognizing an image to be detected with a trained text detection neural network to obtain a text prediction region map.
Optionally, the text detection neural network is any one of EAST (An Efficient and Accurate Scene Text Detector), TextBoxes (A Fast Text Detector with a Single Deep Neural Network), RRPN (Arbitrary-Oriented Scene Text Detection via Rotation Proposals), SegLink (Detecting Oriented Text in Natural Images by Linking Segments), and FTSN (Fused Text Segmentation Networks for Multi-oriented Scene Text Detection).
Optionally, the training set for training the text detection neural network contains position annotations and content annotations.
Optionally, the position annotation is made with the coordinates of any one of the four vertices of the text region as the starting point. Optionally, the position annotations follow a set order. Optionally, the set order is clockwise or counterclockwise.
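The annotation convention just described (four vertices, a chosen starting point, a fixed winding order) can be sketched as a small normalization helper. This is one illustrative reading, not the patent's specified procedure; the function name and the choice of starting vertex (the one closest to the image origin) are assumptions:

```python
import math

def normalize_quad(pts):
    """Order the four corners of a text region clockwise (in image
    coordinates, y pointing down), starting from the vertex closest to
    the image origin. Hypothetical helper: the patent allows any vertex
    as the starting point and either winding order."""
    cx = sum(x for x, _ in pts) / 4.0
    cy = sum(y for _, y in pts) / 4.0
    # Ascending atan2 in y-down image coordinates walks the corners clockwise.
    ordered = sorted(pts, key=lambda p: math.atan2(p[1] - cy, p[0] - cx))
    start = min(range(4), key=lambda i: ordered[i][0] ** 2 + ordered[i][1] ** 2)
    return ordered[start:] + ordered[:start]
```

Normalizing every ground-truth quadrilateral to one convention before training makes the regression targets consistent across annotators.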
Optionally, during training of the text detection neural network, the loss function comprises a classification loss and a regression loss.
When training the text detection neural network, the text images need to be divided into a training set and a test set. Optionally, the ratio of training set to test set is 9:1.
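The 9:1 split mentioned above can be sketched as follows; the shuffle, the seed, and the function name are illustrative assumptions, since the patent only fixes the ratio:

```python
import random

def split_dataset(samples, train_ratio=0.9, seed=0):
    """Shuffle annotated text images and split them into a training set
    and a test set (9:1 by default, as in the embodiment)."""
    rng = random.Random(seed)
    shuffled = list(samples)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]
```

A fixed seed keeps the split reproducible across training runs.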
Optionally, the image to be detected is one or more images obtained by shooting the text region from multiple angles.
S102: recognizing the processed text prediction region map with a trained text recognition neural network to obtain a text recognition result.
Optionally, the text recognition neural network is either CRNN (An End-to-End Trainable Neural Network for Image-based Sequence Recognition and Its Application to Scene Text Recognition) or RARE (Robust text recognizer with Automatic REctification, a robust text recognizer with an automatic rectification module).
Optionally, the processed text prediction region map is obtained by rotating the detected text prediction region map, so that the processed map meets the data format required by the text recognition network.
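The "rotate the detected region" step can be illustrated by first estimating the region's skew from its baseline edge; the quad layout (first two vertices lying along the baseline) and the function name are assumptions, since the patent does not specify the exact transform:

```python
import math

def skew_angle_degrees(quad):
    """Angle (degrees) by which a quadrilateral text box is rotated from
    horizontal, taking the edge from the first to the second vertex as the
    text baseline. Rotating the crop by the negative of this angle makes
    the text line horizontal, matching the input format a recognition
    network such as CRNN expects."""
    (x0, y0), (x1, y1) = quad[0], quad[1]
    return math.degrees(math.atan2(y1 - y0, x1 - x0))
```

In practice the angle would feed an affine warp (e.g. OpenCV's rotation matrix) applied to the cropped region before recognition.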
The text information on a commodity's outer packaging can provide a highly valuable clue for commodity identification; by accurately recognizing the text on the outer packaging, recognition accuracy when identifying the commodity can be improved.
The method uses deep-learning-based text detection and text recognition. Compared with conventional text recognition methods, it can recognize text regions in pictures quickly, improves recognition accuracy, and has strong real-time performance.
In addition, the method has a wide range of applications: for example, it can be used to accurately read the text on a commodity's outer packaging and thereby judge the commodity category; likewise, it can be used to recognize the key text information on documents such as bills and identity cards.
As shown in Fig. 2, in some embodiments, recognizing the image to be detected with the trained text detection neural network in S101 to obtain the text prediction region map comprises:
S201: obtaining a fused feature map of the image to be detected;
S202: obtaining a text-class score feature map and a text-position feature map of the fused feature map;
S203: obtaining the text prediction region map from the text-class score feature map and the text-position feature map.
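A minimal reading of step S203 — combining the text-class score map with the text-position map — is to keep only the candidate positions whose text score clears a threshold. The threshold value and the names below are illustrative, not taken from the patent:

```python
def select_text_boxes(scores, boxes, threshold=0.8):
    """Pair each candidate box from the position feature map with its
    score from the text-class score map and keep the confident ones.
    Real detectors usually follow this with non-maximum suppression."""
    return [box for score, box in zip(scores, boxes) if score >= threshold]
```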
As shown in Fig. 3, in some embodiments, obtaining the fused feature map of the image to be detected in S201 comprises:
S301: obtaining high-level features and low-level features of the image to be detected through a preset convolutional network.
Optionally, the preset convolutional network is a ResNet-50 (Residual Neural Network) network.
ResNet-50 generally comprises five parts. The first part consists of a convolutional layer with 7x7 kernels, followed by a pooling layer with a 3x3 kernel and a stride of 2; each of the remaining four parts consists of a varying number of convolutional layers with 3x3 kernels and a pooling layer. ResNet-50 has strong feature representation capability and is often used as a backbone network in various computer vision tasks.
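In the standard ResNet-50 layout (which the description above paraphrases), the stride-2 stem convolution and stride-2 pooling followed by four stages yield feature maps at strides 4, 8, 16 and 32; the coarser maps serve as "high-level" features and the finer ones as "low-level" features. A small sketch of the resulting spatial sizes, under that standard-layout assumption:

```python
def resnet50_feature_sizes(height, width):
    """Spatial size of the feature map after each of the four residual
    stages of a standard ResNet-50, i.e. at strides 4, 8, 16 and 32.
    (The stem's 7x7 stride-2 convolution plus the stride-2 max-pool
    account for the initial factor of 4.)"""
    return [(height // s, width // s) for s in (4, 8, 16, 32)]
```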
S302: fusing the high-level features and the low-level features to obtain the fused feature map.
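One common way to realize S302 is to upsample the coarse high-level map to the low-level map's resolution and merge element-wise. The 2x nearest-neighbour upsampling and additive merge below are one illustrative choice among several (channel concatenation is equally common), not the patent's specified operator:

```python
def upsample2x(feature_map):
    """Nearest-neighbour 2x upsampling of a 2D feature map (list of rows)."""
    out = []
    for row in feature_map:
        wide = [v for v in row for _ in (0, 1)]  # duplicate each column
        out.append(wide)
        out.append(list(wide))                   # duplicate each row
    return out

def fuse(high, low):
    """Fuse a high-level map at half resolution with a low-level map by
    upsampling and element-wise addition."""
    up = upsample2x(high)
    return [[h + l for h, l in zip(hr, lr)] for hr, lr in zip(up, low)]
```

Additive fusion keeps the channel count unchanged, which is why feature-pyramid designs often prefer it over concatenation.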
As shown in Fig. 4, in some embodiments, recognizing the processed text prediction region map with the trained text recognition neural network in S102 comprises:
S401: extracting a feature sequence from the processed text prediction region map;
S402: making a prediction for each frame of the feature sequence;
S403: converting the per-frame predictions into a label sequence.
Taking a CRNN neural network as the text recognition neural network as an example: the network architecture of CRNN consists of three parts, from bottom to top, convolutional layers, recurrent layers, and a transcription layer. At the bottom of CRNN, the convolutional layers automatically extract a feature sequence from each input image. On top of the convolutional network, a recurrent network is built to make a prediction for each frame of the feature sequence output by the convolutional layers. The transcription layer at the top of CRNN converts the per-frame predictions of the recurrent layers into a label sequence. Although CRNN is composed of different types of network architecture, such as a CNN and an RNN, it can be trained jointly with one loss function.
The embodiments of the present disclosure provide a device for commodity identification.
As shown in Fig. 5, in some embodiments, the device for commodity identification comprises:
a text detection module 51, configured to recognize an image to be detected with a trained text detection neural network to obtain a text prediction region map;
a text recognition module 52, configured to recognize the processed text prediction region map with a trained text recognition neural network to obtain a text recognition result.
In some embodiments, the text detection module comprises:
a first feature obtaining unit, configured to obtain a fused feature map of the image to be detected;
a second feature obtaining unit, configured to obtain a text-class score feature map and a text-position feature map of the fused feature map;
a prediction output unit, configured to obtain the text prediction region map from the text-class score feature map and the text-position feature map.
In some embodiments, the first feature obtaining unit is configured to:
obtain high-level features and low-level features of the image to be detected through a preset convolutional network;
fuse the high-level features and the low-level features to obtain the fused feature map.
In some embodiments, the preset convolutional network is a ResNet-50 network.
In some embodiments, the text recognition module comprises:
a feature sequence extraction unit, configured to extract a feature sequence from the processed text prediction region map;
a prediction unit, configured to make a prediction for each frame of the feature sequence;
a label output unit, configured to convert the per-frame predictions into a label sequence.
In some embodiments, the training set for training the text detection neural network contains position annotations and content annotations.
In some embodiments, the position annotation is made with the coordinates of any one of the four vertices of the text region as the starting point.
In some embodiments, the processed text prediction region map is obtained by rotating the detected text prediction region map.
In some embodiments, during training of the text detection neural network, the loss function comprises a classification loss and a regression loss.
The embodiments of the present disclosure provide a computer-readable storage medium storing computer-executable instructions, the computer-executable instructions being arranged to execute the above method for recognizing text.
The embodiments of the present disclosure provide a computer program product comprising a computer program stored on a computer-readable storage medium, the computer program comprising program instructions which, when executed by a computer, cause the computer to execute the above method for recognizing text.
The above computer-readable storage medium may be a transitory computer-readable storage medium or a non-transitory computer-readable storage medium.
The embodiments of the present disclosure provide an electronic apparatus whose structure is shown in Fig. 6. The electronic apparatus comprises:
at least one processor 60 (one processor 60 is taken as an example in Fig. 6) and a memory 61, and may further comprise a communication interface 62 and a bus 63, wherein the processor 60, the communication interface 62 and the memory 61 can communicate with one another through the bus 63. The communication interface 62 can be used for information transmission. The processor 60 can call logical instructions in the memory 61 to execute the method for recognizing text of the above embodiments.
In addition, the logical instructions in the above memory 61 can be realized in the form of software functional units and, when sold or used as an independent product, can be stored in a computer-readable storage medium.
As a computer-readable storage medium, the memory 61 can be used to store software programs and computer-executable programs, such as the program instructions/modules corresponding to the method in the embodiments of the present disclosure. By running the software programs, instructions and modules stored in the memory 61, the processor 60 executes functional applications and data processing, that is, realizes the method in the above method embodiments.
The memory 61 may include a program storage area and a data storage area, wherein the program storage area can store an operating system and an application required by at least one function, and the data storage area can store data created according to the use of the terminal device, etc. In addition, the memory 61 may include a high-speed random access memory and may also include a non-volatile memory.
The technical solution of the embodiments of the present disclosure can be embodied in the form of a software product stored in a storage medium, including one or more instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the method of the embodiments of the present disclosure. The above storage medium may be a non-transitory storage medium, including various media that can store program code, such as a USB flash disk, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disk, and may also be a transitory storage medium.
The above description and drawings sufficiently illustrate the embodiments of the disclosure to enable those skilled in the art to practise them. Other embodiments may include structural, logical, electrical, process and other changes. The embodiments represent only possible variations. Unless explicitly required, individual components and functions are optional, and the order of operations may change. Parts and features of some embodiments may be included in or substituted for parts and features of other embodiments. The scope of the embodiments of the present disclosure includes the entire scope of the claims and all obtainable equivalents of the claims.
Although the terms "first", "second", etc. may be used in this application to describe elements, these elements should not be limited by those terms; the terms are used only to distinguish one element from another. For example, without changing the meaning of the description, a first element could be called a second element and, similarly, a second element could be called a first element, as long as every occurrence of "first element" is renamed consistently and every occurrence of "second element" is renamed consistently. The first element and the second element are both elements but may not be the same element.
Moreover, the words used herein are used only to describe the embodiments and are not used to limit the claims. As used in the description of the embodiments and in the claims, unless the context clearly indicates otherwise, the singular forms "a", "an" and "the" are intended to include the plural forms as well. Similarly, the term "and/or" as used in this specification refers to and encompasses any and all possible combinations of one or more of the associated listed items. In addition, the term "comprise" and its variants "comprises" and/or "comprising", when used in this application, indicate the presence of the stated features, integers, steps, operations, elements and/or components, but do not exclude the presence or addition of one or more other features, integers, steps, operations, elements, components and/or groups thereof. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of other identical elements in the process, method or device that includes the element.
Herein, each embodiment may focus on its differences from other embodiments, and the same or similar parts of the embodiments may refer to one another. For the methods, products, etc. disclosed in the embodiments, where they correspond to the method parts disclosed in the embodiments, reference may be made to the description of the method parts.
Those skilled in the art will appreciate that the units and algorithm steps described in connection with the examples disclosed in the embodiments of the present disclosure can be realized with electronic hardware or with a combination of computer software and electronic hardware. Whether these functions are executed in hardware or in software depends on the specific application and the design constraints of the technical solution. A skilled person may use different methods to realize the described functions for each specific application, but such realization should not be considered beyond the scope of the embodiments of the present disclosure. The skilled person can clearly understand that, for convenience and brevity of description, reference may be made, for the specific working process of the systems, devices and units described above, to the corresponding process in the foregoing method embodiments, which is not repeated here.
In the embodiments disclosed herein, the disclosed methods and products (including but not limited to devices and apparatuses) can be realized in other ways. For example, the device embodiments described above are merely exemplary: the division of units may be only a division of logical functions, and there may be other division manners in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual coupling, direct coupling or communication connection shown or discussed may be an indirect coupling or communication connection through interfaces, devices or units, and may be electrical, mechanical or in other forms. The units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; they may be located in one place or distributed over multiple network units. Part or all of the units may be selected according to actual needs to realize the present embodiment. In addition, the functional units in the embodiments of the present disclosure may be integrated in one processing unit, each unit may exist physically alone, or two or more units may be integrated in one unit.
The flowcharts and block diagrams in the drawings show the possible architecture, functions and operations of the systems, methods and computer program products according to the embodiments of the present disclosure. In this regard, each box in a flowchart or block diagram may represent a module, a program segment or a part of code, and the module, segment or part of code contains one or more executable instructions for realizing the specified logical function. In some alternative implementations, the functions marked in the boxes may occur in an order different from that marked in the drawings; for example, two consecutive boxes may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending on the functions involved. Each box in the block diagrams and/or flowcharts, and combinations of boxes in the block diagrams and/or flowcharts, can be realized with a dedicated hardware-based system that executes the specified functions or actions, or with a combination of dedicated hardware and computer instructions.
Claims (9)
1. a kind of commodity outer packing character recognition method characterized by comprising
Using trained text detection neural network recognization image to be detected, text prediction administrative division map is obtained;
Using trained text identification neural network recognization treated text prediction administrative division map, text identification result is obtained.
2. a kind of commodity outer packing character recognition method according to claim 1, which is characterized in that the utilization trains
Text detection neural network recognization image to be detected, obtain text prediction administrative division map, comprising:
Obtain the fusion feature figure of described image to be detected;
Obtain the text categories score characteristic pattern and text position characteristic pattern of the fusion feature figure;
Text prediction administrative division map is obtained according to the text categories score characteristic pattern and the text position characteristic pattern.
3. a kind of commodity outer packing character recognition method according to claim 2, which is characterized in that described in the acquisition to
The fusion feature figure of detection image, comprising:
The high-level characteristic and low-level feature of described image to be detected are obtained by presetting convolutional network;
The high-level characteristic and the low-level feature are merged, the fusion feature figure is obtained.
4. The commodity outer packing character recognition method according to claim 3, wherein the preset convolutional network is a ResNet-50 network.
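The feature fusion of claims 3 and 4, merging high-level (deep, coarse) features with low-level (shallow, fine) features of a backbone such as ResNet-50, can be sketched as follows. The concrete scheme here (nearest-neighbour upsampling followed by channel concatenation) is an assumption, since the claims only state that the two are merged:

```python
import numpy as np

def fuse_features(low, high):
    """Fuse a low-level feature map (fine resolution, shallow layer) with
    a high-level feature map (coarse resolution, deep layer).

    low:  (C1, H, W); high: (C2, H//s, W//s) with an integer stride s.
    Returns a (C1 + C2, H, W) fusion feature map.
    """
    s = low.shape[1] // high.shape[1]              # spatial stride between layers
    up = high.repeat(s, axis=1).repeat(s, axis=2)  # nearest-neighbour upsample
    return np.concatenate([low, up], axis=0)       # merge along the channel axis

low = np.random.rand(64, 32, 32)   # e.g. an early backbone stage
high = np.random.rand(256, 8, 8)   # e.g. a late backbone stage
fused = fuse_features(low, high)
print(fused.shape)  # (320, 32, 32)
```

A real detector would typically interleave this with 1x1 convolutions to control channel counts, but the shape arithmetic is the same.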
5. The commodity outer packing character recognition method according to claim 1, wherein recognizing the processed text prediction region map using the trained text recognition neural network comprises:
extracting a feature sequence from the processed text prediction region map;
making a prediction for each frame of the feature sequence;
converting the per-frame predictions into a label sequence.
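The three steps of claim 5 (feature sequence, per-frame prediction, label sequence) match a CRNN-style recognizer, in which the final conversion is commonly CTC greedy decoding: take each frame's argmax, collapse consecutive repeats, and drop blanks. A sketch with a hypothetical character set:

```python
import numpy as np

BLANK = 0  # CTC blank index (assumed convention)
CHARSET = {1: "A", 2: "B", 3: "C"}  # hypothetical label-to-character map

def greedy_ctc_decode(frame_probs):
    """Convert per-frame predictions into a label sequence.

    frame_probs: (T, num_classes) per-frame class probabilities from the
    recognition network. Greedy CTC decoding: argmax each frame, collapse
    consecutive repeats, then remove blanks.
    """
    best = frame_probs.argmax(axis=1)
    labels = []
    prev = None
    for idx in best:
        if idx != prev and idx != BLANK:
            labels.append(int(idx))
        prev = idx
    return "".join(CHARSET[i] for i in labels)

# Frames whose argmaxes are A, A, blank, B, B, C decode to "ABC".
probs = np.eye(4)[[1, 1, 0, 2, 2, 3]]
print(greedy_ctc_decode(probs))  # ABC
```

The blank symbol is what lets the decoder distinguish a repeated character ("A blank A" gives "AA") from one character held across frames.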
6. The commodity outer packing character recognition method according to claim 1, wherein the training set used to train the text detection neural network includes position annotations and content annotations.
7. The commodity outer packing character recognition method according to claim 6, wherein each position annotation is made using the coordinates of any one of the four vertices of a text region as the starting point.
8. The commodity outer packing character recognition method according to claim 1, wherein the processed text prediction region map is obtained by rotating the detected text prediction region map.
9. The commodity outer packing character recognition method according to any one of claims 1 to 8, wherein during training of the text detection neural network, the loss function includes a classification loss and a regression loss.
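Claim 9 only names the two loss components. A sketch of one plausible combination (dice loss for the text/non-text classification and smooth-L1 for the geometry regression, both of which are assumptions, not terms fixed by the patent):

```python
import numpy as np

def detection_loss(pred_score, gt_score, pred_geo, gt_geo, lam=1.0):
    """Total detection loss = classification loss + lam * regression loss.

    pred_score, gt_score: (H, W) predicted / ground-truth text score maps.
    pred_geo, gt_geo:     (4, H, W) predicted / ground-truth geometry maps.
    """
    # Classification term: dice loss on the score map.
    inter = (pred_score * gt_score).sum()
    dice = 1.0 - 2.0 * inter / (pred_score.sum() + gt_score.sum() + 1e-6)

    # Regression term: smooth L1 on geometry, restricted to text pixels.
    mask = gt_score > 0.5
    if mask.any():
        diff = np.abs(pred_geo[:, mask] - gt_geo[:, mask])
        smooth_l1 = np.where(diff < 1.0, 0.5 * diff ** 2, diff - 0.5).mean()
    else:
        smooth_l1 = 0.0

    return dice + lam * smooth_l1
```

An actual implementation would compute this in an autodiff framework so the scalar can be backpropagated through the detection network; the sketch only illustrates the two-term structure the claim names.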
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910482146.7A CN110210478A (en) | 2019-06-04 | 2019-06-04 | A kind of commodity outer packing character recognition method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110210478A true CN110210478A (en) | 2019-09-06 |
Family
ID=67790700
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910482146.7A Pending CN110210478A (en) | 2019-06-04 | 2019-06-04 | A kind of commodity outer packing character recognition method |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110210478A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101004793A (en) * | 2007-01-08 | 2007-07-25 | 中国民航大学 | Method for recognizing characters in handwritten form based on convex cone structure in high dimensional space |
CN105260734A (en) * | 2015-10-10 | 2016-01-20 | 燕山大学 | Commercial oil surface laser code recognition method with self modeling function |
US20170004374A1 (en) * | 2015-06-30 | 2017-01-05 | Yahoo! Inc. | Methods and systems for detecting and recognizing text from images |
WO2018054326A1 (en) * | 2016-09-22 | 2018-03-29 | 北京市商汤科技开发有限公司 | Character detection method and device, and character detection training method and device |
CN109117848A (en) * | 2018-09-07 | 2019-01-01 | 泰康保险集团股份有限公司 | A kind of line of text character identifying method, device, medium and electronic equipment |
CN109299274A (en) * | 2018-11-07 | 2019-02-01 | 南京大学 | A kind of natural scene Method for text detection based on full convolutional neural networks |
CN109376731A (en) * | 2018-08-24 | 2019-02-22 | 北京三快在线科技有限公司 | A kind of character recognition method and device |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110689658A (en) * | 2019-10-08 | 2020-01-14 | 北京邮电大学 | Taxi bill identification method and system based on deep learning |
US11893767B2 (en) | 2019-12-13 | 2024-02-06 | Huawei Technologies Co., Ltd. | Text recognition method and apparatus |
CN111444908A (en) * | 2020-03-25 | 2020-07-24 | 腾讯科技(深圳)有限公司 | Image recognition method, device, terminal and storage medium |
CN111444908B (en) * | 2020-03-25 | 2024-02-02 | 腾讯科技(深圳)有限公司 | Image recognition method, device, terminal and storage medium |
CN111832550A (en) * | 2020-07-13 | 2020-10-27 | 北京易真学思教育科技有限公司 | Data set manufacturing method and device, electronic equipment and storage medium |
CN112329774A (en) * | 2020-11-10 | 2021-02-05 | 杭州微洱网络科技有限公司 | Commodity size table automatic generation method based on image |
CN112329774B (en) * | 2020-11-10 | 2023-07-28 | 广州探域科技有限公司 | Commodity ruler code table automatic generation method based on image |
CN113657213A (en) * | 2021-07-30 | 2021-11-16 | 五邑大学 | Text recognition method, text recognition device and computer-readable storage medium |
CN114301180A (en) * | 2021-12-31 | 2022-04-08 | 南方电网大数据服务有限公司 | Power distribution room equipment switch component state monitoring method and device based on deep learning |
CN114301180B (en) * | 2021-12-31 | 2024-08-06 | 南方电网大数据服务有限公司 | Power distribution room equipment switch part state monitoring method and device based on deep learning |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110210478A (en) | A kind of commodity outer packing character recognition method | |
CN107690657B (en) | Finding merchants according to images
Qiao et al. | Lgpma: Complicated table structure recognition with local and global pyramid mask alignment | |
US9965719B2 (en) | Subcategory-aware convolutional neural networks for object detection | |
US8744196B2 (en) | Automatic recognition of images | |
US9141886B2 (en) | Method for the automated extraction of a planogram from images of shelving | |
CN111931664A (en) | Mixed note image processing method and device, computer equipment and storage medium | |
JP6693059B2 (en) | Product shelf recognition device, product shelf recognition method, program, and image processing device | |
EP3745368A1 (en) | Self-checkout device to which hybrid product recognition technology is applied | |
US11354549B2 (en) | Method and system for region proposal based object recognition for estimating planogram compliance | |
Su et al. | An effective staff detection and removal technique for musical documents | |
CN111061890A (en) | Method for verifying labeling information, method and device for determining category | |
US10225521B2 (en) | System and method for receipt acquisition | |
Rosado et al. | Supervised learning for Out-of-Stock detection in panoramas of retail shelves | |
CN113158895B (en) | Bill identification method and device, electronic equipment and storage medium | |
CN108229418A (en) | Human body critical point detection method and apparatus, electronic equipment, storage medium and program | |
CN110399882A (en) | A kind of character detecting method based on deformable convolutional neural networks | |
Verma et al. | Automatic container code recognition via spatial transformer networks and connected component region proposals | |
Zhang et al. | Fine detection and classification of multi-class barcode in complex environments | |
Yu et al. | SignHRNet: Street-level traffic signs recognition with an attentive semi-anchoring guided high-resolution network | |
Zhou et al. | Library on-shelf book segmentation and recognition based on deep visual features | |
Guimarães et al. | A review of recent advances and challenges in grocery label detection and recognition | |
CN110245594A (en) | A kind of commodity recognition method for cash register system | |
Vidhyalakshmi et al. | Text detection in natural images with hybrid stroke feature transform and high performance deep Convnet computing | |
Manlises et al. | Expiry Date Character Recognition on Canned Goods Using Convolutional Neural Network VGG16 Architecture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||