CN112364861A - Method and device for automatically identifying box label number of consignment and storage medium - Google Patents

Method and device for automatically identifying box label number of consignment and storage medium

Info

Publication number
CN112364861A
Authority
CN
China
Prior art keywords
box
sticker
picture
consignment
model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011291580.6A
Other languages
Chinese (zh)
Inventor
戴福双
王铎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
BMW Brilliance Automotive Ltd
Original Assignee
BMW Brilliance Automotive Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by BMW Brilliance Automotive Ltd filed Critical BMW Brilliance Automotive Ltd
Priority to CN202011291580.6A
Publication of CN112364861A
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 - Arrangements for image or video recognition or understanding
    • G06V 10/20 - Image preprocessing
    • G06V 10/22 - Image preprocessing by selection of a specific region containing or referencing a pattern; Locating or processing of specific regions to guide the detection or recognition
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06K - GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00 - Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/14 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K 7/1404 - Methods for optical code recognition
    • G06K 7/1408 - Methods for optical code recognition the method being specifically adapted for the type of code
    • G06K 7/1413 - 1D bar codes
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06K - GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K 7/00 - Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K 7/10 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K 7/14 - Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K 7/1404 - Methods for optical code recognition
    • G06K 7/1439 - Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K 7/1443 - Methods for optical code recognition including a method step for retrieval of the optical code locating of the code in an image
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 30/00 - Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V 30/10 - Character recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • General Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Character Input (AREA)

Abstract

A method and apparatus for automatically identifying the box sticker number of a consignment, and a storage medium, are disclosed. A method for automatically identifying a box sticker number of a consignment, the box sticker number uniquely identifying the consignment, comprises: receiving a picture of the box sticker of the consignment; detecting text blocks in the picture of the box sticker by using a first neural network model; recognizing the text in each text block by using a second model; searching the recognized text, using a sliding window of a predetermined length, for at least one character string of that length containing only digits and spaces; locating at least one bar code in the picture of the box sticker; and outputting one of the found character strings as the box sticker number based on the relative positional relationship between the found character strings and the bar code.

Description

Method and device for automatically identifying box label number of consignment and storage medium
Technical Field
The present disclosure relates to a method and apparatus for automatically identifying a box label number of a consignment, and a storage medium.
Background
A consignment is an item to be shipped, such as a paper packaging box or a wooden packaging box. Typically, a box sticker is affixed to each consignment; the box sticker carries the consignment's outgoing address, destination address, outgoing time, consignment weight, and the like. In addition, the box sticker typically carries a box sticker number that uniquely identifies the consignment.
When managing consignments, it is common to measure the length, width, and height of a consignment and store the measurements in association with its box sticker number. Today the manager must enter the box sticker number manually, which is burdensome.
Disclosure of Invention
An object of the present disclosure is to provide a new method and apparatus for automatically identifying box sticker numbers.
The present disclosure presents a method for automatically identifying a box sticker number of a consignment, the box sticker number uniquely identifying the consignment, the method comprising: receiving a picture of the box sticker of the consignment; detecting text blocks in the picture of the box sticker by using a first neural network model; recognizing the text in each text block by using a second model; searching the recognized text, using a sliding window of a predetermined length, for at least one character string of that length containing only digits and spaces; locating at least one bar code in the picture of the box sticker; and outputting one of the found character strings as the box sticker number based on the relative positional relationship between the found character strings and the bar code.
Other features and advantages of the present disclosure will become apparent from the following description with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of the specification, illustrate embodiments of the disclosure and together with the description, serve to explain, without limitation, the principles of the disclosure. In the drawings, like numbering is used to indicate like items.
Fig. 1 is a block diagram of an exemplary box sticker number automatic identification apparatus, according to some embodiments of the present disclosure.
Fig. 2 is a flow diagram illustrating an example box sticker number automatic identification method according to some embodiments of the present disclosure.
Fig. 3 illustrates a picture of an exemplary consignment box sticker.
FIG. 4 illustrates a general hardware environment in which the present disclosure may be applied, according to some embodiments of the present disclosure.
Detailed Description
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the described exemplary embodiments. It will be apparent, however, to one skilled in the art, that the described embodiments may be practiced without some or all of these specific details. In the described exemplary embodiments, well-known structures or processing steps have not been described in detail in order to avoid unnecessarily obscuring the concepts of the present disclosure.
The blocks within each block diagram shown below may be implemented by hardware, software, firmware, or any combination thereof to implement the principles of the present disclosure. It will be appreciated by those skilled in the art that the blocks described in each block diagram can be combined or divided into sub-blocks to implement the principles of the disclosure.
The steps of the methods presented in this disclosure are intended to be illustrative. In some embodiments, the method may be accomplished with one or more additional steps not described and/or without one or more of the steps discussed. Further, the order in which the steps of the method are illustrated and described is not intended to be limiting.
In the present disclosure, text may include words, numbers, letters, symbols, and the like. The characters may include numbers, letters, symbols, and the like.
Fig. 1 is a block diagram of an exemplary box sticker number automatic identification apparatus 100 according to some embodiments of the present disclosure.
As shown in fig. 1, the apparatus 100 may include: a receiving component 110 configured to receive a picture of the box sticker of a consignment; a text block detecting component 120 configured to detect text blocks in the picture of the box sticker by using a CTPN (Connectionist Text Proposal Network) model; a text recognition component 130 configured to recognize the text in each text block by using a Tesseract model; a sliding window finding component 140 configured to find, in the recognized text, at least one character string of a predetermined length containing only digits and spaces, by using a sliding window of that length; a barcode locating component 150 configured to locate at least one bar code in the picture of the box sticker; and an output component 160 configured to output one of the found character strings as the box sticker number based on the relative positional relationship between the found character strings and the bar code.
The operation of the various components shown in fig. 1 will be described in further detail below.
Fig. 2 is a flow diagram illustrating an example box sticker number automatic identification method 200 according to some embodiments of the present disclosure.
The method 200 begins at step S210, where the receiving component 110 receives a picture of a box sticker of a consignment at step S210.
The picture of the consignment's box sticker can be taken with a camera. For example, the box sticker may be photographed with a smart device having a camera (a smartphone, tablet, etc.) to obtain the picture. The receiving component 110 can then receive or retrieve the picture of the box sticker from the camera.
The picture of the box sticker is obtained by photographing the box sticker while it falls within the view frame of the camera, such that the image of the box sticker occupies more than half of the area of the picture. In some embodiments, the image of the box sticker may occupy more than 70%, 80%, or 90% of the picture area. In some embodiments, the photograph is taken with the box sticker in the middle of the view frame; in this way, the complete box sticker can be captured. In some embodiments, the photograph is taken with the camera facing the box sticker squarely. Alternatively, the photograph may be taken with the imaging plane of the camera at a small angle (e.g., less than 30°) to the plane of the box sticker; in this way, excessive distortion and deformation of the box sticker image is avoided. The view frame may have any suitable shape, such as a rectangle or a square. Alternatively, the view frame may have a shape similar to that of the real box sticker, for example an aspect ratio similar to that of the real box sticker, which helps the user frame the box sticker better. Box stickers may be of uniform size and shape. Here, a similar aspect ratio means that the difference between the aspect ratios falls within ±5% of the box sticker's aspect ratio.
Fig. 3 illustrates a picture of an exemplary consignment box sticker. A number of information items are printed on the box sticker, including: the box sticker number "1902061681124", the consignment weight "62.000" (in kg), the consignment's outgoing address "RDC BBA Chengdu, 800, Logistic Avenue, Xihanggang, 610207, Chengdu, People's Republic of China", the consignment's destination address "Final Destination: 66913, BMW Brilliance Automotive Ltd., RDC BBA Shanghai, CN", the generation date and time of the box sticker (i.e., the outgoing time) "19.06.2019 12:00", several bar codes, and so on. In the present disclosure, the box sticker number contains only digits and spaces and has a predetermined length; here, counting each digit or space as one character, the box sticker number is 16 characters long.
It should be understood that the box sticker shown in fig. 3 is merely exemplary; the size, shape, and layout of the information items on the box sticker may vary. The information items on the box sticker are likewise not limited to those shown and can be added to or reduced according to actual needs. Further, only the main portion of the box sticker is shown in fig. 3; the picture received at step S210 may include both the image of the box sticker and a background image, as long as the image of the box sticker occupies more than half of the area of the received picture.
In some embodiments, the received box sticker picture may be pre-processed: its size, brightness, contrast, etc. can be adjusted, and the picture may also be rotated, cropped, and so on.
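Purely as an illustration, such preprocessing could be done with OpenCV as in the Python sketch below; the target width and the CLAHE contrast parameters are assumptions, not values given by the disclosure.

```python
import cv2


def preprocess_box_sticker(path, target_width=1280):
    """Load a box sticker photo and apply simple size and contrast adjustments."""
    img = cv2.imread(path)  # BGR image
    if img is None:
        raise FileNotFoundError(path)
    # Resize to a fixed width, keeping the aspect ratio.
    scale = target_width / img.shape[1]
    img = cv2.resize(img, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    # Improve contrast of the luminance channel with CLAHE.
    lab = cv2.cvtColor(img, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    return cv2.cvtColor(cv2.merge((l, a, b)), cv2.COLOR_LAB2BGR)
```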
Next, the method 200 proceeds to step S220. At step S220, the text block detecting component 120 detects text blocks in the box sticker picture by using a CTPN model. The CTPN model is a known neural network model. Here, the box sticker picture received at step S210 is input into the CTPN model, and the CTPN model outputs the detected text blocks. Note that the CTPN model used here has been trained in advance; how to train it is described later.
Taking the box sticker picture shown in fig. 3 as an example, when it is input to the CTPN model, the CTPN model outputs all text blocks in the picture. For example, the CTPN model may detect the box sticker number "1902061681124" as one or more text blocks, may detect each line of an address as one text block, and may detect the generation date and time of the box sticker as one or more text blocks.
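The disclosure gives no code for this step. Purely as an illustration, the Python sketch below shows the kind of interface such a detector could expose; the CTPNDetector class, its detect method, and the TextBlock structure are hypothetical placeholders rather than an existing library API.

```python
from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class TextBlock:
    """One detected text block: bounding box in picture coordinates plus a confidence."""
    box: Tuple[int, int, int, int]  # (x, y, width, height)
    score: float


class CTPNDetector:
    """Hypothetical wrapper around a pre-trained CTPN-style text block detector."""

    def __init__(self, weights_path: str):
        # Path to weights produced by the training procedure described later.
        self.weights_path = weights_path

    def detect(self, image) -> List[TextBlock]:
        # A real implementation would run the network on the picture here;
        # this placeholder only documents the expected inputs and outputs.
        raise NotImplementedError("plug in a trained CTPN model")
```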
Next, the method 200 proceeds to step S230. At step S230, the text recognition component 130 recognizes the text in each text block by using the Tesseract model. Tesseract is a well-known OCR model (recent versions are based on deep learning). Here, the text blocks detected at step S220 are input into the Tesseract model, which outputs the recognized text of each block. Tesseract performs well at recognizing letters, digits, symbols, and the like, and does not need to be trained in advance for this task.
Specifically, the Tesseract model outputs the digits, letters, symbols, etc. in each text block.
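As an illustration only, the text blocks could be cropped from the picture and passed to Tesseract through the pytesseract Python binding, as in the sketch below; the binding choice and the --psm 7 (single text line) setting are assumptions, not requirements of the disclosure.

```python
import cv2
import pytesseract


def recognize_blocks(image, boxes):
    """Run Tesseract on each detected text block; boxes are (x, y, w, h) tuples."""
    texts = []
    for (x, y, w, h) in boxes:
        crop = image[y:y + h, x:x + w]
        gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
        # --psm 7 treats the crop as a single line of text.
        texts.append(pytesseract.image_to_string(gray, config="--psm 7").strip())
    return texts
```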
Next, the method 200 proceeds to step S240. At step S240, the sliding window finding component 140 finds, in the recognized text, at least one character string of a predetermined length containing only digits and spaces, by using a sliding window of that length. More specifically, the sliding window may be 16 characters long and is used to find a 16-character string containing only digits and spaces, i.e., the box sticker number.
Still taking the box sticker picture shown in fig. 3 as an example, a sliding window of length 16 is used to traverse all the text recognized in step S230. When the content inside the sliding window consists only of digits and spaces, the found content is taken as a candidate box sticker number. In some embodiments, the search may additionally require that the found string begins and ends with a digit. It should be understood that one or more strings may be found.
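A minimal sketch of this sliding-window search (window length 16, digits and spaces only, a digit at each end) could look as follows; the function name is illustrative.

```python
def find_candidate_numbers(texts, window=16):
    """Slide a fixed-length window over each recognized string and keep windows
    that contain only digits and spaces and begin and end with a digit."""
    candidates = []
    for text in texts:
        for i in range(len(text) - window + 1):
            chunk = text[i:i + window]
            if (all(c.isdigit() or c == " " for c in chunk)
                    and chunk[0].isdigit()
                    and chunk[-1].isdigit()):
                candidates.append(chunk)
    return candidates
```

For instance, on the made-up input ["no 123 456 789 0123 kg"] the function returns ["123 456 789 0123"]; the digits are illustrative and do not reproduce the actual number format of the sticker in fig. 3.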
The position of the box sticker number on the physical box sticker may be fixed. The sliding window can therefore be restricted according to that position. In some embodiments, the sliding window traverses only the text recognized in the upper half of the box sticker picture, because the box sticker number lies in the upper half of the box sticker and its image is therefore unlikely to appear in the lower half of the picture. In other embodiments, where the box sticker image can be located within the box sticker picture, the sliding window may traverse only the text at a predetermined position (or predetermined area) of the box sticker image to find the box sticker number.
Next, the method 200 proceeds to step S250. At step S250, the barcode locating component 150 locates at least one bar code in the picture of the box sticker. Specifically, the barcode locating component 150 applies dilation-erosion processing to the box sticker picture received at step S210, performs connected-component detection on the processed picture, and determines connected components whose area is larger than a preset value as the located bar codes. Dilation-erosion processing and connected-component detection are known image processing methods. After these two steps, the bar codes in the box sticker picture are detected as connected components, but some character strings (such as "190") may also be detected as connected components. Therefore, only connected components whose area exceeds a preset value are taken as bar codes. For example, only connected components whose area is greater than half (or 60%, etc.) of the area of the largest connected component may be taken as bar code regions.
As mentioned later, step S250 is intended to locate the bar code adjacent to the box sticker number, not to locate every bar code precisely. It is therefore sufficient for step S250 to locate only the bar codes with a large area. The preset value used to screen bar codes may be a fixed value (or a fixed ratio with respect to the largest connected-component area) set empirically.
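The dilation-erosion and connected-component steps could be realized with OpenCV roughly as in the sketch below; the kernel size and the area ratio are illustrative assumptions rather than values fixed by the disclosure.

```python
import cv2


def locate_barcodes(image, area_ratio=0.5):
    """Return bounding boxes of connected components large enough to be bar codes."""
    gray = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)
    # Dark print on a light sticker: invert so the bars become white blobs.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
    # Dilation followed by erosion (morphological closing) with a wide kernel
    # bridges the gaps between bars so each bar code becomes one region.
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (21, 3))
    closed = cv2.morphologyEx(binary, cv2.MORPH_CLOSE, kernel)
    # Connected-component analysis, then keep only the large regions.
    n, _, stats, _ = cv2.connectedComponentsWithStats(closed, connectivity=8)
    areas = stats[1:, cv2.CC_STAT_AREA]  # label 0 is the background
    if len(areas) == 0:
        return []
    threshold = area_ratio * areas.max()
    boxes = []
    for i in range(1, n):
        if stats[i, cv2.CC_STAT_AREA] >= threshold:
            x = int(stats[i, cv2.CC_STAT_LEFT])
            y = int(stats[i, cv2.CC_STAT_TOP])
            w = int(stats[i, cv2.CC_STAT_WIDTH])
            h = int(stats[i, cv2.CC_STAT_HEIGHT])
            boxes.append((x, y, w, h))
    return boxes
```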
Next, the method 200 proceeds to step S260. At step S260, the output component 160 outputs one of the found character strings as the box sticker number based on the relative positional relationship between the found character strings and the bar codes. In addition, the output component 160 also outputs a numeric string recognized by the Tesseract model at a predetermined position in the picture of the box sticker as the weight of the consignment.
The output component 160 may compare the one or more first positions, in the box sticker picture, of the character string(s) found in step S240 with the one or more second positions, in the box sticker picture, of the bar code(s) located in step S250, and output the character string at a given first position as the box sticker number when the relative positional relationship between that first position and one or more second positions satisfies a preset condition.
Taking the box sticker picture of fig. 3 as an example, the output component 160 may output the found character string located immediately above a bar code as the box sticker number. More specifically, the output component 160 may determine that a given first position and a given second position are adjacent when the distance between the two positions is smaller than a preset value; this preset value may be an empirical value. It will be appreciated that the layout of the information items on the box sticker may vary; where the layout varies and is known, the box sticker number can still be determined from the relative positional relationship between the box sticker number and one or more bar codes.
It should be understood that even when only one character string is found in step S240, it can be confirmed to be the box sticker number by checking its relative positional relationship with the one or more bar codes. Verifying this relationship makes the identification of the box sticker number more reliable.
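One way to encode the rule "the character string immediately above a bar code is the box sticker number" is to compare bounding boxes, as in the sketch below; the representation of the candidates and the distance threshold are assumptions.

```python
def pick_box_sticker_number(candidates, barcode_boxes, max_gap=80):
    """candidates: list of (string, (x, y, w, h)); barcode_boxes: list of (x, y, w, h).
    Return the string whose box sits closest above some bar code, or None."""
    best, best_gap = None, float("inf")
    for text, (tx, ty, tw, th) in candidates:
        text_center_x, text_bottom = tx + tw / 2, ty + th
        for (bx, by, bw, bh) in barcode_boxes:
            gap = by - text_bottom  # positive when the string lies above the code
            horizontally_aligned = abs(text_center_x - (bx + bw / 2)) <= bw
            if 0 <= gap <= max_gap and horizontally_aligned and gap < best_gap:
                best, best_gap = text, gap
    return best
```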
Again taking the box sticker picture of fig. 3 as an example, the output component 160 may output, as the weight of the consignment, the numeric string located lowest and leftmost in the box sticker picture among the texts recognized by the Tesseract model. This numeric string representing the weight contains only digits and symbols (such as a decimal point) and may have any length or precision.
It should be understood that the present disclosure is not limited in this respect: if the numeric string representing the weight is located elsewhere on the box sticker, the method according to the present disclosure can still find it by position, because this numeric string has a fixed position on the box sticker.
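A corresponding sketch for the weight field, assuming each recognized string is paired with its bounding box, could pick the purely numeric string from the lowest, leftmost block; the regular expression and the sorting rule are assumptions about the layout.

```python
import re


def pick_weight_string(recognized):
    """recognized: list of (string, (x, y, w, h)) pairs."""
    numeric = [(s.strip(), box) for s, box in recognized
               if re.fullmatch(r"\d+(\.\d+)?", s.strip())]
    if not numeric:
        return None
    # Lowest block first (largest bottom edge), then leftmost.
    numeric.sort(key=lambda item: (-(item[1][1] + item[1][3]), item[1][0]))
    return numeric[0][0]
```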
In some embodiments, the output component 160 may output the found box sticker number and the found weight string to a predefined interface. The predefined interface may be, for example, an output interface of a software application for consignment management. For example, the length, width, and height data of the consignment, the box sticker number of the consignment, and the weight data of the consignment may be output via this interface, which greatly facilitates consignment management. In addition, because the box sticker number uniquely identifies the consignment, a user of method 200 (e.g., a consignor) can easily retrieve information about the consignment, such as its outgoing address, destination address, logistics information, which parts were consigned, the purpose of those parts, and so forth.
How to train the CTPN model is described below. Training of the CTPN model may be performed before the method 200 begins.
A large number of consignment box sticker pictures (e.g., 2000-4000 pictures) are used to train the CTPN model. These may be box sticker pictures taken in the field. A label (or tag) can be added to each picture manually. The added label may mark a text block in the box sticker picture, for example a rectangle marking the position and size of the text block together with a flag indicating whether the rectangle contains text (for instance, "1" for text and "0" for non-text). The CTPN model is then trained using the labelled box sticker pictures so that the text blocks it detects agree as closely as possible with the labelled text blocks.
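For example, each training picture could be paired with an annotation like the one sketched below, where every rectangle records the position and size of a block and a 1/0 flag for text versus non-text; this JSON-like layout is only an illustration, not a format prescribed by the disclosure.

```python
# Hypothetical annotation for one training picture (coordinates in pixels).
annotation = {
    "image": "box_sticker_0001.jpg",
    "boxes": [
        {"x": 34, "y": 52, "width": 410, "height": 38, "label": 1},   # 1 = text block
        {"x": 30, "y": 300, "width": 500, "height": 120, "label": 0},  # 0 = non-text
    ],
}
```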
It should be understood that the above description uses the CTPN model as the text block detection model and the Tesseract model as the text recognition model. However, the present disclosure is not limited thereto. For text block detection, other neural network models may be used instead of the CTPN model, such as an RRPN (Rotation Region Proposal Network) model, a DMPNet (Deep Matching Prior Network) model, a SegLink model, a TextBoxes model, and the like. For text recognition, other deep learning or neural network models may be used instead of the Tesseract model, such as a CRNN (Convolutional Recurrent Neural Network) model, a FOTS (Fast Oriented Text Spotting) model, an STN-OCR (Spatial Transformer Network OCR) model, and the like.
Exemplary methods and apparatus for automatically identifying box sticker numbers according to the present disclosure have been described above with reference to figs. 1 and 2. It can be appreciated that the present disclosure realizes automatic identification of the box sticker number. By applying the method of the present disclosure in software applications such as those used for consignment management, the burden on a manager of having to enter the box sticker number manually is removed. Further, by displaying and/or storing the identifier of the consignment (the box sticker number), the weight of the consignment, and its length, width, and height data in association, more complete management of the consignment is achieved.
Hardware implementation
Fig. 4 illustrates a general hardware environment 400 in which the present disclosure may be applied, according to an exemplary embodiment of the present disclosure.
Referring to fig. 4, a computing device 400 will now be described as an example of a hardware device applicable to aspects of the present disclosure. Computing device 400 may be any machine configured to perform processing and/or computing, and may be, but is not limited to, a workstation, a server, a desktop computer, a laptop computer, a tablet computer, a personal digital assistant, a smart phone, a portable camera, or any combination thereof. The apparatus 100 described above may be implemented in whole or at least in part by a computing device 400 or similar device or system.
Computing device 400 may include elements that connect with bus 402 or communicate with bus 402 via one or more interfaces. For example, computing device 400 may include a bus 402, one or more processors 404, one or more input devices 406, and one or more output devices 408. The one or more processors 404 may be any type of processor and may include, but are not limited to, one or more general-purpose processors and/or one or more special-purpose processors (such as special-purpose processing chips). The input device 406 may be any type of device capable of inputting information to the computing device and may include, but is not limited to, a mouse, a keyboard, a touch screen, a microphone, and/or a remote control. The output device 408 may be any type of device capable of presenting information and may include, but is not limited to, a display, speakers, a video/audio output terminal, and/or a printer. Computing device 400 may also include or be connected with a non-transitory storage device 410, which may be any storage device that is non-transitory and can implement a data store, and may include, but is not limited to, a disk drive, an optical storage device, solid-state storage, a floppy disk, a flexible disk, a hard disk, magnetic tape or any other magnetic medium, a compact disc or any other optical medium, ROM (read-only memory), RAM (random access memory), cache memory and/or any other memory chip or cartridge, and/or any other medium from which a computer can read data, instructions, and/or code. The non-transitory storage device 410 may be detachable from an interface. The non-transitory storage device 410 may store data/instructions/code for implementing the methods and steps described above. Computing device 400 may also include a communication device 412. The communication device 412 may be any type of device or system capable of communicating with external apparatus and/or with a network, and may include, but is not limited to, a modem, a network card, an infrared communication device, a wireless communication device such as a Bluetooth™ device, an 802.11 device, a WiFi device, a WiMax device, a cellular communication facility, and the like.
The bus 402 may include, but is not limited to, an Industry Standard Architecture (ISA) bus, a Micro Channel Architecture (MCA) bus, an Enhanced ISA (EISA) bus, a Video Electronics Standards Association (VESA) local bus, and a Peripheral Component Interconnect (PCI) bus.
Computing device 400 may also include a working memory 414, working memory 414 may be any type of working memory that can store instructions and/or data useful for the operation of processor 404 and may include, but is not limited to, random access memory and/or read only memory devices.
Software elements may reside in the working memory 414 including, but not limited to, an operating system 416, one or more application programs 418, drivers, and/or other data and code. Instructions for performing the above-described methods and steps may be included in one or more applications 418, and the above-described components of apparatus 100 may be implemented by processor 404 reading and executing the instructions of one or more applications 418. More specifically, the text block detection component 120 can be implemented, for example, by the processor 404 when executing the application 418 with instructions to perform step S220. Text recognition component 130 can be implemented, for example, by processor 404 when executing application 418 with instructions to perform step S230. The sliding window finding component 140 may be implemented, for example, by the processor 404 when executing the application 418 with instructions to perform step S240. The barcode locating component 150 may be implemented, for example, by the processor 404 when executing the application 418 with instructions to perform step S250. Similarly, receiving component 110 and output component 160 can be implemented, for example, by processor 404 when executing application 418 with instructions to perform steps S210 and S260, respectively. Executable or source code for the instructions of the software elements may be stored in a non-transitory computer-readable storage medium, such as storage device(s) 410 described above, and may be read into working memory 414, where compilation and/or installation is possible. Executable code or source code for the instructions of the software elements may also be downloaded from a remote location.
From the above embodiments, it is apparent to those skilled in the art that the present disclosure can be implemented by software together with the necessary hardware, or by hardware, firmware, and the like. Based on this understanding, embodiments of the present disclosure may be implemented partially in software. The computer software may be stored in a computer-readable storage medium, such as a floppy disk, hard disk, optical disc, or flash memory. The computer software comprises a series of instructions that cause a computer (e.g., a personal computer, a server, or a network terminal) to perform a method, or a portion thereof, according to the various embodiments of the present disclosure.
Having thus described the disclosure, it will be apparent that the same may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the disclosure, and all such modifications as would be obvious to one skilled in the art are intended to be included within the scope of the following claims.

Claims (10)

1. A method for automatically identifying a box sticker number of a consignment, the box sticker number uniquely identifying the consignment, comprising:
receiving a picture of a box sticker of the consignment;
detecting text blocks in the picture of the box sticker by using a first neural network model;
recognizing the text in each text block by using a second model;
finding, in the recognized text, at least one character string of a predetermined length containing only digits and spaces, by using a sliding window of the predetermined length;
locating at least one bar code in the picture of the box sticker; and
outputting one of the found character strings as the box sticker number based on the relative positional relationship between the found character strings and the bar code.
2. The method of claim 1, wherein locating at least one bar code in the picture of the box sticker comprises:
applying dilation-erosion processing to the picture of the box sticker;
performing connected-component detection on the processed picture of the box sticker; and
determining connected components whose area is larger than a preset value as the located bar codes.
3. The method of claim 1, further comprising: outputting a numeric string recognized by the second model at a predetermined position in the picture of the box sticker as the weight of the consignment.
4. The method of claim 1, wherein the picture of the box sticker of the consignment is obtained by photographing the box sticker of the consignment while the box sticker falls within a view frame of a camera, wherein the image of the box sticker occupies more than half of the area of the picture.
5. The method of claim 3, wherein the outputting further comprises: outputting the character string as the box sticker number and the numeric string as the weight of the consignment to a predefined interface.
6. The method of claim 1, further comprising: training the first neural network model using pictures of consignment box stickers that carry labels marking the text blocks in the pictures, so as to minimize the difference between the detected text blocks and the labels.
7. The method of claim 1, wherein the first neural network model comprises a CTPN model and the second model comprises a Tesseract model.
8. An apparatus for automatically identifying a box sticker number of a consignment, comprising: means for performing the method of any one of claims 1-7.
9. An apparatus for automatically identifying a box sticker number of a consignment, comprising:
at least one processor; and
at least one storage device storing instructions that, when executed by the at least one processor, cause the at least one processor to perform the method of any one of claims 1-7.
10. A non-transitory computer-readable storage medium having stored thereon instructions which, when executed by a processor, cause performance of the method recited in any one of claims 1-7.
CN202011291580.6A 2020-11-18 2020-11-18 Method and device for automatically identifying box label number of consignment and storage medium Pending CN112364861A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011291580.6A CN112364861A (en) 2020-11-18 2020-11-18 Method and device for automatically identifying box label number of consignment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011291580.6A CN112364861A (en) 2020-11-18 2020-11-18 Method and device for automatically identifying box label number of consignment and storage medium

Publications (1)

Publication Number Publication Date
CN112364861A true CN112364861A (en) 2021-02-12

Family

ID=74532449

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011291580.6A Pending CN112364861A (en) 2020-11-18 2020-11-18 Method and device for automatically identifying box label number of consignment and storage medium

Country Status (1)

Country Link
CN (1) CN112364861A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100185754A1 (en) * 2009-01-20 2010-07-22 Parata Systems, Llc Methods, systems, and apparatus for determining and automatically programming network addresses for devices operating in a network
CN105809158A (en) * 2014-12-29 2016-07-27 张继锋 Parcel form, parcel form information identification method and parcel form information identification system
CN108205673A (en) * 2016-12-16 2018-06-26 塔塔顾问服务有限公司 For the method and system of container code identification
CN109241798A (en) * 2018-08-28 2019-01-18 江苏博睿通智能装备有限公司 A kind of intelligent identifying system in driving
CN110956171A (en) * 2019-11-06 2020-04-03 广州供电局有限公司 Automatic nameplate identification method and device, computer equipment and storage medium
US20200302385A1 (en) * 2019-03-18 2020-09-24 Coupang Corp. Systems and methods for automatic package tracking and prioritized reordering
CN111767973A (en) * 2020-07-02 2020-10-13 林昌全 Design method of logistics management software function based on associated label

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王宾; 陆松廷: "Application research on an interactive teaching-assistance model for Health Qigong at traditional Chinese medicine colleges based on the WeChat platform" (基于微信平台的中医药院校健身气功互动教辅模式的应用研究), 中医药管理杂志, no. 01

Similar Documents

Publication Publication Date Title
CN111476227B (en) Target field identification method and device based on OCR and storage medium
US11645826B2 (en) Generating searchable text for documents portrayed in a repository of digital images utilizing orientation and text prediction neural networks
CN108416403B (en) Method, system, equipment and storage medium for automatically associating commodity with label
US8744196B2 (en) Automatic recognition of images
US8788930B2 (en) Automatic identification of fields and labels in forms
JP4995554B2 (en) Retrieval method of personal information using knowledge base for optical character recognition correction
US8086038B2 (en) Invisible junction features for patch recognition
US20160300115A1 (en) Image processing apparatus, image processing method and computer-readable storage medium
US9934444B2 (en) Image processing apparatus, image processing method and computer-readable storage medium
CN112800848A (en) Structured extraction method, device and equipment of information after bill identification
JP2016048444A (en) Document identification program, document identification device, document identification system, and document identification method
US8792730B2 (en) Classification and standardization of field images associated with a field in a form
US20100158375A1 (en) Signal processing apparatus, signal processing method, computer-readable medium and computer data signal
CN110909740A (en) Information processing apparatus and storage medium
CN112861842A (en) Case text recognition method based on OCR and electronic equipment
US9396389B2 (en) Techniques for detecting user-entered check marks
CN110659346A (en) Table extraction method, device, terminal and computer readable storage medium
CN112308046A (en) Method, device, server and readable storage medium for positioning text region of image
CN112364861A (en) Method and device for automatically identifying box label number of consignment and storage medium
JP2008282094A (en) Character recognition processing apparatus
KR20200078880A (en) System for guiding store information using technology of image recognize
CN112287763A (en) Image processing method, apparatus, device and medium
JP2022014793A (en) Information processing device, information processing method, and program
CN112270321A (en) Method and apparatus for automatically recognizing vehicle identification code and storage medium
US11869260B1 (en) Extracting structured data from an image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination