CN110827247A - Method and equipment for identifying label - Google Patents

Method and equipment for identifying label

Info

Publication number
CN110827247A
CN110827247A (application CN201911032918.3A)
Authority
CN
China
Prior art keywords
image
target label
recognition result
coordinate information
label
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201911032918.3A
Other languages
Chinese (zh)
Other versions
CN110827247B (en)
Inventor
徐鹏
沈圣远
常树林
姚巨虎
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Yueyi Network Information Technology Co Ltd
Original Assignee
Shanghai Yueyi Network Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Yueyi Network Information Technology Co Ltd filed Critical Shanghai Yueyi Network Information Technology Co Ltd
Priority to CN201911032918.3A priority Critical patent/CN110827247B/en
Publication of CN110827247A publication Critical patent/CN110827247A/en
Application granted granted Critical
Publication of CN110827247B publication Critical patent/CN110827247B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1439Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K7/1447Methods for optical code recognition including a method step for retrieval of the optical code extracting optical codes from image or text carrying said optical code
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/146Methods for optical code recognition the method including quality enhancement steps
    • G06K7/1478Methods for optical code recognition the method including quality enhancement steps adapting the threshold for pixels in a CMOS or CCD pixel sensor for black and white recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Character Discrimination (AREA)

Abstract

A method and a device for identifying a label are provided. The method comprises: acquiring an image to be detected, and detecting the image to be detected with a convolutional neural network detection model to determine coordinate information of a target label; segmenting the image to be detected according to the coordinate information of the target label to obtain a target label image; performing barcode recognition on the target label image to obtain a first recognition result; performing character recognition on the target label image with a character recognition matching model to obtain a second recognition result; and matching the first recognition result against the second recognition result to determine a label recognition result. The label can thereby be identified quickly and accurately.

Description

Method and equipment for identifying label
Technical Field
The present application relates to the field of tag identification, and in particular, to a method and device for identifying a tag.
Background
For back-plate label recognition, the prior art mostly relies on either barcode recognition or character recognition alone. Each of these two approaches, when used by itself, produces a certain amount of misrecognition, so the recognition accuracy for back-plate labels is often very low.
Disclosure of Invention
An object of the present application is to provide a method and apparatus for identifying a tag, which solve the problem of low accuracy of tag identification in the prior art.
According to an aspect of the present application, there is provided a method of identifying a tag, the method including:
acquiring an image to be detected, and detecting the image to be detected by using a convolutional neural network detection model to determine coordinate information of a target label;
segmenting the image to be detected according to the coordinate information of the target label to obtain the target label image;
performing bar code identification processing on the target label image to obtain a first identification result;
performing character recognition processing on the target label image by using a character recognition matching model to obtain a second recognition result;
and matching the first identification result with the second identification result to determine a label identification result.
Further, before detecting the image to be detected by using a convolutional neural network detection model to determine the coordinate information of the target tag, the method further includes:
acquiring a plurality of label images and marking the label images;
and determining a position deviation iteration parameter according to the marked image, and determining the convolutional neural network detection model according to the position deviation iteration parameter.
Further, the detecting the image to be detected by using a convolutional neural network detection model to determine the coordinate information of the target label includes:
detecting all pixels of the image to be detected by using a convolutional neural network detection model to obtain a first score corresponding to each coordinate information of the image to be detected;
and judging whether the first score is larger than a first preset threshold value, if so, determining the coordinate information corresponding to the first score as the coordinate information of the target label.
Further, the segmenting the image to be detected according to the coordinate information of the target label to obtain the target label image includes:
matting the image to be detected according to the coordinate information of the target label to obtain a plurality of pixels corresponding to the coordinate information of the target label;
calculating each pixel by using a preset segmentation neural network to obtain a second score corresponding to each pixel, judging whether the second score is greater than a second preset threshold value, and if so, determining the target label image according to the pixel corresponding to the second score; and if not, setting the pixel corresponding to the second score to be black.
Further, the barcode recognition processing on the target label image includes:
and performing bar code identification processing on the target label image by using a decoding package, wherein the bar code identification processing comprises hard decoding.
Further, the performing character recognition processing on the target label image by using a character recognition matching model to obtain a second recognition result includes:
inputting the target label image into a preset convolution neural network to obtain a convolution characteristic diagram;
performing characteristic serialization processing on the convolution characteristic diagram to obtain a predicted character sequence;
and calculating the predicted character sequence by using a connectionist temporal classification (CTC) algorithm to determine a second recognition result.
Further, matching the first recognition result with the second recognition result to determine a tag recognition result, including:
matching the first recognition result according to a preset character length and a preset character format, wherein if the first recognition result meets the preset character length and the preset character format, the first recognition result is a label recognition result;
and matching the second recognition result according to a preset character length and a preset character format, wherein if the second recognition result meets the preset character length and the preset character format, the second recognition result is a label recognition result.
According to another aspect of the present application, there is also provided a computer readable medium having computer readable instructions stored thereon, the computer readable instructions being executable by a processor to implement the aforementioned method of identifying a tag.
According to still another aspect of the present application, there is also provided an apparatus for identifying a tag, wherein the apparatus includes:
one or more processors; and
a memory storing computer readable instructions that, when executed, cause the processor to perform the operations of one of the aforementioned methods of identifying tags.
Compared with the prior art, the present application acquires an image to be detected and detects it with a convolutional neural network detection model to determine coordinate information of a target label; segments the image to be detected according to the coordinate information of the target label to obtain a target label image; performs barcode recognition on the target label image to obtain a first recognition result; performs character recognition on the target label image with a character recognition matching model to obtain a second recognition result; and matches the first recognition result against the second recognition result to determine a label recognition result. The label can thereby be identified quickly and accurately.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 illustrates a flow diagram of a method of identifying a tag according to an aspect of the present application;
FIG. 2 illustrates a network flow diagram for determining a convolutional neural network detection model from a convolutional neural network in accordance with a preferred embodiment of the present application;
FIG. 3 is a flow chart of detecting pixels by a residual neural network in a preferred embodiment of the present application;
FIG. 4 illustrates a schematic view of an image of a target tag in a preferred embodiment of the present application;
fig. 5 shows a flow chart of character recognition in a preferred embodiment of the present application.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
The present application is described in further detail below with reference to the attached figures.
In a typical configuration of the present application, the terminal, the device serving the network, and the trusted party each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media include permanent and non-permanent, removable and non-removable media, and may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media do not include transitory media, such as modulated data signals and carrier waves.
Fig. 1 shows a schematic flow chart of a method for identifying a label according to an aspect of the present application. The method comprises steps S11 to S15. In step S11, an image to be detected is acquired and detected with a convolutional neural network detection model to determine the coordinate information of a target label; in step S12, the image to be detected is segmented according to the coordinate information of the target label to obtain a target label image; in step S13, barcode recognition is performed on the target label image to obtain a first recognition result; in step S14, character recognition is performed on the target label image with a character recognition matching model to obtain a second recognition result; in step S15, the first recognition result is matched against the second recognition result to determine the label recognition result. The label can thereby be identified quickly and accurately.
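By way of illustration only, the following minimal Python sketch mirrors the S11 to S15 flow of Fig. 1 with stub functions; every function name, body and value here is a hypothetical placeholder standing in for the models described in the following paragraphs, not an implementation disclosed by the present application.

```python
# Stub pipeline mirroring steps S11-S15; all names and values are hypothetical.

def detect_label_coords(image):          # S11: detection model -> coordinates of the target label
    return (40, 60, 440, 260)            # placeholder box (x1, y1, x2, y2)

def segment_label(image, coords):        # S12: segmentation -> target label image
    x1, y1, x2, y2 = coords
    return [row[x1:x2] for row in image[y1:y2]]

def decode_barcode(label_img):           # S13: barcode (hard) decoding -> first recognition result
    return "201940911492756111"          # placeholder character sequence

def recognize_characters(label_img):     # S14: character recognition -> second recognition result
    return "20190409114927561115"        # placeholder character sequence

def match_results(first, second):        # S15: keep the result that fits the preset format
    return first if len(first) == 20 else second

image = [[0] * 640 for _ in range(480)]  # stand-in for an image to be detected
label_img = segment_label(image, detect_label_coords(image))
print(match_results(decode_barcode(label_img), recognize_characters(label_img)))
```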
Specifically, in step S11, an image to be detected is acquired and detected with a convolutional neural network detection model to determine the coordinate information of the target label. Here, the image to be detected is obtained through a front-end device such as a camera, or from a video screenshot or the like; the image to be detected may be a backboard (back-plate) image. The convolutional neural network detection model may be obtained by deep-learning training, for example by training a convolutional neural network on annotated picture data. The target label is the label located within the image to be detected; for example, if an image to be detected A contains a label P, the label P located by detecting image A is the target label. The convolutional neural network detection model examines all coordinate information of the image to be detected, for example by scoring each coordinate of the image, and the coordinate information of the target label is determined from the resulting scores.
In step S12, the image to be detected is segmented according to the coordinate information of the target label to obtain the target label image. Here, a segmentation neural network may be used for this segmentation: for example, the segmentation neural network calculates a score for each pixel of the image to be detected, and the pixels belonging to the target label image are determined from these scores to obtain the target label image.
In step S13, barcode recognition is performed on the target label image to obtain a first recognition result. A decoding package can be used to perform the barcode recognition on the target label image so as to acquire the first recognition result quickly and accurately. It should be noted that this way of performing barcode recognition on the target label image is only an example; other approaches are also possible, for example performing the barcode recognition with a barcode scanner.
In step S14, character recognition is performed on the target label image with a character recognition matching model to obtain a second recognition result. Here, the character recognition matching model may take a VGG16 convolutional network and a bidirectional long short-term memory network (BiLSTM) as its main constituent networks. Feature extraction, character recognition, character sequence output and similar operations are then performed on the target label image to complete the character recognition, and the character sequence obtained from the target label image is taken as the second recognition result.
In step S15, the first recognition result is matched against the second recognition result to determine the label recognition result. Here, the first recognition result is the character sequence obtained by recognizing the barcode of the target label, and the second recognition result is the character sequence obtained by recognizing the characters of the target label. The two character sequences are determined and compared with each other to determine the label recognition result. Compared with the prior-art practice of relying on barcode recognition alone or character recognition alone, the label can thus be recognized more accurately.
Preferably, before the image to be detected is detected with the convolutional neural network detection model, a plurality of label images are acquired and annotated; a position deviation iteration parameter is determined from the annotated images, and the convolutional neural network detection model is determined from the position deviation iteration parameter. Here, the label identification method described in the present application is preferably applied to label processing during back-plate inspection. After the plurality of label images are obtained, they are annotated manually, for example by marking label regions and non-label regions; a convolutional neural network is then trained with the annotated label images, the position deviation iteration parameter is computed with the convolutional neural network, multiple iterations are performed in the convolutional neural network according to the position deviation iteration parameter to determine a position deviation iteration function, and the convolutional neural network detection model is determined from the position deviation iteration function and the convolutional neural network. As shown in Fig. 2, the annotated label image data are input into the model, the deviation between the predicted coordinate position and the annotated actual label position is calculated, the gradient is then back-propagated to obtain the position deviation iteration parameter, and the parameters of the convolutional neural network detection model are updated according to this parameter to train the model.
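As a rough illustration of the training loop sketched in Fig. 2 (predict the label position, compute the deviation from the annotation, back-propagate the gradient and update the model parameters), the following PyTorch snippet may be considered. The toy backbone, the smooth-L1 loss and the random tensors are assumptions made only for the example and are not the network disclosed in the present application.

```python
import torch
import torch.nn as nn

# Minimal sketch, assuming a toy box-regression model: predict label box
# coordinates, measure the deviation from the annotated position, and
# back-propagate to update the detection-model parameters.

class TinyBoxDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.box_head = nn.Linear(32, 4)  # (x1, y1, x2, y2) of the label

    def forward(self, x):
        f = self.features(x).flatten(1)
        return self.box_head(f)

model = TinyBoxDetector()
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.SmoothL1Loss()              # position deviation between prediction and annotation

images = torch.rand(8, 3, 128, 128)        # stand-in for annotated label images
target_boxes = torch.rand(8, 4) * 128      # stand-in for annotated label positions

for step in range(10):                     # iterative parameter updates
    pred_boxes = model(images)
    loss = criterion(pred_boxes, target_boxes)   # coordinate deviation
    optimizer.zero_grad()
    loss.backward()                              # back-propagate the gradient
    optimizer.step()                             # update the detection-model parameters
```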
Preferably, in step S11, all pixels of the image to be detected are examined with the convolutional neural network detection model to obtain a first score corresponding to each piece of coordinate information of the image to be detected; whether the first score is greater than a first preset threshold is then judged, and if so, the coordinate information corresponding to that first score is determined to be the coordinate information of the target label. As shown in Fig. 3, in a preferred embodiment of the present application a residual neural network (ResNet) may be used to examine all pixels of the image to be detected and output, for each pixel, a label, its coordinate information and a score corresponding to that coordinate information, the pixel coordinate information comprising a horizontal-axis coordinate and a vertical-axis coordinate. A first preset threshold is set, the first score is determined from the pixel coordinate information and its corresponding score, and when the first score is greater than the first preset threshold, the coordinate information of the target label is determined from the coordinate information of the corresponding pixel; the pixel corresponding to such a first score is a pixel of the target label, which improves accuracy.
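A minimal NumPy sketch of this first-score thresholding is given below. The score map here is random and the threshold value 0.8 is an assumption, since the present application does not fix a particular value for the first preset threshold.

```python
import numpy as np

# First-score thresholding: keep the pixel coordinates whose detection score
# exceeds the first preset threshold and treat them as target-label pixels.

score_map = np.random.rand(480, 640)          # stand-in: first score for every pixel coordinate
first_threshold = 0.8                         # assumed first preset threshold

ys, xs = np.where(score_map > first_threshold)
target_coords = np.stack([xs, ys], axis=1)    # (x, y) coordinates of target-label pixels

if target_coords.size:
    # A bounding box over the selected pixels approximates the label position.
    x1, y1 = target_coords.min(axis=0)
    x2, y2 = target_coords.max(axis=0)
    print("target label box:", (x1, y1, x2, y2))
```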
Preferably, in step S12, the image to be detected is cropped (matted) according to the coordinate information of the target label to obtain the pixels corresponding to that coordinate information; each pixel is evaluated with a preset segmentation neural network to obtain a second score for each pixel, whether the second score is greater than a second preset threshold is judged, and if so, the target label image is determined from the pixels corresponding to those second scores; if not, the corresponding pixels are set to black. After the image to be detected has been segmented by the preset segmentation neural network, the non-label regions corresponding to the target label coordinate information can be removed, which reduces the amount of data to be processed and speeds up the recognition of the label image. The segmentation neural network includes, but is not limited to, a fully convolutional network (FCN) and a U-Net. In a preferred embodiment of the present application, a U-Net is used as the segmentation neural network, which greatly reduces the number of parameters to be trained, while its characteristic U-shaped structure preserves the information in the image to be detected well; moreover, the U-Net can perform convolution on pictures of any shape and size, which makes it convenient to process back-plate pictures of any size as the image to be detected.
Fig. 4 shows a schematic diagram of a target label image in a preferred embodiment of the present application. The trained segmentation neural network computes, for each pixel of the cropped image to be detected, whether it belongs to the target label, so as to generate a picture containing only label information. The segmentation mainly removes the interference of non-target-label regions: after segmentation, a picture containing only label data is obtained and all pixels of the other, non-target-label regions are set to black, which reduces the amount of data processed in the subsequent recognition of the target label and speeds up that recognition. A second preset threshold is set in advance; pixels whose second score is greater than the second preset threshold are taken as target-label pixels, and pixels whose second score is less than or equal to the second preset threshold have their pixel value set to 0 to complete the blackening, thereby reducing interference.
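The second-score thresholding and blackening can be illustrated with the short NumPy sketch below. The random crop, the random score map and the threshold of 0.5 are assumptions; in practice the scores would come from the segmentation neural network (for example a U-Net) mentioned above.

```python
import numpy as np

# Keep pixels whose segmentation score exceeds the second preset threshold;
# set all other pixels to black (value 0) to reduce interference.

crop = (np.random.rand(200, 400, 3) * 255).astype(np.uint8)  # stand-in for the cropped label region
second_scores = np.random.rand(200, 400)                     # stand-in: second score per pixel
second_threshold = 0.5                                       # assumed second preset threshold

label_mask = second_scores > second_threshold
label_only = crop.copy()
label_only[~label_mask] = 0        # blacken non-label pixels; only label data remains
```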
Preferably, in step S13, a decoding package is used to perform the barcode recognition on the target label image, the barcode recognition including hard decoding. Here, the barcode in the target label image may be recognized by hard decoding with the decoding package so as to recognize the barcode of the target label quickly.
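The present application does not name a particular decoding package. As one possible concrete choice, the sketch below uses the pyzbar library (a Python wrapper around ZBar) to decode the barcode in the segmented label image; the library choice and the file name are assumptions made only for illustration.

```python
from PIL import Image
from pyzbar.pyzbar import decode  # pyzbar is only one possible decoding package

# Hard-decode the barcode in the segmented target label image (step S13).
label_image = Image.open("label_only.png")          # hypothetical file name
decoded = decode(label_image)                       # list of detected barcodes
first_result = decoded[0].data.decode("utf-8") if decoded else ""
print(first_result)                                 # character sequence read from the barcode
```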
Preferably, in step S14, the target label image is input into a preset convolutional neural network to obtain a convolutional feature map; the convolutional feature map is serialized into features to obtain a predicted character sequence; and the predicted character sequence is evaluated with a connectionist temporal classification algorithm to determine the second recognition result. Here, the preset convolutional neural network includes, but is not limited to, a VGG16 network, a residual neural network (ResNet) and an Inception network. Fig. 5 shows a character recognition flowchart in a preferred embodiment of the present application: a VGG16 network computes a new convolutional feature map from the preprocessed target label image, and the feature map is serialized, that is, subjected to a dimension transformation, where the preprocessing may be rotation, sharpening or the like; the VGG16 network requires little computation yet performs well, so the new convolutional feature map can be obtained quickly and accurately. The dimension-transformed feature map is then passed through a bidirectional long short-term memory network (BiLSTM) to obtain the predicted character sequence, and the character result of the character recognition, namely the second recognition result, is computed with a connectionist temporal classification (CTC) algorithm so that the label can be recognized accurately.
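For illustration, the following PyTorch sketch follows the same overall structure (convolutional feature map, feature serialization, BiLSTM, CTC-style greedy decoding). The layer sizes, the 37-symbol alphabet and the greedy decoder are assumptions made here for brevity and are not the exact network of the preferred embodiment.

```python
import torch
import torch.nn as nn

NUM_CLASSES = 37  # assumed alphabet: 10 digits + 26 letters + 1 CTC blank (index 0)

class CRNNSketch(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(                       # small VGG-style feature extractor
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2, 2),
            nn.Conv2d(128, 256, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, None)),            # collapse height to 1, keep width
        )
        self.rnn = nn.LSTM(256, 128, bidirectional=True, batch_first=True)
        self.fc = nn.Linear(256, NUM_CLASSES)

    def forward(self, x):                               # x: (N, 1, H, W) grayscale label image
        f = self.cnn(x)                                 # (N, 256, 1, W')
        seq = f.squeeze(2).permute(0, 2, 1)             # feature serialization: (N, W', 256)
        out, _ = self.rnn(seq)                          # BiLSTM over the feature sequence
        return self.fc(out)                             # per-step character scores

def greedy_ctc_decode(logits):
    """Collapse repeated symbols and drop the blank (index 0)."""
    best = logits.argmax(dim=-1)[0].tolist()
    chars, prev = [], None
    for idx in best:
        if idx != prev and idx != 0:
            chars.append(idx)
        prev = idx
    return chars

model = CRNNSketch()
dummy_label = torch.rand(1, 1, 32, 160)                 # stand-in for a preprocessed label image
print(greedy_ctc_decode(model(dummy_label)))            # predicted character indices
```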
Preferably, in step S15, the first recognition result is matched against a preset character length and a preset character format, and if the first recognition result conforms to the preset character length and the preset character format, the first recognition result is the label recognition result; the second recognition result is likewise matched against the preset character length and the preset character format, and if the second recognition result conforms to them, the second recognition result is the label recognition result. Here, the label recognition result is determined jointly from the first recognition result and the second recognition result, which improves the accuracy of label recognition.
In a preferred embodiment of the present application, the label shown in Fig. 4 is recognized. The first recognition result is the hard-decoding result, whose characters are 201940911492756111; the second recognition result is the character recognition result, whose characters are 20190409114927561115. The hard-decoding result and the character recognition result are each matched against the preset character length and the preset year-month-day format: the hard-decoding result does not conform to the preset character length or the preset year-month-day format, while the character recognition result conforms to both, so the character recognition result is determined to be the label recognition result, improving the accuracy of label recognition.
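The length-and-format check of this embodiment can be reproduced with the short standard-library sketch below. The 20-character preset length and the leading YYYYMMDD date check are assumptions about the concrete preset rules, which the present application leaves open; the two input strings are the example values above.

```python
from datetime import datetime

PRESET_LENGTH = 20  # assumed preset character length

def conforms(result: str) -> bool:
    """Check the preset character length and a leading year-month-day format."""
    if len(result) != PRESET_LENGTH or not result.isdigit():
        return False
    try:
        datetime.strptime(result[:8], "%Y%m%d")   # leading year-month-day check
    except ValueError:
        return False
    return True

first_result = "201940911492756111"     # hard-decoding result from the embodiment
second_result = "20190409114927561115"  # character-recognition result from the embodiment

label_result = first_result if conforms(first_result) else (
    second_result if conforms(second_result) else None)
print(label_result)  # -> 20190409114927561115
```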
Furthermore, the embodiment of the present application also provides a computer readable medium, on which computer readable instructions are stored, the computer readable instructions being executable by a processor to implement the aforementioned method for identifying a tag.
According to still another aspect of the present application, there is also provided an apparatus for identifying a tag, wherein the apparatus includes:
one or more processors; and
a memory storing computer readable instructions that, when executed, cause the processor to perform the operations of one of the aforementioned methods of identifying tags.
For example, the computer readable instructions, when executed, cause the one or more processors to: acquiring an image to be detected, and detecting the image to be detected by using a convolutional neural network detection model to determine coordinate information of a target label; segmenting the image to be detected according to the coordinate information of the target label to obtain the target label image; performing bar code identification processing on the target label image to obtain a first identification result; performing character recognition processing on the target label image by using a character recognition matching model to obtain a second recognition result; and matching the first identification result with the second identification result to determine a label identification result.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present application without departing from the spirit and scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims of the present application and their equivalents, the present application is intended to include such modifications and variations as well.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, implemented using Application Specific Integrated Circuits (ASICs), general purpose computers or any other similar hardware devices. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions described above. Likewise, the software programs (including associated data structures) of the present application may be stored in a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. Additionally, some of the steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
In addition, some of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application through the operation of the computer. Program instructions which invoke the methods of the present application may be stored on a fixed or removable recording medium and/or transmitted via a data stream on a broadcast or other signal-bearing medium and/or stored within a working memory of a computer device operating in accordance with the program instructions. An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to perform a method and/or a solution according to the aforementioned embodiments of the present application.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the apparatus claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.

Claims (9)

1. A method of identifying a tag, wherein the method comprises:
acquiring an image to be detected, and detecting the image to be detected by using a convolutional neural network detection model to determine coordinate information of a target label;
segmenting the image to be detected according to the coordinate information of the target label to obtain the target label image;
performing bar code identification processing on the target label image to obtain a first identification result;
performing character recognition processing on the target label image by using a character recognition matching model to obtain a second recognition result;
and matching the first identification result with the second identification result to determine a label identification result.
2. The method of claim 1, wherein before detecting the image to be detected using a convolutional neural network detection model to determine coordinate information of the target tag, further comprising:
acquiring a plurality of label images and marking the label images;
and determining a position deviation iteration parameter according to the marked image, and determining the convolutional neural network detection model according to the position deviation iteration parameter.
3. The method of claim 1, wherein the detecting the image to be detected to determine the coordinate information of the target tag by using a convolutional neural network detection model comprises:
detecting all pixels of the image to be detected by using a convolutional neural network detection model to obtain a first score corresponding to each coordinate information of the image to be detected;
and judging whether the first score is larger than a first preset threshold value, if so, determining the coordinate information corresponding to the first score as the coordinate information of the target label.
4. The method as claimed in claim 1, wherein the segmenting the image to be detected according to the coordinate information of the target label to obtain the target label image comprises:
matting the image to be detected according to the coordinate information of the target label to obtain a plurality of pixels corresponding to the coordinate information of the target label;
calculating each pixel by using a preset segmentation neural network to obtain a second score corresponding to each pixel, judging whether the second score is greater than a second preset threshold value, and if so, determining the target label image according to the pixel corresponding to the second score; and if not, setting the pixel corresponding to the second score to be black.
5. The method of claim 1, wherein the barcode recognition processing of the target tag image comprises:
and performing bar code identification processing on the target label image by using a decoding package, wherein the bar code identification processing comprises hard decoding.
6. The method of claim 1, wherein the performing character recognition processing on the target label image by using a character recognition matching model to obtain a second recognition result comprises:
inputting the target label image into a preset convolution neural network to obtain a convolution characteristic diagram;
performing characteristic serialization processing on the convolution characteristic diagram to obtain a predicted character sequence;
and calculating the predicted character sequence by using a connectionist temporal classification (CTC) algorithm to determine a second recognition result.
7. The method of claim 1, wherein matching the first recognition result with the second recognition result to determine a tag recognition result comprises:
matching the first recognition result according to a preset character length and a preset character format, wherein if the first recognition result meets the preset character length and the preset character format, the first recognition result is a label recognition result;
and matching the second recognition result according to a preset character length and a preset character format, wherein if the second recognition result meets the preset character length and the preset character format, the second recognition result is a label recognition result.
8. An apparatus for identifying a tag, wherein the apparatus comprises:
one or more processors; and
a memory storing computer readable instructions that, when executed, cause the processor to perform the operations of the method of any of claims 1 to 7.
9. A computer readable medium having computer readable instructions stored thereon which are executable by a processor to implement the method of any one of claims 1 to 7.
CN201911032918.3A 2019-10-28 2019-10-28 Label identification method and device Active CN110827247B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911032918.3A CN110827247B (en) 2019-10-28 2019-10-28 Label identification method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911032918.3A CN110827247B (en) 2019-10-28 2019-10-28 Label identification method and device

Publications (2)

Publication Number Publication Date
CN110827247A true CN110827247A (en) 2020-02-21
CN110827247B CN110827247B (en) 2024-03-15

Family

ID=69551323

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911032918.3A Active CN110827247B (en) 2019-10-28 2019-10-28 Label identification method and device

Country Status (1)

Country Link
CN (1) CN110827247B (en)

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1179752A (en) * 1995-03-31 1998-04-22 基维软件程序有限公司 Machine-readable label
CN105760886A (en) * 2016-02-23 2016-07-13 北京联合大学 Image scene multi-object segmentation method based on target identification and saliency detection
CN105787482A (en) * 2016-02-26 2016-07-20 华北电力大学 Specific target outline image segmentation method based on depth convolution neural network
WO2018099194A1 (en) * 2016-11-30 2018-06-07 杭州海康威视数字技术股份有限公司 Character identification method and device
CN107103613A (en) * 2017-03-28 2017-08-29 深圳市未来媒体技术研究院 A kind of three-dimension gesture Attitude estimation method
CN108133233A (en) * 2017-12-18 2018-06-08 中山大学 A kind of multi-tag image-recognizing method and device
CN108416412A (en) * 2018-01-23 2018-08-17 浙江瀚镪自动化设备股份有限公司 A kind of logistics compound key recognition methods based on multitask deep learning
CN108898045A (en) * 2018-04-23 2018-11-27 杭州电子科技大学 The multi-tag image pre-processing method of gesture identification based on deep learning
CN109117862A (en) * 2018-06-29 2019-01-01 北京达佳互联信息技术有限公司 Image tag recognition methods, device and server
CN108920992A (en) * 2018-08-08 2018-11-30 长沙理工大学 A kind of positioning and recognition methods of the medical label bar code based on deep learning
CN110084150A (en) * 2019-04-09 2019-08-02 山东师范大学 A kind of Automated Classification of White Blood Cells method and system based on deep learning

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11989710B2 (en) 2018-12-19 2024-05-21 Ecoatm, Llc Systems and methods for vending and/or purchasing mobile phones and other electronic devices
US11843206B2 (en) 2019-02-12 2023-12-12 Ecoatm, Llc Connector carrier for electronic device kiosk
US11798250B2 (en) 2019-02-18 2023-10-24 Ecoatm, Llc Neural network based physical condition evaluation of electronic devices, and associated systems and methods
CN111291743A (en) * 2020-03-31 2020-06-16 深圳前海微众银行股份有限公司 Tool disinfection monitoring method, device, equipment and storage medium
CN113591857A (en) * 2020-04-30 2021-11-02 阿里巴巴集团控股有限公司 Character image processing method and device and ancient Chinese book image identification method
CN111651571A (en) * 2020-05-19 2020-09-11 腾讯科技(深圳)有限公司 Man-machine cooperation based session realization method, device, equipment and storage medium
CN111651571B (en) * 2020-05-19 2023-10-17 腾讯科技(深圳)有限公司 Conversation realization method, device, equipment and storage medium based on man-machine cooperation
US11922467B2 (en) 2020-08-17 2024-03-05 ecoATM, Inc. Evaluating an electronic device using optical character recognition
US12033454B2 (en) 2020-08-17 2024-07-09 Ecoatm, Llc Kiosk for evaluating and purchasing used electronic devices
CN112052696A (en) * 2020-08-31 2020-12-08 中冶赛迪重庆信息技术有限公司 Bar finished product warehouse-out label identification method, device and equipment based on machine vision
CN112329851A (en) * 2020-11-05 2021-02-05 腾讯科技(深圳)有限公司 Icon detection method and device and computer readable storage medium
WO2022116720A1 (en) * 2020-12-02 2022-06-09 歌尔股份有限公司 Target detection method and apparatus, and electronic device
CN112560767A (en) * 2020-12-24 2021-03-26 南方电网深圳数字电网研究院有限公司 Document signature identification method and device and computer readable storage medium
CN112733568A (en) * 2021-01-21 2021-04-30 北京京东振世信息技术有限公司 One-dimensional bar code identification method, device, equipment and storage medium
CN112733568B (en) * 2021-01-21 2024-04-19 北京京东振世信息技术有限公司 One-dimensional bar code recognition method, device, equipment and storage medium
CN113298357A (en) * 2021-04-29 2021-08-24 福建宏泰智能工业互联网有限公司 Quality data processing method, device and system
CN113567450A (en) * 2021-07-20 2021-10-29 上汽通用五菱汽车股份有限公司 Engine label information visual detection system and method
CN114519858B (en) * 2022-02-16 2023-09-05 北京百度网讯科技有限公司 Document image recognition method and device, storage medium and electronic equipment
CN114519858A (en) * 2022-02-16 2022-05-20 北京百度网讯科技有限公司 Document image recognition method and device, storage medium and electronic equipment
WO2023241102A1 (en) * 2022-06-15 2023-12-21 卡奥斯工业智能研究院(青岛)有限公司 Label recognition method and apparatus, and electronic device and storage medium
CN115937868A (en) * 2022-12-12 2023-04-07 江苏中烟工业有限责任公司 Cigarette packet label information matching method and device, electronic equipment and storage medium
CN117054378A (en) * 2023-08-12 2023-11-14 深圳市华怡丰科技有限公司 Label sensor adjustment method, system, equipment and storage medium

Also Published As

Publication number Publication date
CN110827247B (en) 2024-03-15

Similar Documents

Publication Publication Date Title
CN110827247B (en) Label identification method and device
CN109117848B (en) Text line character recognition method, device, medium and electronic equipment
CN110689037B (en) Method and system for automatic object annotation using deep networks
US11080839B2 (en) System and method for training a damage identification model
CN111191649A (en) Method and equipment for identifying bent multi-line text image
CN104217203B (en) Complex background card face information identifying method and system
CN110348392B (en) Vehicle matching method and device
CN110796145B (en) Multi-certificate segmentation association method and related equipment based on intelligent decision
CN109977824B (en) Article taking and placing identification method, device and equipment
CN110288612B (en) Nameplate positioning and correcting method and device
CN113505781B (en) Target detection method, target detection device, electronic equipment and readable storage medium
CN113762455B (en) Detection model training method, single word detection method, device, equipment and medium
CN112085022A (en) Method, system and equipment for recognizing characters
CN111552837A (en) Animal video tag automatic generation method based on deep learning, terminal and medium
CN111507332A (en) Vehicle VIN code detection method and equipment
US11704476B2 (en) Text line normalization systems and methods
WO2020047316A1 (en) System and method for training a damage identification model
CN111881923B (en) Bill element extraction method based on feature matching
CN110728193B (en) Method and device for detecting richness characteristics of face image
CN115953744A (en) Vehicle identification tracking method based on deep learning
CN114463858B (en) Signature behavior recognition method and system based on deep learning
CN114882204A (en) Automatic ship name recognition method
CN117115823A (en) Tamper identification method and device, computer equipment and storage medium
CN116580390A (en) Price tag content acquisition method, price tag content acquisition device, storage medium and computer equipment
CN116563876A (en) Invoice identification method

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 1101-1103, No. 433, Songhu Road, Yangpu District, Shanghai

Applicant after: Shanghai wanwansheng Environmental Protection Technology Group Co.,Ltd.

Address before: Room 1101-1103, No. 433, Songhu Road, Yangpu District, Shanghai

Applicant before: SHANGHAI YUEYI NETWORK INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant