CN110827247B - Label identification method and device - Google Patents


Info

Publication number
CN110827247B
CN110827247B (application CN201911032918.3A)
Authority
CN
China
Prior art keywords
image
tag
recognition result
target
detected
Prior art date
Legal status
Active
Application number
CN201911032918.3A
Other languages
Chinese (zh)
Other versions
CN110827247A (en)
Inventor
徐鹏
沈圣远
常树林
姚巨虎
Current Assignee
Shanghai Wanwu Xinsheng Environmental Technology Group Co
Original Assignee
Shanghai Wanwu Xinsheng Environmental Technology Group Co
Priority date
Filing date
Publication date
Application filed by Shanghai Wanwu Xinsheng Environmental Technology Group Co filed Critical Shanghai Wanwu Xinsheng Environmental Technology Group Co
Priority to CN201911032918.3A priority Critical patent/CN110827247B/en
Publication of CN110827247A publication Critical patent/CN110827247A/en
Application granted granted Critical
Publication of CN110827247B publication Critical patent/CN110827247B/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/1439Methods for optical code recognition including a method step for retrieval of the optical code
    • G06K7/1447Methods for optical code recognition including a method step for retrieval of the optical code extracting optical codes from image or text carrying said optical code
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404Methods for optical code recognition
    • G06K7/146Methods for optical code recognition the method including quality enhancement steps
    • G06K7/1478Methods for optical code recognition the method including quality enhancement steps adapting the threshold for pixels in a CMOS or CCD pixel sensor for black and white recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/26Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • G06V10/267Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion by performing operations on regions, e.g. growing, shrinking or watersheds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V30/00Character recognition; Recognising digital ink; Document-oriented image-based pattern recognition
    • G06V30/10Character recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Health & Medical Sciences (AREA)
  • Toxicology (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Electromagnetism (AREA)
  • Quality & Reliability (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Character Discrimination (AREA)

Abstract

The application aims to provide a tag identification method and device. An image to be detected is acquired and detected with a convolutional neural network detection model to determine the coordinate information of a target tag; the image to be detected is segmented according to that coordinate information to obtain the target tag image; barcode recognition processing is performed on the target tag image to obtain a first recognition result; character recognition processing is performed on the target tag image with a character recognition matching model to obtain a second recognition result; and the first and second recognition results are matched to determine the tag recognition result, so that the tag is identified quickly and accurately.

Description

Label identification method and device
Technical Field
The present disclosure relates to the field of tag identification, and in particular, to a method and apparatus for identifying a tag.
Background
For back-panel label identification, the prior art mostly uses either barcode recognition or character recognition on its own. Used in isolation, each method misrecognizes to some extent, so the accuracy of back-panel label identification is often low.
Disclosure of Invention
An object of the present application is to provide a method and apparatus for identifying a tag, which solve the problem of low accuracy of tag identification in the prior art.
According to one aspect of the present application, there is provided a method of identifying a tag, the method comprising:
acquiring an image to be detected, and detecting the image to be detected by using a convolutional neural network detection model to determine coordinate information of a target tag;
dividing the image to be detected according to the coordinate information of the target label to obtain the target label image;
performing bar code recognition processing on the target tag image to obtain a first recognition result;
performing character recognition processing on the target tag image by using a character recognition matching model to obtain a second recognition result;
and matching the first identification result with the second identification result to determine a tag identification result.
Further, before the image to be detected is detected by using the convolutional neural network detection model to determine the coordinate information of the target tag, the method further comprises:
acquiring a plurality of tag images, and marking the tag images;
and determining a position deviation iteration parameter according to the marked image, and determining the convolutional neural network detection model according to the position deviation iteration parameter.
Further, the detecting the image to be detected by using the convolutional neural network detection model to determine the coordinate information of the target tag includes:
detecting all pixels of the image to be detected by using a convolutional neural network detection model to obtain a first score corresponding to each coordinate information of the image to be detected;
and judging whether the first score is larger than a first preset threshold value, if so, the coordinate information corresponding to the first score is the coordinate information of the target label.
Further, the dividing the image to be detected according to the coordinate information of the target tag to obtain the target tag image includes:
the image to be detected is cropped according to the coordinate information of the target label so as to obtain a plurality of pixels corresponding to the coordinate information of the target label;
calculating each pixel by using a preset segmentation neural network, obtaining a second score corresponding to each pixel, judging whether the second score is larger than a second preset threshold, and if so, determining the target label image according to the pixel corresponding to the second score; and if not, setting black the pixel corresponding to the second score.
Further, the performing barcode recognition processing on the target tag image includes:
and performing barcode recognition processing on the target tag image by using a decoding library, wherein the barcode recognition processing includes hard decoding.
Further, the performing character recognition processing on the target tag image by using the character recognition matching model to obtain a second recognition result, including:
inputting the target label image into a preset convolutional neural network to obtain a convolutional feature map;
performing feature serialization processing on the convolution feature map to obtain a predicted character sequence;
and calculating the predicted character sequence by using a connectionist temporal classification (CTC) algorithm to determine the second recognition result.
Further, matching the first recognition result with the second recognition result, determining a tag recognition result includes:
matching the first recognition result according to the preset character length and the preset character format, and if the first recognition result meets the preset character length and the preset character format, the first recognition result is a label recognition result;
and matching the second recognition result according to the preset character length and the preset character format, and if the second recognition result meets the preset character length and the preset character format, the second recognition result is a label recognition result.
According to another aspect of the present application, there is also provided a computer readable medium having stored thereon computer readable instructions executable by a processor to implement a method of identifying a tag as described above.
According to yet another aspect of the present application, there is also provided an apparatus for identifying a tag, wherein the apparatus includes:
one or more processors; and
a memory storing computer readable instructions that, when executed, cause the processor to perform the operations of the aforementioned method of identifying a tag.
Compared with the prior art, the method and the device have the advantages that the image to be detected is detected by using the convolutional neural network detection model to determine the coordinate information of the target tag; dividing the image to be detected according to the coordinate information of the target label to obtain the target label image; performing bar code recognition processing on the target tag image to obtain a first recognition result; performing character recognition processing on the target tag image by using a character recognition matching model to obtain a second recognition result; and matching the first identification result with the second identification result to determine a tag identification result. Thereby rapidly and accurately identifying the tag.
Drawings
Other features, objects and advantages of the present application will become more apparent upon reading of the detailed description of non-limiting embodiments, made with reference to the following drawings, in which:
FIG. 1 illustrates a method flow diagram for providing an identification tag according to one aspect of the present application;
FIG. 2 illustrates a network flow diagram for determining a convolutional neural network detection model from a convolutional neural network in accordance with a preferred embodiment of the present application;
FIG. 3 is a schematic flow chart of a residual neural network detection pixel in a preferred embodiment of the present application;
FIG. 4 shows a schematic representation of an image of a target tag in a preferred embodiment of the present application;
fig. 5 shows a character recognition flow chart in a preferred embodiment of the present application.
The same or similar reference numbers in the drawings refer to the same or similar parts.
Detailed Description
The present application is described in further detail below with reference to the accompanying drawings.
In one typical configuration of the present application, the terminal, the device of the service network, and the trusted party each include one or more processors (CPUs), input/output interfaces, network interfaces, and memory.
The memory may include volatile memory in a computer-readable medium, such as random access memory (RAM), and/or non-volatile memory, such as read-only memory (ROM) or flash memory (flash RAM). Memory is an example of a computer-readable medium.
Computer-readable media, including both permanent and non-permanent, removable and non-removable media, may implement information storage by any method or technology. The information may be computer-readable instructions, data structures, program modules, or other data. Examples of computer storage media include, but are not limited to, phase-change memory (PRAM), static random access memory (SRAM), dynamic random access memory (DRAM), other types of random access memory (RAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), flash memory or other memory technology, compact disc read-only memory (CD-ROM), digital versatile discs (DVD) or other optical storage, magnetic cassettes, magnetic tape or disk storage or other magnetic storage devices, or any other non-transmission medium that can be used to store information accessible by a computing device. As defined herein, computer-readable media does not include transitory computer-readable media (transmission media) such as modulated data signals and carrier waves.
FIG. 1 illustrates a flow chart of a method of providing an identification tag according to one aspect of the present application, the method comprising: S11-S15, wherein in the step S11, an image to be detected is obtained, and the image to be detected is detected by using a convolutional neural network detection model so as to determine coordinate information of a target label; step S12, dividing the image to be detected according to the coordinate information of the target label to obtain the target label image; step S13, performing bar code recognition processing on the target tag image to obtain a first recognition result; s14, performing character recognition processing on the target tag image by using a character recognition matching model to obtain a second recognition result; step S15, the first recognition result and the second recognition result are matched, and a tag recognition result is determined. Thereby rapidly and accurately identifying the tag.
Specifically, in step S11, an image to be detected is acquired, and the image to be detected is detected by using a convolutional neural network detection model to determine the coordinate information of the target tag. Here, a front-end device such as a camera or a video-capture device acquires the image to be detected, which may be a back-panel image. The convolutional neural network detection model can be obtained through deep-learning training, for example by training a convolutional neural network with labeled picture data. The target tag is the tag located in the image to be detected: for example, if image A to be detected contains tag P and tag P is located within image A, tag P is the target tag. The convolutional neural network detection model detects all coordinate information in the image to be detected, for example by scoring each coordinate of the image, and the coordinate information of the target tag is determined from the resulting scores.
And step S12, dividing the image to be detected according to the coordinate information of the target label to obtain the target label image. Here, the image to be detected may be segmented according to the coordinate information of the target label by using a segmentation neural network, for example, a score corresponding to each pixel of the image to be detected is calculated by using the segmentation neural network, and each pixel corresponding to the target label image is determined according to the score, so as to obtain the target label image.
And in step S13, barcode recognition processing is performed on the target tag image to obtain a first recognition result. A decoding library can be used to perform barcode recognition processing on the target tag image so as to acquire the first recognition result quickly and accurately. It should be noted that this method of performing barcode recognition processing on the target tag image is only an example; other methods may also be used, for example performing barcode recognition on the target tag image with a barcode scanner.
And in step S14, character recognition processing is performed on the target tag image by using a character recognition matching model to obtain a second recognition result. Here, the character recognition matching model may consist mainly of a VGG16 convolutional network and a bidirectional long short-term memory network (BiLSTM). Feature extraction, character recognition, and character-sequence output are then performed on the target label image to complete the character recognition processing, and the character sequence obtained from the target label image is taken as the second recognition result.
Step S15, the first recognition result and the second recognition result are matched, and a tag recognition result is determined. The first recognition result is a character sequence obtained after the bar code of the target tag is recognized, and the second recognition result is a character sequence obtained after the character of the target tag is recognized. And determining the tag recognition result by judging and comparing the character sequence corresponding to the first recognition result and the character sequence corresponding to the second recognition result. Compared with the mode of only recognizing the bar code and only recognizing the character in the prior art, the method and the device can recognize the label more accurately.
Preferably, before step S12, a plurality of label images are acquired and labeled; a position deviation iteration parameter is determined from the labeled images, and the convolutional neural network detection model is determined according to the position deviation iteration parameter. The tag identification method described in this application is preferably applied to tag processing during back-panel detection: after the label images are acquired, they are labeled manually, for example marking label areas and non-label areas. A convolutional neural network is trained with the labeled label images, the position deviation iteration parameter is calculated through the network, multiple iterations are then performed in the network according to that parameter to determine a position deviation iteration function, and the convolutional neural network detection model is determined from the iteration function and the network. As shown in fig. 2, the labeled label image data is input into the model, the deviation between the predicted coordinate position and the labeled actual label position is calculated, the position deviation iteration parameter is then obtained by back-propagating the gradient, and the parameters in the convolutional neural network detection model are updated accordingly to train the model.
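The patent does not pin down the "position deviation iteration parameter" mathematically. One plausible reading, assumed here and not stated in the source, is the gradient of a squared position-deviation loss between predicted and labeled coordinates, applied over repeated iterations:

```python
import numpy as np

def iterate_position(pred, target, lr=0.1, steps=100):
    """Iteratively shrink the deviation between a predicted and a labeled position.

    Illustrative sketch only: the loss sum((pred - target)^2), the learning
    rate, and the step count are assumptions, not values from the patent.
    """
    pred = np.asarray(pred, dtype=float)
    target = np.asarray(target, dtype=float)
    for _ in range(steps):
        grad = 2.0 * (pred - target)  # gradient of the squared position deviation
        pred -= lr * grad             # back-propagated parameter update
    return pred

# A predicted corner at (0, 0) is pulled toward the labeled corner at (5, 3).
print(iterate_position([0.0, 0.0], [5.0, 3.0]))
```

In a real detection network the same update is applied to the network weights rather than to the coordinates directly; this sketch only makes the iteration concrete.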
Preferably, in step S11, all pixels of the image to be detected are detected by using the convolutional neural network detection model to obtain a first score corresponding to each piece of coordinate information of the image to be detected; whether the first score is larger than a first preset threshold value is then judged, and if so, the coordinate information corresponding to the first score is the coordinate information of the target label. As shown in fig. 3, in a preferred embodiment of the present application, a residual neural network (ResNet) may be used to detect all pixels of the image to be detected and output a pixel label, pixel coordinate information, and a score corresponding to the pixel coordinate information, where the pixel coordinate information includes horizontal-axis and vertical-axis coordinates. A first preset threshold is set, the first score is determined from the pixel coordinate information and its corresponding score, and when the first score is larger than the first preset threshold, the coordinate information of the target label is determined from the coordinates of the pixels corresponding to the first score; those pixels are the pixels of the target label, which improves accuracy.
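The first-score thresholding step can be sketched as follows. The score-map shape and the threshold value of 0.5 are illustrative assumptions; the patent leaves the threshold unspecified:

```python
import numpy as np

def target_coordinates(scores, threshold=0.5):
    """Return (row, col) coordinates whose first score exceeds the threshold.

    `scores` is an HxW map of per-pixel detection scores; the coordinates
    returned are taken as the coordinate information of the target label.
    """
    rows, cols = np.where(scores > threshold)
    return list(zip(rows.tolist(), cols.tolist()))

scores = np.array([[0.1, 0.9],
                   [0.7, 0.2]])
print(target_coordinates(scores))  # → [(0, 1), (1, 0)]
```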
Preferably, in step S12, the image to be detected is cropped according to the coordinate information of the target label to obtain a plurality of pixels corresponding to the coordinate information of the target label; each pixel is calculated with a preset segmentation neural network to obtain a second score corresponding to each pixel, whether the second score is larger than a second preset threshold is judged, and if so, the target label image is determined from the pixels corresponding to the second score; if not, the pixels corresponding to the second score are set to black. The non-label area image corresponding to the target label coordinate information can be removed after the image to be detected is segmented with the preset segmentation neural network, which reduces the data processing load and speeds up label image recognition. The segmentation neural network includes, but is not limited to, a fully convolutional network (FCN) and a U-net network. In a preferred embodiment of the application, using the U-net network as the segmentation neural network greatly reduces the parameters to be trained, and its special U-shaped structure preserves all the information in the image to be detected well; the U-net network can also perform convolution on pictures of any shape and size, so back-panel pictures of any size can conveniently serve as images to be detected.
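The second-score thresholding and "set black" step amounts to masking. A minimal sketch, assuming a grayscale image and a threshold of 0.5 (both illustrative):

```python
import numpy as np

def black_out_background(image, second_scores, threshold=0.5):
    """Keep pixels whose segmentation score exceeds the threshold; zero the rest.

    Setting a pixel value to 0 is the "set black" operation of step S12.
    """
    result = image.copy()
    result[second_scores <= threshold] = 0  # non-label pixels become black
    return result

image = np.array([[10, 20],
                  [30, 40]])
scores = np.array([[0.9, 0.1],
                   [0.2, 0.8]])
print(black_out_background(image, scores))  # → [[10  0] [ 0 40]]
```

For an RGB image the same mask is broadcast across the channel axis.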
Fig. 4 is a schematic diagram of a target label image in a preferred embodiment of the present application. The trained segmentation neural network calculates whether each pixel in the cropped image to be detected belongs to the target label, generating a picture containing only label information. The segmentation processing mainly removes interference from non-target-label areas: after segmentation, a picture containing only label data is obtained, and all pixels of other non-target-label areas are blacked out, which reduces the amount of data processed in the subsequent recognition of the target tag and speeds that recognition up. A second preset threshold is set; pixels whose second score is larger than the second preset threshold are target label pixels, and the pixel values of pixels whose second score is smaller than or equal to the second preset threshold are adjusted to 0 to complete the blacking out and reduce interference.
Preferably, in step S13, barcode recognition processing is performed on the target tag image using a decoding library, wherein the barcode recognition processing includes hard decoding. Here, the barcode in the target tag image may be recognized in a hard-decoded (software-decoded) manner using the decoding library to rapidly recognize the barcode of the target tag.
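The patent does not name the decoding library or the barcode symbology. As a self-contained illustration of the kind of logic a hard decoder runs after locating the bars, the sketch below validates an EAN-13 check digit; the choice of EAN-13 is purely an assumption for the example:

```python
def ean13_check_digit(first12):
    """Compute the EAN-13 check digit from the first 12 digits.

    Weights alternate 1, 3 starting from the leftmost digit.
    """
    total = sum(int(d) * (1 if i % 2 == 0 else 3) for i, d in enumerate(first12))
    return (10 - total % 10) % 10

def ean13_is_valid(code):
    """True when `code` is 13 digits whose last digit matches the checksum."""
    return (len(code) == 13 and code.isdigit()
            and int(code[-1]) == ean13_check_digit(code[:12]))

print(ean13_is_valid("4006381333931"))  # → True
print(ean13_is_valid("4006381333932"))  # → False
```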
Preferably, in step S14, the target label image is input into a preset convolutional neural network to obtain a convolution feature map; feature serialization processing is performed on the convolution feature map to obtain a predicted character sequence; and the predicted character sequence is calculated with a connectionist temporal classification (CTC) algorithm to determine the second recognition result. Here, the preset convolutional neural network includes, but is not limited to, a VGG16 network, a residual neural network (ResNet), and an Inception network. Fig. 5 is a flowchart of character recognition in a preferred embodiment of the present application: the VGG16 network calculates the preprocessed target label image to obtain a new convolution feature map, and the convolution feature map is subjected to feature serialization, that is, dimension transformation, where the preprocessing may be rotation, sharpening, and the like. The VGG16 network requires relatively little computation and performs well, so a new convolution feature map can be obtained quickly and accurately. The dimension-transformed convolution feature map is then calculated through a bidirectional long short-term memory network (BiLSTM) to obtain the predicted character sequence, and the character result of the character recognition processing, that is, the second recognition result, is calculated and determined with the CTC algorithm so as to recognize the label accurately.
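The CTC step can be illustrated with its simplest decoding rule, greedy decoding: collapse repeated per-frame labels, then drop the blank symbol. The blank token below is an arbitrary placeholder, and real systems would decode the BiLSTM's per-frame probability outputs rather than a prepared label string:

```python
BLANK = "-"  # reserved CTC blank symbol (any token not in the alphabet works)

def ctc_greedy_decode(frame_labels):
    """Greedy CTC decoding: merge adjacent repeats, then remove blanks."""
    collapsed = []
    prev = None
    for label in frame_labels:
        if label != prev:       # keep only the first of each repeated run
            collapsed.append(label)
        prev = label
    return "".join(c for c in collapsed if c != BLANK)

# Per-frame best labels for a sequence that should read "2019".
print(ctc_greedy_decode("22--00-11-99"))  # → 2019
```

The blank lets CTC distinguish a genuine double character ("aa" needs a blank between the two a's) from a character that merely spans several frames.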
Preferably, in step S15, the first recognition result is matched according to a preset character length and a preset character format, and if the first recognition result meets the preset character length and the preset character format, the first recognition result is a label recognition result; and matching the second recognition result according to the preset character length and the preset character format, and if the second recognition result meets the preset character length and the preset character format, the second recognition result is a label recognition result. Here, the tag recognition result is determined according to the first recognition result and the second recognition result together, so that the accuracy of tag recognition is improved.
In a preferred embodiment of the present application, as shown in fig. 4, the tag in fig. 4 is identified. The first recognition result is the hard decoding result, whose characters are 201940911492756111; the second recognition result is the character recognition result, whose characters are 20190409114927561115. The hard decoding result and the character recognition result are each matched against the preset character length and the preset year-month-day format: the hard decoding result conforms to neither the preset character length nor the preset year-month-day format, while the character recognition result conforms to both, so the character recognition result is determined to be the tag recognition result, improving the accuracy of tag identification.
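The matching rule in this example can be sketched as follows. The total length of 20 digits and the requirement that the first eight digits form a valid year-month-day are assumptions read off the worked example above; the patent itself only speaks of a preset character length and format:

```python
from datetime import datetime

def matches_label_format(result, length=20):
    """Check a recognition result against a preset length and a YYYYMMDD prefix.

    Hypothetical rule for illustration: exactly `length` digits, with the
    first eight parsing as a valid calendar date.
    """
    if len(result) != length or not result.isdigit():
        return False
    try:
        datetime.strptime(result[:8], "%Y%m%d")  # preset year-month-day format
        return True
    except ValueError:
        return False

print(matches_label_format("201940911492756111"))    # hard-decode result → False
print(matches_label_format("20190409114927561115"))  # character result → True
```

With this rule the hard decoding result is rejected (wrong length) and the character recognition result is accepted, matching the outcome described in the embodiment.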
Further, embodiments of the present application provide a computer readable medium having stored thereon computer readable instructions executable by a processor to implement a method of identifying a tag as described above.
According to yet another aspect of the present application, there is also provided an apparatus for identifying a tag, wherein the apparatus includes:
one or more processors; and
a memory storing computer readable instructions that, when executed, cause the processor to perform the operations of the aforementioned method of identifying a tag.
For example, computer-readable instructions, when executed, cause the one or more processors to: acquiring an image to be detected, and detecting the image to be detected by using a convolutional neural network detection model to determine coordinate information of a target tag; dividing the image to be detected according to the coordinate information of the target label to obtain the target label image; performing bar code recognition processing on the target tag image to obtain a first recognition result; performing character recognition processing on the target tag image by using a character recognition matching model to obtain a second recognition result; and matching the first identification result with the second identification result to determine a tag identification result.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.
It should be noted that the present application may be implemented in software and/or a combination of software and hardware, for example, using Application Specific Integrated Circuits (ASIC), a general purpose computer or any other similar hardware device. In one embodiment, the software programs of the present application may be executed by a processor to implement the steps or functions as described above. Likewise, the software programs of the present application (including associated data structures) may be stored on a computer readable recording medium, such as RAM memory, magnetic or optical drive or diskette and the like. In addition, some steps or functions of the present application may be implemented in hardware, for example, as circuitry that cooperates with the processor to perform various steps or functions.
Furthermore, portions of the present application may be implemented as a computer program product, such as computer program instructions, which when executed by a computer, may invoke or provide methods and/or techniques in accordance with the present application by way of operation of the computer. Program instructions for invoking the methods of the present application may be stored in fixed or removable recording media and/or transmitted via a data stream in a broadcast or other signal bearing medium and/or stored within a working memory of a computer device operating according to the program instructions. An embodiment according to the present application comprises an apparatus comprising a memory for storing computer program instructions and a processor for executing the program instructions, wherein the computer program instructions, when executed by the processor, trigger the apparatus to operate a method and/or a solution according to the embodiments of the present application as described above.
It will be evident to those skilled in the art that the present application is not limited to the details of the foregoing illustrative embodiments, and that the present application may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of the application being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units or means recited in the apparatus claims can also be implemented by means of one unit or means in software or hardware. The terms first, second, etc. are used to denote a name, but not any particular order.

Claims (7)

1. A method of identifying a tag, wherein the method comprises:
acquiring an image to be detected, and detecting the image to be detected by using a convolutional neural network detection model to determine coordinate information of a target tag;
segmenting the image to be detected according to the coordinate information of the target tag, calculating a score for each segmented region of the image to be detected, and determining the pixels belonging to the target tag image according to the scores, so as to obtain the target tag image;
performing barcode recognition processing on the target tag image to obtain a first recognition result;
performing character recognition processing on the target tag image by using a character recognition matching model to obtain a second recognition result;
matching the first recognition result with the second recognition result to determine a tag recognition result;
the detecting the image to be detected by using a convolutional neural network detection model to determine coordinate information of a target tag comprises the following steps:
detecting all pixels of the image to be detected by using the convolutional neural network detection model to obtain a first score corresponding to each piece of coordinate information of the image to be detected;
determining whether the first score is greater than a first preset threshold, and if so, taking the coordinate information corresponding to the first score as the coordinate information of the target tag;
the performing character recognition processing on the target tag image by using the character recognition matching model to obtain a second recognition result comprises the following steps:
inputting the target tag image into a preset convolutional neural network to obtain a convolutional feature map;
performing feature serialization processing on the convolutional feature map through a bidirectional long short-term memory (BiLSTM) network to obtain a predicted character sequence;
and decoding the predicted character sequence by using a connectionist temporal classification (CTC) algorithm to determine the second recognition result.
2. The method of claim 1, wherein prior to detecting the image to be detected using a convolutional neural network detection model to determine coordinate information of a target tag, further comprising:
acquiring a plurality of tag images, and annotating the tag images;
and determining a position-offset iteration parameter according to the annotated images, and determining the convolutional neural network detection model according to the position-offset iteration parameter.
3. The method according to claim 1, wherein segmenting the image to be detected according to the coordinate information of the target tag comprises:
cropping the image to be detected according to the coordinate information of the target tag to obtain a plurality of pixels corresponding to the coordinate information of the target tag;
calculating a second score for each pixel by using a preset segmentation neural network, and determining whether the second score is greater than a second preset threshold; if so, determining the target tag image according to the pixels corresponding to the second score; and if not, setting the pixel corresponding to the second score to black.
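The per-pixel scoring of claim 3 amounts to a mask operation: pixels whose segmentation score exceeds the second threshold are kept, and the rest are set to black. A minimal sketch (the scores, threshold value, and tiny 3×3 "image" below are invented for illustration):

```python
# Keep pixels whose segmentation score exceeds the threshold; black out the rest.
# The scores, threshold, and 3x3 "image" are illustrative assumptions.
SECOND_THRESHOLD = 0.5
BLACK = (0, 0, 0)

def apply_tag_mask(pixels, scores, threshold=SECOND_THRESHOLD):
    """pixels and scores are same-shaped 2D lists; returns the masked image."""
    return [
        [px if score > threshold else BLACK
         for px, score in zip(pixel_row, score_row)]
        for pixel_row, score_row in zip(pixels, scores)
    ]

pixels = [[(255, 255, 255)] * 3 for _ in range(3)]
scores = [[0.9, 0.2, 0.8],
          [0.1, 0.95, 0.1],
          [0.7, 0.3, 0.6]]
masked = apply_tag_mask(pixels, scores)
```

Blacking out low-score pixels rather than discarding them keeps the image dimensions intact for the downstream barcode and character recognition steps.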
4. The method of claim 1, wherein performing barcode recognition processing on the target tag image comprises:
and performing barcode recognition processing on the target tag image by using a decoding package, wherein the barcode recognition processing comprises hard decoding.
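Claim 4 leaves the decoding package unspecified (zbar-style libraries are a common choice for hard decoding). One concrete, library-free piece of such a pipeline is validating a decoded EAN-13 digit string against its check digit; the sketch below is an illustration of that validation step, not the patent's implementation:

```python
# Validate a decoded EAN-13 barcode string against its check digit.
# A decoding library would produce the digit string; this post-check is an
# illustrative example, not the patent's own code.
def ean13_is_valid(code: str) -> bool:
    if len(code) != 13 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    # Weights alternate 1, 3 across the first 12 digits (leftmost weight 1).
    checksum = sum(d * (1 if i % 2 == 0 else 3)
                   for i, d in enumerate(digits[:12]))
    return (10 - checksum % 10) % 10 == digits[12]

print(ean13_is_valid("4006381333931"))  # a well-known valid EAN-13 -> True
```

A check like this catches misreads before the decoded string is passed on to the result-matching step.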
5. The method of claim 1, wherein matching the first recognition result with the second recognition result, determining a tag recognition result, comprises:
matching the first recognition result against a preset character length and a preset character format, and if the first recognition result satisfies the preset character length and the preset character format, taking the first recognition result as the tag recognition result;
and matching the second recognition result against the preset character length and the preset character format, and if the second recognition result satisfies the preset character length and the preset character format, taking the second recognition result as the tag recognition result.
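The matching step of claim 5 amounts to validating each candidate against an expected length and character pattern and accepting whichever passes. A sketch under assumed values (13 characters, uppercase alphanumerics — the actual preset length and format are not disclosed in the claims):

```python
import re

# Assumed label format: 13 uppercase alphanumerics. The real preset length
# and character format are not given in the claims; these are illustrative.
PRESET_LENGTH = 13
PRESET_FORMAT = re.compile(r"[A-Z0-9]+")

def pick_tag_result(first_result, second_result):
    """Return the first candidate satisfying the preset length and format."""
    for candidate in (first_result, second_result):
        if candidate is not None and len(candidate) == PRESET_LENGTH \
                and PRESET_FORMAT.fullmatch(candidate):
            return candidate
    return None  # neither the barcode nor the OCR result matched

print(pick_tag_result(None, "SN12345678901"))  # -> "SN12345678901"
```

Checking the barcode result first reflects its typically higher reliability; the OCR result serves as the fallback when the barcode is damaged or unreadable.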
6. An apparatus for identifying a tag, wherein the apparatus comprises:
one or more processors; and
a memory storing computer readable instructions that, when executed, cause the one or more processors to perform the operations of the method of any one of claims 1 to 5.
7. A computer readable medium having stored thereon computer readable instructions executable by a processor to implement the method of any of claims 1 to 5.
CN201911032918.3A 2019-10-28 2019-10-28 Label identification method and device Active CN110827247B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911032918.3A CN110827247B (en) 2019-10-28 2019-10-28 Label identification method and device


Publications (2)

Publication Number Publication Date
CN110827247A CN110827247A (en) 2020-02-21
CN110827247B true CN110827247B (en) 2024-03-15

Family

ID=69551323

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911032918.3A Active CN110827247B (en) 2019-10-28 2019-10-28 Label identification method and device

Country Status (1)

Country Link
CN (1) CN110827247B (en)

Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20210106493A (en) 2018-12-19 2021-08-30 에코에이티엠, 엘엘씨 Systems and methods for the sale and/or purchase of mobile phones and other electronic devices
KR20210125526A (en) 2019-02-12 2021-10-18 에코에이티엠, 엘엘씨 Connector Carrier for Electronic Device Kiosk
CN211956539U (en) 2019-02-18 2020-11-17 埃科亚特姆公司 System for evaluating the condition of an electronic device
CN111651571B (en) * 2020-05-19 2023-10-17 腾讯科技(深圳)有限公司 Conversation realization method, device, equipment and storage medium based on man-machine cooperation
US11922467B2 (en) 2020-08-17 2024-03-05 ecoATM, Inc. Evaluating an electronic device using optical character recognition
CN112052696B (en) * 2020-08-31 2023-05-02 中冶赛迪信息技术(重庆)有限公司 Bar product warehouse-out label identification method, device and equipment based on machine vision
CN112634201B (en) * 2020-12-02 2023-12-05 歌尔股份有限公司 Target detection method and device and electronic equipment
CN112560767A (en) * 2020-12-24 2021-03-26 南方电网深圳数字电网研究院有限公司 Document signature identification method and device and computer readable storage medium
CN112733568B (en) * 2021-01-21 2024-04-19 北京京东振世信息技术有限公司 One-dimensional bar code recognition method, device, equipment and storage medium
CN113298357A (en) * 2021-04-29 2021-08-24 福建宏泰智能工业互联网有限公司 Quality data processing method, device and system
CN113567450A (en) * 2021-07-20 2021-10-29 上汽通用五菱汽车股份有限公司 Engine label information visual detection system and method
CN114519858B (en) * 2022-02-16 2023-09-05 北京百度网讯科技有限公司 Document image recognition method and device, storage medium and electronic equipment
CN114972880A (en) * 2022-06-15 2022-08-30 卡奥斯工业智能研究院(青岛)有限公司 Label identification method and device, electronic equipment and storage medium
CN115937868A (en) * 2022-12-12 2023-04-07 江苏中烟工业有限责任公司 Cigarette packet label information matching method and device, electronic equipment and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1179752A (en) * 1995-03-31 1998-04-22 基维软件程序有限公司 Machine-readable label
CN105760886A (en) * 2016-02-23 2016-07-13 北京联合大学 Image scene multi-object segmentation method based on target identification and saliency detection
CN105787482A (en) * 2016-02-26 2016-07-20 华北电力大学 Specific target outline image segmentation method based on depth convolution neural network
CN107103613A (en) * 2017-03-28 2017-08-29 深圳市未来媒体技术研究院 A kind of three-dimension gesture Attitude estimation method
WO2018099194A1 (en) * 2016-11-30 2018-06-07 杭州海康威视数字技术股份有限公司 Character identification method and device
CN108133233A (en) * 2017-12-18 2018-06-08 中山大学 A kind of multi-tag image-recognizing method and device
CN108416412A (en) * 2018-01-23 2018-08-17 浙江瀚镪自动化设备股份有限公司 A kind of logistics compound key recognition methods based on multitask deep learning
CN108898045A (en) * 2018-04-23 2018-11-27 杭州电子科技大学 The multi-tag image pre-processing method of gesture identification based on deep learning
CN108920992A (en) * 2018-08-08 2018-11-30 长沙理工大学 A kind of positioning and recognition methods of the medical label bar code based on deep learning
CN109117862A (en) * 2018-06-29 2019-01-01 北京达佳互联信息技术有限公司 Image tag recognition methods, device and server
CN110084150A (en) * 2019-04-09 2019-08-02 山东师范大学 A kind of Automated Classification of White Blood Cells method and system based on deep learning



Similar Documents

Publication Publication Date Title
CN110827247B (en) Label identification method and device
CN109117848B (en) Text line character recognition method, device, medium and electronic equipment
CN110378165B (en) Two-dimensional code identification method, two-dimensional code positioning identification model establishment method and device
US9171204B2 (en) Method of perspective correction for devanagari text
US11080839B2 (en) System and method for training a damage identification model
CN104217203B (en) Complex background card face information identifying method and system
US20210056715A1 (en) Object tracking method, object tracking device, electronic device and storage medium
CN110210400B (en) Table file detection method and equipment
CN110348393B (en) Vehicle feature extraction model training method, vehicle identification method and equipment
CN111191649A (en) Method and equipment for identifying bent multi-line text image
CN112580707A (en) Image recognition method, device, equipment and storage medium
CN110348392B (en) Vehicle matching method and device
CN111178290A (en) Signature verification method and device
CN111507332A (en) Vehicle VIN code detection method and equipment
CN110688902A (en) Method and device for detecting vehicle area in parking space
CN111178282A (en) Road traffic speed limit sign positioning and identifying method and device
CN110728193B (en) Method and device for detecting richness characteristics of face image
CN112001200A (en) Identification code identification method, device, equipment, storage medium and system
CN112052702A (en) Method and device for identifying two-dimensional code
CN111797832B (en) Automatic generation method and system for image region of interest and image processing method
CN116580390A (en) Price tag content acquisition method, price tag content acquisition device, storage medium and computer equipment
CN115953744A (en) Vehicle identification tracking method based on deep learning
CN112883973A (en) License plate recognition method and device, electronic equipment and computer storage medium
CN114417965A (en) Training method of image processing model, target detection method and related device
CN111611986A (en) Focus text extraction and identification method and system based on finger interaction

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: Room 1101-1103, No. 433, Songhu Road, Yangpu District, Shanghai
Applicant after: Shanghai wanwansheng Environmental Protection Technology Group Co.,Ltd.
Address before: Room 1101-1103, No. 433, Songhu Road, Yangpu District, Shanghai
Applicant before: SHANGHAI YUEYI NETWORK INFORMATION TECHNOLOGY Co.,Ltd.

GR01 Patent grant