CN115238723A - Local vertex detection method and device - Google Patents

Local vertex detection method and device

Info

Publication number
CN115238723A
CN115238723A (application CN202210756755.9A)
Authority
CN
China
Prior art keywords
vertex
bar code
local
image
predicted
Prior art date
Legal status
Pending
Application number
CN202210756755.9A
Other languages
Chinese (zh)
Inventor
钟华堡
张帆
沈亚锋
谢立寅
Current Assignee
Xiamen Hualian Electronics Co Ltd
Original Assignee
Xiamen Hualian Electronics Co Ltd
Application filed by Xiamen Hualian Electronics Co Ltd filed Critical Xiamen Hualian Electronics Co Ltd
Priority to CN202210756755.9A
Publication of CN115238723A

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06K GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K7/00 Methods or arrangements for sensing record carriers, e.g. for reading patterns
    • G06K7/10 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
    • G06K7/14 Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation using light without selection of wavelength, e.g. sensing reflected white light
    • G06K7/1404 Methods for optical code recognition
    • G06K7/1408 Methods for optical code recognition, the method being specifically adapted for the type of code
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/77 Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
    • G06V10/774 Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Electromagnetism (AREA)
  • Toxicology (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a local vertex detection method and device. The method comprises: inputting an image to be detected; detecting the image with a pre-trained deep-learning local vertex detection model to output a plurality of predicted vertices, where the prediction information for each vertex includes the vertex type, a confidence score, the vertex coordinates, and a vertex vector; screening the predicted vertices to filter out vertices whose vertex vectors point in the same direction and whose coordinates are closer than a preset distance; matching diagonal vertex pairs that represent barcodes from the predicted vertex information and computing the oblique box formed by each pair; and rotating and cropping the oblique box to obtain a coarsely positioned barcode image, which is decoded to recognize the corresponding barcode. Because the local region around a barcode vertex has a simple graphical structure, the invention simplifies the recognition task, and, by building on deep-learning detection, adapts to varied background interference and improves barcode recognition accuracy.

Description

Local vertex detection method and device
Technical Field
The invention relates to the field of image recognition, in particular to a local vertex detection method and a local vertex detection device.
Background
The term bar code is a general designation covering both one-dimensional bar codes and two-dimensional codes.
The conventional barcode recognition process includes: image binarization, searching for finder patterns for accurate positioning, distortion correction, reading the encoded information, and decoding. However, under illumination changes and complex background interference, both binarization and positioning perform poorly.
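As a hedged illustration (not code from the patent), a fixed global threshold, the simplest form of the binarization step above, shows why this stage is fragile: applying the same threshold to the same bar pattern under an illumination gradient destroys the pattern at the dark end.

```python
# Illustrative sketch (not from the patent): fixed-threshold binarization
# breaks down when a brightness gradient is applied to the same bar pattern.

def binarize(row, threshold=128):
    """Map each pixel to 0 (bar) or 1 (background) with one global threshold."""
    return [1 if p >= threshold else 0 for p in row]

# A 1-D "barcode" row: dark bars (40) on a light background (220).
flat = [220, 220, 40, 40, 220, 40, 220, 220, 40, 40]

# The same row under a left-to-right illumination gain.
lit = [int(p * (0.4 + 0.06 * i)) for i, p in enumerate(flat)]

print(binarize(flat))  # recovers the bar pattern
print(binarize(lit))   # the dark end is crushed to 0; the pattern is lost
```

The specific pixel values and the gain factors are invented for demonstration; real scanners mitigate this with adaptive thresholding, which is exactly the weakness the deep-learning approach below sidesteps.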
To address these problems, the prior art provides two-dimensional code detection algorithms based on deep convolutional neural networks. The basic flow is: coarsely locate the barcode with deep learning, then recognize and read it with a traditional detection method. Compared with purely traditional methods, deep-learning-based positioning detection is more accurate and the barcode recognition rate improves markedly; such methods are currently applied mainly on smartphones with capable chips.
However, deep-learning-based methods are computationally heavy and difficult to deploy on low-end embedded devices. To reduce computation and increase detection speed, the prior art accelerates inference by simplifying the network structure and by pruning, quantization, and similar means; even so, such methods remain much slower than traditional detection. In addition, existing deep-learning barcode detection methods detect the whole barcode; because barcodes come in many styles (a QR code alone has 40 versions), recognition is harder, and a larger, more complex deep-learning model with a large amount of training data is required to achieve good detection.
Disclosure of Invention
The invention mainly solves the technical problem of providing a local vertex detection method and device that simplify barcode recognition and improve its accuracy.
In order to solve the technical problems, one technical scheme adopted by the invention is a local vertex detection method comprising the steps of: inputting an image to be detected, the image containing a plurality of barcodes of multiple types; detecting the image with a pre-trained deep-learning local vertex detection model to output a plurality of predicted vertices, the prediction information of each vertex including the vertex type c, a confidence score p, the vertex coordinates (x, y), and a unit vertex vector v = (vx, vy);
screening the predicted vertices to filter out vertices with the same vertex vector direction and a coordinate distance smaller than a preset value; matching diagonal vertex pairs representing barcodes from the predicted vertex information and computing the oblique box formed by each pair; and rotating and cropping the oblique box to obtain a coarsely positioned barcode image, which is decoded to recognize the corresponding barcode.
Wherein, screening the predicted vertices to filter out vertices with the same vertex vector direction and a coordinate distance smaller than a preset value specifically comprises: sorting a vertex input list by the confidence scores p of the predicted vertices; moving the vertex with the highest confidence score into a vertex output list and deleting it from the vertex input list; computing the Euclidean distance between each vertex remaining in the input list and the highest-scoring vertex; computing the vector inner product between each vertex in the input list and the highest-scoring vertex; selecting, according to the calculation results, the vertices whose Euclidean distance is smaller than a threshold T1 and whose vector inner product is greater than a threshold T2, and deleting the selected vertices from the vertex input list; and repeating the above steps until the vertex input list is empty.
Matching diagonal vertex pairs representing barcodes from the predicted vertex information and computing the oblique box formed by each pair specifically comprises: traversing the vertex output list to obtain candidate vertex pairs according to screening conditions; computing the corresponding barcode oblique box from the prediction information of each obtained vertex pair and judging whether its width and height satisfy preset conditions; if so, the vertex pair holds and the computed oblique box is output; otherwise, the vertex pair is rejected. The screening conditions select a vertex pair that simultaneously satisfies the following conditions (1) and (2): condition (1): c_i = c_j, where i and j are the numbers of the two vertices; condition (2): the inner product of the vertex vectors satisfies v_i · v_j < T3, i.e. the two vertex vectors are antiparallel.
The preset conditions are as follows: w > T4 and h > T4 and T5 < w/h < T6; where w is the width of the oblique box, h is its height, the threshold T4 is the minimum barcode width or height, T5 is the minimum aspect ratio of the barcode, and T6 is the maximum aspect ratio of the barcode.
The "calculating the corresponding barcode oblique box according to the obtained prediction information of the vertex pair" specifically includes: letting the barcode oblique box be (cx, cy, w, h, theta), where (cx, cy) is the center coordinate of the box and theta is its counterclockwise rotation angle; the box enclosed by vertex i and vertex j is computed according to equation (1):
cx = (x_i + x_j)/2, cy = (y_i + y_j)/2, theta = atan2(v_yi, v_xi), w = (x_j - x_i)·cos(theta) + (y_j - y_i)·sin(theta), h = (y_j - y_i)·cos(theta) - (x_j - x_i)·sin(theta)   (1)
the training method of the deep learning local vertex detection model specifically comprises the following steps: collecting a plurality of barcode images to form a data set; detecting the bar code image to identify a bar code, and outputting a bar code type c and 4 bar code vertex coordinates (x, y); automatically labeling the diagonal vertexes of all the bar codes in the image; wherein, the labeling content comprises: vertex type, vertex coordinates, vertex vector; manually marking the bar code image which fails to be identified, and manually removing the bar code image which cannot obtain the vertex coordinates; and training the deep learning local vertex detection model according to the labeled data set.
Wherein the vertex vector is calculated using the following equation (2):
v_i = (x_l - x_i, y_l - y_i) / ||(x_l - x_i, y_l - y_i)||, v_j = (x_k - x_j, y_k - y_j) / ||(x_k - x_j, y_k - y_j)||   (2)
wherein i, j, k and l respectively denote the lower-left, upper-right, upper-left and lower-right vertices.
Another technical solution adopted by the present invention provides a local vertex detection apparatus comprising a storage unit, an image acquisition unit, and a processing unit. The image acquisition unit acquires the image to be detected, which contains a plurality of barcodes of multiple types. The processing unit includes a vertex prediction unit for detecting the image to be detected with a pre-trained deep-learning local vertex detection model to output a plurality of predicted vertices, the prediction information of each vertex comprising: vertex type c, confidence score p, vertex coordinates (x, y), and unit vertex vector v = (vx, vy).
The vertex screening unit is used for screening the plurality of predicted vertexes to filter vertexes which have the same vertex vector direction and have the coordinate position distance smaller than a preset value; the oblique frame calculating unit is used for matching out diagonal vertex pairs representing the bar codes according to the predicted vertex information and calculating an oblique frame formed by the vertex pairs; the cutting unit is used for rotating and cutting the inclined frame to obtain a coarse positioning image of the bar code; and the decoding unit is used for decoding the coarse positioning image so as to identify a corresponding bar code.
Wherein the vertex screening unit includes: a sorting module for sorting the vertex input list according to the confidence scores p of the predicted vertices; a vertex processing module for moving the vertex with the highest confidence score into a vertex output list and deleting it from the vertex input list; and a calculation module for computing the Euclidean distances and the vector inner products between the vertices in the vertex input list and the highest-scoring vertex. The vertex processing module is further used for selecting, according to the calculation results, the vertices whose Euclidean distance is smaller than the threshold T1 and whose vector inner product is greater than the threshold T2, and removing the selected vertices from the vertex input list.
Wherein the oblique box calculation unit is configured to: traverse the vertex output list to obtain vertex pairs according to screening conditions; compute the corresponding barcode oblique box from the prediction information of each obtained vertex pair and judge whether its width and height satisfy preset conditions; if so, the vertex pair holds; otherwise, the vertex pair is rejected. The screening conditions select a vertex pair that simultaneously satisfies the following conditions (1) and (2): condition (1): c_i = c_j, where i and j are the numbers of the two vertices; condition (2): the inner product of the vertex vectors satisfies v_i · v_j < T3, i.e. the two vertex vectors are antiparallel.
The preset conditions are as follows: w > T4 and h > T4 and T5 < w/h < T6; where w represents the width of the oblique box, h represents its height, the threshold T4 is the minimum barcode width or height, T5 is the minimum aspect ratio of the barcode, and T6 is the maximum aspect ratio of the barcode.
The oblique box calculation unit matches the diagonal vertex pair representing the barcode according to the predicted vertex information as follows: the box enclosed by vertex i and vertex j is computed according to the following equation (1):
cx = (x_i + x_j)/2, cy = (y_i + y_j)/2, theta = atan2(v_yi, v_xi), w = (x_j - x_i)·cos(theta) + (y_j - y_i)·sin(theta), h = (y_j - y_i)·cos(theta) - (x_j - x_i)·sin(theta)   (1)
the bar code oblique frame is represented as (cx, cy, w, h, theta), (cx, cy) represents the central coordinate of the oblique frame, (w, h) represents the width value and the height value of the oblique frame, and theta represents the anticlockwise rotation angle of the oblique frame.
The apparatus further comprises a model training unit for training the deep-learning local vertex detection model. The model training unit comprises: a storage module for collecting a plurality of barcode images to form a data set; a data annotation module for detecting each barcode image to recognize the barcode, outputting the barcode type c and the 4 barcode vertex coordinates (x, y), and automatically labeling the diagonal vertices of all barcodes in the image, the labeling content comprising the vertex type, vertex coordinates, and vertex vector; and a model training module for training the deep-learning local vertex detection model on the labeled data set.
Wherein the data annotation module is further configured to calculate the vertex vector using the following equation (2):
v_i = (x_l - x_i, y_l - y_i) / ||(x_l - x_i, y_l - y_i)||, v_j = (x_k - x_j, y_k - y_j) / ||(x_k - x_j, y_k - y_j)||   (2)
wherein i, j, k and l respectively denote the lower-left, upper-right, upper-left and lower-right vertices.
The invention provides a local vertex detection method and device. A pre-trained deep-learning local vertex detection model predicts multiple vertices in the image to be detected; vertices with the same vertex vector direction and a coordinate distance smaller than a preset value are filtered out as invalid; diagonal vertex pairs capable of representing a barcode region are matched from the remaining vertices, so that the barcode region is predicted and coarsely positioned by the oblique box computed from each diagonal vertex pair; finally, the coarsely positioned image is rotated, cropped, and decoded to recognize the corresponding barcode. Even when the image to be detected contains many barcodes of multiple types, multiple vertex pairs are screened from the predicted vertex information and their corresponding oblique boxes computed, so every barcode is predicted. Because the local region around a barcode vertex has a simple graphical structure, recognition is greatly simplified; meanwhile, the deep-learning detection technology, combined with filtering invalid vertices and matching valid vertex pairs, adapts to varied background interference and improves barcode recognition accuracy.
Drawings
FIG. 1 is a flow chart illustrating a local vertex detection method according to an embodiment of the present invention;
FIG. 2 is a schematic diagram of a vertex output by the present invention, which detects the image to be detected by using a pre-trained deep learning local vertex detection model;
FIG. 3 is a schematic flow chart of the method for filtering out the vertices with the same vector direction and coordinate position distance smaller than a predetermined value, shown in FIG. 1, by filtering the plurality of predicted vertices;
FIG. 4 is a flow chart of the method for matching diagonal vertex pairs representing barcodes according to the predicted vertex information, calculating the inclined frame formed by the vertex pairs and identifying multiple barcodes, which is shown in FIG. 1;
FIG. 5 is a schematic diagram of the invention matching diagonal vertex pairs representing barcodes according to predicted vertex information and calculating the slant boxes formed by the vertex pairs;
FIG. 6 is a flowchart illustrating a method for training a deep learning local vertex detection model according to an embodiment of the present invention;
FIG. 7 is a functional block diagram of a local vertex detection apparatus according to an embodiment of the present invention;
FIG. 8 is a functional block diagram of the vertex screening unit shown in FIG. 7;
FIG. 9 is a functional structure diagram of the model training unit shown in FIG. 7.
Detailed Description
In order to explain the technical contents, structural features, objects and effects of the present invention in detail, the present invention will be explained in detail with reference to the accompanying drawings and examples.
Referring to fig. 1, a schematic flow chart of a local vertex detection method according to an embodiment of the present invention is shown, where the method includes the following steps:
and S11, inputting an image to be detected.
The image to be detected contains a plurality of barcodes of multiple types.
And S12, detecting the image to be detected by using a pre-trained deep learning local vertex detection model so as to output a plurality of predicted vertices.
Wherein the prediction information of each vertex comprises: vertex type c, confidence score p, vertex coordinates (x, y), and unit vertex vector v = (vx, vy).
In particular, the activation function for the vertex vector values is the tanh function, and the output is normalized to a unit vector. Since the vector components can be positive or negative, tanh, commonly used in deep learning, maps each value into [-1, 1].
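This output step can be sketched as follows (a minimal sketch with assumed names, not code from the patent): the two raw network outputs are squashed with tanh and the resulting pair is renormalized to unit length.

```python
import math

def vertex_vector(raw_vx, raw_vy):
    """Squash the raw network outputs into [-1, 1] with tanh, then
    normalize the pair to a unit vector, as described above."""
    vx, vy = math.tanh(raw_vx), math.tanh(raw_vy)
    norm = math.hypot(vx, vy)
    return (vx / norm, vy / norm)

vx, vy = vertex_vector(2.0, -0.5)
print(vx, vy)               # components of the predicted unit vector
print(math.hypot(vx, vy))   # ≈ 1.0 (unit length)
```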
Please refer to fig. 2, which is a schematic diagram of the vertices output by detecting the image to be detected with the pre-trained deep-learning local vertex detection model. The predicted vertex coordinates do not coincide exactly with the true barcode vertices: the region formed by the predicted vertex positions is larger than the true barcode region. The benefit of this treatment is that none of the real barcode content is lost when the barcode's oblique box region is computed later.
Let the included angle between the bottom edge of the barcode and the x axis be theta; then the normalized unit vertex vectors of the lower-left and upper-right corners of the barcode are (cos theta, sin theta) and (-cos theta, -sin theta), respectively.
In the embodiment of the invention, diagonal vertices are screened from the multiple vertices predicted by the model; any diagonal vertex pair can be selected to determine the barcode region box. In this embodiment, because the lower-right corner of a QR code is easily confused when classifying the code system, and the QR code is the most common two-dimensional code, the chosen diagonal pair is the lower-left vertex and the upper-right vertex.
As mentioned above, the unit vector is oriented from the vertex along the bottom or top edge of the barcode.
And S13, screening the plurality of predicted vertexes to filter vertexes with the same vertex vector direction and the coordinate position distance smaller than a preset value.
In this embodiment, the predicted vertices are screened with a non-maximum suppression algorithm based on vertex distance and direction, filtering out same-direction vertices that lie too close together.
Please refer to fig. 3, which is a flowchart illustrating a method for filtering out vertices having the same vector direction and a coordinate position distance smaller than a predetermined value by filtering the plurality of predicted vertices shown in fig. 1.
Step S13, namely, screening the plurality of predicted vertexes to filter the vertexes with the same vertex vector direction and the coordinate position distance smaller than a preset value, and specifically comprising the following steps:
step S131, sorting the vertex input list according to the confidence scores p of the plurality of predicted vertices.
Step S132, selecting the vertex with the highest confidence score to be added into the vertex output list, and deleting the vertex with the highest confidence score from the vertex input list.
Step S133, calculating the euclidean distance between the vertex in the vertex input list and the vertex with the highest confidence score.
Step S134, calculating a vector inner product of a vertex in the vertex input list and a vertex with the highest confidence score.
Step S135, selecting, according to the calculation results, the vertices whose Euclidean distance is smaller than the threshold T1 and whose vector inner product is greater than the threshold T2, and deleting the selected vertices from the vertex input list.
Wherein the threshold T1 is set according to the pixel size a barcode normally occupies, e.g. T1 = 50. The threshold T2 applies to the inner product of unit vectors, whose value lies in [-1, 1]; since an inner product equal to 1 indicates that the vectors are parallel and point the same way, T2 can be set to T2 = 0.9.
Step S136, judging whether the vertex input list is empty; if yes, ending the process; otherwise, return to step S131.
As described above, steps S131 to S136 are repeatedly executed until the vertex input list is empty, so that the screening of the predicted vertices is completed, and the predicted vertices obtained by the screening are all stored in the vertex output list.
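The screening of steps S131 to S136 can be sketched as follows (a minimal Python version; the dict-based vertex layout and the example values T1 = 50 and T2 = 0.9 are assumptions based on the description above).

```python
import math

def screen_vertices(vertices, t1=50.0, t2=0.9):
    """Distance-and-direction non-maximum suppression over predicted vertices.

    Each vertex is a dict with keys: 'p' (confidence score), 'xy'
    (coordinates), 'v' (unit vertex vector). Keeps the highest-scoring
    vertex of each cluster of nearby, same-direction vertices.
    """
    pending = sorted(vertices, key=lambda v: v['p'], reverse=True)
    output = []
    while pending:
        best = pending.pop(0)            # highest remaining confidence
        output.append(best)
        survivors = []
        for v in pending:
            dist = math.dist(v['xy'], best['xy'])
            dot = v['v'][0] * best['v'][0] + v['v'][1] * best['v'][1]
            # Drop vertices that are both close (dist < T1) and
            # same-direction (inner product > T2); keep all others.
            if not (dist < t1 and dot > t2):
                survivors.append(v)
        pending = survivors
    return output

verts = [
    {'p': 0.9, 'xy': (100, 100), 'v': (1.0, 0.0)},
    {'p': 0.6, 'xy': (105, 102), 'v': (1.0, 0.0)},   # near-duplicate: removed
    {'p': 0.8, 'xy': (400, 300), 'v': (-1.0, 0.0)},  # far away: kept
]
print(len(screen_vertices(verts)))  # 2
```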
And S14, matching diagonal vertex pairs representing the bar codes according to the predicted vertex information, and calculating an inclined frame formed by the vertex pairs.
Please refer to fig. 4, which is a flowchart illustrating a method for matching diagonal vertex pairs representing bar codes according to the predicted vertex information and calculating a slant frame formed by the vertex pairs, as shown in fig. 1.
Step S14, namely, matching out a diagonal vertex pair for representing the bar code according to the predicted vertex information, and calculating an inclined frame formed by the vertex pair, specifically comprising the following steps:
step S141, traverse the vertex output list to filter and obtain vertex pairs according to the filtering condition.
Specifically, the screening conditions are as follows: screening out the vertex pairs according to the following conditions (1) and (2) simultaneously:
condition (1): c_i = c_j; where c is the code system and i and j are the numbers of the two vertices; that is, vertex i and vertex j belong to the same code system;
condition (2): the inner product of the vertex vectors satisfies v_i · v_j < T3,
i.e. the vertex vectors of i and j are antiparallel. Since a unit-vector inner product of -1 indicates the antiparallel state, the threshold T3 can be set to T3 = -0.9.
Step S142, calculating a corresponding bar code inclined frame according to the acquired prediction information of the vertex pair, and judging whether the length value and the width value of the bar code inclined frame meet preset conditions or not; if yes, the vertex pair is established, and the step S15 is executed; otherwise, the vertex pair is not established, and the process is ended.
"calculating the corresponding bar code oblique frame according to the obtained prediction information of the vertex pair", specifically includes:
let the barcode oblique box be (cx, cy, w, h, theta); wherein (cx, cy) is the center coordinate of the box, (w, h) are its width and height, and theta is its counterclockwise rotation angle;
the box enclosed by vertex i and vertex j is computed according to the following equation (1):
cx = (x_i + x_j)/2, cy = (y_i + y_j)/2, theta = atan2(v_yi, v_xi), w = (x_j - x_i)·cos(theta) + (y_j - y_i)·sin(theta), h = (y_j - y_i)·cos(theta) - (x_j - x_i)·sin(theta)   (1)
further, the preset conditions are as follows: w is a>T 4 And h is>T 4 And T 5 <w/h<T 6 (ii) a Wherein the threshold value T 4 Is the minimum width or height value, T, of the bar code 5 Is the minimum aspect ratio, T, of the bar code 6 The maximum aspect ratio of the barcode.
Please refer to fig. 5, which is a schematic diagram of an oblique frame formed by matching diagonal vertex pairs representing a barcode according to predicted vertex information and calculating the vertex pairs according to the present invention.
Matching the diagonal vertex pair representing the QR code from the predicted vertex information, namely the lower-left vertex and the upper-right vertex: when the QR code image is detected with the pre-trained deep-learning local vertex detection model, an upper-left vertex may also be detected and output, but it is screened out by the conditions of step S14 and therefore does not affect the computation of the QR code's oblique box.
And S15, rotating and cutting the inclined frame to obtain a coarse positioning image of the bar code, and decoding the coarse positioning image to identify the corresponding bar code.
The "rotating and cutting the oblique frame to obtain the coarse positioning image of the barcode" and the "decoding the coarse positioning image to identify the corresponding barcode" both adopt the prior art, and are not described herein again.
According to the local vertex detection method, a pre-trained deep-learning local vertex detection model performs multi-vertex prediction on the image to be detected; vertices with the same vertex vector direction and a coordinate distance smaller than a preset value are filtered out as invalid; diagonal vertex pairs capable of representing a barcode region are matched from the remaining vertices, so that the barcode region is predicted and coarsely positioned by the oblique box computed from each diagonal vertex pair; finally, the coarsely positioned image is rotated, cropped, and decoded to recognize the corresponding barcode. Even when the image to be detected contains many barcodes of multiple types, multiple vertex pairs are screened from the predicted vertex information and their corresponding oblique boxes computed, so every barcode is predicted. Because the local region around a barcode vertex has a simple graphical structure, recognition is greatly simplified; meanwhile, the deep-learning detection technology, combined with filtering invalid vertices and matching valid vertex pairs, adapts to varied background interference and improves barcode recognition accuracy.
In other embodiments, for example when detecting objects whose local vertices carry distinctive identifying features, such as license plates or certificates, an equivalent flow of the local vertex detection method of the embodiments of the present invention may be obtained by straightforward adaptation.
Fig. 6 is a schematic flow chart of a training method for a deep learning local vertex detection model according to an embodiment of the present invention.
The training method of the deep learning local vertex detection model is a barcode vertex data set labeling and training method in which the labels are augmented with vertex vector information. It specifically comprises the following steps:
Step S21: collect a plurality of barcode images to form a data set.
Step S22: detect the barcode image to identify the barcode, and output the barcode type c and the 4 barcode vertex coordinates (x, y).
Detecting and identifying the barcode image is prior art; for example, open-source detection algorithms such as ZXing, ZBar, and OpenCV may be used. Taking ZXing as an example, the dmdetector.scan() interface returns the 4 vertex coordinates of a DataMatrix code; for a PDF417 code, the DetectBar() interface returns the 4 vertex coordinates of the barcode and the 4 vertex coordinates of its data region.
Step S23: automatically label the diagonal vertices of all barcodes in the image. The labeled content includes the vertex type, the vertex coordinates, and the vertex vector. (Labeling means recording the desired information, saving it, and associating it with the image when saving.)
As described above, a plurality of barcodes are detected in step S22; the known information for each barcode includes the vertex type (barcode type c) and the 4 vertex coordinates.
Assuming that the lower-left, upper-right, upper-left, and lower-right vertices are denoted i, j, k, and l respectively, the vertex vector of each vertex is the unit vector toward its diagonally opposite vertex, calculated according to the following formula:

$$\vec v_i = \frac{(x_j - x_i,\; y_j - y_i)}{\left\lVert (x_j - x_i,\; y_j - y_i) \right\rVert},\qquad \vec v_j = -\vec v_i,\qquad \vec v_k = \frac{(x_l - x_k,\; y_l - y_k)}{\left\lVert (x_l - x_k,\; y_l - y_k) \right\rVert},\qquad \vec v_l = -\vec v_k$$
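The vertex-vector labeling can be sketched as follows. This is a hedged sketch: the formula in the original is reproduced only as an image, so the definition used here — each vertex's vector is the unit vector pointing at its diagonally opposite corner — is an assumption chosen to be consistent with the antiparallel matching condition used later in step S14.

```python
import math

def vertex_vectors(p_i, p_j, p_k, p_l):
    """Unit vertex vectors for corners i (lower-left), j (upper-right),
    k (upper-left), l (lower-right): each points at its diagonal partner."""
    def unit(src, dst):
        dx, dy = dst[0] - src[0], dst[1] - src[1]
        n = math.hypot(dx, dy)
        return (dx / n, dy / n)
    return {
        "i": unit(p_i, p_j),  # lower-left  -> upper-right
        "j": unit(p_j, p_i),  # upper-right -> lower-left
        "k": unit(p_k, p_l),  # upper-left  -> lower-right
        "l": unit(p_l, p_k),  # lower-right -> upper-left
    }
```

With this definition the vectors of each diagonal pair have an inner product of -1, exactly the situation the matching threshold T₃ tests for.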
Because the model detects based on the features of a local vertex region, other vertices whose local features closely resemble those of the diagonal vertices to be labeled (for example, the lower-left and upper-right corners) will also be detected by the deep learning model. If such vertices were left unlabeled, training would struggle to converge, so they should be labeled as well. Vertices with similar local features do not impair the trained model at detection time, because they can be filtered out by the screening condition of step S14.
Step S24: manually label, or eliminate, barcode images for which recognition failed or vertex coordinates could not be obtained.
For example, the desired vertex coordinate position and barcode type may be manually recorded via a computer graphical visualization interface.
Step S25: train the deep learning local vertex detection model on the labeled data set.
For example, the model may be built on an open-source SSD or YOLO algorithm. Existing deep learning object detection models represent a detection target as (c, cx, cy, w, h), where c is the object class, (cx, cy) the object center coordinates, and (w, h) the width and height of the object's rectangular box. In this embodiment, the target class is set to the vertex type, the target center coordinates are replaced with the vertex coordinates, and the rectangular box width and height are replaced with the vertex vector. The target class and center coordinates are trained with standard deep learning object detection techniques; the vertex coordinates and vertex vector are trained by regression through a tanh activation function; and the vertex type is classified with a Softmax classifier. The deep learning detection techniques, the tanh activation function and its regression method, and the Softmax classifier are prior art and are not described further here.
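The adapted target representation might be decoded from a raw network output as in the following sketch. This is illustrative only: the output layout (class logits followed by tanh-regressed x, y, vx, vy) and the normalization to image pixels are assumptions, not the patent's exact model.

```python
import numpy as np

def decode_prediction(raw, num_types, img_w, img_h):
    """Decode one raw head output into (type, score, x, y, vx, vy).

    Layout assumed: [class logits ... | tx, ty, tvx, tvy], with tanh
    squashing the regression outputs into (-1, 1).
    """
    logits, reg = raw[:num_types], np.tanh(raw[num_types:])
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                       # Softmax over vertex types
    c = int(probs.argmax())
    x = (reg[0] + 1) / 2 * img_w               # map (-1, 1) to pixel coordinates
    y = (reg[1] + 1) / 2 * img_h
    vx, vy = reg[2], reg[3]
    n = np.hypot(vx, vy) or 1.0                # normalise the vertex vector
    return c, float(probs[c]), float(x), float(y), float(vx / n), float(vy / n)
```

The key design point matches the text above: the (w, h) slots of a standard detector become the vertex vector (vx, vy), while classification stays a Softmax over vertex types.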
With the local vertex detection device of the embodiments of the present invention, training of the deep learning local vertex detection model performs detection and classification only on the local vertices of the barcode; because the local region around a barcode vertex has a simple graphical structure, recognition is greatly simplified. Compared with existing deep learning techniques, the same detection accuracy can therefore be reached with a more compact model, which suits embedded devices with limited computing power: the network has fewer parameters, demands less memory, and runs faster once simplified. Because a single barcode can provide several locators or vertices, collecting training data is easier and sufficient data is more readily obtained; and because recognition is easier, the amount of training data required can be reduced.
Fig. 7 is a functional structure diagram of a local vertex detection apparatus according to an embodiment of the present invention. The apparatus 30 comprises a storage unit 31, an image acquisition unit 32, a processing unit 33 and a model training unit 34.
The storage unit 31 is configured to store the deep learning model parameters, raw data, raw format information, and the application program itself. The medium of the storage unit 31 may be, for example, a floppy disk, a hard disk, a CD-ROM, or a semiconductor memory.
The image obtaining unit 32 is configured to obtain an image to be detected. The image to be detected comprises a plurality of multi-type bar codes.
The processing unit 33 includes a vertex prediction unit 331, a vertex screening unit 332, an oblique frame calculating unit 333, a cropping unit 334, and a decoding unit 335.
The vertex prediction unit 331 is configured to detect the image to be detected with the pre-trained deep learning local vertex detection model and output a plurality of predicted vertices. The prediction information for each vertex comprises: vertex type c, confidence score p, vertex coordinates (x, y), and vertex vector $\vec v = (v_x, v_y)$.
The vertex screening unit 332 is configured to screen the predicted vertices, filtering out vertices whose vector directions are the same and whose coordinate distance is smaller than a preset value.
Further, referring to fig. 8, the vertex screening unit 332 includes:
a ranking module 3321 configured to rank the vertex input list according to the confidence scores p of the multiple predicted vertices;
the vertex processing module 3322 is configured to select a vertex with the highest confidence score to be added to a vertex output list, and delete the vertex with the highest confidence score from the vertex input list;
a calculating module 3323, configured to calculate euclidean distances between vertices in the vertex input list and vertices with the highest confidence scores, and calculate vector inner products between vertices in the vertex input list and vertices with the highest confidence scores;
The vertex processing module 3322 is further configured to select, according to the calculation results, the vertices whose Euclidean distance is smaller than the threshold T₁ and whose vector inner product is greater than the threshold T₂, and to remove the selected vertices from the vertex input list.
The threshold T₁ is set according to the pixel size a barcode typically occupies, e.g. T₁ = 50. The threshold T₂ is an inner product of unit vectors, so its value lies in [-1, 1]; since an inner product equal to 1 indicates vectors that are parallel and point in the same direction, T₂ can be set to T₂ = 0.9.
The vertex screening unit 332 screens the plurality of predicted vertices as described above until the vertex input list is empty, and stores the screened predicted vertices in the vertex output list.
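The screening loop performed by modules 3321–3323 can be sketched as follows. The vertex tuple layout (c, p, x, y, vx, vy) and the default threshold values are illustrative assumptions.

```python
import numpy as np

def screen_vertices(vertices, t1=50.0, t2=0.9):
    """Greedy screening of predicted vertices.

    Each vertex is (type c, score p, x, y, vx, vy) with (vx, vy) a unit
    vector.  A vertex whose vector points in the same direction as a
    higher-scoring vertex (inner product > t2) and whose position lies
    within t1 pixels of it is treated as a duplicate and removed.
    """
    pending = sorted(vertices, key=lambda v: v[1], reverse=True)  # sort by score p
    kept = []
    while pending:
        best = pending.pop(0)          # highest remaining confidence
        kept.append(best)
        bx, by, bvx, bvy = best[2], best[3], best[4], best[5]
        survivors = []
        for v in pending:
            dist = np.hypot(v[2] - bx, v[3] - by)   # Euclidean distance
            dot = v[4] * bvx + v[5] * bvy           # inner product of unit vectors
            if dist < t1 and dot > t2:
                continue                            # duplicate: drop it
            survivors.append(v)
        pending = survivors
    return kept
```

The structure mirrors non-maximum suppression, with the rectangle-overlap test replaced by the distance-plus-direction test of the patent.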
The oblique frame calculating unit 333 is configured to match a diagonal vertex pair representing a barcode according to the predicted vertex information, and calculate an oblique frame formed by the vertex pair.
Specifically, the oblique frame calculating unit 333 is configured to:
traversing the vertex output list to screen and acquire vertex pairs according to screening conditions; and
calculate the corresponding barcode oblique frame from the prediction information of the acquired vertex pair, and judge whether the length and width of the barcode oblique frame satisfy preset conditions: if so, the vertex pair holds; otherwise, the vertex pair does not hold.
The screening conditions are that a vertex pair is selected when the following conditions (1) and (2) are satisfied simultaneously:

Condition (1): c_i = c_j, where c is the code type and i and j are the two vertex numbers; that is, vertex i and vertex j belong to the same code type.

Condition (2): $\vec v_i \cdot \vec v_j < T_3$, i.e. $v_{ix} v_{jx} + v_{iy} v_{jy} < T_3$; that is, the vertex vectors of i and j are antiparallel. Since a vector inner product of -1 indicates antiparallel vectors, the threshold T₃ can be set to T₃ = -0.9.

The preset conditions are: w > T₄ and h > T₄ and T₅ < w/h < T₆, where w is the width of the oblique frame, h is its height, the threshold T₄ is the minimum width or height of a barcode, T₅ is the minimum aspect ratio of a barcode, and T₆ is the maximum aspect ratio of a barcode.
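These pair-matching checks can be sketched as simple predicates. The dict layout and the default threshold values (T₃ = -0.9, and placeholder values for T₄, T₅, T₆) are illustrative assumptions.

```python
def is_diagonal_pair(vi, vj, t3=-0.9):
    """Conditions (1) and (2): same code type and antiparallel vertex vectors.

    Each vertex is a dict with keys: c (code type), x, y, vx, vy,
    where (vx, vy) is a unit vector.
    """
    same_type = vi["c"] == vj["c"]                       # condition (1)
    inner = vi["vx"] * vj["vx"] + vi["vy"] * vj["vy"]    # condition (2)
    return same_type and inner < t3

def frame_size_ok(w, h, t4=20.0, t5=0.2, t6=5.0):
    """Preset condition on the computed oblique frame: minimum size and
    aspect-ratio bounds."""
    return w > t4 and h > t4 and t5 < w / h < t6
```

A candidate pair must pass is_diagonal_pair first; the oblique frame computed from it is then accepted only if frame_size_ok also holds.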
Further, the oblique frame calculating unit 333 matches a diagonal vertex pair representing a barcode according to the predicted vertex information, specifically as follows:
the bounding box consisting of vertex i and vertex j is computed according to equation (1) as follows:
[Formula (1), reproduced as an image in the original, computes (cx, cy, w, h, θ) from the coordinates and vertex vectors of vertices i and j.]
The barcode oblique frame is represented as (cx, cy, w, h, θ), where (cx, cy) are the center coordinates of the frame, (w, h) are its width and height, and θ is its counterclockwise rotation angle.
The cropping unit 334 is configured to rotate and crop the oblique frame to obtain a coarse positioning image of the barcode.
The decoding unit 335 is configured to decode the coarse positioning image to identify a corresponding barcode.
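The rotate-and-crop step can be sketched with pure-NumPy inverse mapping, as below. This is a simplified nearest-neighbour sketch; a real implementation would typically use an image library's rotation/warp routines, and the sign convention chosen here for θ is an assumption.

```python
import numpy as np

def crop_oblique_frame(image, cx, cy, w, h, theta_deg):
    """Extract the oblique frame (cx, cy, w, h, theta) as an upright patch.

    For every pixel of the w x h output, compute its source position by
    rotating the patch coordinates by theta around the frame centre and
    sample the nearest input pixel (zeros outside the image).
    """
    th = np.deg2rad(theta_deg)
    cos_t, sin_t = np.cos(th), np.sin(th)
    w_i, h_i = int(round(w)), int(round(h))
    out = np.zeros((h_i, w_i), dtype=image.dtype)
    for oy in range(h_i):
        for ox in range(w_i):
            dx, dy = ox - w / 2.0, oy - h / 2.0   # patch coords about the centre
            sx = cx + dx * cos_t + dy * sin_t     # rotate into image space
            sy = cy - dx * sin_t + dy * cos_t
            ix, iy = int(round(sx)), int(round(sy))
            if 0 <= iy < image.shape[0] and 0 <= ix < image.shape[1]:
                out[oy, ox] = image[iy, ix]       # nearest-neighbour sample
    return out
```

With θ = 0 this degenerates to an ordinary axis-aligned crop centred at (cx, cy); the decoding unit then receives the upright patch.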
In other embodiments, for example when detecting objects whose local vertices carry distinctive identifying features, such as license plates and certificates, an equivalent flow of the local vertex detection method of the embodiments of the present invention may be obtained by straightforward adaptation.
Further, referring to fig. 9, the model training unit 34 is configured to train the deep learning local vertex detection model.
The model training unit 34 includes:
the storage module 341 is configured to collect a plurality of barcode images to form a data set.
The data labeling module 342 is configured to detect the barcode image to identify the barcode, output the barcode type c and the 4 barcode vertex coordinates (x, y), and automatically label the diagonal vertices of all barcodes in the image; the labeled content includes the vertex type, vertex coordinates, and vertex vector.
And the model training module 343 is configured to train the deep learning local vertex detection model according to the labeled data set.
Barcode images for which recognition fails or vertex coordinates cannot be obtained are labeled manually or removed. For example, the desired vertex coordinate positions and barcode types may be recorded manually through a computer graphical visualization interface.
For example, the model may be built on an open-source SSD or YOLO algorithm. Existing deep learning object detection models represent a detection target as (c, cx, cy, w, h), where c is the object class, (cx, cy) the object center coordinates, and (w, h) the width and height of the object's rectangular box. In this embodiment, the target class is set to the vertex type, the target center coordinates are replaced with the vertex coordinates, and the rectangular box width and height are replaced with the vertex vector. The target class and center coordinates are trained with standard deep learning object detection techniques; the vertex coordinates and vertex vector are trained by regression through a tanh activation function; and the vertex type is classified with a Softmax classifier. The deep learning detection techniques, the tanh activation function and its regression method, and the Softmax classifier are prior art and are not described further here.
Further, the data labeling module 342 is configured to calculate the vertex vectors using the following formula (2), in which i, j, k, and l denote the lower-left, upper-right, upper-left, and lower-right vertices respectively, each vertex's vector being the unit vector toward its diagonally opposite vertex:

$$\vec v_i = \frac{(x_j - x_i,\; y_j - y_i)}{\left\lVert (x_j - x_i,\; y_j - y_i) \right\rVert},\qquad \vec v_j = -\vec v_i,\qquad \vec v_k = \frac{(x_l - x_k,\; y_l - y_k)}{\left\lVert (x_l - x_k,\; y_l - y_k) \right\rVert},\qquad \vec v_l = -\vec v_k$$
With the deep learning local vertex detection model training, based on the local vertex detection device of the embodiments of the present invention, detection and classification are performed only on the local vertices of the barcode; because the local region around a barcode vertex has a simple graphical structure, recognition is greatly simplified. Compared with existing deep learning techniques, the same detection accuracy can therefore be reached with a more compact model, which suits embedded devices with limited computing power: the network has fewer parameters, demands less memory, and runs faster once simplified. Because a single barcode can provide several locators or vertices, collecting training data is easier and sufficient data is more readily obtained; and because recognition is easier, the amount of training data required can be reduced.
According to the local vertex detection method and device provided by the embodiments of the present invention, multi-vertex prediction is performed on the image to be detected with a pre-trained deep learning local vertex detection model; vertices whose vectors point in the same direction and whose coordinate distance is smaller than a preset value are filtered out, removing invalid vertices; diagonal vertex pairs that can represent a barcode region are matched from the screened vertices, so that the barcode region is predicted and coarsely positioned by the oblique frame computed from each pair; finally, the coarsely positioned image is rotated, cropped, and decoded to identify the corresponding barcode. When the image to be detected contains multiple barcodes of various types, several groups of vertex pairs are screened from the predicted vertex information and the corresponding oblique frames are computed to predict the barcodes; because the local region around a barcode vertex has a simple graphical structure, recognition is greatly simplified. At the same time, building on deep learning detection and combining invalid-vertex filtering with valid-pair matching, the method tolerates a wide range of background interference and improves the accuracy of barcode recognition.
In the several embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules or units is only one type of logical functional division, and other divisions may be realized in practice, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a management server, or a network device) or a processor to execute all or part of the steps of the method according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only an embodiment of the present invention, and not intended to limit the scope of the present invention, and all modifications of equivalent structures and equivalent processes performed by the present specification and drawings, or directly or indirectly applied to other related technical fields, are included in the scope of the present invention.

Claims (12)

1. A local vertex detection method is characterized by comprising the following steps:
inputting an image to be detected; the image to be detected comprises a plurality of multi-type bar codes;
detecting the image to be detected with a pre-trained deep learning local vertex detection model to output a plurality of predicted vertices, the prediction information of each vertex comprising: vertex type c, confidence score p, vertex coordinates (x, y), and vertex vector $\vec v = (v_x, v_y)$;
Screening the plurality of predicted vertexes to filter vertexes with the same vertex vector direction and coordinate position distance smaller than a preset value;
matching diagonal vertex pairs for representing the bar codes according to the predicted vertex information, and calculating an inclined frame formed by the vertex pairs; and
and rotating and cutting the inclined frame to obtain a coarse positioning image of the bar code, and decoding the coarse positioning image to identify the corresponding bar code.
2. The local vertex detection method of claim 1, wherein the step of filtering the plurality of predicted vertices to filter vertices having the same vertex vector direction and a coordinate position distance smaller than a predetermined value comprises:
sorting the vertex input list according to the confidence scores p of the multiple predicted vertices;
selecting a vertex with the highest confidence score to be added into a vertex output list, and deleting the vertex with the highest confidence score from the vertex input list;
calculating Euclidean distances between the vertexes in the vertex input list and the vertex with the highest confidence score;
calculating a vector inner product of a vertex in the vertex input list and a vertex with a highest confidence score; and
selecting, according to the calculation results, the vertices whose Euclidean distance is smaller than a threshold T₁ and whose vector inner product is greater than a threshold T₂, and deleting the selected vertices from the vertex input list;
the steps described above are repeated until the vertex input list is empty.
3. The local vertex detection method according to claim 2, wherein matching diagonal vertex pairs representing barcodes according to predicted vertex information, and calculating a slant frame formed by the vertex pairs specifically comprise:
traversing the vertex output list to screen and obtain vertex pairs according to screening conditions; and
calculating the corresponding barcode oblique frame from the prediction information of the acquired vertex pair, and judging whether the length and width of the barcode oblique frame satisfy preset conditions; if so, the vertex pair holds and the correspondingly calculated oblique frame is output; otherwise, the vertex pair does not hold;
wherein the screening conditions are that a vertex pair is selected when the following conditions (1) and (2) are satisfied simultaneously:

condition (1): c_i = c_j, wherein i and j are the two vertex numbers respectively;

condition (2): $\vec v_i \cdot \vec v_j < T_3$, i.e. $v_{ix} v_{jx} + v_{iy} v_{jy} < T_3$;

and the preset conditions are: w > T₄ and h > T₄ and T₅ < w/h < T₆, wherein w is the width of the oblique frame, h is the height of the oblique frame, the threshold T₄ is the minimum width or height of a barcode, T₅ is the minimum aspect ratio of a barcode, and T₆ is the maximum aspect ratio of a barcode.
4. The local vertex detection method according to claim 3, wherein calculating a corresponding barcode slant frame according to the obtained prediction information of the vertex pair specifically includes:
let the barcode oblique frame be represented as (cx, cy, w, h, θ), wherein (cx, cy) represents the center coordinates of the oblique frame and θ represents the counterclockwise rotation angle of the oblique frame;
the bounding box consisting of vertex i and vertex j is computed according to equation (1) as follows:
[Formula (1), reproduced as an image in the original, computes (cx, cy, w, h, θ) from the coordinates and vertex vectors of vertices i and j.]
5. The local vertex detection method according to claim 1, wherein the training method of the deep learning local vertex detection model specifically comprises:
collecting a plurality of barcode images to form a data set;
detecting the bar code image to identify a bar code, and outputting a bar code type c and 4 bar code vertex coordinates (x, y);
automatically labeling diagonal vertexes of all barcodes in the image; wherein, the labeling content comprises: vertex type, vertex coordinates and vertex vector;
manually labeling barcode images for which recognition fails, and manually removing barcode images for which vertex coordinates cannot be obtained; and
and training the deep learning local vertex detection model according to the labeled data set.
6. The local vertex detection method according to claim 5, wherein the vertex vectors are calculated using the following formula (2), in which i, j, k, and l denote the lower-left, upper-right, upper-left, and lower-right vertices respectively:

$$\vec v_i = \frac{(x_j - x_i,\; y_j - y_i)}{\left\lVert (x_j - x_i,\; y_j - y_i) \right\rVert},\qquad \vec v_j = -\vec v_i,\qquad \vec v_k = \frac{(x_l - x_k,\; y_l - y_k)}{\left\lVert (x_l - x_k,\; y_l - y_k) \right\rVert},\qquad \vec v_l = -\vec v_k$$
7. A local vertex detection device comprises a storage unit, an image acquisition unit and a processing unit; the image acquisition unit is used for acquiring an image to be detected; the image to be detected comprises a plurality of multi-type bar codes; characterized in that said processing unit comprises:
the vertex prediction unit is used for detecting the image to be detected by utilizing a pre-trained deep learning local vertex detection model so as to output a plurality of predicted vertexes; wherein the prediction information of each vertex comprises: vertex type c, confidence score p, vertex coordinates (x, y), vertex vector V ρ = (V) x ,v y );
The vertex screening unit is used for screening the plurality of predicted vertexes to filter vertexes which have the same vertex vector direction and have the coordinate position distance smaller than a preset value;
the oblique frame calculation unit is used for matching out diagonal vertex pairs representing the bar codes according to the predicted vertex information and calculating an oblique frame formed by the vertex pairs;
the cutting unit is used for rotating and cutting the inclined frame to obtain a coarse positioning image of the bar code; and
and the decoding unit is used for decoding the coarse positioning image so as to identify a corresponding bar code.
8. The local vertex detecting device according to claim 7, wherein the vertex screening unit includes:
a ranking module for ranking the vertex input list according to the confidence scores p of the plurality of predicted vertices;
the vertex processing module is used for selecting a vertex with the highest confidence score to be added into a vertex output list and deleting the vertex with the highest confidence score from the vertex input list;
a calculation module, configured to calculate euclidean distances between vertices in the vertex input list and vertices with the highest confidence scores, and calculate vector inner products between vertices in the vertex input list and vertices with the highest confidence scores;
the vertex processing module is also used for selecting the Euclidean distance smaller than the threshold value T according to the calculation result 1 And the vector inner product is greater than the threshold value T 2 And removing the selected vertex from the vertex input list.
9. The local vertex detection apparatus of claim 8, wherein the oblique frame calculation unit is configured to:
traversing the vertex output list to screen and obtain vertex pairs according to screening conditions; and
calculating a corresponding bar code oblique frame according to the acquired prediction information of the vertex pair, and judging whether the length value and the width value of the bar code oblique frame meet preset conditions or not; if yes, the vertex pair is established; otherwise, the vertex pair does not hold;
wherein the screening conditions are that a vertex pair is selected when the following conditions (1) and (2) are satisfied simultaneously:

condition (1): c_i = c_j, wherein i and j are the two vertex numbers respectively;

condition (2): $\vec v_i \cdot \vec v_j < T_3$, i.e. $v_{ix} v_{jx} + v_{iy} v_{jy} < T_3$;

and the preset conditions are: w > T₄ and h > T₄ and T₅ < w/h < T₆, wherein w is the width of the oblique frame, h is the height of the oblique frame, the threshold T₄ is the minimum width or height of a barcode, T₅ is the minimum aspect ratio of a barcode, and T₆ is the maximum aspect ratio of a barcode.
10. The local vertex detection device of claim 9, wherein the oblique frame calculation unit matches a diagonal vertex pair representing a barcode according to the predicted vertex information, specifically as follows:
the bounding box consisting of vertex i and vertex j is computed according to the following equation (1):
[Formula (1), reproduced as an image in the original, computes (cx, cy, w, h, θ) from the coordinates and vertex vectors of vertices i and j.]
the barcode oblique frame is represented as (cx, cy, w, h, θ), wherein (cx, cy) represents the center coordinates of the oblique frame, (w, h) represents its width and height, and θ represents its counterclockwise rotation angle.
11. The local vertex detection apparatus according to claim 7, further comprising a model training unit configured to train the deep learning local vertex detection model;
the model training unit comprises:
and the storage module is used for collecting a plurality of bar code images to form a data set.
The data annotation module is used for detecting the bar code image to identify a bar code and outputting a bar code type c and 4 bar code vertex coordinates (x, y); automatically labeling the diagonal vertexes of all the bar codes in the image; wherein, the labeling content includes: vertex type, vertex coordinates, vertex vector;
and the model training module trains the deep learning local vertex detection model according to the labeled data set.
12. The local vertex detection apparatus of claim 11, wherein the data annotation module is further configured to: the vertex vector is calculated using the following equation (2):
Figure FDA0003719777740000051
wherein i, j, k and l respectively represent the top points of the lower left corner, the upper right corner, the upper left corner and the lower right corner.
CN202210756755.9A 2022-06-29 2022-06-29 Local vertex detection method and device Pending CN115238723A (en)


Publication: CN115238723A, published 2022-10-25.




