CN114548132A - Bar code detection model training method and device and bar code detection method and device - Google Patents
- Publication number: CN114548132A
- Application number: CN202210163904.0A
- Authority
- CN
- China
- Prior art keywords
- bar code
- training
- barcode
- model
- sample data
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06K—GRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
- G06K7/00—Methods or arrangements for sensing record carriers, e.g. for reading patterns
- G06K7/10—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation
- G06K7/10544—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum
- G06K7/10821—Methods or arrangements for sensing record carriers, e.g. for reading patterns by electromagnetic radiation, e.g. optical sensing; by corpuscular radiation by scanning of the records by radiation in the optical part of the electromagnetic spectrum further details of bar or optical code scanning devices
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
Abstract
The invention discloses a bar code detection model training method and device and a bar code detection method and device. A training module trains a bar code detection model by passing each batch of training data sequentially through feature extraction, feature fusion, target prediction, loss calculation and parameter updating stages, gradually adjusting the model parameters. In the feature extraction stage, a second convolutional neural network is obtained from the trained first convolutional neural network by reducing the number of convolutional layers in its network structure and the number of convolution kernels in each convolutional layer; reducing the parameter count and computation of the model improves both model training efficiency and bar code detection speed. Reducing the feature sizes in the feature fusion stage further cuts the computational load of the model, and a cross-layer feature fusion scheme improves the model's ability to detect bar codes of different image scales and aspect ratios.
Description
Technical Field
The invention belongs to the technical field of machine vision, and particularly relates to a bar code detection model training method and device and a bar code detection method and device.
Background
Bar codes include one-dimensional codes and two-dimensional codes. They can efficiently record a large amount of product information and are therefore widely used in many fields. Unlike consumer bar codes, industrial bar codes are usually captured in complex environments and suffer from distortion, blurring, abrasion, low contrast, missing quiet zones, severe noise interference and similar problems, so conventional detection and positioning methods cannot meet the requirements.
Detection and positioning methods based on deep learning outperform conventional methods in robustness and accuracy; however, their detection models have large parameter counts and computational costs, which lowers model training efficiency and bar code detection speed. In addition, conventional deep-learning detection models demand substantial computing resources, making them expensive to use, and they perform poorly when detecting two-dimensional and one-dimensional codes of different image scales and aspect ratios.
Disclosure of Invention
The invention provides a bar code detection model training method and device and a bar code detection method and device, which can solve, or at least partially solve, the above technical problems.
Therefore, the invention adopts the following technical scheme:
in a first aspect, a training method for a barcode detection model is provided, including:
based on training sample data and target frames of the bar code graphics in the training sample data, gradually adjusting model parameters for each batch of training sample data by passing it sequentially through feature extraction, feature fusion, target prediction, loss calculation and parameter updating stages, and training to obtain a bar code detection model;
in the feature extraction stage, a second convolutional neural network is obtained by reducing the number of convolutional layers in the network structure of the trained first convolutional neural network, reducing the number of convolution kernels in each convolutional layer, and adding cross-layer feature-fusion connections.
Optionally, in the feature fusion stage, the feature map input into the second convolutional neural network is reduced to a first size, a second size and a third size respectively, where either the step from the first size to the second size or the step from the second size to the third size is a span (cross-scale) step, and the feature map is extracted from the training sample data in the feature extraction stage.
Optionally, in the target prediction stage, the data obtained in the feature fusion stage is used to predict, for the bar code graphic in the training sample data, the abscissa and ordinate of its centre point, its width, its height and its rotation angle.
Optionally, the method for acquiring training sample data includes:
acquiring a bar code picture containing a bar code graphic, wherein the bar code graphic comprises a one-dimensional code and/or a two-dimensional code, and the training sample data comprises the bar code picture.
Optionally, the bar code picture contains bar code graphics at a plurality of different angles.
In a second aspect, a barcode detection method is provided, including:
acquiring a target picture to be detected;
and detecting the bar code in the target picture based on the detection model of the bar code trained by the method, and if the bar code exists, outputting the position and the type of the bar code.
In a third aspect, a training apparatus for a barcode detection model is provided, including:
the training module is used for gradually adjusting model parameters for each batch of training sample data, based on the training sample data and target frames of the bar code graphics in the training sample data, by passing each batch sequentially through feature extraction, feature fusion, target prediction, loss calculation and parameter updating stages, and for training to obtain a bar code detection model;
in the feature extraction stage, a second convolutional neural network is obtained by reducing the number of convolutional layers in the network structure of the trained first convolutional neural network, reducing the number of convolution kernels in each convolutional layer, and adding cross-layer feature-fusion connections.
In a fourth aspect, there is provided a barcode detection apparatus comprising:
the image acquisition module is used for acquiring a target image to be detected;
and the bar code detection module is used for detecting the bar code in the target picture based on the bar code detection model obtained by the training of the method, and outputting the position and the type of the bar code if the bar code exists.
Compared with the prior art, the embodiment of the invention has the following beneficial effects:
In the training method and apparatus for the bar code detection model provided by the embodiment of the invention, the training module trains the bar code detection model by passing each batch of training data sequentially through feature extraction, feature fusion, target prediction, loss calculation and parameter updating stages, gradually adjusting the model parameters. In the feature extraction stage, a second convolutional neural network is obtained by reducing the number of convolutional layers in the network structure of the trained first convolutional neural network (a convolutional neural network in the prior art) and the number of convolution kernels in each convolutional layer; reducing the parameter count and computation of the model improves both model training efficiency and bar code detection speed. Reducing the feature sizes in the feature fusion stage further cuts the computational load of the model, and the designed cross-layer feature fusion scheme improves the model's ability to detect bar codes of different image scales and aspect ratios.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed for describing the embodiments or the prior art are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without creative effort.
The structures, ratios and sizes shown in this specification are used only to complement the disclosed content so that those skilled in the art can understand and read the invention; they do not limit the conditions under which the invention can be implemented and therefore carry no technical significance in themselves. Any structural modification, change of ratio or adjustment of size that does not affect the functions and purposes of the invention still falls within the scope of the disclosed content.
Fig. 1 is a network structure diagram of a detection model provided in this embodiment;
FIG. 2 is a flowchart of a method of detecting a barcode according to the present embodiment;
FIG. 3 is a block diagram of the barcode detection apparatus according to the present embodiment;
FIG. 4 is a representation of a one-dimensional or two-dimensional code target box;
FIG. 5 is a schematic diagram of a data set;
FIG. 6 is a plurality of normal target pictures;
fig. 7 to 16 are schematic diagrams illustrating detection effects of different target pictures.
Detailed Description
In order to make the objects, features and advantages of the present invention more obvious and understandable, the technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Apparently, the embodiments described below are only some, not all, of the embodiments of the present invention. All other embodiments obtained by a person skilled in the art from these embodiments without creative effort shall fall within the protection scope of the present invention.
Please refer to fig. 1 to 16.
This embodiment provides a training method for a bar code detection model and a bar code detection method, which can reduce the parameter count and computational load of the model, improve model training efficiency and bar code detection speed, and strengthen the model's ability to detect bar codes of different image scales and aspect ratios.
Specifically, the training method of the barcode detection model comprises the following steps:
acquiring a bar code picture containing a bar code graphic, wherein the bar code graphic comprises a one-dimensional code and/or a two-dimensional code, and the training sample data comprises the bar code picture; it should be noted that the training sample data in this embodiment may also be obtained by other existing techniques;
and training to obtain a detection model of the bar code based on the training sample data and the target frame of the bar code graph in the training sample data.
Optionally, a bar code picture may contain one or more one-dimensional codes, one or more two-dimensional codes, or both.
Alternatively, the barcode pattern in the barcode picture may be rotated or not rotated. Specifically, after training sample data is collected, a rectangular box of the barcode graph may be labeled, and then each barcode graph is processed according to the labeling information to generate a data set, as shown in fig. 5.
Further, as shown in fig. 1, the network structure of the detection model consists of three parts: feature extraction, feature fusion and target prediction. Training the bar code detection model therefore passes through a feature extraction stage, a feature fusion stage and a target prediction stage.
Specifically, in the feature extraction stage, the second convolutional neural network is obtained by reducing the number of convolutional layers in the network structure of the trained first convolutional neural network (a convolutional neural network in the prior art) and the number of convolution kernels in each convolutional layer, which reduces the parameter count and computation of the model and improves data-processing efficiency.
Further, in the feature fusion stage, the feature map input into the second convolutional neural network is reduced to a first size, a second size and a third size respectively, where the step from the second size to the third size is a cross-scale connection, and the feature map is extracted from the training sample data in the feature extraction stage. For example, the first size (in width or height) is 1/16 of the target picture, the second size 1/32 and the third size 1/128. The step from the second size to the third size skips the intermediate 1/64 scale, which is why it is called a span (cross-scale) step. By reducing the feature-map sizes and adding cross-scale feature fusion, this embodiment not only reduces the computational load of the model but can also detect two-dimensional and one-dimensional codes of different image scales and aspect ratios.
Further, as shown in fig. 4, in the target prediction stage the data obtained in the feature fusion stage is used to predict, for the bar code graphic in the training sample data, the abscissa x and ordinate y of its centre point, its width w, its height h and its rotation angle θ. Compared with the prior art, adding the rotation angle θ to the description of the predicted target frame allows the QR Code, Data Matrix and one-dimensional codes commonly used in industry to be located and recognised at any angle.
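As an illustration, a predicted (x, y, w, h, θ) target frame can be converted to its four corner points for drawing or overlap computation. The sketch below is not from the patent; the angle convention (θ in radians, counter-clockwise about the centre) is an assumption, since the patent only states that θ is added to the usual (x, y, w, h).

```python
import math

def rotated_box_corners(x, y, w, h, theta):
    """Corners of a box centred at (x, y), rotated by theta radians (assumed CCW)."""
    c, s = math.cos(theta), math.sin(theta)
    half = [(-w / 2, -h / 2), (w / 2, -h / 2), (w / 2, h / 2), (-w / 2, h / 2)]
    # Rotate each half-extent offset, then translate to the centre (x, y).
    return [(x + dx * c - dy * s, y + dx * s + dy * c) for dx, dy in half]

corners = rotated_box_corners(100, 100, 40, 20, 0.0)
print(corners)  # [(80.0, 90.0), (120.0, 90.0), (120.0, 110.0), (80.0, 110.0)]
```

With θ = 0 this degenerates to the ordinary axis-aligned (x, y, w, h) frame, which is why the extension is backward compatible with the prior-art description.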
Specifically, in the target prediction stage, cluster analysis is performed on the bar code data, 3 heads are designed in the network structure, each head has 4 anchor frames, and the rotation angle θ is added to the description of the predicted target frame.
In this embodiment, the parameter count of the model is calculated as:

Params = Σ_{l=1}^{N} K × K × C_{l-1} × C_l

and the computational load of the model as:

FLOPs = Σ_{l=1}^{N} F_w × F_h × K × K × C_{l-1} × C_l

where C_l denotes the number of convolution kernels of the l-th convolutional layer; K denotes the convolution kernel size (a kernel is K × K, e.g. 3 × 3); N denotes the number of convolutional layers in the network; F_w denotes the width of the feature map; and F_h denotes the height of the feature map.
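The two formulas above can be checked with a small script. The layer channel counts, kernel size and feature-map size below are illustrative assumptions, not values from the patent, and biases are ignored.

```python
def conv_params(channels, k):
    """Params = sum over layers of K*K*C_{l-1}*C_l (biases ignored)."""
    return sum(k * k * c_in * c_out
               for c_in, c_out in zip(channels[:-1], channels[1:]))

def conv_flops(channels, k, fw, fh):
    """FLOPs = sum over layers of F_w*F_h*K*K*C_{l-1}*C_l,
    assuming every layer produces an fw x fh feature map."""
    return fw * fh * conv_params(channels, k)

channels = [3, 16, 32, 64]   # input channels plus 3 conv layers (assumed)
print(conv_params(channels, 3))           # 9*(3*16 + 16*32 + 32*64) = 23472
print(conv_flops(channels, 3, 160, 160))  # 23472 * 160 * 160
```

Both quantities scale with the product C_{l-1} × C_l, which is why halving the kernel count of every layer cuts parameters and computation roughly fourfold.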
As a specific application scenario of this embodiment, suppose the existing detection model is the prior-art YOLOv5 and the input target picture to be detected is w × h (where w is the width and h the height of the target picture). This embodiment reduces the feature sizes of the existing model from w/8 × h/8 (meaning both w and h are reduced to one eighth of the original), w/16 × h/16 and w/32 × h/32 to w/16 × h/16, w/32 × h/32 and w/128 × h/128 respectively. The network jumps directly from w/32 × h/32 to w/128 × h/128, i.e. cross-scale feature fusion is added; the w/128 × h/128 scale suits the detection of some large-scale one-dimensional and two-dimensional codes. The purpose of this step is to reduce the computational load of the model while strengthening its ability to detect two-dimensional and one-dimensional codes of different scales and aspect ratios.
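The effect of the stride change can be shown numerically. The strides (8, 16, 32 before; 16, 32, 128 after) come from the text above, while the 640 × 640 input size is an assumption for illustration.

```python
def grid_sizes(w, h, strides):
    """Feature-map grid (width, height) at each stride for a w x h input."""
    return [(w // s, h // s) for s in strides]

w, h = 640, 640
print(grid_sizes(w, h, [8, 16, 32]))    # [(80, 80), (40, 40), (20, 20)]
print(grid_sizes(w, h, [16, 32, 128]))  # [(40, 40), (20, 20), (5, 5)]
```

The total number of grid cells drops from 80×80 + 40×40 + 20×20 = 8400 to 40×40 + 20×20 + 5×5 = 2025, which is where much of the claimed computation saving at the prediction heads comes from.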
It should be understood that this embodiment chooses to skip w/64 × h/64, but one could instead skip w/32 × h/32 and keep w/64 × h/64; that variant likewise strengthens the detection of two-dimensional and one-dimensional codes of different scales and aspect ratios.
Further, this embodiment extends the description of the existing predicted target frame from (x, y, w, h) to (x, y, w, h, θ), i.e. adds the image rotation angle θ, so that the model can not only detect two-dimensional and one-dimensional codes of different scales but also detect codes at rotation angles.
Specifically, the network structure can detect the three bar code types common in industry: QR Code, Data Matrix and one-dimensional codes. Each layer of the target prediction module contains 4 anchor frames of different sizes, so the output dimension of the module is:

4 × (x, y, w, h, θ, conf, p_dm, p_qr, p_barcode) = 4 × 9 = 36

where x, y, w, h and θ represent the centre-point coordinates (x, y), width w, height h and rotation angle θ of the frame, conf is the confidence that a bar code is present, and p_dm, p_qr and p_barcode are the probabilities of a Data Matrix, a QR code and a one-dimensional code respectively.
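The 36-channel output per head can be reproduced by simply counting the fields each anchor predicts; the constant names below are illustrative, not from the patent.

```python
ANCHORS_PER_HEAD = 4
BOX_FIELDS = 5    # x, y, w, h, theta
CONF_FIELDS = 1   # conf
CLASS_FIELDS = 3  # p_dm, p_qr, p_barcode

out_dim = ANCHORS_PER_HEAD * (BOX_FIELDS + CONF_FIELDS + CLASS_FIELDS)
print(out_dim)  # 36
```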
Further, the loss function of the model is:
Loss = λ_coord Σ loss(xy) + λ_coord Σ loss(wh) + λ_conf Σ loss(conf) + λ_cls Σ loss(cls) + λ_angle Σ loss(θ)

where loss(xy) is the loss on the predicted centre point of the code, loss(wh) the loss on the predicted width and height, loss(conf) the confidence loss on whether a code is present, loss(cls) the predicted class loss, and loss(θ) the loss on the predicted angle.
Specifically, the Adam optimisation algorithm can be used as the model's optimiser, with momentum 0.937 and an initial learning rate of 0.01; the training set contains 500,000 pictures and training stops after 90 epochs.
Referring to fig. 2, a barcode detection method provided in another embodiment of the present application includes the following steps:
S21, acquiring a target picture to be detected;
S22, detecting the bar code in the target picture based on the bar code detection model obtained with the training method above, and, if a bar code exists, outputting the position and type (such as one-dimensional code or two-dimensional code) of the bar code.
Specifically, the model extracts features from the target picture to be detected and target prediction yields a large number of candidate two-dimensional-code or one-dimensional-code target frames; a non-maximum suppression algorithm then deletes redundant candidate frames, and with a suitable threshold τ, the candidate frames with confidence conf_i > τ are output as target frames.
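A minimal post-processing sketch of that step: keep candidate frames whose confidence exceeds τ, then greedily suppress overlapping frames. For simplicity this uses axis-aligned IoU on (x1, y1, x2, y2) boxes and ignores the rotation angle; the patent's actual suppression over rotated frames would need a rotated-IoU computation.

```python
def iou(a, b):
    # Boxes are (x1, y1, x2, y2).
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, tau=0.5, iou_thresh=0.5):
    # Confidence filter, then greedy suppression in descending score order.
    order = sorted((i for i, s in enumerate(scores) if s > tau),
                   key=lambda i: scores[i], reverse=True)
    keep = []
    for i in order:
        if all(iou(boxes[i], boxes[j]) <= iou_thresh for j in keep):
            keep.append(i)
    return keep

boxes = [(0, 0, 10, 10), (1, 1, 11, 11), (50, 50, 60, 60)]
scores = [0.9, 0.8, 0.7]
print(nms(boxes, scores))  # [0, 2]: box 1 overlaps box 0 and is suppressed
```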
For the experiments of this embodiment, fig. 6 shows several normal target pictures, and the detection results obtained on target pictures with the bar code detection method provided by this embodiment are shown in fig. 7 to 16.
The bar code detection method provided by this embodiment is highly general and robust and is suitable for code-reading and positioning in a variety of industrial scenarios. It can detect two-dimensional and one-dimensional codes of different image scales and aspect ratios with high precision across a wide range of categories, and can locate and recognise the QR Code, Data Matrix and one-dimensional codes commonly used in industry at any angle. It also offers high detection speed, a small model size and high detection accuracy: the model requires less than 1 GFLOPS of floating-point operations and has 1.6 × 10^6 parameters; compared with YOLOv5s (one of the fastest existing networks), its computation and parameter counts are reduced to 1/15 and 1/4 respectively; the quantised model is smaller than 2 MB; and inference reaches 67 FPS on a Rockchip RK3566.
In another embodiment of the present application, a training apparatus for a barcode detection model is provided, which can be used to implement the above training method for a barcode detection model, and specifically includes:
the training module is used for gradually adjusting model parameters for each batch of training sample data, based on the training sample data and target frames of the bar code graphics in the training sample data, by passing each batch sequentially through feature extraction, feature fusion, target prediction, loss calculation and parameter updating stages, and for training to obtain a bar code detection model;
in the feature extraction stage, a second convolutional neural network is obtained by reducing the number of convolutional layers in the network structure of the trained first convolutional neural network, reducing the number of convolution kernels in each convolutional layer, and adding cross-layer feature-fusion connections.
Since the training method of the bar code detection model has been explained above, it is not repeated here. Compared with the prior art, the training apparatus for the bar code detection model provided by this embodiment reduces the parameter count and computational load of the model, improves model training efficiency and bar code detection speed, and strengthens the model's ability to detect two-dimensional and one-dimensional codes of different image scales and aspect ratios.
As shown in fig. 3, in another embodiment of the present application, there is provided a barcode detection apparatus, specifically including:
the image acquisition module 21 is configured to acquire a target image to be detected;
the barcode detection module 22 is configured to detect a barcode in the target picture based on a detection model of the barcode obtained through training by the training method, and if the barcode exists, output a position and a type (such as a one-dimensional code or a two-dimensional code) of the barcode.
Since the training method of the bar code detection model and the bar code detection method have been explained above, they are not repeated here. Compared with the prior art, the bar code detection apparatus provided by this embodiment can detect two-dimensional and one-dimensional codes of different image scales and aspect ratios, while reducing the parameter count and computational load of the model and improving model training efficiency and bar code detection speed.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.
Claims (8)
1. The training method of the bar code detection model is characterized by comprising the following steps:
based on training sample data and target frames of the bar code graphics in the training sample data, gradually adjusting model parameters for each batch of training sample data by passing it sequentially through feature extraction, feature fusion, target prediction, loss calculation and parameter updating stages, and training to obtain a bar code detection model;
in the feature extraction stage, a second convolutional neural network is obtained by reducing the number of convolutional layers in the network structure of the trained first convolutional neural network, reducing the number of convolution kernels in each convolutional layer, and adding cross-layer feature-fusion connections.
2. The training method according to claim 1, wherein in the feature fusion stage the feature map input into the second convolutional neural network is reduced to a first size, a second size and a third size respectively, wherein either the step from the first size to the second size or the step from the second size to the third size is a span (cross-scale) step, and the feature map is extracted from the training sample data in the feature extraction stage.
3. The training method according to claim 2, wherein in the target prediction stage the data obtained in the feature fusion stage is used to predict, for the bar code graphic in the training sample data, the abscissa and ordinate of its centre point, its width, its height and its rotation angle.
4. The training method according to claim 3, wherein the method for acquiring training sample data comprises:
acquiring a bar code picture containing a bar code graphic, wherein the bar code graphic comprises a one-dimensional code and/or a two-dimensional code, and the training sample data comprises the bar code picture.
5. The training method according to claim 4, wherein the bar code picture contains bar code graphics at a plurality of different angles.
6. A bar code detection method, comprising:
acquiring a target picture to be detected;
detecting the bar code in the target picture based on a detection model of the bar code obtained by training according to the method of any one of claims 1 to 5, and outputting the position and the type of the bar code if the bar code exists.
7. The training device of the bar code detection model is characterized by comprising:
the training module is used for gradually adjusting model parameters for each batch of training sample data, based on the training sample data and target frames of the bar code graphics in the training sample data, by passing each batch sequentially through feature extraction, feature fusion, target prediction, loss calculation and parameter updating stages, and for training to obtain a bar code detection model;
in the feature extraction stage, a second convolutional neural network is obtained by reducing the number of convolutional layers in the network structure of the trained first convolutional neural network, reducing the number of convolution kernels in each convolutional layer, and adding cross-layer feature-fusion connections.
8. A bar code detection device, comprising:
the image acquisition module is used for acquiring a target image to be detected;
a barcode detection module, configured to detect a barcode in the target picture based on a barcode detection model trained by the method according to any one of claims 1 to 5, and output a position and a category of the barcode if the barcode exists.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210163904.0A CN114548132A (en) | 2022-02-22 | 2022-02-22 | Bar code detection model training method and device and bar code detection method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210163904.0A CN114548132A (en) | 2022-02-22 | 2022-02-22 | Bar code detection model training method and device and bar code detection method and device |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114548132A true CN114548132A (en) | 2022-05-27 |
Family
ID=81676823
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210163904.0A Pending CN114548132A (en) | 2022-02-22 | 2022-02-22 | Bar code detection model training method and device and bar code detection method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114548132A (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108038474A (en) * | 2017-12-28 | 2018-05-15 | 深圳云天励飞技术有限公司 | Method for detecting human face, the training method of convolutional neural networks parameter, device and medium |
CN110532825A (en) * | 2019-08-21 | 2019-12-03 | 厦门壹普智慧科技有限公司 | A kind of bar code identifying device and method based on artificial intelligence target detection |
US20200210773A1 (en) * | 2019-01-02 | 2020-07-02 | Boe Technology Group Co., Ltd. | Neural network for image multi-label identification, related method, medium and device |
CN112307853A (en) * | 2019-08-02 | 2021-02-02 | 成都天府新区光启未来技术研究院 | Detection method of aerial image, storage medium and electronic device |
JP6830707B1 (en) * | 2020-01-23 | 2021-02-17 | 同▲済▼大学 | Person re-identification method that combines random batch mask and multi-scale expression learning |
CN112419325A (en) * | 2020-11-27 | 2021-02-26 | 北京工业大学 | Super-pixel segmentation method based on deep learning |
CN113297870A (en) * | 2020-02-21 | 2021-08-24 | 北京三星通信技术研究有限公司 | Image processing method, image processing device, electronic equipment and computer readable storage medium |
- 2022-02-22: CN application CN202210163904.0A filed (publication CN114548132A, status pending)
Similar Documents
Publication | Title |
---|---|
CN110175613B | Streetscape image semantic segmentation method based on multi-scale features and codec model |
CN110135427B | Method, apparatus, device and medium for recognizing characters in image |
CN111126359B | High-definition image small target detection method based on self-encoder and YOLO algorithm |
CN113657388B | Image semantic segmentation method for super-resolution reconstruction of fused image |
CN114528865B | Training method and device of bar code detection model and bar code detection method and device |
CN112364931B | Few-sample target detection method and network system based on meta-feature and weight adjustment |
CN111753828A | Natural scene horizontal character detection method based on deep convolutional neural network |
CN111209858B | Real-time license plate detection method based on deep convolutional neural network |
CN112084923A | Semantic segmentation method for remote sensing image, storage medium and computing device |
CN112949338A | Two-dimensional bar code accurate positioning method combining deep learning and Hough transformation |
CN113869138A | Multi-scale target detection method and device and computer-readable storage medium |
CN113642602B | Multi-label image classification method based on global and local label relations |
CN113034511A | Rural building identification algorithm based on high-resolution remote sensing images and deep learning |
CN115147418B | Compression training method and device for defect detection model |
CN114677596A | Remote sensing image ship detection method and device based on attention model |
CN111523429A | Deep-learning-based steel pile identification method |
CN113537085A | Ship target detection method based on two-stage transfer learning and data augmentation |
CN111429424A | Heating furnace inlet abnormality identification method based on deep learning |
CN114596273B | Intelligent detection method for multiple defects of ceramic substrate using YOLOv4 network |
CN115909378A | Document text detection model training method and document text detection method |
CN112364709A | Cabinet intelligent asset checking method based on code identification |
CN113963333B | Traffic sign detection method based on improved YOLOF model |
CN116977712B | Knowledge-distillation-based road scene segmentation method, system, equipment and medium |
CN114548132A | Bar code detection model training method and device and bar code detection method and device |
CN116563844A | Cherry tomato maturity detection method, device, equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||