CN113487603A - Accessory packaging inspection method based on target detection algorithm - Google Patents


Info

Publication number
CN113487603A
CN113487603A (application CN202110860188.7A)
Authority
CN
China
Prior art keywords
detection algorithm
target detection
image
package
images
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110860188.7A
Other languages
Chinese (zh)
Inventor
黄坤山
李霁峰
谢克庆
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute
Original Assignee
Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute filed Critical Foshan Nanhai Guangdong Technology University CNC Equipment Cooperative Innovation Institute
Priority to CN202110860188.7A priority Critical patent/CN113487603A/en
Publication of CN113487603A publication Critical patent/CN113487603A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/0002 Inspection of images, e.g. flaw detection
    • G06T7/0004 Industrial image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4007 Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00 Geometric image transformations in the plane of the image
    • G06T3/60 Rotation of whole images or parts thereof
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/10 Segmentation; Edge detection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/60 Analysis of geometric attributes
    • G06T7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/20 Special algorithmic details
    • G06T2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Quality & Reliability (AREA)
  • Health & Medical Sciences (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an accessory packaging inspection method based on a target detection algorithm, which comprises the following steps: S1, building the target detection algorithm according to the packaging inspection requirements; S2, collecting electronic images of the packages to be inspected and preprocessing them; S3, dividing the preprocessed package images into a training set and a test set; S4, feeding the training set prepared in step S3 into the target detection algorithm for training; S5, feeding the test set prepared in step S3 into the target detection algorithm trained in step S4 for a generalization test; S6, judging from the test results whether the target detection algorithm reaches the required performance indexes. The method learns the packaged objects associated with the TPS658 packaging inspection task, inspects the packages on the production line, and raises an alarm when a missing item is found.

Description

Accessory packaging inspection method based on target detection algorithm
Technical Field
The invention relates to the technical field of assembly line detection, in particular to an accessory packaging inspection method based on a target detection algorithm.
Background
At present, packing on a factory assembly line is done by workers who stand on both sides of the line and place items into boxes. Because manual packing is prone to occasional oversights, inspection personnel are always needed to check the packages and are chiefly responsible for verifying, at the end of the line, that each package is completely assembled. This approach is outdated: its efficiency is low, its cost is high, and as working hours lengthen, worker fatigue leads to many missed detections.
Disclosure of Invention
In view of this situation, the invention develops an accessory package inspection method based on a target detection algorithm. The method learns the packaged objects associated with the TPS658 packaging inspection task and can then inspect the packages on the production line, raising an alarm when a missing item is found. Experiments show that the method is stable and reliable, can run for long periods, reduces the workload of inspection personnel, and achieves high-speed, high-precision, non-contact inspection of the packages.
The invention provides an accessory packaging inspection method based on a target detection algorithm. The method is used to detect the objects to be packed on a production line, determine whether any item is missing from the assembly, and raise an alarm if so. The method comprises the following steps:
s1, building a target detection algorithm according to the packaging detection requirement;
s2, collecting an electronic image to be detected and packaged, and preprocessing the electronic image;
s3, respectively manufacturing the preprocessed packaging images into a training set and a test set;
s4, putting the training set prepared in the step S2 into the target detection algorithm for training;
s5, putting the test set prepared in the step S2 into the target detection algorithm trained in the step S4 for generalization test;
s6, judging whether the target detection algorithm reaches the required performance index according to the test result, and if the target detection algorithm reaches the required performance index, putting the target detection algorithm into the detection process of actual production; if not, the process returns to step S2.
In a further improvement, the target detection algorithm is built by respectively constructing an input-end structure, a Backbone network structure, a Neck network structure and a Prediction network structure.
In a further improvement, the preprocessing of step S2 specifically includes:
sequentially sorting, cleaning and labeling the collected package images, wherein the sorting operation adjusts the size and orientation of the images so that all package images to be inspected are unified into the same format; the cleaning operation defines the labels of the package images to be inspected according to production requirements; and the labeling operation uses an annotation tool to mark the corresponding class labels on the package images, generates annotation files as training positive samples, and classifies image regions without packages as training negative samples.
In a further improvement, the input-end structure mainly performs a data enhancement operation and a resizing operation. The data enhancement operation mainly applies random rotation, scaling and cropping to the original data; the resizing operation uses bilinear interpolation to reset the input image to a size of 640x640 with 3 channels.
In a further improvement, the Backbone network adopts Focus and CSP structures. The original 640x640x3 image is input into the Focus structure, becomes a 320x320x12 feature map after a slicing operation, and after a convolution with 32 kernels finally becomes a 320x320x32 feature map. The feature map then enters the CSP structure, where it is split into two parts: one part undergoes convolution operations, and the other part is concatenated (concat) with the result of the first part's convolution. There are 3 CSP structures, i.e. the image passes through CSP processing 3 times and produces three outputs. After the first CSP operation, a feature map of size 80x80x128 is obtained; performing the CSP operation again on this feature map yields a feature map of size 40x40x256; and feeding the resulting feature map into the SPP module gives the third output, a feature map of size 20x20x512.
In a further improvement, the Neck network adopts FPN and PAN structures and performs pyramid feature processing on the feature maps obtained from the Backbone network.
The further improvement is that the prediction network adopts a GIOU structure,
\[ \mathrm{GIoU} = \mathrm{IoU} - \frac{\lvert C \setminus (A \cup B) \rvert}{\lvert C \rvert}, \qquad \mathrm{GIOU\_Loss} = 1 - \mathrm{GIoU} \]
where A and B denote the predicted box and the ground-truth box, and C denotes their minimum closure region (the smallest box enclosing both).
the above formula GIOU _ Loss is a Loss function of the Bounding box, and the Loss function is mainly obtained by first calculating the minimum area of the closure regions of the two frames, then calculating IoU, then calculating the proportion of the regions of the closure regions not belonging to the two frames to the closure regions, and finally subtracting the proportion from IoU.
In a further improvement, the step S6 further includes:
the performance indexes are obtained by counting the recognition accuracy of the target detection algorithm on the package images and computing the false detection rate and the accuracy rate respectively; if the performance does not reach the standard, the number and variety of package images to be inspected are continuously increased to enrich the target detection algorithm's coverage of package images, and at the same time the ratio of positive to negative samples is adjusted to strengthen the algorithm's ability to reject negative samples.
Compared with the prior art, the invention has the following beneficial effects:
1. A deep-learning target detection algorithm is introduced into package inspection, so that items missed during packing are detected quickly and an alarm is raised, achieving automated and intelligent production.
2. The inefficient manual method is replaced, improving efficiency and saving cost.
3. By adopting the latest yolov5 structure, the method achieves a higher inference speed and better detection performance than traditional target detection algorithms.
Drawings
The drawings are for illustrative purposes only and are not to be construed as limiting the patent; for the purpose of better illustrating the embodiments, certain features of the drawings may be omitted, enlarged or reduced, and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures in the drawings and descriptions thereof may be omitted.
Fig. 1 is a schematic overall flow chart of an embodiment of the present invention.
Detailed Description
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted" and "connected" are to be interpreted broadly: a connection may be fixed, detachable or integral; it may be mechanical or electrical; and it may be direct, indirect through an intermediate medium, or an internal communication between two elements. The specific meanings of the above terms in the present invention can be understood by those skilled in the art according to the specific circumstances. The technical solution of the present invention is further described below with reference to the accompanying drawings and embodiments.
Referring to fig. 1, the accessory package inspection method based on a target detection algorithm mainly proceeds as follows. First, a large number of packages are collected from the production plant as test objects, ensuring that the items inside cover all article types. The collected package images are sorted, cleaned and labeled, and a training set and a test set are produced; the package categories and labels are defined according to the manufacturer's inspection requirements, and the target detection algorithm is then built according to the number of categories to be recognized. The target detection algorithm is trained repeatedly with the prepared training set, and the trained algorithm is finally given a generalization test with the prepared test set. The performance of the algorithm is evaluated through parameters such as precision and recall computed from the test results; if the performance does not meet the requirements, the previous operations are repeated until a target detection algorithm with satisfactory performance is obtained. Finally, the trained target detection algorithm is put into use.
The steps of the inspection method will now be described in detail. The method comprises the following steps:
S1, building the target detection algorithm according to the packaging inspection requirements;
S2, collecting electronic images of the packages to be inspected and preprocessing them;
S3, dividing the preprocessed package images into a training set and a test set, where the test set must not overlap the training set;
S4, feeding the training set prepared in step S3 into the target detection algorithm for training;
S5, feeding the test set prepared in step S3 into the target detection algorithm trained in step S4 for a generalization test;
S6, judging from the test results whether the target detection algorithm reaches the required performance indexes; if it does, putting the target detection algorithm into the inspection process of actual production, i.e. deploying it onto the inspection production line; if not, returning to step S2.
Specifically, the target detection algorithm adopts a structure consistent with yolov5: an input-end structure, a Backbone network, a Neck network and a Prediction network structure are built respectively.
Specifically, the preprocessing of step S2 includes:
sequentially sorting, cleaning and labeling the collected package images, wherein the sorting operation adjusts the size and orientation of the images so that all package images to be inspected are unified into the same format; the cleaning operation defines the labels of the package images to be inspected according to production requirements; and the labeling operation uses an annotation tool to mark the corresponding class labels on the package images, generates annotation files as training positive samples, and classifies image regions without packages as training negative samples.
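As an illustration of the labeling step, the sketch below writes one annotation file per package image in the plain-text format commonly used by YOLO-style detectors (one line per object: class index followed by the normalized box center and size). The file format and the helper function are assumptions made for illustration; the patent does not prescribe a particular annotation tool or file layout.

```python
from pathlib import Path

def write_yolo_label(label_path, boxes, img_w, img_h):
    """Write one YOLO-style label file.

    boxes: iterable of (class_id, x_min, y_min, x_max, y_max) in pixels.
    Each output line is: class_id x_center y_center width height, normalized to 0-1.
    """
    lines = []
    for class_id, x_min, y_min, x_max, y_max in boxes:
        xc = (x_min + x_max) / 2.0 / img_w
        yc = (y_min + y_max) / 2.0 / img_h
        w = (x_max - x_min) / img_w
        h = (y_max - y_min) / img_h
        lines.append(f"{class_id} {xc:.6f} {yc:.6f} {w:.6f} {h:.6f}")
    Path(label_path).write_text("\n".join(lines) + "\n")

# Example: a 1920x1080 image containing two packaged accessories of class 0 and class 2.
write_yolo_label("package_0001.txt",
                 [(0, 100, 200, 400, 500), (2, 900, 300, 1200, 700)],
                 img_w=1920, img_h=1080)
```

Under this convention, an image containing no package (a negative sample) would simply receive an empty label file.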
Specifically, the input-end structure mainly performs a Mosaic data enhancement operation and a resizing (Resize) operation. The data enhancement operation mainly applies random rotation, scaling and cropping to the original data; the Resize operation uses bilinear interpolation to bring the input image to a size of 640x640 with 3 channels.
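A minimal sketch of this input-end preprocessing, assuming OpenCV is used: the image is resized to 640x640 with bilinear interpolation after random rotation, scaling and cropping have been applied as a simple form of data enhancement (the Mosaic stitching itself is not shown). The parameter ranges are illustrative assumptions, not values specified by the invention.

```python
import random
import cv2
import numpy as np

def random_augment(image_bgr):
    """Apply random rotation, scaling and cropping to the original image."""
    h, w = image_bgr.shape[:2]
    angle = random.uniform(-15, 15)        # random rotation, in degrees (assumed range)
    scale = random.uniform(0.8, 1.2)       # random scaling factor (assumed range)
    m = cv2.getRotationMatrix2D((w / 2, h / 2), angle, scale)
    rotated = cv2.warpAffine(image_bgr, m, (w, h), flags=cv2.INTER_LINEAR)
    crop_h, crop_w = int(h * 0.9), int(w * 0.9)   # random crop of 90% of the frame
    y0 = random.randint(0, h - crop_h)
    x0 = random.randint(0, w - crop_w)
    return rotated[y0:y0 + crop_h, x0:x0 + crop_w]

def resize_input(image_bgr):
    """Resize a 3-channel image to 640x640 using bilinear interpolation."""
    return cv2.resize(image_bgr, (640, 640), interpolation=cv2.INTER_LINEAR)

dummy = np.zeros((1080, 1920, 3), dtype=np.uint8)   # stand-in for a captured package image
model_input = resize_input(random_augment(dummy))    # shape: (640, 640, 3)
```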
Specifically, the Backbone network adopts Focus and CSP structures. The original 640x640x3 image is input into the Focus structure, becomes a 320x320x12 feature map after a slicing operation, and after a convolution with 32 kernels finally becomes a 320x320x32 feature map. The feature map then enters the CSP structure, where it is split into two parts: one part undergoes convolution operations, and the other part is concatenated (concat) with the result of the first part's convolution. There are 3 CSP structures, i.e. the image passes through CSP processing 3 times and produces three outputs. After the first CSP operation, a feature map of size 80x80x128 is obtained; performing the CSP operation again on this feature map yields a feature map of size 40x40x256; and feeding the resulting feature map into the SPP module gives the third output, a feature map of size 20x20x512. Each of these outputs serves as an input to a corresponding layer of the Neck network.
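The Focus slicing step can be illustrated with the following PyTorch sketch: the input is sampled at every other pixel in four phase-shifted patterns, the four slices are concatenated along the channel axis (3 channels become 12 while height and width are halved), and a convolution with 32 kernels then yields the 320x320x32 feature map. This is one possible reading of the description above, not the exact implementation of the invention.

```python
import torch
import torch.nn as nn

class Focus(nn.Module):
    """Slice the input into four phase-shifted sub-images, concatenate, then convolve."""
    def __init__(self, in_channels=3, out_channels=32):
        super().__init__()
        self.conv = nn.Conv2d(in_channels * 4, out_channels, kernel_size=3, padding=1)

    def forward(self, x):
        # Four interleaved slices, each half the height and width of the input.
        sliced = torch.cat(
            [x[..., ::2, ::2], x[..., 1::2, ::2], x[..., ::2, 1::2], x[..., 1::2, 1::2]],
            dim=1,
        )  # (N, 12, H/2, W/2)
        return self.conv(sliced)

x = torch.randn(1, 3, 640, 640)   # a resized package image
print(Focus()(x).shape)           # torch.Size([1, 32, 320, 320])
```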
Specifically, the Neck network adopts FPN and PAN structures and performs pyramid feature processing on the feature maps obtained from the Backbone network.
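As an illustration of the FPN and PAN idea only (an assumed structure, not the invention's exact Neck), the sketch below fuses two of the Backbone outputs quoted above: the top-down path upsamples the deep 20x20x512 map and concatenates it with the 40x40x256 map (FPN), and the bottom-up path downsamples the fused map and concatenates it back with the deep one (PAN).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyFpnPan(nn.Module):
    """Minimal two-level FPN + PAN fusion, for illustration only."""
    def __init__(self, c_mid=256, c_deep=512):
        super().__init__()
        self.lateral = nn.Conv2d(c_deep, c_mid, kernel_size=1)                    # reduce deep channels
        self.fuse_td = nn.Conv2d(c_mid * 2, c_mid, kernel_size=3, padding=1)      # top-down (FPN) fusion
        self.down = nn.Conv2d(c_mid, c_mid, kernel_size=3, stride=2, padding=1)   # bottom-up (PAN) step
        self.fuse_bu = nn.Conv2d(c_mid + c_deep, c_deep, kernel_size=3, padding=1)

    def forward(self, p_mid, p_deep):
        td = F.interpolate(self.lateral(p_deep), scale_factor=2, mode="nearest")
        p_mid_out = self.fuse_td(torch.cat([p_mid, td], dim=1))     # fused 40x40 level
        bu = self.down(p_mid_out)
        p_deep_out = self.fuse_bu(torch.cat([bu, p_deep], dim=1))   # fused 20x20 level
        return p_mid_out, p_deep_out

p40 = torch.randn(1, 256, 40, 40)   # 40x40x256 Backbone output
p20 = torch.randn(1, 512, 20, 20)   # 20x20x512 Backbone output (after SPP)
print([t.shape for t in TinyFpnPan()(p40, p20)])
```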
Specifically, the prediction network adopts a GIOU structure,
\[ \mathrm{GIoU} = \mathrm{IoU} - \frac{\lvert C \setminus (A \cup B) \rvert}{\lvert C \rvert}, \qquad \mathrm{GIOU\_Loss} = 1 - \mathrm{GIoU} \]
where A and B denote the predicted box and the ground-truth box, and C denotes their minimum closure region (the smallest box enclosing both).
the above formula GIOU _ Loss is a Loss function of the Bounding box, and the Loss function is mainly obtained by first calculating the minimum area of the closure regions of the two frames, then calculating IoU, then calculating the proportion of the regions of the closure regions not belonging to the two frames to the closure regions, and finally subtracting the proportion from IoU.
Specifically, the step S6 further includes:
the performance indexes are obtained by counting the recognition accuracy of the target detection algorithm on the package images and computing the false detection rate and the accuracy rate respectively; if the performance does not reach the standard, the number and variety of package images to be inspected are continuously increased to enrich the target detection algorithm's coverage of package images, and at the same time the ratio of positive to negative samples is adjusted to strengthen the algorithm's ability to reject negative samples.
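The performance indexes mentioned above can be derived from the detection counts on the test set. The formulation below (accuracy as the share of detections that are correct, false detection rate as the share that are wrong) is an assumption made for illustration, since the patent does not give explicit formulas.

```python
def evaluation_metrics(true_positive, false_positive, false_negative):
    """Compute accuracy, false detection rate and recall from detection counts."""
    detections = true_positive + false_positive
    actual_objects = true_positive + false_negative
    return {
        "accuracy": true_positive / detections if detections else 0.0,
        "false_detection_rate": false_positive / detections if detections else 0.0,
        "recall": true_positive / actual_objects if actual_objects else 0.0,
    }

# Example: 95 correct detections, 3 false alarms, 2 missed items.
print(evaluation_metrics(95, 3, 2))
```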
In the drawings, positional relationships are described for illustrative purposes only and are not to be construed as limiting the present patent. It should be understood that the above-described embodiments are merely examples for clearly illustrating the present invention and are not intended to limit its embodiments; other variations and modifications will be apparent to persons skilled in the art in light of the above description, and it is neither necessary nor possible to exhaustively list all embodiments here. Any modification, equivalent replacement or improvement made within the spirit and principle of the present invention shall be included in the protection scope of the claims of the present invention.

Claims (8)

1. An accessory packaging inspection method based on a target detection algorithm is characterized by comprising the following steps:
s1, building a target detection algorithm according to the packaging detection requirement;
s2, collecting an electronic image to be detected and packaged, and preprocessing the electronic image;
s3, respectively manufacturing the preprocessed packaging images into a training set and a test set;
s4, putting the training set prepared in the step S2 into the target detection algorithm for training;
s5, putting the test set prepared in the step S2 into the target detection algorithm trained in the step S4 for generalization test;
s6, judging whether the target detection algorithm reaches the required performance index according to the test result, and if the target detection algorithm reaches the required performance index, putting the target detection algorithm into the detection process of actual production; if not, the process returns to step S2.
2. The accessory packaging inspection method based on the target detection algorithm as claimed in claim 1, wherein the target detection algorithm is built by respectively constructing an input-end structure, a Backbone network structure, a Neck network structure and a Prediction network structure.
3. The method for inspecting accessory packaging based on target detection algorithm as claimed in claim 1, wherein the preprocessing of step S2 specifically includes:
sequentially sorting, cleaning and labeling the acquired gasket images, wherein the sorting operation is to adjust the size direction of the images so that all the package images to be detected are unified into the same format; the cleaning treatment operation is to define the label of the image of the package to be detected according to the production requirement; and the labeling processing operation is to label a corresponding type label on the packaged image by using a labeling tool, generate a labeling file as a training positive sample and classify the non-packaged image part as a training negative sample.
4. The accessory packaging inspection method based on the target detection algorithm as claimed in claim 2, wherein the input-end structure mainly performs a data enhancement operation and a resizing operation; the data enhancement operation mainly applies random rotation, scaling and cropping to the original data; and the resizing operation uses bilinear interpolation to reset the input image to a size of 640x640 with 3 channels.
5. The accessory packaging inspection method based on the target detection algorithm is characterized in that the Backbone network adopts Focus and CSP structures; the original 640x640x3 image is input into the Focus structure, becomes a 320x320x12 feature map after a slicing operation, and after a convolution with 32 kernels finally becomes a 320x320x32 feature map; the feature map then enters the CSP structure and is split into two parts, one part undergoing convolution operations and the other part being concatenated (concat) with the result of the first part's convolution; there are 3 CSP structures, i.e. the image passes through CSP processing 3 times and produces three outputs, and after the first CSP operation a feature map of size 80x80x128 is obtained; performing the CSP operation again on this feature map yields a feature map of size 40x40x256; and feeding the resulting feature map into the SPP module gives the third output, a feature map of size 20x20x512.
6. The accessory packaging inspection method based on the target detection algorithm as claimed in claim 5, wherein the Neck network adopts FPN and PAN structures, and pyramid feature processing is performed on the feature maps obtained from the Backbone network.
7. The method of claim 2, wherein the prediction network adopts GIOU structure,
\[ \mathrm{GIoU} = \mathrm{IoU} - \frac{\lvert C \setminus (A \cup B) \rvert}{\lvert C \rvert}, \qquad \mathrm{GIOU\_Loss} = 1 - \mathrm{GIoU} \]
where A and B denote the predicted box and the ground-truth box, and C denotes their minimum closure region (the smallest box enclosing both).
The above GIOU_Loss is the loss function of the bounding box. It is obtained by first finding the minimum closure region of the two boxes, then computing IoU, then computing the proportion of the closure region that belongs to neither box relative to the whole closure region, subtracting this proportion from IoU, and finally taking one minus the result as the loss.
8. The method for inspecting accessory packaging based on object detection algorithm as claimed in claim 1, wherein the step S6 further comprises:
the performance indexes are obtained by counting the recognition accuracy of the target detection algorithm on the package images and computing the false detection rate and the accuracy rate respectively; if the performance does not reach the standard, the number and variety of package images to be inspected are continuously increased to enrich the target detection algorithm's coverage of package images, and at the same time the ratio of positive to negative samples is adjusted to strengthen the algorithm's ability to reject negative samples.
CN202110860188.7A 2021-07-28 2021-07-28 Accessory packaging inspection method based on target detection algorithm Pending CN113487603A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110860188.7A CN113487603A (en) 2021-07-28 2021-07-28 Accessory packaging inspection method based on target detection algorithm

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110860188.7A CN113487603A (en) 2021-07-28 2021-07-28 Accessory packaging inspection method based on target detection algorithm

Publications (1)

Publication Number Publication Date
CN113487603A 2021-10-08

Family

ID=77944291

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110860188.7A Pending CN113487603A (en) 2021-07-28 2021-07-28 Accessory packaging inspection method based on target detection algorithm

Country Status (1)

Country Link
CN (1) CN113487603A (en)


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10545497B1 (en) * 2019-01-04 2020-01-28 Ankobot (Shanghai) Smart Technologies Co., Ltd. Control method and device for mobile robot, mobile robot
CN109829907A (en) * 2019-01-31 2019-05-31 浙江工业大学 A kind of metal shaft surface defect recognition method based on deep learning
CN110909660A (en) * 2019-11-19 2020-03-24 佛山市南海区广工大数控装备协同创新研究院 Plastic bottle detection and positioning method based on target detection
CN111091534A (en) * 2019-11-19 2020-05-01 佛山市南海区广工大数控装备协同创新研究院 Target detection-based pcb defect detection and positioning method
CN111563557A (en) * 2020-05-12 2020-08-21 山东科华电力技术有限公司 Method for detecting target in power cable tunnel

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
王书献 (Wang Shuxian): "Application of target detection based on the deep-learning YOLOV5 network model in an electronic monitoring system for tuna longline fishing", Journal of Dalian Ocean University (《大连海洋大学学报》), pages 3 - 4 *

Similar Documents

Publication Publication Date Title
US11175650B2 (en) Product knitting systems and methods
CN106557778B (en) General object detection method and device, data processing device and terminal equipment
US10438371B2 (en) Three-dimensional bounding box from two-dimensional image and point cloud data
Kuznetsova et al. Detecting apples in orchards using YOLOv3 and YOLOv5 in general and close-up images
CN105701476A (en) Machine vision-based automatic identification system and method for production line products
US9875427B2 (en) Method for object localization and pose estimation for an object of interest
Lestari et al. Fire hotspots detection system on CCTV videos using you only look once (YOLO) method and tiny YOLO model for high buildings evacuation
CN111179250B (en) Industrial product defect detection system based on multitask learning
CN112561882B (en) Logistics sorting method, system, equipment and storage medium
US20120070075A1 (en) Image processing based on visual attention and reduced search based generated regions of interest
EP3798924A1 (en) System and method for classifying manufactured products
CN111161263A (en) Package flatness detection method and system, electronic equipment and storage medium
CN109283182A (en) A kind of detection method of battery welding point defect, apparatus and system
CN111767902A (en) Method, device and equipment for identifying dangerous goods of security check machine and storage medium
CN111062252B (en) Real-time dangerous goods semantic segmentation method, device and storage device
CN110163852A (en) The real-time sideslip detection method of conveyer belt based on lightweight convolutional neural networks
CN112819796A (en) Tobacco shred foreign matter identification method and equipment
CN113780423A (en) Single-stage target detection neural network based on multi-scale fusion and industrial product surface defect detection model
CN111210412B (en) Packaging detection method and device, electronic equipment and storage medium
CN111144417A (en) Intelligent container small target detection method and detection system based on teacher student network
CN113487603A (en) Accessory packaging inspection method based on target detection algorithm
CN108021914B (en) Method for extracting character area of printed matter based on convolutional neural network
CN117095155A (en) Multi-scale nixie tube detection method based on improved YOLO self-adaptive attention-feature enhancement network
EP3579138A1 (en) Method for determining a type and a state of an object of interest
CN114463686A (en) Moving target detection method and system based on complex background

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination