CN114332622A - Label detection method based on machine vision - Google Patents
Label detection method based on machine vision
- Publication number
- CN114332622A (application number CN202111658741.5A)
- Authority
- CN
- China
- Prior art keywords
- label
- image
- product
- standard
- detected
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Landscapes
- Image Analysis (AREA)
Abstract
The invention discloses a label detection method based on machine vision, which comprises the following steps: establishing a standard database; acquiring a product image; realizing product appearance matching based on the product image; intercepting a preset position image to obtain a label image to be detected; and judging whether the label image to be detected meets a preset standard. The invention solves the problems that manual inspection and scan-based identification have low accuracy and can hardly judge label displacement, contamination and similar defects accurately.
Description
Technical Field
The invention relates to the technical field of computer vision image detection, in particular to a label detection method based on machine vision.
Background
Before a product leaves the factory or enters the warehouse, a specific label is generally pasted at a fixed position; it carries important information such as the product model, the manufacturer, notes, the inspection standard and a two-dimensional identification code. Through the label, the corresponding product can be identified and traced quickly and conveniently, which is an important part of making production more informatized and intelligent.
For a product to meet the specification, the label must be affixed at the predetermined position and its surface must be kept clean and free of contamination. On a production line this check usually has to be done manually, which costs extra labour and time; moreover, when the line moves quickly, the inspection accuracy is low and it is difficult to judge the offset of the label with any precision.
General automatic detection, however, can only tell whether the corresponding label exists from whether a one-dimensional or two-dimensional code is recognised in the scanning scene; it cannot distinguish a missing label from surface contamination, and it can hardly detect whether the label has shifted from its predetermined position, which affects the appearance or the specification of the product.
The problem addressed by the present application is therefore: for products with a fixed shape and a fixed label-pasting position, manual detection of the label position and of surface dirt has low accuracy, while scan-based identification can hardly locate the label and analyse in what way it is unqualified.
Disclosure of Invention
The invention aims to provide a label detection method based on machine vision so as to solve the technical problems in the background technology.
In order to achieve the purpose, the invention adopts the following technical scheme:
a label detection method based on machine vision comprises the following steps:
establishing a standard database;
acquiring a product image;
realizing product appearance matching based on the product image;
intercepting a preset position image to obtain a label image to be detected;
and judging whether the image of the label to be detected meets a preset standard or not.
In some embodiments, the establishing a criteria database comprises:
selecting a product meeting the detection standard, and carrying out image acquisition to obtain standard image data;
and acquiring the appearance and the label information of the product in sequence based on the standard image data.
In some embodiments, the acquisition of the product appearance comprises:
extracting the product edge through an edge detection algorithm and carrying out binarization; and performing downsampling processing on the edge image by utilizing maximum pooling to obtain the edge image of the product.
In some embodiments, the collecting of the tag information comprises:
locating the outer contour of the label, connecting the contour lines, cropping and keeping the label image, and counting the number of pixel points inside the label contour as the standard label area S;
and dividing the image in the label into pixel intervals with the same size, sequentially counting the pixel gradient values in the intervals to form a histogram, connecting and normalizing to obtain the HOG descriptor of the whole label image, and storing the HOG descriptor.
In some embodiments, implementing product appearance matching based on the product image includes: obtaining an edge image of the product to be detected based on the acquired image of the product to be detected; subtracting the edge image of the product to be detected from the edge image of the standard product, and counting the number of difference pixel points whose gray value is not 0; and setting a threshold T1: when the number of difference pixel points is larger than the threshold, it is determined that the current product differs in model from the standard product, otherwise the appearance matching is considered successful.
In some embodiments, intercepting the preset position image to obtain the label image to be detected includes: after the appearance is successfully matched, cutting the image of the product to be detected at the preset interception position and keeping the cut-out as the label image to be detected.
In some embodiments, determining whether the image of the label to be detected meets a preset criterion includes:
judging whether the label edge of the image of the label to be detected meets a preset standard or not;
and judging whether the label content of the to-be-detected label image meets a preset standard or not.
In some embodiments, the determining whether the label edge of the image of the label to be detected meets a preset criterion includes:
performing gradient calculation in the x and y directions on the obtained label image to be detected and keeping the gradient value and gradient direction of every pixel; then performing straight-line detection, keeping the four longest straight lines in the image, connecting them to form an inscribed rectangle, counting the rectangle area S', and calculating its IoU value with the standard label, the IoU being the ratio of the intersection to the union of the two areas;
setting a threshold T2: when the IoU value is greater than T2, the label is judged to be pasted normally; otherwise the label is not pasted at the preset position on the product surface, or the label surface is damaged and incomplete, and the label needs to be removed and pasted again.
In some embodiments, the determining whether the label content of the to-be-detected label image meets a preset criterion includes:
after the label is confirmed to be complete and normally pasted, counting and calculating the HOG descriptor of the label image to be detected, calculating the similarity between this descriptor and the descriptor of the standard image, and setting a threshold T3: when the similarity is greater than T3, the product label content is judged to be complete; otherwise it is judged that the label content is incomplete because the label surface is contaminated or occluded, or that the wrong label has been pasted.
The machine vision-based label detection method disclosed in the present application may bring beneficial effects including but not limited to:
Through computer-vision image detection combined with an image acquisition module, the product is inspected at a fixed point. Compared with manual inspection, the detection accuracy for abnormal label pasting is improved, and alarm information can be provided for different situations such as a wrong pasting position, incomplete content or surface contamination, helping the relevant personnel to correct the problem in time.
Drawings
FIG. 1 is a standard data information collection diagram of the present invention
FIG. 2 is a flow chart of the label detection of the present invention
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is further described in detail with reference to the following embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application.
On the contrary, this application is intended to cover any alternatives, modifications and equivalents that may be included within the spirit and scope of the application as defined by the appended claims. Furthermore, in the following detailed description of the present application, certain specific details are set forth in order to provide a better understanding of the present application. It will be apparent to one skilled in the art that the present application may be practiced without these specific details.
A label detection method based on machine vision according to an embodiment of the present application will be described in detail below. It is to be noted that the following examples are only for explaining the present application and do not constitute a limitation to the present application.
Step one: building the detection equipment
Firstly, the detection equipment is set up above the conveyor belt as shown in FIG. 1, and the distances between the infrared induction switch, the light-supplementing equipment and the image acquisition module are adjusted according to the conveying speed of the conveyor belt and the time required for image acquisition, so that the image acquisition module can acquire the appearance and surface information of the product accurately and clearly.
Because industrial production mostly uses assembly-line work, the products coming off the line have a uniform model and appearance and are placed in the same way. Therefore, the detection position and orientation of the product on the conveyor are determined first; the infrared induction switch, the light-supplementing equipment and the image acquisition device are arranged according to the position of the label to be detected, and the distance between the switch and the acquisition device is adjusted according to the conveying speed, so that after an object is conveyed to the determined position and triggers the switch, a clear and easily recognisable front image of the product can be obtained through the image acquisition device under supplementary light.
Step two: establishing a standard database
Products meeting the detection standard are selected for image acquisition. For the standard image data, the product appearance and the label information need to be acquired in turn. In the acquisition of the product appearance, the product edges are first extracted with an edge detection algorithm and binarized; the edge image is then downsampled with maximum pooling to obtain the edge template of the product. In the acquisition of the label information, the outer contour of the rectangular label is first located, the contour lines are connected, and the label image is cropped and kept so as to avoid interference from information outside the label; the number of pixel points inside the label contour is counted as the standard label area S. Secondly, the image inside the label is divided into pixel intervals of the same size, the pixel gradient values in each interval are counted in turn to form a histogram, and the histograms are concatenated and normalized to obtain the HOG descriptor of the whole label image, which is stored.
An appearance image of a product whose appearance and label-pasting position meet the standard is collected, Gaussian filtering is applied, and the result is stored as the original standard data; the appearance features and label information are then collected in turn.
In the acquisition of the product appearance, the Canny algorithm is adopted to extract the image contour. For a point P(x, y) in the original data, the gradient values Gx and Gy along the x and y directions are first calculated from the gray values of the point and its 8 neighbouring points using the Sobel operator, as in formula (1):
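A standard Sobel formulation consistent with this description (assuming the neighbourhood points p1 to p9 are numbered row by row from the top left, which is an assumption of this sketch) would be:

$$G_x = (p_3 + 2p_6 + p_9) - (p_1 + 2p_4 + p_7), \qquad G_y = (p_7 + 2p_8 + p_9) - (p_1 + 2p_2 + p_3) \tag{1}$$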
wherein p1, p2, etc. are the gray values of the corresponding points in the neighborhood. Then, from the gradient values in the two directions, the gradient value G and the gradient direction θ of the point are calculated as shown in (2):
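Formula (2) is not reproduced in the text; the standard gradient magnitude and direction for this step are:

$$G = \sqrt{G_x^2 + G_y^2}, \qquad \theta = \arctan\left(\frac{G_y}{G_x}\right) \tag{2}$$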
and next, extracting gray values of two neighborhood points with the minimum distance from the gradient direction to carry out non-maximum value suppression, selecting the point with the gray value as the extreme value as an initial edge point, setting a threshold TL, and excluding the points with the gradient values smaller than the threshold and the gradients of the neighborhood points higher than the threshold. And (4) assigning the value of the screened neighborhood points to be 255, and assigning the value of the rest points to be 1, so as to obtain and retain the standard binary contour map.
Finally, maximum pooling is performed on the binary contour map: the image is divided into several regions, the maximum gray value in each region replaces the whole region, and the edge image is thereby downsampled to obtain the edge template of the product.
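As an illustrative sketch only (not the reference implementation of this application), the edge-template extraction described above could be written with OpenCV and NumPy roughly as follows; the Canny thresholds and the pooling window size are assumed values:

```python
import cv2
import numpy as np

def edge_template(gray: np.ndarray, pool: int = 4) -> np.ndarray:
    """Binary contour map of a product image, downsampled by max pooling."""
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)      # Gaussian filtering of the raw image
    edges = cv2.Canny(blurred, 50, 150)              # binary edge map: 0 background, 255 edges
    # Max pooling: split the map into pool x pool blocks and keep each block's maximum
    h, w = edges.shape
    h, w = h - h % pool, w - w % pool
    blocks = edges[:h, :w].reshape(h // pool, pool, w // pool, pool)
    return blocks.max(axis=(1, 3))
```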
In the acquisition of the label information, the rectangular label first needs to be located manually, and a region slightly larger than the label is delimited and cropped from the standard binary contour map. For the cropped label image, the positions, gradient information and total number of the pixel points inside the label contour are counted and kept as the information of the standard label.
Then the internal label image is divided into a number of regions; for each region, the gradient direction of every pixel is counted and a gradient histogram is formed. After the statistics are finished, the gradient histograms of the regions are concatenated into a histogram matrix and normalized to obtain the HOG descriptor of the standard image.
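A minimal sketch of this HOG step, here using scikit-image's hog function; the cell size, block size and number of orientation bins are assumed values, not parameters taken from this application:

```python
from skimage.feature import hog

def label_hog(label_gray):
    """HOG descriptor of a cropped label image: per-cell gradient-orientation
    histograms, block-normalized and concatenated into one vector."""
    return hog(label_gray,
               orientations=9,
               pixels_per_cell=(8, 8),
               cells_per_block=(2, 2),
               block_norm='L2-Hys')
```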
Step three: transporting the product to be inspected and acquiring the image
The product that has been through the label-pasting step is placed on the conveyor belt by a mechanical arm in the same direction and posture as during standard data acquisition; the conveyor belt conveys the product to the set position, the switch is triggered, and the surface information of the product is acquired.
Step four: product shape matching and label intercepting to be detected
From the appearance image of the product to be detected obtained in step three, the pixel gradient information and the binary contour map of the product in the image to be detected are first calculated with the same parameters as in step two, and the standard contour binary map is subtracted to obtain the difference image. The number of difference pixel points whose gray value is not 0 in the difference image is counted and compared with the set threshold T1; when the number of difference pixel points is greater than the threshold, the system raises an alarm to inform the relevant personnel that the current product model does not match the preset standard.
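The appearance-matching decision described above amounts to counting non-zero pixels in the difference of the two binary contour maps and comparing against T1; a rough sketch, with an assumed placeholder value for T1, is:

```python
import cv2
import numpy as np

def appearance_matches(edge_test: np.ndarray, edge_std: np.ndarray, t1: int = 500) -> bool:
    """True when the test product's edge map is close enough to the standard edge map."""
    diff = cv2.absdiff(edge_test, edge_std)      # difference image of the two binary maps
    mismatched = int(np.count_nonzero(diff))     # count pixels whose gray value is not 0
    return mismatched <= t1                      # above T1 -> model mismatch alarm
```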
For an image to be detected whose appearance matched successfully, the same region as the standard label is cropped from the image acquired in step three, the corresponding gradient information is extracted, and the result is stored as the label image to be detected.
Step five: label edge detection
Firstly, straight-line detection is performed on the label image to be detected obtained in step four: the coordinates of each pixel point are mapped into Hough space, and the parameters of the four points with the largest number of votes in Hough space are kept as the line-equation parameters of the label edge lines, thereby obtaining the edge information of the label image.
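A rough sketch of this straight-line detection using OpenCV's Hough transform, assuming (as in OpenCV's standard implementation) that the returned lines are ordered by vote count; the accumulator threshold is an assumed value:

```python
import cv2
import numpy as np

def strongest_label_lines(label_edges: np.ndarray, n_lines: int = 4):
    """Return (rho, theta) parameters of the most supported straight lines
    in the binary edge map of the label region."""
    lines = cv2.HoughLines(label_edges, 1, np.pi / 180, threshold=50)
    if lines is None:
        return []
    # cv2.HoughLines lists lines by descending accumulator votes;
    # keep the first n_lines as the candidate label edges
    return [tuple(line[0]) for line in lines[:n_lines]]
```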
Secondly, the inscribed rectangle of the edge lines is calculated, intersected with the standard label region, and the IoU value is calculated.
The calculation formula is as follows (3):
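Formula (3) is not reproduced in the text; the standard intersection-over-union definition consistent with this description, writing S∩ for the area of the overlapping region, would be:

$$\mathrm{IoU} = \frac{S_{\cap}}{S + S' - S_{\cap}} \tag{3}$$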
wherein S is the area occupied by the standard label, and S' is the area of the label to be detected.
Finally, a threshold T2 is set. When the IoU value is less than T2, it is judged that the label is not adhered at the predetermined position on the product surface, or that the label surface is damaged and incomplete; the label then needs to be removed and pasted again.
Step six: tag content detection
For a label image judged in step five to be normally pasted, the pixel-interval division of step two is first performed with the same parameters, and the HOG descriptor of the label image to be detected is counted and calculated.
Secondly, the Bhattacharyya distance between the descriptor of the image to be detected and the descriptor of the standard image is calculated as the similarity. A threshold T3 is set: when the similarity is greater than T3, the product label content is judged to be complete and the product enters the next link of production; when the similarity is less than T3, it is judged that the label content is incomplete or the wrong label has been pasted because the product label surface is contaminated or occluded, and the relevant personnel must be informed to remove the contamination as soon as possible or replace the label in time.
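As an illustrative sketch, the comparison can be computed as a Bhattacharyya coefficient between the two normalized descriptors (a value in [0, 1], larger meaning more similar), which is consistent with the "greater than T3" comparison above; treating the HOG descriptor as a histogram here is an assumption of this sketch:

```python
import numpy as np

def hog_similarity(desc_test: np.ndarray, desc_std: np.ndarray) -> float:
    """Bhattacharyya coefficient of two HOG descriptors treated as histograms.
    Returns a value in [0, 1]; 1 means identical distributions."""
    p = desc_test / (desc_test.sum() + 1e-12)
    q = desc_std / (desc_std.sum() + 1e-12)
    return float(np.sum(np.sqrt(p * q)))

# The label content is judged complete when hog_similarity(...) exceeds T3.
```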
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.
Claims (9)
1. A label detection method based on machine vision is characterized by comprising the following steps:
establishing a standard database;
acquiring a product image;
realizing product appearance matching based on the product image;
intercepting a preset position image to obtain a label image to be detected;
and judging whether the image of the label to be detected meets a preset standard or not.
2. The machine vision-based label detection method according to claim 1, wherein said establishing a standard database comprises:
selecting a product meeting the detection standard, and carrying out image acquisition to obtain standard image data;
and acquiring the appearance and the label information of the product in sequence based on the standard image data.
3. The machine vision-based label detection method of claim 2,
the collection of product appearance includes:
extracting the product edge through an edge detection algorithm and carrying out binarization; and performing downsampling processing on the edge image by utilizing maximum pooling to obtain the edge image of the product.
4. The machine vision-based label detection method of claim 2,
the collecting of the label information comprises:
locating the outer contour of the label, connecting the contour lines, cropping and keeping the label image, and counting the number of pixel points inside the label contour as the standard label area S;
and dividing the image in the label into pixel intervals with the same size, sequentially counting the pixel gradient values in the intervals to form a histogram, connecting and normalizing to obtain the HOG descriptor of the whole label image, and storing the HOG descriptor.
5. The machine vision-based label detection method according to claim 1, wherein implementing product appearance matching based on the product image comprises: obtaining an edge image of the product to be detected based on the acquired image of the product to be detected; subtracting the edge image of the product to be detected from the edge image of the standard product, and counting the number of difference pixel points whose gray value is not 0; and setting a threshold T1: when the number of difference pixel points is larger than the threshold, it is determined that the current product differs in model from the standard product, otherwise the appearance matching is considered successful.
6. The label detection method based on machine vision according to claim 1, wherein intercepting the preset position image to obtain the label image to be detected comprises: after the appearance is successfully matched, cutting the image of the product to be detected at the preset interception position and keeping the cut-out as the label image to be detected.
7. The label detection method based on machine vision as claimed in claim 1, wherein judging whether the image of the label to be detected meets a preset standard comprises:
judging whether the label edge of the image of the label to be detected meets a preset standard or not;
and judging whether the label content of the to-be-detected label image meets a preset standard or not.
8. The machine vision-based label detection method of claim 7,
wherein judging whether the label edge of the label image to be detected meets the preset standard comprises the following steps:
performing gradient calculation in the x and y directions on the obtained label image to be detected and keeping the gradient value and gradient direction of every pixel; then performing straight-line detection, keeping the four longest straight lines in the image, connecting them to form an inscribed rectangle, counting the rectangle area S', and calculating its IoU value with the standard label, the IoU being the ratio of the intersection to the union of the two areas;
setting a threshold T2: when the IoU value is greater than T2, the label is judged to be pasted normally; otherwise the label is not pasted at the preset position on the product surface, or the label surface is damaged and incomplete, and the label needs to be removed and pasted again.
9. The machine vision-based label detection method of claim 7,
wherein judging whether the label content of the label image to be detected meets the preset standard comprises the following steps:
after the label is confirmed to be complete and normally pasted, counting and calculating the HOG descriptor of the label image to be detected, calculating the similarity between this descriptor and the descriptor of the standard image, and setting a threshold T3: when the similarity is greater than T3, the product label content is judged to be complete; otherwise it is judged that the label content is incomplete because the label surface is contaminated or occluded, or that the wrong label has been pasted.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111658741.5A CN114332622A (en) | 2021-12-30 | 2021-12-30 | Label detection method based on machine vision |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202111658741.5A CN114332622A (en) | 2021-12-30 | 2021-12-30 | Label detection method based on machine vision |
Publications (1)
Publication Number | Publication Date |
---|---|
CN114332622A true CN114332622A (en) | 2022-04-12 |
Family
ID=81018211
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202111658741.5A Pending CN114332622A (en) | 2021-12-30 | 2021-12-30 | Label detection method based on machine vision |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114332622A (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN114707904A (en) * | 2022-05-05 | 2022-07-05 | 江苏文友软件有限公司 | Quality detection method and system based on big data |
CN115457253A (en) * | 2022-11-11 | 2022-12-09 | 浪潮金融信息技术有限公司 | Object detection method, system, equipment and medium based on multiple camera modules |
CN118365699A (en) * | 2024-06-18 | 2024-07-19 | 珠海格力电器股份有限公司 | Label position deviation detection method, device and detection equipment |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN107617573B (en) | Logistics code identification and sorting method based on multitask deep learning | |
CN114332622A (en) | Label detection method based on machine vision | |
CN109255787B (en) | System and method for detecting scratch of silk ingot based on deep learning and image processing technology | |
CN110610141A (en) | Logistics storage regular shape goods recognition system | |
CN110603533A (en) | Method and apparatus for object state detection | |
CN109839385B (en) | Self-adaptive PCB defect visual positioning detection and classification system | |
CN107610176A (en) | A kind of pallet Dynamic Recognition based on Kinect and localization method, system and medium | |
CN109949331B (en) | Container edge detection method and device | |
CN106778737A (en) | A kind of car plate antidote, device and a kind of video acquisition device | |
CN112329587B (en) | Beverage bottle classification method and device and electronic equipment | |
CN115359021A (en) | Target positioning detection method based on laser radar and camera information fusion | |
CN110533654A (en) | The method for detecting abnormality and device of components | |
CN114049624B (en) | Ship cabin intelligent detection method and system based on machine vision | |
CN113283439B (en) | Intelligent counting method, device and system based on image recognition | |
CN116228678A (en) | Automatic identification and processing method for chip packaging defects | |
CN113393426A (en) | Method for detecting surface defects of rolled steel plate | |
CN111487192A (en) | Machine vision surface defect detection device and method based on artificial intelligence | |
CN115937203A (en) | Visual detection method, device, equipment and medium based on template matching | |
CN112345534A (en) | Vision-based bubble plate particle defect detection method and system | |
CN110807354B (en) | Industrial assembly line product counting method | |
CN110619336A (en) | Goods identification algorithm based on image processing | |
CN118247331B (en) | Automatic part size detection method and system based on image recognition | |
CN116309277A (en) | Steel detection method and system based on deep learning | |
CN114267032A (en) | Container positioning identification method, device, equipment and storage medium | |
CN116993725B (en) | Intelligent patch information processing system of flexible circuit board |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||